| id | url | text | source | categories | token_count | subcategories |
|---|---|---|---|---|---|---|
179,660 | https://en.wikipedia.org/wiki/Function%20word | In linguistics, function words (also called functors) are words that have little lexical meaning or have ambiguous meaning and express grammatical relationships among other words within a sentence, or specify the attitude or mood of the speaker. They signal the structural relationships that words have to one another and are the glue that holds sentences together. Thus they form important elements in the structures of sentences.
Words that are not function words are called content words (or open class words, lexical words, or autosemantic words) and include nouns, most verbs, adjectives, and most adverbs, although some adverbs are function words (like then and why). Dictionaries define the specific meanings of content words but can describe only the general usages of function words. By contrast, grammars describe the use of function words in detail but treat lexical words only in general terms.
Since it was first proposed in 1952 by C. C. Fries, the distinction between function/structure words and content/lexical words has been highly influential in the grammar used in second-language acquisition and English-language teaching.
Overview
Function words might be prepositions, pronouns, auxiliary verbs, conjunctions, grammatical articles or particles, all of which belong to the group of closed-class words. Interjections are sometimes considered function words but they belong to the group of open-class words. Function words might or might not be inflected or might have affixes.
Function words belong to the closed class of words in grammar because it is very uncommon to have new function words created in the course of speech. In the open class of words, i.e., nouns, verbs, adjectives, or adverbs, new words may be added readily, such as slang words, technical terms, and adoptions and adaptations of foreign words.
Each function word either: gives grammatical information about other words in a sentence or clause, and cannot be isolated from other words; or gives information about the speaker's mental model as to what is being said.
Grammatical words, as a class, can have distinct phonological properties from content words. For example, in some of the Khoisan languages, most content words begin with clicks, but very few function words do. In English, very few words other than function words begin with the voiced th. English function words may be spelled with fewer than three letters; e.g., 'I', 'an', 'in', while non-function words usually are spelled with three or more (e.g., 'eye', 'Ann', 'inn').
The following is a list of the kind of words considered to be function words with English examples. They are all uninflected in English unless marked otherwise:
articles — the and a. In some inflected languages, the articles may take on the case of the declension of the following noun.
pronouns — he :: him, she :: her, etc. — inflected in English
adpositions — in, under, towards, before, of, for, etc.
conjunctions — and and but
subordinating conjunctions — if, then, well, however, thus, etc.
auxiliary verbs — would, could, should, etc. — inflected in English
particles — up, on, down
interjections — oh, ah, eh, sometimes called "filled pauses"
expletives — take the place of sentences, among other functions.
pro-sentences — yes, no, okay, etc.
See also
Content word, words that name objects of reality and their qualities
Grammaticalization, process by which words representing objects and actions transform to become grammatical markers
Grammatical relation
References
Further reading
External links
Short list of 225 English function words
Grammar
Parts of speech
| Function word | [
"Technology"
] | 785 | [
"Parts of speech",
"Components"
] |
179,924 | https://en.wikipedia.org/wiki/Helix | A helix (; ) is a shape like a cylindrical coil spring or the thread of a machine screw. It is a type of smooth space curve with tangent lines at a constant angle to a fixed axis. Helices are important in biology, as the DNA molecule is formed as two intertwined helices, and many proteins have helical substructures, known as alpha helices. The word helix comes from the Greek word , "twisted, curved".
A "filled-in" helix – for example, a "spiral" (helical) ramp – is a surface called a helicoid.
Properties and types
The pitch of a helix is the height of one complete helix turn, measured parallel to the axis of the helix.
A double helix consists of two (typically congruent) helices with the same axis, differing by a translation along the axis.
A circular helix (i.e. one with constant radius) has constant curvature and constant torsion. The slope of a circular helix is commonly defined as the ratio of the circumference of the circular cylinder that it spirals around to its pitch (the height of one complete helix turn).
A conic helix, also known as a conic spiral, may be defined as a spiral on a conic surface, with the distance to the apex an exponential function of the angle indicating direction from the axis.
A curve is called a general helix or cylindrical helix if its tangent makes a constant angle with a fixed line in space. A curve is a general helix if and only if the ratio of curvature to torsion is constant.
A curve is called a slant helix if its principal normal makes a constant angle with a fixed line in space. It can be constructed by applying a transformation to the moving frame of a general helix.
More general helix-like space curves also exist; see space spiral (e.g., the spherical spiral).
Handedness
Helices can be either right-handed or left-handed. With the line of sight along the helix's axis, if a clockwise screwing motion moves the helix away from the observer, then it is called a right-handed helix; if towards the observer, then it is a left-handed helix. Handedness (or chirality) is a property of the helix, not of the perspective: a right-handed helix cannot be turned to look like a left-handed one unless it is viewed in a mirror, and vice versa.
Mathematical description
In mathematics, a helix is a curve in 3-dimensional space. The following parametrisation in Cartesian coordinates defines a particular helix; perhaps the simplest equations for one are
x(t) = cos(t), y(t) = sin(t), z(t) = t.
As the parameter t increases, the point (x(t), y(t), z(t)) traces a right-handed helix of pitch 2π (or slope 1) and radius 1 about the z-axis, in a right-handed coordinate system.
In cylindrical coordinates (r, θ, h), the same helix is parametrised by
r(t) = 1, θ(t) = t, h(t) = t.
A circular helix of radius a and slope a/b (or pitch 2πb) is described by the following parametrisation:
x(t) = a cos(t), y(t) = a sin(t), z(t) = b t.
Another way of mathematically constructing a helix is to plot the complex-valued function e^(it) as a function of the real number t (see Euler's formula). The value of t and the real and imaginary parts of the function value give this plot three real dimensions.
Except for rotations, translations, and changes of scale, all right-handed helices are equivalent to the helix defined above. The equivalent left-handed helix can be constructed in a number of ways, the simplest being to negate any one of the x, y or z components.
Arc length, curvature and torsion
A circular helix of radius a and slope a/b (or pitch 2πb), expressed in Cartesian coordinates as the parametric equation
x(t) = a cos(t), y(t) = a sin(t), z(t) = b t,
has an arc length of A = t √(a² + b²), a curvature of |a| / (a² + b²), and a torsion of b / (a² + b²).
A helix has constant non-zero curvature and torsion.
A helix can be written as the vector-valued function
r(t) = a cos(t) i + a sin(t) j + b t k,
whose speed is |r′(t)| = √(a² + b²), so its arc length is s(t) = t √(a² + b²). So a helix can be reparameterized as a function of s, which must be unit-speed:
r(s) = a cos(s/√(a² + b²)) i + a sin(s/√(a² + b²)) j + (b s/√(a² + b²)) k.
The unit tangent vector is
T(s) = dr/ds = [−a sin(s/√(a² + b²)) i + a cos(s/√(a² + b²)) j + b k] / √(a² + b²).
The normal vector is
dT/ds = κ N = [−a cos(s/√(a² + b²)) i − a sin(s/√(a² + b²)) j] / (a² + b²).
Its curvature is
κ = |dT/ds| = |a| / (a² + b²).
The unit normal vector is
N(s) = −cos(s/√(a² + b²)) i − sin(s/√(a² + b²)) j.
The binormal vector is
B(s) = T × N = [b sin(s/√(a² + b²)) i − b cos(s/√(a² + b²)) j + a k] / √(a² + b²).
Its torsion is
τ = |dB/ds| = b / (a² + b²).
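The constancy of the curvature and torsion above can be checked numerically. The following is a minimal sketch (not part of the article) using NumPy and the standard formulas κ = |r′ × r″| / |r′|³ and τ = (r′ × r″) · r‴ / |r′ × r″|²; the values of a and b and the sample points are arbitrary illustrative choices.

```python
import numpy as np

# Illustrative values (assumptions, not from the article): radius a, pitch parameter b.
a, b = 2.0, 0.5
t = np.linspace(0.0, 4.0 * np.pi, 7)  # a few sample parameter values

# Derivatives of the circular helix r(t) = (a cos t, a sin t, b t).
r1 = np.stack([-a * np.sin(t),  a * np.cos(t), np.full_like(t, b)], axis=1)  # r'(t)
r2 = np.stack([-a * np.cos(t), -a * np.sin(t), np.zeros_like(t)], axis=1)    # r''(t)
r3 = np.stack([ a * np.sin(t), -a * np.cos(t), np.zeros_like(t)], axis=1)    # r'''(t)

cross = np.cross(r1, r2)
curvature = np.linalg.norm(cross, axis=1) / np.linalg.norm(r1, axis=1) ** 3
torsion = np.einsum("ij,ij->i", cross, r3) / np.linalg.norm(cross, axis=1) ** 2

# Every entry should equal the closed-form values a/(a^2 + b^2) and b/(a^2 + b^2).
print(curvature, a / (a**2 + b**2))
print(torsion, b / (a**2 + b**2))
```

Because every sample of t returns the same two numbers, the sketch illustrates that curvature and torsion do not vary along the curve, which is exactly the property that characterises a circular helix.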
Examples
An example of a double helix in molecular biology is the nucleic acid double helix.
An example of a conic helix is the Corkscrew roller coaster at Cedar Point amusement park.
Some curves found in nature consist of multiple helices of different handedness joined together by transitions known as tendril perversions.
Most hardware screw threads are right-handed helices. The alpha helix in biology as well as the A and B forms of DNA are also right-handed helices. The Z form of DNA is left-handed.
In music, pitch space is often modeled with helices or double helices, most often extending out of a circle such as the circle of fifths, so as to represent octave equivalency.
In aviation, geometric pitch is the distance an element of an airplane propeller would advance in one revolution if it were moving along a helix having an angle equal to that between the chord of the element and a plane perpendicular to the propeller axis; see also: pitch angle (aviation).
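As a worked form of this definition (a standard relation stated here for clarity, not taken from the article): a propeller element at radius r whose chord makes an angle θ with the plane of rotation (the plane perpendicular to the propeller axis) advances per revolution along a helix of pitch
p = 2πr · tan(θ),
so the geometric pitch at that radius is simply the pitch of that helix.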
See also
Alpha helix
Arc spring
Boerdijk–Coxeter helix
Circular polarization
Collagen helix
Helical symmetry
Helicity
Helix angle
Helical axis
Hemihelix
Seashell surface
Solenoid
Superhelix
Triple helix
References
Geometric shapes
Curves | Helix | [
"Mathematics"
] | 1,089 | [
"Geometric shapes",
"Mathematical objects",
"Geometric objects"
] |
179,945 | https://en.wikipedia.org/wiki/Jean%20Baptiste%20Joseph%20Delambre | Jean Baptiste Joseph, chevalier Delambre (19 September 1749 – 19 August 1822) was a French mathematician, astronomer, historian of astronomy, and geodesist. He was also director of the Paris Observatory, and author of well-known books on the history of astronomy from ancient times to the 18th century.
Biography
After a childhood fever, he suffered from very sensitive eyes, and believed that he would soon go blind. For fear of losing his ability to read, he devoured any book available and trained his memory. He thus immersed himself in Greek and Latin literature, acquired the ability to recall entire pages verbatim weeks after reading them, became fluent in Italian, English and German and even wrote an unpublished Règle ou méthode facile pour apprendre la langue anglaise (Easy rule or method for learning English).
Delambre quickly achieved success in his career in astronomy, such that in 1788, he was elected a foreign member of the Royal Swedish Academy of Sciences. In 1790, to establish a universally accepted foundation for the definition of measures, the National Constituent Assembly asked the French Academy of Sciences to introduce a new unit of length. The academics decided on the metre, defined as 1 / 10,000,000 of the distance from the North Pole to the equator, and prepared to organise an expedition to measure the length of the meridian arc between Dunkirk and Barcelona. This portion of the meridian, which also passes through Paris, was to serve as the basis for the length of the quarter meridian, connecting the North Pole with the Equator. In April 1791, the academy's Metric Commission confided this mission to Jean-Dominique de Cassini, Adrien-Marie Legendre and Pierre Méchain. Cassini was chosen to head the northern expedition but, as a royalist, he refused to serve under the revolutionary government after the arrest of King Louis XVI on his Flight to Varennes. On 15 February 1792, Delambre was unanimously elected a member of the French Academy of Sciences and in May 1792, after Cassini's final refusal, was placed in charge of the northern expedition, measuring the meridian from Dunkirk to Rodez in the south of France. Pierre Méchain headed the southern expedition, measuring from Barcelona to Rodez. The measurements were finished in 1798. The gathered data were presented to an international conference of savants in Paris the following year.
In 1801, First Consul Bonaparte took the presidency of the French Academy of Sciences and appointed Delambre its Permanent Secretary for the Mathematical Sciences, a post he held until his death. In 1803, he was elected a member of the American Philosophical Society in Philadelphia.
After Méchain's death in 1804, he was appointed director of the Paris Observatory. He was also professor of astronomy at the Collège de France. The same year he married Elisabeth-Aglaée Leblanc de Pommard, a widow with whom he had lived already for a long time. Her son, Achille-César-Charles de Pommard (1781–1807) assisted Delambre on several occasions in his astronomical and geodetical surveys, notably the measuring of the baselines for the meridian survey, and the latitude definition for Paris in December 1799 which was presented to the Conference of Savants.
Delambre was one of the first astronomers to derive astronomical equations from analytical formulas, was the author of Delambre's analogies and, after the age of 70, also the author of works on the history of astronomy like the Histoire de l'astronomie. He was a knight (chevalier) of the Order of Saint Michael and of the Légion d'honneur. His name is also one of the 72 names inscribed on the Eiffel tower. He was elected a Foreign Honorary Member of the American Academy of Arts and Sciences in 1822.
Delambre died in 1822 and was interred in Père Lachaise Cemetery in Paris. The crater Delambre on the Moon is named after him.
Delambre was an atheist.
Works
Méthodes analytiques pour la détermination d'un arc du méridien (Crapelet, Paris, 1799)
Notice historique sur M. Méchain, lue le 5 messidor XIII (Baudouin, Paris, January 1806; this is the eulogy on the late Pierre Méchain, read at the academy by Secretary Delambre on 24 June 1805)
Base du système métrique décimal, ou Mesure de l'arc du méridien – compris entre les parallèles de Dunkerque et Barcelone, exécutée en 1792 et années suivantes, par MM. Méchain et Delambre. (editor; Baudouin, Imprimeur de l'Institut National; Paris; 3. vol.; January 1806, 1807, 1810; this includes both his own and Méchain's data gathered during the meridian survey 1792–1799 and calculations derived thereof)
Rapport historique sur le progrès des sciences mathématiques depuis 1799 (Imprimerie Impériale, Paris, 1810)
Tables écliptiques des satellites de Jupiter: d'après la théorie de M. le Marquis de Laplace, et la totalité des observations faites depuis 1662 jusqu'à l'an 1802 (Paris : Courcier, 1817.)
A history of astronomy, comprising four works and six volumes in all:
Histoire de l'astronomie ancienne, Paris: Mme Ve Courcier, 1817. 2 volumes; vol. 1, lxxii, 556 pp., 1 folded plate; vol. 2, viii, 639 pp., [1], 16 folded plates. Reprinted by New York and London: Johnson Reprint Corporation, 1965 (Sources of Science, #23), with a new preface by Otto Neugebauer.
Histoire de l'astronomie du moyen age, Paris: Mme Ve Courcier, 1819. lxxxiv, 640 pp., 17 folded plates. Reprinted by New York and London: Johnson Reprint Corporation, 1965 (Sources of Science, #24). Also reprinted by Paris: J. Gabay, 2006.
Histoire de l'astronomie moderne, Paris: Mme Ve Courcier, 1821. 2 volumes; vol. 1, lxxxii, 715 pp., [1], 9 folded plates; vol. 2, [4], 804 pp., 8 folded plates. Reprinted by New York and London: Johnson Reprint Corporation, 1969 (Sources of Science, #25), with a new introduction and tables of contents by I. Bernard Cohen. Also reprinted by Paris: Editions Jacques Gabay, 2006. This takes the history to the 17th century.
Histoire de l'astronomie au dix-huitième siècle, edited by Claude-Louis Mathieu, Paris: Bachelier (successeur de Mme Ve Courcier), 1827. lii, 796 pp., 3 folded plates. Reprinted by Paris: J. Gabay, 2004. This includes the history of astronomy in the 18th century, especially critiques of his colleagues at the academy, which he withheld to be published posthumously.
Grandeur et figure de la terre, ouvrage augmenté de notes, de cartes (1912) (edited by Guillaume Bigourdan, Gauthier-Villars, Paris, 1912; about the figure of the Earth)
Some works are digitised on the Paris Observatory digital library.
See also
Delambre analogies
History of the metre
Arc measurement of Delambre and Méchain
Seconds pendulum
References
Further reading
Ken Alder: The Measure of All Things – The Seven-Year Odyssey and Hidden Error That Transformed the World (The Free Press; New York, London, Toronto, Sydney, Singapore; 2002)
External links
A brief biography of Delambre, partly from the 1880 Encyclopædia Britannica, including an account of Delambre's intervention to request liberation (from French imprisonment) of James Smithson, who went on to endow the foundation of the Smithsonian Institution, national museum of the United States of America
Portrait of Jean Baptiste Joseph Delambre from the Lick Observatory Records Digital Archive, UC Santa Cruz Library's Digital Collections
Jean Baptiste Joseph Delambre papers Jean Baptiste Joseph Delambre papers, MSS 458 at L. Tom Perry Special Collections, Brigham Young University
Delambre's publications on Paris Observatory digital library (in French)
1749 births
1822 deaths
People from Amiens
19th-century French astronomers
Burials at Père Lachaise Cemetery
Knights of the Legion of Honour
Academic staff of the Collège de France
Fellows of the American Academy of Arts and Sciences
Fellows of the Royal Society
French atheists
18th-century French astronomers
19th-century French historians
Historians of astronomy
Members of the Royal Swedish Academy of Sciences
Officers of the French Academy of Sciences
18th-century French mathematicians
19th-century French mathematicians
French male writers
19th-century French male writers
French geodesists
Honorary members of the Saint Petersburg Academy of Sciences | Jean Baptiste Joseph Delambre | [
"Astronomy"
] | 1,933 | [
"People associated with astronomy",
"Historians of astronomy",
"History of astronomy"
] |
179,947 | https://en.wikipedia.org/wiki/Doubly%20special%20relativity | Doubly special relativity (DSR) – also called deformed special relativity or, by some, extra-special relativity – is a modified theory of special relativity in which there is not only an observer-independent maximum velocity (the speed of light), but also an observer-independent maximum energy scale (the Planck energy) and/or a minimum length scale (the Planck length). This contrasts with other Lorentz-violating theories, such as the Standard-Model Extension, where Lorentz invariance is instead broken by the presence of a preferred frame. The main motivation for this theory is that the Planck energy should be the scale where as yet unknown quantum gravity effects become important and, due to invariance of physical laws, this scale should remain fixed in all inertial frames.
History
First attempts to modify special relativity by introducing an observer-independent length were made by Pavlopoulos (1967), who estimated this length at about 10⁻¹⁵ metres. In the context of quantum gravity, Giovanni Amelino-Camelia (2000) introduced what is now called doubly special relativity, by proposing a specific realization of preserving invariance of the Planck length. This was reformulated by Kowalski-Glikman (2001) in terms of an observer-independent Planck mass. A different model, inspired by that of Amelino-Camelia, was proposed in 2001 by João Magueijo and Lee Smolin, who also focused on the invariance of Planck energy.
It was realized that there are, indeed, three kinds of deformation of special relativity that allow one to achieve an invariance of the Planck energy; either as a maximum energy, as a maximal momentum, or both. DSR models are possibly related to loop quantum gravity in 2+1 dimensions (two space, one time), and it has been conjectured that a relation also exists in 3+1 dimensions.
The motivation for these proposals is mainly theoretical, based on the following observation: The Planck energy is expected to play a fundamental role in a theory of quantum gravity; setting the scale at which quantum gravity effects cannot be neglected and new phenomena might become important. If special relativity is to hold up exactly to this scale, different observers would observe quantum gravity effects at different scales, due to the Lorentz–FitzGerald contraction, in contradiction to the principle that all inertial observers should be able to describe phenomena by the same physical laws. This motivation has been criticized, on the grounds that the result of a Lorentz transformation does not itself constitute an observable phenomenon. DSR also suffers from several inconsistencies in formulation that have yet to be resolved. Most notably, it is difficult to recover the standard transformation behavior for macroscopic bodies, known as the soccer ball problem. The other conceptual difficulty is that DSR is a priori formulated in momentum space. There is, as of yet, no consistent formulation of the model in position space.
Predictions
Experiments to date have not observed any contradiction of special relativity.
It was initially speculated that ordinary special relativity and doubly special relativity would make distinct physical predictions in high-energy processes and, in particular, the derivation of the GZK limit on energies of cosmic rays from distant sources would not be valid. However, it is now established that standard doubly special relativity does not predict any suppression of the GZK cutoff, contrary to the models where an absolute local rest frame exists, such as effective field theories like the Standard-Model Extension.
Since DSR generically (though not necessarily) implies an energy-dependence of the speed of light, it has further been predicted that, if there are modifications to first order in energy over the Planck mass, this energy-dependence would be observable in highly energetic photons reaching Earth from distant gamma ray bursts. Depending on whether the now energy-dependent speed of light increases or decreases with energy (a model-dependent feature), highly energetic photons would be faster or slower than less energetic ones. However, the Fermi-LAT experiment in 2009 measured a 31 GeV photon, which arrived nearly simultaneously with other photons from the same burst, which excluded such dispersion effects even above the Planck energy. Moreover, it has been argued that DSR, with an energy-dependent speed of light, is inconsistent and first order effects are ruled out already because they would lead to non-local particle interactions that would long have been observed in particle physics experiments.
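To see why gamma-ray burst photons are such a sensitive probe, one can make a rough order-of-magnitude estimate of the arrival-time spread implied by a first-order, Planck-suppressed modification of the photon speed. The sketch below is illustrative only and not from the article: the linear form v(E) ≈ c(1 − E/E_Planck), the 31 GeV photon energy, and the assumed light-travel time are simplifying assumptions (in particular, cosmological redshift is ignored).

```python
# Rough estimate of the delay of a high-energy photon relative to a low-energy one
# for a first-order modified dispersion v(E) ~ c * (1 - E / E_Planck):
#   delay ~ (E / E_Planck) * (light-travel time)

E_PLANCK_GEV = 1.22e19       # Planck energy in GeV
SECONDS_PER_YEAR = 3.156e7

photon_energy_gev = 31.0      # e.g. the 31 GeV Fermi-LAT photon mentioned above
travel_time_years = 7.0e9     # assumed light-travel time to the burst (illustrative)

delay_s = (photon_energy_gev / E_PLANCK_GEV) * travel_time_years * SECONDS_PER_YEAR
print(f"estimated delay ~ {delay_s:.2f} s")  # roughly half a second
```

Even over billions of light years the predicted first-order delay is only a fraction of a second, which is why a single photon arriving nearly simultaneously with the rest of the burst can constrain or exclude such linear Planck-scale dispersion.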
De Sitter relativity
Since the de Sitter group naturally incorporates an invariant length parameter, de Sitter relativity can be interpreted as an example of doubly special relativity because de Sitter spacetime incorporates invariant velocity, as well as length parameter. There is a fundamental difference, though: whereas in all doubly special relativity models the Lorentz symmetry is violated, in de Sitter relativity it remains as a physical symmetry. A drawback of the usual doubly special relativity models is that they are valid only at the energy scales where ordinary special relativity is supposed to break down, giving rise to a patchwork relativity. On the other hand, de Sitter relativity is found to be invariant under a simultaneous re-scaling of mass, energy and momentum, and is consequently valid at all energy scales.
See also
Planck scale
Planck units
Planck epoch
Fock–Lorentz symmetry
References
Further reading
Smolin writes for the layman a brief history of the development of DSR and how it ties in with string theory and cosmology.
Special relativity
Quantum gravity | Doubly special relativity | [
"Physics"
] | 1,118 | [
"Unsolved problems in physics",
"Special relativity",
"Quantum gravity",
"Theory of relativity",
"Physics beyond the Standard Model"
] |
179,978 | https://en.wikipedia.org/wiki/Antiandrogen | Antiandrogens, also known as androgen antagonists or testosterone blockers, are a class of drugs that prevent androgens like testosterone and dihydrotestosterone (DHT) from mediating their biological effects in the body. They act by blocking the androgen receptor (AR) and/or inhibiting or suppressing androgen production. They can be thought of as the functional opposites of AR agonists, for instance androgens and anabolic steroids (AAS) like testosterone, DHT, and nandrolone and selective androgen receptor modulators (SARMs) like enobosarm. Antiandrogens are one of three types of sex hormone antagonists, the others being antiestrogens and antiprogestogens.
Antiandrogens are used to treat an assortment of androgen-dependent conditions. In men, antiandrogens are used in the treatment of prostate cancer, enlarged prostate, scalp hair loss, overly high sex drive, unusual and problematic sexual urges, and early puberty. In women, antiandrogens are used to treat acne, seborrhea, excessive hair growth, scalp hair loss, and high androgen levels, such as those that occur in polycystic ovary syndrome (PCOS). Antiandrogens are also used as a component of feminizing hormone therapy for transgender women and as puberty blockers in transgender girls.
Side effects of antiandrogens depend on the type of antiandrogen and the specific antiandrogen in question. In any case, common side effects of antiandrogens in men include breast tenderness, breast enlargement, feminization, hot flashes, sexual dysfunction, infertility, and osteoporosis. In women, antiandrogens are much better tolerated, and antiandrogens that work only by directly blocking androgens are associated with minimal side effects. However, because estrogens are made from androgens in the body, antiandrogens that suppress androgen production can cause low estrogen levels and associated symptoms like hot flashes, menstrual irregularities, and osteoporosis in premenopausal women.
There are a few different major types of antiandrogens. These include AR antagonists, androgen synthesis inhibitors, and antigonadotropins. AR antagonists work by directly blocking the effects of androgens, while androgen synthesis inhibitors and antigonadotropins work by lowering androgen levels. AR antagonists can be further divided into steroidal antiandrogens and nonsteroidal antiandrogens; androgen synthesis inhibitors can be further divided mostly into CYP17A1 inhibitors and 5α-reductase inhibitors; and antigonadotropins can be further divided into gonadotropin-releasing hormone modulators (GnRH modulators), progestogens, and estrogens.
Medical uses
Antiandrogens are used in the treatment of an assortment of androgen-dependent conditions in both males and females. They are used to treat men with prostate cancer, benign prostatic hyperplasia, pattern hair loss, hypersexuality, paraphilias, and priapism, as well as boys with precocious puberty. In women and girls, antiandrogens are used to treat acne, seborrhea, hidradenitis suppurativa, hirsutism, and hyperandrogenism. Antiandrogens are also used in transgender women as a component of feminizing hormone therapy and as puberty blockers in transgender girls.
Men and boys
Prostate cancer
Androgens like testosterone and particularly DHT are importantly involved in the development and progression of prostate cancer. They act as growth factors in the prostate gland, stimulating cell division and tissue growth. In accordance, therapeutic modalities that reduce androgen signaling in the prostate gland, referred to collectively as androgen deprivation therapy, are able to significantly slow the course of prostate cancer and extend life in men with the disease. Although antiandrogens are effective in slowing the progression of prostate cancer, they are not generally curative, and with time, the disease adapts and androgen deprivation therapy eventually becomes ineffective. When this occurs, other treatment approaches, such as chemotherapy, may be considered.
The most common methods of androgen deprivation therapy currently employed to treat prostate cancer are castration (with a GnRH modulator or orchiectomy), nonsteroidal antiandrogens, and the androgen synthesis inhibitor abiraterone acetate. Castration may be used alone or in combination with one of the other two treatments. When castration is combined with a nonsteroidal antiandrogen like bicalutamide, this strategy is referred to as combined androgen blockade (also known as complete or maximal androgen blockade). Enzalutamide, apalutamide, and abiraterone acetate are specifically approved for use in combination with castration to treat castration-resistant prostate cancer. Monotherapy with the nonsteroidal antiandrogen bicalutamide is also used in the treatment of prostate cancer as an alternative to castration with comparable effectiveness but with a different and potentially advantageous side effect profile.
High-dose estrogen was the first functional antiandrogen used to treat prostate cancer. It was widely used, but has largely been abandoned for this indication in favor of newer agents with improved safety profiles and fewer feminizing side effects. Cyproterone acetate was developed after high-dose estrogen and is the only steroidal antiandrogen that has been widely used in the treatment of prostate cancer, but it has largely been replaced by nonsteroidal antiandrogens, which are newer and have greater effectiveness, tolerability, and safety. Bicalutamide and enzalutamide have largely replaced the earlier nonsteroidal antiandrogens flutamide and nilutamide, which are now little used. The earlier androgen synthesis inhibitors aminoglutethimide and ketoconazole have seen only limited use in the treatment of prostate cancer due to toxicity concerns and have been replaced by abiraterone acetate.
In addition to active treatment of prostate cancer, antiandrogens are effective as prophylaxis (preventatives) in reducing the risk of ever developing prostate cancer. Antiandrogens have been assessed only to a limited extent for this purpose, but the 5α-reductase inhibitors finasteride and dutasteride and the steroidal AR antagonist spironolactone have been associated with significantly reduced risk of prostate cancer. In addition, it is notable that prostate cancer is extremely rare in transgender women who have been on feminizing hormone therapy for an extended period of time.
Enlarged prostate
The 5α-reductase inhibitors finasteride and dutasteride are used to treat benign prostatic hyperplasia, a condition in which the prostate becomes enlarged and this results in urinary obstruction and discomfort. They are effective because androgens act as growth factors in the prostate gland. The antiandrogens chlormadinone acetate and oxendolone and the functional antiandrogens allylestrenol and gestonorone caproate are also approved in some countries for the treatment of benign prostatic hyperplasia.
Scalp hair loss
5α-Reductase inhibitors like finasteride, dutasteride, and alfatradiol and the topical nonsteroidal AR antagonist topilutamide (fluridil) are approved for the treatment of pattern hair loss, also known as scalp hair loss or baldness. This condition is generally caused by androgens, so antiandrogens can slow or halt its progression. Systemic antiandrogens besides 5α-reductase inhibitors are not generally used to treat scalp hair loss in males due to risks like feminization (e.g., gynecomastia) and sexual dysfunction. However, they have been assessed and reported to be effective for this indication.
Acne
Systemic antiandrogens are generally not used to treat acne in males due to their high risk of feminization (e.g., gynecomastia) and sexual dysfunction. However, they have been studied for acne in males and found to be effective. Clascoterone, a topical antiandrogen, is effective for acne in males and was approved by the FDA in August 2020.
Paraphilia
Androgens increase sex drive, and for this reason, antiandrogens are able to reduce sex drive in men. In accordance, antiandrogens are used in the treatment of conditions such as hypersexuality (excessively high sex drive) and paraphilias (atypical and sometimes societally unacceptable sexual interests) like pedophilia (sexual attraction to children). They have been used to decrease sex drive in sex offenders so as to reduce the likelihood of recidivism (repeat offenses). Antiandrogens used for these indications include cyproterone acetate, medroxyprogesterone acetate, and GnRH modulators.
Early puberty
Antiandrogens are used to treat precocious puberty in boys. They work by opposing the effects of androgens and delaying the development of secondary sexual characteristics and onset of changes in sex drive and function until a more appropriate age. Antiandrogens that have been used for this purpose include cyproterone acetate, medroxyprogesterone acetate, GnRH modulators, spironolactone, bicalutamide, and ketoconazole. Spironolactone and bicalutamide require combination with an aromatase inhibitor to prevent the effects of unopposed estrogens, while the others can be used alone.
Long-lasting erections
Antiandrogens are effective in the treatment of recurrent priapism (potentially painful penile erections that last more than four hours).
Women and girls
Skin and hair conditions
Antiandrogens are used in the treatment of androgen-dependent skin and hair conditions including acne, seborrhea, hidradenitis suppurativa, hirsutism, and pattern hair loss in women. All of these conditions are dependent on androgens, and for this reason, antiandrogens are effective in treating them. The most commonly used antiandrogens for these indications are cyproterone acetate and spironolactone. Flutamide has also been studied extensively for such uses, but has fallen out of favor due to its association with hepatotoxicity. Bicalutamide, which has a relatively minimal risk of hepatotoxicity, has been evaluated for the treatment of hirsutism and found effective similarly to flutamide and may be used instead of it. In addition to AR antagonists, oral contraceptives containing ethinylestradiol are effective in treating these conditions, and may be combined with AR antagonists.
High androgen levels
Hyperandrogenism is a condition in women in which androgen levels are excessively and abnormally high. It is commonly seen in women with PCOS, and also occurs in women with intersex conditions like congenital adrenal hyperplasia. Hyperandrogenism is associated with virilization – that is, the development of masculine secondary sexual characteristics like male-pattern facial and body hair growth (or hirsutism), voice deepening, increased muscle mass and strength, and broadening of the shoulders, among others. Androgen-dependent skin and hair conditions like acne and pattern hair loss may also occur in hyperandrogenism, and menstrual disturbances, like amenorrhea, are commonly seen. Although antiandrogens do not treat the underlying cause of hyperandrogenism (e.g., PCOS), they are able to prevent and reverse its manifestation and effects. As with androgen-dependent skin and hair conditions, the most commonly used antiandrogens in the treatment of hyperandrogenism in women are cyproterone acetate and spironolactone. Other antiandrogens, like bicalutamide, may be used alternatively.
Gender-affirming hormone therapy
Antiandrogens are used to prevent or reverse masculinization and to facilitate feminization in transgender women and some nonbinary individuals who are undergoing hormone therapy and who have not undergone sex reassignment surgery or orchiectomy. Besides estrogens, the main antiandrogens that have been used for this purpose are cyproterone acetate, spironolactone, and GnRH modulators. Nonsteroidal antiandrogens like bicalutamide are also used for this indication. In addition to use in transgender women, antiandrogens, mainly GnRH modulators, are used as puberty blockers to prevent the onset of puberty in transgender girls until they are older and ready to begin hormone therapy.
Available forms
There are several different types of antiandrogens, including the following:
Androgen receptor antagonists: Drugs that bind directly to and block the AR. These drugs include the steroidal antiandrogens cyproterone acetate, megestrol acetate, chlormadinone acetate, spironolactone, oxendolone, and osaterone acetate (veterinary) and the nonsteroidal antiandrogens flutamide, bicalutamide, nilutamide, topilutamide, enzalutamide, and apalutamide. Aside from cyproterone acetate and chlormadinone acetate, a few other progestins used in oral contraceptives and/or in menopausal HRT, including dienogest, drospirenone, medrogestone, nomegestrol acetate, promegestone, and trimegestone, also have varying degrees of AR antagonistic activity.
Androgen synthesis inhibitors: Drugs that directly inhibit the enzymatic biosynthesis of androgens like testosterone and/or DHT. Examples include the CYP17A1 inhibitors ketoconazole, abiraterone acetate, and seviteronel, the CYP11A1 (P450scc) inhibitor aminoglutethimide, and the 5α-reductase inhibitors finasteride, dutasteride, epristeride, alfatradiol, and saw palmetto extract (Serenoa repens). A number of other antiandrogens, including cyproterone acetate, spironolactone, medrogestone, flutamide, nilutamide, and bifluranol, are also known to weakly inhibit androgen synthesis.
Antigonadotropins: Drugs that suppress the gonadotropin-releasing hormone (GnRH)-induced release of gonadotropins and consequent activation of gonadal androgen production. Examples include GnRH modulators like leuprorelin (a GnRH agonist) and cetrorelix (a GnRH antagonist), progestogens like allylestrenol, chlormadinone acetate, cyproterone acetate, gestonorone caproate, hydroxyprogesterone caproate, medroxyprogesterone acetate, megestrol acetate, osaterone acetate (veterinary), and oxendolone, and estrogens like estradiol, estradiol esters, ethinylestradiol, conjugated estrogens, and diethylstilbestrol.
Miscellaneous: Drugs that oppose the effects of androgens by means other than the above. Examples include estrogens, especially oral and synthetic (e.g., ethinylestradiol, diethylstilbestrol), which stimulate sex hormone-binding globulin (SHBG) production in the liver and thereby decrease free and hence bioactive levels of testosterone and DHT; anticorticotropins such as glucocorticoids, which suppress the adrenocorticotropic hormone (ACTH)-induced production of adrenal androgens; and immunogens and vaccines against androstenedione like ovandrotone albumin and androstenedione albumin, which decrease levels of androgens via the generation of antibodies against the androgen and androgen precursor androstenedione (used only in veterinary medicine).
Certain antiandrogens combine multiple of the above mechanisms. An example is the steroidal antiandrogen cyproterone acetate, which is a potent AR antagonist, a potent progestogen and hence antigonadotropin, a weak glucocorticoid and hence anticorticotropin, and a weak androgen synthesis inhibitor.
Side effects
The side effects of antiandrogens vary depending on the type of antiandrogen – namely whether it is a selective AR antagonist or lowers androgen levels – as well as the presence of off-target activity in the antiandrogen in question. For instance, whereas antigonadotropic antiandrogens like GnRH modulators and cyproterone acetate are associated with pronounced sexual dysfunction and osteoporosis in men, selective AR antagonists like bicalutamide are not associated with osteoporosis and have been associated with only minimal sexual dysfunction. These differences are thought to be related to the fact that antigonadotropins suppress androgen levels and by extension levels of bioactive metabolites of androgens like estrogens and neurosteroids, whereas selective AR antagonists similarly neutralize the effects of androgens but leave levels of androgens and hence their metabolites intact (and in fact can even increase them as a result of their progonadotropic effects). As another example, the steroidal antiandrogens cyproterone acetate and spironolactone possess off-target actions including progestogenic, antimineralocorticoid, and/or glucocorticoid activity in addition to their antiandrogen activity, and these off-target activities can result in additional side effects.
In males, the major side effects of antiandrogens are demasculinization and feminization. These side effects include breast pain/tenderness and gynecomastia (breast development/enlargement), reduced body hair growth/density, decreased muscle mass and strength, feminine changes in fat mass and distribution, and reduced penile length and testicular size. The rates of gynecomastia in men with selective AR antagonist monotherapy have been found to range from 30 to 85%. In addition, antiandrogens can cause infertility, osteoporosis, hot flashes, sexual dysfunction (including loss of libido and erectile dysfunction), depression, fatigue, anemia, and decreased semen/ejaculate volume in males. Conversely, the side effects of selective AR antagonists in women are minimal. However, antigonadotropic antiandrogens like cyproterone acetate can produce hypoestrogenism, amenorrhea, and osteoporosis in premenopausal women, among other side effects. In addition, androgen receptor antagonists can produce unfavorable effects on cholesterol levels, which long-term may increase the risk of cardiovascular disease.
A number of antiandrogens have been associated with hepatotoxicity. These include, to varying extents, cyproterone acetate, flutamide, nilutamide, bicalutamide, aminoglutethimide, and ketoconazole. In contrast, spironolactone, enzalutamide, and other antiandrogens are not associated with significant rates of hepatotoxicity. However, although they do not pose a risk of hepatotoxicity, spironolactone has a risk of hyperkalemia and enzalutamide has a risk of seizures.
In women who are pregnant, antiandrogens can interfere with the androgen-mediated sexual differentiation of the genitalia and brain of male fetuses. This manifests primarily as ambiguous genitalia – that is, undervirilized or feminized genitalia, which, anatomically, are a cross between a penis and a vagina – and theoretically also as femininity. As such, antiandrogens are teratogens, and women who are pregnant should not be treated with an antiandrogen. Moreover, women who can or may become pregnant are strongly recommended to take an antiandrogen only in combination with proper contraception.
Overdose
Antiandrogens are relatively safe in acute overdose.
Interactions
Inhibitors and inducers of cytochrome P450 enzymes may interact with various antiandrogens.
Mechanism of action
Androgen receptor antagonists
AR antagonists act by directly binding to and competitively displacing androgens like testosterone and DHT from the AR, thereby preventing them from activating the receptor and mediating their biological effects. AR antagonists are classified into two types, based on chemical structure: steroidal and nonsteroidal. Steroidal AR antagonists are structurally related to steroid hormones like testosterone and progesterone, whereas nonsteroidal AR antagonists are not steroids and are structurally distinct. Steroidal AR antagonists tend to have off-target hormonal actions due to their structural similarity to other steroid hormones. In contrast, nonsteroidal AR antagonists are selective for the AR and have no off-target hormonal activity. For this reason, they are sometimes described as "pure" antiandrogens.
Although they are described as antiandrogens and indeed show only such effects generally, most or all steroidal AR antagonists are actually not silent antagonists of the AR but rather are weak partial agonists and are able to activate the receptor in the absence of more potent AR agonists like testosterone and DHT. This may have clinical implications in the specific context of prostate cancer treatment. As an example, steroidal AR antagonists are able to increase prostate weight and accelerate prostate cancer cell growth in the absence of more potent AR agonists, and spironolactone has been found to accelerate progression of prostate cancer in case reports. In addition, whereas cyproterone acetate produces ambiguous genitalia via feminization in male fetuses when administered to pregnant animals, it has been found to produce masculinization of the genitalia of female fetuses. In contrast to steroidal AR antagonists, nonsteroidal AR antagonists are silent antagonists of the AR and do not activate the receptor. This may be why they have greater efficacy than steroidal AR antagonists in the treatment of prostate cancer and is an important reason why they have largely replaced them for this indication in medicine.
Nonsteroidal antiandrogens have relatively low affinity for the AR compared to steroidal AR ligands. For example, bicalutamide has around 2% of the affinity of DHT for the AR and around 20% of the affinity of cyproterone acetate (CPA) for the AR. Despite their low affinity for the AR, however, the lack of weak partial agonist activity of nonsteroidal antiandrogens (NSAAs) appears to improve their potency relative to steroidal antiandrogens. For example, although flutamide has about 10-fold lower affinity for the AR than CPA, it shows equal or slightly greater potency than CPA as an antiandrogen in bioassays. In addition, circulating therapeutic concentrations of nonsteroidal antiandrogens are very high, on the order of thousands of times higher than those of testosterone and DHT, and this allows them to efficaciously compete with androgens and block AR signaling.
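The competition argument above can be made concrete with the Gaddum equation for competitive antagonism, which gives the fraction of receptors occupied by an agonist in the presence of a competing antagonist. The sketch below is purely illustrative and not from the article: the dissociation constants and concentrations are hypothetical round numbers chosen to mimic a low-affinity antagonist present at a very large molar excess over the agonist, not measured values for any particular drug.

```python
# Gaddum equation for competitive antagonism (illustrative, hypothetical numbers):
# fractional agonist occupancy = ([A]/Ka) / (1 + [A]/Ka + [B]/Kb)

def agonist_occupancy(a_conc, k_a, b_conc, k_b):
    """Fraction of receptors occupied by agonist A with competitive antagonist B present."""
    return (a_conc / k_a) / (1.0 + a_conc / k_a + b_conc / k_b)

# Hypothetical values (nM): an agonist at 1 nM with Ka = 1 nM, versus an antagonist
# with 100-fold weaker affinity (Kb = 100 nM) present at a 1000-fold molar excess.
without_antagonist = agonist_occupancy(1.0, 1.0, 0.0, 100.0)
with_antagonist = agonist_occupancy(1.0, 1.0, 1000.0, 100.0)

print(f"agonist occupancy without antagonist: {without_antagonist:.2f}")  # ~0.50
print(f"agonist occupancy with antagonist:    {with_antagonist:.2f}")     # ~0.08
```

Even though the antagonist in this toy example binds 100-fold more weakly, its 1000-fold excess still displaces most of the agonist from the receptor, which is the same qualitative point made above about nonsteroidal antiandrogens circulating at concentrations far above those of testosterone and DHT.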
AR antagonists may not bind to or block membrane androgen receptors (mARs), which are distinct from the classical nuclear AR. However, the mARs do not appear to be involved in masculinization. This is evidenced by the perfectly female phenotype of women with complete androgen insensitivity syndrome. These women have a 46,XY karyotype (i.e., are genetically "male") and high levels of androgens but possess a defective AR and for this reason never masculinize. They are described as highly feminine, both physically as well as mentally and behaviorally.
N-Terminal domain antagonists
N-Terminal domain AR antagonists are a new type of AR antagonist that, unlike all currently marketed AR antagonists, bind to the N-terminal domain (NTD) of the AR rather than the ligand-binding domain (LBD). Whereas conventional AR antagonists bind to the LBD of the AR and competitively displace androgens, thereby preventing them from activating the receptor, AR NTD antagonists bind covalently to the NTD of the AR and prevent protein–protein interactions subsequent to activation that are required for transcriptional activity. As such, they are non-competitive and irreversible antagonists of the AR. Examples of AR NTD antagonists include bisphenol A diglycidyl ether (BADGE) and its derivatives EPI-001, ralaniten (EPI-002), and ralaniten acetate (EPI-506). AR NTD antagonists are under investigation for the potential treatment of prostate cancer, and it is thought that they may have greater efficacy as antiandrogens relative to conventional AR antagonists. In accordance with this notion, AR NTD antagonists are active against splice variants of the AR, which conventional AR antagonists are not, and AR NTD antagonists are immune to gain-of-function mutations in the AR LBD that convert AR antagonists into AR agonists and commonly occur in prostate cancer.
Androgen receptor degraders
Selective androgen receptor degraders (SARDs) are another new type of antiandrogen that has recently been developed. They work by enhancing the degradation of the AR, and are analogous to selective estrogen receptor degraders (SERDs) like fulvestrant (a drug used to treat estrogen receptor-positive breast cancer). Similarly to AR NTD antagonists, it is thought that SARDs may have greater efficacy than conventional AR antagonists, and for this reason, they are under investigation for the treatment of prostate cancer. An example of a SARD is dimethylcurcumin (ASC-J9), which is under development as a topical medication for the potential treatment of acne. SARDs like dimethylcurcumin differ from conventional AR antagonists and AR NTD antagonists in that they may not necessarily bind directly to the AR.
Androgen synthesis inhibitors
Androgen synthesis inhibitors are enzyme inhibitors that prevent the biosynthesis of androgens. This process occurs mainly in the gonads and adrenal glands, but also occurs in other tissues like the prostate gland, skin, and hair follicles. These drugs include aminoglutethimide, ketoconazole, and abiraterone acetate. Aminoglutethimide inhibits cholesterol side-chain cleavage enzyme, also known as P450scc or CYP11A1, which is responsible for the conversion of cholesterol into pregnenolone and by extension the production of all steroid hormones, including the androgens. Ketoconazole and abiraterone acetate are inhibitors of the enzyme CYP17A1, also known as 17α-hydroxylase/17,20-lyase, which is responsible for the conversion of pregnane steroids into androgens, as well as the conversion of mineralocorticoids into glucocorticoids. Because these drugs all prevent the formation of glucocorticoids in addition to androgens, they must be combined with a glucocorticoid like prednisone to avoid adrenal insufficiency. A newer drug currently under development for treatment of prostate cancer, seviteronel, is selective for inhibition of the 17,20-lyase functionality of CYP17A1, and for this reason, unlike earlier drugs, does not require concomitant treatment with a glucocorticoid.
5α-Reductase inhibitors
5α-Reductase inhibitors such as finasteride and dutasteride are inhibitors of 5α-reductase, an enzyme that is responsible for the formation of DHT from testosterone. DHT is between 2.5- and 10-fold more potent than testosterone as an androgen and is produced in a tissue-selective manner based on expression of 5α-reductase. Tissues in which DHT forms at a high rate include the prostate gland, skin, and hair follicles. In accordance, DHT is involved in the pathophysiology of benign prostatic hyperplasia, pattern hair loss, and hirsutism, and 5α-reductase inhibitors are used to treat these conditions.
Antigonadotropins
Antigonadotropins are drugs that suppress the GnRH-mediated secretion of gonadotropins from the pituitary gland. Gonadotropins include luteinizing hormone (LH) and follicle-stimulating hormone (FSH) and are peptide hormones that signal the gonads to produce sex hormones. By suppressing gonadotropin secretion, antigonadotropins suppress gonadal sex hormone production and by extension circulating androgen levels. GnRH modulators, including both GnRH agonists and GnRH antagonists, are powerful antigonadotropins that are able to suppress androgen levels by 95% in men. In addition, estrogens and progestogens are antigonadotropins via exertion of negative feedback on the hypothalamic–pituitary–gonadal axis (HPG axis). High-dose estrogens are able to suppress androgen levels to castrate levels in men similarly to GnRH modulators, while high-dose progestogens are able to suppress androgen levels by up to approximately 70 to 80% in men.
Examples of GnRH agonists include leuprorelin (leuprolide) and goserelin, while an example of a GnRH antagonist is cetrorelix. Estrogens that are or that have been used as antigonadotropins include estradiol, estradiol esters like estradiol valerate, estradiol undecylate, and polyestradiol phosphate, conjugated estrogens, ethinylestradiol, diethylstilbestrol (no longer widely used), and bifluranol. Progestogens that are used as antigonadotropins include chlormadinone acetate, cyproterone acetate, gestonorone caproate, hydroxyprogesterone caproate, medroxyprogesterone acetate, megestrol acetate, and oxendolone.
Miscellaneous
Sex hormone-binding globulin modulators
In addition to their antigonadotropic effects, estrogens are also functional antiandrogens by decreasing free concentrations of androgens via increasing the hepatic production of sex hormone-binding globulin (SHBG) and by extension circulating SHBG levels. Combined oral contraceptives containing ethinylestradiol have been found to increase circulating SHBG levels by 2- to 4-fold in women and to reduce free testosterone concentrations by 40 to 80%. However, combined oral contraceptives that contain the particularly androgenic progestin levonorgestrel have been found to increase SHBG levels by only 50 to 100%, which is likely because activation of the AR in the liver has the opposite effect of estrogen and suppresses production of SHBG. Levonorgestrel and certain other 19-nortestosterone progestins used in combined oral contraceptives like norethisterone also directly bind to and displace androgens from SHBG, which may additionally antagonize the functional antiandrogenic effects of ethinylestradiol. In men, a study found that treatment with a relatively low dosage of 20 μg/day ethinylestradiol for 5 weeks increased circulating SHBG levels by 150% and, due to the accompanying decrease in free testosterone levels, increased total circulating levels of testosterone by 50% (via reduced negative feedback by androgens on the HPG axis).
Corticosteroid-binding globulin modulators
Estrogens at high doses can partially suppress adrenal androgen production. A study found that treatment with high-dose ethinylestradiol (100 μg/day) reduced levels of major circulating adrenal androgens by 27 to 48% in transgender women. The decrease in adrenal androgens with estrogens is apparent with oral and synthetic estrogens like ethinylestradiol and estramustine phosphate but is minimal with parenteral bioidentical estradiol forms like polyestradiol phosphate. It is thought to be mediated via a hepatic mechanism, probably increased corticosteroid-binding globulin (CBG) production and levels and compensatory changes in adrenal steroid production (e.g., shunting of adrenal androgen synthesis to cortisol production). It is notable in this regard that oral and synthetic estrogens, due to the oral first pass and resistance to hepatic metabolism, have much stronger influences on liver protein synthesis than parenteral estradiol. The decrease in adrenal androgen levels with high-dose estrogen therapy may be beneficial in the treatment of prostate cancer.
Anticorticotropins
Anticorticotropins such as glucocorticoids and mineralocorticoids work by exerting negative feedback on the hypothalamic–pituitary–adrenal axis (HPA axis), thereby inhibiting the secretion of corticotropin-releasing hormone (CRH) and hence adrenocorticotropic hormone (ACTH; corticotropin) and consequently suppressing the production of androgen prohormones like dehydroepiandrosterone (DHEA), dehydroepiandrosterone sulfate (DHEA-S), and androstenedione in the adrenal gland. They are rarely used clinically as functional antiandrogens, but are used as such in the case of congenital adrenal hyperplasia in girls and women, in which there are excessive production and levels of adrenal androgens due to glucocorticoid deficiency and hence HPA axis overactivity.
Insulin sensitizers
In women with insulin resistance, such as those with polycystic ovary syndrome, androgen levels are often elevated. Metformin, an insulin-sensitizing medication, has indirect antiandrogenic effects in such women, decreasing testosterone levels by as much as 50% secondary to its beneficial effects on insulin sensitivity.
Immunogens and vaccines
Ovandrotone albumin (Fecundin, Ovastim) and Androvax (androstenedione albumin) are immunogens and vaccines against androstenedione that are used in veterinary medicine to improve fecundity (reproductive rate) in ewes (adult female sheep). The generation of antibodies against androstenedione by these agents is thought to decrease circulating levels of androstenedione and its metabolites (e.g., testosterone and estrogens), which in turn increases the activity of the HPG axis via reduced negative feedback and increases the rate of ovulation, resulting in greater fertility and fecundity.
Chemistry
Antiandrogens can be divided into several different types based on chemical structure, including steroidal antiandrogens, nonsteroidal antiandrogens, and peptides. Steroidal antiandrogens include compounds like cyproterone acetate, spironolactone, estradiol, abiraterone acetate, and finasteride; nonsteroidal antiandrogens include compounds like bicalutamide, elagolix, diethylstilbestrol, aminoglutethimide, and ketoconazole; and peptides include GnRH analogues like leuprorelin and cetrorelix.
History
Antigonadotropins like estrogens and progestogens were both first introduced in the 1930s. The beneficial effects of androgen deprivation via surgical castration or high-dose estrogen therapy on prostate cancer were discovered in 1941. AR antagonists were first discovered in the early 1960s. The steroidal antiandrogen cyproterone acetate was discovered in 1961. and introduced in 1973. and is often described as the first antiandrogen to have been marketed. However, spironolactone was introduced in 1959., although its antiandrogen effects were not recognized or taken advantage of until later and were originally an unintended off-target action of the drug. In addition to spironolactone, chlormadinone acetate and megestrol acetate are steroidal antiandrogens that are weaker than cyproterone acetate but were also introduced earlier, in the 1960s. Other early steroidal antiandrogens that were developed around this time but were never marketed include benorterone (SKF-7690; 17α-methyl-B-nortestosterone), BOMT (Ro 7–2340), cyproterone (SH-80881), and trimethyltrienolone (R-2956).
The nonsteroidal antiandrogen flutamide was first reported in 1967. It was introduced in 1983 and was the first nonsteroidal antiandrogen to be marketed. Another early nonsteroidal antiandrogen, DIMP (Ro 7–8117), which is structurally related to thalidomide and is a relatively weak antiandrogen, was first described in 1973 and was never marketed. Flutamide was followed by nilutamide in 1989 and bicalutamide in 1995. In addition to these three drugs, which have been regarded as first-generation nonsteroidal antiandrogens, the second-generation nonsteroidal antiandrogens enzalutamide and apalutamide were introduced in 2012 and 2018, respectively. They differ from the earlier nonsteroidal antiandrogens mainly in being much more efficacious.
The androgen synthesis inhibitors aminoglutethimide and ketoconazole were first marketed in 1960 and 1977, respectively, and the newer drug abiraterone acetate was introduced in 2011. GnRH modulators were first introduced in the 1980s. The 5α-reductase inhibitors finasteride and dutasteride were introduced in 1992 and 2002, respectively. Elagolix, the first orally active GnRH modulator to be marketed, was introduced in 2018.
Timeline
The following is a timeline of events in the history of antiandrogens:
1941: Huggins and Hodges show that androgen deprivation via high-dose estrogen therapy or surgical castration treats prostate cancer
1957: The steroidal antiandrogen spironolactone is first synthesized
1960: Spironolactone is first introduced for medical use, as an antimineralocorticoid
1961: The steroidal antiandrogen cyproterone acetate is first synthesized
1962: Spironolactone is first reported to produce gynecomastia in men
1963: The antiandrogenic activity of cyproterone acetate is discovered
1966: Benorterone is the first known antiandrogen to be studied clinically, to treat acne and hirsutism in women
1967: A known antiandrogen, benorterone, is first reported to induce gynecomastia in males
1967: The first-generation nonsteroidal antiandrogen flutamide is first synthesized
1967: Cyproterone acetate is first studied clinically, to treat sexual deviance in men
1969: Cyproterone acetate is first studied in the treatment of acne, hirsutism, seborrhea, and scalp hair loss in women
1969: The antiandrogenic activity of spironolactone is discovered
1972: The antiandrogenic activity of flutamide is first reported
1973: Cyproterone acetate is first introduced for medical use, to treat sexual deviance
1977: The first-generation antiandrogen nilutamide is first described
1978: Spironolactone is first studied in the treatment of hirsutism in women
1979: Combined androgen blockade is first studied
1980: Medical castration via a GnRH analogue is first achieved
1982: The first-generation antiandrogen bicalutamide is first described
1982: Combined androgen blockade for prostate cancer is developed
1983: Flutamide is first introduced, in Chile, for medical use, to treat prostate cancer
1987: Nilutamide is first introduced, in France, for medical use, to treat prostate cancer
1989: Combined androgen blockade via flutamide and a GnRH analogue is found to be superior to a GnRH analogue alone for prostate cancer
1989: Flutamide is first introduced for medical use in the United States, to treat prostate cancer
1989: Flutamide is first studied in the treatment of hirsutism in women
1992: The androgen synthesis inhibitor abiraterone acetate is first described
1995: Bicalutamide is first introduced for medical use, to treat prostate cancer
1996: Nilutamide is first introduced for medical use in the United States, to treat prostate cancer
2006: The second-generation nonsteroidal antiandrogen enzalutamide is first described
2007: The second-generation nonsteroidal antiandrogen apalutamide is first described
2011: Abiraterone acetate is first introduced for medical use, to treat prostate cancer
2012: Enzalutamide is first introduced for medical use, to treat prostate cancer
2018: Apalutamide is first introduced for medical use, to treat prostate cancer
2018: Elagolix is the first orally active GnRH antagonist to be introduced for medical use
2019: Relugolix is the second orally active GnRH antagonist to be introduced for medical use
Society and culture
Etymology
The term antiandrogen is generally used to refer specifically to AR antagonists, as described by Dorfman (1970):
However, in spite of the above, the term may also be used to describe functional antiandrogens like androgen synthesis inhibitors and antigonadotropins, including even estrogens and progestogens. For example, the progestogen and hence antigonadotropin medroxyprogesterone acetate is sometimes described as a steroidal antiandrogen, even though it is not an antagonist of the AR.
Research
Topical administration
There has been much interest and effort in the development of topical AR antagonists to treat androgen-dependent conditions like acne and pattern hair loss in males. However, whereas systemic administration of antiandrogens is very effective in treating these conditions, topical administration has generally been found to have only modest effectiveness, even when high-affinity steroidal AR antagonists like cyproterone acetate and spironolactone have been employed. Moreover, in the specific case of acne treatment, topical AR antagonists have been found to be much less effective than established treatments like benzoyl peroxide and antibiotics.
A variety of AR antagonists have been developed for topical use but have not completed development and hence have never been marketed. These include the steroidal AR antagonists clascoterone, cyproterone, rosterolone, and topterone and the nonsteroidal AR antagonists cioteronel, inocoterone acetate, RU-22930, RU-58642, and RU-58841. However, one topical AR antagonist, topilutamide (fluridil), has been introduced in a few European countries for the treatment of pattern hair loss in men. In addition, a topical 5α-reductase inhibitor and weak estrogen, alfatradiol, has also been introduced in some European countries for the same indication, although its effectiveness is controversial. Spironolactone has been marketed in Italy in the form of a topical cream under the brand name Spiroderm for the treatment of acne and hirsutism, but this formulation was discontinued and hence is no longer available.
Male contraception
Antiandrogens, such as cyproterone acetate, have been studied for potential use as male hormonal contraceptives. While effective in suppressing male fertility, their use as monotherapies is precluded by side effects, such as androgen deficiency (e.g., demasculinization, sexual dysfunction, hot flashes, osteoporosis) and feminization (e.g., gynecomastia). The combination of a primary antigonadotropin such as cyproterone acetate to prevent fertility and an androgen like testosterone to prevent systemic androgen deficiency, resulting in a selective antiandrogenic action locally in the testes, has been extensively studied and has shown promising results, but has not been approved for clinical use at this time. Dimethandrolone undecanoate (developmental code name CDB-4521), an orally active dual AAS and progestogen, is under investigation as a potential male contraceptive and as the first male birth control pill.
Breast cancer
Antiandrogens such as bicalutamide, enzalutamide, and abiraterone acetate are under investigation for the potential treatment of breast cancer, including AR-expressing triple-negative breast cancer and other types of AR-expressing breast cancer.
Miscellaneous
Antiandrogens may be effective in the treatment of obsessive–compulsive disorder.
See also
Androgen insensitivity syndrome
Antiandrogens in the environment
Androgen replacement therapy
References
Further reading
Anaphrodisia
Anti-acne preparations
Hair loss medications
Hair removal
Hormonal antineoplastic drugs
Prostate cancer
Sex hormones
Psychoactive drugs | Antiandrogen | [
"Chemistry",
"Biology"
] | 9,843 | [
"Behavior",
"Sex hormones",
"Psychoactive drugs",
"Neurochemistry",
"Sexuality"
] |
180,046 | https://en.wikipedia.org/wiki/Watercolor%20painting | Watercolor (American English) or watercolour (Commonwealth English; see spelling differences), also aquarelle (from an Italian diminutive of the Latin word for 'water'), is a painting method in which the paints are made of pigments suspended in a water-based solution. Watercolor refers to both the medium and the resulting artwork. Aquarelles painted with water-soluble colored ink instead of modern water colors are known among experts by a Latin term meaning "aquarelle made with ink". However, this term has largely passed out of use.
The conventional and most common support—material to which the paint is applied—for watercolor paintings is watercolor paper. Other supports or substrates include stone, ivory, silk, reed, papyrus, bark papers, plastics, vellum, leather, fabric, wood, and watercolor canvas (coated with a gesso that is specially formulated for use with watercolors). Watercolor paper is often made entirely or partially with cotton, which gives the surface the appropriate texture and minimizes distortion when wet. Watercolor papers are usually cold-pressed papers, which provide better texture and appearance, with a weight of at least 300 gsm (140 lb); paper lighter than this is generally not recommended for anything but sketching. Transparency is the main characteristic of watercolors. Watercolors can also be made opaque by adding Chinese white, though this is not a method used in "true" (traditional) watercolor.
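The gsm and pound ratings quoted above come from two different weighing conventions. The sketch below shows roughly how they correspond; it assumes (this is not stated in the text) the traditional imperial basis for watercolor paper of a 500-sheet ream of 22 in × 30 in sheets, and the names are purely illustrative.

```python
# Rough correspondence between the imperial "pound" rating of watercolor paper
# and grams per square metre, assuming a 500-sheet ream of 22 in x 30 in sheets.
GRAMS_PER_POUND = 453.59237
SQ_M_PER_SQ_IN = 0.0254 ** 2
SHEET_AREA_M2 = 22 * 30 * SQ_M_PER_SQ_IN   # one "imperial" sheet
REAM_SHEETS = 500

def ream_pounds_to_gsm(pounds: float) -> float:
    """Convert a ream weight in pounds to grams per square metre."""
    return pounds * GRAMS_PER_POUND / (REAM_SHEETS * SHEET_AREA_M2)

print(round(ream_pounds_to_gsm(140)))  # ~298, i.e. the ~300 gsm quoted above
print(round(ream_pounds_to_gsm(90)))   # ~192, a common lighter sketching paper
```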
Watercolor paint is an ancient form of painting, if not the most ancient form of art itself. In East Asia, watercolor painting with inks is referred to as brush painting or scroll painting. In Chinese, Korean and Japanese painting it has been the dominant medium, often in monochrome black or browns, often using inkstick or other pigments. India, Ethiopia and other countries have long watercolor painting traditions as well.
Many Western artists, especially in the early 19th century, used watercolor primarily as a sketching tool in preparation for the "finished" work in oil or engraving. Until the end of the eighteenth century, traditional watercolors were known as 'tinted drawings'.
History
Watercolor art dates back to the cave paintings of paleolithic Europe and has been used for manuscript illustration since at least Egyptian times, with particular prominence in the European Middle Ages. However, its continuous history as an art medium begins with the Renaissance. The German Northern Renaissance artist Albrecht Dürer (1471–1528), who painted several fine botanical, wildlife, and landscape watercolors, is generally considered among the earliest exponents of the medium. An important school of watercolor painting in Germany was led by Hans Bol (1534–1593) as part of the Dürer Renaissance.
Despite this early start, watercolors were generally used by Baroque easel painters only for sketches, copies, or cartoons (full-scale design drawings). Notable early practitioners of watercolor painting were Van Dyck (during his stay in England), Claude Lorrain, Giovanni Benedetto Castiglione, and many Dutch and Flemish artists. However, botanical illustration and wildlife illustration perhaps form the oldest and most important traditions in watercolor painting. Botanical illustrations became popular during the Renaissance, both as hand-tinted woodblock illustrations in books or broadsheets and as tinted ink drawings on vellum or paper. Botanical artists have traditionally been some of the most exacting and accomplished watercolor painters, and even today, watercolors—with their unique ability to summarize, clarify, and idealize in full color—are used to illustrate scientific and museum publications. Wildlife illustration reached its peak in the 19th century with artists such as John James Audubon, and today many naturalist field guides are still illustrated with watercolor paintings.
English school
Several factors contributed to the spread of watercolor painting during the 18th century, particularly in England. Among the elite and aristocratic classes, watercolor painting was one of the incidental adornments of a good education; mapmakers, military officers, and engineers valued it for its usefulness in depicting properties, terrain, fortifications, field geology, and for illustrating public works or commissioned projects. Watercolor artists were commonly taken on geological or archaeological expeditions, funded by the Society of Dilettanti (founded in 1733), to document discoveries in the Mediterranean, Asia, and the New World. These expeditions stimulated the demand for topographical painters, who churned out memento paintings of famous sites (and sights) along the Grand Tour to Italy that was undertaken by every fashionable young man of the time.
In the late 18th century, the English cleric William Gilpin wrote a series of hugely popular books describing his picturesque journeys throughout rural England, and illustrated them with self-made sentimentalized monochrome watercolors of river valleys, ancient castles, and abandoned churches. This example popularized watercolors as a form of personal tourist journal. The confluence of these cultural, engineering, scientific, tourist, and amateur interests culminated in the celebration and promotion of watercolor as a distinctly English "national art". William Blake published several books of hand-tinted engraved poetry, provided illustrations to Dante's Inferno, and he also experimented with large monotype works in watercolor. Among the many other significant watercolorists of this period were Thomas Gainsborough, John Robert Cozens, Francis Towne, Michael Angelo Rooker, William Pars, Thomas Hearne, and John Warwick Smith.
From the late 18th century through the 19th century, the market for printed books and domestic art contributed substantially to the growth of the medium. Watercolors were used as the basic document from which collectible landscape or tourist engravings were developed, and hand-painted watercolor originals or copies of famous paintings contributed to many upper class art portfolios. Satirical broadsides by Thomas Rowlandson, many published by Rudolph Ackermann, were also extremely popular.
The three English artists credited with establishing watercolor as an independent, mature painting medium are Paul Sandby (1730–1809), often called the "father of the English watercolor"; Thomas Girtin (1775–1802), who pioneered its use for large format, romantic or picturesque landscape painting; and Joseph Mallord William Turner (1775–1851), who brought watercolor painting to the highest pitch of power and refinement, and created hundreds of superb historical, topographical, architectural, and mythological watercolor paintings. His method of developing the watercolor painting in stages, starting with large, vague color areas established on wet paper, then refining the image through a sequence of washes and glazes, permitted him to produce large numbers of paintings with "workshop efficiency" and made him a multimillionaire, partly by sales from his personal art gallery, the first of its kind. Among the important and highly talented contemporaries of Turner and Girtin were John Varley, John Sell Cotman, Anthony Copley Fielding, Samuel Palmer, William Havell, and Samuel Prout. The Swiss painter Abraham-Louis-Rodolphe Ducros was also widely known for his large format, romantic paintings in watercolor.
The confluence of amateur activity, publishing markets, middle class art collecting, and 19th-century technique led to the formation of English watercolor painting societies: the Society of Painters in Water Colours (1804, now known as the Royal Watercolour Society) and the New Water Colour Society (1832, now known as the Royal Institute of Painters in Water Colours). (A Scottish Society of Painters in Water Colour was founded in 1878, now known as the Royal Scottish Society of Painters in Watercolour.) These societies provided annual exhibitions and buyer referrals for many artists. They also engaged in petty status rivalries and aesthetic debates, particularly between advocates of traditional ("transparent") watercolor and the early adopters of the denser color possible with body color or gouache ("opaque" watercolor). The late Georgian and Victorian periods produced the zenith of the British watercolor, among the most impressive 19th-century works on paper, due to artists Turner, Varley, Cotman, David Cox, Peter de Wint, William Henry Hunt, John Frederick Lewis, Myles Birket Foster, Frederick Walker, Thomas Collier, Arthur Melville and many others. In particular, the graceful, lapidary, and atmospheric watercolors ("genre paintings") by Richard Parkes Bonington created an international fad for watercolor painting, especially in England and France in the 1820s. In the latter half of the 19th century, portrait painter Frederick Havill became a key player in the establishment of watercolour in England. Art critic Huntly Carter described Havill as a "founder of the water colour school."
The popularity of watercolors stimulated many innovations, including heavier and more sized wove papers, and brushes (called "pencils") manufactured expressly for watercolor. Watercolor tutorials were first published in this period by Varley, Cox, and others, establishing the step-by-step painting instructions that still characterize the genre today; The Elements of Drawing, a watercolor tutorial by English art critic John Ruskin, has been out of print only once since it was first published in 1857. Commercial brands of watercolor were marketed and paints were packaged in metal tubes or as dry cakes that could be "rubbed out" (dissolved) in studio porcelain or used in portable metal paint boxes in the field. Breakthroughs in chemistry made many new pigments available, including synthetic ultramarine blue, cobalt blue, viridian, cobalt violet, cadmium yellow, aureolin (potassium cobaltinitrite), zinc white, and a wide range of carmine and madder lakes. These pigments, in turn, stimulated a greater use of color with all painting media, but in English watercolors, particularly by the Pre-Raphaelite Brotherhood.
United States
Watercolor painting also became popular in the United States during the 19th century; outstanding early practitioners included John James Audubon, as well as early Hudson River School painters such as William H. Bartlett and George Harvey. By mid-century, the influence of John Ruskin led to increasing interest in watercolors, particularly the use of a detailed "Ruskinian" style by such artists as John W. Hill Henry, William Trost Richards, Roderick Newman, and Fidelia Bridges. The American Society of Painters in Watercolor (now the American Watercolor Society) was founded in 1866. Late-19th-century American exponents of the medium included Thomas Moran, Thomas Eakins, John LaFarge, John Singer Sargent, Childe Hassam, and, preeminently, Winslow Homer.
Europe
Watercolor was less popular in Continental Europe. In the 18th century, gouache was an important medium for the Italian artists Marco Ricci and Francesco Zuccarelli, whose landscape paintings were widely collected. Gouache was used by a number of artists in France as well. In the 19th century, the influence of the English school helped popularize "transparent" watercolor in France, and it became an important medium for Eugène Delacroix, François Marius Granet, Henri-Joseph Harpignies, and the satirist Honoré Daumier. Other European painters who worked frequently in watercolor were Adolph Menzel in Germany and Stanisław Masłowski in Poland.
The adoption of brightly colored, petroleum-derived aniline dyes (and pigments compounded from them), which all fade rapidly on exposure to light, and the efforts to properly conserve the twenty thousand J. M. W. Turner paintings inherited by the British Museum in 1857, led to a negative reevaluation of the permanence of pigments in watercolor. This caused a sharp decline in their status and market value. Nevertheless, isolated practitioners continued to prefer and develop the medium into the 20th century. Paul Signac created landscape and maritime watercolors, and Paul Cézanne developed a watercolor painting style consisting entirely of overlapping small glazes of pure color.
20th and 21st centuries
Among the many 20th-century artists who produced important works in watercolor were Wassily Kandinsky, Emil Nolde, Paul Klee, Egon Schiele, and Raoul Dufy. In America, the major exponents included Charles Burchfield, Edward Hopper, Georgia O'Keeffe, Charles Demuth, and John Marin (80% of his total work is watercolor). In this period, American watercolor painting often emulated European Impressionism and Post-Impressionism, but significant individualism flourished in "regional" styles of watercolor painting from the 1920s to 1940s. In particular, the "Cleveland School" or "Ohio School" of painters centered around the Cleveland Museum of Art, and the California Scene painters were often associated with Hollywood animation studios or the Chouinard Art Institute (now California Institute of the Arts). The California painters exploited their state's varied geography, Mediterranean climate, and "automobility" to reinvigorate the outdoor or "plein air" tradition. The most influential among them were Phil Dike, Millard Sheets, Rex Brandt, Dong Kingman, and Milford Zornes. The California Water Color Society, founded in 1921 and later renamed the National Watercolor Society, sponsored important exhibitions of their work. The largest watercolor in the world at the moment is Building 6 Portrait: Interior. Produced by American artist Barbara Prey on commission for MASS MoCA, the work can be seen at MASS MoCA's Robert W. Wilson Building.
Although the rise of abstract expressionism, and the trivializing influence of amateur painters and advertising- or workshop-influenced painting styles, led to a temporary decline in the popularity of watercolor painting, watercolors continue to be utilized by artists like Martha Burchfield, Joseph Raffael, Andrew Wyeth, Philip Pearlstein, Eric Fischl, Gerhard Richter, Anselm Kiefer, and Francesco Clemente. In Spain, Ceferí Olivé created an innovative style followed by his students, such as Rafael Alonso López-Montero and Francesc Torné Gavaldà. In Mexico, the major exponents are Ignacio Barrios, Edgardo Coghlan, Ángel Mauro, Vicente Mendiola, and Pastor Velázquez. In the Canary Islands, where this pictorial technique has many followers, there are stand-out artists such as Francisco Bonnín Guerín, José Comas Quesada, and Alberto Manrique.
Watercolor paint
Watercolor paint consists of four principal ingredients: a pigment; gum arabic as a binder to hold the pigment in suspension; additives like glycerin, ox gall, honey, and preservatives to alter the viscosity, hiding, durability or color of the pigment and vehicle mixture; and, evaporating water, as a solvent used to thin or dilute the paint for application.
The more general term watermedia refers to any painting medium that uses water as a solvent and that can be applied with a brush, pen, or sprayer. This includes most inks, watercolors, temperas, caseins, gouaches, and modern acrylic paints.
The term "watercolor" refers to paints that use water-soluble, complex carbohydrates as a binder. Originally (in the 16th to 18th centuries), watercolor binders were sugars and/or hide glues, but since the 19th century, the preferred binder is natural gum arabic, with glycerin and/or honey as additives to improve plasticity and solubility of the binder, and with other chemicals added to improve product shelf life.
The term "bodycolor" refers to paint that is opaque rather than transparent. It usually refers to opaque watercolor, known as gouache. Modern acrylic paints use an acrylic resin dispersion as a binder.
Commercial watercolors
Watercolor painters before the turn of the 18th century had to make paints themselves using pigments purchased from an apothecary or specialized "colorman", and mixing them with gum arabic or some other binder. The earliest commercial paints were small, resinous blocks that had to be wetted and laboriously "rubbed out" in water to obtain a usable color intensity. William Reeves started his business as a colorman around 1766. In 1781, he and his brother, Thomas Reeves, were awarded the Silver Palette of the Society of Arts, for the invention of the moist watercolor paint-cake, a time-saving convenience, introduced in the "golden age" of English watercolor painting. The "cake" was immediately soluble when touched by a wet brush.
Modern commercial watercolor paints are available in tubes, pans and liquids. The majority of paints sold today are in collapsible small metal tubes in standard sizes and formulated to a consistency similar to toothpaste by being already mixed with a certain water component. For use, this paste has to be further diluted with water. Pan paints (small dried cakes or bars of paint in an open plastic container) are usually sold in two sizes, full pans and half pans.
Owing to modern industrial organic chemistry, the variety, saturation, and permanence of artists' colors available today has been vastly improved. Correct and non-toxic primary colors are now present through the introduction of hansa yellow, phthalo blue and quinacridone. From such a set of three colors, in principle all others can be mixed, as in a classical technique no white is used. The modern development of pigments was not driven by artistic demand. The art materials industry is too small to exert any market leverage on global dye or pigment manufacture. With rare exceptions such as aureolin, all modern watercolor paints utilize pigments that have a wider industrial use. Paint manufacturers buy, by industrial standards very small, supplies of these pigments, mill them with the vehicle, solvent, and additives, and package them. The milling process with inorganic pigments, in more expensive brands, reduces the particle size to improve the color flow when the paint is applied with water.
Transparency
In the partisan debates of the 19th-century English art world, gouache was emphatically contrasted to traditional watercolors and denigrated for its high hiding power or lack of "transparency"; "transparent" watercolors were exalted. The aversion to opaque paint had its origin in the fact that well into the 19th century lead white was used to increase the covering quality. That pigment tended to discolor to black over time under the influence of sulphurous air pollution, totally ruining the artwork. The traditional claim that "transparent" watercolors gain "luminosity" because they function like a pane of stained glass laid on paper—the color intensified because the light passes through the pigment, reflects from the paper, and passes a second time through the pigment on its way to the viewer—is false. Watercolor paints typically do not form a cohesive paint layer, as do acrylic or oil paints, but simply scatter pigment particles randomly across the paper surface; the transparency is caused by the paper being visible between the particles. Watercolors may appear more vivid than acrylics or oils because the pigments are laid down in a purer form, with few or no fillers (such as kaolin) obscuring the pigment colors. Typically, most or all of the gum binder will be absorbed by the paper, preventing the binder from changing the visibility of the pigment. The absorption of the gum does not decrease but rather increases the adhesion of the pigment to the paper, as its particles will then penetrate the fibres more easily. In fact, an important function of the gum is to facilitate the "lifting" (removal) of color, should the artist want to create a lighter spot in a painted area. Furthermore, the gum prevents flocculation of the pigment particles.
See also
Acrylic painting techniques
History of painting
Ink wash painting
Oil painting
:Category:Watercolorists
References
Works cited
Further reading
History
Andrew Wilton & Anne Lyles. The Great Age of British Watercolours (1750–1880). Prestel, 1993.
Anne Lyles & Robin Hamlyn. British watercolours from the Oppé Collection. Tate Gallery Publishing, 1997.
Christopher Finch. American Watercolors. Abbeville Press, 1991. ASIN B000IBDWGK
Christopher Finch. Nineteenth-Century Watercolors. Abbeville Press, 1991.
Christopher Finch. Twentieth-Century Watercolors. Abbeville Press, 1988.
Eric Shanes. Turner: The Great Watercolours. Royal Academy of Arts, 2001.
Martin Hardie. Water-Colour Painting in Britain (3 volumes: I. The Eighteenth Century; II. The Romantic Period; III. The Victorian Period.). Batsford, 1966–1968.
Michael Clarke. The Tempting Prospect: A Social History of English Watercolours. British Museum Publications, 1981. ASIN B000UCV0XO
Moore, Sean. Ultimate Visual Dictionary. Dorling Kindersley, 1994.
Tutorials and Technique
Rex Brandt. The Winning Ways of Watercolor: Basic Techniques and Methods of Transparent Watercolor in Twenty Lessons. Van Nostrand Reinhold, 1973.
David Dewey. The Watercolor Book: Materials and Techniques for Today's Artist. Watson-Guptill, 1995.
Donna Seldin Janis. Sargent Abroad: Figures and Landscapes. Abbeville Press; 1st edition (October 1997).
Charles LeClair. The Art of Watercolor (Revised and Expanded Edition). Watson-Guptill, 1999.
Royal Watercolour Society. The Watercolour Expert. Cassell Illustrated, 2004.
John Ruskin. The Elements of Drawing [1857]. Watson-Guptill, 1991. (Reprints from other publishers are also available.)
Pip Seymour. Watercolour Painting: A Handbook for Artists. Lee Press, 1997.
Stan Smith. Watercolor: The Complete Course. Reader's Digest, 1995.
Curtis Tappenden. Foundation Course: Watercolour. Cassell Illustrated, 2003.
Edgar A. Whitney. Complete Guide to Watercolor Painting. Watson-Guptill, 1974. [Dover Edition ]
Materials
Ian Sidaway. The Watercolor Artist's Paper Directory. North Light, 2000.
Jacques Turner. Brushes: A Handbook for Artists and Artisans. Design Press, 1992.
Sylvie Turner. The Book of Fine Paper. Thames & Hudson, 1998.
Michael Wilcox. The Wilcox Guide To The Best Watercolor Paints. School of Colour Publications, 2000.
External links
American Watercolor Society
National Watercolor Society (USA)
Belgian Watercolor Institute
Painting
Painting techniques
Watermedia | Watercolor painting | [
"Chemistry"
] | 4,678 | [
"Paints",
"Coatings"
] |
180,079 | https://en.wikipedia.org/wiki/Karl%20Guthe%20Jansky | Karl Guthe Jansky (October 22, 1905 – February 14, 1950) was an American physicist and radio engineer who in April 1933 first announced his discovery of radio waves emanating from the Milky Way in the constellation Sagittarius. He is considered one of the founding figures of radio astronomy.
Early life
Karl Guthe Jansky was born 1905 in what was then the Territory of Oklahoma where his father, Cyril M. Jansky, was dean of the college of engineering at the University of Oklahoma at Norman. Cyril M. Jansky, born in Wisconsin of Czech immigrants, had started teaching at the age of sixteen. He was a teacher throughout his active life, retiring as professor of electrical engineering at the University of Wisconsin. He was an engineer with a strong interest in physics, a trait passed on to his sons. Karl Jansky was named after Dr. Karl Eugen Guthe, a professor of physics at the University of Michigan who had been an important mentor to Cyril M. Jansky.
Karl Jansky's mother, born Nellie Moreau, was of French and English descent. Karl's brother Cyril Jansky Jr., who was ten years older, helped build some of the earliest radio transmitters in the country, including 9XM in Wisconsin (now WHA of Wisconsin Public Radio) and 9XI in Minnesota (now KUOM).
Karl Jansky attended college at the University of Wisconsin where he received his BS in physics in 1927. He stayed an extra year at Madison, completing all the graduate course work for a master's degree in physics except for the thesis. In July 1928 at age 22, he was able to join the Bell Telephone Laboratories, and because of a kidney condition he had since college (which eventually led to his early death), he was sent to the healthier environs of the field station in Holmdel, New Jersey. Bell Labs wanted to investigate atmospheric and ionospheric properties using "short waves" (wavelengths of about 10–20 meters) for use in trans-Atlantic radio telephone service. As a radio engineer, Jansky was assigned the job of investigating sources of static that might interfere with radio voice transmissions.
Radio astronomy
At Bell Telephone Laboratories, Jansky built a directional antenna designed to receive radio waves at a frequency of 20.5 MHz (wavelength about 14.6 meters). It had a diameter of approximately 100 ft. (30 meters) and stood 20 ft. (6 meters) tall. It was mounted on top of a turntable on a set of four Ford Model-T wheels, which allowed it to be rotated in the azimuthal direction, earning it the nickname "Jansky's merry-go-round" (the cost of which was later estimated to be less than $1000). By rotating the antenna, the direction of a received signal could be pinpointed. The intensity of the signal was recorded by an analog pen-and-paper recording system housed in a small shed to the side of the antenna.
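The quoted wavelength follows directly from the operating frequency (λ = c/f); a quick arithmetic check, assuming nothing beyond the figures in the text:

```python
# Wavelength corresponding to Jansky's 20.5 MHz receiver: lambda = c / f
c = 2.998e8   # speed of light, m/s
f = 20.5e6    # receiver frequency, Hz
print(round(c / f, 1))  # 14.6 (metres)
```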
After recording signals from all directions for several months, Jansky eventually categorized them into three types of static: nearby thunderstorms, distant thunderstorms, and a faint static or "hiss" of unknown origin. He spent over a year investigating the source of the third type of static. The location of maximum intensity rose and fell once a day, leading Jansky to surmise initially that he was detecting radiation from the Sun.
After a few months of following the signal, however, the point of maximum static moved away from the position of the Sun. Jansky also determined that the signal repeated on a cycle of 23 hours and 56 minutes. Jansky discussed the puzzling phenomenon with his friend, the astrophysicist Albert Melvin Skellett, who pointed out that the observed time between the signal peaks was the exact length of a sidereal day, the time it takes for "fixed" astronomical objects, such as stars, to pass in front of the antenna once for every rotation of the Earth. By comparing his observations with optical astronomical maps, Jansky concluded that the radiation was coming from the Milky Way and was strongest (at 7:10 p.m. on September 16, 1932) in the direction of the center of the galaxy, in the constellation of Sagittarius.
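The 23-hour-56-minute repeat matches the sidereal day because the Earth's orbital motion adds roughly one extra rotation per year relative to the stars; a minimal arithmetic sketch of that reasoning, using round calendar values:

```python
# A sidereal day is shorter than the 24-hour solar day by roughly the ratio of
# solar rotations (~365.25/yr) to rotations relative to the stars (~366.25/yr).
solar_day_s = 24 * 3600
sidereal_day_s = solar_day_s * 365.25 / 366.25
hours, rem = divmod(sidereal_day_s, 3600)
minutes, seconds = divmod(rem, 60)
print(int(hours), int(minutes), round(seconds))  # 23 56 4
```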
Jansky announced his discovery at a meeting in Washington D.C. in April 1933 to a small audience who could not comprehend its significance. His discovery was widely publicized, appearing in the New York Times of May 5, 1933, and he was interviewed on a special NBC program on "Radio sounds from among the stars". In October 1933, his discovery was published in a journal article entitled "Electrical disturbances apparently of extraterrestrial origin" in the Proceedings of the Institute of Radio Engineers.
If the radio sources were from the stars, the Sun should also be producing radio noise, but Jansky found that it did not. In the early 1930s, the Sun was at an inactive phase in its sunspot cycle. In 1935 Jansky made the suggestion that the strange radio signals were produced from interstellar gas, in particular, by "thermal agitation of charged particles." Jansky accomplished these investigations while still in his twenties with a bachelor's degree in physics.
Jansky wanted to further investigate the Milky Way radio waves after 1935 (he called the radiation "Star Noise" in the thesis he submitted to earn his 1936 University of Wisconsin Masters degree), but he found little support from either astronomers, for whom it was completely foreign, or Bell Labs, which could not justify, during the Great Depression, the cost of research on a phenomenon that did not significantly affect trans-Atlantic communications systems.
Follow-up
Several scientists were interested in Jansky's discovery, but radio astronomy remained a dormant field for several years, due in part to Jansky's lack of formal training as an astronomer. His discovery had come in the midst of the Great Depression, and observatories were wary of taking on any new and potentially risky projects.
Two men who learned of Jansky's 1933 discovery were of great influence on the later development of the new study of radio astronomy: one was Grote Reber, a radio engineer who singlehandedly built a radio telescope in his Illinois back yard in 1937 and did the first systematic survey of astronomical radio waves. The second was John D. Kraus, who, after World War II, started a radio observatory at Ohio State University and wrote a textbook on radio astronomy, long considered a standard by radio astronomers.
Death and legacy
Jansky was a resident of Little Silver, New Jersey, and died at age 44 in a Red Bank, New Jersey, hospital (now called Riverview Medical Center) due to a heart condition.
In honor of Jansky, the unit used by radio astronomers for the spectral irradiance of radio sources is the jansky (1 Jy = 10^−26 W⋅m^−2⋅Hz^−1). The crater Jansky on the Moon is also named after him. The National Radio Astronomy Observatory (NRAO) postdoctoral fellowship program is named after Karl Jansky. NRAO awards the Jansky Prize annually in Jansky's honor. On January 10, 2012, the NRAO announced that the Very Large Array (VLA), the radio telescope near Magdalena, New Mexico, would be renamed the Karl G. Jansky Very Large Array in honor of Karl Jansky's contribution to radio astronomy.
A full-scale replica of Jansky's original rotating telescope is located on the grounds of the Green Bank Observatory (formerly an NRAO site) in Green Bank, West Virginia, near a reconstructed version of Grote Reber's 9-meter dish.
In 1998, the original site of Jansky's antenna at what is now the Bell Labs Holmdel Complex at 101 Crawfords Corner Road, Holmdel, New Jersey, was determined by Tony Tyson and Robert Wilson of Lucent Technologies (the successor of Bell Telephone Laboratories) and a monument and a plaque were placed there to honor the achievement. The monument is a stylized sculpture of the antenna and is oriented as Jansky's antenna was at 7:10 p.m. on September 16, 1932, at a moment of maximum signal caused by alignment with the center of our galaxy in the direction of the constellation Sagittarius.
Jansky noise is named after Jansky, and refers to high frequency static disturbances of cosmic origin. (Cosmic noise).
Asteroid 1932 Jansky is named after him, as is the lunar crater Jansky.
Selected writings
Reprinted 65 years later with an explanatory preface by W. A. Imbriale.
See also
Reber Radio Telescope
Astronomical radio source
Radio Astronomy
References
In particular, Chap. 1 by Sullivan, "Karl Jansky and the discovery of extraterrestrial radio waves," pp. 3–42.
In particular, Chap. 2.
External links
My Brother Karl Jansky and His Discovery of Radio Waves from Beyond the Earth
Serendipitous Discoveries in Radio Astronomy: Proceedings of a Workshop held at the National Radio Astronomy Observatory, Green Bank, West Virginia on May 4, 5, 6, 1983; Honoring the 50th Anniversary Announcing the Discovery of Cosmic Radio Waves by Karl G. Jansky on May 5, 1933. Edited by K. Kellermann and B. Sheets (1983) 321pp
Encyclopedia of Oklahoma History and Culture – Jansky, Karl
Detective Work Leads to Monument Honoring the Father of Radio Astronomy — Radio Astronomy Celebration at NOKIA Bell Labs
1905 births
1950 deaths
Amateur astronomers
American astronomers
20th-century American physicists
American people of Czech descent
American people of English descent
American people of French descent
Czech-American culture in Oklahoma
People from Little Silver, New Jersey
People from Norman, Oklahoma
Scientists at Bell Labs
Radio astronomers
American electrical engineers
Engineers from New Jersey
20th-century American engineers
Astronomical instrument makers | Karl Guthe Jansky | [
"Astronomy"
] | 1,999 | [
"Amateur astronomers",
"Astronomers",
"Astronomical instrument makers",
"Astronomical instruments"
] |
180,090 | https://en.wikipedia.org/wiki/Jansky | The jansky (symbol Jy, plural janskys) is a non-SI unit of spectral flux density, or spectral irradiance, used especially in radio astronomy. It is equivalent to 10^−26 watts per square metre per hertz.
The spectral flux density or monochromatic flux, S_ν, of a source is the integral of the spectral radiance, B_ν, over the source solid angle:

S_ν = ∬ B_ν(θ, φ) dΩ

The unit is named after pioneering US radio astronomer Karl Guthe Jansky and is defined as

1 Jy = 10^−26 W·m^−2·Hz^−1 = 10^−23 erg·s^−1·cm^−2·Hz^−1
Since the jansky is obtained by integrating over the whole source solid angle, it is most simply used to describe point sources; for example, the Third Cambridge Catalogue of Radio Sources (3C) reports results in janskys.
For extended sources, the surface brightness is often described with units of janskys per solid angle; for example, far-infrared (FIR) maps from the IRAS satellite are in megajanskys per steradian (MJy⋅sr^−1).
Although extended sources at all wavelengths can be reported with these units, for radio-frequency maps, extended sources have traditionally been described in terms of a brightness temperature; for example the Haslam et al. 408 MHz all-sky continuum survey is reported in terms of a brightness temperature in kelvin.
Unit conversions
The jansky is not a standard SI unit, so it may be necessary to convert measurements made in janskys to their SI equivalent in terms of watts per square metre per hertz (W·m^−2·Hz^−1). Several other conversions involving the unit are also useful.
AB magnitude
The flux density in janskys can be converted to a magnitude basis, for suitable assumptions about the spectrum. For instance, converting an AB magnitude to a flux density in microjanskys is straightforward:

S_ν [μJy] ≈ 10^((23.9 − m_AB)/2.5)

so that an AB magnitude of 23.9 corresponds to a flux density of about 1 μJy (with the AB zero point at 3631 Jy).
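A minimal sketch of this conversion in code, assuming the usual AB zero point of 3631 Jy; the function names are illustrative rather than from any particular library:

```python
import math

AB_ZERO_POINT_JY = 3631.0  # flux density corresponding to m_AB = 0

def ab_mag_to_microjansky(m_ab: float) -> float:
    """Convert an AB magnitude to a flux density in microjanskys."""
    flux_jy = AB_ZERO_POINT_JY * 10 ** (-0.4 * m_ab)
    return flux_jy * 1e6  # 1 Jy = 10^6 microjanskys

def microjansky_to_ab_mag(flux_ujy: float) -> float:
    """Convert a flux density in microjanskys back to an AB magnitude."""
    return -2.5 * math.log10(flux_ujy * 1e-6 / AB_ZERO_POINT_JY)

print(round(microjansky_to_ab_mag(1.0), 1))   # 23.9
print(round(ab_mag_to_microjansky(23.9), 2))  # ~1.0
```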
dBW·m^−2·Hz^−1
The linear flux density in janskys can be converted to a decibel basis, suitable for use in fields of telecommunication and radio engineering.
1 jansky is equal to −260 dBW·m^−2·Hz^−1, or −230 dBm·m^−2·Hz^−1:

10 log10(1 Jy / (1 W·m^−2·Hz^−1)) = 10 log10(10^−26) = −260 dBW·m^−2·Hz^−1 = −230 dBm·m^−2·Hz^−1
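The same relation written as small helpers; this assumes only the definition 1 Jy = 10^−26 W·m^−2·Hz^−1 and that dBm is referenced to 1 mW (the function names are illustrative):

```python
import math

def jansky_to_dbw(flux_jy: float) -> float:
    """Flux density in dB relative to 1 W·m^-2·Hz^-1."""
    flux_si = flux_jy * 1e-26          # convert janskys to W·m^-2·Hz^-1
    return 10.0 * math.log10(flux_si)

def jansky_to_dbm(flux_jy: float) -> float:
    """Flux density in dB relative to 1 mW·m^-2·Hz^-1 (30 dB offset)."""
    return jansky_to_dbw(flux_jy) + 30.0

print(round(jansky_to_dbw(1.0), 1))  # -260.0
print(round(jansky_to_dbm(1.0), 1))  # -230.0
```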
Temperature units
The spectral radiance in janskys per steradian can be converted to a brightness temperature, useful in radio and microwave astronomy.
Starting with Planck's law, we see

B_ν(T) = (2hν³/c²) · 1/(exp(hν/kT) − 1)

This can be solved for temperature, giving

T = (hν/k) / ln(1 + 2hν³/(c² B_ν))

In the low-frequency, high-temperature regime, when hν ≪ kT, we can use the asymptotic expression:

T ≈ c² B_ν/(2kν²) + hν/(2k)

A less accurate form is

T = c² B_ν/(2kν²)

which can be derived from the Rayleigh–Jeans law

B_ν = 2ν²kT/c²
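A sketch of the radiance-to-temperature conversion, using CODATA values for the physical constants; the function name and the example numbers are illustrative only:

```python
import math

H = 6.62607015e-34   # Planck constant, J·s
K_B = 1.380649e-23   # Boltzmann constant, J/K
C = 2.99792458e8     # speed of light, m/s

def brightness_temperature(radiance_jy_per_sr: float, freq_hz: float,
                           rayleigh_jeans: bool = False) -> float:
    """Convert a spectral radiance in Jy/sr to a brightness temperature in kelvin."""
    b_nu = radiance_jy_per_sr * 1e-26   # W·m^-2·Hz^-1·sr^-1
    if rayleigh_jeans:
        # Low-frequency approximation: T = c^2 B_nu / (2 k nu^2)
        return C**2 * b_nu / (2.0 * K_B * freq_hz**2)
    # Exact inversion of Planck's law: T = (h nu / k) / ln(1 + 2 h nu^3 / (c^2 B_nu))
    x = 2.0 * H * freq_hz**3 / (C**2 * b_nu)
    return (H * freq_hz / K_B) / math.log1p(x)

# At 408 MHz the two forms agree closely, since h*nu << k*T there
print(round(brightness_temperature(1e7, 408e6, rayleigh_jeans=True)))
print(round(brightness_temperature(1e7, 408e6)))
```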
Usage
The flux to which the jansky refers can be in any form of radiant energy.
It was created for and is still most frequently used in reference to electromagnetic energy, especially in the context of radio astronomy.
The brightest astronomical radio sources have flux densities of the order of 1–100 janskys. For example, the Third Cambridge Catalogue of Radio Sources lists some 300 to 400 radio sources in the Northern Hemisphere brighter than 9 Jy at 159 MHz. This range makes the jansky a suitable unit for radio astronomy.
Gravitational waves also carry energy, so their flux density can also be expressed in terms of janskys. Typical signals on Earth are expected to be 10^20 Jy or more. However, because of the poor coupling of gravitational waves to matter, such signals are difficult to detect.
When measuring broadband continuum emissions, where the energy is roughly evenly distributed across the detector bandwidth, the detected signal will increase in proportion to the bandwidth of the detector (as opposed to signals with bandwidth narrower than the detector bandpass). To calculate the flux density in janskys, the total power detected (in watts) is divided by the receiver collecting area (in square meters), and then divided by the detector bandwidth (in hertz). The flux density of astronomical sources is many orders of magnitude below 1 W·m^−2·Hz^−1, so the result is multiplied by 10^26 to get a more appropriate unit for natural astrophysical phenomena.
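The recipe in the preceding paragraph, written out as a function; the example numbers are hypothetical:

```python
def flux_density_jy(power_w: float, collecting_area_m2: float,
                    bandwidth_hz: float) -> float:
    """Broadband flux density in janskys implied by a detected power,
    following the division-by-area-and-bandwidth recipe described above."""
    flux_si = power_w / collecting_area_m2 / bandwidth_hz  # W·m^-2·Hz^-1
    return flux_si * 1e26                                  # convert to janskys

# Hypothetical example: 2e-18 W collected by a 100 m^2 aperture over a 1 MHz band
print(round(flux_density_jy(2e-18, 100.0, 1e6), 2))  # 2.0 Jy
```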
The millijansky, mJy, was sometimes referred to as a milli-flux unit (mfu) in older astronomical literature.
Orders of magnitude
Note: Unless noted, all values are as seen from the Earth's surface.
References
Radio astronomy
Units of measurement
Non-SI metric units
Units of measurement in astronomy | Jansky | [
"Astronomy",
"Mathematics"
] | 883 | [
"Astronomical sub-disciplines",
"Non-SI metric units",
"Quantity",
"Units of measurement in astronomy",
"Radio astronomy",
"Units of measurement"
] |
180,091 | https://en.wikipedia.org/wiki/Pediment | Pediments are a form of gable in classical architecture, usually of a triangular shape. Pediments are placed above the horizontal structure of the cornice (an elaborated lintel), or entablature if supported by columns. In ancient architecture, a wide and low triangular pediment (the side angles 12.5° to 16°) typically formed the top element of the portico of a Greek temple, a style continued in Roman temples. But large pediments were rare on other types of building before Renaissance architecture. For symmetric designs, it provides a center point and is often used to add grandness to entrances.
The cornice continues round the top of the pediment, as well as below it; the rising sides are often called the "raking cornice". The tympanum is the triangular area within the pediment, which is often decorated with a pedimental sculpture which may be freestanding or a relief sculpture. The tympanum may hold an inscription, or in modern times, a clock face.
The main variant shapes are the "segmental", "curved", or "arch" pediment, where the straight line triangle of the cornice is replaced by a curve making a segment of a circle, the broken pediment where the cornice has a gap at the apex, and the open pediment, with a gap in the cornice along the base. Both triangular and segmental pediments can have "broken" and "open" forms.
Pediments are found in ancient Greek architecture as early as 580 BC, in the archaic Temple of Artemis, Corfu, which was probably one of the first. Pediments return in Renaissance architecture and are then much used in later styles such as Baroque, Neoclassical, and Beaux-Arts architecture, which favoured the segmental variant.
Variant forms
A variant is the "segmental" or "arch" pediment, where the normal angular slopes of the cornice are replaced by one in the form of a segment of a circle, in the manner of a depressed arch. Both traditional and segmental pediments have "broken" and "open" forms. In the broken pediment the raking cornice is left open at the apex. The open pediment is open along the base, with a gap in the cornice for part or all of the space under the pediment.
All these forms were used in Hellenistic architecture, especially in Alexandria and the Middle East. The so-called "Treasury" or Al-Khazneh, a 1st-century rock-cut tomb in Petra, Jordan, is a famously extreme example, with not merely the pediment, but the whole entablature, very "broken" and retreating into the cliff face. Broken pediments where the gap is extremely wide in this way are often called "half-pediments".
They were adopted in Mannerist architecture, and applied to furniture designed by Thomas Chippendale. Another variant is the swan's neck pediment, a broken pediment with two S-shaped profiles resembling a swan's neck, typically volutes; this is mostly found in furniture rather than buildings. It was popular in American doorways from the 1760s onwards. Very often there is a vase-like ornament in the middle, between the volutes. Non-triangular variations of pediments are often found over doors, windows, niches, and porches.
History
Classical
The pediment is found in classical Greek temples, Etruscan, Roman, Renaissance, Baroque, Rococo, Neoclassical, and Beaux-Arts architecture. Greek temples, normally rectangular in plan, generally had a pediment at each end, but Roman temples, and subsequent revivals, often had only one, in both cases across the whole width of the main front or facade. The rear of the typical Roman temple was a blank wall, usually without columns, but often a full pediment above. This effectively divorced the pediment from the columns beneath it in the original temple front ensemble, and thereafter it was no longer considered necessary for a pediment to be above columns.
The most famous example of the Greek scheme is the Parthenon, with two tympanums filled with large groups of sculpted figures. An extreme but very influential example of the Roman style is the Pantheon, Rome, where a portico with pediment fronts a circular temple.
In ancient Rome, the Renaissance, and later architectural revivals, small pediments are a non-structural element over windows, doors, and aediculae, protecting windows and openings from rain, as well as being decorative. From the 5th century pediments also might appear on tombs and later non-architectural objects such as sarcophagi.
In the Hellenistic period pediments became used for a wider range of buildings, and treated much more freely, especially outside Greece itself. Broken and open pediments are used in a way that is often described as "baroque". The large 2nd-century Market Gate of Miletus, now reconstructed in the Pergamon Museum in Berlin, has a pediment that retreats in the centre, so appears both broken and open, a feature also seen at the Al-Khazneh (so-called "Treasury") tomb at Petra in modern Jordan. The broken pediments on each of the four sides of the Arch of Septimius Severus at Leptis Magna in Libya are very small elements, raking at an extremely steep angle, but not extending beyond the entablature for the columns below. There are two faces to each pediment, both carved, with one lying parallel to the wall of the monument, and the other at right angles to that.
The Arch of Augustus in Rimini, Italy (27 BC), an early imperial monument, suggests that at this stage provincial Roman architects were not well practiced in the classical vocabulary; the base of the pediment ends close to, but not over, the capitals of the columns. Here the whole temple front is decoration applied to a very solid wall, but the lack of respect for the conventions of Greek trabeated architecture remains rather disconcerting.
Conventional Roman pediments have a slightly steeper pitch than classical Greek ones, perhaps because they ended tiled roofs that received heavier rainfall.
Medieval
In Carolingian and Romanesque architecture pediments tended towards the equilateral triangle, and the enclosing cornice has little emphasis; they are often merely gable ends with some ornament. In Gothic architecture pediments with a much more acute angle at the top were used, especially over doorways and windows, but while the rising sides of the cornice is elaborate, the horizontal bottom element was typically not very distinct. Often there is a pointed arch underneath, and no bottom element at all. "Pediment" is typically not used for these; they are often called a "canopy". From the Renaissance onwards, some pediments no longer fitted the steeply pitched roofs and became freestanding, sometimes sloping in the opposite direction to the roof behind.
Renaissance, Baroque and Rococo
When classical-style low triangular pediments returned in Italian Renaissance architecture, they were initially mostly used to top a relatively flat facade, with engaged elements rather than freestanding porticos supported by columns. Leon Battista Alberti used them in this way in his churches: the Tempio Malatestiano (1450s, incomplete), Santa Maria Novella (to 1470), San Sebastiano in Mantua (unfinished by the 1470s), Sant'Andrea, Mantua (begun 1472), and Pienza Cathedral, where the design was probably his. Here the cornice comes out and then retreats back, forming the top of pilasters with no capitals, a very unclassical note, which was to become much used.
In most of these, Alberti followed classical precedent by having the pediment occupy the whole width of the facade, or at least that part that projects outwards. Santa Maria Novella and Sant'Agostino, Rome (1483, by Giacomo di Pietrasanta, perhaps designed by Alberti) were early examples of what was to become a very common scheme, where the pediment at the top of the facade was much less wide, forming a third zone above a middle zone that transitioned the width from that of the bottom. The giant curving volute or scroll used at the sides of the middle zone at Sant'Agostino was to be a very common feature over the next two centuries. As in Gothic architecture, this often reflected the shapes of the roofs behind, where the nave was higher than the side-aisles.
Sant'Agostino also has a low, squashed down pediment at the top of the full-width section. This theme was developed by Andrea Palladio in the next century. The main facade of his San Giorgio Maggiore in Venice (begun 1566) has "two interpenetrating temple fronts", a wider one being overlaid with a narrower and higher one, respectively following the roof lines of the aisles and nave. Several of Palladio's villas also introduced the pediment to country house architecture, which was to be become extremely common in English Palladian architecture. In cities, Palladio reserved the temple front for churches, but in the Baroque, and especially outside Italy, this distinction was abandoned.
The first use of pediments over windows in the Renaissance was on the Palazzo Bartolini Salimbeni in Florence, completed in 1523 by Baccio d'Agnolo. Vasari says the innovation caused ridicule initially, but later came to be admired and widely adopted. Baccio was accused of turning a palazzo into a church. Three windows on each of three storeys (and the door) alternate regular and segmental pediments; there is no pediment at the top of the facade, just a large cornice, as was usual.
In St Peter's Basilica there is a conventional pediment over the main entrance, but the complicated facade stretches beyond it to both sides and above, and though large in absolute terms it makes a relatively small impression. Many later buildings used a temple front with pediment as a highlight of a much wider building. The St Peter's facade also has many small pedimented windows and aedicular niches, using a mixture of segmental, broken, and open pediments.
Variations using multiple pediments became very popular in Baroque architecture, and the central vertical line of church facades often ascended through several pediments of different sizes and shapes, in Rome five at the Church of the Gesù (Giacomo della Porta 1584) and six at Santi Vincenzo e Anastasio a Trevi (Martino Longhi the Younger, 1646), the top three folding into each other, using the same base line. This facade has been described as "a veritable symphony in repetitious pedimentry, bringing together a superimposed array of broken pediments, open pediments and arched pediments". The Gesù is the home church of the Jesuit order, who favoured this style, which was first seen in many cities around Europe in a new main Jesuit church.
From 1750 to Art Deco
Pediments became extremely common on the main facades of English country houses, and many across northern Europe; these might be placed over a porch with columns, or simply decorations to an essentially flat facade. In England, if there was any sculpture within the tympanum, it was often restricted to a coat of arms.
Neoclassical architecture returned to "purer" classical models mostly using conventional triangular pediments, often over a portico with columns. Large schemes of pedimental sculpture were used where the budget allowed. In 19th-century styles, freer treatments returned, and large segmental pediments were especially popular in eclectic styles such as Beaux-Arts architecture, often overwhelmed by sculpture within, above, and to the sides.
Large pediments with columns, often called the "temple front", became widely used for important public buildings such as stock exchanges, reserve banks, law courts, legislatures, and museums, where an impression of solidity, reliability, and respectability was desired.
Postmodern reinterpretations
Postmodernism, a movement that questioned Modernism (the status quo after WW2), promoted the inclusion of elements of historic styles in new designs. An early text questioning Modernism was by architect Robert Venturi, Complexity and Contradiction in Architecture (1966), in which he recommended a revival of the 'presence of the past' in architectural design. He tried to include in his own buildings qualities that he described as 'inclusion, inconsistency, compromise, accommodation, adaptation, superadjacency, equivalence, multiple focus, juxtaposition, or good and bad space.'
Venturi encouraged 'quotation', which means reusing elements of the past in new designs. Part manifesto, part architectural scrapbook accumulated over the previous decade, the book represented the vision for a new generation of architects and designers who had grown up with Modernism but who felt increasingly constrained by its perceived rigidities. Multiple Postmodern architects and designers put simplified reinterpretations of the pediment found in Classical decoration at the top of their creations. As with other elements and ornaments taken from styles of the pre-Modern past, they were in most cases highly simplified. Especially when it comes to office architecture, Postmodernism was only skin deep; the underlying structure was usually very similar, if not identical, to that of Modernist buildings.
In 1984 Philip Johnson designed what is now called 550 Madison Avenue in New York City (formerly known as the Sony Tower, Sony Plaza, and AT&T Building), a famous work of Post-Modern architecture, where a broken pediment at the top of a typical skyscraper wittily evokes a Thomas Chippendale-style tallboy at a massive scale. Marco Polo House in London (1989, now demolished) was similar.
See also
Pedimental sculptures in Canada
Pedimental sculptures in the United States
Notes
References
Furman, Adam Nathaniel , "Seven broken pediments", 14 July 2014, The RIBA Journal blog
Lawrence, A. W., Greek Architecture, 1957, Penguin, Pelican history of art
Summerson, John, The Classical Language of Architecture, 1980 edition, Thames and Hudson World of Art series
Yarwood, Doreen, The Architecture of Europe, 1987 (first edn. 1974), Spring Books
Ancient Roman architectural elements
Architectural elements
Columns and entablature | Pediment | [
"Technology",
"Engineering"
] | 2,986 | [
"Building engineering",
"Structural system",
"Architectural elements",
"Columns and entablature",
"Components",
"Architecture"
] |
180,121 | https://en.wikipedia.org/wiki/Medication | A medication (also called medicament, medicine, pharmaceutical drug, medicinal product, medicinal drug or simply drug) is a drug used to diagnose, cure, treat, or prevent disease. Drug therapy (pharmacotherapy) is an important part of the medical field and relies on the science of pharmacology for continual advancement and on pharmacy for appropriate management.
Drugs are classified in many ways. One of the key divisions is by level of control, which distinguishes prescription drugs (those that a pharmacist dispenses only on a medical prescription) from over-the-counter drugs (those that consumers can order for themselves). Medicines may be classified by mode of action, route of administration, biological system affected, or therapeutic effects. The World Health Organization keeps a list of essential medicines.
Drug discovery and drug development are complex and expensive endeavors undertaken by pharmaceutical companies, academic scientists, and governments. As a result of this complex path from discovery to commercialization, partnering has become a standard practice for advancing drug candidates through development pipelines. Governments generally regulate what drugs can be marketed, how drugs are marketed, and in some jurisdictions, drug pricing. Controversies have arisen over drug pricing and disposal of used medications.
Definition
Medication is a medicine or a chemical compound used to treat or cure illness. According to Encyclopædia Britannica, medication is "a substance used in treating a disease or relieving pain".
As defined by the National Cancer Institute, dosage forms of medication can include tablets, capsules, liquids, creams, and patches. Medications can be administered in different ways, such as by mouth, by infusion into a vein, or by drops put into the ear or eye. A medication that does not contain an active ingredient and is used in research studies is called a placebo.
In Europe, the term is "medicinal product", and it is defined by EU law as:
"Any substance or combination of substances presented as having properties for treating or preventing disease in human beings; or"
"Any substance or combination of substances which may be used in or administered to human beings either with a view to restoring, correcting, or modifying physiological functions by exerting a pharmacological, immunological or metabolic action or to making a medical diagnosis."
In the US, a "drug" is:
A substance (other than food) intended to affect the structure or any function of the body.
A substance intended for use as a component of a medicine but not a device or a component, part, or accessory of a device.
A substance intended for use in the diagnosis, cure, mitigation, treatment, or prevention of disease.
A substance recognized by an official pharmacopeia or formulary.
Biological products are included within this definition and are generally covered by the same laws and regulations, but differences exist regarding their manufacturing processes (chemical process versus biological process).
Usage
Drug use among elderly Americans has been studied; in a group of 2,377 people with an average age of 71 surveyed between 2005 and 2006, 84% took at least one prescription drug, 44% took at least one over-the-counter (OTC) drug, and 52% took at least one dietary supplement; in a group of 2,245 elderly Americans (average age of 71) surveyed over the period 2010–2011, those percentages were 88%, 38%, and 64%.
Classification
One of the key classifications is between traditional small molecule drugs, usually derived from chemical synthesis, and biological medical products, which include recombinant proteins, vaccines, blood products used therapeutically (such as IVIG), gene therapy, and cell therapy (for instance, stem cell therapies).
Besides their origin, pharmaceuticals are classified into various other groups on the basis of pharmacological properties, such as mode of action and pharmacological activity, chemical properties, mode or route of administration, biological system affected, or therapeutic effects. An elaborate and widely used classification system is the Anatomical Therapeutic Chemical Classification System (ATC system). The World Health Organization keeps a list of essential medicines.
A sampling of classes of medicine includes:
Antipyretics: reducing fever (pyrexia/pyresis)
Analgesics: reducing pain (painkillers)
Antimalarial drugs: treating malaria
Antibiotics: inhibiting germ growth
Antiseptics: prevention of germ growth near burns, cuts, and wounds
Mood stabilizers: lithium and valproate
Hormone replacements: Premarin
Oral contraceptives: Enovid, "biphasic" pill, and "triphasic" pill
Stimulants: methylphenidate, amphetamine
Tranquilizers: meprobamate, chlorpromazine, reserpine, chlordiazepoxide, diazepam, and alprazolam
Statins: lovastatin, pravastatin, and simvastatin
Pharmaceuticals may also be described as "specialty", independent of other classifications; this is an ill-defined class of drugs that might be difficult to administer, require special handling during administration, require patient monitoring during and immediately after administration, have particular regulatory requirements restricting their use, and are generally expensive relative to other drugs.
Types of medicines
For the digestive system
Lower digestive tract: laxatives, antispasmodics, antidiarrhoeals, bile acid sequestrants, opioids.
Upper digestive tract: antacids, reflux suppressants, antiflatulents, antidopaminergics, proton pump inhibitors (PPIs), H2-receptor antagonists, cytoprotectants, prostaglandin analogues.
For the cardiovascular system
Affecting blood pressure (antihypertensive drugs): ACE inhibitors, angiotensin receptor blockers, beta-blockers, α blockers, calcium channel blockers, thiazide diuretics, loop diuretics, aldosterone inhibitors.
Coagulation: anticoagulants, heparin, antiplatelet drugs, fibrinolytics, anti-hemophilic factors, haemostatic drugs.
General: β-receptor blockers ("beta blockers"), calcium channel blockers, diuretics, cardiac glycosides, antiarrhythmics, nitrate, antianginals, vasoconstrictors, vasodilators.
HMG-CoA reductase inhibitors (statins) for lowering LDL cholesterol: hypolipidaemic agents.
For the central nervous system
Drugs affecting the central nervous system include psychedelics, hypnotics, anaesthetics, antipsychotics, eugeroics, antidepressants (including tricyclic antidepressants, monoamine oxidase inhibitors, lithium salts, and selective serotonin reuptake inhibitors (SSRIs)), antiemetics, anticonvulsants/antiepileptics, anxiolytics, barbiturates, movement disorder (e.g., Parkinson's disease) drugs, nootropics, stimulants (including amphetamines), benzodiazepines, cyclopyrrolones, dopamine antagonists, antihistamines, cholinergics, anticholinergics, emetics, cannabinoids, and 5-HT (serotonin) antagonists.
For pain
The main classes of painkillers are NSAIDs, opioids, and local anesthetics.
For consciousness (anesthetic drugs)
Some anesthetics include benzodiazepines and barbiturates.
For musculoskeletal disorders
The main categories of drugs for musculoskeletal disorders are: NSAIDs (including COX-2 selective inhibitors), muscle relaxants, neuromuscular drugs, and anticholinesterases.
For the eye
Anti-allergy: mast cell inhibitors.
Anti-fungal: imidazoles, polyenes.
Anti-glaucoma: adrenergic agonists, beta-blockers, carbonic anhydrase inhibitors/hyperosmotics, cholinergics, miotics, parasympathomimetics, prostaglandin agonists/prostaglandin inhibitors, nitroglycerin.
Anti-inflammatory: NSAIDs, corticosteroids.
Antibacterial: antibiotics, topical antibiotics, sulfa drugs, aminoglycosides, fluoroquinolones.
Antiviral drugs.
Diagnostic: topical anesthetics, sympathomimetics, parasympatholytics, mydriatics, cycloplegics.
General: adrenergic neurone blocker, astringent.
For the ear, nose, and oropharynx
Antibiotics, sympathomimetics, antihistamines, anticholinergics, NSAIDs, corticosteroids, antiseptics, local anesthetics, antifungals, and cerumenolytics.
For the respiratory system
Bronchodilators, antitussives, mucolytics, decongestants, inhaled and systemic corticosteroids, beta2-adrenergic agonists, anticholinergics, mast cell stabilizers, leukotriene antagonists.
For endocrine problems
Androgens, antiandrogens, estrogens, gonadotropin, corticosteroids, human growth hormone, insulin, antidiabetics (sulfonylureas, biguanides/metformin, thiazolidinediones, insulin), thyroid hormones, antithyroid drugs, calcitonin, diphosphonate, vasopressin analogues.
For the reproductive system or urinary system
Antifungal, alkalinizing agents, quinolones, antibiotics, cholinergics, anticholinergics, antispasmodics, 5-alpha reductase inhibitor, selective alpha-1 blockers, sildenafils, fertility medications.
For contraception
Hormonal contraception.
Ormeloxifene.
Spermicide.
For obstetrics and gynecology
NSAIDs, anticholinergics, haemostatic drugs, antifibrinolytics, Hormone Replacement Therapy (HRT), bone regulators, beta-receptor agonists, follicle stimulating hormone, luteinising hormone, LHRH, gamolenic acid, gonadotropin release inhibitor, progestogen, dopamine agonists, oestrogen, prostaglandins, gonadorelin, clomiphene, tamoxifen, diethylstilbestrol.
For the skin
Emollients, anti-pruritics, antifungals, antiseptics, scabicides, pediculicides, tar products, vitamin A derivatives, vitamin D analogues, keratolytics, abrasives, systemic antibiotics, topical antibiotics, hormones, desloughing agents, exudate absorbents, fibrinolytics, proteolytics, sunscreens, antiperspirants, corticosteroids, immune modulators.
For infections and infestations
Antibiotics, antifungals, antileprotics, antituberculous drugs, antimalarials, anthelmintics, amoebicides, antivirals, antiprotozoals, probiotics, prebiotics, antitoxins, and antivenoms.
For the immune system
Vaccines, immunoglobulins, immunosuppressants, interferons, and monoclonal antibodies.
For allergic disorders
Anti-allergics, antihistamines, NSAIDs, corticosteroids.
For nutrition
Tonics, electrolytes and mineral preparations (including iron preparations and magnesium preparations), parenteral nutrition, vitamins, anti-obesity drugs, anabolic drugs, haematopoietic drugs, food product drugs.
For neoplastic disorders
Cytotoxic drugs, therapeutic antibodies, sex hormones, aromatase inhibitors, somatostatin inhibitors, recombinant interleukins, G-CSF, erythropoietin.
For diagnostics
Contrast media.
For euthanasia
A euthanaticum is used for euthanasia and physician-assisted suicide. Euthanasia is not permitted by law in many countries, and consequently, medicines will not be licensed for this use in those countries.
Administration
A single drug may contain single or multiple active ingredients.
Administration is the process by which a patient takes a medicine. There are three major categories of drug administration: enteral (via the human gastrointestinal tract), injection into the body, and by other routes (dermal, nasal, ophthalmic, otologic, and urogenital).
Oral administration, the most common form of enteral administration, can be performed using various dosage forms including tablets or capsules and liquid such as syrup or suspension. Other ways to take the medication include buccally (placed inside the cheek), sublingually (placed underneath the tongue), eye and ear drops (dropped into the eye or ear), and transdermally (applied to the skin).
They can be administered in one dose, as a bolus. Administration frequencies are often abbreviated from Latin; for example, "every 8 hours" is written Q8H, from quaque VIII hora. Drug frequencies are often expressed as the number of times a drug is used per day (e.g., four times a day). A regimen may include event-related information (e.g., 1 hour before meals, in the morning, at bedtime) or be complementary to an interval, although equivalent expressions may have different implications (e.g., every 8 hours versus 3 times a day).
Drug discovery
In the fields of medicine, biotechnology, and pharmacology, drug discovery is the process by which new drugs are discovered.
Historically, drugs were discovered by identifying the active ingredient from traditional remedies or by serendipitous discovery. Later, chemical libraries of synthetic small molecules, natural products, or extracts were screened in intact cells or whole organisms to identify substances that have a desirable therapeutic effect in a process known as classical pharmacology. Since the sequencing of the human genome, which allowed rapid cloning and synthesis of large quantities of purified proteins, it has become common practice to use high throughput screening of large compound libraries against isolated biological targets which are hypothesized to be disease-modifying in a process known as reverse pharmacology. Hits from these screens are then tested in cells and then in animals for efficacy. Even more recently, scientists have been able to understand the shape of biological molecules at the atomic level and to use that knowledge to design (see drug design) drug candidates.
Modern drug discovery involves the identification of screening hits, medicinal chemistry, and optimization of those hits to increase the affinity, selectivity (to reduce the potential of side effects), efficacy/potency, metabolic stability (to increase the half-life), and oral bioavailability. Once a compound that fulfills all of these requirements has been identified, it will begin the process of drug development prior to clinical trials. One or more of these steps may, but not necessarily, involve computer-aided drug design.
Despite advances in technology and understanding of biological systems, drug discovery is still a lengthy, "expensive, difficult, and inefficient process" with a low rate of new therapeutic discovery. In 2010, the research and development cost of each new molecular entity (NME) was approximately US$1.8 billion. Drug discovery is done by pharmaceutical companies, sometimes with research assistance from universities. The "final product" of drug discovery is a patent on the potential drug. The drug requires very expensive Phase I, II, and III clinical trials, and most of them fail. Small companies have a critical role, often then selling the rights to larger companies that have the resources to run the clinical trials.
Drug discovery is different from drug development. Drug discovery is the process of identifying a new medicine, whereas drug development is the process of delivering a new drug molecule into clinical practice. In its broad definition, drug development encompasses all steps from the basic research of finding a suitable molecular target to supporting the drug's commercial launch.
Development
Drug development is the process of bringing a new drug to the market once a lead compound has been identified through the process of drug discovery. It includes pre-clinical research (microorganisms/animals) and clinical trials (on humans) and may include the step of obtaining regulatory approval to market the drug.
Drug Development Process
Discovery: The drug development process starts with discovery, the process of identifying a new medicine.
Development: Chemicals extracted from natural products are used to make pills, capsules, or syrups for oral use; injections for direct infusion into the blood; or drops for the eyes or ears.
Preclinical research: Drugs undergo laboratory or animal testing to ensure that they can be used in humans.
Clinical testing: The drug is used on people to confirm that it is safe to use.
FDA review: The drug is submitted to the FDA for review before it is launched on the market.
FDA post-market review: The drug is reviewed and monitored by the FDA for safety once it is available to the public.
Regulation
The regulation of drugs varies by jurisdiction. In some countries, such as the United States, they are regulated at the national level by a single agency. In other jurisdictions, they are regulated at the state level, or at both state and national levels by various bodies, as is the case in Australia. The role of therapeutic goods regulation is designed mainly to protect the health and safety of the population. Regulation is aimed at ensuring the safety, quality, and efficacy of the therapeutic goods which are covered under the scope of the regulation. In most jurisdictions, therapeutic goods must be registered before they are allowed to be marketed. There is usually some degree of restriction on the availability of certain therapeutic goods depending on their risk to consumers.
Depending upon the jurisdiction, drugs may be divided into over-the-counter drugs (OTC) which may be available without special restrictions, and prescription drugs, which must be prescribed by a licensed medical practitioner in accordance with medical guidelines due to the risk of adverse effects and contraindications. The precise distinction between OTC and prescription depends on the legal jurisdiction. A third category, "behind-the-counter" drugs, is implemented in some jurisdictions. These do not require a prescription, but must be kept in the dispensary, not visible to the public, and be sold only by a pharmacist or pharmacy technician. Doctors may also prescribe prescription drugs for off-label use – purposes which the drugs were not originally approved for by the regulatory agency. The Classification of Pharmaco-Therapeutic Referrals helps guide the referral process between pharmacists and doctors.
The International Narcotics Control Board of the United Nations imposes a worldwide prohibition on certain drugs. It publishes a lengthy list of chemicals and plants whose trade and consumption (where applicable) are forbidden. OTC drugs are sold without restriction as they are considered safe enough that most people will not hurt themselves accidentally by taking them as instructed. Many countries, such as the United Kingdom, have a third category of "pharmacy medicines", which can be sold only in registered pharmacies by or under the supervision of a pharmacist.
Medical errors include over-prescription and polypharmacy, mis-prescription, contraindication, and lack of detail in dosage and administration instructions. In 2000 the definition of a prescription error was studied at a conference using the Delphi method; the conference was motivated by ambiguity about what a prescription error is and the need for a uniform definition in studies.
Drug pricing
In many jurisdictions, drug prices are regulated.
United Kingdom
In the UK, the Pharmaceutical Price Regulation Scheme is intended to ensure that the National Health Service is able to purchase drugs at reasonable prices. The prices are negotiated between the Department of Health, acting with the authority of Northern Ireland and the UK Government, and the representative body of the pharmaceutical industry, the Association of the British Pharmaceutical Industry (ABPI). For 2017 this payment percentage set by the PPRS will be 4.75%.
Canada
In Canada, the Patented Medicine Prices Review Board examines drug pricing and determines whether a price is excessive. In these circumstances, drug manufacturers must submit a proposed price to the appropriate regulatory agency. Furthermore, "the International Therapeutic Class Comparison Test is responsible for comparing the National Average Transaction Price of the patented drug product under review"; the countries to which the prices are compared are France, Germany, Italy, Sweden, Switzerland, the United Kingdom, and the United States.
Brazil
In Brazil, prices have been regulated since 1999 through legislation under the name of Medicamento Genérico (generic drugs).
India
In India, drug prices are regulated by the National Pharmaceutical Pricing Authority.
United States
In the United States, drug costs are largely unregulated and are instead the result of negotiations between drug companies and insurance companies.
High prices have been attributed to monopolies given to manufacturers by the government. New drug development costs continue to rise as well. Despite the enormous advances in science and technology, the number of new blockbuster drugs approved by the government per billion dollars spent has halved every 9 years since 1950.
Blockbuster drug
A blockbuster drug is a drug that generates more than $1 billion in revenue for a pharmaceutical company in a single year. Cimetidine was the first drug ever to reach more than $1 billion a year in sales, thus making it the first blockbuster drug.
History
Prescription drug history
Antibiotics first arrived on the medical scene in 1932, thanks to Gerhard Domagk, and were dubbed the "wonder drugs". The introduction of the sulfa drugs caused the mortality rate from pneumonia in the U.S. to drop from 0.2% each year to 0.05% (one-quarter as much) by 1939. Antibiotics inhibit the growth or the metabolic activities of bacteria and other microorganisms by a chemical substance of microbial origin. Penicillin, introduced a few years later, provided a broader spectrum of activity compared to sulfa drugs and reduced side effects. Streptomycin, found in 1942, proved to be the first drug effective against the cause of tuberculosis and also came to be the best known of a long series of important antibiotics. A second generation of antibiotics was introduced in the 1940s: aureomycin and chloramphenicol. Aureomycin was the best known of the second generation.
Lithium was discovered in the 19th century for nervous disorders and its possible mood-stabilizing or prophylactic effect; it was cheap and easily produced. As lithium fell out of favor in France, valpromide came into play. This anticonvulsant was the origin of the drug that eventually created the mood stabilizer category. Valpromide had distinct psychotropic effects that were of benefit both in the treatment of acute manic states and in the maintenance treatment of manic-depressive illness. Psychotropics can either be sedative or stimulant; sedatives aim at damping down the extremes of behavior, while stimulants aim at restoring normality by increasing tone. Soon arose the notion of a tranquilizer, which was quite different from any sedative or stimulant. The term tranquilizer took over the notions of sedatives and became the dominant term in the West through the 1980s. In Japan, during this time, the term tranquilizer produced the notion of a psyche-stabilizer and the term mood stabilizer vanished.
Premarin (conjugated estrogens, introduced in 1942) and Prempro (a combination estrogen-progestin pill, introduced in 1995) dominated hormone replacement therapy (HRT) during the 1990s. HRT is not a life-saving drug, nor does it cure any disease. HRT has been prescribed to improve one's quality of life. Doctors prescribe estrogen for their older female patients both to treat short-term menopausal symptoms and to prevent long-term diseases. In the 1960s and early 1970s, more and more physicians began to prescribe estrogen for their female patients. Between 1991 and 1999, Premarin was listed as the most popular prescription and best-selling drug in America.
The first oral contraceptive, Enovid, was approved by FDA in 1960. Oral contraceptives inhibit ovulation and so prevent conception. Enovid was known to be much more effective than alternatives including the condom and the diaphragm. As early as 1960, oral contraceptives were available in several different strengths by every manufacturer. In the 1980s and 1990s, an increasing number of options arose including, most recently, a new delivery system for the oral contraceptive via a transdermal patch. In 1982, a new version of "the pill" was introduced, known as the biphasic pill. By 1985, a new triphasic pill was approved. Physicians began to think of "the pill" as an excellent means of birth control for young women.
Stimulants such as Ritalin (methylphenidate) came to be pervasive tools for behavior management and modification in young children. Ritalin was first marketed in 1955 for narcolepsy; its potential users were middle-aged and the elderly. It was not until some time in the 1980s that Ritalin came onto the market as a treatment for hyperactivity in children. Medical use of methylphenidate is predominantly for symptoms of attention deficit hyperactivity disorder (ADHD). Consumption of methylphenidate in the U.S. out-paced all other countries between 1991 and 1999. Significant growth in consumption was also evident in Canada, New Zealand, Australia, and Norway. Currently, 85% of the world's methylphenidate is consumed in America.
The first minor tranquilizer was meprobamate. Only fourteen months after it was made available, meprobamate had become the country's largest-selling prescription drug. By 1957, meprobamate had become the fastest-growing drug in history. The popularity of meprobamate paved the way for Librium and Valium, two minor tranquilizers that belonged to a new chemical class of drugs called the benzodiazepines. These were drugs that worked chiefly as anti-anxiety agents and muscle relaxants. The first benzodiazepine was Librium. Three months after it was approved, Librium had become the most prescribed tranquilizer in the nation. Three years later, Valium hit the shelves and was ten times more effective as a muscle relaxant and anti-convulsant. Valium was the most versatile of the minor tranquilizers. Later came the widespread adoption of major tranquilizers such as chlorpromazine and the drug reserpine. In 1970, sales began to decline for Valium and Librium, but sales of new and improved tranquilizers, such as Xanax, introduced in 1981 for the newly created diagnosis of panic disorder, soared.
Mevacor (lovastatin) is the first and most influential statin in the American market. The 1991 launch of Pravachol (pravastatin), the second available in the United States, and the release of Zocor (simvastatin) made Mevacor no longer the only statin on the market.
In 1998, Viagra was released as a treatment for erectile dysfunction.
Ancient pharmacology
Using plants and plant substances to treat all kinds of diseases and medical conditions is believed to date back to prehistoric medicine.
The Kahun Gynaecological Papyrus, the oldest known medical text of any kind, dates to about 1800 BC and represents the first documented use of any kind of drug. It and other medical papyri describe Ancient Egyptian medical practices, such as using honey to treat infections and the legs of bee-eaters to treat neck pains.
Ancient Babylonian medicine demonstrated the use of medication in the first half of the 2nd millennium BC. Medicinal creams and pills were employed as treatments.
On the Indian subcontinent, the Atharvaveda, a sacred text of Hinduism whose core dates from the second millennium BC, although the hymns recorded in it are believed to be older, is the first Indic text dealing with medicine. It describes plant-based drugs to counter diseases. The earliest foundations of ayurveda were built on a synthesis of selected ancient herbal practices, together with a massive addition of theoretical conceptualizations, new nosologies and new therapies dating from about 400 BC onwards. The student of Āyurveda was expected to know ten arts that were indispensable in the preparation and application of his medicines: distillation, operative skills, cooking, horticulture, metallurgy, sugar manufacture, pharmacy, analysis and separation of minerals, compounding of metals, and preparation of alkalis.
The Hippocratic Oath for physicians, attributed to fifth century BC Greece, refers to the existence of "deadly drugs", and ancient Greek physicians imported drugs from Egypt and elsewhere. The pharmacopoeia written between 50 and 70 CE by the Greek physician Pedanius Dioscorides was widely read for more than 1,500 years.
Medieval pharmacology
Al-Kindi's ninth century AD book, De Gradibus and Ibn Sina (Avicenna)'s The Canon of Medicine, covers a range of drugs known to the practice of medicine in the medieval Islamic world.
Medieval medicine of Western Europe saw advances in surgery compared to previously, but few truly effective drugs existed, beyond opium (found in such extremely popular drugs as the "Great Rest" of the Antidotarium Nicolai at the time) and quinine. Folklore cures and potentially poisonous metal-based compounds were popular treatments. Theodoric Borgognoni (1205–1296), one of the most significant surgeons of the medieval period, was responsible for introducing and promoting important surgical advances, including basic antiseptic practice and the use of anaesthetics. Garcia de Orta described some herbal treatments that were used.
Modern pharmacology
For most of the 19th century, drugs were not highly effective, leading Oliver Wendell Holmes Sr. to famously comment in 1842 that "if all medicines in the world were thrown into the sea, it would be all the better for mankind and all the worse for the fishes".
During the First World War, Alexis Carrel and Henry Dakin developed the Carrel-Dakin method of treating wounds with an irrigation, Dakin's solution, a germicide which helped prevent gangrene.
In the inter-war period, the first anti-bacterial agents such as the sulpha antibiotics were developed. The Second World War saw the introduction of widespread and effective antimicrobial therapy with the development and mass production of penicillin antibiotics, made possible by the pressures of the war and the collaboration of British scientists with the American pharmaceutical industry.
Medicines commonly used by the late 1920s included aspirin, codeine, and morphine for pain; digitalis, nitroglycerin, and quinine for heart disorders, and insulin for diabetes. Other drugs included antitoxins, a few biological vaccines, and a few synthetic drugs. In the 1930s, antibiotics emerged: first sulfa drugs, then penicillin and other antibiotics. Drugs increasingly became "the center of medical practice". In the 1950s, other drugs emerged including corticosteroids for inflammation, rauvolfia alkaloids as tranquilizers and antihypertensives, antihistamines for nasal allergies, xanthines for asthma, and typical antipsychotics for psychosis. As of 2007, thousands of approved drugs have been developed. Increasingly, biotechnology is used to discover biopharmaceuticals. Recently, multi-disciplinary approaches have yielded a wealth of new data on the development of novel antibiotics and antibacterials and on the use of biological agents for antibacterial therapy.
In the 1950s, new psychiatric drugs, notably the antipsychotic chlorpromazine, were designed in laboratories and slowly came into preferred use. Although often accepted as an advance in some ways, there was some opposition, due to serious adverse effects such as tardive dyskinesia. Patients often opposed psychiatry and refused or stopped taking the drugs when not subject to psychiatric control.
Governments have been heavily involved in the regulation of drug development and drug sales. In the U.S., the Elixir Sulfanilamide disaster led to the establishment of the Food and Drug Administration, and the 1938 Federal Food, Drug, and Cosmetic Act required manufacturers to file new drugs with the FDA. The 1951 Durham-Humphrey Amendment required certain drugs to be sold by prescription. In 1962, a subsequent amendment required new drugs to be tested for efficacy and safety in clinical trials.
Until the 1970s, drug prices were not a major concern for doctors and patients. As more drugs became prescribed for chronic illnesses, however, costs became burdensome, and by the 1970s nearly every U.S. state required or encouraged the substitution of generic drugs for higher-priced brand names. This also led to the 2006 U.S. law, Medicare Part D, which offers Medicare coverage for drugs.
As of 2008, the United States is the leader in medical research, including pharmaceutical development. U.S. drug prices are among the highest in the world, and drug innovation is correspondingly high. In 2000, U.S.-based firms developed 29 of the 75 top-selling drugs; firms from the second-largest market, Japan, developed eight, and the United Kingdom contributed 10. France, which imposes price controls, developed three. Throughout the 1990s, outcomes were similar.
Controversies
Controversies concerning pharmaceutical drugs include patient access to drugs under development and not yet approved, pricing, and environmental issues.
Access to unapproved drugs
Governments worldwide have created provisions for granting access to drugs prior to approval for patients who have exhausted all alternative treatment options and do not match clinical trial entry criteria. Often grouped under the labels of compassionate use, expanded access, or named patient supply, these programs are governed by rules which vary by country defining access criteria, data collection, promotion, and control of drug distribution.
Within the United States, pre-approval demand is generally met through treatment IND (investigational new drug) applications (INDs), or single-patient INDs. These mechanisms, which fall under the label of expanded access programs, provide access to drugs for groups of patients or individuals residing in the US. Outside the US, Named Patient Programs provide controlled, pre-approval access to drugs in response to requests by physicians on behalf of specific, or "named", patients before those medicines are licensed in the patient's home country. Through these programs, patients are able to access drugs in late-stage clinical trials or approved in other countries for a genuine, unmet medical need, before those drugs have been licensed in the patient's home country.
Patients who have not been able to get access to drugs in development have organized and advocated for greater access. In the United States, ACT UP formed in the 1980s, and eventually formed its Treatment Action Group in part to pressure the US government to put more resources into discovering treatments for AIDS and then to speed release of drugs that were under development.
The Abigail Alliance was established in November 2001 by Frank Burroughs in memory of his daughter, Abigail. The Alliance seeks broader availability of investigational drugs on behalf of terminally ill patients.
In 2013, BioMarin Pharmaceutical was at the center of a high-profile debate regarding expanded access of cancer patients to experimental drugs.
Access to medicines and drug pricing
Essential medicines, as defined by the World Health Organization (WHO), are "those drugs that satisfy the health care needs of the majority of the population; they should therefore be available at all times in adequate amounts and in appropriate dosage forms, at a price the community can afford." Recent studies have found that most of the medicines on the WHO essential medicines list, outside of the field of HIV drugs, are not patented in the developing world, and that lack of widespread access to these medicines arises from issues fundamental to economic development – lack of infrastructure and poverty. Médecins Sans Frontières also runs the Campaign for Access to Essential Medicines, which includes advocacy for greater resources to be devoted to currently untreatable diseases that primarily occur in the developing world. The Access to Medicine Index tracks how well pharmaceutical companies make their products available in the developing world.
World Trade Organization negotiations in the 1990s, including the TRIPS Agreement and the Doha Declaration, have centered on issues at the intersection of international trade in pharmaceuticals and intellectual property rights, with developed world nations seeking strong intellectual property rights to protect investments made to develop new drugs, and developing world nations seeking to promote their generic pharmaceuticals industries and their ability to make medicine available to their people via compulsory licenses.
Some have raised ethical objections specifically with respect to pharmaceutical patents and the high prices for drugs that they enable their proprietors to charge, which poor people around the world cannot afford. Critics also question the rationale that exclusive patent rights and the resulting high prices are required for pharmaceutical companies to recoup the large investments needed for research and development. One study concluded that marketing expenditures for new drugs often doubled the amount that was allocated for research and development. Other critics claim that patent settlements would be costly for consumers, the health care system, and state and federal governments because they would result in delaying access to lower-cost generic medicines.
Novartis fought a protracted battle with the government of India over the patenting of its drug Gleevec in India, which ended up in the Supreme Court of India in a case known as Novartis v. Union of India & Others. The Supreme Court ruled narrowly against Novartis, but opponents of patenting drugs claimed it as a major victory.
Environmental issues
Pharmaceutical medications are commonly described as "ubiquitous" in nearly every type of environmental medium (i.e. lakes, rivers, streams, estuaries, seawater, and soil) worldwide. Their chemical components are typically present at relatively low concentrations, in the ng/L to μg/L range. The primary avenue for medications reaching the environment is through the effluent of wastewater treatment plants, both from industrial plants during production and from municipal plants after consumption. Agricultural pollution is another significant source, derived from the prevalence of antibiotic use in livestock.
Scientists generally divide the environmental impacts of a chemical into three primary categories: persistence, bioaccumulation, and toxicity. Since medications are inherently bio-active, most are naturally degradable in the environment; however, they are classified as "pseudopersistent" because they are constantly being replenished from their sources. These Environmentally Persistent Pharmaceutical Pollutants (EPPPs) rarely reach toxic concentrations in the environment, but they have been known to bioaccumulate in some species. Their effects have been observed to compound gradually across food webs, rather than becoming acute, leading to their classification by the US Geological Survey as "Ecological Disrupting Compounds."
See also
Adherence
Deprescribing
Drug nomenclature
List of drugs
List of pharmaceutical companies
Orphan drug
Overmedication
Pharmaceutical code
Pharmacy
References
External links
Drug Reference Site Directory – OpenMD
Drugs & Medications Directory – Curlie
European Medicines Agency
NHS Medicines A–Z
U.S. Food & Drug Administration: Drugs
WHO Model List of Essential Medicines
Chemicals in medicine
Pharmaceutical industry
Products of chemical industry | Medication | [
"Chemistry",
"Engineering",
"Biology"
] | 8,234 | [
"Pharmacology",
"Life sciences industry",
"Products of chemical industry",
"Pharmaceutical industry",
"Chemical engineering",
"Medicinal chemistry",
"Chemicals in medicine",
"Drugs"
] |
180,201 | https://en.wikipedia.org/wiki/Riff | A riff is a short, repeated motif or figure in the melody or accompaniment of a musical composition. Riffs are most often found in rock music, heavy metal music, Latin, funk, and jazz, although classical music is also sometimes based on a riff, such as Ravel's Boléro. Riffs can be as simple as a tenor saxophone honking a simple, catchy rhythmic figure, or as complex as the riff-based variations in the head arrangements played by the Count Basie Orchestra.
David Brackett (1999) defines riffs as "short melodic phrases", while Richard Middleton (1999) defines them as "short rhythmic, melodic, or harmonic figures repeated to form a structural framework". Author Rikky Rooksby states: "A riff is a short, repeated, memorable musical phrase, often pitched low on the guitar, which focuses much of the energy and excitement of a rock song."
BBC Radio 2, in compiling its list of 100 Greatest Guitar Riffs, defined a riff as the "main hook of a song", often beginning the song, and is "repeated throughout it, giving the song its distinctive voice".
Use of the term has extended to comedy, where riffing means the verbal exploration of a particular subject, thus moving the meaning away from the original jazz sense of a repeated figure that a soloist improvises over, to instead indicate the improvisation itself—improvising on a melody or progression as one would improvise on a subject by extending a singular thought, idea or inspiration into a bit, or routine.
Etymology
The term riff entered musical slang in the 1920s and is used primarily in discussion of forms of rock music, heavy metal or jazz. One explanation holds that "most rock musicians use riff as a near-synonym for musical idea" (Middleton 1990, p. 125), but the etymology of the term is not clearly known.
Ian Anderson, in the documentary A World Without Beethoven, states (repeatedly) that "riff" is the abbreviation of "repeated motif." Other sources propose riff as an abbreviation for "rhythmic figure," "rhythm fragment," or "refrain".
Usage in jazz, blues, and R&B
In jazz, blues, and R&B, riffs are often used as the starting point for longer compositions. Count Basie's band used many riffs in the 1930s, as in "Jumpin' at the Woodside" and "One O'Clock Jump". Charlie Parker used riffs on "Now's the Time" and "Buzzy". Oscar Pettiford's tune "Blues in the Closet" is a riff tune, as is Duke Ellington's "C Jam Blues". Blues guitarist John Lee Hooker used a riff on "Boogie Chillen" in 1948.
The riff from Charlie Parker's bebop number "Now's the Time" (1945) re-emerged four years later as the R&B dance hit "The Hucklebuck". The verse of "The Hucklebuck", which was another riff, was "borrowed" from the Artie Matthews composition "Weary Blues". Glenn Miller's "In the Mood" had an earlier life as Wingy Manone's "Tar Paper Stomp". All these songs use twelve-bar blues riffs, and most of these riffs probably precede the examples given (Covach 2005, p. 71).
In classical music, individual musical phrases used as the basis of classical music pieces are called ostinatos or simply phrases. Contemporary jazz writers also use riff- or lick-like ostinatos in modal music and Latin jazz.
Riff-driven
The term "riff-driven" is used to describe a piece of music that relies on a repeated instrumental riff as the basis of its most prominent melody, cadence, or (in some cases) leitmotif. Riff-driven songs are largely a product of jazz, blues, and post-blues era music (rock and pop). The musical goal of riff-driven songs is akin to the classical continuo effect, but raised to much higher importance (in fact, the repeated riff is used to anchor the song in the ears of the listener). The riff/continuo is brought to the forefront of the musical piece and often is the primary melody that remains in the listener's ears. A call and response often holds the song together, creating a "circular" rather than linear feel.
Who recorded the first riff-driven rock and roll song is contested, but very early examples include the playing by René Hall on Ritchie Valens’ 1958 version of "La Bamba" (on a Danelectro six-string bass guitar), as well as Link Wray's 1958 instrumental record "Rumble."
A few examples of classic rock riff-driven songs are "Whole Lotta Love" and "Black Dog" by Led Zeppelin, "Day Tripper" by The Beatles, "Brown Sugar" and "(I Can't Get No) Satisfaction" by The Rolling Stones, "Smoke on the Water" by Deep Purple, "Back in Black" by AC/DC, "Smells Like Teen Spirit" by Nirvana, "Johnny B Goode" by Chuck Berry, "Back in the Saddle" by Aerosmith, and "You Really Got Me" by The Kinks.
See also
Fill
Riffusion
Vamp
References
Sources
Covach, John. "Form in Rock Music: A Primer", in Stein, Deborah (2005). Engaging Music: Essays in Music Analysis. New York: Oxford University Press.
External links
Jazz Guitar Riffs
Accompaniment
Formal sections in music analysis | Riff | [
"Technology"
] | 1,158 | [
"Components",
"Formal sections in music analysis"
] |
180,210 | https://en.wikipedia.org/wiki/Biogeographic%20realm | A biogeographic realm is the broadest biogeographic division of Earth's land surface, based on distributional patterns of terrestrial organisms. They are subdivided into bioregions, which are further subdivided into ecoregions.
A biogeographic realm is also known as "ecozone", although that term may also refer to ecoregions.
Description
The realms delineate large areas of Earth's surface within which organisms have evolved in relative isolation over long periods of time, separated by geographic features, such as oceans, broad deserts, or high mountain ranges, that constitute natural barriers to migration. As such, biogeographic realm designations are used to indicate general groupings of organisms based on their shared biogeography. Biogeographic realms correspond to the floristic kingdoms of botany or zoogeographic regions of zoology.
From 1872, Alfred Russel Wallace developed a system of zoogeographic regions, extending the ornithologist Philip Sclater's system of six regions.
Biogeographic realms are characterized by the evolutionary history of the organisms they contain. They are distinct from biomes, also known as major habitat types, which are divisions of the Earth's surface based on life form, or the adaptation of animals, fungi, micro-organisms and plants to climatic, soil, and other conditions. Biomes are characterized by similar climax vegetation. Each realm may include a number of different biomes. A tropical moist broadleaf forest in Central America, for example, may be similar to one in New Guinea in its vegetation type and structure, climate, soils, etc., but these forests are inhabited by animals, fungi, micro-organisms and plants with very different evolutionary histories.
The distribution of organisms among the world's biogeographic realms has been influenced by the distribution of landmasses, as shaped by plate tectonics over the geological history of the Earth.
Concept history
The "biogeographic realms" of Udvardy were defined based on taxonomic composition. The rank corresponds more or less to the floristic kingdoms and zoogeographic regions.
The usage of the term "ecozone" is more variable. Beginning in the 1960s, it was used originally in the field of biostratigraphy to denote intervals of geological strata with fossil content demonstrating a specific ecology. In Canadian literature, the term was used by Wiken in macro level land classification, with geographic criteria (see Ecozones of Canada). Later, Schultz would use it with ecological and physiognomical criteria, in a way similar to the concept of biome.
In the Global 200/WWF scheme, originally the term "biogeographic realm" in Udvardy sense was used. However, in a scheme of BBC, it was replaced by the term "ecozone".
Terrestrial biogeographic realms
Udvardy biogeographic realms
WWF / Global 200 biogeographic realms
The World Wildlife Fund scheme is broadly similar to Miklos Udvardy's system, the chief difference being the delineation of the Australasian realm relative to the Antarctic, Oceanic, and Indomalayan realms. In the WWF system, the Australasia realm includes Australia, Tasmania, the islands of Wallacea, New Guinea, the East Melanesian Islands, New Caledonia, and New Zealand. Udvardy's Australian realm includes only Australia and Tasmania; he places Wallacea in the Indomalayan Realm, New Guinea, New Caledonia, and East Melanesia in the Oceanian Realm, and New Zealand in the Antarctic Realm.
The Palearctic and Nearctic are sometimes grouped into the Holarctic realm.
Morrone biogeographic kingdoms
Following the nomenclatural conventions set out in the International Code of Area Nomenclature, Morrone defined the following biogeographic kingdoms (or realms) and regions:
Holarctic kingdom Heilprin (1887)
Nearctic region Sclater (1858)
Palearctic region Sclater (1858)
Holotropical kingdom Rapoport (1968)
Neotropical region Sclater (1858)
Ethiopian region Sclater (1858)
Oriental region Wallace (1876)
Austral kingdom Engler (1899)
Cape region Grisebach (1872)
Andean region Engler (1882)
Australian region Sclater (1858)
Antarctic region Grisebach (1872)
Transition zones:
Mexican transition zone (Nearctic–Neotropical transition)
Saharo-Arabian transition zone (Palearctic–Ethiopian transition)
Chinese transition zone (Palearctic–Oriental transition)
Indo-Malayan, Indonesian or Wallace's transition zone (Oriental–Australian transition)
South American transition zone (Neotropical–Austral transition)
Freshwater biogeographic realms
The applicability of the Udvardy scheme to most freshwater taxa is unresolved.
The drainage basins of the principal oceans and seas of the world are marked by continental divides. The grey areas are endorheic basins that do not drain to the ocean.
Marine biogeographic realms
According to Briggs and Morrone:
According to the WWF scheme:
See also
Biome
Cosmopolitan distribution
Ecotone
Phytochorion and World Geographical Scheme for Recording Plant Distributions, used in botany
References
Biogeography
Habitat | Biogeographic realm | [
"Biology"
] | 1,059 | [
"Biogeography"
] |
180,223 | https://en.wikipedia.org/wiki/Palearctic%20realm | The Palearctic or Palaearctic is a biogeographic realm of the Earth, the largest of eight. Confined almost entirely to the Eastern Hemisphere, it stretches across all of Eurasia north of the foothills of the Himalayas, and North Africa.
The realm consists of several bioregions, variously spanning the Euro-Siberian region; the Mediterranean Basin; North Africa; North Arabia; and Western, Central and East Asia. The Palaearctic realm also has numerous rivers and lakes, forming several freshwater ecoregions.
Both the easternmost and westernmost extremes of the Palearctic span into the Western Hemisphere, including Cape Dezhnyov in Chukotka Autonomous Okrug to the east and Iceland to the west. The term was first used in the 19th century, and is still in use as the basis for zoogeographic classification.
History
In an 1858 paper for the Proceedings of the Linnean Society, British zoologist Philip Sclater first identified six terrestrial zoogeographic realms of the world: Palaearctic, Aethiopian/Afrotropic, Indian/Indomalayan, Australasian, Nearctic, and Neotropical. The six indicated general groupings of fauna, based on shared biogeography and large-scale geographic barriers to migration.
Alfred Wallace adopted Sclater's scheme for his book The Geographical Distribution of Animals, published in 1876. This is the same scheme that persists today, with relatively minor revisions, and the addition of two more realms: Oceania and the Antarctic realm.
Major ecological regions
The Palearctic realm includes mostly boreal/subarctic-climate and temperate-climate ecoregions, which run across Eurasia from western Europe to the Bering Sea.
Euro-Siberian region
The boreal and temperate Euro-Siberian region is the Palearctic's largest biogeographic region, which transitions from tundra in the northern reaches of Russia and Scandinavia to the vast taiga, the boreal coniferous forests which run across the continent. South of the taiga are a belt of temperate broadleaf and mixed forests and temperate coniferous forests. This vast Euro-Siberian region is characterized by many shared plant and animal species, and has many affinities with the temperate and boreal regions of the Nearctic realm of North America. Eurasia and North America were often connected by the Bering land bridge, and have very similar mammal and bird fauna, with many Eurasian species having moved into North America, and fewer North American species having moved into Eurasia. Many zoologists consider the Palearctic and Nearctic to be a single Holarctic realm. The Palearctic and Nearctic also share many plant species, which botanists call the Arcto-Tertiary Geoflora.
Mediterranean Basin
The lands bordering the Mediterranean Sea in southern Europe, north Africa, and western Asia are home to the Mediterranean Basin ecoregions, which together constitute the world's largest and most diverse mediterranean climate region of the world, with generally mild, rainy winters and hot, dry summers. The Mediterranean basin's mosaic of Mediterranean forests, woodlands, and scrub are home to 13,000 endemic species. The Mediterranean basin is also one of the world's most endangered biogeographic regions; only 4% of the region's original vegetation remains, and human activities, including overgrazing, deforestation, and conversion of lands for pasture, agriculture, and urbanization, have degraded much of the region. Formerly the region was mostly covered with forests and woodlands, but heavy human use has reduced much of the region to the sclerophyll shrublands known as chaparral, matorral, maquis, or garrigue. Conservation International has designated the Mediterranean basin as one of the world's biodiversity hotspots.
Sahara and Arabian deserts
A great belt of deserts, including the Atlantic coastal desert, Sahara Desert, and Arabian Desert, separates the Palearctic and Afrotropic ecoregions. This scheme includes these desert ecoregions in the Palearctic realm; other biogeographers identify the realm boundary as the transition zone between the desert ecoregions and the Mediterranean basin ecoregions to the north, which places the deserts in the Afrotropic, while others place the boundary through the middle of the desert.
Western and Central Asia
The Caucasus mountains, which run between the Black Sea and the Caspian Sea, are a particularly rich mix of coniferous, broadleaf, and mixed forests, and include the temperate rain forests of the Euxine-Colchic deciduous forests ecoregion.
Central Asia and the Iranian plateau are home to dry steppe grasslands and desert basins, with montane forests, woodlands, and grasslands in the region's high mountains and plateaux. In southern Asia the boundary of the Palearctic is largely altitudinal: the middle-altitude foothills of the Himalaya form the boundary between the Palearctic and Indomalaya ecoregions.
East Asia
China, Korea and Japan are more humid and temperate than adjacent Siberia and Central Asia, and are home to rich temperate coniferous, broadleaf, and mixed forests, which are now mostly limited to mountainous areas, as the densely populated lowlands and river basins have been converted to intensive agricultural and urban use. East Asia was not much affected by glaciation in the ice ages, and retained 96 percent of Pliocene tree genera, while Europe retained only 27 percent. In the subtropical region of southern China and the southern edge of the Himalayas, the Palearctic temperate forests transition to the subtropical and tropical forests of Indomalaya, creating a rich and diverse mix of plant and animal species. The mountains of southwest China are also designated as a biodiversity hotspot. In Southeast Asia, high mountain ranges form tongues of Palearctic flora and fauna in northern Indochina and southern China. Isolated small outposts (sky islands) occur as far south as central Myanmar (on Nat Ma Taung), northernmost Vietnam (on Fan Si Pan) and the high mountains of Taiwan.
Freshwater
The realm contains several important freshwater ecoregions as well, including the heavily developed rivers of Europe, the rivers of Russia, which flow into the Arctic, Baltic, Black, and Caspian seas, Siberia's Lake Baikal, the oldest and deepest lake on the planet, and Japan's ancient Lake Biwa.
Flora and fauna
One bird family, the accentors (Prunellidae), is endemic to the Palearctic region. The Holarctic has four other endemic bird families: the divers or loons (Gaviidae), grouse (Tetraoninae), auks (Alcidae), and waxwings (Bombycillidae).
There are no endemic mammal orders in the region, but several families are endemic: Calomyscidae (mouse-like hamsters), Prolagidae, and Ailuridae (red pandas). Several mammal species originated in the Palearctic and spread to the Nearctic during the Ice Age, including the brown bear (Ursus arctos, known in North America as the grizzly), red deer (Cervus elaphus) in Europe and the closely related elk (Cervus canadensis) in far eastern Siberia, American bison (Bison bison), and reindeer (Rangifer tarandus, known in North America as the caribou).
Megafaunal extinctions
Several large Palearctic animals became extinct from the end of the Pleistocene into historic times, including the Irish elk (Megaloceros giganteus), aurochs (Bos primigenius), woolly rhinoceros (Coelodonta antiquitatis), woolly mammoth (Mammuthus primigenius), North African elephant (Loxodonta africana pharaoensis), Chinese elephant (Elephas maximus rubridens), cave bear (Ursus spelaeus), straight-tusked elephant (Palaeoloxodon antiquus), and European lion (Panthera leo europaea).
Palearctic terrestrial ecoregions
References
General references
Amorosi, T. "Contributions to the zooarchaeology of Iceland: some preliminary notes" in The Anthropology of Iceland (eds. E.P. Durrenberger & G. Pálsson). Iowa City: University of Iowa Press, pp. 203–227, 1989.
Buckland, P.C., et al. "Holt in Eyjafjasveit, Iceland: a paleoecological study of the impact of Landnám" in Acta Archaeologica 61: pp. 252–271. 1991.
http://www.Merriam-Webster.com
http://www.Canadianbiodiversity.mcgill.ca
http://www.bbc.co.uk/nature/ecozones/Palearctic_ecozone
Edmund Burke III, "The Transformation of the middle Eastern Environment, 1500 B.C.E.–2000 C.E." in The Environment and World History, ed. Edmund Burke III and Kenneth Pomeranz. Berkeley: University of California Press. 2009, 82–84.
External links
Avionary 1500 Bird species of the Western and Central Palaearctic in 46 languages
Map of the ecozones
Biogeography
Biogeographic realms
Natural history of Asia
Natural history of Europe
Natural history of Africa
Phytogeography | Palearctic realm | [
"Biology"
] | 1,935 | [
"Biogeography"
] |
180,224 | https://en.wikipedia.org/wiki/Proscenium | A proscenium (, ) is the metaphorical vertical plane of space in a theatre, usually surrounded on the top and sides by a physical proscenium arch (whether or not truly "arched") and on the bottom by the stage floor itself, which serves as the frame into which the audience observes from a more or less unified angle the events taking place upon the stage during a theatrical performance. The concept of the fourth wall of the theatre stage space that faces the audience is essentially the same.
It can be considered as a social construct which divides the actors and their stage-world from the audience which has come to witness it. But since the curtain usually comes down just behind the proscenium arch, it has a physical reality when the curtain is down, hiding the stage from view. The same plane also includes the drop, in traditional theatres of modern times, from the stage level to the "stalls" level of the audience, which was the original meaning of the proscaenium in Roman theatres, where this mini-facade was given more architectural emphasis than is the case in modern theatres. A proscenium stage is structurally different from a thrust stage or an arena stage, as explained below.
Origin
In later Hellenistic Greek theatres the proskenion (προσκήνιον) was a rather narrow raised stage where solo actors performed, while the Greek chorus and musicians remained in the "orchestra" in front and below it, and there were often further areas for performing from above and behind the proskenion, on and behind the skene. Skene is the Greek word (meaning "tent") for the tent, and later building, at the back of the stage from which actors entered, and which often supported painted scenery. In the Hellenistic period it became an increasingly large and elaborate stone structure, often with three storeys. In Greek theatre, which unlike Roman included painted scenery, the proskenion might also carry scenery.
In ancient Rome, the stage area in front of the scaenae frons (equivalent to the Greek skene) was known as the pulpitum, and the vertical front dropping from the stage to the orchestra floor, often in stone and decorated, as the proscaenium, again meaning "in front of the skene".
In the Greek and Roman theatre, no proscenium arch existed, in the modern sense, and the acting space was always fully in the view of the audience. However, Roman theatres were similar to modern proscenium theatres in the sense that the entire audience had a restricted range of views on the stage—all of which were from the front, rather than the sides or back.
Renaissance
The oldest surviving indoor theatre of the modern era, the Teatro Olimpico in Vicenza (1585), is sometimes incorrectly referred to as the first example of a proscenium theatre. The Teatro Olimpico was an academic reconstruction of a Roman theatre. It has a plain proscaenium at the front of the stage, dropping to the orchestra level, now usually containing "stalls" seating, but no proscenium arch.
However, the Teatro Olimpico's exact replication of the open and accessible Roman stage was the exception rather than the rule in sixteenth-century theatre design. Engravings suggest that the proscenium arch was already in use as early as 1560 at a production in Siena.
The earliest true proscenium arch to survive in a permanent theatre is the Teatro Farnese in Parma (1618), many earlier such theatres having been lost. Parma has a clearly defined "boccascena", or scene mouth, as Italians call it, more like a picture frame than an arch but serving the same purpose: to delineate the stage and separate the audience from its action.
Baroque
While the proscenium arch became an important feature of the traditional European theatre, often becoming very large and elaborate, the original proscaenium front below the stage became plainer.
The introduction of an orchestra pit for musicians during the Baroque era further devalued the proscaenium, bringing the lowest level of the audience's view forward to the front of the pit, where a barrier, typically in wood, screened the pit. What the Romans would have called the proscaenium is, in modern theatres with orchestra pits, normally painted black in order that it does not draw attention.
Confusion around Teatro Olimpico
In this early modern recreation of a Roman theatre, confusion seems to have been introduced to the use of the revived term in Italian. This emulation of the Roman model extended to refer to the stage area as the "proscenium", and some writers have incorrectly referred to the theatre's scaenae frons as a proscenium, and have even suggested that the central archway in the middle of the scaenae frons was the inspiration for the later development of the full-size proscenium arch. There is no evidence at all for this assumption (indeed, contemporary illustrations of performances at the Teatro Olimpico clearly show that the action took place in front of the scaenae frons and that the actors were rarely framed by the central archway).
The Italian word for a scaenae frons is "proscenio," a major change from Latin. One modern translator explains the wording problem that arises here: "[In this translation from Italian,] we retain the Italian proscenio in the text; it cannot be rendered proscenium for obvious reasons; and there is no English equivalent ... It would also be possible to retain the classical frons scaenae. The Italian "arco scenico" has been translated as "proscenium arch."
In practice, however, the stage in the Teatro Olimpico runs from one edge of the seating area to the other, and only a very limited framing effect is created by the coffered ceiling over the stage and by the partition walls at the corners of the stage where the seating area abuts the floorboards. The result is that in this theatre "the architectural spaces for the audience and the action ... are distinct in treatment yet united by their juxtaposition; no proscenium arch separates them."
Function
A proscenium arch creates a "window" around the scenery and performers. The advantage is that it gives everyone in the audience a good view, because the performers need only focus in one direction rather than continually moving around the stage to give a good view from all sides. A proscenium theatre layout also simplifies the hiding and obscuring of objects from the audience's view (sets, performers not currently performing, and theatre technology). Anything that is not meant to be seen is simply placed outside the "window" created by the proscenium arch, either in the wings or in the flyspace above the stage. The phrase "breaking the proscenium" or "breaking the fourth wall" refers to a performer addressing the audience directly as part of the dramatic production.
Proscenium theatres have fallen out of favor in some theatre circles because they perpetuate the fourth wall concept. The staging in proscenium theatres often implies that the characters performing on stage are doing so in a four-walled environment, with the "wall" facing the audience being invisible. Many modern theatres attempt to do away with the fourth wall concept and so are instead designed with a thrust stage that projects out of the proscenium arch and "reaches" into the audience (technically, this can still be referred to as a proscenium theatre because it still contains a proscenium arch, but the term thrust stage is more specific and more widely used).
In dance history, the use of the proscenium arch has affected dance in different ways. Prior to the use of proscenium stages, early court ballets took place in large chambers where the audience members sat around and above the dance space. The performers, often led by the queen or king, moved in symmetrical figures and patterns of symbolic meaning. Ballet's choreographic patterns were being born. In addition, since dancing was considered a way of socializing, most of the court ballets finished with a ‘grand ballet’ followed by a ball in which the members of the audience joined the performance.
Later on, the use of the proscenium stage for performances established a separation of the audience from the performers. Therefore, more attention was placed on the performers and on what was occurring in the ‘show.’ It was the beginning of dance performance as a form of entertainment as we know it today. Since the use of proscenium stages, dances have developed and evolved into more complex figures, patterns, and movements. At this point, it mattered not only how the performers arrived at a certain shape on the stage during a performance, but also how gracefully they executed their task. Additionally, these stages allowed for the use of stage effects generated by ingenious machinery. It was the beginning of scenography design, and perhaps also the origin of the use of backstage personnel or "stage hands".
Other forms of theatre staging
Traverse stage: The stage is surrounded on two sides by the audience.
Thrust stage: The stage is surrounded on three sides (or 270°) by audience. Can be a modification of a proscenium stage. Sometimes known as "three quarter round". Also known as an apron stage.
Theatre in the round: The stage is surrounded by audience on all sides.
Black box theatre: The theatre is a large rectangular room with black walls and a flat floor. The seating is typically composed of loose chairs on platforms, which can be easily moved or removed to allow the entire space to be adapted to the artistic elements of a production.
Site-specific theatre (a.k.a. environmental theatre): The stage and audience either blend together, or are in numerous or oddly shaped sections. Includes any form of staging that is not easily classifiable under the above categories.
References
External links
Scenography - The Theatre Design Website Diagram and images of proscenium stage
Parts of a theatre
Stage terminology
Stagecraft | Proscenium | [
"Technology"
] | 2,110 | [
"Parts of a theatre",
"Components"
] |
180,234 | https://en.wikipedia.org/wiki/Kilowatt-hour | A kilowatt-hour (unit symbol: kW⋅h or kW h; commonly written as kWh) is a non-SI unit of energy equal to 3.6 megajoules (MJ) in SI units, which is the energy delivered by one kilowatt of power for one hour. Kilowatt-hours are a common billing unit for electrical energy supplied by electric utilities. Metric prefixes are used for multiples and submultiples of the basic unit, the watt-hour (3.6 kJ).
Definition
The kilowatt-hour is a composite unit of energy equal to one kilowatt (kW) sustained for (multiplied by) one hour. The International System of Units (SI) unit of energy meanwhile is the joule (symbol J). Because a watt is by definition one joule per second, and because there are 3,600 seconds in an hour, one kWh equals 3,600 kilojoules or 3.6 MJ.
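Written out as a worked equation:

```latex
1\ \mathrm{kWh} = 1\,000\ \mathrm{W} \times 3\,600\ \mathrm{s} = 3.6\times10^{6}\ \mathrm{J} = 3.6\ \mathrm{MJ}
```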
Unit representations
A widely used representation of the kilowatt-hour is kWh, derived from its component units, kilowatt and hour. It is commonly used in billing for delivered energy to consumers by electric utility companies, and in commercial, educational, and scientific publications, and in the media. It is also the usual unit representation in electrical power engineering. This common representation, however, does not comply with the style guide of the International System of Units (SI).
Other representations of the unit may be encountered:
kW⋅h and kW h are less commonly used, but they are consistent with the SI. The SI brochure states that in forming a compound unit symbol, "Multiplication must be indicated by a space or a half-high (centred) dot (⋅), since otherwise some prefixes could be misinterpreted as a unit symbol." This is supported by a standard issued jointly by an international (IEEE) and national (ASTM) organization, and by a major style guide. However, the IEEE/ASTM standard allows kWh (but does not mention other multiples of the watt-hour). One guide published by NIST specifically recommends against kWh "to avoid possible confusion".
In 2014, the United States official fuel-economy window sticker for electric vehicles used the abbreviation kW-hrs.
Variations in capitalization are sometimes encountered: KWh, KWH, kwh, etc., which are inconsistent with the International System of Units.
The notation kW/h for the kilowatt-hour is incorrect, as it denotes kilowatt per hour.
The hour is a unit of time listed among the non-SI units accepted by the International Bureau of Weights and Measures for use with the SI.
An electric heater consuming 1,000 watts (1 kilowatt) operating for one hour uses one kilowatt-hour of energy. A television consuming 100 watts operating continuously for 10 hours uses one kilowatt-hour. A 40-watt electric appliance operating continuously for 25 hours uses one kilowatt-hour.
Electricity sales
Electrical energy is typically sold to consumers in kilowatt-hours. The cost of running an electrical device is calculated by multiplying the device's power consumption in kilowatts by the operating time in hours, and by the price per kilowatt-hour. The unit price of electricity charged by utility companies may depend on the customer's consumption profile over time. Prices vary considerably by locality. In the United States prices in different states can vary by a factor of three.
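As an illustration of that multiplication (the appliance rating, running time, and tariff below are hypothetical figures chosen only to show the arithmetic):

```latex
\mathrm{cost} = 1.5\ \mathrm{kW} \times 8\ \mathrm{h} \times \$0.15/\mathrm{kWh} = \$1.80
```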
While smaller customer loads are usually billed only for energy, transmission services, and the rated capacity, larger consumers also pay for peak power consumption, the greatest power recorded in a fairly short time, such as 15 minutes. This compensates the power company for maintaining the infrastructure needed to provide peak power. These charges are billed as demand charges. Industrial users may also have extra charges according to the power factor of their load.
Major energy production or consumption is often expressed as terawatt-hours (TWh) for a given period that is often a calendar year or financial year. A 365-day year equals 8,760 hours, so over a period of one year, power of one gigawatt equates to 8.76 terawatt-hours of energy. Conversely, one terawatt-hour is equal to a sustained power of about 114 megawatts for a period of one year.
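The same relationship as a worked conversion:

```latex
1\ \mathrm{GW} \times 8\,760\ \mathrm{h} = 8\,760\ \mathrm{GWh} = 8.76\ \mathrm{TWh},
\qquad
\frac{1\ \mathrm{TWh}}{8\,760\ \mathrm{h}} \approx 114\ \mathrm{MW}
```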
Examples
In 2020, the average household in the United States consumed 893 kWh per month.
Raising the temperature of 1 litre of water from room temperature to the boiling point with an electric kettle takes about 0.1 kWh.
A 12-watt LED lamp lit constantly uses about 0.3 kWh per 24 hours and about 9 kWh per month.
In terms of human power, a healthy adult male manual laborer performs work equal to about half a kilowatt-hour over an eight-hour day.
Conversions
To convert a quantity measured in a unit in the left column to the units in the top row, multiply by the factor in the cell where the row and column intersect.
Watt-hour multiples
All the SI prefixes are commonly applied to the watt-hour: a kilowatt-hour (kWh) is 1,000 Wh; a megawatt-hour (MWh) is 1 million Wh; a milliwatt-hour (mWh) is 3.6 joules; and so on.
The kilowatt-hour is commonly used by electrical energy providers for purposes of billing, since the monthly energy consumption of a typical residential customer ranges from a few hundred to a few thousand kilowatt-hours. Megawatt-hours (MWh), gigawatt-hours (GWh), and terawatt-hours (TWh) are often used for metering larger amounts of electrical energy to industrial customers and in power generation. The terawatt-hour and petawatt-hour (PWh) units are large enough to conveniently express the annual electricity generation for whole countries and the world energy consumption.
Distinction between kWh (energy) and kW (power)
A kilowatt is a unit of power (rate of flow of energy per unit of time). A kilowatt-hour is a unit of energy. Kilowatt per hour would be a rate of change of power flow with time.
Work is the amount of energy transferred to a system; power is the rate of delivery of energy.
Energy is measured in joules, or watt-seconds. Power is measured in watts, or joules per second.
For example, a battery stores energy. When the battery delivers its energy, it does so at a certain power, that is, the rate of delivery of the energy. The higher the power, the quicker the battery's stored energy is delivered. A higher power output will cause the battery's stored energy to be depleted in a shorter time period.
Annualized power
Electric energy production and consumption are sometimes reported on a yearly basis, in units such as megawatt-hours per year (MWh/yr), gigawatt-hours per year (GWh/yr), or terawatt-hours per year (TWh/yr). These units have dimensions of energy divided by time and thus are units of power. They can be converted to SI power units by dividing by the number of hours in a year, about 8,760.
Thus, 1 GWh/yr = 1 GWh / 8,760 h ≈ 114 kW.
Misuse of watts per hour
Many compound units for various kinds of rates explicitly mention units of time to indicate a change over time. For example: miles per hour, kilometres per hour, dollars per hour. Power units, such as kW, already measure the rate of energy per unit time (kW=kJ/s). Kilowatt-hours are a product of power and time, not a rate of change of power with time.
Watts per hour (W/h) is a unit of a change of power per hour, i.e. an acceleration in the delivery of energy. It is used to measure the daily variation of demand (e.g. the slope of the duck curve), or the ramp-up behavior of power plants. For example, a power plant that reaches a power output of 1 MW from 0 MW in 15 minutes has a ramp-up rate of 4 MW/h.
Other uses of terms such as watts per hour are likely to be errors.
Other related energy units
Several other units related to kilowatt-hour are commonly used to indicate power or energy capacity or use in specific application areas.
Average annual energy production or consumption can be expressed in kilowatt-hours per year. This is used with loads or output that vary during the year but whose annual totals are similar from one year to the next. For example, it is useful to compare the energy efficiency of household appliances whose power consumption varies with time or the season of the year. Another use is to measure the energy produced by a distributed power source. One kilowatt-hour per year equals about 114.08 milliwatts applied constantly during one year.
The energy content of a battery is usually expressed indirectly by its capacity in ampere-hours; to convert ampere-hour (Ah) to watt-hours (Wh), the ampere-hour value must be multiplied by the voltage of the power source. This value is approximate, since the battery voltage is not constant during its discharge, and because higher discharge rates reduce the total amount of energy that the battery can provide. In the case of devices that output a different voltage than the battery, it is the battery voltage (typically 3.7 V for Li-ion) that must be used to calculate rather than the device output (for example, usually 5.0 V for USB portable chargers). This results in a 500 mA USB device running for about 3.7 hours on a 2,500 mAh battery, not five hours.
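The figures quoted above work out as follows:

```latex
2\,500\ \mathrm{mAh} \times 3.7\ \mathrm{V} = 9.25\ \mathrm{Wh},
\qquad
\frac{9.25\ \mathrm{Wh}}{5.0\ \mathrm{V} \times 0.5\ \mathrm{A}} = \frac{9.25\ \mathrm{Wh}}{2.5\ \mathrm{W}} = 3.7\ \mathrm{h}
```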
The Board of Trade unit (B.T.U.) is an obsolete UK synonym for kilowatt-hour. The term derives from the name of the Board of Trade which regulated the electricity industry until 1942 when the Ministry of Power took over. It is distinct from a British Thermal Unit (BTU) which is 1055 J.
In India, the kilowatt-hour is often simply called a unit of energy. A million units, designated MU, is a gigawatt-hour and a BU (billion units) is a terawatt-hour.
See also
Ampere-hour
Electric vehicle battery
Electric energy consumption
IEEE Std 260.1-2004
Orders of magnitude (energy)
References
Units of energy
Electric power
Non-SI metric units | Kilowatt-hour | [
"Physics",
"Mathematics",
"Engineering"
] | 2,193 | [
"Physical quantities",
"Non-SI metric units",
"Quantity",
"Units of energy",
"Power (physics)",
"Electric power",
"Electrical engineering",
"Units of measurement"
] |
180,236 | https://en.wikipedia.org/wiki/Greisen%E2%80%93Zatsepin%E2%80%93Kuzmin%20limit | The Greisen–Zatsepin–Kuzmin limit (GZK limit or GZK cutoff) is a theoretical upper limit on the energy of cosmic ray protons traveling from other galaxies through the intergalactic medium to our galaxy. The limit is (50 EeV), or about 8 joules (the energy of a proton travelling at ≈ % the speed of light). The limit is set by the slowing effect of interactions of the protons with the microwave background radiation over long distances (≈ 160 million light-years). The limit is at the same order of magnitude as the upper limit for energy at which cosmic rays have experimentally been detected, although indeed some detections appear to have exceeded the limit, as noted below. For example, one extreme-energy cosmic ray, the Oh-My-God Particle, which has been found to possess a record-breaking (50 joules) of energy (about the same as the kinetic energy of a 95 km/h baseball).
In the past, the apparent violation of the GZK limit has inspired cosmologists and theoretical physicists to suggest other ways that circumvent the limit. These theories propose that ultra-high energy cosmic rays are produced near our galaxy or that Lorentz covariance is violated in such a way that protons do not lose energy on their way to our galaxy.
Computation
The limit was independently computed in 1966 by Kenneth Greisen, Georgy Zatsepin, and Vadim Kuzmin based on interactions between cosmic rays and the photons of the cosmic microwave background radiation (CMB). They predicted that cosmic rays with energies over the threshold energy of 5×10^19 eV would interact with cosmic microwave background photons, relatively blueshifted by the speed of the cosmic rays, to produce pions through the Δ resonance,
γ_CMB + p → Δ⁺ → p + π⁰,
or
γ_CMB + p → Δ⁺ → n + π⁺.
Pions produced in this manner proceed to decay in the standard pion channels – ultimately to photons for neutral pions, and photons, positrons, and various neutrinos for positive pions. Neutrons also decay to similar products, so that ultimately the energy of any cosmic ray proton is drained off by production of high-energy photons plus (in some cases) high-energy electron–positron pairs and neutrino pairs.
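The scale of the threshold can be checked with a short kinematic sketch. This is only an estimate: it assumes a head-on collision with a photon of energy ε ≈ 6×10^-4 eV (a typical CMB value); scattering off the high-energy tail of the blackbody spectrum brings the effective cutoff down toward the quoted 5×10^19 eV.

```latex
% Invariant mass squared of the proton-photon system (c = 1, head-on collision):
s \simeq m_p^2 + 4 E_p \varepsilon
% Pion production requires s \ge (m_p + m_\pi)^2, giving the threshold proton energy:
E_{\mathrm{th}} \simeq \frac{m_\pi \left(2 m_p + m_\pi\right)}{4\varepsilon}
   \approx \frac{0.135 \times (2\times 0.938 + 0.135)\ \mathrm{GeV}^2}{4 \times 6\times10^{-4}\ \mathrm{eV}}
   \approx 1\times10^{20}\ \mathrm{eV}
```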
The pion production process begins at a higher energy than ordinary electron-positron pair production (lepton production) from protons impacting the CMB, which starts at cosmic-ray proton energies of only about . However, pion production events drain 20% of the energy of a cosmic-ray proton, as compared with only 0.1% of its energy for electron–positron pair production.
This factor comes from two causes: The pion has a mass only about ~130 times the leptons, but the extra energy appears as different kinetic energies of the pion or leptons, and results in relatively more kinetic energy transferred to a heavier product pion, in order to conserve momentum. The much larger total energy losses from pion production result in pion production becoming the process limiting high-energy cosmic-ray travel, rather than the lower-energy process of light-lepton production.
The pion production process continues until the cosmic ray energy falls below the threshold for pion production. Due to the mean path associated with this interaction, extragalactic cosmic ray protons traveling over distances larger than about 160 million light-years and with energies greater than the threshold should never be observed on Earth. This distance is also known as the GZK horizon.
The precise GZK limit is derived under the assumption that ultra-high energy cosmic rays at these energies are protons. Measurements by the largest cosmic-ray observatory, the Pierre Auger Observatory, suggest that most ultra-high energy cosmic rays are heavier elements known as HZE ions. In this case, the argument behind the GZK limit does not apply in the originally simple form; however, as Greisen noted, the giant dipole resonance also occurs roughly in this energy range (at 10 EeV/nucleon) and similarly restricts very long-distance propagation.
GZK paradox
A number of observations made by the largest cosmic-ray experiments, the Akeno Giant Air Shower Array (AGASA), the High Resolution Fly's Eye Cosmic Ray Detector, the Pierre Auger Observatory and the Telescope Array Project, appeared to show cosmic rays with energies above the GZK limit.
These observations appear to contradict the predictions of special relativity and particle physics as they are presently understood. However, there are a number of possible explanations for these observations that may resolve this inconsistency.
The observed EECR particles can be heavier nuclei instead of protons
The observations could be due to an instrument error or an incorrect interpretation of the experiment, especially wrong energy assignment.
The cosmic rays could have local sources within the GZK horizon (although it is unclear what these sources could be).
Weakly interacting particles
Another suggestion involves ultra-high-energy weakly interacting particles (for instance, neutrinos), which might be created at great distances and later react locally to give rise to the particles observed. In the proposed Z-burst model, an ultra-high-energy cosmic neutrino collides with a relic anti-neutrino in our galaxy and annihilates to hadrons. This process proceeds through a (virtual) Z-boson: ν + ν̄ → Z → hadrons.
The cross-section for this process becomes large if the center-of-mass energy of the neutrino-antineutrino pair is equal to the Z-boson mass (such a peak in the cross-section is called "resonance"). Assuming that the relic anti-neutrino is at rest, the energy of the incident cosmic neutrino has to be E = m_Z² / (2 m_ν),
where m_Z is the mass of the Z-boson and m_ν is the mass of the neutrino.
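For a sense of scale (the neutrino mass used here, 0.1 eV, is an assumed illustrative value, not a measured one):

```latex
E_{\mathrm{res}} = \frac{m_Z^2}{2 m_\nu}
  = \frac{(91.2\ \mathrm{GeV})^2}{2 \times 0.1\ \mathrm{eV}}
  \approx 4\times10^{22}\ \mathrm{eV}
```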
Controversy about cosmic rays above the GZK limit
A suppression of the cosmic-ray flux that can be explained with the GZK limit has been confirmed by the latest generation of cosmic-ray observatories. A former claim by the AGASA experiment that there is no suppression was overruled. It remains controversial whether the suppression is due to the GZK effect. The GZK limit only applies if ultra-high-energy cosmic rays are mostly protons.
In July 2007, during the 30th International Cosmic Ray Conference in Mérida, Yucatán, México, the High Resolution Fly's Eye Experiment (HiRes) and the Pierre Auger Observatory (Auger) presented their results on ultra-high-energy cosmic rays (UHECR). HiRes observed a suppression in the UHECR spectrum at just the right energy, observing only 13 events with an energy above the threshold, while expecting 43 with no suppression. This was interpreted as the first observation of the GZK limit. Auger confirmed the flux suppression, but did not claim it to be the GZK limit: instead of the 30 events necessary to confirm the AGASA results, Auger saw only two, which are believed to be heavy-nuclei events. The flux suppression was previously brought into question when the AGASA experiment found no suppression in their spectrum. According to Alan Watson, former spokesperson for the Auger Collaboration, AGASA results have been shown to be incorrect, possibly due to the systematic shift in energy assignment.
In 2010 and the following years, both the Pierre Auger Observatory and HiRes confirmed again a flux suppression, in case of the Pierre Auger Observatory the effect is statistically significant at the level of 20 standard deviations.
After the flux suppression was established, a heated debate ensued whether cosmic rays that violate the GZK limit are protons. The Pierre Auger Observatory, the world's largest observatory, found with high statistical significance that ultra-high-energy cosmic rays are not purely protons, but a mixture of elements, which is getting heavier with increasing energy.
The Telescope Array Project, a joint effort from members of the HiRes and AGASA collaborations, agrees with the former HiRes result that these cosmic rays look like protons. The claim is based on data with lower statistical significance, however. The area covered by Telescope Array is about one third of the area covered by the Pierre Auger Observatory, and the latter has been running for a longer time.
The controversy was partially resolved in 2017, when a joint working group formed by members of both experiments presented a report at the 35th International Cosmic Ray Conference. According to the report, the raw experimental results are not in contradiction with each other. The different interpretations are mainly based on the use of different theoretical models and the fact that Telescope Array has not collected enough events yet to distinguish the pure-proton hypothesis from the mixed-nuclei hypothesis.
Extreme Universe Space Observatory on Japanese Experiment Module (JEM-EUSO)
EUSO, which was scheduled to fly on the International Space Station (ISS) in 2009, was designed to use the atmospheric-fluorescence technique to monitor a huge area and boost the statistics of UHECRs considerably. EUSO is to make a deep survey of UHECR-induced extensive air showers (EASs) from space, extending the measured energy spectrum well beyond the GZK cutoff. It is to search for the origin of UHECRs, determine the nature of the origin of UHECRs, make an all-sky survey of the arrival direction of UHECRs, and seek to open the astronomical window on the extreme-energy universe with neutrinos. The fate of the EUSO Observatory is still unclear, since NASA is considering early retirement of the ISS.
The Fermi Gamma-ray Space Telescope
Launched in June 2008, the Fermi Gamma-ray Space Telescope (formerly GLAST) will also provide data that will help resolve these inconsistencies.
With the Fermi Gamma-ray Space Telescope, one has the possibility of detecting gamma rays from the freshly accelerated cosmic-ray nuclei at their acceleration site (the source of the UHECRs).
UHECR protons accelerated (see also Centrifugal mechanism of acceleration) in astrophysical objects produce secondary electromagnetic cascades during propagation in the cosmic microwave and infrared backgrounds, of which the GZK process of pion production is one of the contributors. Such cascades can contribute between about 1% and 50% of the GeV–TeV diffuse photon flux measured by the EGRET experiment. The Fermi Gamma-ray Space Telescope may discover this flux.
See also
References
External links
Rutgers University experimental high energy physics HIRES research page
Pierre Auger Observatory page
Cosmic-ray.org
History of Cosmic Ray Research
Cosmic rays
Physical paradoxes
Energy
Special relativity
Astroparticle physics
Unsolved problems in physics
Unsolved problems in astronomy | Greisen–Zatsepin–Kuzmin limit | [
"Physics",
"Astronomy"
] | 2,192 | [
"Physical phenomena",
"Unsolved problems in astronomy",
"Physical quantities",
"Concepts in astronomy",
"Astroparticle physics",
"Unsolved problems in physics",
"Astrophysics",
"Special relativity",
"Energy (physics)",
"Energy",
"Radiation",
"Particle physics",
"Astronomical controversies",
... |
180,244 | https://en.wikipedia.org/wiki/Arena | An arena is a large enclosed platform, often circular or oval-shaped, designed to showcase theatre, musical performances, or sporting events. It is composed of a large open space surrounded on most or all sides by tiered seating for spectators, and may be covered by a roof. The key feature of an arena is that the event space is the lowest point, allowing maximum visibility. Arenas are usually designed to accommodate a multitude of spectators.
Background
The word derives from Latin harena, a particularly fine-grained sand that covered the floor of ancient arenas such as the Colosseum in Rome, Italy, to absorb blood.
The term arena is sometimes used as a synonym for a very large venue such as Pasadena's Rose Bowl, but such a facility is typically called a stadium. The use of one term over the other has mostly to do with the type of event. Football (be it association, rugby, gridiron, Australian rules, or Gaelic) is typically played in a stadium, while basketball, volleyball, handball, and ice hockey are typically played in an arena, although many of the larger arenas hold more spectators than do the stadiums of smaller colleges or high schools. There are exceptions. The home of the Duke University men's and women's basketball teams would qualify as an arena, but the facility is called Cameron Indoor Stadium. Domed stadiums, which, like arenas, are enclosed but have the larger playing surfaces and seating capacities found in stadiums, are generally not referred to as arenas in North America. There is also the sport of indoor American football (one variant of which is explicitly known as arena football), a variant of the outdoor game that is designed for the usual smaller playing surface of most arenas; variants of other traditionally outdoor sports, including box lacrosse as well as futsal and indoor soccer, also exist.
The term "arena" is also used loosely to refer to any event or type of event which either literally or metaphorically takes place in such a location, often with the specific intent of comparing an idea to a sporting event. Such examples of these would be terms such as "the arena of war", "the arena of love" or "the political arena".
Gallery
See also
Amphitheatre
Architectural structure
Field house
Ice hockey arena
List of nonbuilding structure types
List of indoor arenas by capacity
List of stadiums by capacity
References
External links
Music venues
Sports venues by type
Buildings and structures by type | Arena | [
"Engineering"
] | 486 | [
"Buildings and structures by type",
"Architecture"
] |
180,246 | https://en.wikipedia.org/wiki/Black%20box%20theater | A black box theater is a simple performance space, typically a square room with black walls and a flat floor. The simplicity of the space allows it to be used to create a variety of configurations of stage and audience interaction. The black box is a relatively recent innovation in theatre.
History
Black box theaters have their roots in the American avant-garde of the early 20th century. The black box theaters became popular and increasingly widespread in the 1960s as rehearsal spaces. Almost any large room can be transformed into a "black box" with the aid of paint or curtains, making black box theaters an easily accessible option for theater artists. Storefronts, church basements, and old trolley barns were some examples of the earliest versions of spaces transformed into black box theaters. Sets are simple and small and costs are lower, appealing to nonprofit and low-income artists or companies. The black box is also considered by many to be a place where more "pure" theatre can be explored, with the most human and least technical elements in focus.
The concept of a building designed for flexible staging techniques can be attributed to Swiss designer Adolphe Appia, circa 1921. The invention of such a stage instigated a half-century of innovations in the relationship between audience and performers. This idea would again be re-visited by Harley Granville Barker, using Appia's design as his basis. Barker would have ideas of directing productions in “a great white box,” which would see success in 1970. As time went on, black boxes were decided on instead as black provided the most neutral setting for productions. Antonin Artaud also had ideas of a stage of this kind. The first flexible stage in America was located in the home living room of actor and manager Gilmor Brown in Pasadena, California. While the domestic decor meant that Brown's stage was not a proper black box, the idea was still a revolutionary one. This venue, and two subsequent permutations, are known as the Playbox Theatre, and functioned as an experimental space for Brown's larger venue, the Pasadena Playhouse.
Use
Such spaces are easily built and maintained. Black box theaters are usually home to plays or other performances requiring very basic technical arrangements, such as limited set construction. Common floor plans include thrust stage, modified thrust stage, and theater in the round.
Universities and other theater training programs employ the black box theater because the space is versatile and easy to change. The black backdrop can encourage the audience to focus on the actors, furthering the benefits. Additionally, as the audience is now closer to the stage due to the lack of a proscenium, a more intimate atmosphere is able to be created. This intimate space may also serve to try and eliminate the implied mental distance between the audience and actors, while it still physically remains. Many theater training programs will have both a large proscenium theater, as well as a black box theater. Not only does this allow two productions to be mounted simultaneously, but they can also have a large extravagant production in the main stage while having a small experimental show in the black box.
Black box spaces are also popular at fringe theater festivals; due to their simple design and equipment they can be used for many performances each day. This simplicity also means that a black box theater can be adapted from other spaces, such as hotel conference rooms. This is common at the Edinburgh Festival Fringe where the larger venues will hire entire buildings and divide each room to be rented out to several theater companies. "The Black Box Theatre" in Oslo, Norway, and the Alvina Krause Studio at Northwestern University are theaters of this type.
Black box theaters also come with a handful of disadvantages. The open space may present "too many" options, leaving some at a loss for direction or inspiration. Lighting issues arise because the primary lighting is typically above the performance area, and during blackout scenes the close proximity of the audience means they can still see the transitions happening on stage.
Black box spaces also see success within the music industry. These spaces are known to be used to host vocal and instrumental performances, rehearsals, shows, and competitions.
Technical features
Most older black boxes were built like television studios, with a low pipe grid overhead. Newer black boxes typically feature catwalks or tension grids, the latter combining the flexibility of the pipe grid with the accessibility of a catwalk. They were designed to be able to be spaces that can be molded into different settings easily for multiple performances. Black box theaters accommodate smaller audiences with the goal of having more intimate experiences.
The interiors of most black box theatres are painted black, although that is not universal; a black box does not have to be black to be considered one. Black is most common because black paint is easily restored, leaving the space flexible for productions: a wall or floor can be painted other colours for a set and then returned to the neutral black with little time or expense. The absence of colour not only gives the audience a sense of "anyplace" (and thus allows flexibility from play to play or from scene to scene), but having the non-stage areas black also allows an innovative lighting design to shine through. The architecture of black box theaters typically allows for easy modifications and decorations, though at the expense of time and money.
See also
Dogville (2003) and Manderlay (2005), two Lars von Trier films akin to black box theater
References
External links
Stagecraft
Parts of a theatre | Black box theater | [
"Technology"
] | 1,108 | [
"Parts of a theatre",
"Components"
] |
180,249 | https://en.wikipedia.org/wiki/Snub%20cube | In geometry, the snub cube, or snub cuboctahedron, is an Archimedean solid with 38 faces: 6 squares and 32 equilateral triangles. It has 60 edges and 24 vertices. Kepler first named it in Latin as cubus simus in 1619 in his Harmonices Mundi. H. S. M. Coxeter, noting it could be derived equally from the octahedron as the cube, called it snub cuboctahedron, with a vertical extended Schläfli symbol , and representing an alternation of a truncated cuboctahedron, which has Schläfli symbol .
Construction
The snub cube can be generated by taking the six faces of the cube, pulling them outward so they no longer touch, then giving them each a small rotation on their centers (all clockwise or all counter-clockwise) until the spaces between can be filled with equilateral triangles.
The snub cube may also be constructed from a rhombicuboctahedron. Twisting the six square faces that correspond to the faces of a cube causes the triangles to rotate in opposite directions, and turns the remaining square faces into skewed quadrilaterals, each of which can be filled with two equilateral triangles.
The snub cube can also be derived from the truncated cuboctahedron by the process of alternation. 24 vertices of the truncated cuboctahedron form a polyhedron topologically equivalent to the snub cube; the other 24 form its mirror-image. The resulting polyhedron is vertex-transitive but not uniform.
Cartesian coordinates
Cartesian coordinates for the vertices of a snub cube are all the even permutations of
with an even number of plus signs, along with all the odd permutations with an odd number of plus signs, where is the tribonacci constant. Taking the even permutations with an odd number of plus signs, and the odd permutations with an even number of plus signs, gives a different snub cube, the mirror image. Taking them together yields the compound of two snub cubes.
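A short computational sketch of this construction. The coordinate triple (±1, ±1/t, ±t) used below is an assumption, since the exact values are not reproduced in the text above; t is the tribonacci constant mentioned there. The script applies the permutation-and-sign rule quoted above and then counts nearest-neighbour pairs, which should give the 24 vertices and 60 edges stated in the introduction.

```python
from itertools import permutations, product
import math

# Tribonacci constant: the real root of t^3 = t^2 + t + 1
t = 1.8392867552141612
base = (1.0, 1.0 / t, t)   # assumed coordinate triple (1, 1/t, t)

def perm_parity(p):
    # parity of a permutation of (0, 1, 2): 0 = even, 1 = odd
    return sum(p[i] > p[j] for i in range(3) for j in range(i + 1, 3)) % 2

verts = []
for p in permutations(range(3)):
    for signs in product((1, -1), repeat=3):
        # even permutations take an even number of plus signs,
        # odd permutations an odd number (the rule described above)
        if perm_parity(p) == signs.count(1) % 2:
            verts.append(tuple(s * base[i] for s, i in zip(signs, p)))

print(len(verts))  # 24 vertices

# Edges are the vertex pairs at the minimum pairwise distance.
dists = [math.dist(a, b) for i, a in enumerate(verts) for b in verts[i + 1:]]
d_min = min(dists)
print(sum(abs(d - d_min) < 1e-9 for d in dists))  # 60 edges
```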
This snub cube has edges of length , a number which satisfies the equation
and can be written as
To get a snub cube with unit edge length, divide all the coordinates above by the value α given above.
Properties
For a snub cube with edge length , its surface area and volume are:
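The surface-area coefficient, at least, follows directly from the face counts given in the introduction (6 squares and 32 equilateral triangles of edge length a):

```latex
A = 6a^2 + 32\cdot\frac{\sqrt{3}}{4}a^2 = \left(6 + 8\sqrt{3}\right)a^2 \approx 19.856\,a^2
```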
The snub cube is an Archimedean solid, meaning it is a highly symmetric, semi-regular polyhedron in which two or more different regular polygonal faces meet at each vertex. It is chiral, meaning it occurs in two distinct forms that are mirror images of each other; accordingly, the snub cube has rotational octahedral symmetry. The polygonal faces that meet at every vertex are four equilateral triangles and one square, so the vertex figure of the snub cube is 3.3.3.3.4. The dual polyhedron of the snub cube is the pentagonal icositetrahedron, a Catalan solid.
Graph
The skeleton of a snub cube can be represented as a graph with 24 vertices and 60 edges, an Archimedean graph.
References
External links
The Uniform Polyhedra
Virtual Reality Polyhedra The Encyclopedia of Polyhedra
Editable printable net of a Snub Cube with interactive 3D view
Chiral polyhedra
Uniform polyhedra
Archimedean solids
Snub tilings | Snub cube | [
"Physics"
] | 729 | [
"Symmetry",
"Uniform polytopes",
"Snub tilings",
"Tessellation",
"Uniform polyhedra"
] |
180,279 | https://en.wikipedia.org/wiki/Ounce | The ounce () is any of several different units of mass, weight, or volume and is derived almost unchanged from the , an Ancient Roman unit of measurement.
The avoirdupois ounce (exactly 28.349523125 g) is one-sixteenth of an avoirdupois pound; this is the United States customary and British imperial ounce. It is primarily used in the United States.
Although the avoirdupois ounce is the mass measure used for most purposes, the 'troy ounce' of exactly 31.1034768 g is used instead for the mass of precious metals such as gold, silver, platinum, palladium, rhodium, etc.
The term 'ounce' is also used in other contexts:
The ounce-force is a measure of force (see below).
The fluid ounce is a measure of volume.
Historically, a variety of different ounces measuring mass or volume were used in different jurisdictions by different trades and at different times in history.
Etymology
Ounce derives from the Ancient Roman (meaning: a twelfth), a unit in the Ancient Roman units of measurement weighing about 27.4 grams or 96.7% of an avoirdupois ounce, that was one-twelfth () of the Roman pound (). This in turn comes from Latin ('one'), and thus originally meant simply 'unit'. The term uncia was borrowed twice: first into Old English as or from an unattested Vulgar Latin form with ts for c before i (palatalization), which survives in modern English as inch, and a second time into Middle English through Anglo-Norman and Middle French (), yielding English ounce. The abbreviation oz came later from the Italian cognate , pronounced (now , pronounced ).
Definitions
Historically, in different parts of the world, at different points in time, and for different applications, the ounce (or its translation) has referred to broadly similar but still slightly different standards of mass.
Currently in use
International avoirdupois ounce
The international avoirdupois ounce (abbreviated oz) is defined as exactly 28.349523125 g under the international yard and pound agreement of 1959, signed by the United States and countries of the Commonwealth of Nations.
In the avoirdupois system, sixteen ounces make up an avoirdupois pound, and the avoirdupois pound is defined as 7000 grains; one avoirdupois ounce is therefore equal to 437.5 grains.
The ounce is still a standard unit in the United States. In the United Kingdom it ceased to be an independent unit of measure in 2000, but may still be seen as a general indicator of portion sizes in burger and steak restaurants.
International troy ounce
A troy ounce (abbreviated oz t) is equal to 480 grains. Consequently, the international troy ounce is equal to exactly 31.1034768 grams. There are 12 troy ounces in the now obsolete troy pound.
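Because both the avoirdupois and troy ounces are defined in grains, the two gram values can be cross-checked:

```latex
1\ \mathrm{grain} = \frac{28.349523125\ \mathrm{g}}{437.5} = 64.79891\ \mathrm{mg},
\qquad
480 \times 64.79891\ \mathrm{mg} = 31.1034768\ \mathrm{g}
```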
Today, the troy ounce is used only to express the mass of precious metals such as gold, platinum, palladium, rhodium or silver. Bullion coins are the most common products produced and marketed in troy ounces, but precious metal bars also exist in gram and kilogram (kg) sizes. (A kilogram bullion bar contains about 32.15 troy ounces.)
For historical measurement of gold,
a fine ounce is a troy ounce of pure gold content in a gold bar, computed as fineness multiplied by gross weight
a standard ounce is a troy ounce of 22 carat gold, 91.66% pure (an 11 to 1 proportion of gold to alloy material)
Metric ounces
Some countries have redefined their ounces in the metric system. For example, the German apothecaries' ounce of 30 grams is very close to the previously widespread Nuremberg ounce, but the divisions and multiples come out in metric.
In 1820, the Dutch redefined their ounce (in Dutch, ons) as 100 grams. In 1937 the IJkwet of the Netherlands officially abolished the term, but it is still commonly used.
Dutch amendments to the metric system, such as the ons of 100 grams, have been inherited, adopted, and taught in Indonesia beginning in elementary school. It is also listed as standard usage in Indonesia's national dictionary, the Kamus Besar Bahasa Indonesia, and the government's official elementary-school curriculum.
Historical
Apothecaries' ounce
The apothecaries' ounce (abbreviated ℥), equivalent to the troy ounce, was formerly used by apothecaries and is thus obsolete.
Maria Theresa ounce
"Maria Theresa ounce" was once introduced in Ethiopia and some European countries, which was equal to the weight of one Maria Theresa thaler, or 28.0668 g. Both the weight and the value are the definition of one birr, still in use in present-day Ethiopia and formerly in Eritrea.
Spanish ounce
The Spanish pound () was 460 g. The Spanish ounce (Spanish ) was of a pound, i.e. 28.75 g. It was further subdivided into 16 (each 1.8 grams). For pharmaceutical use, the Greek was used, subdividing the Spanish ounce into 8 (3.6 grams), due to being equivalent to of an avoirdupois ounce. In either case, it could be further subdivided into grains, each one 49.9 milligrams.
Tower ounce
The Tower ounce of was a fraction of the tower pound used in the English mints, the principal one being in the Tower of London. It dates back to the Anglo-Saxon coinage weight standard. It was abolished in favour of the Troy ounce by Henry VIII in 1527.
Ounce-force
An ounce-force is 1⁄16 of a pound-force, or about 0.278 newtons. It is defined as the force exerted by a mass of one avoirdupois ounce under standard gravity (at the surface of the earth, its weight).
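A quick check of that figure under standard gravity:

```latex
1\ \mathrm{ozf} = 0.028349523125\ \mathrm{kg} \times 9.80665\ \mathrm{m/s^2} \approx 0.278\ \mathrm{N}
```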
The "ounce" in "ounce-force" is equivalent to an avoirdupois ounce; ounce-force is a measurement of force using avoirdupois ounces. It is customarily not identified or differentiated. The term has limited use in engineering calculations to simplify unit conversions between mass, force, and acceleration systems of calculations.
Fluid ounce
A fluid ounce (abbreviated fl oz, fl. oz. or oz. fl.) is a unit of volume. An imperial fluid ounce is defined in British law as exactly 28.4130625 millilitres, while a US customary fluid ounce is exactly 29.5735295625 mL, and a US food labelling fluid ounce is 30 mL. The fluid ounce is sometimes referred to simply as an "ounce" in contexts where its use is implicit, such as bartending.
Other uses
Fabric weight
Ounces are also used to express the "weight", or more accurately the areal density, of a textile fabric in North America, Asia, or the UK, as in "16 oz denim". The number refers to the weight in ounces of a given amount of fabric, either a yard of a given width, or a square yard, where the depth of the fabric is a fabric-specific constant.
Copper layer thickness of a printed circuit board
The most common unit of measure for the copper thickness on a printed circuit board (PCB) is ounces (oz), as in mass. It is the resulting thickness when the mass of copper is pressed flat and spread evenly over a one-square-foot area. 1 oz will roughly equal 34.7 μm.
Notes and references
External links
Dictionary of Units: Ounce
Customary units of measurement in the United States
Imperial units
Units of mass | Ounce | [
"Physics",
"Mathematics"
] | 1,560 | [
"Matter",
"Quantity",
"Units of mass",
"Mass",
"Units of measurement"
] |
180,339 | https://en.wikipedia.org/wiki/Ostinato | In music, an ostinato (; derived from the Italian word for stubborn, compare English obstinate) is a motif or phrase that persistently repeats in the same musical voice, frequently in the same pitch. Well-known ostinato-based pieces include classical compositions such as Ravel's Boléro and the Carol of the Bells, and popular songs such as John Lennon’s “Mind Games”(1973), Donna Summer and Giorgio Moroder's "I Feel Love" (1977), Henry Mancini's theme from Peter Gunn (1959), The Who's "Baba O'Riley" (1971), The Verve's "Bitter Sweet Symphony" (1997), and Flo Rida's "Low" (2007).
Both ostinatos and ostinati are accepted English plural forms, the latter reflecting the word's Italian etymology.
The repeating idea may be a rhythmic pattern, part of a tune, or a complete melody in itself. Strictly speaking, ostinati should have exact repetition, but in common usage, the term covers repetition with variation and development, such as the alteration of an ostinato line to fit changing harmonies or keys.
Within the context of European classical and film music, Claudia Gorbman defines an ostinato as a repeated melodic or rhythmic figure that propels scenes that lack dynamic visual action.
Ostinati play an important part in improvised music (rock and jazz), in which they are often referred to as riffs or vamps. A "favorite technique of contemporary jazz writers", ostinati are often used in modal and Latin jazz and traditional African music including Gnawa music.
The term ostinato essentially has the same meaning as the medieval Latin word pes, the word ground as applied to classical music, and the word riff in contemporary popular music.
European classical music
Within the domain of European classical music traditions, ostinati are used in 20th-century music to stabilize groups of pitches, as in the Introduction and "Augurs of Spring" from Stravinsky's The Rite of Spring. A famous type of ostinato, called the Rossini crescendo, owes its name to a crescendo that underlies a persistent musical pattern, which usually culminates in a solo vocal cadenza. This style was emulated by other bel canto composers, especially Vincenzo Bellini, and later by Wagner (in pure instrumental terms, discarding the closing vocal cadenza).
Applicable in homophonic and contrapuntal textures, they are "repetitive rhythmic-harmonic schemes", more familiar as accompanimental melodies, or purely rhythmic. The technique's appeal to composers from Debussy to avant-garde composers until at least the 1970s "... lies in part in the need for unity created by the virtual abandonment of functional chord progressions to shape phrases and define tonality". Similarly, in modal music, "... relentless, repetitive character help to establish and confirm the modal center". Their popularity may also be justified by their ease as well as range of use, though, "... ostinato must be employed judiciously, as its overuse can quickly lead to monotony".
Medieval
Ostinato patterns have been present in European music from the Middle Ages onwards. In the famous English canon "Sumer Is Icumen In", the main vocal lines are underpinned by an ostinato pattern, known as a pes:
Later in the medieval era, Guillaume Dufay's 15th-century chanson Resvelons Nous features a similarly constructed ostinato pattern, but this time 5 bars long. Over this, the main melodic line moves freely, varying the phrase-lengths, while being "to some extent predetermined by the repeating pattern of the canon in the lower two voices."
Ground bass: Late Renaissance and Baroque
Ground bass or basso ostinato (obstinate bass) is a type of variation form in which a bass line, or harmonic pattern (see Chaconne; also common in Elizabethan England as Grounde) is repeated as the basis of a piece underneath variations. Aaron Copland describes basso ostinato as "... the easiest to recognize" of the variation forms wherein, "... a long phrase—either an accompanimental figure or an actual melody—is repeated over and over again in the bass part, while the upper parts proceed normally [with variation]". However, he cautions, "it might more properly be termed a musical device than a musical form."
One striking ostinato instrumental piece of the late Renaissance period is "The Bells", a piece for virginals by William Byrd. Here the ostinato (or 'ground') consists of just two notes:
In Italy, during the seventeenth century, Claudio Monteverdi composed many pieces using ostinato patterns in his operas and sacred works. One of these was his 1650 version of "Laetatus sum", an imposing setting of Psalm 122 that pits a four-note "ostinato of unquenchable energy." against both voices and instruments:
Later in the same century, Henry Purcell became famous for his skilful deployment of ground bass patterns. His most famous ostinato is the descending chromatic ground bass that underpins the aria "When I am laid in earth" ("Dido's Lament") at the end of his opera Dido and Aeneas. While the use of a descending chromatic scale to express pathos was fairly common at the end of the seventeenth century, Richard Taruskin pointed out that Purcell shows a fresh approach to this musical trope: "Altogether unconventional and characteristic, however, is the interpolation of an additional cadential measure into the stereotyped ground, increasing its length from a routine four to a haunting five bars, against which the vocal line, with its despondent refrain ("Remember me!"), is deployed with marked asymmetry. That, in addition to Purcell's distinctively dissonant, suspension-saturated harmony, enhanced by additional chromatic descents during the final ritornello and by many deceptive cadences, makes this little aria an unforgettably poignant embodiment of heartache." See also: Lament bass.
However, this is not the only ostinato pattern that Purcell uses in the opera. Dido's opening aria "Ah, Belinda" is a further demonstration of Purcell's technical mastery: the phrases of the vocal line do not always coincide with the four-bar ground:
"Purcell's compositions over a ground vary in their working out, and the repetition never becomes a restriction." Purcell's instrumental music also featured ground patterns. A particularly fine and complex example is his Fantasia upon a Ground for three violins and continuo:
The intervals in the above pattern are found in many works of the Baroque Period. Pachelbel's Canon also uses a similar sequence of notes in the bass part:
Two pieces by J. S. Bach are particularly striking for their use of an ostinato bass: the Crucifixus from his Mass in B minor and the Passacaglia in C minor for organ, which has a ground rich in melodic intervals. The first variation that Bach builds over this ostinato consists of a gently syncopated motif in the upper voices. This characteristic rhythmic pattern continues in the second variation, but with some engaging harmonic subtleties, especially in the second bar, where an unexpected chord creates a passing implication of a related key. In common with other Passacaglias of the era, the ostinato is not simply confined to the bass, but rises to the uppermost part later in the piece.
Late eighteenth and nineteenth centuries
Ostinatos feature in many works of the late 18th and early 19th centuries. Mozart uses an ostinato phrase throughout the big scene that ends Act 2 of the Marriage of Figaro, to convey a sense of suspense as the jealous Count Almaviva tries in vain to incriminate the Countess, his wife, and Figaro, his butler, for plotting behind his back.
In the energetic Scherzo of Beethoven’s late C sharp minor Quartet, Op. 131, there is a harmonically static passage with "the repetitiveness of a nursery rhyme" that consists of an ostinato shared between viola and cello supporting a melody in octaves in the first and second violins. Beethoven reverses this relationship a few bars later, with the melody in the viola and cello and the ostinato shared between the violins:
Both the first and third acts of Wagner's final opera Parsifal feature a passage accompanying a scene where a band of knights solemnly processes from the depths of the forest to the hall of the Grail. The "Transformation music" that supports this change of scene is dominated by the iterated tolling of four bells. Brahms used ostinato patterns both in the finale of his Fourth Symphony and in the closing section of his Variations on a Theme by Haydn:
Twentieth century
Debussy featured an ostinato pattern throughout his Piano Prelude "Des pas sur la neige". Here, the ostinato pattern stays in the middle register of the piano – it is never used as a bass. "Remark that the footfall ostinato remains nearly throughout on the same notes, at the same pitch level... this piece is an appeal to the basic loneliness of all human beings, oft-forgotten perhaps, but, like the ostinato, forming a basic undercurrent of our history." Of all the major classical composers of the 20th century, Stravinsky is possibly the one most associated with the practice of ostinato. In conversation with the composer, his friend and colleague Robert Craft remarked: "Your music always has an element of repetition, of ostinato. What is the function of ostinato?" Stravinsky replied: "It is static – that is, anti-development; and sometimes we need a contradiction to development." Stravinsky was particularly skilled at using ostinatos to confound rather than confirm rhythmic expectations. In the first of his Three Pieces for String Quartet, Stravinsky sets up three repeated patterns, which overlap one another and never coincide. "Here a rigid pattern of (3+2+2/4) bars is laid over a strictly recurring 23-beat tune (the bars being marked by a cello ostinato), so that their changing relationship is governed primarily by the pre-compositional scheme." "The rhythmical current running through the music is what binds together these curious mosaic-like pieces."
A subtler metrical conflict can be found in the final section of Stravinsky's Symphony of Psalms. The choir sing a melody in triple time, while the bass instruments in the orchestra play a 4-beat ostinato against this. "This is built up over an ostinato bass (harp, two pianos and timpani) moving in fourths like a pendulum."
Sub-Saharan African music
Counter-metric structure
Many instruments south of the Sahara Desert play ostinato melodies. These include lamellophones such as the mbira, as well as xylophones like the balafon, the bikutsi, and the gyil. Ostinato figures are also played on string instruments such as the kora, gankoqui bell ensembles, and pitched drum ensembles. Often, African ostinatos contain offbeats or cross-beats that contradict the metric structure. Other African ostinatos generate complete cross-rhythms by sounding both the main beats and cross-beats. In the following example, a gyil sounds the three-against-two cross-rhythm (hemiola). The left hand (lower notes) sounds the two main beats, while the right hand (upper notes) sounds the three cross-beats.
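The arithmetic behind that relationship can be made explicit: over a cycle of six equal pulses (the least common multiple of 2 and 3), the two main beats fall on every third pulse and the three cross-beats on every second pulse. The following Python sketch only illustrates that grid; it is not a transcription of any particular gyil piece.

```python
# Illustrative sketch of a 3:2 cross-rhythm (hemiola) on a grid of six pulses.
# The "main" layer marks the two main beats; the "cross" layer marks the three cross-beats.
CYCLE = 6  # least common multiple of 2 and 3

def layer(strokes_per_cycle):
    """Return a string with 'x' on struck pulses and '.' elsewhere."""
    interval = CYCLE // strokes_per_cycle
    return "".join("x" if pulse % interval == 0 else "." for pulse in range(CYCLE))

if __name__ == "__main__":
    print("cross-beats (3):", layer(3))  # x.x.x.
    print("main beats  (2):", layer(2))  # x..x..
```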
African harmonic progressions
Popular dance bands in West Africa and the Congo region feature ostinato-playing guitars. The African guitar parts are drawn from a variety of sources, including the indigenous mbira, as well as foreign influences such as James Brown-type funk riffs. However, the foreign influences are interpreted through a distinctly African ostinato sensibility. African guitar styles began with Congolese bands doing Cuban cover songs. The Cuban guajeo had a quality that was both familiar and exotic to the African musicians. Gradually, various regional guitar styles emerged, as indigenous influences became increasingly dominant within these Africanized guajeos.
As Moore states, "One could say that I – IV – V – IV [chord progressions] is to African music what the 12-bar blues is to North American music." Such progressions seem superficially to follow the conventions of Western music theory. However, performers of African popular music do not perceive these progressions in the same way. Harmonic progressions which move from the tonic to the subdominant (as they are known in European music) have been used in traditional sub-Saharan African harmony for hundreds of years. Their elaborations follow all the conventions of traditional African harmonic principles. Gerhard Kubik concludes:
The harmonic cycle of C–F–G–F [I–IV–V–IV] prominent in Congo/Zaire popular music simply cannot be defined as a progression from tonic to subdominant to dominant and back to subdominant (on which it ends) because in the performer's appreciation they are of equal status, and not in any hierarchical order as in Western music—(Kubik 1999).
Afro-Cuban guajeo
A guajeo is a typical Cuban ostinato melody, most often consisting of arpeggiated chords in syncopated patterns. The guajeo is a hybrid of the African and European ostinato. The guajeo was first played as accompaniment on the tres in the folkloric changüí and son. The term guajeo is often used to mean specific ostinato patterns played by a tres, piano, an instrument of the violin family, or saxophones. The guajeo is a fundamental component of modern-day salsa, and Latin jazz. The following example shows a basic guajeo pattern.
The guajeo is a seamless Afro-Euro ostinato hybrid, which has had a major influence upon jazz, R&B, rock 'n' roll and popular music in general. The Beatles' "I Feel Fine" guitar riff is guajeo-like.
Riff
In various popular music styles, riff refers to a brief, relaxed phrase repeated over changing melodies. It may serve as a refrain or melodic figure, often played by the rhythm section instruments or solo instruments that form the basis or accompaniment of a musical composition. Though they are most often found in rock music, heavy metal music, Latin, funk and jazz, classical music is also sometimes based on a simple riff, such as Ravel's Boléro. Riffs can be as simple as a tenor saxophone honking a simple, catchy rhythmic figure, or as complex as the riff-based variations in the head arrangements played by the Count Basie Orchestra.
David Brackett (1999) defines riffs as "short melodic phrases", while Richard Middleton (1999) defines them as "short rhythmic, melodic, or harmonic figures repeated to form a structural framework". Rikky Rooksby states: "A riff is a short, repeated, memorable musical phrase, often pitched low on the guitar, which focuses much of the energy and excitement of a rock song."
In jazz and R&B, riffs are often used as the starting point for longer compositions. The riff from Charlie Parker's bebop number "Now's the Time" (1945) re-emerged four years later as the R&B dance hit "The Hucklebuck". The verse of "The Hucklebuck"—another riff—was "borrowed" from the Artie Matthews composition "Weary Blues". Glenn Miller's "In the Mood" had an earlier life as Wingy Manone's "Tar Paper Stomp". All these songs use twelve bar blues riffs, and most of these riffs probably precede the examples given.
Neither the term 'riff' nor 'lick' is used in classical music. Instead, individual musical phrases used as the basis of classical music pieces are called ostinatos or simply phrases. Contemporary jazz writers also use riff- or lick-like ostinatos in modal music. Latin jazz often uses guajeo-based riffs.
Vamp
In music, a vamp is a repeating musical figure, section, or accompaniment. Vamps are usually harmonically sparse: A vamp may consist of a single chord or a sequence of chords played in a repeated rhythm. The term frequently appeared in the instruction 'Vamp till ready' on sheet music for popular songs in the 1930s and 1940s, indicating the accompanist should repeat the musical phrase until the vocalist was ready. Vamps are generally symmetrical, self-contained, and open to variation. They are used in blues, jazz, gospel, soul, and musical theater. Vamps are also found in rock, funk, reggae, R&B, pop, and country. The equivalent in classical music is an ostinato, in hip hop and electronic music the loop, and in rock music the riff.
The slang term vamp comes from the Middle English word vampe (sock), from Old French avanpie, equivalent to Modern French avant-pied, literally before-foot.
Many vamp-oriented songwriters begin the creative process by attempting to evoke a mood or feeling while riffing freely on an instrument or scat singing. Many well known artists primarily build songs with a vamp/riff/ostinato based approach—including John Lee Hooker ("Boogie Chillen", "House Rent Boogie"), Bo Diddley ("Hey Bo Diddley", "Who Do You Love?"), Jimmy Page ("Ramble On", "Bron Yr Aur"), Nine Inch Nails ("Closer"), and Beck ("Loser").
Classic examples of vamps in jazz include "A Night in Tunisia", "Take Five", "A Love Supreme", "Maiden Voyage", and "Cantaloupe Island". Rock examples include the long jam at the ends of "Loose Change" by Neil Young and Crazy Horse and "Sooner or Later" by King's X.
Jazz, fusion, and Latin jazz
In jazz, fusion, and related genres, a background vamp provides a performer with a harmonic framework supporting improvisation. In Latin jazz guajeos fulfill the role of piano vamp. A vamp at the beginning of a jazz tune may act as a springboard to the main tune; a vamp at the end of a song is often called a tag.
Examples
"Take Five" begins with a repeated, syncopated figure in time, which pianist Dave Brubeck plays throughout the song (except for Joe Morello's drum solo and a variation on the chords in the middle section).
The music from Miles Davis's modal period (1958–1963) was based on improvising songs with a small number of chords. The jazz standard "So What" uses a vamp in the two-note "Sooooo what?" figure, regularly played by the piano and the trumpet throughout. Jazz scholar Barry Kernfeld calls this music vamp music.
Examples include the outros to George Benson's "Body Talk" and "Plum", and the solo changes to "Breezin'". The following songs are dominated by vamps: John Coltrane, Kenny Burrell, and Grant Green's versions of "My Favorite Things", Herbie Hancock's "Watermelon Man" and "Chameleon", Wes Montgomery's "Bumpin' on Sunset", and Larry Carlton's "Room 335".
The Afro-Cuban vamp style known as guajeo is used in the bebop/Latin jazz standard "A Night in Tunisia". Depending upon the musician, a repeating figure in "A Night in Tunisia" could be called an ostinato, guajeo, riff, or vamp. The Cuban-jazz hybrid spans the disciplines that encompass all these terms.
Gospel, soul, and funk
In gospel and soul music, the band often vamps on a simple ostinato groove at the end of a song, usually over a single chord. In soul music, the end of recorded songs often contains a display of vocal effects—such as rapid scales, arpeggios, and improvised passages. For recordings, sound engineers gradually fade out the vamp section at the end of a song, to transition to the next track on the album. Salsoul singers such as Loleatta Holloway have become notable for their vocal improvisations at the end of songs, and they are sampled and used in other songs. Andrae Crouch extended the use of vamps in gospel, introducing chain vamps (one vamp after the other, each successive vamp drawn from the first).
1970s-era funk music often takes a short one- or two-bar musical figure based on a single chord, the kind of figure one would consider an introduction vamp in jazz or soul music, and then uses this vamp as the basis of the entire song ("Funky Drummer" by James Brown, for example). Jazz, blues, and rock are almost always based on chord progressions (a sequence of changing chords), and they use the changing harmony to build tension and sustain listener interest. Unlike these music genres, funk is based on the rhythmic groove of the percussion, rhythm section instruments, and a deep electric bass line, usually all over a single chord. "In funk, harmony is often second to the 'lock,' the linking of contrapuntal parts that are played on guitar, bass, and drums in the repeating vamp."
Examples include Stevie Wonder's vamp-based "Superstition" and Little Johnny Taylor's "Part Time Love", which features an extended improvisation over a two-chord vamp.
Musical theater
In musical theater, a vamp, or intro, is the few bars (one to eight) of music without lyrics that begin a printed copy of a song. The orchestra may repeat the vamp or other accompaniment during dialogue or stage business, as accompaniment for onstage transitions of indeterminate length. The score provides a one- or two-bar vamp figure marked "Vamp till cue", to be repeated until cued onward by the conductor. The vamp gives the onstage singers time to prepare for the song or the next verse, without requiring the music to pause. Once the vamp section is over, the music continues to the next section.
The vamp may be written by the composer of the song, a copyist employed by the publisher, or the arranger for the vocalist. The vamp serves three main purposes: it provides the key, establishes the tempo, and provides emotional context. The vamp may be as short as a bell tone or a sting (a harmonized bell tone with stress on the starting note), or several measures long. The rideout is the transitional music that begins on the downbeat of the last word of the song and is usually two to four bars long, though it may be as short as a sting or as long as a Roxy Rideout.
Indian classical music
In Indian classical music, during Tabla or Pakhawaj solo performances and Kathak dance accompaniments, a conceptually similar melodic pattern known as the Lehara (sometimes spelled Lehra) or Nagma is played repeatedly throughout the performance. This melodic pattern is set to the number of beats in a rhythmic cycle (Tala or Taal) being performed and may be based on one or a blend of multiple Ragas.
The basic idea of the lehara is to provide a steady melodious framework and keep the time-cycle for rhythmic improvisations. It serves as an auditory workbench not only for the soloist but also for the audience to appreciate the ingenuity of the improvisations and thus the merits of the overall performance. In Indian Classical Music, the concept of 'sam' (pronounced as 'sum') carries paramount importance. The sam is the target unison beat (and almost always the first beat) of any rhythmic cycle. The second most important beat is the Khali, which is a complement of the sam. Besides these two prominent beats, there are other beats of emphasis in any given taal, which mark the 'khands' (divisions) of the taal. For example, 'Roopak' or 'Rupak' taal, a 7-beat rhythmic cycle, is divided 3–2–2, further implying that the 1st, 4th, and 6th beats are the prominent beats in that taal. Therefore, it is customary, but not essential, to align the lehara according to the divisions of the Taal. This is done with a view to emphasizing those beats that mark the divisions of the Taal.
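The relation between a taal's divisions and its prominent beats is simple cumulative arithmetic, sketched below in Python; the helper name is made up for illustration, and the second call is just an additional worked example using the 16-beat Teentaal divided 4–4–4–4.

```python
# Derive the prominent (division-starting) beats of a taal from its khand structure.
def prominent_beats(divisions):
    """Given divisions like (3, 2, 2) for Roopak taal, return the 1-based beats
    on which each division begins."""
    beats, position = [], 1
    for khand in divisions:
        beats.append(position)
        position += khand
    return beats

print(prominent_beats((3, 2, 2)))     # [1, 4, 6] for the 7-beat Roopak/Rupak taal
print(prominent_beats((4, 4, 4, 4)))  # [1, 5, 9, 13] for the 16-beat Teentaal
```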
The lehara can be played on a variety of instruments, including the sarangi, harmonium, sitar, sarod, flute and others. The playing of the lehara is relatively free from the numerous rules and constraints of Raga Sangeet, which are upheld and honoured in the tradition of Indian Classical Music. The lehara may be interspersed with short and occasional improvisations built around the basic melody. It is also permissible to switch between two or more disparate melodies during the course of the performance. It is essential that the lehara be played with the highest precision in Laya (Tempo) and Swara control, which requires years of specialist training (Taalim) and practice (Riyaaz). It is considered a hallmark of excellence to play lehara alongside a recognised Tabla or Pakhawaj virtuoso as it is a difficult task to keep a steady pulse while the percussionist is improvising or playing difficult compositions in counterpoint. While there may be scores of individually talented instrumentalists, there are very few who are capable of playing the lehra for a Tabla / Pakhawaj solo performance.
See also
Canto Ostinato
Chaconne
Chanking
Fill (music)
Folia
Glossary of musical terminology
Hook (music)
Imitation (music)
Leitmotif
Music sequencer
O Fortuna
Passacaglia
Pedal point
Sequence (music)
Traditional sub-Saharan African harmony
Minimal music
References
Further reading
External links
Jazz Guitar Riffs
Explanation with musical examples.
Accompaniment
Italian words and phrases
Bass (sound)
Musical analysis
Musical terminology
Repetition (music)
Rhythm and meter
Riffs
Tonality | Ostinato | [
"Physics"
] | 5,524 | [
"Spacetime",
"Rhythm and meter",
"Physical quantities",
"Time"
] |
180,437 | https://en.wikipedia.org/wiki/Pavel%20Chekov | Pavel Andreievich Chekov is a fictional character in the Star Trek universe.
Walter Koenig portrayed Chekov in the second and third seasons of the original Star Trek series and the first seven Star Trek films. Anton Yelchin portrayed the character in the 2009 Star Trek reboot film and two sequels, Star Trek Into Darkness and Star Trek Beyond. Both Koenig and Yelchin were born to Russian parents, but grew up in the United States, and both affected Russian accents for their roles.
Origin
Star Trek creator Gene Roddenberry wanted to include a younger cast member to appeal to teenage audiences. With a second season of Star Trek to be produced, Roddenberry interviewed Walter Koenig on the recommendation of director Joseph Pevney. After casting Koenig, Roddenberry wrote a letter to Mikhail Zimyanin, editor of Pravda, informing him of the introduction of a Russian character, and an NBC press release announcing the character at the time stated that it was in response to a Pravda article condemning the show for having no Russian characters. The existence of such a Pravda article is disputed. Roddenberry also acknowledged that the character was in response to the popularity of The Monkees' Davy Jones. Koenig always denied the "Russian origin" story and affirmed that his character was added in response to the popularity of The Monkees, and the character's hairstyle and appearance are a direct reference to this. Roddenberry had previously mentioned, in a memo to his casting director, a desire to have someone reminiscent of one of The Beatles or Monkees on the show.
Koenig's modest height, eyes, thick eyebrows, boyish face, and smile were all strikingly evocative of the lone British "Monkee" who captivated millions of pre-teen girls. Early attempts were made to style Koenig's brown hair similar to that of Jones too. Wigs were used in a couple of early episodes but not in others, which reveals a stage of experimentation to attentive viewers. Eventually, a final look for Koenig's hair length and fullness was reached and used consistently thereafter.
After Paramount Television signed Koenig to a contract because of the number of fan letters he received as Chekov, Roddenberry wrote in another memo: "Kirk and Spock and the others actually seem rather 'middle aged' to the large youthful segment of our audience. We badly need a young man aboard the Enterprise – we need youthful attitudes and perspectives. Chekov can be used potently here". In actuality, Koenig is only five years younger than co-stars Leonard Nimoy and William Shatner.
The episode "Amok Time", which was the first episode broadcast during the second season, was Chekov's first television appearance ("Catspaw", the first episode shot with the Chekov character, would be broadcast a month later to roughly coincide with Halloween). Because of budgetary constraints, the character did not appear in the animated Star Trek.
Character biography
Pavel Andreievich Chekov was born in 2245 and is a young and naïve ensign who first appears on-screen in the original series’ second season as the Enterprise's navigator. According to Roddenberry, he is "an extraordinarily capable young man—almost Spock's equal in some areas. An honor graduate of the Space Academy." Chekov also substitutes for Mr. Spock at the science officer station when necessary. His promotion to lieutenant for Star Trek: The Motion Picture brings with it his transfer as the ship's tactical officer and chief of security. During his tour of duty on the Enterprise, Chekov lost his mind on three occasions: in "Day of the Dove" Chekov was implanted with false memories and driven to violence by a non-corporeal alien entity; in "And the Children Shall Lead", Chekov was exposed to mind control by a group of children who had been given powers by a non-corporeal being; and in "The Tholian Web" Chekov became violently insane following exposure to interspace. Furthermore, in the film Star Trek II: The Wrath of Khan, Chekov was subject to mind control after being implanted with a juvenile Ceti eel. A running gag on Star Trek is that whenever Chekov gets into personal combat with opponents stronger than him, he loses the fight: "The Trouble with Tribbles" with Klingons or "The Gamesters of Triskelion" with the gladiator-like slaves/thralls. In "Spectre of the Gun" he is shot and killed in the fantasy but survives only because he was thinking of a beautiful fantasy woman. He also likes the beautiful female androids in "I, Mudd".
By the events of Star Trek II: The Wrath of Khan, Chekov is executive officer aboard the USS Reliant. In that film, Khan Noonien Singh uses a creature that wraps itself around Chekov's cerebral cortex to control him and his captain. Chekov overcomes the creature's mind control and serves as Enterprise tactical officer in the film's climactic battle.
A common myth about Star Trek is that Khan recognizing Chekov in the film is a continuity error because "Space Seed", with the villain, was broadcast before Koenig's casting. Adaptations: From Text to Screen, Screen to Text calls this "the apparent gaffe notorious throughout Star Trek fandom". Although Chekov does not appear in "Space Seed", "Catspaw"—with the character—has an earlier stardate. Koenig joked that Khan remembers Chekov from the episode after he takes too long in a restroom Khan wants to use.
Chekov is an accomplice in Kirk's theft of the Enterprise to rescue Spock in Star Trek III: The Search for Spock, but is exonerated in Star Trek IV: The Voyage Home. He serves as navigator aboard the Enterprise-A during the events of Star Trek V: The Final Frontier and Star Trek VI: The Undiscovered Country. The character's final film appearance is as a guest aboard the Enterprise-B on its maiden voyage in Star Trek Generations. Chekov is mentioned in the series finale of Star Trek: Picard as being deceased in a broadcast by his son, Anton, who is serving as the president of the Federation. Koenig insisted that the character be called Anton as a tribute to the late Anton Yelchin, who inherited his role as Chekov for the J. J. Abrams reboot films.
Spinoff novels show a continued career path, but these are not considered canon in the Star Trek universe. Novels written by William Shatner detail that Chekov reaches the rank of admiral, and even serves as Commander in Chief of Starfleet.
Reboot films
The 2009 Star Trek film creates an alternate timeline in the franchise. In this timeline, Anton Yelchin's portrayal presents Chekov as a 17-year-old prodigy whose mathematical ability proves instrumental in a few events within the film.
In the sequel, Star Trek Into Darkness, Chekov finds himself promoted to chief engineer after Scotty resigns. When Kirk orders him to put on a red shirt, a brief sting is heard as a closeup shows Chekov's nervous face, playing on the reputation of redshirts in the franchise as much as the character's shock regarding his sudden promotion.
The third film, Star Trek Beyond, was Yelchin's final appearance as Chekov. In it, Chekov accompanies Kirk after the entire crew is marooned on an uncharted planet following the destruction of the Enterprise; the two are forced to destroy what remains of the ship to escape a trap, and later work with the rest of the senior staff to restart a long-lost Starfleet ship, escape the planet, and defeat a plan to attack the Federation.
Anton Yelchin's death
Yelchin was crushed to death by his 2016 Jeep Grand Cherokee on June 19, 2016, a little more than a month before the scheduled release of Star Trek Beyond on July 22, 2016. All filming had been completed and post-production had started. A dedication to Yelchin's memory was inserted into the credits. J. J. Abrams, producer of the reboot trilogy and director of its first two films, has stated that the role will not be recast for future sequels, implying the character of Chekov will be written out in future films.
Fan productions
Walter Koenig reprised his role as Chekov 12 years after Star Trek Generations in the fan-created series New Voyages episode "To Serve All My Days". Andy Bray portrayed a younger Chekov in that episode. Koenig reprised the character again in Star Trek: Renegades as 143-year-old Admiral Chekov, the newly appointed head of Section 31. He has stated that should there be a sequel, he would reprise Chekov and then retire from the role. He also returned as Chekov in the online miniseries Star Trek: Of Gods and Men.
In scientific illustrator Jenny Parks' 2017 book Star Trek Cats, Chekov is depicted as a Russian Blue.
Reception
In 2018, The Wrap placed Chekov 21st out of 39 in a ranking of main cast characters of the Star Trek franchise prior to Star Trek: Discovery. In 2016, Chekov was ranked as the 30th most important character of Starfleet within the Star Trek science fiction universe by Wired magazine, out of 100 characters.
In 2018, Comic Book Resources ranked Chekov the 22nd best member of Starfleet.
References
External links
"Pavel Chekov" at STARTREK.COM
Television characters introduced in 1967
Fictional Russian people
Fictional astronomers
Fictional navigators
Star Trek: The Original Series characters
Star Trek (film franchise) characters
Starfleet admirals
Starfleet captains
Starfleet commanders
Starfleet lieutenants
Starfleet ensigns
Star Trek: Phase II characters
Fictional characters from the 23rd century | Pavel Chekov | [
"Astronomy"
] | 2,039 | [
"Astronomers",
"Fictional astronomers"
] |
180,457 | https://en.wikipedia.org/wiki/Keygen | A key generator (key-gen) is a computer program that generates a product licensing key, such as a serial number, necessary to activate a software application for use. Keygens may be legitimately distributed by software manufacturers for licensing software in commercial environments where software has been licensed in bulk for an entire site or enterprise, or they may be developed and distributed illegitimately in circumstances of copyright infringement or software piracy.
Illegitimate key generators are typically programmed and distributed by software crackers in the warez scene. These keygens often play music (drawing on the tradition of cracktros), which may include the genres dubstep, chiptunes, sampled loops or anything that the programmer desires. Chiptunes are often preferred due to their small size. Keygens can have artistic user interfaces or be kept simple, displaying only a cracking group or cracker's logo.
Software licensing
A software license is a legal instrument that governs the usage and distribution of computer software. Often, such licenses are enforced by implementing in the software a product activation or digital rights management (DRM) mechanism, seeking to prevent unauthorized use of the software by issuing a code sequence that must be entered into the application when prompted or stored in its configuration.
Key verification
Many programs attempt to verify or validate licensing keys over the Internet by establishing a session with a licensing application of the software publisher. Advanced keygens bypass this mechanism, and include additional features for key verification, for example by generating the validation data which would otherwise be returned by an activation server. If the software offers phone activation then the keygen could generate the correct activation code to finish activation. Another method that has been used is activation server emulation, which patches the program memory to "see" the keygen as the de facto activation server.
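For context, the kind of scheme a keygen reproduces can be sketched abstractly. A vendor typically derives each key from purchase data using a secret; the weakness keygens exploit is that offline verification requires shipping enough of that logic (or the secret itself) inside the application for crackers to reverse engineer. The sketch below is a hypothetical, simplified HMAC-based scheme in Python; the product name, secret, and key format are invented for illustration and do not describe any real activation system.

```python
# Minimal sketch of an HMAC-based product key scheme (hypothetical, for illustration only).
import hmac
import hashlib

VENDOR_SECRET = b"example-vendor-secret"  # would be held by the vendor's licensing server

def issue_key(customer_id: str, product: str = "ExampleApp") -> str:
    """Derive a product key from the customer ID; only the secret holder can do this."""
    tag = hmac.new(VENDOR_SECRET, f"{product}:{customer_id}".encode(), hashlib.sha256)
    return f"{customer_id}-{tag.hexdigest()[:16].upper()}"

def verify_key(key: str, product: str = "ExampleApp") -> bool:
    """Check that the key's tag matches the customer ID it claims to belong to."""
    customer_id, _, tag = key.rpartition("-")
    expected = issue_key(customer_id, product).rpartition("-")[2]
    return hmac.compare_digest(tag, expected)

key = issue_key("CUST-0001")
print(key, verify_key(key))                       # genuine key -> True
print(verify_key("CUST-0001-DEADBEEFDEADBEEF"))   # forged tag -> False
```

Because verify_key can recompute the tag, anyone who extracts an embedded secret can mint valid keys, which is one reason vendors move verification onto online activation servers in the first place.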
Multi-keygen
A multi-keygen is a keygen that offers key generation for multiple software applications. Multi-keygens are sometimes released over singular keygens if a series of products requires the same algorithm for generating product keys.
These tools simplify the process of obtaining activation keys for users who need access to various software products within the same suite or developed by the same company. By integrating the algorithms for multiple applications into one interface, multi-keygens eliminate the need to manage separate keygens for each program. However, the use of multi-keygens often violates software licensing agreements or constitutes copyright infringement when unauthorized, and may pose risks such as malware or compromised system security.
Authors and distribution
Unauthorized keygens that typically violate software licensing terms are written by programmers who engage in reverse engineering and software cracking, often called crackers, to circumvent copy protection of software or digital rights management for multimedia.
Keygens are available for download on warez sites or through peer-to-peer (P2P) networks.
Malware keygens
Keygens, available through P2P networks or otherwise, can contain malicious payloads. These key generators may or may not generate a valid key, but the embedded malware loaded invisibly at the same time may, for example, be a version of CryptoLocker (ransomware).
Antivirus software may discover malware embedded in keygens; such software often also identifies unauthorized keygens which do not contain a payload as potentially unwanted software, often labelling them with a name such as Win32/Keygen or Win32/Gendows.
HackTool.Win32.HackAV
A program designed to assist hacking is defined as HackTool.Win32.HackAV or not-a-virus:Keygen from Kaspersky Labs or as HackTool:Win32/Keygen by Microsoft Malware Protection Center. According to the Microsoft Malware Protection Center, its first known detection dates back to 16 July 2009. The following security threats were most often found on PCs that have been related to these tools:
Blackhole exploit kit
Win32/Autorun
Win32/Dorkbot
Win32/Obfuscator
Keychan
A key changer or keychan is a variation of a keygen. A keychan is a small piece of software that changes the license key or serial number of a particular piece of proprietary software installed on a computer.
See also
BSA (The Software Alliance)
Canadian Alliance Against Software Theft
Free Software Foundation
References
External links
Business Software Alliance and Software Patents
Software cracking
Warez
Copyright infringement of software
Cryptographic software | Keygen | [
"Mathematics"
] | 888 | [
"Cryptographic software",
"Mathematical software"
] |
180,472 | https://en.wikipedia.org/wiki/Consumer%20privacy | Consumer privacy is information privacy as it relates to the consumers of products and services.
A variety of social, legal and political issues arise from the interaction of the public's potential expectation of privacy and the collection and dissemination of data by businesses or merchants. Consumer privacy concerns date back to the first commercial couriers and bankers who enforced strong measures to protect customer privacy. In modern times, the ethical codes of various professions specify measures to protect customer privacy, including medical privacy and client confidentiality. State interests include matters of national security. Consumers concerned about the invasion of their personal information may therefore be hesitant to use certain services. Many organizations have a competitive incentive to collect, retain, and use customer data for various purposes, and many companies adopt security engineering measures to control this data and manage customer expectations and legal requirements for consumer privacy.
Consumer privacy protection is the use of laws and regulations to protect individuals from privacy loss due to the failures and limitations of corporate customer privacy measures. Corporations may be inclined to share data for commercial advantage and fail to officially recognize it as sensitive to avoid legal liability in the chance that lapses of security may occur. Modern consumer privacy law originated from telecom regulation when it was recognized that a telephone company had access to unprecedented levels of information. Customer privacy measures were seen as deficient to deal with the many hazards of corporate data sharing, corporate mergers, employee turnover, and theft of data storage devices (e.g., hard drives) that could store a large amount of data in a portable location.
Businesses have consumer data and information obtained from consumer and client purchases, products, and services. Thus, businesses have the responsibility to keep these data and information safe and confidential. Consumers expect that businesses will take an active stance when protecting consumer privacy issues and supporting confidential agreements. Whether a firm provides services or products to consumers, it is expected to use methods such as obfuscation or encoding to mask consumer data when analyzing data or trends, for example. Firms are also expected to protect consumer privacy both within the organizations themselves and from outside third entities, including third-party providers of services, suppliers who provide product components and supplies, and government institutions or community partnership organizations. In addition, businesses are sometimes required to provide an agreement or contract to service clients or product consumers stating that customer or client information and data will be kept confidential and will not be used for advertising or promotional purposes, for example. The US government, including the FTC, has consumer protection laws such as the Telephone Consumer Protection Act and the Data Transparency and Privacy Act. Individual states have laws and regulations that protect consumers as well. One example is the California Consumer Privacy Act.
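As a concrete illustration of the masking mentioned above, one common approach is to replace direct identifiers with keyed pseudonyms before purchase records reach analysts. The Python sketch below is a simplified, hypothetical example; the field names and secret are invented, and it is not a description of any particular firm's practice.

```python
# Illustrative pseudonymization of customer records before trend analysis.
import hashlib
import hmac

PSEUDONYM_KEY = b"rotate-me-regularly"  # hypothetical secret kept by the data owner

def pseudonymize(record: dict) -> dict:
    """Replace the direct identifier with a keyed hash so analysts can still
    group purchases by customer without seeing who the customer is."""
    token = hmac.new(PSEUDONYM_KEY, record["email"].encode(), hashlib.sha256).hexdigest()[:12]
    safe = {k: v for k, v in record.items() if k not in {"email", "name"}}
    safe["customer_token"] = token
    return safe

orders = [
    {"name": "A. Example", "email": "a@example.com", "item": "kettle", "price": 35.0},
    {"name": "A. Example", "email": "a@example.com", "item": "toaster", "price": 20.0},
]
print([pseudonymize(o) for o in orders])  # same token for both orders, no name or email
```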
Legislation
Consumer privacy concerns date back to the first commercial couriers and bankers who enforced strong measures to protect customer privacy. Harsh punitive measures were passed as the result of failing to keep a customer's information private. In modern times, the ethical codes of most professions specify privacy measures for the consumer of any service, including medical privacy, client confidentiality, and national security. These codes are particularly important in a carceral state, where no privacy in any form nor limits on state oversight or data use exists. Corporate customer privacy practices are approaches taken by commercial organizations to ensure that confidential customer data is not stolen or abused. Since most organizations have strong competitive incentives to retain exclusive access to customer data, and since customer trust is usually a high priority, most companies take some security engineering measures to protect customer privacy. There is also a concern that companies may sell consumer data if they have to declare bankruptcy, although it often violates their own privacy policies.
The measures companies take to protect consumer privacy vary in effectiveness, and would not typically meet the much higher standards of client confidentiality applied by ethical codes or legal codes in banking or law, nor patient privacy measures in medicine, nor rigorous national security measures in military and intelligence organizations. The California Consumer Privacy Act, for example, protects the use of consumer privacy data by firms and governments. This act makes it harder for firms to extract personal information from consumers and use it for commercial purposes. Some of the rights included in this act include:
The right to know about the personal information a business collects about them and how it is used and shared
The right to delete personal information collected from them (with some exceptions)
The right to opt-out of the sale or sharing of their personal information
The right to non-discrimination for exercising their CCPA rights
Since companies operate to generate a profit, commercial organizations also cannot spend unlimited funds on precautions while remaining competitive; a commercial context tends to limit privacy measures and to motivate organizations to share data when working in partnership. The damage done by privacy loss is not measurable, nor can it be undone, and commercial organizations have little or no interest in taking unprofitable measures to drastically increase the privacy of customers. Corporations may be inclined to share data for commercial advantage and fail to officially recognize it as sensitive to avoid legal liability in the chance that lapses of security may occur. This has led to many moral hazards and customer privacy violation incidents.
Some services—notably telecommunications, including Internet—require collecting a vast array of information about users' activities in the course of business, and may also require consultation of these data to prepare bills. In the US and Canada, telecom data must be kept for seven years to permit dispute and consultation about phone charges. These sensitivities have led telecom regulation to be a leader in consumer privacy regulation, enforcing a high level of confidentiality on the sensitive customer communication records. The focus of consumer rights activists on the telecoms industry has subsided as other industries also gather sensitive consumer data. Such common commercial measures as software-based customer relationship management, rewards programs, and target marketing tend to drastically increase the amount of information gathered (and sometimes shared). These drastically increase privacy risks and have accelerated the shift to regulation, rather than relying on the corporate desire to preserve goodwill.
Concerns have led to consumer privacy laws in most countries, especially in the European Union, Australia, New Zealand and Canada. Notably, among developed countries, the United States has no such law and relies on corporate customer privacy disclosed in privacy policies to ensure consumer privacy in general. Modern privacy law and regulation may be compared to parts of the Hippocratic Oath, which includes a requirement for doctors to avoid mentioning the ills of patients to others—not only to protect them, but to protect their families— and also recognizes that innocent third parties can be harmed by the loss of control of sensitive personal information.
Modern consumer privacy law originated from telecom regulation when it was recognized that a telephone company—especially a monopoly (known in many nations as a PTT)—had access to unprecedented levels of information: the direct customer's communication habits and correspondents and the data of those who shared the household. Telephone operators could frequently hear conversations—inadvertently or deliberately—and their job required them to dial the exact numbers. The data gathering required for the process of billing began to become a privacy risk as well. Accordingly, strong rules on operator behaviour, customer confidentiality, records keeping and destruction were enforced on telephone companies in every country. Typically only police and military authorities had legal powers to wiretap or see records. Even stricter requirements emerged for various banks' electronic records. In some countries, financial privacy is a major focus of the economy, with severe criminal penalties for violating it.
History
1970s
Through the 1970s, many other organizations in developed nations began to acquire sensitive data, but there were few or no regulations in place to prevent them from sharing or abusing the data. Customer trust and goodwill were generally thought to be sufficient in first-world countries, notably the United States, to ensure the protection of truly sensitive data; caveat emptor was applied in these situations. But in the 1980s, smaller organizations also began to get access to computer hardware and software, and these simply did not have the procedures, personnel, or expertise, much less the time, to take rigorous measures to protect their customers. Meanwhile, via target marketing and rewards programs, companies were acquiring ever more data.
Gradually, customer privacy measures were seen as deficient to deal with the many hazards of corporate data sharing, corporate mergers, employee turnover, and theft of data storage devices (e.g. hard drives) that could store a large amount of data in a portable location. Explicit regulation of consumer privacy gained further support, especially in the European Union, where each nation had laws that were incompatible (e.g., some restricted the data collection, the data compilation and the data dissemination); it was possible to violate privacy within the EU simply doing these things from different places in the European Common Market as it existed before 1992.
1990s
Through the 1990s, the proliferation of mobile telecom, the introduction of customer relationship management, and the use of the Internet in developed nations brought the situation to the forefront, and most countries had to implement strong consumer privacy laws, often over the objections of business. The European Union and New Zealand passed particularly strong laws that were used as a template for more limited laws in Australia and Canada and some states of the United States (where no federal law for consumer privacy exists, although there are requirements specific to banking and telecom privacy). In Austria around the 1990s, the mere mention of a client's name in a semi-public social setting was enough to earn a junior bank executive a stiff jail sentence.
2000s
After the terrorist attacks against the United States on September 11, 2001, privacy took a back-seat to national security in legislators' minds. Accordingly, concerns of consumer privacy in the United States have tended to go unheard as questions of citizen privacy versus the state, and the development of a police state or carceral state, have occupied advocates of strong privacy measures. Whereas it may have appeared prior to 2002 that commercial organizations and the consumer data they gathered were of primary concern, it has appeared since then in most developed nations to be much less of a concern than political privacy and medical privacy (e.g., as violated by biometrics). Indeed, people have recently been stopped at airports solely due to their political views, and there appears to be minimal public will to stop practices of this nature. The need for stricter laws is more pronounced after the American web service provider Yahoo admitted that sensitive information (including email addresses and passwords) of half a billion users was stolen by hackers in 2014. The data breach was a massive setback for the company and raised several questions about the news being revealed only two years after the hacking incident.
See also
Big data
Information privacy
Information technology management
Management information systems
Privacy
Privacy law
Privacy policy
Personally identifiable information
References
Consumer
E-commerce
Privacy | Consumer privacy | [
"Technology"
] | 2,154 | [
"Information technology",
"E-commerce"
] |
180,583 | https://en.wikipedia.org/wiki/Dymaxion%20map | The Dymaxion map projection, also called the Fuller projection, is a kind of polyhedral map projection of the Earth's surface onto the unfolded net of an icosahedron. The resulting map is heavily interrupted in order to reduce shape and size distortion compared to other world maps, but the interruptions are chosen to lie in the ocean.
The projection was invented by Buckminster Fuller. In 1943, Fuller proposed a projection onto a cuboctahedron, which he called the Dymaxion World, using the name Dymaxion which he also applied to several of his other inventions. In 1954, Fuller and cartographer Shoji Sadao produced an updated Dymaxion map, the Airocean World Map, based on an icosahedron with a few of the triangular faces cut to avoid breaks in landmasses.
The Dymaxion projection is intended for representations of the entire Earth.
History
The March 1, 1943, edition of Life magazine included a photographic essay titled "Life Presents R. Buckminster Fuller's Dymaxion World", illustrating a projection onto a cuboctahedron, including several examples of possible arrangements of the square and triangular pieces, and a pull-out section of one-sided magazine pages with the map faces printed on them, intended to be cut out and glued to card stock to make a three-dimensional cuboctahedron or its two-dimensional net. Fuller applied for a patent in the United States in February 1944 for the cuboctahedron projection, which was issued in January 1946.
In 1954, Fuller and cartographer Shoji Sadao produced a new map onto an icosahedron instead of the cuboctahedron. It depicts Earth's continents as "one island", or nearly contiguous land masses. References today to the Fuller projection or Dymaxion usually indicate this version.
Projection of each triangle
Unlike other polyhedral map projections, the Dymaxion map does not use a gnomonic projection (perspective projection through the Earth's center onto the polyhedral surface), which causes length distortion away from the center of each face. Instead each triangle's three edges on the Dymaxion map match the scale along the corresponding arcs of great circles on the Earth (modeled as a sphere), and then the scale diminishes toward the middle of the triangle. The transformation process was formally mathematically defined in 1978.
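Fuller's full transformation is involved, but the first step of any icosahedral projection, assigning each point on the globe to one of the 20 triangular faces, can be sketched with elementary vector geometry: a point belongs to the face whose center direction it is closest to (largest dot product). The Python sketch below assumes a standard, unrotated icosahedron rather than Fuller's published Dymaxion orientation, and it stops short of the equal-scale edge mapping described above.

```python
# Sketch: assign points on the globe to faces of an icosahedron, the first step of
# any icosahedral projection such as the Dymaxion map.
import itertools
import numpy as np

PHI = (1 + 5 ** 0.5) / 2

# 12 vertices: cyclic permutations of (0, ±1, ±PHI); edge length is exactly 2.
VERTS = np.array([p for s0, s1 in itertools.product((1, -1), repeat=2)
                  for p in ((0, s0, s1 * PHI),
                            (s0, s1 * PHI, 0),
                            (s0 * PHI, 0, s1))], dtype=float)

# 20 face centers: centroids of every triple of mutually adjacent vertices.
FACE_CENTERS = []
for i, j, k in itertools.combinations(range(12), 3):
    if all(np.isclose(np.linalg.norm(VERTS[a] - VERTS[b]), 2.0)
           for a, b in ((i, j), (j, k), (i, k))):
        centroid = VERTS[[i, j, k]].mean(axis=0)
        FACE_CENTERS.append(centroid / np.linalg.norm(centroid))
FACE_CENTERS = np.array(FACE_CENTERS)             # shape (20, 3)

def face_of(lat_deg, lon_deg):
    """Return the index (0-19) of the icosahedron face containing the point."""
    lat, lon = np.radians([lat_deg, lon_deg])
    p = np.array([np.cos(lat) * np.cos(lon),
                  np.cos(lat) * np.sin(lon),
                  np.sin(lat)])
    return int(np.argmax(FACE_CENTERS @ p))        # nearest face-center direction

print(face_of(48.9, 2.3), face_of(-33.9, 151.2))  # face indices for Paris and Sydney
```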
Properties
Though neither conformal nor equal-area, Fuller claimed that his map had several advantages over other projections for world maps.
It has less distortion of relative size of areas, most notably when compared to the Mercator projection; and less distortion of shapes of areas, notably when compared to the Gall–Peters projection. Other compromise projections attempt a similar trade-off.
More unusually, the Dymaxion map does not have any "right way up". Fuller argued that in the universe there is no "up" and "down", or "north" and "south": only "in" and "out". Gravitational forces of the stars and planets created "in", meaning "towards the gravitational center", and "out", meaning "away from the gravitational center". He attributed the north-up-superior/south-down-inferior presentation of most other world maps to cultural bias.
Fuller intended the map to be unfolded in different ways to emphasize different aspects of the world. Peeling the triangular faces of the icosahedron apart in one way results in an icosahedral net that shows an almost contiguous land mass comprising all of Earth's continents – not groups of continents divided by oceans. Peeling the solid apart in a different way presents a view of the world dominated by connected oceans surrounded by land.
Showing the continents as "one island earth" also helped Fuller explain, in his book Critical Path, the journeys of early seafaring people, who were in effect using prevailing winds to circumnavigate this world island.
However, the Dymaxion map can also prove difficult to use. It is, for example, confusing to describe the four cardinal directions and locate geographic coordinates. The awkward shape of the map may be counterintuitive to most people trying to use it. For example, the shortest route from Africa to South America is not obvious. Depending on how the map is projected, land masses and oceans are often divided into several pieces.
Conformal variant
In 2019, Daniel "daan" Strebe developed a conformal icosahedral projection, similar to the conformal projections to an octahedron by Oscar S. Adams (1928) and to a tetrahedron by Laurence P. Lee (1965), all three using Dixon elliptic functions. A conformal map preserves angles and local shapes from the sphere at the expense of increasing the scale distortion near the vertices of the icosahedron.
Comparison of the Fuller projection and Strebe's Dymaxion-like conformal projection with Tissot's indicatrices at 30° intervals.
Influence
A 1967 Jasper Johns painting, Map (Based on Buckminster Fuller's Dymaxion Airocean World), depicting a Dymaxion map, hangs in the permanent collection of the Museum Ludwig in Cologne.
The World Game, a collaborative simulation game in which players attempt to solve world problems, is played on a 70-by-35-foot Dymaxion map.
In 2013, to commemorate the 70th anniversary of the publication of the Dymaxion map in Life magazine, the Buckminster Fuller Institute announced the "Dymax Redux", a competition for graphic designers and visual artists to re-imagine the Dymaxion map. The competition received over 300 entries from 42 countries.
The H3 hierarchical global grid implemented by Uber uses an icosahedron oriented in Dymaxion orientation, then further subdivided into hexagons.
In 2020, a collaborative effort by thousands of Minecraft players, the Build the Earth project, used Strebe's conformal variant as a projection for building a 1:1 scale representation of the Earth inside the game.
Gallery
See also
List of map projections
Authagraph projection, inspired by Fuller, 1999
Peirce quincuncial projection, 1879
Polyhedral map projection, earliest known is by Leonardo da Vinci, 1514
References
External links
Fuller Map homepage
Dymaxion Project Animation
Icosahedron and Fuller maps
Dynamically generated maps based on the Dymaxion projection
Buckminster Fuller
Map projections
1943 introductions | Dymaxion map | [
"Mathematics"
] | 1,326 | [
"Map projections",
"Coordinate systems"
] |
180,607 | https://en.wikipedia.org/wiki/Boeing%202707 | The Boeing 2707 was an American supersonic passenger airliner project during the 1960s. After winning a competition for a government-funded contract to build an American supersonic airliner, Boeing began development at its facilities in Seattle, Washington. The design emerged as a large aircraft with seating for 250 to 300 passengers and cruise speeds of approximately Mach 3. It was intended to be much larger and faster than competing supersonic transport (SST) designs such as the Concorde.
The SST was the topic of considerable concern within and outside the aviation industry. From the start, the airline industry noted that the economics of the design were questionable, concerns that were only partially addressed during development. Outside the field, the entire SST concept was the subject of considerable negative press, centered on the issue of sonic booms and effects on the ozone layer.
A key design feature of the 2707 was its use of a swing-wing configuration. During development, the required weight and size of this mechanism continued to grow, forcing the team to switch to a conventional delta wing. Rising costs, environmental concerns, noise, and the lack of a clear market led to its cancellation in 1971 before two prototypes were completed.
Development
Early studies
Boeing had worked on a number of small-scale SST studies since 1952. In 1958, it established a permanent research committee, which grew to a $1 million effort by 1960. The committee proposed a variety of alternative designs, all under the name Model 733. Most of the designs featured a large delta wing, but in 1959 another design was offered as an offshoot of Boeing's efforts in the swing-wing TFX program (which led to the purchase of the General Dynamics F-111 instead of the Boeing offering). In 1960, an internal competition was run on a baseline 150-seat aircraft for trans-Atlantic routes, and the swing-wing version won.
Shortly after taking office, President John F. Kennedy tasked the Federal Aviation Administration with preparing a report on "national aviation goals for the period between now and 1970". The study was prompted in the wake of several accidents, which led to the belief that the industry was becoming moribund. Two projects were started, Project Beacon on new navigational systems and air traffic control, and Project Horizon on advanced civil aviation developments.
Only one month later the FAA's new director, Najeeb Halaby, produced the Commission on National Aviation Goals, better known as Project Horizon. Among other suggestions, the report was used as a platform to promote the SST. Halaby argued that a failure to enter this market would be a "stunning setback". The report was met with skepticism by most others. Kennedy had put Lyndon Johnson on the SST file, and he turned to Robert McNamara for guidance. McNamara was highly skeptical of the SST project and savaged Halaby's predictions; he was also afraid the project might be turned over to the DoD and was careful to press for further studies.
The basic concept behind the SST was that its fast flight would allow it to fly more trips than a subsonic aircraft, leading to higher utilization. However, it did this at the cost of greatly increased fuel use. If fuel costs were to change dramatically, SSTs would not be competitive. These problems were well understood within the industry; the IATA released a set of "design imperatives" for an SST that were essentially impossible to meet—the release was a warning to promoters of the SST within the industry.
Concorde
By mid-1962, it was becoming clear that tentative talks earlier that year between the British Aircraft Corporation and Sud Aviation (later Aérospatiale) on a merger of their SST projects were more serious than originally thought. In November 1962, still to the surprise of many, the Concorde project was announced. In spite of marginal economics, nationalistic and political arguments had led to wide support for the project, especially from Charles de Gaulle. This set off something of a wave of panic in other countries, as it was widely believed that almost all future commercial aircraft would be supersonic, and it looked like the Europeans would start off with a huge lead. As if this were not enough, it soon became known that the Soviets were also working on a similar design.
Three days after the Concorde announcement, Halaby wrote a letter to Kennedy suggesting that if they did not immediately start their own SST effort, the US would lose 50,000 jobs, $4 billion in income, and $3 billion in capital as local carriers turned to foreign suppliers. A report from the Supersonic Transport Advisory Group (STAG) followed, noting that the European team was in the lead in basic development, and suggested competing by developing a more advanced design with better economics. At the time, more advanced generally meant higher speed. The baseline design in the report called for an aircraft with Mach 3 performance and sufficient range to serve the domestic market. They felt that there was no way to build a transatlantic design with that performance in time to catch the Concorde's introduction, abandoning the trans-Atlantic market to the Europeans.
In spite of vocal opponents, questions about the technical requirements, and extremely negative reports about its economic viability, the SST project gathered strong backing from industry and the FAA. Johnson sent a report to the president asking for $100 million in funding for FY 1964. This might have been delayed, but in May, Pan Am announced they had placed 6 options on the Concorde. Juan Trippe leaked the information earlier that month, stating that the airline would not ignore the SST market, and would buy from Europe if need be. Pan Am's interest in Concorde angered Kennedy, who called on his administration to get Pan Am to redirect its potential funding back to the US SST program.
Kennedy introduced the National Supersonic Transport program on June 5, 1963, in a speech at the US Air Force Academy.
Design competition
Requests for proposals were sent out to airframe manufacturers Boeing, Lockheed, and North American for the airframes; and Curtiss-Wright, General Electric and Pratt & Whitney for engines. The FAA estimated that there would be a market for 500 SSTs by 1990. Despite not having a selected design, orders from air carriers started flowing in immediately. Preliminary designs were submitted to the FAA on January 15, 1964.
Boeing's entry was essentially identical to the swing-wing Model 733 studied in 1960; it carried an official 733 model number, but was also referred to both as the 1966 Model and the Model 2707. The latter name became the best known in public, while Boeing continued to use 733 model numbers internally. The design resembled the future B-1 Lancer bomber, with the exception that the four engines were mounted in individual nacelles instead of the paired pods used on the Lancer. The blended wing root spanned almost all of the cabin area, and this early version had a much stubbier look than the models that would ultimately evolve. The wing featured extensive high-lift devices on both the leading and trailing edges, minimizing the thrust required, and thus the noise created, during climb out. The proposal also included optional fuselage stretches that increased capacity from the normal 150 seats to 227.
Lockheed's entry, designated CL-823, was essentially an enlarged Concorde. Like the Concorde, it featured a long and skinny fuselage, engines under the wing, and a compound delta planform. The only major design difference was the use of individual pods for the engines, rather than pairs. The CL-823 lacked any form of high-lift devices on the wings, relying on engine power and long runways for liftoff, ensuring a huge noise footprint. The CL-823 was the largest of the first-round entries, with typical seating for 218.
The North American NAC-60 was essentially a scaled-up B-70 with a less tapered fuselage and new compound-delta wing. The design retained the high-mounted canard above the cockpit area, and the box-like engine area under the fuselage. The use of high-lift devices on the leading edge of the wing lowered the landing angles to the point where the "drooping nose" was not required, and a more conventional rounded design was used. Compared to the other designs, the rounded nose profile and more cylindrical cross-section gave the NAC-60 a decidedly more conventional look than the other entries. This also meant it would fly slower, at Mach 2.65.
A "downselect" of the proposed models resulted in the NAC-60 and Curtiss-Wright efforts being dropped from the program, with both Boeing and Lockheed asked to offer SST models meeting the more demanding FAA requirements and able to use either of the remaining engine designs from GE or P&W. In November, another design review was held, and by this time Boeing had scaled up the original design into a 250-seat model, the Model 733-290. Due to concerns about jet blast, the four engines were moved to a position underneath an enlarged tailplane. When the wings were in their swept-back position, they merged with the tailplane to produce a delta-wing planform.
Both companies were now asked for considerably more detailed proposals, to be presented for final selection in 1966. When this occurred, Boeing's design was now the 300-seat Model 733-390. Both the Boeing and Lockheed L-2000 designs were presented in September 1966 along with full-scale mock-ups. After a lengthy review the Boeing design was announced as the winner on January 1, 1967. The design would be powered by the General Electric GE4/J5 engines. Lockheed's L-2000 was judged simpler to produce and less risky, but its performance was slightly lower and its noise levels slightly higher.
Refining the design
The 733-390 would have been an advanced aircraft even if it had been only subsonic. It was one of the earliest wide-body aircraft designs, with a 2-3-2 row seating arrangement at its widest section in a fuselage that was considerably wider than aircraft then in service. The SST mock-up included both overhead storage for smaller items with restraining nets, as well as large drop-in bins between sections of the aircraft. In the main 247-seat tourist-class cabin, the entertainment system consisted of retractable televisions placed between every sixth row in the overhead storage. In the 30-seat first-class area, every pair of seats included smaller televisions in a console between the seats. The windows were kept small because of the pressure loads at the aircraft's high cruising altitudes, but the internal pane was larger to give an illusion of size.
Boeing predicted that if the go-ahead were given, construction of the SST prototypes would begin in early 1967 and the first flight could be made in early 1970. Production aircraft could start being built in early 1969, with the flight testing in late 1972 and certification by mid-1974.
A major change in the design came when Boeing added canards behind the nose—which added weight. Boeing also faced insurmountable weight problems with the swing-wing mechanism, whose large titanium pivot section had already been fabricated, and the design could not achieve sufficient range. Flexing of the fuselage (it would have been the longest ever built) threatened to make control difficult. In October 1968, the company was finally forced to abandon the variable geometry wing. The Boeing team fell back on a tailed delta fixed wing. The new design was also smaller, seating 234, and known as the Model 2707-300. Work began on a full-sized mock-up and two prototypes in September 1969, now two years behind schedule.
A promotional film claimed that airlines would soon pay back the federal investment in the project, and it was projected that SSTs would dominate the skies with subsonic jumbo jets (such as Boeing's 747) being only a passing intermediate fad.
By October 1969, there were delivery positions reserved for 122 Boeing SSTs by 26 airlines, including Alitalia, Canadian Pacific Airlines, Delta Air Lines, Iberia, KLM, Northwest Airlines, and World Airways.
Environmental concerns
By this point, the opposition to the project was becoming increasingly vocal. Environmentalists were the most influential group, voicing concerns about possible depletion of the ozone layer due to the high altitude flights, and about noise at airports, as well as from sonic booms.
The latter became the most significant rallying point, especially after the publication of the anti-SST paperback SST and Sonic Boom Handbook, edited by William Shurcliff, which claimed that a single flight would leave a wide "bang-zone" along the entire length of the route, along with a host of associated problems. During tests in 1964 with the XB-70 near Oklahoma City, the boom carpet, despite its limited width, still resulted in 9,594 complaints of damage to buildings, 4,629 formal damage claims, and 229 claims for a total of $12,845.32, mostly for broken glass and cracked plaster. As the opposition widened, the claimed negative effects increased, including upsetting people who do delicate work (e.g., brain surgeons), and harming persons with nervous ailments.
One concern was that the water vapor released by the engines into the stratosphere would envelop the earth in a "global gloom". Presidential Adviser Russell Train warned that a fleet of 500 SSTs flying at cruise altitude for a period of years could raise stratospheric water content by as much as 50% to 100%. According to Train, this could lead to greater ground-level heat and hamper the formation of ozone. Later, an additional threat to the ozone was found in the exhaust's nitrogen oxides, a threat that was later validated by MIT. More recent analysis in 1995 by David W. Fahey, an atmospheric scientist at the National Oceanic and Atmospheric Administration, and others found that the drop in ozone would be from 1 to 2% if a fleet of 500 supersonic aircraft was operated. Fahey expressed the opinion that this would not be a fatal obstacle for advanced SST development.
During the 1970s the alleged potential for serious ozone damage and the sonic boom worries were picked up by the Sierra Club, the National Wildlife Federation and the Wilderness Society. Supersonic flight over land in the United States was eventually banned, and several states added additional restrictions or banned Concorde outright.
Senator William Proxmire (D-Wisconsin) criticized the SST program as frivolous federal spending.
Halaby attempted to dismiss these concerns, stating "The supersonics are coming – as surely as tomorrow. You will be flying one version or another by 1980 and be trying to remember what the great debate was all about."
Government funding cut
In March 1971, despite the project's strong support by the administration of President Richard Nixon, the U.S. Senate rejected further funding. A counterattack was organized under the banner of the "National Committee for an American SST", which urged supporters to send in $1 to keep the program alive. Afterward, letters of support from aviation buffs, containing nearly $1 million worth of contributions, poured in. Labor unions also supported the SST project, worried that the winding down of both the Vietnam War and Apollo program would lead to mass unemployment in the aerospace sector. AFL–CIO President George Meany suggested that the race to develop a first-generation SST was already lost, but the US should "enter the competition for the second generation—the SSTs of the 1980s and 1990s".
Despite this newfound support, the House of Representatives also voted to end SST funding on May 20, 1971. The vote was highly contentious. Gerald Ford, then Republican Leader, shouted Meany's claims that "If you vote for the SST, you are ensuring 13,000 jobs today plus 50,000 jobs in the second tier and 150,000 jobs each year over the next ten years." Sidney Yates, leading the "no" camp, offered a then-uncommon motion to instruct conferees and eventually won the vote against further funding, 215 to 204.
At the time, there were 115 unfilled orders by 25 airlines, while Concorde had 74 orders from 16 customers. The two prototypes were never completed. Due to the loss of several government contracts and a downturn in the civilian aviation market, Boeing reduced its number of employees by more than 60,000. The SST became known as "the airplane that almost ate Seattle." As a result of the mass layoffs and so many people moving away from the city in search of work, a billboard was erected near Seattle–Tacoma International Airport in 1971 that read, "Will the last person leaving Seattle – turn out the lights".
Aftermath
The SST race has had several lasting effects on the industry as a whole. The supercritical wing was originally developed as part of the SST efforts in the U.S., but is now widely used on most jet aircraft. In Europe, the cooperation that allowed Concorde led to the formation of Airbus, Boeing's foremost competitor, with Aérospatiale becoming a main component of Airbus.
When Concorde was launched, sales were predicted to be 150 aircraft, but only 14 aircraft were built for commercial service. Service entry was secured only through large government subsidies. These few aircraft went on to have a very long in-service life and were claimed to be ultimately commercially successful for their operators, until they were finally removed from service following the type's only crash in 2000, the downturn in air travel after the September 11 attacks, and Airbus's decision to end servicing arrangements.
Its Soviet counterpart, the Tupolev Tu-144, was less successful, operating for only 55 passenger flights before being permanently grounded for various reasons.
With the ending of the 2707 project, the entire SST field in the U.S. was moribund for some time. By the mid-1970s, minor advances, combined, appeared to offer greatly improved performance. Through the second half of the 1970s, NASA provided funding for the Advanced Supersonic Transport (AST) project at several companies, including McDonnell Douglas, Boeing, and Lockheed. Considerable wind tunnel testing of the various models was carried out at NASA's Langley Research Center.
Ultimately, supersonic passenger service was not economically competitive, and ceased with the retirement of Concorde in 2003; since then, no commercial supersonic aircraft have operated anywhere in the world, due largely to poor fuel economy and high maintenance costs.
Legacy
The Museum of Flight in Seattle parks a British Airways Concorde a few blocks from the building where the original 2707 mockup was housed. While the Soviet Tu-144 had a short service life, Concorde was successful enough to fly as a small luxury fleet from 1976 until 2003, with British Airways' lifetime costs of £1bn producing £1.75bn in revenues in the niche transatlantic market. As the most advanced supersonic transports became some of the oldest airframes in the fleet, profits eventually fell due to rising maintenance costs.
The final-configuration Boeing 2707 mockup was sold to a museum and displayed at the SST Aviation Exhibit Center in Kissimmee, Florida, from 1973 to 1981. In 1983, the building, complete with SST, was purchased by the Faith World church. For years the Osceola New Life Assembly of God held services there with the airplane still standing above. In 1990, the mock-up was sold to aircraft restorer Charles Bell, who moved it, in pieces, to Merritt Island to preserve it while it awaited a new home, as the church wanted the space for expansion. The forward fuselage was on display at the Hiller Aviation Museum in San Carlos, California, for many years, but in early 2013 it was moved back to Seattle, where it is undergoing restoration at the Museum of Flight.
Seattle's NBA basketball team, formed in 1967, was named the Seattle SuperSonics (shortened to "Sonics"). The name was inspired by the newly won SST contract.
Variants
2707-100 – variable-sweep wing
2707-200 – same as the -100, but with canards
2707-300 – stationary (fixed) wing
References
Sources
Abandoned civil aircraft projects of the United States
2707
Supersonic transports
Variable-sweep-wing aircraft
Quadjets
Low-wing aircraft
1960s United States airliners
Aircraft with retractable tricycle landing gear | Boeing 2707 | ["Physics"] | 4,164 | ["Physical systems", "Transport", "Supersonic transports"]
180,624 | https://en.wikipedia.org/wiki/Vehicle%20dynamics | Vehicle dynamics is the study of vehicle motion, e.g., how a vehicle's forward movement changes in response to driver inputs, propulsion system outputs, ambient conditions, air/surface/water conditions, etc.
Vehicle dynamics is a part of engineering primarily based on classical mechanics.
It may be applied for motorized vehicles (such as automobiles), bicycles and motorcycles, aircraft, and watercraft.
Factors affecting vehicle dynamics
The aspects of a vehicle's design which affect the dynamics can be grouped into drivetrain and braking, suspension and steering, distribution of mass, aerodynamics and tires.
Drivetrain and braking
Automobile layout (i.e. location of engine and driven wheels)
Powertrain
Braking system
Suspension and steering
Some attributes relate to the geometry of the suspension, steering and chassis. These include:
Ackermann steering geometry
Axle track
Camber angle
Caster angle
Ride height
Roll center
Scrub radius
Steering ratio
Toe
Wheel alignment
Wheelbase
Distribution of mass
Some attributes or aspects of vehicle dynamics are purely due to mass and its distribution. These include:
Center of mass
Moment of inertia
Roll moment
Sprung mass
Unsprung mass
Weight distribution
Aerodynamics
Some attributes or aspects of vehicle dynamics are purely aerodynamic. These include:
Automobile drag coefficient
Automotive aerodynamics
Center of pressure
Downforce
Ground effect in cars
Tires
Some attributes or aspects of vehicle dynamics can be attributed directly to the tires. These include:
Camber thrust
Circle of forces
Contact patch
Cornering force
Ground pressure
Pacejka's Magic Formula
Pneumatic trail
Radial Force Variation
Relaxation length
Rolling resistance
Self aligning torque
Skid
Slip angle
Slip (vehicle dynamics)
Spinout
Steering ratio
Tire load sensitivity
Vehicle behaviours
Some attributes or aspects of vehicle dynamics are purely dynamic. These include:
Body flex
Body roll
Bump Steer
Bundorf analysis
Directional stability
Critical speed
Noise, vibration, and harshness
Pitch
Ride quality
Roll
Speed wobble
Understeer, oversteer, lift-off oversteer, and fishtailing
Weight transfer and load transfer
Yaw
Analysis and simulation
The dynamic behavior of vehicles can be analysed in several different ways. This can be as straightforward as a simple spring-mass system, through a three-degree-of-freedom (DoF) bicycle model, to a large degree of complexity using a multibody system simulation package such as MSC ADAMS or Modelica. As computers have gotten faster, and software user interfaces have improved, commercial packages such as CarSim have become widely used in industry for rapidly evaluating hundreds of test conditions much faster than real time. Vehicle models are often simulated with advanced controller designs provided as software in the loop (SIL) with controller design software such as Simulink, or with physical hardware in the loop (HIL).
Vehicle motions are largely due to the shear forces generated between the tires and road, and therefore the tire model is an essential part of the math model. In current vehicle simulator models, the tire model is the weakest and most difficult part to simulate. The tire model must produce realistic shear forces during braking, acceleration, cornering, and combinations, on a range of surface conditions. Many models are in use. Most are semi-empirical, such as the Pacejka Magic Formula model.
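As an illustration of the semi-empirical approach mentioned above, the following sketch evaluates a simplified form of Pacejka's Magic Formula for lateral tire force. The coefficient values B, C, D and E are purely illustrative placeholders, not fitted data for any real tire.

import math

def magic_formula(slip_angle_rad, B=10.0, C=1.9, D=4000.0, E=0.97):
    """Simplified Pacejka Magic Formula: lateral force as a function of slip angle.

    B: stiffness factor, C: shape factor, D: peak force [N], E: curvature factor.
    The coefficient values used here are illustrative placeholders only.
    """
    x = slip_angle_rad
    return D * math.sin(C * math.atan(B * x - E * (B * x - math.atan(B * x))))

if __name__ == "__main__":
    for deg in range(0, 16, 3):
        alpha = math.radians(deg)
        print(f"slip angle {deg:2d} deg -> lateral force {magic_formula(alpha):7.1f} N")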
Racing car games or simulators are also a form of vehicle dynamics simulation. In early versions many simplifications were necessary in order to get real-time performance with reasonable graphics. However, improvements in computer speed have combined with interest in realistic physics, leading to driving simulators that are used for vehicle engineering using detailed models such as CarSim.
It is important that the models should agree with real world test results, hence many of the following tests are correlated against results from instrumented test vehicles.
Techniques include:
Linear range constant radius understeer
Fishhook
Frequency response
Lane change
Moose test
Sinusoidal steering
Skidpad
Swept path analysis
See also
Automotive suspension design
Automobile handling
Hunting oscillation
Multi-axis shaker table
Vehicular metrics
4-poster
7 post shaker
References
Further reading
Lecture Notes to the MOOC Vehicle Dynamics of iversity
Automotive engineering
Automotive technologies
Dynamics (mechanics)
Vehicle technology | Vehicle dynamics | ["Physics", "Engineering"] | 916 | ["Physical phenomena", "Classical mechanics", "Automotive engineering", "Motion (physics)", "Vehicle technology", "Mechanical engineering by discipline", "Dynamics (mechanics)"]
180,787 | https://en.wikipedia.org/wiki/Cubic%20equation | In algebra, a cubic equation in one variable is an equation of the form
ax^3 + bx^2 + cx + d = 0
in which a is not zero.
The solutions of this equation are called roots of the cubic function defined by the left-hand side of the equation. If all of the coefficients a, b, c, and d of the cubic equation are real numbers, then it has at least one real root (this is true for all odd-degree polynomial functions). All of the roots of the cubic equation can be found by the following means:
algebraically: more precisely, they can be expressed by a cubic formula involving the four coefficients, the four basic arithmetic operations, square roots, and cube roots. (This is also true of quadratic (second-degree) and quartic (fourth-degree) equations, but not for higher-degree equations, by the Abel–Ruffini theorem.)
trigonometrically
numerical approximations of the roots can be found using root-finding algorithms such as Newton's method.
The coefficients do not need to be real numbers. Much of what is covered below is valid for coefficients in any field with characteristic other than 2 and 3. The solutions of the cubic equation do not necessarily belong to the same field as the coefficients. For example, some cubic equations with rational coefficients have roots that are irrational (and even non-real) complex numbers.
History
Cubic equations were known to the ancient Babylonians, Greeks, Chinese, Indians, and Egyptians. Babylonian (20th to 16th centuries BC) cuneiform tablets have been found with tables for calculating cubes and cube roots. The Babylonians could have used the tables to solve cubic equations, but no evidence exists to confirm that they did. The problem of doubling the cube involves the simplest and oldest studied cubic equation, and one for which the ancient Egyptians did not believe a solution existed. In the 5th century BC, Hippocrates reduced this problem to that of finding two mean proportionals between one line and another of twice its length, but could not solve this with a compass and straightedge construction, a task which is now known to be impossible. Methods for solving cubic equations appear in The Nine Chapters on the Mathematical Art, a Chinese mathematical text compiled around the 2nd century BC and commented on by Liu Hui in the 3rd century.
In the 3rd century AD, the Greek mathematician Diophantus found integer or rational solutions for some bivariate cubic equations (Diophantine equations). Hippocrates, Menaechmus and Archimedes are believed to have come close to solving the problem of doubling the cube using intersecting conic sections, though historians such as Reviel Netz dispute whether the Greeks were thinking about cubic equations or just problems that can lead to cubic equations. Some others like T. L. Heath, who translated all of Archimedes's works, disagree, putting forward evidence that Archimedes really solved cubic equations using intersections of two conics, but also discussed the conditions where the roots are 0, 1 or 2.
In the 7th century, the Tang dynasty astronomer mathematician Wang Xiaotong in his mathematical treatise titled Jigu Suanjing systematically established and solved numerically 25 cubic equations of the form x^3 + px^2 + qx = N, 23 of them with nonzero p and q, and two of them with q = 0.
In the 11th century, the Persian poet-mathematician, Omar Khayyam (1048–1131), made significant progress in the theory of cubic equations. In an early paper, he discovered that a cubic equation can have more than one solution and stated that it cannot be solved using compass and straightedge constructions. He also found a geometric solution. In his later work, the Treatise on Demonstration of Problems of Algebra, he wrote a complete classification of cubic equations with general geometric solutions found by means of intersecting conic sections. Khayyam made an attempt to come up with an algebraic formula for extracting cubic roots. He wrote: “We have tried to express these roots by algebra but have failed. It may be, however, that men who come after us will succeed.”
In the 12th century, the Indian mathematician Bhaskara II attempted the solution of cubic equations without general success; however, he gave one worked example of a cubic equation. In the 12th century, another Persian mathematician, Sharaf al-Dīn al-Tūsī (1135–1213), wrote the Al-Muʿādalāt (Treatise on Equations), which dealt with eight types of cubic equations with positive solutions and five types of cubic equations which may not have positive solutions. He used what would later be known as the Horner–Ruffini method to numerically approximate the root of a cubic equation. He also used the concepts of maxima and minima of curves in order to solve cubic equations which may not have positive solutions. He understood the importance of the discriminant of the cubic equation to find algebraic solutions to certain types of cubic equations.
In his book Flos, Leonardo de Pisa, also known as Fibonacci (1170–1250), was able to closely approximate the positive solution to the cubic equation x^3 + 2x^2 + 10x = 20. Writing in Babylonian numerals he gave the result as 1,22,7,42,33,4,40 (equivalent to 1 + 22/60 + 7/602 + 42/603 + 33/604 + 4/605 + 40/606), which has a relative error of about 10−9.
In the early 16th century, the Italian mathematician Scipione del Ferro (1465–1526) found a method for solving a class of cubic equations, namely those of the form x^3 + mx = n. In fact, all cubic equations can be reduced to this form if one allows m and n to be negative, but negative numbers were not known to him at that time. Del Ferro kept his achievement secret until just before his death, when he told his student Antonio Fior about it.
In 1535, Niccolò Tartaglia (1500–1557) received two problems in cubic equations from Zuanne da Coi and announced that he could solve them. He was soon challenged by Fior, which led to a famous contest between the two. Each contestant had to put up a certain amount of money and to propose a number of problems for his rival to solve. Whoever solved more problems within 30 days would get all the money. Tartaglia received questions in the form x^3 + mx = n, for which he had worked out a general method. Fior received questions in the form x^3 + mx^2 = n, which proved to be too difficult for him to solve, and Tartaglia won the contest.
Later, Tartaglia was persuaded by Gerolamo Cardano (1501–1576) to reveal his secret for solving cubic equations. In 1539, Tartaglia did so only on the condition that Cardano would never reveal it and that if he did write a book about cubics, he would give Tartaglia time to publish. Some years later, Cardano learned about del Ferro's prior work and published del Ferro's method in his book Ars Magna in 1545, meaning Cardano gave Tartaglia six years to publish his results (with credit given to Tartaglia for an independent solution).
Cardano's promise to Tartaglia said that he would not publish Tartaglia's work, and Cardano felt he was publishing del Ferro's, so as to get around the promise. Nevertheless, this led to a challenge to Cardano from Tartaglia, which Cardano denied. The challenge was eventually accepted by Cardano's student Lodovico Ferrari (1522–1565). Ferrari did better than Tartaglia in the competition, and Tartaglia lost both his prestige and his income.
Cardano noticed that Tartaglia's method sometimes required him to extract the square root of a negative number. He even included a calculation with these complex numbers in Ars Magna, but he did not really understand it. Rafael Bombelli studied this issue in detail and is therefore often considered as the discoverer of complex numbers.
François Viète (1540–1603) independently derived the trigonometric solution for the cubic with three real roots, and René Descartes (1596–1650) extended the work of Viète.
Factorization
If the coefficients of a cubic equation are rational numbers, one can obtain an equivalent equation with integer coefficients, by multiplying all coefficients by a common multiple of their denominators. Such an equation
ax^3 + bx^2 + cx + d = 0,
with integer coefficients, is said to be reducible if the polynomial on the left-hand side is the product of polynomials of lower degrees. By Gauss's lemma, if the equation is reducible, one can suppose that the factors have integer coefficients.
Finding the roots of a reducible cubic equation is easier than solving the general case. In fact, if the equation is reducible, one of the factors must have degree one, and thus have the form
qx − p,
with p and q being coprime integers. The rational root test allows finding p and q by examining a finite number of cases (because p must be a divisor of d, and q must be a divisor of a).
Thus, one root is x_1 = p/q, and the other roots are the roots of the other factor, which can be found by polynomial long division. This other factor is
ax^2 + (b + ap/q)x + (c + (b + ap/q)p/q).
(The coefficients seem not to be integers, but must be integers if p/q is a root.)
Then, the other roots are the roots of this quadratic polynomial and can be found by using the quadratic formula.
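A minimal numerical sketch of this rational-root search, assuming integer coefficients (the function names are made up for the example, and the search is brute force rather than optimized):

def divisors(n):
    n = abs(n)
    return [k for k in range(1, n + 1) if n % k == 0]

def rational_root(a, b, c, d):
    """Return a rational root p/q of a*x^3 + b*x^2 + c*x + d = 0 as a (p, q) pair, or None.
    Coefficients are assumed to be integers with a != 0."""
    if d == 0:
        return (0, 1)          # x = 0 is a root when the constant term vanishes
    for q in divisors(a):      # q must divide the leading coefficient a
        for p in divisors(d):  # p must divide the constant term d
            for num in (p, -p):
                # clear denominators: evaluate the polynomial at num/q, multiplied by q^3
                if a * num**3 + b * num**2 * q + c * num * q**2 + d * q**3 == 0:
                    return (num, q)
    return None

# Example: 2x^3 + x^2 + x - 1 has the rational root 1/2
print(rational_root(2, 1, 1, -1))   # -> (1, 2)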
Depressed cubic
Cubics of the form
t^3 + pt + q = 0
are said to be depressed. They are much simpler than general cubics, but are fundamental, because the study of any cubic may be reduced by a simple change of variable to that of a depressed cubic.
Let
ax^3 + bx^2 + cx + d = 0
be a cubic equation. The change of variable
x = t − b/(3a)
gives a cubic (in t) that has no term in t^2. After dividing by a one gets the depressed cubic equation
t^3 + pt + q = 0, with
p = (3ac − b^2)/(3a^2) and q = (2b^3 − 9abc + 27a^2 d)/(27a^3).
The roots x_k of the original equation are related to the roots t_k of the depressed equation by the relations
x_k = t_k − b/(3a), for k = 1, 2, 3.
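A small numerical sketch of this change of variable (illustrative only): compute p and q from a, b, c, d, then check that shifting a root of the depressed cubic by −b/(3a) gives a root of the original cubic.

def depress(a, b, c, d):
    """Return (p, q) such that substituting x = t - b/(3a) in a*x^3 + b*x^2 + c*x + d
    and dividing by a yields t^3 + p*t + q."""
    p = (3 * a * c - b**2) / (3 * a**2)
    q = (2 * b**3 - 9 * a * b * c + 27 * a**2 * d) / (27 * a**3)
    return p, q

# Example: x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3); its depressed form is t^3 - t
a, b, c, d = 1, -6, 11, -6
p, q = depress(a, b, c, d)
print(p, q)              # -> -1.0 0.0
t = 1.0                  # a root of t^3 + p*t + q = t^3 - t
x = t - b / (3 * a)      # shift back: x = t + 2
print(a * x**3 + b * x**2 + c * x + d)  # -> 0.0  (x = 3 is a root of the original cubic)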
Discriminant and nature of the roots
The nature (real or not, distinct or not) of the roots of a cubic can be determined without computing them explicitly, by using the discriminant.
Discriminant
The discriminant of a polynomial is a function of its coefficients that is zero if and only if the polynomial has a multiple root, or, equivalently, if it is divisible by the square of a non-constant polynomial. In other words, the discriminant is nonzero if and only if the polynomial is square-free.
If r_1, r_2, r_3 are the three roots (not necessarily distinct nor real) of the cubic ax^3 + bx^2 + cx + d, then the discriminant is
Δ = a^4 (r_1 − r_2)^2 (r_1 − r_3)^2 (r_2 − r_3)^2.
The discriminant of the depressed cubic t^3 + pt + q is
−4p^3 − 27q^2.
The discriminant of the general cubic ax^3 + bx^2 + cx + d is
18abcd − 4b^3 d + b^2 c^2 − 4ac^3 − 27a^2 d^2.
It is the product of a^4 and the discriminant of the corresponding depressed cubic. Using the formula relating the general cubic and the associated depressed cubic, this implies that the discriminant of the general cubic can be written as
(4(b^2 − 3ac)^3 − (2b^3 − 9abc + 27a^2 d)^2) / (27a^2).
It follows that one of these two discriminants is zero if and only if the other is also zero, and, if the coefficients are real, the two discriminants have the same sign. In summary, the same information can be deduced from either one of these two discriminants.
To prove the preceding formulas, one can use Vieta's formulas to express everything as polynomials in r_1, r_2, r_3, and a. The proof then results in the verification of the equality of two polynomials.
Nature of the roots
If the coefficients of a polynomial are real numbers, and its discriminant is not zero, there are two cases:
If Δ > 0, the cubic has three distinct real roots.
If Δ < 0, the cubic has one real root and two non-real complex conjugate roots.
This can be proved as follows. First, if is a root of a polynomial with real coefficients, then its complex conjugate is also a root. So the non-real roots, if any, occur as pairs of complex conjugate roots. As a cubic polynomial has three roots (not necessarily distinct) by the fundamental theorem of algebra, at least one root must be real.
As stated above, if r_1, r_2, r_3 are the three roots of the cubic ax^3 + bx^2 + cx + d, then the discriminant is
Δ = a^4 (r_1 − r_2)^2 (r_1 − r_3)^2 (r_2 − r_3)^2.
If the three roots are real and distinct, the discriminant is a product of positive reals, that is Δ > 0.
If only one root, say r_1, is real, then r_2 and r_3 are complex conjugates, which implies that r_2 − r_3 is a purely imaginary number, and thus that (r_2 − r_3)^2 is real and negative. On the other hand, r_1 − r_2 and r_1 − r_3 are complex conjugates, and their product is real and positive. Thus the discriminant is the product of a single negative number and several positive ones. That is Δ < 0.
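A short sketch that computes the discriminant of a general cubic with real coefficients and classifies its roots accordingly (the example polynomials are arbitrary illustrations):

def cubic_discriminant(a, b, c, d):
    """Discriminant of a*x^3 + b*x^2 + c*x + d."""
    return (18 * a * b * c * d - 4 * b**3 * d + b**2 * c**2
            - 4 * a * c**3 - 27 * a**2 * d**2)

def classify(a, b, c, d):
    disc = cubic_discriminant(a, b, c, d)
    if disc > 0:
        return "three distinct real roots"
    if disc < 0:
        return "one real root and two complex conjugate roots"
    return "a multiple root (all roots real)"

print(classify(1, -6, 11, -6))  # (x-1)(x-2)(x-3): three distinct real roots
print(classify(1, 0, 0, 1))     # x^3 + 1: one real root and two complex conjugate roots
print(classify(1, -5, 7, -3))   # (x-1)^2 (x-3): a multiple root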
Multiple root
If the discriminant of a cubic is zero, the cubic has a multiple root. If furthermore its coefficients are real, then all of its roots are real.
The discriminant of the depressed cubic t^3 + pt + q is zero if 4p^3 + 27q^2 = 0. If p is also zero, then q = 0, and 0 is a triple root of the cubic. If 4p^3 + 27q^2 = 0 and p ≠ 0, then the cubic has a simple root
t_1 = 3q/p
and a double root
t_2 = t_3 = −3q/(2p).
In other words,
t^3 + pt + q = (t − 3q/p)(t + 3q/(2p))^2.
This result can be proved by expanding the latter product or retrieved by solving the rather simple system of equations resulting from Vieta's formulas.
By using the reduction to a depressed cubic, these results can be extended to the general cubic. This gives: If the discriminant of the cubic ax^3 + bx^2 + cx + d is zero, then
either, if b^2 = 3ac, the cubic has a triple root
x_1 = x_2 = x_3 = −b/(3a), and ax^3 + bx^2 + cx + d = a(x + b/(3a))^3;
or, if b^2 ≠ 3ac, the cubic has a double root
x_2 = x_3 = (9ad − bc)/(2(b^2 − 3ac))
and a simple root
x_1 = (4abc − 9a^2 d − b^3)/(a(b^2 − 3ac)),
and thus ax^3 + bx^2 + cx + d = a(x − x_1)(x − x_2)^2.
Characteristic 2 and 3
The above results are valid when the coefficients belong to a field of characteristic other than 2 or 3, but must be modified for characteristic 2 or 3, because of the involved divisions by 2 and 3.
The reduction to a depressed cubic works for characteristic 2, but not for characteristic 3. However, in both cases, it is simpler to establish and state the results for the general cubic. The main tool for that is the fact that a multiple root is a common root of the polynomial and its formal derivative. In these characteristics, if the derivative is not a constant, it is a linear polynomial in characteristic 3, and is the square of a linear polynomial in characteristic 2. Therefore, for either characteristic 2 or 3, the derivative has only one root. This allows computing the multiple root, and the third root can be deduced from the sum of the roots, which is provided by Vieta's formulas.
A difference with other characteristics is that, in characteristic 2, the formula for a double root involves a square root, and, in characteristic 3, the formula for a triple root involves a cube root.
Cardano's formula
Gerolamo Cardano is credited with publishing the first formula for solving cubic equations, attributing it to Scipione del Ferro and Niccolo Fontana Tartaglia. The formula applies to depressed cubics, but, as shown in the section Depressed cubic above, it allows solving all cubic equations.
Cardano's result is that if
t^3 + pt + q = 0
is a cubic equation such that p and q are real numbers such that q^2/4 + p^3/27 is positive (this implies that the discriminant of the equation is negative), then the equation has the real root
t = ∛u_1 + ∛u_2,
where u_1 and u_2 are the two numbers
u_1 = −q/2 + √(q^2/4 + p^3/27) and u_2 = −q/2 − √(q^2/4 + p^3/27).
See the section Derivation of the roots, below, for several methods for getting this result.
As shown in the section Nature of the roots, the two other roots are non-real complex conjugate numbers in this case. It was later shown (Cardano did not know complex numbers) that the two other roots are obtained by multiplying one of the cube roots by the primitive cube root of unity ξ = (−1 + i√3)/2 and the other cube root by the other primitive cube root of unity ξ^2 = (−1 − i√3)/2. That is, the other roots of the equation are ξ∛u_1 + ξ^2 ∛u_2 and ξ^2 ∛u_1 + ξ∛u_2.
If 4p^3 + 27q^2 < 0, there are three real roots, but Galois theory allows proving that, if there is no rational root, the roots cannot be expressed by an algebraic expression involving only real numbers. Therefore, the equation cannot be solved in this case with the knowledge of Cardano's time. This case has thus been called casus irreducibilis, meaning irreducible case in Latin.
In casus irreducibilis, Cardano's formula can still be used, but some care is needed in the use of cube roots. A first method is to define the symbols √ and ∛ as representing the principal values of the root function (that is, the root that has the largest real part). With this convention Cardano's formula for the three roots remains valid, but is not purely algebraic, as the definition of a principal part is not purely algebraic, since it involves inequalities for comparing real parts. Also, the use of the principal cube root may give a wrong result if the coefficients are non-real complex numbers. Moreover, if the coefficients belong to another field, the principal cube root is not defined in general.
The second way for making Cardano's formula always correct is to remark that the product of the two cube roots must be −p/3. It results that a root of the equation is
t = C − p/(3C), where C = ∛(−q/2 + √(q^2/4 + p^3/27)).
In this formula, the symbols √ and ∛ denote any square root and any cube root. The other roots of the equation are obtained either by changing the choice of cube root or, equivalently, by multiplying the cube root by a primitive cube root of unity, that is (−1 ± i√3)/2.
This formula for the roots is always correct except when p = q = 0, with the proviso that if p = 0, the square root is chosen so that C ≠ 0. However, the formula is useless in the cases where no cube root is needed, that is, when the cubic polynomial is not irreducible; this includes the case of a zero discriminant (a multiple root) and the case p = 0, where the roots are the cube roots of −q.
This formula is also correct when p and q belong to any field of characteristic other than 2 or 3.
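A direct numerical sketch of Cardano's formula for the one-real-root case q^2/4 + p^3/27 > 0, using real cube roots; illustrative only, with no attempt at robust floating-point handling:

import math

def cardano_real_root(p, q):
    """Real root of t^3 + p*t + q = 0 when q^2/4 + p^3/27 > 0 (negative discriminant)."""
    delta = q**2 / 4 + p**3 / 27
    if delta <= 0:
        raise ValueError("this form of the formula assumes q^2/4 + p^3/27 > 0")
    u1 = -q / 2 + math.sqrt(delta)
    u2 = -q / 2 - math.sqrt(delta)
    cbrt = lambda v: math.copysign(abs(v) ** (1 / 3), v)  # sign-aware real cube root
    return cbrt(u1) + cbrt(u2)

# Example: t^3 + 6t - 20 = 0 has the real root t = 2
t = cardano_real_root(6.0, -20.0)
print(t, t**3 + 6 * t - 20)   # -> approximately 2.0, residual near 0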
General cubic formula
A cubic formula for the roots of the general cubic equation
ax^3 + bx^2 + cx + d = 0 (with a ≠ 0)
can be deduced from every variant of Cardano's formula by reduction to a depressed cubic. The variant that is presented here is valid not only for real coefficients, but also for coefficients a, b, c, d belonging to any field of characteristic other than 2 or 3. If the coefficients are real numbers, the formula covers all complex solutions, not just real ones.
The formula being rather complicated, it is worth splitting it in smaller formulas.
Let
Δ_0 = b^2 − 3ac and Δ_1 = 2b^3 − 9abc + 27a^2 d.
(Both Δ_0 and Δ_1 can be expressed, up to constant factors, as resultants of the cubic and its derivatives: Δ_1 is proportional to the resultant of the cubic and its second derivative, and Δ_0 is proportional to the resultant of the first and second derivatives of the cubic polynomial.)
Then let
C = ∛((Δ_1 ± √(Δ_1^2 − 4Δ_0^3))/2),
where the symbols √ and ∛ are interpreted as any square root and any cube root, respectively (every nonzero complex number has two square roots and three cube roots). The sign "±" before the square root is either "+" or "−"; the choice is almost arbitrary, and changing it amounts to choosing a different square root. However, if a choice yields C = 0 (this occurs if Δ_0 = 0), then the other sign must be selected instead. If both choices yield C = 0, that is, if Δ_0 = Δ_1 = 0, a fraction 0/0 occurs in the following formulas; this fraction must be interpreted as equal to zero (see the end of this section).
With these conventions, one of the roots is
x = −(b + C + Δ_0/C)/(3a).
The other two roots can be obtained by changing the choice of the cube root in the definition of C, or, equivalently, by multiplying C by a primitive cube root of unity. In other words, the three roots are
x_k = −(b + ξ^k C + Δ_0/(ξ^k C))/(3a), for k = 0, 1, 2,
where ξ = (−1 + i√3)/2.
As for the special case of a depressed cubic, this formula applies but is useless when the roots can be expressed without cube roots. In particular, if Δ_0 = Δ_1 = 0, the formula gives that the three roots all equal −b/(3a), which means that the cubic polynomial can be factored as a(x + b/(3a))^3. A straightforward computation allows verifying that the existence of this factorization is equivalent with Δ_0 = Δ_1 = 0.
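A numerical sketch of the general formula above using Python's complex arithmetic; it returns all three roots of ax^3 + bx^2 + cx + d = 0. It follows the sign conventions described in this section, but the exact zero tests on floating-point values make it an illustration rather than production code.

import cmath

def cubic_roots(a, b, c, d):
    """All three roots of a*x^3 + b*x^2 + c*x + d = 0 (a != 0), via the general formula."""
    d0 = b**2 - 3*a*c                      # Delta_0
    d1 = 2*b**3 - 9*a*b*c + 27*a**2*d      # Delta_1
    if d0 == 0 and d1 == 0:                # triple root case
        return [-b / (3*a)] * 3
    s = cmath.sqrt(d1**2 - 4*d0**3)
    if d1 + s == 0:                        # this choice would give C = 0; take the other square root
        s = -s
    C = ((d1 + s) / 2) ** (1/3)            # one of the three complex cube roots
    xi = complex(-0.5, 3**0.5 / 2)         # primitive cube root of unity
    return [-(b + xi**k * C + d0 / (xi**k * C)) / (3*a) for k in range(3)]

print(cubic_roots(1, -6, 11, -6))          # roots near 1, 2 and 3 (with tiny imaginary parts)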
Trigonometric and hyperbolic solutions
Trigonometric solution for three real roots
When a cubic equation with real coefficients has three real roots, the formulas expressing these roots in terms of radicals involve complex numbers. Galois theory allows proving that when the three roots are real, and none is rational (casus irreducibilis), one cannot express the roots in terms of real radicals. Nevertheless, purely real expressions of the solutions may be obtained using trigonometric functions, specifically in terms of cosines and arccosines. More precisely, the roots of the depressed cubic
t^3 + pt + q = 0
are
t_k = 2√(−p/3) cos((1/3) arccos((3q/(2p)) √(−3/p)) − 2πk/3), for k = 0, 1, 2.
This formula is due to François Viète. It is purely real when the equation has three real roots (that is, 4p^3 + 27q^2 < 0). Otherwise, it is still correct but involves complex cosines and arccosines when there is only one real root, and it is nonsensical (division by zero) when p = 0.
This formula can be straightforwardly transformed into a formula for the roots of a general cubic equation, using the back-substitution described in the section Depressed cubic.
The formula can be proved as follows: Starting from the equation t^3 + pt + q = 0, let us set t = u cos θ. The idea is to choose u to make the equation coincide with the identity
4 cos^3 θ − 3 cos θ − cos(3θ) = 0.
For this, choose u = 2√(−p/3), and divide the equation by u^3/4. This gives
4 cos^3 θ − 3 cos θ − (3q/(2p)) √(−3/p) = 0.
Combining with the above identity, one gets
cos(3θ) = (3q/(2p)) √(−3/p),
and the roots are thus
t_k = 2√(−p/3) cos((1/3) arccos((3q/(2p)) √(−3/p)) − 2πk/3), for k = 0, 1, 2.
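A numerical sketch of Viète's trigonometric formula for the three-real-root case (it requires 4p^3 + 27q^2 < 0, which forces p < 0); the example polynomial is an arbitrary illustration:

import math

def viete_roots(p, q):
    """Three real roots of t^3 + p*t + q = 0 when 4p^3 + 27q^2 < 0."""
    if 4 * p**3 + 27 * q**2 >= 0:
        raise ValueError("the trigonometric formula assumes three distinct real roots")
    r = 2 * math.sqrt(-p / 3)
    phi = math.acos(3 * q / (2 * p) * math.sqrt(-3 / p))
    return [r * math.cos(phi / 3 - 2 * math.pi * k / 3) for k in range(3)]

# Example: t^3 - 7t + 6 = (t - 1)(t - 2)(t + 3)
print(sorted(viete_roots(-7.0, 6.0)))   # -> approximately [-3.0, 1.0, 2.0]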
Hyperbolic solution for one real root
When there is only one real root (and p ≠ 0), this root can be similarly represented using hyperbolic functions, as
t_0 = −2 (|q|/q) √(−p/3) cosh((1/3) arcosh((−3|q|/(2p)) √(−3/p))), if 4p^3 + 27q^2 > 0 and p < 0,
t_0 = −2 √(p/3) sinh((1/3) arsinh((3q/(2p)) √(3/p))), if p > 0.
If p ≠ 0 and the inequalities on the right are not satisfied (the case of three real roots), the formulas remain valid but involve complex quantities.
When p = ±3, the above values of t_0 are sometimes called the Chebyshev cube root. More precisely, the values involving cosines and hyperbolic cosines define, when p = −3, the same analytic function denoted C_{1/3}(q), which is the proper Chebyshev cube root. The value involving hyperbolic sines is similarly denoted S_{1/3}(q), when p = 3.
Geometric solutions
Omar Khayyám's solution
For solving the cubic equation x^3 + m^2 x = n where n > 0, Omar Khayyám constructed the parabola y = x^2/m, the circle that has as a diameter the line segment [0, n/m^2] on the positive x-axis, and a vertical line through the point where the circle and the parabola intersect above the x-axis. The solution is given by the length of the horizontal line segment from the origin to the intersection of the vertical line and the x-axis.
A simple modern proof is as follows. Multiplying the equation by x/m^2 and regrouping the terms gives
(x^2/m)^2 = x(n/m^2 − x).
The left-hand side is the value of y^2 on the parabola. The equation of the circle being y^2 = x(n/m^2 − x), the right-hand side is the value of y^2 on the circle.
Solution with angle trisector
A cubic equation with real coefficients can be solved geometrically using compass, straightedge, and an angle trisector if and only if it has three real roots.
A cubic equation can be solved by compass-and-straightedge construction (without trisector) if and only if it has a rational root. This implies that the old problems of angle trisection and doubling the cube, set by ancient Greek mathematicians, cannot be solved by compass-and-straightedge construction.
Geometric interpretation of the roots
Three real roots
Viète's trigonometric expression of the roots in the three-real-roots case lends itself to a geometric interpretation in terms of a circle. When the cubic is written in depressed form t^3 + pt + q = 0, as shown above, the solution can be expressed as
t_k = 2√(−p/3) cos((1/3) arccos((3q/(2p)) √(−3/p)) − 2πk/3), for k = 0, 1, 2.
Here arccos((3q/(2p)) √(−3/p)) is an angle in the unit circle; taking 1/3 of that angle corresponds to taking a cube root of a complex number; adding −2πk/3 for k = 1, 2 finds the other cube roots; and multiplying the cosines of these resulting angles by 2√(−p/3) corrects for scale.
For the non-depressed case ax^3 + bx^2 + cx + d = 0, the depressed case as indicated previously is obtained by defining t such that x = t − b/(3a), so t = x + b/(3a). Graphically this corresponds to simply shifting the graph horizontally when changing between the variables t and x, without changing the angle relationships. This shift moves the point of inflection and the centre of the circle onto the y-axis. Consequently, the roots of the equation in t sum to zero.
One real root
In the Cartesian plane
When the graph of a cubic function is plotted in the Cartesian plane, if there is only one real root, it is the abscissa (x-coordinate) of the curve's intercept with the horizontal axis. Further, if the complex conjugate roots are written as g ± hi, then the real part g is the abscissa of the tangency point H of the tangent line to the cubic that passes through the x-intercept of the cubic. The imaginary parts ±h are the square roots of the tangent of the angle between this tangent line and the horizontal axis.
In the complex plane
With one real and two complex roots, the three roots can be represented as points in the complex plane, as can the two roots of the cubic's derivative. There is an interesting geometrical relationship among all these roots.
The points in the complex plane representing the three roots serve as the vertices of an isosceles triangle. (The triangle is isosceles because one root is on the horizontal (real) axis and the other two roots, being complex conjugates, appear symmetrically above and below the real axis.) Marden's theorem says that the points representing the roots of the derivative of the cubic are the foci of the Steiner inellipse of the triangle—the unique ellipse that is tangent to the triangle at the midpoints of its sides. If the angle at the vertex on the real axis is less than π/3, then the major axis of the ellipse lies on the real axis, as do its foci and hence the roots of the derivative. If that angle is greater than π/3, the major axis is vertical and its foci, the roots of the derivative, are complex conjugates. And if that angle is π/3, the triangle is equilateral, the Steiner inellipse is simply the triangle's incircle, its foci coincide with each other at the incenter, which lies on the real axis, and hence the derivative has duplicate real roots.
Galois group
Given a cubic irreducible polynomial over a field K of characteristic different from 2 and 3, the Galois group over K is the group of the field automorphisms that fix K of the smallest extension of K (splitting field). As these automorphisms must permute the roots of the polynomials, this group is either the group S_3 of all six permutations of the three roots, or the group A_3 of the three circular permutations.
The discriminant of the cubic is the square of
√Δ = a^2 (r_1 − r_2)(r_1 − r_3)(r_2 − r_3),
where a is the leading coefficient of the cubic, and r_1, r_2 and r_3 are the three roots of the cubic. As √Δ changes sign if two roots are exchanged, √Δ is fixed by the Galois group only if the Galois group is A_3. In other words, the Galois group is A_3 if and only if the discriminant is the square of an element of K.
As most integers are not squares, when working over the field of the rational numbers, the Galois group of most irreducible cubic polynomials is the group S_3 with six elements. An example of a Galois group A_3 with three elements is given by x^3 − 3x − 1, whose discriminant is 81 = 9^2.
Derivation of the roots
This section regroups several methods for deriving Cardano's formula.
Cardano's method
This method is due to Scipione del Ferro and Tartaglia, but is named after Gerolamo Cardano who first published it in his book Ars Magna (1545).
This method applies to a depressed cubic t^3 + pt + q = 0. The idea is to introduce two variables u and v such that u + v = t and to substitute this in the depressed cubic, giving
u^3 + v^3 + (3uv + p)(u + v) + q = 0.
At this point Cardano imposed the condition 3uv + p = 0. This removes the third term in the previous equality, leading to the system of equations
u^3 + v^3 = −q and u^3 v^3 = −p^3/27.
Knowing the sum and the product of u^3 and v^3, one deduces that they are the two solutions of the quadratic equation
z^2 + qz − p^3/27 = 0,
so
z = (−q ± √(q^2 + 4p^3/27))/2.
The discriminant of this equation is q^2 + 4p^3/27, and assuming it is positive, real solutions to this equation are (after folding the division by 4 under the square root):
u^3 = −q/2 + √(q^2/4 + p^3/27) and v^3 = −q/2 − √(q^2/4 + p^3/27).
So (without loss of generality in choosing u or v):
u = ∛(−q/2 + √(q^2/4 + p^3/27)) and v = ∛(−q/2 − √(q^2/4 + p^3/27)).
As u + v = t, the sum of the cube roots of these solutions is a root of the equation. That is
t = ∛(−q/2 + √(q^2/4 + p^3/27)) + ∛(−q/2 − √(q^2/4 + p^3/27))
is a root of the equation; this is Cardano's formula.
This works well when 4p^3 + 27q^2 > 0, but, if 4p^3 + 27q^2 < 0, the square root appearing in the formula is not real. As a complex number has three cube roots, using Cardano's formula without care would provide nine roots, while a cubic equation cannot have more than three roots. This was clarified first by Rafael Bombelli in his book L'Algebra (1572). The solution is to use the fact that uv = −p/3, that is, v = −p/(3u). This means that only one cube root needs to be computed, and leads to the second formula given in the section Cardano's formula.
The other roots of the equation can be obtained by changing the choice of cube root, or, equivalently, by multiplying the cube root by each of the two primitive cube roots of unity, which are (−1 ± i√3)/2.
Vieta's substitution
Vieta's substitution is a method introduced by François Viète (Vieta is his Latin name) in a text published posthumously in 1615, which provides directly the second formula of the section Cardano's formula, and avoids the problem of computing two different cube roots.
Starting from the depressed cubic t^3 + pt + q = 0, Vieta's substitution is t = w − p/(3w).
The substitution transforms the depressed cubic into
w^3 − p^3/(27w^3) + q = 0.
Multiplying by w^3, one gets a quadratic equation in w^3:
(w^3)^2 + q(w^3) − p^3/27 = 0.
Let
W = −q/2 + √(q^2/4 + p^3/27)
be any nonzero root of this quadratic equation. If w_1, w_2 and w_3 are the three cube roots of W, then the roots of the original depressed cubic are w_1 − p/(3w_1), w_2 − p/(3w_2), and w_3 − p/(3w_3). The other root of the quadratic equation is −p^3/(27W). This implies that changing the sign of the square root exchanges w_i and −p/(3w_i) for i = 1, 2, 3, and therefore does not change the roots. This method only fails when both roots of the quadratic equation are zero, that is when p = q = 0, in which case the only root of the depressed cubic is 0.
Lagrange's method
In his paper Réflexions sur la résolution algébrique des équations ("Thoughts on the algebraic solving of equations"), Joseph Louis Lagrange introduced a new method to solve equations of low degree in a uniform way, with the hope that he could generalize it for higher degrees. This method works well for cubic and quartic equations, but Lagrange did not succeed in applying it to a quintic equation, because it requires solving a resolvent polynomial of degree at least six.
Apart from the fact that nobody had previously succeeded, this was the first indication of the non-existence of an algebraic formula for degrees 5 and higher; as was later proved by the Abel–Ruffini theorem. Nevertheless, modern methods for solving solvable quintic equations are mainly based on Lagrange's method.
In the case of cubic equations, Lagrange's method gives the same solution as Cardano's. Lagrange's method can be applied directly to the general cubic equation , but the computation is simpler with the depressed cubic equation, .
Lagrange's main idea was to work with the discrete Fourier transform of the roots instead of with the roots themselves. More precisely, let ξ be a primitive third root of unity, that is a number such that ξ^3 = 1 and ξ^2 + ξ + 1 = 0 (when working in the space of complex numbers, one has ξ = (−1 ± i√3)/2, but this complex interpretation is not used here). Denoting x_0, x_1 and x_2 the three roots of the cubic equation to be solved, let
s_0 = x_0 + x_1 + x_2,
s_1 = x_0 + ξ x_1 + ξ^2 x_2,
s_2 = x_0 + ξ^2 x_1 + ξ x_2
be the discrete Fourier transform of the roots. If s_0, s_1 and s_2 are known, the roots may be recovered from them with the inverse Fourier transform consisting of inverting this linear transformation; that is,
x_0 = (s_0 + s_1 + s_2)/3,
x_1 = (s_0 + ξ^2 s_1 + ξ s_2)/3,
x_2 = (s_0 + ξ s_1 + ξ^2 s_2)/3.
By Vieta's formulas, s_0 is known to be zero in the case of a depressed cubic, and equal to −b/a for the general cubic. So, only s_1 and s_2 need to be computed. They are not symmetric functions of the roots (exchanging x_1 and x_2 exchanges also s_1 and s_2), but some simple symmetric functions of s_1 and s_2 are also symmetric in the roots of the cubic equation to be solved. Thus these symmetric functions can be expressed in terms of the (known) coefficients of the original cubic, and this allows eventually expressing the s_i as roots of a polynomial with known coefficients. This works well for every degree, but, in degrees higher than four, the resulting polynomial that has the s_i as roots has a degree higher than that of the initial polynomial, and is therefore unhelpful for solving. This is the reason for which Lagrange's method fails in degrees five and higher.
In the case of a cubic equation, s_1 s_2 and s_1^3 + s_2^3 are such symmetric polynomials (see below). It follows that s_1^3 and s_2^3 are the two roots of the quadratic equation z^2 − (s_1^3 + s_2^3) z + (s_1 s_2)^3 = 0. Thus the resolution of the equation may be finished exactly as with Cardano's method, with s_1 and s_2 in place of u and v.
In the case of the depressed cubic, one has x_0 = (1/3)(s_1 + s_2) and s_1 s_2 = −3p, while in Cardano's method we have set x_0 = u + v and uv = −p/3. Thus, up to the exchange of u and v, we have s_1 = 3u and s_2 = 3v. In other words, in this case, Cardano's method and Lagrange's method compute exactly the same things, up to a factor of three in the auxiliary variables, the main difference being that Lagrange's method explains why these auxiliary variables appear in the problem.
Computation of s_1 s_2 and s_1^3 + s_2^3
A straightforward computation using the relations ξ^3 = 1 and ξ^2 + ξ + 1 = 0 gives
s_1 s_2 = e_1^2 − 3e_2 and s_1^3 + s_2^3 = 2e_1^3 − 9e_1 e_2 + 27e_3.
This shows that s_1 s_2 and s_1^3 + s_2^3 are symmetric functions of the roots. Using Newton's identities, it is straightforward to express them in terms of the elementary symmetric functions e_1, e_2, e_3 of the roots, giving
e_1 = 0, e_2 = p and e_3 = −q in the case of a depressed cubic, and e_1 = −b/a, e_2 = c/a, and e_3 = −d/a, in the general case.
Applications
Cubic equations arise in various other contexts.
In mathematics
Angle trisection and doubling the cube are two ancient problems of geometry that have been proved to not be solvable by straightedge and compass construction, because they are equivalent to solving a cubic equation.
Marden's theorem states that the foci of the Steiner inellipse of any triangle can be found by using the cubic function whose roots are the coordinates in the complex plane of the triangle's three vertices. The roots of the first derivative of this cubic are the complex coordinates of those foci.
The area of a regular heptagon can be expressed in terms of the roots of a cubic. Further, the ratios of the long diagonal to the side, the side to the short diagonal, and the negative of the short diagonal to the long diagonal all satisfy a particular cubic equation. In addition, the ratio of the inradius to the circumradius of a heptagonal triangle is one of the solutions of a cubic equation. The values of trigonometric functions of angles related to the regular heptagon satisfy cubic equations.
Given the cosine (or other trigonometric function) of an arbitrary angle, the cosine of one-third of that angle is one of the roots of a cubic.
The solution of the general quartic equation relies on the solution of its resolvent cubic.
The eigenvalues of a 3×3 matrix are the roots of a cubic polynomial which is the characteristic polynomial of the matrix (a short numerical illustration follows this list).
The characteristic equation of a third-order constant coefficients or Cauchy–Euler (equidimensional variable coefficients) linear differential equation or difference equation is a cubic equation.
Intersection points of cubic Bézier curve and straight line can be computed using direct cubic equation representing Bézier curve.
Critical points of a quartic function are found by solving a cubic equation (the derivative set equal to zero).
Inflection points of a quintic function are the solution of a cubic equation (the second derivative set equal to zero).
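To illustrate the eigenvalue application mentioned in the list above, the following sketch builds the characteristic cubic of a 3×3 matrix from its trace, second invariant and determinant, and checks its roots against a direct eigenvalue computation. The matrix values are arbitrary illustrations.

import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# Characteristic polynomial det(x*I - A) = x^3 + b*x^2 + c*x + d for a 3x3 matrix:
b = -np.trace(A)
c = 0.5 * (np.trace(A)**2 - np.trace(A @ A))
d = -np.linalg.det(A)

print(np.sort(np.roots([1.0, b, c, d])))   # roots of the characteristic cubic
print(np.sort(np.linalg.eigvals(A)))       # eigenvalues computed directly (should match)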
In other sciences
In analytical chemistry, the Charlot equation, which can be used to find the pH of buffer solutions, can be solved using a cubic equation.
In thermodynamics, equations of state (which relate pressure, volume, and temperature of a substance), e.g. the Van der Waals equation of state, are cubic in the volume.
Kinematic equations involving linear rates of acceleration are cubic.
The speed of seismic Rayleigh waves is a solution of the Rayleigh wave cubic equation.
The steady state speed of a vehicle moving on a slope with air friction for a given input power is solved by a depressed cubic equation.
Kepler's third law of planetary motion is cubic in the semi-major axis.
See also
Quartic equation
Quintic equation
Tschirnhaus transformation
Principal equation form
Notes
References
Further reading
External links
History of quadratic, cubic and quartic equations on MacTutor archive.
500 years of NOT teaching THE CUBIC FORMULA. What is it they think you can't handle? – YouTube video by Mathologer about the history of cubic equations and Cardano's solution, as well as Ferrari's solution to quartic equations
Elementary algebra
Equations
Polynomials | Cubic equation | ["Mathematics"] | 7,424 | ["Polynomials", "Mathematical objects", "Elementary algebra", "Equations", "Elementary mathematics", "Algebra"]
180,835 | https://en.wikipedia.org/wiki/De%20Finetti%27s%20theorem | In probability theory, de Finetti's theorem states that exchangeable observations are conditionally independent relative to some latent variable. An epistemic probability distribution could then be assigned to this variable. It is named in honor of Bruno de Finetti, and one of its uses is in providing a pragmatic approach to de Finetti's well-known dictum "Probability does not exist".
For the special case of an exchangeable sequence of Bernoulli random variables it states that such a sequence is a "mixture" of sequences of independent and identically distributed (i.i.d.) Bernoulli random variables.
A sequence of random variables is called exchangeable if the joint distribution of the sequence is unchanged by any permutation of the indices. In general, while the variables of the exchangeable sequence are not themselves independent, only exchangeable, there is an underlying family of i.i.d. random variables. That is, there are underlying, generally unobservable, quantities that are i.i.d. – exchangeable sequences are mixtures of i.i.d. sequences.
Background
A Bayesian statistician often seeks the conditional probability distribution of a random quantity given the data. The concept of exchangeability was introduced by de Finetti. De Finetti's theorem explains a mathematical relationship between independence and exchangeability.
An infinite sequence X_1, X_2, X_3, ... of random variables is said to be exchangeable if, for any natural number n, any finite sequence of indices i_1, ..., i_n and any permutation π: {i_1, ..., i_n} → {i_1, ..., i_n}, the sequences (X_{i_1}, ..., X_{i_n}) and (X_{π(i_1)}, ..., X_{π(i_n)}) both have the same joint probability distribution.
If an identically distributed sequence is independent, then the sequence is exchangeable; however, the converse is false—there exist exchangeable random variables that are not statistically independent, for example the Pólya urn model.
Statement of the theorem
A random variable X has a Bernoulli distribution if Pr(X = 1) = p and Pr(X = 0) = 1 − p for some p ∈ (0, 1).
De Finetti's theorem states that the probability distribution of any infinite exchangeable sequence of Bernoulli random variables is a "mixture" of the probability distributions of independent and identically distributed sequences of Bernoulli random variables. "Mixture", in this sense, means a weighted average, but this need not mean a finite or countably infinite (i.e., discrete) weighted average: it can be an integral over a measure rather than a sum.
More precisely, suppose X1, X2, X3, ... is an infinite exchangeable sequence of Bernoulli-distributed random variables. Then there is some probability measure m on the interval [0, 1] and some random variable Y such that
The probability measure of Y is m, and
The conditional probability distribution of the whole sequence X1, X2, X3, ... given the value of Y is described by saying that
X1, X2, X3, ... are conditionally independent given Y, and
For any i ∈ {1, 2, 3, ...}, the conditional probability that Xi = 1, given the value of Y, is Y.
Another way of stating the theorem
Suppose X_1, X_2, X_3, \dots is an infinite exchangeable sequence of Bernoulli random variables. Then X_1, X_2, X_3, \dots are conditionally independent and identically distributed given the exchangeable sigma-algebra (i.e., the sigma-algebra consisting of events that are measurable with respect to X_1, X_2, \dots and invariant under finite permutations of the indices).
A plain-language consequence of the theorem
According to David Spiegelhalter (ref 1) the theorem provides a pragmatic approach to de Finetti's statement that "Probability does not exist". If our view of the probability of a sequence of events is subjective but remains unaffected by the order in which we make our observations, then the sequence can be regarded as exchangeable. De Finetti's theorem then implies that believing the sequence to be exchangeable is mathematically equivalent to acting as if the events are independent and have an objective underlying probability of occurring, with our uncertainty about what that probability is being expressed by a subjective probability distribution function. According to Spiegelhalter: "This is remarkable: it shows that, starting from a specific, but purely subjective, expression of convictions, we should act as if events were driven by objective chances."
Example
As a concrete example, we construct a sequence
X_1, X_2, X_3, \dots
of random variables, by "mixing" two i.i.d. sequences as follows.
We assume p = 2/3 with probability 1/2 and p = 9/10 with probability 1/2. Given the event p = 2/3, the conditional distribution of the sequence is that the Xi are independent and identically distributed and X1 = 1 with probability 2/3 and X1 = 0 with probability 1 − 2/3. Given the event p = 9/10, the conditional distribution of the sequence is that the Xi are independent and identically distributed and X1 = 1 with probability 9/10 and X1 = 0 with probability 1 − 9/10.
This can be interpreted as follows: Make two biased coins, one showing "heads" with 2/3 probability and one showing "heads" with 9/10 probability. Flip a fair coin once to decide which biased coin to use for all flips that are recorded. Here "heads" at flip i means Xi=1.
The independence asserted here is conditional independence, i.e. the Bernoulli random variables in the sequence are conditionally independent given the event that p = 2/3, and are conditionally independent given the event that p = 9/10. But they are not unconditionally independent; they are positively correlated.
In view of the strong law of large numbers, we can say that
\lim_{n \to \infty} \frac{X_1 + \cdots + X_n}{n} = \begin{cases} 2/3 & \text{with probability } 1/2, \\ 9/10 & \text{with probability } 1/2. \end{cases}
Rather than concentrating probability 1/2 at each of two points between 0 and 1, the "mixing distribution" can be any probability distribution supported on the interval from 0 to 1; which one it is depends on the joint distribution of the infinite sequence of Bernoulli random variables.
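A short simulation can make this mixture concrete. The sketch below is illustrative only (the function name and seeds are our own): it draws the latent value once per sequence and then flips i.i.d. coins, so the within-sequence average settles near 2/3 or 9/10, as the strong-law statement above suggests, even though the flips are not unconditionally independent.

```python
import random

def sample_sequence(n_flips, seed):
    """Draw one sequence from the mixture: pick the latent p once, then flip i.i.d."""
    rng = random.Random(seed)
    p = 2 / 3 if rng.random() < 0.5 else 9 / 10  # the latent variable Y
    flips = [1 if rng.random() < p else 0 for _ in range(n_flips)]
    return p, flips

# Within each sequence the running average of the flips settles near the
# realised value of Y (2/3 or 9/10).
for seed in range(4):
    p, flips = sample_sequence(100_000, seed)
    print(f"Y = {p:.3f}   sample mean = {sum(flips) / len(flips):.3f}")
```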
The definition of exchangeability, and the statement of the theorem, also makes sense for finite length sequences
but the theorem is not generally true in that case. It is true if the sequence can be extended to an exchangeable sequence that is infinitely long. The simplest example of an exchangeable sequence of Bernoulli random variables that cannot be so extended is the one in which X1 = 1 − X2 and X1 is either 0 or 1, each with probability 1/2. This sequence is exchangeable, but cannot be extended to an exchangeable sequence of length 3, let alone an infinitely long one.
As a categorical limit
De Finetti's theorem can be expressed as a categorical limit in the category of Markov kernels.
Let be a standard Borel space, and consider the space of sequences on , the countable product (equipped with the product sigma-algebra).
Given a finite permutation , denote again by the permutation action on , as well as the Markov kernel induced by it.
In terms of category theory, we have a diagram with a single object, , and a countable number of arrows, one for each permutation.
Recall now that a probability measure is equivalently a Markov kernel from the one-point measurable space.
A probability measure on is exchangeable if and only if, as Markov kernels, for every permutation .
More generally, given any standard Borel space , one can call a Markov kernel exchangeable if for every , i.e. if the following diagram commutes,
giving a cone.
De Finetti's theorem can now be stated as the fact that the space of probability measures over (Giry monad) forms a universal (or limit) cone.
More in detail, consider the Markov kernel constructed as follows, using the Kolmogorov extension theorem:
for all measurable subsets of .
Note that we can interpret this kernel as taking a probability measure as input and returning an iid sequence on distributed according to . Since iid sequences are exchangeable, is an exchangeable kernel in the sense defined above.
The kernel doesn't just form a cone, but a limit cone: given any exchangeable kernel , there exists a unique kernel such that , i.e. making the following diagram commute:
In particular, for any exchangeable probability measure on , there exists a unique probability measure on (i.e. a probability measure over probability measures) such that , i.e. such that for all measurable subsets of ,
In other words, is a mixture of iid measures on (the ones formed by in the integral above).
Extensions
Versions of de Finetti's theorem for finite exchangeable sequences, and for Markov exchangeable sequences have been proved by Diaconis and Freedman and by Kerns and Szekely.
Two notions of partial exchangeability of arrays, known as separate and joint exchangeability lead to extensions of de Finetti's theorem for arrays by Aldous and Hoover.
The computable de Finetti theorem shows that if an exchangeable sequence of real random variables is given by a computer program, then a program which samples from the mixing measure can be automatically recovered.
In the setting of free probability, there is a noncommutative extension of de Finetti's theorem which characterizes noncommutative sequences invariant under quantum permutations.
Extensions of de Finetti's theorem to quantum states have been found to be useful in quantum information, in topics like quantum key distribution and entanglement detection. A multivariate extension of de Finetti’s theorem can be used to derive Bose–Einstein statistics from the statistics of classical (i.e. independent) particles.
See also
Choquet theory
Hewitt–Savage zero–one law
Krein–Milman theorem
Invariant sigma-algebra
References
External links
What is so cool about De Finetti's representation theorem?
Probability theorems
Bayesian statistics
Integral representations | De Finetti's theorem | [
"Mathematics"
] | 2,085 | [
"Theorems in probability theory",
"Mathematical theorems",
"Mathematical problems"
] |
180,841 | https://en.wikipedia.org/wiki/Hypergeometric%20distribution | In probability theory and statistics, the hypergeometric distribution is a discrete probability distribution that describes the probability of k successes (random draws for which the object drawn has a specified feature) in n draws, without replacement, from a finite population of size N that contains exactly K objects with that feature, wherein each draw is either a success or a failure. In contrast, the binomial distribution describes the probability of k successes in n draws with replacement.
Definitions
Probability mass function
The following conditions characterize the hypergeometric distribution:
The result of each draw (the elements of the population being sampled) can be classified into one of two mutually exclusive categories (e.g. Pass/Fail or Employed/Unemployed).
The probability of a success changes on each draw, as each draw decreases the population (sampling without replacement from a finite population).
A random variable X follows the hypergeometric distribution if its probability mass function (pmf) is given by
p_X(k) = \Pr(X = k) = \frac{\binom{K}{k} \binom{N-K}{n-k}}{\binom{N}{n}},
where
N is the population size,
K is the number of success states in the population,
n is the number of draws (i.e. quantity drawn in each trial),
k is the number of observed successes,
\tbinom{a}{b} is a binomial coefficient.
The pmf is positive when \max(0, n+K-N) \le k \le \min(K, n).
A random variable X distributed hypergeometrically with parameters N, K and n is written X \sim \operatorname{Hypergeometric}(N, K, n) and has probability mass function above.
Combinatorial identities
As required, we have
\sum_{k} \frac{\binom{K}{k} \binom{N-K}{n-k}}{\binom{N}{n}} = 1,
which essentially follows from Vandermonde's identity from combinatorics.
Also note that
\frac{\binom{K}{k} \binom{N-K}{n-k}}{\binom{N}{n}} = \frac{\binom{n}{k} \binom{N-n}{K-k}}{\binom{N}{K}}.
This identity can be shown by expressing the binomial coefficients in terms of factorials and rearranging the latter. Additionally, it
follows from the symmetry of the problem, described in two different but interchangeable ways.
For example, consider two rounds of drawing without replacement. In the first round, K out of N neutral marbles are drawn from an urn without replacement and coloured green. Then the coloured marbles are put back. In the second round, n marbles are drawn without replacement and coloured red. Then, the number of marbles with both colours on them (that is, the number of marbles that have been drawn twice) has the hypergeometric distribution. The symmetry in K and n stems from the fact that the two rounds are independent, and one could have started by drawing n balls and colouring them red first.
Note that we are interested in the probability of k successes in n draws without replacement, since the probability of success on each trial is not the same, as the size of the remaining population changes as we remove each marble. Keep in mind not to confuse this with the binomial distribution, which describes the probability of k successes in n draws with replacement.
Properties
Working example
The classical application of the hypergeometric distribution is sampling without replacement. Think of an urn with two colors of marbles, red and green. Define drawing a green marble as a success and drawing a red marble as a failure. Let N describe the number of all marbles in the urn (see contingency table below) and K describe the number of green marbles, then N − K corresponds to the number of red marbles. Now, standing next to the urn, you close your eyes and draw n marbles without replacement. Define X as a random variable whose outcome is k, the number of green marbles drawn in the experiment. This situation is illustrated by the following contingency table:
Indeed, we are interested in calculating the probability of drawing k green marbles in n draws, given that there are K green marbles out of a total of N marbles. For this example, assume that there are 5 green and 45 red marbles in the urn. Standing next to the urn, you close your eyes and draw 10 marbles without replacement. What is the probability that exactly 4 of the 10 are green?
This problem is summarized by the following contingency table:
To find the probability of drawing k green marbles in exactly n draws out of N total draws, we identify X as a hypergeometric random variable and use the formula
\Pr(X = k) = \frac{\binom{K}{k} \binom{N-K}{n-k}}{\binom{N}{n}}.
To intuitively explain the given formula, consider the two symmetric problems represented by the identity
left-hand side - drawing a total of only n marbles out of the urn. We want to find the probability of the outcome of drawing k green marbles out of K total green marbles, and drawing n-k red marbles out of N-K red marbles, in these n rounds.
right hand side - alternatively, drawing all N marbles out of the urn. We want to find the probability of the outcome of drawing k green marbles in n draws out of the total N draws, and K-k green marbles in the rest N-n draws.
Back to the calculations, we use the formula above to calculate the probability of drawing exactly 4 green marbles:
\Pr(X = 4) = \frac{\binom{5}{4} \binom{45}{6}}{\binom{50}{10}} \approx 0.003964.
Intuitively we would expect it to be even more unlikely that all 5 green marbles will be among the 10 drawn:
\Pr(X = 5) = \frac{\binom{5}{5} \binom{45}{5}}{\binom{50}{10}} \approx 0.000119.
As expected, the probability of drawing 5 green marbles is roughly 33 times smaller than that of drawing 4.
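The working example can also be checked numerically. The following sketch is an illustration only (the helper name is our own); it evaluates the pmf directly with binomial coefficients.

```python
from math import comb

def hypergeom_pmf(N, K, n, k):
    """P(X = k) for a hypergeometric random variable with parameters N, K, n."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# Urn with N = 50 marbles, K = 5 green, draw n = 10 without replacement.
p4 = hypergeom_pmf(50, 5, 10, 4)
p5 = hypergeom_pmf(50, 5, 10, 5)
print(f"P(X = 4) = {p4:.6f}")   # ~0.003964
print(f"P(X = 5) = {p5:.6f}")   # ~0.000119
print(f"ratio    = {p4 / p5:.1f}")  # ~33
```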
Symmetries
Swapping the roles of green and red marbles:
Swapping the roles of drawn and not drawn marbles:
Swapping the roles of green and drawn marbles:
These symmetries generate the dihedral group .
Order of draws
The probability of drawing any set of green and red marbles (the hypergeometric distribution) depends only on the numbers of green and red marbles, not on the order in which they appear; i.e., it is an exchangeable distribution. As a result, the probability of drawing a green marble in the i-th draw is
\Pr(\text{green on draw } i) = \frac{K}{N}.
This is an ex ante probability—that is, it is based on not knowing the results of the previous draws.
Tail bounds
Let and . Then for we can derive the following bounds:
where
is the Kullback-Leibler divergence and it is used that .
Note: In order to derive the previous bounds, one has to start by observing that where are dependent random variables with a specific distribution . Because most of the theorems about bounds in sum of random variables are concerned with independent sequences of them, one has to first create a sequence of independent random variables with the same distribution and apply the theorems on . Then, it is proved from Hoeffding that the results and bounds obtained via this process hold for as well.
If n is larger than N/2, it can be useful to apply symmetry to "invert" the bounds, which gives the following:
Statistical inference
Hypergeometric test
The hypergeometric test uses the hypergeometric distribution to measure the statistical significance of having drawn a sample consisting of a specific number of successes (out of total draws) from a population of size containing successes. In a test for over-representation of successes in the sample, the hypergeometric p-value is calculated as the probability of randomly drawing or more successes from the population in total draws. In a test for under-representation, the p-value is the probability of randomly drawing or fewer successes.
The test based on the hypergeometric distribution (hypergeometric test) is identical to the corresponding one-tailed version of Fisher's exact test. Reciprocally, the p-value of a two-sided Fisher's exact test can be calculated as the sum of two appropriate hypergeometric tests (see the references for more information).
The test is often used to identify which sub-populations are over- or under-represented in a sample. This test has a wide range of applications. For example, a marketing group could use the test to understand their customer base by testing a set of known customers for over-representation of various demographic subgroups (e.g., women, people under 30).
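As a rough illustration of how such a test can be computed, the sketch below sums the upper tail of the pmf to obtain the over-representation p-value; the customer-base numbers are hypothetical and chosen only for the example.

```python
from math import comb

def hypergeom_pmf(N, K, n, k):
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

def upper_tail_pvalue(N, K, n, k):
    """P(X >= k): one-sided p-value for over-representation of successes."""
    return sum(hypergeom_pmf(N, K, n, i) for i in range(k, min(K, n) + 1))

# Hypothetical example: a sample of 40 customers contains 12 members of a
# subgroup that makes up 50 of the 400 customers overall.
print(upper_tail_pvalue(N=400, K=50, n=40, k=12))
```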
Related distributions
Let X \sim \operatorname{Hypergeometric}(N, K, n) and p = K/N.
If n = 1 then X has a Bernoulli distribution with parameter p.
Let Y have a binomial distribution with parameters n and p; this models the number of successes in the analogous sampling problem with replacement. If N and K are large compared to n, and p is not close to 0 or 1, then X and Y have similar distributions, i.e., \Pr(X \le x) \approx \Pr(Y \le x).
If n is large, N and K are large compared to n, and p is not close to 0 or 1, then
\Pr(X \le x) \approx \Phi\!\left( \frac{x - n p}{\sqrt{n p (1 - p)}} \right),
where \Phi is the standard normal distribution function
If the probabilities of drawing a green or red marble are not equal (e.g. because green marbles are bigger/easier to grasp than red marbles) then X has a noncentral hypergeometric distribution
The beta-binomial distribution is a conjugate prior for the hypergeometric distribution.
The following table describes four distributions related to the number of successes in a sequence of draws:
Multivariate hypergeometric distribution
The model of an urn with green and red marbles can be extended to the case where there are more than two colors of marbles. If there are Ki marbles of color i in the urn and you take n marbles at random without replacement, then the number of marbles of each color in the sample (k1, k2,..., kc) has the multivariate hypergeometric distribution:
\Pr(k_1, \ldots, k_c) = \frac{\prod_{i=1}^{c} \binom{K_i}{k_i}}{\binom{N}{n}}.
This has the same relationship to the multinomial distribution that the hypergeometric distribution has to the binomial distribution—the multinomial distribution is the "with-replacement" distribution and the multivariate hypergeometric is the "without-replacement" distribution.
The properties of this distribution are given in the adjacent table, where c is the number of different colors and N = \sum_{i=1}^{c} K_i is the total number of marbles in the urn.
Example
Suppose there are 5 black, 10 white, and 15 red marbles in an urn. If six marbles are chosen without replacement, the probability that exactly two of each color are chosen is
\frac{\binom{5}{2} \binom{10}{2} \binom{15}{2}}{\binom{30}{6}} = \frac{10 \cdot 45 \cdot 105}{593775} \approx 0.0796.
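For reference, the same value can be checked with a small routine; the function name below is our own and the sketch is illustrative only.

```python
from math import comb

def multivariate_hypergeom_pmf(counts, draws):
    """P of drawing exactly draws[i] marbles of color i from counts[i] available."""
    N, n = sum(counts), sum(draws)
    numerator = 1
    for K_i, k_i in zip(counts, draws):
        numerator *= comb(K_i, k_i)
    return numerator / comb(N, n)

# 5 black, 10 white, 15 red marbles; draw 6; exactly two of each color.
print(multivariate_hypergeom_pmf([5, 10, 15], [2, 2, 2]))  # ~0.0796
```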
Occurrence and applications
Application to auditing elections
Election audits typically test a sample of machine-counted precincts to see if recounts by hand or machine match the original counts. Mismatches result in either a report or a larger recount. The sampling rates are usually defined by law, not statistical design, so for a legally defined sample size n, what is the probability of missing a problem which is present in K precincts, such as a hack or bug? This is the probability that k = 0. Bugs are often obscure, and a hacker can minimize detection by affecting only a few precincts, which will still affect close elections, so a plausible scenario is for K to be on the order of 5% of N. Audits typically cover 1% to 10% of precincts (often 3%), so they have a high chance of missing a problem. For example, if a problem is present in 5 of 100 precincts, a 3% sample has 86% probability that k = 0, so the problem would not be noticed, and only 14% probability of the problem appearing in the sample (positive k):
\Pr(k = 0) = \frac{\binom{95}{3}}{\binom{100}{3}} \approx 0.86.
The sample would need 45 precincts in order to have probability under 5% that k = 0 in the sample, and thus have probability over 95% of finding the problem:
\Pr(k = 0) = \frac{\binom{95}{45}}{\binom{100}{45}} \approx 0.046.
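These audit probabilities can be reproduced with a few lines of code; the helper name is illustrative.

```python
from math import comb

def prob_miss(N_precincts, bad, sampled):
    """Probability that a random sample contains none of the 'bad' precincts (k = 0)."""
    return comb(N_precincts - bad, sampled) / comb(N_precincts, sampled)

print(prob_miss(100, 5, 3))    # ~0.86: a 3-precinct audit usually misses the problem
print(prob_miss(100, 5, 45))   # ~0.046: 45 sampled precincts push the miss rate below 5%
```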
Application to Texas hold'em poker
In hold'em poker players make the best hand they can combining the two cards in their hand with the 5 cards (community cards) eventually turned up on the table. The deck has 52 cards and there are 13 of each suit.
For this example assume a player has 2 clubs in the hand and there are 3 cards showing on the table, 2 of which are also clubs. The player would like to know the probability that one of the next 2 cards to be shown is a club, completing the flush.
(Note that the probability calculated in this example assumes no information is known about the cards in the other players' hands; however, experienced poker players may consider how the other players place their bets (check, call, raise, or fold) in considering the probability for each scenario. Strictly speaking, the approach to calculating success probabilities outlined here is accurate in a scenario where there is just one player at the table; in a multiplayer game this probability might be adjusted somewhat based on the betting play of the opponents.)
There are 4 clubs showing so there are 9 clubs still unseen. There are 5 cards showing (2 in the hand and 3 on the table) so there are 47 cards still unseen.
The probability that exactly one of the next two cards turned is a club can be calculated using the hypergeometric distribution with k = 1, n = 2, K = 9 and N = 47:
\Pr(X = 1) = \frac{\binom{9}{1} \binom{38}{1}}{\binom{47}{2}} \approx 0.3164 (about 31.64%).
The probability that both of the next two cards turned are clubs can be calculated using the hypergeometric distribution with k = 2, n = 2, K = 9 and N = 47:
\Pr(X = 2) = \frac{\binom{9}{2} \binom{38}{0}}{\binom{47}{2}} \approx 0.0333 (about 3.33%).
The probability that neither of the next two cards turned is a club can be calculated using the hypergeometric distribution with k = 0, n = 2, K = 9 and N = 47:
\Pr(X = 0) = \frac{\binom{9}{0} \binom{38}{2}}{\binom{47}{2}} \approx 0.6503 (about 65.03%).
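The three probabilities above can be verified numerically with the same pmf; the sketch below is illustrative only.

```python
from math import comb

def hypergeom_pmf(N, K, n, k):
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# 9 clubs remain among the 47 unseen cards; two more cards will be dealt.
N, K, n = 47, 9, 2
print(f"exactly one club : {hypergeom_pmf(N, K, n, 1):.4f}")  # ~0.3164
print(f"both clubs       : {hypergeom_pmf(N, K, n, 2):.4f}")  # ~0.0333
print(f"no club          : {hypergeom_pmf(N, K, n, 0):.4f}")  # ~0.6503
```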
Application to Keno
The hypergeometric distribution is indispensable for calculating Keno odds. In Keno, 20 balls are randomly drawn from a collection of 80 numbered balls in a container, rather like American Bingo. Prior to each draw, a player selects a certain number of spots by marking a paper form supplied for this purpose. For example, a player might play a 6-spot by marking 6 numbers, each from a range of 1 through 80 inclusive. Then (after all players have taken their forms to a cashier and been given a duplicate of their marked form, and paid their wager) 20 balls are drawn. Some of the balls drawn may match some or all of the balls selected by the player. Generally speaking, the more hits (balls drawn that match player numbers selected) the greater the payoff.
For example, if a customer bets ("plays") $1 for a 6-spot (not an uncommon example) and hits 4 out of the 6, the casino would pay out $4. Payouts can vary from one casino to the next, but $4 is a typical value here. The probability of this event is:
\Pr(X = 4) = \frac{\binom{20}{4} \binom{60}{2}}{\binom{80}{6}} \approx 0.02854.
Similarly, the chance for hitting 5 spots out of 6 selected is
\Pr(X = 5) = \frac{\binom{20}{5} \binom{60}{1}}{\binom{80}{6}} \approx 0.003096,
while a typical payout might be $88. The payout for hitting all 6 would be around $1500 (probability ≈ 0.000128985 or 7752-to-1). The only other nonzero payout might be $1 for hitting 3 numbers (i.e., you get your bet back), which has a probability near 0.129819548.
Taking the sum of products of payouts times corresponding probabilities we get an expected return of 0.70986492 or roughly 71% for a 6-spot, for a house advantage of 29%. Other spots-played have a similar expected return. This very poor return (for the player) is usually explained by the large overhead (floor space, equipment, personnel) required for the game.
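The Keno figures quoted above can be reproduced as follows; the pay table in the sketch simply mirrors the illustrative payouts mentioned in the text and is not an official schedule.

```python
from math import comb

def hypergeom_pmf(N, K, n, k):
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# 6-spot Keno: 20 of 80 numbers are drawn; the player has marked 6.
N, K, n = 80, 20, 6
payouts = {3: 1, 4: 4, 5: 88, 6: 1500}   # illustrative pay table from the text
expected_return = sum(pay * hypergeom_pmf(N, K, n, hits) for hits, pay in payouts.items())
print(f"P(3 hits) = {hypergeom_pmf(N, K, n, 3):.6f}")    # ~0.129820
print(f"P(6 hits) = {hypergeom_pmf(N, K, n, 6):.9f}")    # ~0.000128985
print(f"expected return per $1 bet = {expected_return:.4f}")  # ~0.71
```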
See also
Noncentral hypergeometric distributions
Negative hypergeometric distribution
Multinomial distribution
Sampling (statistics)
Generalized hypergeometric function
Coupon collector's problem
Geometric distribution
Keno
Lady tasting tea
References
Citations
Sources
External links
The Hypergeometric Distribution and Binomial Approximation to a Hypergeometric Random Variable by Chris Boucher, Wolfram Demonstrations Project.
Discrete distributions
Factorial and binomial topics | Hypergeometric distribution | [
"Mathematics"
] | 3,041 | [
"Factorial and binomial topics",
"Combinatorics"
] |
180,855 | https://en.wikipedia.org/wiki/Kalman%20filter | In statistics and control theory, Kalman filtering (also known as linear quadratic estimation) is an algorithm that uses a series of measurements observed over time, including statistical noise and other inaccuracies, to produce estimates of unknown variables that tend to be more accurate than those based on a single measurement, by estimating a joint probability distribution over the variables for each time-step. The filter is constructed as a mean squared error minimiser, but an alternative derivation of the filter is also provided showing how the filter relates to maximum likelihood statistics. The filter is named after Rudolf E. Kálmán.
Kalman filtering has numerous technological applications. A common application is for guidance, navigation, and control of vehicles, particularly aircraft, spacecraft and ships positioned dynamically. Furthermore, Kalman filtering is much applied in time series analysis tasks such as signal processing and econometrics. Kalman filtering is also important for robotic motion planning and control, and can be used for trajectory optimization. Kalman filtering also works for modeling the central nervous system's control of movement. Due to the time delay between issuing motor commands and receiving sensory feedback, the use of Kalman filters provides a realistic model for making estimates of the current state of a motor system and issuing updated commands.
The algorithm works via a two-phase process: a prediction phase and an update phase. In the prediction phase, the Kalman filter produces estimates of the current state variables, including their uncertainties. Once the outcome of the next measurement (necessarily corrupted with some error, including random noise) is observed, these estimates are updated using a weighted average, with more weight given to estimates with greater certainty. The algorithm is recursive. It can operate in real time, using only the present input measurements and the state calculated previously and its uncertainty matrix; no additional past information is required.
Optimality of Kalman filtering assumes that errors have a normal (Gaussian) distribution. In the words of Rudolf E. Kálmán: "The following assumptions are made about random processes: Physical random phenomena may be thought of as due to primary random sources exciting dynamic systems. The primary sources are assumed to be independent gaussian random processes with zero mean; the dynamic systems will be linear." Regardless of Gaussianity, however, if the process and measurement covariances are known, then the Kalman filter is the best possible linear estimator in the minimum mean-square-error sense, although there may be better nonlinear estimators. It is a common misconception (perpetuated in the literature) that the Kalman filter cannot be rigorously applied unless all noise processes are assumed to be Gaussian.
Extensions and generalizations of the method have also been developed, such as the extended Kalman filter and the unscented Kalman filter which work on nonlinear systems. The basis is a hidden Markov model such that the state space of the latent variables is continuous and all latent and observed variables have Gaussian distributions. Kalman filtering has been used successfully in multi-sensor fusion, and distributed sensor networks to develop distributed or consensus Kalman filtering.
History
The filtering method is named for Hungarian émigré Rudolf E. Kálmán, although Thorvald Nicolai Thiele and Peter Swerling developed a similar algorithm earlier. Richard S. Bucy of the Johns Hopkins Applied Physics Laboratory contributed to the theory, causing it to be known sometimes as Kalman–Bucy filtering. Kalman was inspired to derive the Kalman filter by applying state variables to the Wiener filtering problem.
Stanley F. Schmidt is generally credited with developing the first implementation of a Kalman filter. He realized that the filter could be divided into two distinct parts, with one part for time periods between sensor outputs and another part for incorporating measurements. It was during a visit by Kálmán to the NASA Ames Research Center that Schmidt saw the applicability of Kálmán's ideas to the nonlinear problem of trajectory estimation for the Apollo program resulting in its incorporation in the Apollo navigation computer.
This digital filter is sometimes termed the Stratonovich–Kalman–Bucy filter because it is a special case of a more general, nonlinear filter developed by the Soviet mathematician Ruslan Stratonovich. In fact, some of the special case linear filter's equations appeared in papers by Stratonovich that were published before the summer of 1961, when Kalman met with Stratonovich during a conference in Moscow.
This Kalman filtering was first described and developed partially in technical papers by Swerling (1958), Kalman (1960) and Kalman and Bucy (1961).
Kalman filters have been vital in the implementation of the navigation systems of U.S. Navy nuclear ballistic missile submarines, and in the guidance and navigation systems of cruise missiles such as the U.S. Navy's Tomahawk missile and the U.S. Air Force's Air Launched Cruise Missile. They are also used in the guidance and navigation systems of reusable launch vehicles and the attitude control and navigation systems of spacecraft which dock at the International Space Station.
Overview of the calculation
Kalman filtering uses a system's dynamic model (e.g., physical laws of motion), known control inputs to that system, and multiple sequential measurements (such as from sensors) to form an estimate of the system's varying quantities (its state) that is better than the estimate obtained by using only one measurement alone. As such, it is a common sensor fusion and data fusion algorithm.
Noisy sensor data, approximations in the equations that describe the system evolution, and external factors that are not accounted for, all limit how well it is possible to determine the system's state. The Kalman filter deals effectively with the uncertainty due to noisy sensor data and, to some extent, with random external factors. The Kalman filter produces an estimate of the state of the system as an average of the system's predicted state and of the new measurement using a weighted average. The purpose of the weights is that values with better (i.e., smaller) estimated uncertainty are "trusted" more. The weights are calculated from the covariance, a measure of the estimated uncertainty of the prediction of the system's state. The result of the weighted average is a new state estimate that lies between the predicted and measured state, and has a better estimated uncertainty than either alone. This process is repeated at every time step, with the new estimate and its covariance informing the prediction used in the following iteration. This means that Kalman filter works recursively and requires only the last "best guess", rather than the entire history, of a system's state to calculate a new state.
The measurements' certainty-grading and current-state estimate are important considerations. It is common to discuss the filter's response in terms of the Kalman filter's gain. The Kalman gain is the weight given to the measurements and current-state estimate, and can be "tuned" to achieve a particular performance. With a high gain, the filter places more weight on the most recent measurements, and thus conforms to them more responsively. With a low gain, the filter conforms to the model predictions more closely. At the extremes, a high gain (close to one) will result in a more jumpy estimated trajectory, while a low gain (close to zero) will smooth out noise but decrease the responsiveness.
When performing the actual calculations for the filter (as discussed below), the state estimate and covariances are coded into matrices because of the multiple dimensions involved in a single set of calculations. This allows for a representation of linear relationships between different state variables (such as position, velocity, and acceleration) in any of the transition models or covariances.
Example application
As an example application, consider the problem of determining the precise location of a truck. The truck can be equipped with a GPS unit that provides an estimate of the position within a few meters. The GPS estimate is likely to be noisy; readings 'jump around' rapidly, though remaining within a few meters of the real position. In addition, since the truck is expected to follow the laws of physics, its position can also be estimated by integrating its velocity over time, determined by keeping track of wheel revolutions and the angle of the steering wheel. This is a technique known as dead reckoning. Typically, the dead reckoning will provide a very smooth estimate of the truck's position, but it will drift over time as small errors accumulate.
For this example, the Kalman filter can be thought of as operating in two distinct phases: predict and update. In the prediction phase, the truck's old position will be modified according to the physical laws of motion (the dynamic or "state transition" model). Not only will a new position estimate be calculated, but also a new covariance will be calculated as well. Perhaps the covariance is proportional to the speed of the truck because we are more uncertain about the accuracy of the dead reckoning position estimate at high speeds but very certain about the position estimate at low speeds. Next, in the update phase, a measurement of the truck's position is taken from the GPS unit. Along with this measurement comes some amount of uncertainty, and its covariance relative to that of the prediction from the previous phase determines how much the new measurement will affect the updated prediction. Ideally, as the dead reckoning estimates tend to drift away from the real position, the GPS measurement should pull the position estimate back toward the real position but not disturb it to the point of becoming noisy and rapidly jumping.
Technical description and context
The Kalman filter is an efficient recursive filter estimating the internal state of a linear dynamic system from a series of noisy measurements. It is used in a wide range of engineering and econometric applications from radar and computer vision to estimation of structural macroeconomic models, and is an important topic in control theory and control systems engineering. Together with the linear-quadratic regulator (LQR), the Kalman filter solves the linear–quadratic–Gaussian control problem (LQG). The Kalman filter, the linear-quadratic regulator, and the linear–quadratic–Gaussian controller are solutions to what arguably are the most fundamental problems of control theory.
In most applications, the internal state is much larger (has more degrees of freedom) than the few "observable" parameters which are measured. However, by combining a series of measurements, the Kalman filter can estimate the entire internal state.
For the Dempster–Shafer theory, each state equation or observation is considered a special case of a linear belief function and the Kalman filtering is a special case of combining linear belief functions on a join-tree or Markov tree. Additional methods include belief filtering which use Bayes or evidential updates to the state equations.
A wide variety of Kalman filters exists by now: Kalman's original formulation - now termed the "simple" Kalman filter, the Kalman–Bucy filter, Schmidt's "extended" filter, the information filter, and a variety of "square-root" filters that were developed by Bierman, Thornton, and many others. Perhaps the most commonly used type of very simple Kalman filter is the phase-locked loop, which is now ubiquitous in radios, especially frequency modulation (FM) radios, television sets, satellite communications receivers, outer space communications systems, and nearly any other electronic communications equipment.
Underlying dynamic system model
Kalman filtering is based on linear dynamic systems discretized in the time domain. They are modeled on a Markov chain built on linear operators perturbed by errors that may include Gaussian noise. The state of the target system refers to the ground truth (yet hidden) system configuration of interest, which is represented as a vector of real numbers. At each discrete time increment, a linear operator is applied to the state to generate the new state, with some noise mixed in, and optionally some information from the controls on the system if they are known. Then, another linear operator mixed with more noise generates the measurable outputs (i.e., observation) from the true ("hidden") state. The Kalman filter may be regarded as analogous to the hidden Markov model, with the difference that the hidden state variables have values in a continuous space as opposed to a discrete state space as for the hidden Markov model. There is a strong analogy between the equations of a Kalman Filter and those of the hidden Markov model. A review of this and other models is given in Roweis and Ghahramani (1999) and Hamilton (1994), Chapter 13.
In order to use the Kalman filter to estimate the internal state of a process given only a sequence of noisy observations, one must model the process in accordance with the following framework. This means specifying the following matrices for each time-step k:
F_k, the state-transition model;
H_k, the observation model;
Q_k, the covariance of the process noise;
R_k, the covariance of the observation noise;
and sometimes B_k, the control-input model as described below; if B_k is included, then there is also
u_k, the control vector, representing the controlling input into the control-input model.
As seen below, it is common in many applications that the matrices F, H, Q, R, and B are constant across time, in which case their time index k may be dropped.
The Kalman filter model assumes the true state at time k is evolved from the state at k − 1 according to
x_k = F_k x_{k-1} + B_k u_k + w_k,
where
F_k is the state transition model which is applied to the previous state x_{k−1};
B_k is the control-input model which is applied to the control vector u_k;
w_k is the process noise, which is assumed to be drawn from a zero mean multivariate normal distribution \mathcal{N} with covariance Q_k: w_k \sim \mathcal{N}(0, Q_k).
If Q_k is independent of time, one may, following Roweis and Ghahramani (op. cit.), write \mathcal{N}(0, Q) instead of \mathcal{N}(0, Q_k) to emphasize that the noise has no explicit knowledge of time.
At time k an observation (or measurement) z_k of the true state x_k is made according to
z_k = H_k x_k + v_k,
where
H_k is the observation model, which maps the true state space into the observed space, and
v_k is the observation noise, which is assumed to be zero mean Gaussian white noise with covariance R_k: v_k \sim \mathcal{N}(0, R_k).
Analogously to the situation for w_k, one may write \mathcal{N}(0, R) instead of \mathcal{N}(0, R_k) if R_k is independent of time.
The initial state, and the noise vectors at each step \{x_0, w_1, \dots, w_k, v_1, \dots, v_k\}, are all assumed to be mutually independent.
Many real-time dynamic systems do not exactly conform to this model. In fact, unmodeled dynamics can seriously degrade the filter performance, even when it was supposed to work with unknown stochastic signals as inputs. The reason for this is that the effect of unmodeled dynamics depends on the input, and, therefore, can bring the estimation algorithm to instability (it diverges). On the other hand, independent white noise signals will not make the algorithm diverge. The problem of distinguishing between measurement noise and unmodeled dynamics is a difficult one and is treated as a problem of control theory using robust control.
Details
The Kalman filter is a recursive estimator. This means that only the estimated state from the previous time step and the current measurement are needed to compute the estimate for the current state. In contrast to batch estimation techniques, no history of observations and/or estimates is required. In what follows, the notation \hat{x}_{n \mid m} represents the estimate of x at time n given observations up to and including time m ≤ n.
The state of the filter is represented by two variables:
\hat{x}_{k \mid k}, the a posteriori state estimate mean at time k given observations up to and including at time k;
P_{k \mid k}, the a posteriori estimate covariance matrix (a measure of the estimated accuracy of the state estimate).
The algorithm structure of the Kalman filter resembles that of Alpha beta filter. The Kalman filter can be written as a single equation; however, it is most often conceptualized as two distinct phases: "Predict" and "Update". The predict phase uses the state estimate from the previous timestep to produce an estimate of the state at the current timestep. This predicted state estimate is also known as the a priori state estimate because, although it is an estimate of the state at the current timestep, it does not include observation information from the current timestep. In the update phase, the innovation (the pre-fit residual), i.e. the difference between the current a priori prediction and the current observation information, is multiplied by the optimal Kalman gain and combined with the previous state estimate to refine the state estimate. This improved estimate based on the current observation is termed the a posteriori state estimate.
Typically, the two phases alternate, with the prediction advancing the state until the next scheduled observation, and the update incorporating the observation. However, this is not necessary; if an observation is unavailable for some reason, the update may be skipped and multiple prediction procedures performed. Likewise, if multiple independent observations are available at the same time, multiple update procedures may be performed (typically with different observation matrices Hk).
Predict
Predicted (a priori) state estimate: \hat{x}_{k \mid k-1} = F_k \hat{x}_{k-1 \mid k-1} + B_k u_k
Predicted (a priori) estimate covariance: P_{k \mid k-1} = F_k P_{k-1 \mid k-1} F_k^\mathsf{T} + Q_k
Update
Innovation (measurement pre-fit residual): \tilde{y}_k = z_k - H_k \hat{x}_{k \mid k-1}
Innovation covariance: S_k = H_k P_{k \mid k-1} H_k^\mathsf{T} + R_k
Optimal Kalman gain: K_k = P_{k \mid k-1} H_k^\mathsf{T} S_k^{-1}
Updated (a posteriori) state estimate: \hat{x}_{k \mid k} = \hat{x}_{k \mid k-1} + K_k \tilde{y}_k
Updated (a posteriori) estimate covariance: P_{k \mid k} = (I - K_k H_k) P_{k \mid k-1}
The formula for the updated (a posteriori) estimate covariance above is valid for the optimal Kk gain that minimizes the residual error, in which form it is most widely used in applications. Proof of the formulae is found in the derivations section, where the formula valid for any Kk is also shown.
A more intuitive way to express the updated state estimate (\hat{x}_{k \mid k}) is:
\hat{x}_{k \mid k} = (I - K_k H_k) \hat{x}_{k \mid k-1} + K_k H_k \left( H_k^{-1} z_k \right).
This expression is reminiscent of a linear interpolation x = (1 − t) a + t b for t between 0 and 1.
In our case:
K_k H_k is the matrix that takes values from 0 (high error in the sensor) to I or a projection (low error).
\hat{x}_{k \mid k-1} is the internal state estimated from the model.
H_k^{-1} z_k is the internal state estimated from the measurement, assuming H_k is nonsingular.
This expression also resembles the alpha beta filter update step.
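To make the predict and update phases concrete, here is a minimal NumPy sketch of the recursion described above. It is an illustration under the linear-Gaussian model of this article, not a production implementation; the function names are our own, and the matrices F, B, H, Q, R follow the notation of the model section.

```python
import numpy as np

def kalman_predict(x, P, F, Q, B=None, u=None):
    """Propagate the state estimate and its covariance through the dynamic model."""
    x_pred = F @ x if B is None else F @ x + B @ u
    P_pred = F @ P @ F.T + Q
    return x_pred, P_pred

def kalman_update(x_pred, P_pred, z, H, R):
    """Fold one measurement z into the predicted estimate."""
    y = z - H @ x_pred                      # innovation (pre-fit residual)
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # optimal Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_new, P_new
```

In a typical loop the two functions are called in alternation, one predict and one update per time step; an update can be skipped when no measurement arrives, matching the discussion above.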
Invariants
If the model is accurate, and the values for \hat{x}_{0 \mid 0} and P_{0 \mid 0} accurately reflect the distribution of the initial state values, then the following invariants are preserved:
\operatorname{E}[x_k - \hat{x}_{k \mid k}] = \operatorname{E}[x_k - \hat{x}_{k \mid k-1}] = 0, \qquad \operatorname{E}[\tilde{y}_k] = 0,
where \operatorname{E}[\xi] is the expected value of \xi. That is, all estimates have a mean error of zero.
Also:
P_{k \mid k} = \operatorname{cov}(x_k - \hat{x}_{k \mid k}), \qquad P_{k \mid k-1} = \operatorname{cov}(x_k - \hat{x}_{k \mid k-1}), \qquad S_k = \operatorname{cov}(\tilde{y}_k),
so covariance matrices accurately reflect the covariance of estimates.
Estimation of the noise covariances Qk and Rk
Practical implementation of a Kalman filter is often difficult because it is hard to obtain a good estimate of the noise covariance matrices Qk and Rk. Extensive research has been done to estimate these covariances from data. One practical method of doing this is the autocovariance least-squares (ALS) technique that uses the time-lagged autocovariances of routine operating data to estimate the covariances. The GNU Octave and Matlab code used to calculate the noise covariance matrices using the ALS technique is available online using the GNU General Public License. The Field Kalman Filter (FKF), a Bayesian algorithm which allows simultaneous estimation of the state, parameters and noise covariance, has been proposed. The FKF algorithm has a recursive formulation, good observed convergence, and relatively low complexity, thus suggesting that the FKF algorithm may possibly be a worthwhile alternative to the Autocovariance Least-Squares methods. Another approach is the Optimized Kalman Filter (OKF), which considers the covariance matrices not as representatives of the noise, but rather, as parameters aimed to achieve the most accurate state estimation. These two views coincide under the KF assumptions, but often contradict each other in real systems. Thus, OKF's state estimation is more robust to modeling inaccuracies.
Optimality and performance
It follows from theory that the Kalman filter provides an optimal state estimation in cases where a) the model matches the real system perfectly, b) the entering noise is "white" (uncorrelated), and c) the covariances of the noise are known exactly. Correlated noise can also be treated using Kalman filters.
Several methods for the noise covariance estimation have been proposed during past decades, including ALS, mentioned in the section above. More generally, if the model assumptions do not match the real system perfectly, then optimal state estimation is not necessarily obtained by setting Qk and Rk to the covariances of the noise. Instead, in that case, the parameters Qk and Rk may be set to explicitly optimize the state estimation, e.g., using standard supervised learning.
After the covariances are set, it is useful to evaluate the performance of the filter; i.e., whether it is possible to improve the state estimation quality. If the Kalman filter works optimally, the innovation sequence (the output prediction error) is a white noise, therefore the whiteness property of the innovations measures filter performance. Several different methods can be used for this purpose. If the noise terms are distributed in a non-Gaussian manner, methods for assessing performance of the filter estimate, which use probability inequalities or large-sample theory, are known in the literature.
Example application, technical
Consider a truck on frictionless, straight rails. Initially, the truck is stationary at position 0, but it is buffeted this way and that by random uncontrolled forces. We measure the position of the truck every Δt seconds, but these measurements are imprecise; we want to maintain a model of the truck's position and velocity. We show here how we derive the model from which we create our Kalman filter.
Since F, H, Q and R are constant, their time indices are dropped.
The position and velocity of the truck are described by the linear state space
x_k = \begin{bmatrix} x \\ \dot{x} \end{bmatrix},
where \dot{x} is the velocity, that is, the derivative of position with respect to time.
We assume that between the (k − 1) and k timestep, uncontrolled forces cause a constant acceleration of ak that is normally distributed with mean 0 and standard deviation σa. From Newton's laws of motion we conclude that
x_k = F x_{k-1} + G a_k
(there is no B u_k term since there are no known control inputs. Instead, ak is the effect of an unknown input and G applies that effect to the state vector) where
F = \begin{bmatrix} 1 & \Delta t \\ 0 & 1 \end{bmatrix}, \qquad G = \begin{bmatrix} \tfrac{1}{2} \Delta t^2 \\ \Delta t \end{bmatrix},
so that
x_k = F x_{k-1} + w_k, \qquad w_k \sim \mathcal{N}(0, Q),
where
Q = G G^\mathsf{T} \sigma_a^2 = \begin{bmatrix} \tfrac{1}{4} \Delta t^4 & \tfrac{1}{2} \Delta t^3 \\ \tfrac{1}{2} \Delta t^3 & \Delta t^2 \end{bmatrix} \sigma_a^2.
The matrix Q is not full rank (it is of rank one if \Delta t \neq 0). Hence, the distribution \mathcal{N}(0, Q) is not absolutely continuous and has no probability density function. Another way to express this, avoiding explicit degenerate distributions, is given by
w_k \sim G \cdot \mathcal{N}(0, \sigma_a^2).
At each time step, a noisy measurement of the true position of the truck is made. Let us suppose the measurement noise vk is also distributed normally, with mean 0 and standard deviation σz:
z_k = H x_k + v_k,
where
H = \begin{bmatrix} 1 & 0 \end{bmatrix}
and
R = \operatorname{E}\!\left[ v_k v_k^\mathsf{T} \right] = \begin{bmatrix} \sigma_z^2 \end{bmatrix}.
We know the initial starting state of the truck with perfect precision, so we initialize
\hat{x}_{0 \mid 0} = \begin{bmatrix} 0 \\ 0 \end{bmatrix},
and to tell the filter that we know the exact position and velocity, we give it a zero covariance matrix:
P_{0 \mid 0} = \begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}.
If the initial position and velocity are not known perfectly, the covariance matrix should be initialized with suitable variances on its diagonal:
P_{0 \mid 0} = \begin{bmatrix} \sigma_x^2 & 0 \\ 0 & \sigma_{\dot{x}}^2 \end{bmatrix}.
The filter will then prefer the information from the first measurements over the information already in the model.
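As an illustration, the truck model above can be instantiated numerically; the specific values of Δt, σa and σz below are invented for the example and are not taken from the text. The resulting arrays can be fed directly to the predict/update sketch given earlier.

```python
import numpy as np

dt, sigma_a, sigma_z = 1.0, 0.5, 2.0         # illustrative values only

F = np.array([[1.0, dt], [0.0, 1.0]])        # constant-velocity state transition
G = np.array([[0.5 * dt**2], [dt]])          # how an unknown acceleration enters the state
Q = G @ G.T * sigma_a**2                     # process-noise covariance (rank one)
H = np.array([[1.0, 0.0]])                   # only position is measured
R = np.array([[sigma_z**2]])                 # measurement-noise covariance

x = np.zeros((2, 1))                         # known exact initial position and velocity
P = np.zeros((2, 2))                         # hence zero initial covariance
```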
Asymptotic form
For simplicity, assume that the control input u_k = 0. Then the Kalman filter may be written:
\hat{x}_{k \mid k} = F \hat{x}_{k-1 \mid k-1} + K_k \left( z_k - H F \hat{x}_{k-1 \mid k-1} \right).
A similar equation holds if we include a non-zero control input. Gain matrices K_k evolve independently of the measurements z_k. From above, the four equations needed for updating the Kalman gain are as follows:
P_{k \mid k-1} = F P_{k-1 \mid k-1} F^\mathsf{T} + Q,
S_k = H P_{k \mid k-1} H^\mathsf{T} + R,
K_k = P_{k \mid k-1} H^\mathsf{T} S_k^{-1},
P_{k \mid k} = (I - K_k H) P_{k \mid k-1}.
Since the gain matrices depend only on the model, and not the measurements, they may be computed offline. Convergence of the gain matrices to an asymptotic matrix applies for conditions established in Walrand and Dimakis. Simulations establish the number of steps to convergence. For the moving truck example described above, with . and , simulation shows convergence in iterations.
Using the asymptotic gain K_\infty, and assuming H_k and F_k are independent of k, the Kalman filter becomes a linear time-invariant filter:
\hat{x}_k = F \hat{x}_{k-1} + K_\infty \left( z_k - H F \hat{x}_{k-1} \right).
The asymptotic gain K_\infty, if it exists, can be computed by first solving the following discrete Riccati equation for the asymptotic state covariance P_\infty:
P_\infty = F \left( P_\infty - P_\infty H^\mathsf{T} \left( H P_\infty H^\mathsf{T} + R \right)^{-1} H P_\infty \right) F^\mathsf{T} + Q.
The asymptotic gain is then computed as before:
K_\infty = P_\infty H^\mathsf{T} \left( H P_\infty H^\mathsf{T} + R \right)^{-1}.
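A simple way to obtain the asymptotic gain numerically is to iterate the Riccati recursion until the predicted covariance stops changing; the sketch below is one such illustration (function name our own), not a robust Riccati solver.

```python
import numpy as np

def asymptotic_gain(F, H, Q, R, iters=1000, tol=1e-12):
    """Iterate the discrete Riccati recursion until the predicted covariance converges,
    then form the corresponding steady-state Kalman gain."""
    P = Q.copy()
    for _ in range(iters):
        S = H @ P @ H.T + R
        P_next = F @ (P - P @ H.T @ np.linalg.inv(S) @ H @ P) @ F.T + Q
        if np.allclose(P_next, P, atol=tol):
            P = P_next
            break
        P = P_next
    K_inf = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    return K_inf, P
```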
Additionally, a form of the asymptotic Kalman filter more commonly used in control theory is given by
where
This leads to an estimator of the form
Derivations
The Kalman filter can be derived as a generalized least squares method operating on previous data.
Deriving the posteriori estimate covariance matrix
Starting with our invariant on the error covariance Pk | k as above
substitute in the definition of
and substitute
and
and by collecting the error vectors we get
Since the measurement error vk is uncorrelated with the other terms, this becomes
by the properties of vector covariance this becomes
which, using our invariant on Pk | k−1 and the definition of Rk becomes
This formula (sometimes known as the Joseph form of the covariance update equation) is valid for any value of Kk. It turns out that if Kk is the optimal Kalman gain, this can be simplified further as shown below.
Kalman gain derivation
The Kalman filter is a minimum mean-square error (MMSE) estimator. The error in the a posteriori state estimation is
We seek to minimize the expected value of the square of the magnitude of this vector, . This is equivalent to minimizing the trace of the a posteriori estimate covariance matrix . By expanding out the terms in the equation above and collecting, we get:
The trace is minimized when its matrix derivative with respect to the gain matrix is zero. Using the gradient matrix rules and the symmetry of the matrices involved we find that
Solving this for Kk yields the Kalman gain:
This gain, which is known as the optimal Kalman gain, is the one that yields MMSE estimates when used.
Simplification of the posteriori error covariance formula
The formula used to calculate the a posteriori error covariance can be simplified when the Kalman gain equals the optimal value derived above. Multiplying both sides of our Kalman gain formula on the right by SkKkT, it follows that
Referring back to our expanded formula for the a posteriori error covariance,
we find the last two terms cancel out, giving
This formula is computationally cheaper and thus nearly always used in practice, but is only correct for the optimal gain. If arithmetic precision is unusually low causing problems with numerical stability, or if a non-optimal Kalman gain is deliberately used, this simplification cannot be applied; the a posteriori error covariance formula as derived above (Joseph form) must be used.
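The difference between the two covariance-update formulas can be seen side by side in the following sketch; the function names are our own, and the Joseph form shown is the general expression valid for any gain.

```python
import numpy as np

def update_covariance_simple(P_pred, K, H):
    """Short form: correct only when K is the optimal Kalman gain."""
    I = np.eye(P_pred.shape[0])
    return (I - K @ H) @ P_pred

def update_covariance_joseph(P_pred, K, H, R):
    """Joseph form: valid for any gain and better behaved numerically."""
    I = np.eye(P_pred.shape[0])
    A = I - K @ H
    return A @ P_pred @ A.T + K @ R @ K.T
```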
Sensitivity analysis
The Kalman filtering equations provide an estimate of the state and its error covariance recursively. The estimate and its quality depend on the system parameters and the noise statistics fed as inputs to the estimator. This section analyzes the effect of uncertainties in the statistical inputs to the filter. In the absence of reliable statistics or the true values of noise covariance matrices and , the expression
no longer provides the actual error covariance. In other words, . In most real-time applications, the covariance matrices that are used in designing the Kalman filter are different from the actual (true) noise covariances matrices. This sensitivity analysis describes the behavior of the estimation error covariance when the noise covariances as well as the system matrices and that are fed as inputs to the filter are incorrect. Thus, the sensitivity analysis describes the robustness (or sensitivity) of the estimator to misspecified statistical and parametric inputs to the estimator.
This discussion is limited to the error sensitivity analysis for the case of statistical uncertainties. Here the actual noise covariances are denoted by and respectively, whereas the design values used in the estimator are and respectively. The actual error covariance is denoted by and as computed by the Kalman filter is referred to as the Riccati variable. When and , this means that . While computing the actual error covariance using , substituting for and using the fact that and , results in the following recursive equations for :
and
While computing , by design the filter implicitly assumes that and . The recursive expressions for and are identical except for the presence of and in place of the design values and respectively. Research has been done to analyze the robustness of Kalman filter systems.
Factored form
One problem with the Kalman filter is its numerical stability. If the process noise covariance Qk is small, round-off error often causes a small positive eigenvalue of the state covariance matrix P to be computed as a negative number. This renders the numerical representation of P indefinite, while its true form is positive-definite.
Positive definite matrices have the property that they have a factorization into the product of a non-singular, lower-triangular matrix S and its transpose: P = S·ST. The factor S can be computed efficiently using the Cholesky factorization algorithm. This product form of the covariance matrix P is guaranteed to be symmetric, and for all 1 ≤ k ≤ n, the k-th diagonal element Pkk is equal to the square of the Euclidean norm of the k-th row of S, which is necessarily positive. An equivalent form, which avoids many of the square root operations involved in the Cholesky factorization algorithm, yet preserves the desirable numerical properties, is the U-D decomposition form, P = U·D·UT, where U is a unit triangular matrix (with unit diagonal), and D is a diagonal matrix.
Between the two, the U-D factorization uses the same amount of storage, and somewhat less computation, and is the most commonly used triangular factorization. (Early literature on the relative efficiency is somewhat misleading, as it assumed that square roots were much more time-consuming than divisions, while on 21st-century computers they are only slightly more expensive.)
Efficient algorithms for the Kalman prediction and update steps in the factored form were developed by G. J. Bierman and C. L. Thornton.
The L·D·LT decomposition of the innovation covariance matrix Sk is the basis for another type of numerically efficient and robust square root filter. The algorithm starts with the LU decomposition as implemented in the Linear Algebra PACKage (LAPACK). These results are further factored into the L·D·LT structure with methods given by Golub and Van Loan (algorithm 4.1.2) for a symmetric nonsingular matrix. Any singular covariance matrix is pivoted so that the first diagonal partition is nonsingular and well-conditioned. The pivoting algorithm must retain any portion of the innovation covariance matrix directly corresponding to observed state-variables Hk·xk|k-1 that are associated with auxiliary observations in
yk. The L·D·LT square-root filter requires orthogonalization of the observation vector. This may be done with the inverse square-root of the covariance matrix for the auxiliary variables using Method 2 in Higham (2002, p. 263).
Parallel form
The Kalman filter is efficient for sequential data processing on central processing units (CPUs), but in its original form it is inefficient on parallel architectures such as graphics processing units (GPUs). It is however possible to express the filter-update routine in terms of an associative operator using the formulation in Särkkä and García-Fernández (2021). The filter solution can then be retrieved by the use of a prefix sum algorithm which can be efficiently implemented on GPU. This reduces the computational complexity from O(N) in the number of time steps N to O(\log N).
Relationship to recursive Bayesian estimation
The Kalman filter can be presented as one of the simplest dynamic Bayesian networks. The Kalman filter calculates estimates of the true values of states recursively over time using incoming measurements and a mathematical process model. Similarly, recursive Bayesian estimation calculates estimates of an unknown probability density function (PDF) recursively over time using incoming measurements and a mathematical process model.
In recursive Bayesian estimation, the true state is assumed to be an unobserved Markov process, and the measurements are the observed states of a hidden Markov model (HMM).
Because of the Markov assumption, the true state is conditionally independent of all earlier states given the immediately previous state.
Similarly, the measurement at the k-th timestep is dependent only upon the current state and is conditionally independent of all other states given the current state.
Using these assumptions the probability distribution over all states of the hidden Markov model can be written simply as:
However, when a Kalman filter is used to estimate the state x, the probability distribution of interest is that associated with the current states conditioned on the measurements up to the current timestep. This is achieved by marginalizing out the previous states and dividing by the probability of the measurement set.
This results in the predict and update phases of the Kalman filter written probabilistically. The probability distribution associated with the predicted state is the sum (integral) of the products of the probability distribution associated with the transition from the (k − 1)-th timestep to the k-th and the probability distribution associated with the previous state, over all possible x_{k-1}:
p(x_k \mid Z_{k-1}) = \int p(x_k \mid x_{k-1}) \, p(x_{k-1} \mid Z_{k-1}) \, dx_{k-1}.
The measurement set up to time t is
Z_t = \{ z_1, \dots, z_t \}.
The probability distribution of the update is proportional to the product of the measurement likelihood and the predicted state:
p(x_k \mid Z_k) = \frac{p(z_k \mid x_k) \, p(x_k \mid Z_{k-1})}{p(z_k \mid Z_{k-1})}.
The denominator
p(z_k \mid Z_{k-1}) = \int p(z_k \mid x_k) \, p(x_k \mid Z_{k-1}) \, dx_k
is a normalization term.
The remaining probability density functions are
The PDF at the previous timestep is assumed inductively to be the estimated state and covariance. This is justified because, as an optimal estimator, the Kalman filter makes best use of the measurements, therefore the PDF for given the measurements is the Kalman filter estimate.
Marginal likelihood
Related to the recursive Bayesian interpretation described above, the Kalman filter can be viewed as a generative model, i.e., a process for generating a stream of random observations z = (z0, z1, z2, ...). Specifically, the process is
Sample a hidden state from the Gaussian prior distribution .
Sample an observation from the observation model .
For , do
Sample the next hidden state from the transition model
Sample an observation from the observation model
This process has identical structure to the hidden Markov model, except that the discrete state and observations are replaced with continuous variables sampled from Gaussian distributions.
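A minimal sketch of this generative process for a linear-Gaussian state-space model is shown below in Python; the matrices F, H, Q, R and the prior are illustrative assumptions, not values from this article.

```python
import numpy as np

rng = np.random.default_rng(0)

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # assumed state-transition model
H = np.array([[1.0, 0.0]])               # assumed observation model
Q = 0.01 * np.eye(2)                     # process-noise covariance
R = np.array([[0.25]])                   # observation-noise covariance
x0_mean, P0 = np.zeros(2), np.eye(2)     # Gaussian prior on the hidden state

def sample_trajectory(num_steps):
    xs, zs = [], []
    x = rng.multivariate_normal(x0_mean, P0)    # sample hidden state from the prior
    for _ in range(num_steps):
        z = rng.multivariate_normal(H @ x, R)   # sample observation given the state
        xs.append(x)
        zs.append(z)
        x = rng.multivariate_normal(F @ x, Q)   # sample the next hidden state
    return np.array(xs), np.array(zs)

states, observations = sample_trajectory(50)
```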
In some applications, it is useful to compute the probability that a Kalman filter with a given set of parameters (prior distribution, transition and observation models, and control inputs) would generate a particular observed signal. This probability is known as the marginal likelihood because it integrates over ("marginalizes out") the values of the hidden state variables, so it can be computed using only the observed signal. The marginal likelihood can be useful to evaluate different parameter choices, or to compare the Kalman filter against other models using Bayesian model comparison.
It is straightforward to compute the marginal likelihood as a side effect of the recursive filtering computation. By the chain rule, the likelihood can be factored as the product of the probability of each observation given previous observations,
,
and because the Kalman filter describes a Markov process, all relevant information from previous observations is contained in the current state estimate. Thus the marginal likelihood is given by
i.e., a product of Gaussian densities, each corresponding to the density of one observation zk under the current filtering distribution . This can easily be computed as a simple recursive update; however, to avoid numeric underflow, in a practical implementation it is usually desirable to compute the log marginal likelihood instead. Adopting the convention , this can be done via the recursive update rule
where is the dimension of the measurement vector.
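A minimal sketch of the per-observation contribution to the log marginal likelihood, assuming the usual innovation y_k = z_k − H x_{k|k−1} and innovation covariance S_k = H P_{k|k−1} Hᵀ + R produced during the filter's update step:

```python
import numpy as np

def log_likelihood_increment(innovation, S):
    """Log of the Gaussian density N(innovation; 0, S); S is assumed positive definite."""
    d = innovation.shape[0]                      # dimension of the measurement vector
    _, logdet = np.linalg.slogdet(S)
    quad = innovation @ np.linalg.solve(S, innovation)
    return -0.5 * (d * np.log(2.0 * np.pi) + logdet + quad)

# The log marginal likelihood accumulates over time:
#   log p(z_1, ..., z_K) = sum over k of log_likelihood_increment(y_k, S_k)
```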
An important application where such a (log) likelihood of the observations (given the filter parameters) is used is multi-target tracking. For example, consider an object tracking scenario where a stream of observations is the input; however, it is unknown how many objects are in the scene (or, the number of objects is known but is greater than one). For such a scenario, it can be unknown a priori which observations/measurements were generated by which object. A multiple hypothesis tracker (MHT) typically will form different track association hypotheses, where each hypothesis can be considered as a Kalman filter (for the linear Gaussian case) with a specific set of parameters associated with the hypothesized object. Thus, it is important to compute the likelihood of the observations for the different hypotheses under consideration, such that the most likely one can be found.
Information filter
In cases where the dimension of the observation vector y is bigger than the dimension of the state space vector x, the information filter can avoid the inversion of a bigger matrix in the Kalman gain calculation at the price of inverting a smaller matrix in the prediction step, thus saving computing time. Additionally, the information filter allows the filter to be initialized with zero information (representing no prior knowledge of the state), which would not be possible for the regular Kalman filter. In the information filter, or inverse covariance filter, the estimated covariance and estimated state are replaced by the information matrix and information vector respectively. These are defined as:
Similarly the predicted covariance and state have equivalent information forms, defined as:
and the measurement covariance and measurement vector, which are defined as:
The information update now becomes a trivial sum.
The main advantage of the information filter is that N measurements can be filtered at each time step simply by summing their information matrices and vectors.
To predict the information filter the information matrix and vector can be converted back to their state space equivalents, or alternatively the information space prediction can be used.
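A minimal sketch of the information-form measurement update, in which several measurements taken at the same time step are fused by summing their information contributions (notation assumed; the prediction step is omitted):

```python
import numpy as np

def information_update(Y_pred, y_pred, measurements):
    """Y_pred, y_pred: predicted information matrix and vector.
    measurements: iterable of (z, H, R) triples for the current time step."""
    Y, y = Y_pred.copy(), y_pred.copy()
    for z, H, R in measurements:
        R_inv = np.linalg.inv(R)
        Y = Y + H.T @ R_inv @ H      # information-matrix contribution of this measurement
        y = y + H.T @ R_inv @ z      # information-vector contribution of this measurement
    return Y, y

# State-space quantities can be recovered when needed:
#   P = inv(Y) and x = solve(Y, y)
```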
Fixed-lag smoother
The optimal fixed-lag smoother provides the optimal estimate of for a given fixed-lag using the measurements from to . It can be derived using the previous theory via an augmented state, and the main equation of the filter is the following:
where:
is estimated via a standard Kalman filter;
is the innovation produced considering the estimate of the standard Kalman filter;
the various with are new variables; i.e., they do not appear in the standard Kalman filter;
the gains are computed via the following scheme:
and
where and are the prediction error covariance and the gains of the standard Kalman filter (i.e., ).
If the estimation error covariance is defined so that
then we have that the improvement on the estimation of is given by:
Fixed-interval smoothers
The optimal fixed-interval smoother provides the optimal estimate of () using the measurements from a fixed interval to . This is also called "Kalman Smoothing". There are several smoothing algorithms in common use.
Rauch–Tung–Striebel
The Rauch–Tung–Striebel (RTS) smoother is an efficient two-pass algorithm for fixed interval smoothing.
The forward pass is the same as the regular Kalman filter algorithm. The filtered a-priori and a-posteriori state estimates and covariances are saved for use in the backward pass (for retrodiction).
In the backward pass, we compute the smoothed state estimates and covariances . We start at the last time step and proceed backward in time using the following recursive equations:
where
is the a-posteriori state estimate of timestep and is the a-priori state estimate of timestep . The same notation applies to the covariance.
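A minimal sketch of the RTS backward pass in Python (assumed notation): the inputs are the filtered estimates x_{k|k}, P_{k|k} and the one-step predictions x_{k|k−1}, P_{k|k−1} saved during the forward pass, together with the state-transition matrix F.

```python
import numpy as np

def rts_backward_pass(x_filt, P_filt, x_pred, P_pred, F):
    n = len(x_filt)
    x_smooth = list(x_filt)
    P_smooth = list(P_filt)
    # The last smoothed estimate equals the last filtered estimate.
    for k in range(n - 2, -1, -1):
        # Smoother gain C_k = P_{k|k} F^T P_{k+1|k}^{-1}
        C = P_filt[k] @ F.T @ np.linalg.inv(P_pred[k + 1])
        x_smooth[k] = x_filt[k] + C @ (x_smooth[k + 1] - x_pred[k + 1])
        P_smooth[k] = P_filt[k] + C @ (P_smooth[k + 1] - P_pred[k + 1]) @ C.T
    return x_smooth, P_smooth
```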
Modified Bryson–Frazier smoother
An alternative to the RTS algorithm is the modified Bryson–Frazier (MBF) fixed interval smoother developed by Bierman. This also uses a backward pass that processes data saved from the Kalman filter forward pass. The equations for the backward pass involve the recursive
computation of data which are used at each observation time to compute the smoothed state and covariance.
The recursive equations are
where is the residual covariance and . The smoothed state and covariance can then be found by substitution in the equations
or
An important advantage of the MBF is that it does not require finding the inverse of the covariance matrix. Bierman's derivation is based on the RTS smoother, which assumes that the underlying distributions are Gaussian. However, a derivation of the MBF based on the concept of the fixed point smoother, which does not require the Gaussian assumption, is given by Gibbs.
The MBF can also be used to perform consistency checks on the filter residuals and the difference between the value of a filter state after an update and the smoothed value of the state, that is .
Minimum-variance smoother
The minimum-variance smoother can attain the best-possible error performance, provided that the models are linear, their parameters and the noise statistics are known precisely. This smoother is a time-varying state-space generalization of the optimal non-causal Wiener filter.
The smoother calculations are done in two passes. The forward calculations involve a one-step-ahead predictor and are given by
The above system is known as the inverse Wiener-Hopf factor. The backward recursion is the adjoint of the above forward system. The result of the backward pass may be calculated by operating the forward equations on the time-reversed and time reversing the result. In the case of output estimation, the smoothed estimate is given by
Taking the causal part of this minimum-variance smoother yields
which is identical to the minimum-variance Kalman filter. The above solutions minimize the variance of the output estimation error. Note that the Rauch–Tung–Striebel smoother derivation assumes that the underlying distributions are Gaussian, whereas the minimum-variance solutions do not. Optimal smoothers for state estimation and input estimation can be constructed similarly.
A continuous-time version of the above smoother is described in.
Expectation–maximization algorithms may be employed to calculate approximate maximum likelihood estimates of unknown state-space parameters within minimum-variance filters and smoothers. Often uncertainties remain within problem assumptions. A smoother that accommodates uncertainties can be designed by adding a positive definite term to the Riccati equation.
In cases where the models are nonlinear, step-wise linearizations may be used within the minimum-variance filter and smoother recursions (extended Kalman filtering).
Frequency-weighted Kalman filters
Pioneering research on the perception of sounds at different frequencies was conducted by Fletcher and Munson in the 1930s. Their work led to a standard way of weighting measured sound levels within investigations of industrial noise and hearing loss. Frequency weightings have since been used within filter and controller designs to manage performance within bands of interest.
Typically, a frequency shaping function is used to weight the average power of the error spectral density in a specified frequency band. Let denote the output estimation error exhibited by a conventional Kalman filter. Also, let denote a causal frequency weighting transfer function. The optimum solution which minimizes the variance of arises by simply constructing .
The design of remains an open question. One way of proceeding is to identify a system which generates the estimation error and setting equal to the inverse of that system. This procedure may be iterated to obtain mean-square error improvement at the cost of increased filter order. The same technique can be applied to smoothers.
Nonlinear filters
The basic Kalman filter is limited to a linear assumption. More complex systems, however, can be nonlinear. The nonlinearity can be associated either with the process model or with the observation model or with both.
The most common variants of the Kalman filter for nonlinear systems are the extended Kalman filter and the unscented Kalman filter. Which filter is more suitable depends on the non-linearity indices of the process and observation models.
Extended Kalman filter
In the extended Kalman filter (EKF), the state transition and observation models need not be linear functions of the state but may instead be nonlinear functions, provided those functions are differentiable.
The function f can be used to compute the predicted state from the previous estimate and similarly the function h can be used to compute the predicted measurement from the predicted state. However, f and h cannot be applied to the covariance directly. Instead a matrix of partial derivatives (the Jacobian) is computed.
At each timestep the Jacobian is evaluated with current predicted states. These matrices can be used in the Kalman filter equations. This process essentially linearizes the nonlinear function around the current estimate.
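A minimal sketch of one EKF predict–update cycle, assuming user-supplied nonlinear models f and h and functions F_jac and H_jac returning their Jacobians at a given state:

```python
import numpy as np

def ekf_step(x, P, z, f, h, F_jac, H_jac, Q, R):
    # Predict: propagate the estimate through f; propagate the covariance
    # through the Jacobian of f evaluated at the previous estimate.
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q

    # Update: linearize the observation model around the predicted state.
    H = H_jac(x_pred)
    y = z - h(x_pred)                        # innovation
    S = H @ P_pred @ H.T + R                 # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```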
Unscented Kalman filter
When the state transition and observation models—that is, the predict and update functions f and h—are highly nonlinear, the extended Kalman filter can give particularly poor performance.
This is because the covariance is propagated through linearization of the underlying nonlinear model. The unscented Kalman filter (UKF) uses a deterministic sampling technique known as the unscented transformation (UT) to pick a minimal set of sample points (called sigma points) around the mean. The sigma points are then propagated through the nonlinear functions, from which a new mean and covariance estimate are then formed. The resulting filter depends on how the transformed statistics of the UT are calculated and which set of sigma points are used. It should be remarked that it is always possible to construct new UKFs in a consistent way. For certain systems, the resulting UKF more accurately estimates the true mean and covariance. This can be verified with Monte Carlo sampling or Taylor series expansion of the posterior statistics. In addition, this technique removes the requirement to explicitly calculate Jacobians, which for complex functions can be a difficult task in itself (i.e., requiring complicated derivatives if done analytically or being computationally costly if done numerically), if not impossible (if those functions are not differentiable).
Sigma points
For a random vector , sigma points are any set of vectors
attributed with
first-order weights that fulfill
for all :
second-order weights that fulfill
for all pairs .
A simple choice of sigma points and weights for in the UKF algorithm is
where is the mean estimate of . The vector is the jth column of where . Typically, is obtained via Cholesky decomposition of . With some care the filter equations can be expressed in such a way that is evaluated directly without intermediate calculations of . This is referred to as the square-root unscented Kalman filter.
The weight of the mean value, , can be chosen arbitrarily.
Another popular parameterization (which generalizes the above) is
α and κ control the spread of the sigma points, while β is related to the distribution of x. Note that this is an overparameterization in the sense that any one of α, β and κ can be chosen arbitrarily.
Appropriate values depend on the problem at hand, but a typical recommendation is α = 10⁻³, κ = 0, and β = 2. If the true distribution of x is Gaussian, β = 2 is optimal.
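A minimal sketch of sigma-point generation with the alpha/beta/kappa parameterization described above; the default values follow the typical recommendation and are assumptions for illustration.

```python
import numpy as np

def sigma_points(x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Return 2n+1 sigma points plus first-order (mean) and second-order
    (covariance) weights for a state of dimension n."""
    n = len(x)
    lam = alpha**2 * (n + kappa) - n
    A = np.linalg.cholesky((n + lam) * P)          # matrix square root of (n + lambda) P
    points = np.vstack([x, x + A.T, x - A.T])      # row i of A.T is column i of A
    Wm = np.full(2 * n + 1, 1.0 / (2.0 * (n + lam)))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = lam / (n + lam) + (1.0 - alpha**2 + beta)
    return points, Wm, Wc
```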
Predict
As with the EKF, the UKF prediction can be used independently from the UKF update, in combination with a linear (or indeed EKF) update, or vice versa.
Given estimates of the mean and covariance, and , one obtains sigma points as described in the section above. The sigma points are propagated through the transition function f.
.
The propagated sigma points are weighed to produce the predicted mean and covariance.
where are the first-order weights of the original sigma points, and are the second-order weights. The matrix is the covariance of the transition noise, .
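Continuing the sigma-point sketch above, the prediction step propagates each sigma point through the transition function f and forms the weighted mean and covariance; Q denotes the transition-noise covariance, and the sigma_points helper is the illustrative one defined earlier.

```python
import numpy as np

def ukf_predict(x, P, f, Q, **sp_kwargs):
    pts, Wm, Wc = sigma_points(x, P, **sp_kwargs)   # helper from the sketch above
    prop = np.array([f(p) for p in pts])            # propagate each sigma point through f
    x_pred = Wm @ prop                              # weighted predicted mean
    diff = prop - x_pred
    P_pred = (Wc[:, None] * diff).T @ diff + Q      # weighted predicted covariance
    return x_pred, P_pred
```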
Update
Given prediction estimates and , a new set of sigma points with corresponding first-order weights and second-order weights is calculated. These sigma points are transformed through the measurement function .
.
Then the empirical mean and covariance of the transformed points are calculated.
where is the covariance matrix of the observation noise, . Additionally, the cross covariance matrix is also needed
The Kalman gain is
The updated mean and covariance estimates are
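Continuing the same sketch (notation assumed), the update step draws a fresh set of sigma points around the predicted moments, transforms them through the measurement function h, and applies the gain computed from the innovation and cross covariances.

```python
import numpy as np

def ukf_update(x_pred, P_pred, z, h, R, **sp_kwargs):
    pts, Wm, Wc = sigma_points(x_pred, P_pred, **sp_kwargs)   # helper from the sketch above
    Z = np.array([h(p) for p in pts])          # sigma points pushed through h
    z_hat = Wm @ Z                             # empirical measurement mean
    dz = Z - z_hat
    dx = pts - x_pred
    S = (Wc[:, None] * dz).T @ dz + R          # innovation covariance
    C = (Wc[:, None] * dx).T @ dz              # cross covariance
    K = C @ np.linalg.inv(S)                   # Kalman gain
    x_new = x_pred + K @ (z - z_hat)
    P_new = P_pred - K @ S @ K.T
    return x_new, P_new
```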
Discriminative Kalman filter
When the observation model is highly non-linear and/or non-Gaussian, it may prove advantageous to apply Bayes' rule and estimate
where for nonlinear functions . This replaces the generative specification of the standard Kalman filter with a discriminative model for the latent states given observations.
Under a stationary state model
where , if
then given a new observation , it follows that
where
Note that this approximation requires to be positive-definite; in the case that it is not,
is used instead. Such an approach proves particularly useful when the dimensionality of the observations is much greater than that of the latent states and can be used to build filters that are particularly robust to nonstationarities in the observation model.
Adaptive Kalman filter
Adaptive Kalman filters allow the filter to adapt to process dynamics that are not modeled in the process model, which happens for example in the context of a maneuvering target when a constant-velocity (reduced order) Kalman filter is employed for tracking.
Kalman–Bucy filter
Kalman–Bucy filtering (named for Richard Snowden Bucy) is a continuous time version of Kalman filtering.
It is based on the state space model
where and represent the intensities of the two white noise terms and , respectively.
The filter consists of two differential equations, one for the state estimate and one for the covariance:
where the Kalman gain is given by
Note that in this expression the covariance of the observation noise also represents the covariance of the prediction error (or innovation); these covariances are equal only in the case of continuous time.
The distinction between the prediction and update steps of discrete-time Kalman filtering does not exist in continuous time.
The second differential equation, for the covariance, is an example of a Riccati equation. Nonlinear generalizations to Kalman–Bucy filters include continuous time extended Kalman filter.
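A minimal sketch of a forward-Euler integration of the Kalman–Bucy equations (assumed notation; z denotes the continuous-time measurement sampled at the step):

```python
import numpy as np

def kalman_bucy_euler_step(x, P, z, F, H, Q, R, dt):
    K = P @ H.T @ np.linalg.inv(R)              # Kalman gain
    dx = F @ x + K @ (z - H @ x)                # d(x_hat)/dt
    dP = F @ P + P @ F.T + Q - K @ R @ K.T      # matrix Riccati equation for dP/dt
    return x + dt * dx, P + dt * dP
```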
Hybrid Kalman filter
Most physical systems are represented as continuous-time models while discrete-time measurements are made frequently for state estimation via a digital processor. Therefore, the system model and measurement model are given by
where
.
Initialize
Predict
The prediction equations are derived from those of the continuous-time Kalman filter without the update from measurements (i.e., with the Kalman gain set to zero). The predicted state and covariance are calculated respectively by solving a set of differential equations with the initial value equal to the estimate at the previous step.
For the case of linear time invariant systems, the continuous time dynamics can be exactly discretized into a discrete time system using matrix exponentials.
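A minimal sketch of this exact discretization for a linear time-invariant model, using the matrix exponential; the process-noise covariance is discretized with Van Loan's block-matrix construction, and the names and step size are illustrative.

```python
import numpy as np
from scipy.linalg import expm

def discretize_lti(F, Q, dt):
    """Return the discrete transition matrix and process-noise covariance
    for continuous dynamics dx/dt = F x + w, with w having intensity Q."""
    n = F.shape[0]
    M = np.zeros((2 * n, 2 * n))
    M[:n, :n] = -F
    M[:n, n:] = Q
    M[n:, n:] = F.T
    EM = expm(M * dt)
    F_d = EM[n:, n:].T          # equals expm(F * dt)
    Q_d = F_d @ EM[:n, n:]      # integral of expm(F t) Q expm(F^T t) dt over the step
    return F_d, Q_d
```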
Update
The update equations are identical to those of the discrete-time Kalman filter.
Variants for the recovery of sparse signals
The traditional Kalman filter has also been employed for the recovery of sparse, possibly dynamic, signals from noisy observations. Recent works utilize notions from the theory of compressed sensing/sampling, such as the restricted isometry property and related probabilistic recovery arguments, for sequentially estimating the sparse state in intrinsically low-dimensional systems.
Relation to Gaussian processes
Since linear Gaussian state-space models lead to Gaussian processes, Kalman filters can be viewed as sequential solvers for Gaussian process regression.
Applications
Attitude and heading reference systems
Autopilot
Electric battery state of charge (SoC) estimation
Brain–computer interfaces
Tracking and vertex fitting of charged particles in particle detectors
Tracking of objects in computer vision
Dynamic positioning in shipping
Economics, in particular macroeconomics, time series analysis, and econometrics
Inertial guidance system
Nuclear medicine – single photon emission computed tomography image restoration
Orbit determination
Power system state estimation
Radar tracker
Satellite navigation systems
Seismology
Sensorless control of AC motor variable-frequency drives
Simultaneous localization and mapping
Speech enhancement
Visual odometry
Weather forecasting
Navigation system
3D modeling
Structural health monitoring
Human sensorimotor processing
See also
Alpha beta filter
Inverse-variance weighting
Covariance intersection
Data assimilation
Ensemble Kalman filter
Extended Kalman filter
Fast Kalman filter
Filtering problem (stochastic processes)
Generalized filtering
Invariant extended Kalman filter
Kernel adaptive filter
Masreliez's theorem
Moving horizon estimation
Particle filter estimator
PID controller
Predictor–corrector method
Recursive least squares filter
Schmidt–Kalman filter
Separation principle
Sliding mode control
State-transition matrix
Stochastic differential equations
Switching Kalman filter
References
Further reading
External links
A New Approach to Linear Filtering and Prediction Problems, by R. E. Kalman, 1960
Kalman and Bayesian Filters in Python. Open source Kalman filtering textbook.
How a Kalman filter works, in pictures. Illuminates the Kalman filter with pictures and colors
Kalman–Bucy Filter, a derivation of the Kalman–Bucy Filter
Kalman filter in Javascript. Open source Kalman filter library for node.js and the web browser.
An Introduction to the Kalman Filter , SIGGRAPH 2001 Course, Greg Welch and Gary Bishop
Kalman Filter webpage, with many links
Kalman Filter Explained Simply, Step-by-Step Tutorial of the Kalman Filter with Equations
Gerald J. Bierman's Estimation Subroutine Library: Corresponds to the code in the research monograph "Factorization Methods for Discrete Sequential Estimation" originally published by Academic Press in 1977. Republished by Dover.
Matlab Toolbox implementing parts of Gerald J. Bierman's Estimation Subroutine Library: UD / UDU' and LD / LDL' factorization with associated time and measurement updates making up the Kalman filter.
Matlab Toolbox of Kalman Filtering applied to Simultaneous Localization and Mapping: Vehicle moving in 1D, 2D and 3D
The Kalman Filter in Reproducing Kernel Hilbert Spaces A comprehensive introduction.
Matlab code to estimate Cox–Ingersoll–Ross interest rate model with Kalman Filter : Corresponds to the paper "estimating and testing exponential-affine term structure models by kalman filter" published by Review of Quantitative Finance and Accounting in 1999.
Online demo of the Kalman Filter. Demonstration of Kalman Filter (and other data assimilation methods) using twin experiments.
kalman-filter.com. Insights into the use of Kalman Filters in different domains.
Examples and how-to on using Kalman Filters with MATLAB A Tutorial on Filtering and Estimation
Explaining Filtering (Estimation) in One Hour, Ten Minutes, One Minute, and One Sentence by Yu-Chi Ho
Simo Särkkä (2013). "Bayesian Filtering and Smoothing". Cambridge University Press. Full text available on author's webpage https://users.aalto.fi/~ssarkka/.
Control theory
Linear filters
Markov models
Nonlinear filters
Robot control
Signal estimation
Stochastic differential equations
Hungarian inventions | Kalman filter | [
"Mathematics",
"Engineering"
] | 11,193 | [
"Robotics engineering",
"Applied mathematics",
"Control theory",
"Robot control",
"Dynamical systems"
] |
180,868 | https://en.wikipedia.org/wiki/Land%20art | Land art, variously known as Earth art, environmental art, and Earthworks, is an art movement that emerged in the 1960s and 1970s, largely associated with Great Britain and the United States but that also includes examples from many countries. As a trend, "land art" expanded boundaries of art by the materials used and the siting of the works. The materials used were often the materials of the Earth, including the soil, rocks, vegetation, and water found on-site, and the sites of the works were often distant from population centers. Though sometimes fairly inaccessible, photo documentation was commonly brought back to the urban art gallery.
Concerns of the art movement centered around rejection of the commercialization of art-making and enthusiasm with an emergent ecological movement. The art movement coincided with the popularity of the rejection of urban living and its counterpart, an enthusiasm for that which is rural. Included in these inclinations were spiritual yearnings concerning the planet Earth as home to humanity.
Form
The art form gained traction in the 1960s and 1970s as land art was not something that could easily be turned into a commodity, unlike the "mass produced cultural debris" of the time. During this period, proponents of land art rejected the museum or gallery as the setting of artistic activity and developed monumental landscape projects which were beyond the reach of traditional transportable sculpture and the commercial art market, although photographic documentation was often presented in normal gallery spaces. Land art was inspired by minimal art and conceptual art but also by modern movements such as De Stijl, Cubism, minimalism and the work of Constantin Brâncuși and Joseph Beuys. One of the first earthworks artists was Herbert Bayer, who created Grass Mound in Aspen, Colorado, in 1955.
Many of the artists associated with land art had been involved with minimal art and conceptual art. Isamu Noguchi's 1941 design for Contoured Playground in New York City is sometimes interpreted as an important early piece of land art even though the artist himself never called his work "land art" but simply "sculpture". His influence on contemporary land art, landscape architecture and environmental sculpture is evident in many works today.
Alan Sonfist used an alternative approach to working with nature and culture by bringing historical nature and sustainable art back into New York City. His most inspirational work is Time Landscape, an indigenous forest he planted in New York City. He created several other Time Landscapes around the world such as Circles of Time in Florence, Italy documenting the historical usage of the land, and at the deCordova Sculpture Park and Museum outside Boston. According to critic Barbara Rose, writing in Artforum in 1969, he had become disillusioned with the commodification and insularity of gallery bound art. Dian Parker wrote in ArtNet, "The artist’s ecological message seems more timely now than ever, noted Adam Weinberg, the director emeritus of the Whitney Museum of American Art. 'Since the ’60s, [Sonfist has] continued to push forward his ideas about the land, particularly urgent right now with global warming all over the world. We need solutions to climate change not only from scientists and politicians but also from artists, envisioning and realizing a greener, more primordial future.'"
In 1967, the art critic Grace Glueck writing in The New York Times declared the first Earthwork to be done by Douglas Leichter and Richard Saba at the Skowhegan School of Painting and Sculpture. The sudden appearance of land art in 1968 can be located as a response by a generation of artists mostly in their late twenties to the heightened political activism of the year and the emerging environmental and women's liberation movements.
One example of land art in the 20th century was a group exhibition called "Earthworks" created in 1968 at the Dwan Gallery in New York. In February 1969, Willoughby Sharp curated the "Earth Art" exhibition at the Andrew Dickson White Museum of Art at Cornell University, Ithaca, New York. The artists included were Walter De Maria, Jan Dibbets, Hans Haacke, Michael Heizer, Neil Jenney, Richard Long, David Medalla, Robert Morris, Dennis Oppenheim, Robert Smithson, and Gunther Uecker. The exhibition was directed by Thomas W. Leavitt. Gordon Matta-Clark, who lived in Ithaca at the time, was invited by Sharp to help the artists in "Earth Art" with the on-site execution of their works for the exhibition.
Perhaps the best known artist who worked in this genre was Robert Smithson whose 1968 essay "The Sedimentation of the Mind: Earth Projects" provided a critical framework for the movement as a reaction to the disengagement of Modernism from social issues as represented by the critic Clement Greenberg. His best known piece, and probably the most famous piece of all land art, is the Spiral Jetty (1970), for which Smithson arranged rock, earth and algae so as to form a long (1500 ft) spiral-shape jetty protruding into Great Salt Lake in northern Utah, U.S. How much of the work, if any, is visible is dependent on the fluctuating water levels. Since its creation, the work has been completely covered, and then uncovered again, by water. A steward of the artwork in conjunction with the Dia Foundation, the Utah Museum of Fine Arts regularly curates programming around the Spiral Jetty, including a "Family Backpacks" program.
Smithson's Gravel Mirror with Cracks and Dust (1968) is an example of land art existing in a gallery space rather than in the natural environment. It consists of a pile of gravel by the side of a partially mirrored gallery wall. In its simplicity of form and concentration on the materials themselves, this and other pieces of land art have an affinity with minimalism. There is also a relationship to Arte Povera in the use of materials traditionally considered "unartistic" or "worthless". The Italian Germano Celant, founder of Arte Povera, was one of the first curators to promote land art.
"Land artists" have tended to be American, with other prominent artists in this field being Carl Andre, Alice Aycock, Walter De Maria, Hans Haacke, Michael Heizer, Nancy Holt, Peter Hutchinson, Ana Mendieta, Dennis Oppenheim, Andrew Rogers, Charles Ross, Alan Sonfist, and James Turrell. Turrell began work in 1972 on possibly the largest piece of land art thus far, reshaping the earth surrounding the extinct Roden Crater volcano in Arizona. Perhaps the most prominent non-American land artists are the British Chris Drury, Andy Goldsworthy, Richard Long and the Australian Andrew Rogers.
In 1973, Jacek Tylicki began laying out blank canvases or paper sheets in the natural environment for nature to create art.
Some projects by the artists Christo and Jeanne-Claude (who are famous for wrapping monuments, buildings and landscapes in fabric) have also been considered land art by some, though the artists themselves consider this incorrect. Joseph Beuys's concept of "social sculpture" influenced "land art", and his 7000 Eichen project of 1982 to plant 7,000 oak trees has many similarities to land art processes. Rogers' “Rhythms of Life” project is the largest contemporary land-art undertaking in the world, forming a chain of stone sculptures, or geoglyphs, around the globe – 12 sites – in disparate exotic locations (from below sea level and up to altitudes of 4,300 m/14,107 ft). Up to three geoglyphs (ranging in size up to 40,000 sq m/430,560 sq ft) are located in each site.
Land artists in America relied mostly on wealthy patrons and private foundations to fund their often costly projects. With the sudden economic downturn of the mid-1970s, funds from these sources largely stopped. With the death of Robert Smithson in a plane crash in 1973, the movement lost one of its most important figureheads and faded out. Charles Ross continues to work on the Star Axis project, which he began in 1971.
Michael Heizer in 2022 completed his work on City, and James Turrell continues to work on the Roden Crater project. In most respects, "land art" has become part of mainstream public art and in many cases the term "land art" is misused to label any kind of art in nature even though conceptually not related to the avant-garde works by the pioneers of land art.
The Earth art of the 1960s was sometimes reminiscent of much older land works, such as Stonehenge, the Pyramids, Native American mounds, the Nazca Lines in Peru, Carnac stones, and Native American burial grounds, and often evoked the spirituality of such archeological sites.
Contemporary land artists
Lita Albuquerque (born 1946)
Betty Beaumont (born 1946)
Milton Becerra (born 1951)
Marinus Boezem (born 1934)
Chris Booth (born 1948)
Eberhard Bosslet (born 1953)
Alberto Burri (1915–1995)
Mel Chin (born 1951)
Christo and Jeanne-Claude: Christo (1935–2020), Jeanne-Claude (1935–2009)
Walter De Maria (1935–2013)
Lucien den Arend (born 1943)
Agnes Denes (born 1938)
Jan Dibbets (born 1941)
Harvey Fite (1903–1976)
Barry Flanagan (1941–2009)
Hamish Fulton (born 1946)
Andy Goldsworthy (born 1956)
Michael Heizer (born 1944)
Stan Herd (born 1950)
Nancy Holt (1938–2014)
Peter Hutchinson (born 1930)
Junichi Kakizaki (born 1971)
Dani Karavan (1930–2021)
Maya Lin (born 1959)
Richard Long (born 1945)
Robert Morris (1931–2018)
Vik Muniz (born 1961)
David Nash (born 1945)
Ugo Rondinone (born 1964)
Dennis Oppenheim (1938–2011)
Georgia Papageorge (born 1941)
Beverly Pepper (1922–2020)
Tanya Preminger (born 1944)
Andrew Rogers (born 1947)
Charles Ross (born 1937)
Richard Shilling (born 1973)
Nobuo Sekine (1942–2019)
Robert Smithson (1938–1973)
Alan Sonfist (born 1946)
Tang Da Wu (born 1943)
James Turrell (born 1943)
Jacek Tylicki (born 1951)
Nils Udo (born 1937)
Bill Vazan (born 1933)
Strijdom van der Merwe (born 1961)
See also
Ecofeminist art
Ecological art
Ecovention
Environmental art
Environmental sculpture
Geoglyphs
Hill figure
Land Arts of the American West
Petroglyph
Rock art
Independent public art
Site-specific art
Tree Shaping
References
Notes
Further reading
Lawrence Alloway, Wolfgang Becker, Robert Rosenblum et al., Alan Sonfist, Nature: The End of Art, Gli Ori, Dist. Thames & Hudson, Florence, Italy, 2004
Max Andrews (Ed.): Land, Art: A Cultural Ecology Handbook. London 2006
John Beardsley: Earthworks and Beyond. Contemporary Art in the Landscape. New York 1998
Suzaan Boettger, Earthworks: Art and the Landscape of the Sixties. University of California Press 2002.
Amy Dempsey: Destination Art. Berkeley CA 2006
Michel Draguet, Nils-Udo, Bob Verschueren, Brussels: Atelier 340, 1992
Larisa Dryansky, "Cartophotographies: de l'art conceptuel au Land Art", Paris, éditions du Comité des travaux historiques et scientifiques-Institut national d'histoire de l'art, 2017.
Jack Flam (Ed.). Robert Smithson: The Collected Writings, Berkeley CA 1996
John K. Grande: New York, London. Balance: Art and Nature, Black Rose Books, 1994, 2003
John K. Grande, Edward Lucie-Smith (Intro): Art Nature Dialogues: Interviews with Environmental Artists, New York 2004
John K. Grande, David Peat & Edward Lucie-Smith (Introduction & forward) Dialogues in Diversity, Italy: Pari Publishing, 2007,
Eleanor Heartney, Andrew Rogers Geoglyphs, Rhythms of Life, Edizioni Charta srl, Italy, 2009
Robert Hobbs, Robert Smithson: A Retrospective View, Wilhelm Lehmbruck Museum, Duisburg / Herbert F. Johnson Museum of Art, Cornell University,
Jeffrey Kastner, Brian Wallis: Land and Environmental Art. Boston 1998
Lucy R Lippard: Overlay: Contemporary Art and the Art of Prehistory. New York 1983
Alessandro Rocca: Natural Architecture. New York (2007)
Chris Taylor and Bill Gilbert. Land Arts of the American West. Austin: University of Texas Press; 2009.
Gilles A. Tiberghien: Land Art. Ed. Carré 1993/1995/2012
Udo Weilacher: Between Landscape Architecture and Land Art. Basel Berlin Boston 1999
External links
Artist in Nature International Network
Denarend.com - About land art
Land Arts of the American West
Official UNM Land Arts of the American West Program Website
Roden Crater by James Turrell
Broken Circle
OBSART | Observatoire du Land Art
Using land art as a form of advertising
Center for Land Use Interpretation entry for Land Art
The Case for Land Art | The Art Assignment | PBS
Contemporary art movements
Installation art
Art
Environmental design | Land art | [
"Engineering"
] | 2,741 | [
"Environmental design",
"Design"
] |
181,070 | https://en.wikipedia.org/wiki/Web%20Accessibility%20Initiative | The World Wide Web Consortium (W3C)'s Web Accessibility Initiative (WAI) is an effort to improve the accessibility of the World Wide Web for people with disabilities. People with disabilities encounter difficulties when using computers generally, but also on the Web. Since they often require non-standard devices and browsers, making websites more accessible also benefits a wide range of user agents and devices, including mobile devices, which have limited resources. According to a US government study, 71% of website visitors with disabilities will leave a website that is not accessible.
The W3C launched the Web Accessibility Initiative in 1997 with endorsement by The White House and W3C members. It has several working groups and interest groups that work on guidelines, technical reports, educational materials and other documents that relate to the several different components of web accessibility. These components include web content, web browsers and media players, authoring tools, and evaluation tools.
Organization
WAI develops guidelines and other technical reports through the same process as other parts of the W3C. Like other W3C initiatives, the WAI consists of several working groups and Special interest groups, each with its own focus. Only working groups can produce technical reports that become W3C recommendations. A working group can sometimes delegate specific work to a task force, which then presents its results back to the working group for approval. Interest groups may produce reports (for example, as W3C Notes), but not recommendations.
Each of these types of groups (working group, task force, interest group) can have one or more mailing lists. They meet through conference calls at regular intervals (typically every week or every other week) and sometimes use web-based surveys to collect input or comments from participants. They can also meet face to face (one to five times per year).
Judy Brewer has been the director of the WAI since 1997. In this role she has championed improving the accessibility of the web for people with disabilities and older users.
Authoring Tool Accessibility Guidelines Working Group (ATAG WG)
The Authoring Tool Accessibility Guidelines Working Group develops guidelines, techniques and supporting resources for tools that create web content, ranging from desktop HTML editors to content management systems.
The accessibility requirements apply to two types of things: the user interface on the one hand, and the content produced by the tool on the other.
The working group consists of representatives from organizations that produce authoring tools, researchers, and other accessibility experts.
The working group produced the Authoring Tool Accessibility Guidelines 1.0 in 2000 and completed Authoring Tool Accessibility Guidelines (ATAG) 2.0 in 2015. A supporting document, Implementing ATAG 2.0, provides additional explanation, examples and resources for ATAG 2.0. It also published a document on Selecting and Using Authoring Tools for Web Accessibility.
Education and Outreach Working Group (EOWG)
The Education and Outreach Working Group develops materials for training and education on Web accessibility. This working group has produced documents on a wide range of subjects, including:
Accessibility Features of CSS
Curriculum for Web Content Accessibility Guidelines 1.0
Evaluating Web Sites for Accessibility, a suite of documents about subjects such as conformance evaluation, evaluation approaches for specific contexts, involving users in web accessibility evaluation, and selecting web accessibility evaluation tools
Planning Web Accessibility Training
Developing a Web Accessibility Business Case for Your Organization
How People with Disabilities Use the Web, a document that describes various fictitious characters with disabilities and how they use the Web in different scenarios
many introduction pages on the WAI website.
Currently, the working group has a task force to support the work done in the WAI-AGE project. This project published a document that reviews literature about the needs of older users and compares these needs with those of people with disabilities as already addressed in WAI guidelines.
The Education and Outreach Working Group can also review working drafts produced by other WAI working groups.
Evaluation and Repair Tools Working Group (ERT WG)
The Evaluation and Repair Tools Working Group develops technical specifications that support the accessibility evaluation and repair of Web sites. It also maintains a database of tools for evaluating Web sites and for making them more accessible ("repair", "retrofitting").
The working group consists mainly of developers of such tools and researchers.
Current work focuses on
Evaluation and Report Language (EARL): a language for expressing evaluation reports in a machine-readable way
HTTP Vocabulary in RDF, which specifies how HTTP requests and responses can be expressed in RDF
Representing Content in RDF, which specifies how content (retrieved from the Web or a local storage device) can be represented in RDF
Pointer Methods in RDF, early work on how locations in and parts of online documents can be expressed in RDF.
Protocols & Formats Working Group (PFWG)
The Protocols & Formats Working Group reviews all W3C technologies for accessibility before they are published as a recommendation. It has also published a note on accessibility issues of CAPTCHA,
a paper on natural language usage for people with cognitive disabilities, and initial work on accessibility requirements for XML-based markup languages (XML Accessibility Guidelines).
In 2006, the working group started development of a set of document and specifications for accessible rich internet applications: WAI-ARIA.
Research and Development Interest Group (RDIG)
The goal of the Research and Development Interest Group is
to increase the incorporation of accessibility considerations into research on Web technologies, and
to identify projects researching Web accessibility and suggest research questions that may contribute to new projects.
This interest group has seen very little activity since 2004. Its current charter expired at the end of 2006.
User Agent Accessibility Guidelines Working Group (UAWG)
The User Agent Accessibility Guideline Working Group develops guidelines, techniques and other documents to promote the accessibility of user agents: browsers and plug-ins.
The working group consists mainly of organizations that develop user agents, researchers, and other accessibility experts.
UAWG published User Agent Accessibility Guidelines (UAAG) 2.0 in December 2015. Supporting documentation includes: UAAG 2.0 Reference and UAAG Mobile Examples. The working group published User Agent Accessibility Guidelines 1.0 (UAAG 1.0) as a W3C Recommendation in 2002.
WAI Interest Group (WAI IG)
The WAI Interest Group is an open group with a mailing list to which anyone can subscribe. W3C staff post announcements of new WAI documents to this mailing list to invite reviews and comments. Members of the list also post announcements of relevant events and publications, and ask advice on issues related to web accessibility. The language of the mailing list is English; there are no parallel mailing lists in other languages.
Accessibility Guidelines Working Group (AGWG)
The Accessibility Guidelines Working Group (formerly the Web Content Accessibility Guidelines Working Group) produces guidelines, techniques and other supporting documents relating to the accessibility of Web content. Web content refers to any information you may find on a Web site: text, images, forms, sound, video, etcetera, regardless of whether these were produced on the server side or on the client side (with a client-side scripting language such as JavaScript). Thus, the guidelines also apply to rich internet applications.
The working group consists of representatives from industry, accessibility consultancies, universities, organizations that represent end users, and other accessibility experts.
The working group published the Web Content Accessibility Guidelines 1.0 (WCAG 1.0) as W3C Recommendation in 1999, followed by techniques documents in 2000.
In 2001, the working group started work on WCAG 2.0, which became a W3C Recommendation on 11 December 2008.
WAI Coordination Group
The WAI Coordination Group co-ordinates that activities of the WAI working groups (and interest groups). Its activities are not public.
Guidelines and technical reports
Web Content Accessibility Guidelines (WCAG)
The Web Content Accessibility Guidelines 1.0 (known as WCAG) were published as a W3C Recommendation on 5 May 1999. A supporting document, Techniques for Web Content Accessibility Guidelines 1.0 was published as a W3C Note on 6 November 2000. WCAG 1.0 is a set of guidelines for making web content more accessible to persons with disabilities. They also help make web content more usable for other devices, including mobile devices (PDAs and cell phones).
The Web Content Accessibility Guidelines 1.0 are recognized as a de facto standard and have served as a basis for legislation and evaluation methodologies in many countries.
The WCAG working group published WCAG 2.0 as a Recommendation on 11 December 2008. WCAG 2.0 is based on very different requirements from WCAG 1.0:
the guidelines needed to be technology-neutral, whereas WCAG 1.0 was strongly based on HTML and CSS;
the guidelines needed to be worded as testable statements instead of instructions to authors.
The combination of more general applicability and higher precision proved very challenging.
In 2018, the WCAG working group published WCAG 2.1. This remains fundamentally similar to the guidance in WCAG 2.0, with some additional recommendations made in particular areas:
Mobile device accessibility
Low vision users
Authoring Tool Accessibility Guidelines (ATAG)
Developed by the Authoring Tool Accessibility Guidelines Working Group, the ATAG 2.0 became a W3C Recommendation on 24 September 2015.
ATAG is a set of guidelines for developers of any kind of authoring tool for Web content: simple HTML editors, tools that export content for use on the Web (for example, word processors that can save as HTML), tools that produce multimedia, content management systems, learning management systems, social media, etc.
The goal is for developers to create tools that:
are accessible to authors regardless of disability;
produce accessible content by default;
support and encourage authors to create accessible content.
Implementing ATAG 2.0 is a companion document that provides guidance on understanding and implementing ATAG 2.0. It gives an explanation of the intent of each success criterion, examples of the success criterion, and additional resources. Implementing ATAG 2.0 recommendations can reduce the costs for accessibility because authors are given the tools they need to create accessible content.
List of authoring tools looking to implement ATAG 2.0:
CKEditor
Drupal
Authoring Tool Accessibility Guidelines 1.0 was published in 2000 by the Authoring Tool Accessibility Guidelines Working Group.
User Agent Accessibility Guidelines (UAAG)
Developed by the User Agent Accessibility Guidelines Working Group, the UAAG 1.0 became a W3C Recommendation on 17 December 2002. The UAAG is a set of guidelines for user agent developers (such as web browsers and media players) aimed at making the user agent accessible to users with disabilities. Techniques for User Agent Accessibility Guidelines 1.0 was published as a W3C Note on the same day; it provides techniques for satisfying the checkpoints defined in UAAG 1.0.
Working group members also produced other supporting documents, including initial notes on How to evaluate a user agent for conformance to UAAG 1.0; this document was not formally approved by the working group. No user agents have been reported as fully conforming to UAAG 1.0.
The working group is currently working on a new version of the guidelines. The first public draft of User Agent Accessibility Guidelines 2.0 was published on 12 March 2008.
XML Accessibility Guidelines (XAG)
The XAG explains how to include features in XML applications (i.e. markup languages conforming to the XML specification) that promote accessibility. Work on these guidelines stopped in 2002; the guidelines are still a working draft.
Accessible Rich Internet Applications (WAI-ARIA)
WAI-ARIA (Web Accessibility Initiative – Accessible Rich Internet Applications) is a technical specification which became a W3C Recommended Web Standard on 20 March 2014. It allows web pages (or portions of pages) to declare themselves as applications rather than as static documents, by adding role, property, and state information to dynamic web applications. ARIA is intended for use by developers of web applications, web browsers, assistive technologies, and accessibility evaluation tools.
See also
Knowbility
Section 508 of the Rehabilitation Act of 1973 – a Federal law requiring US government electronic and information technology (EIT) to meet accessibility requirements
Web accessibility
References
External links
Official website
Web Content Accessibility Guidelines 1.0
Web Content Accessibility Guidelines 2.0 (W3C Recommendation 11 December 2008)
Authoring Tool Accessibility Guidelines (ATAG) 2.0
Implementing ATAG 2.0
Authoring Tool Accessibility Guidelines 1.0
User Agent Accessibility Guidelines 1.0
XML Accessibility Guidelines Working Draft
Education & Outreach Working Group
Research and Development Interest Group
Getting Started
WAI early days
The ultimate guide to website accessibility for Section 508 and WCAG 2.1 A/AA.
CheckMesiter, lots of accessibility tools and a free screen reader.
Accessibility
Digital divide
Web accessibility
World Wide Web Consortium standards
Projects established in 1997 | Web Accessibility Initiative | [
"Engineering"
] | 2,593 | [
"Accessibility",
"Design"
] |
181,158 | https://en.wikipedia.org/wiki/Coma%20Berenices | Coma Berenices is an ancient asterism in the northern sky, which has been defined as one of the 88 modern constellations. It is in the direction of the fourth galactic quadrant, between Leo and Boötes, and it is visible in both hemispheres. Its name means "Berenice's Hair" in Latin and refers to Queen Berenice II of Egypt, who sacrificed her long hair as a votive offering. It was introduced to Western astronomy during the third century BC by Conon of Samos and was further corroborated as a constellation by Gerardus Mercator and Tycho Brahe. It is the only modern constellation named after a historic person.
The constellation's major stars are Alpha, Beta, and Gamma Comae Berenices. They form a half square, along the diagonal of which run Berenice's imaginary tresses, formed by the Coma Star Cluster. The constellation's brightest star is Beta Comae Berenices, a 4.2-magnitude main sequence star similar to the Sun. Coma Berenices contains the North Galactic Pole and one of the richest-known galaxy clusters, the Coma Cluster, part of the Coma Supercluster. Galaxy Malin 1, in the constellation, is the first-known giant low-surface-brightness galaxy. Supernova SN 1940B was the first scientifically observed (underway) type II supernova. FK Comae Berenices is the prototype of an eponymous class of variable stars. The constellation is the radiant of one meteor shower, Coma Berenicids, which has one of the fastest meteor speeds, up to .
History
Coma Berenices has been recognized as an asterism since the Hellenistic period (or much earlier, according to some authors), and is the only modern constellation named for an historic figure. It was introduced to Western astronomy during the third century BC by Conon of Samos, the court astronomer of Egyptian ruler Ptolemy III Euergetes, to honour Ptolemy's consort, Berenice II. Berenice vowed to sacrifice her long hair as a votive offering if Ptolemy returned safely from battle during the Third Syrian War. Modern scholars are uncertain if Berenice made the sacrifice before or after Ptolemy's return; it was suggested that it happened after Ptolemy's return (around March–June or May 245 BC), when Conon presented the asterism jointly with scholar and poet Callimachus during a public evening ceremony. In Callimachus' poem, Aetia (composed around that time), Berenice dedicated her tresses "to all the gods". In Poem 66, the Latin translation by the Roman poet Catullus, and in Hyginus' De Astronomica, she dedicated her tresses to Aphrodite and placed them in the temple of Arsinoe II (identified after Berenice's death with Aphrodite) at Zephyrium. According to De astronomica, by the next morning the tresses had disappeared. Conon proposed that Aphrodite had placed the tresses in the sky as an acknowledgement of Berenice's sacrifice. Callimachus called the asterism plokamos Berenikēs or bostrukhon Berenikēs in Greek, translated into Latin as "Coma Berenices" by Catullus. Hipparchus and Geminus also recognized it as a distinct constellation. Eratosthenes called it "Berenice's Hair" and "Ariadne's Hair", considering it part of the constellation Leo. Similarly, Ptolemy did not include it among his 48 constellations in the Almagest; considering it part of Leo and calling it Plokamos.
Coma Berenices became popular during the 16th century. In 1515, a set of gores by Johannes Schöner labelled the asterism Trica, "hair". In 1536 it appeared on a celestial globe by Caspar Vopel, who is credited with the asterism's designation as a constellation. That year, it also appeared on a celestial map by Petrus Apianus as "Crines Berenices". In 1551, Coma Berenices appeared on a celestial globe by Gerardus Mercator with five Latin and Greek names: Cincinnus, caesaries, πλόκαμος, Berenicis crinis and Trica. Mercator's reputation as a cartographer ensured the constellation's inclusion on Dutch sky globes beginning in 1589.
Tycho Brahe, also credited with Coma's designation as a constellation, included it in his 1602 star catalogue. Brahe recorded fourteen stars in the constellation; Johannes Hevelius increased its number to twenty-one, and John Flamsteed to forty-three. Coma Berenices also appeared in Johann Bayer's 1603 Uranometria, and a few other 17th-century celestial maps followed suit. Coma Berenices and the now-obsolete Antinous are considered the first post-Ptolemaic constellations depicted on a celestial globe. With Antinous, Coma Berenices exemplified a trend in astronomy in which globe- and map-makers continued to rely on the ancients for data. This trend ended at the turn of the 16th century with observations of the southern sky and the work of Tycho Brahe.
Before the 18th century Coma Berenices was known in English by several names, including "Berenice's Bush" and "Berenice's periwig". The earliest-known English name, "Berenices haire", dates to 1601. By 1702 the constellation was known as Coma Berenices, and appears as such in the 1731 Universal Etymological English Dictionary.
Non-Western astronomy
Coma Berenices was known to the Akkadians as Ḫegala. In Babylonian astronomy a star, known as ḪÉ.GÁL-a-a (translated as "which is before it") or MÚL.ḪÉ.GÁL-a-a, is tentatively considered part of Coma Berenices. It was also argued that Coma Berenices appears in Egyptian Ramesside star clocks as sb3w ꜥš3w, meaning "many stars".
In Arabic astronomy Coma Berenices was known as Al-Dafira الضفيرة ("braid"), Al-Hulba الهلبة and Al-Thu'aba الذؤابة (both meaning "tuft"); the latter two are translations of the Ptolemaic Plokamos, forming the tuft of the constellation Leo and including most of the Flamsteed-designated stars (particularly 12, 13, 14, 16, 17, 18 and 21 Comae Berenices). Al-Sufi included it in Leo. Ulugh Beg, however, regarded Al-Dafira as consisting of two stars, 7 and 23 Comae Berenices.
The North American Pawnee people depicted Coma Berenices as ten faint stars on a tanned elk-skin star map dated to at least the 17th century. In the South American Kalina mythology, the constellation was known as ombatapo (face).
The constellation was also recognized by several Polynesian peoples. The people of Tonga had four names for Coma Berenices: Fatana-lua, Fata-olunga, Fata-lalo and Kapakau-o-Tafahi. The Boorong people called the constellation Tourt-chinboiong-gherra, and saw it as a small flock of birds drinking rainwater from a puddle in the crotch of a tree. The people of the Pukapuka atoll may have called it Te Yiku-o-te-kiole, although sometimes this name is associated with Ursa Major.
Characteristics
Coma Berenices is bordered by Boötes to the east, Canes Venatici to the north, Leo to the west and Virgo to the south. Covering 386.5 square degrees and 0.937% of the night sky, it ranks 42nd of the 88 constellations by area. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Com". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 12 segments (illustrated in infobox). In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , and the declination coordinates are between +13.30° and +33.31°. Coma Berenices is wholly visible to observers north of latitude 56°S, and the constellation's midnight culmination occurs on 2 April.
Features
Although it is not large, Coma Berenices contains one galactic supercluster, two galactic clusters, one star cluster and eight Messier objects (including several globular clusters). These objects can be seen with minimal obscuration by dust because the constellation is not in the direction of the galactic plane. Because of that, there are few open clusters (except for the Coma Berenices Cluster, which dominates the northern part of the constellation), diffuse nebulae or planetary nebulae. Coma Berenices contains the North Galactic Pole at right ascension and declination (epoch J2000.0).
Stars
Brightest stars
Coma Berenices is not particularly bright, as none of its stars are brighter than fourth magnitude, although there are 66 stars brighter than or equal to apparent magnitude 6.5.
The constellation's brightest star is Beta Comae Berenices (43 Comae Berenices in Flamsteed designation, occasionally known as Al-Dafira), at magnitude 4.2 and with a high proper motion. In Coma Berenices' northeastern region, it is 29.95 ± 0.10 light-years from Earth. A solar analog, it is a yellow-hued F-type main-sequence star with a spectral class of F9.5V B. Beta Comae Berenices is around 36% brighter and 15% more massive than the Sun, with a radius 10% larger.
The second-brightest star in Coma Berenices is the 4.3-magnitude, bluish Alpha Comae Berenices (42 Comae Berenices), with the proper name Diadem, in the southeastern part of the constellation. Despite its Alpha Bayer designation, the star is dimmer than Beta Comae Berenices, being one of the cases where designation does not correspond to the brightest star. It is a double star, with the spectral classes of F5V and F6V. The star system is 58.1 ± 0.9 light-years from Earth.
Gamma Comae Berenices (15 Comae Berenices) is an orange-hued giant star with a magnitude of 4.4 and a spectral class of K1III C. In the southwestern part of the constellation, it is 169 ± 2 light-years from Earth. Estimated to be around 1.79 times as massive as the Sun, it has expanded to around 10 times the Sun's radius. It is the brightest star in the Coma Star Cluster. With Alpha Comae Berenices and Beta Comae Berenices, Gamma Comae Berenices forms a 45-degree isosceles triangle from which Berenice's imaginary tresses hang.
Star systems
The star systems of Coma Berenices include binary, double and triple stars. 21 Comae Berenices (proper name Kissin) is a close binary with nearly equal components and an orbital period of 26 years. The system is 272 ± 3 light-years away. The Coma Star Cluster contains at least eight spectroscopic binaries, and the constellation has seven eclipsing binaries: CC, DD, EK, RW, RZ, SS and UX Comae Berenices.
There are over thirty double stars in Coma Berenices, including 24 Comae Berenices with contrasting colors. Its primary is an orange-hued giant star with a magnitude of 5.0, 610 light-years from Earth, and its secondary is a blue-white-hued star with a magnitude of 6.6. Triple stars include 12 Comae Berenices, 17 Comae Berenices, KR Comae Berenices and Struve 1639.
Variable stars
Over 200 variable stars are known in Coma Berenices, although many are obscure. Alpha Comae Berenices is a possible Algol variable. FK Comae Berenices, which varies from magnitude 8.14 to 8.33 over a period of 2.4 days, is the prototype for the FK Comae Berenices class of variable stars and the star in which the "flip-flop phenomenon" was discovered. FS Comae Berenices is a semi-regular variable, a red giant with a period of about two months whose magnitude varies between 6.1 and 5.3. R Comae Berenices is a Mira variable with a maximum magnitude of almost 7. There are 123 RR Lyrae variables in the constellation, with many in the M53 cluster. One of these stars, TU Comae Berenices, may be part of a binary system. The M100 galaxy contains about twenty Cepheid variables, which were observed by the Hubble Space Telescope. Coma Berenices also contains Alpha2 Canum Venaticorum variables, such as 13 Comae Berenices and AI Comae Berenices.
In 2019 scientists at Aryabhatta Research Institute of Observational Sciences announced the discovery of 28 new variable stars in Coma Berenices' globular cluster NGC 4147.
Supernovae
A number of supernovae have been discovered in Coma Berenices. Four (SN 1940B, SN 1969H, SN 1987E and SN 1999gs) were in the NGC 4725 galaxy, and another four were discovered in the M99 galaxy (NGC 4254): SN 1967H, SN 1972Q, SN 1986I and SN 2014L. Five were discovered in the M100 galaxy (NGC 4321): SN 1901B, SN 1914A, SN 1959E, SN 1979C and SN 2006X. SN 1940B, discovered on 5 May 1940, was the first observed type II supernova. SN 2005ap, discovered on 3 March 2005, is the second-brightest-known supernova to date with a peak absolute magnitude of about −22.7. Due to its great distance from Earth (4.7 billion light-years), it was not visible to the naked eye and was discovered telescopically. SN 1979C, discovered in 1979, retained its original X-ray brightness for 25 years despite fading in visible light.
Other stars
Coma Berenices also contains the neutron star RBS 1223 and the pulsar PSR B1237+25. RBS 1223 is a member of the Magnificent Seven, a group of young neutron stars. In 1975, the first extra-solar source of extreme ultraviolet, the white dwarf HZ 43, was discovered in Coma Berenices. In 1995, there was a very rare outburst of the WZ Sagittae-type dwarf nova AL Comae Berenices. A June 2003 outburst from GO Comae Berenices, an SU Ursae Majoris-type dwarf nova, was photometrically observed.
Exoplanets
Coma Berenices has seven known exoplanets. One, HD 108874 b, has Earth-like insolation. WASP-56 is a sun-like star of spectral type G6 and apparent magnitude 11.48 with a planet 0.6 the mass of Jupiter that has a period of 4.6 days.
Star clusters
Coma Star Cluster
The Coma Star Cluster represents Berenice's sacrificed tresses and, as a naked-eye object, has been known since antiquity, appearing in Ptolemy's Almagest. It does not have a Messier or NGC designation, but is in the Melotte catalogue of open clusters (designated Melotte 111) and is also catalogued as Collinder 256. It is a large, diffuse open cluster of about 50 stars ranging between magnitudes five and ten, including several of Coma Berenices' stars which are visible to the naked eye. The cluster is spread over a huge region (more than five degrees across) near Gamma Comae Berenices. It has such a large apparent size because it is relatively close, only 280 light-years or 86 parsecs away.
Globular clusters
M53 (NGC 5024) is a globular cluster which was discovered independently by Johann Elert Bode in 1775 and Charles Messier in February 1777; William Herschel was the first to resolve it into stars. The magnitude-7.7 cluster is 56,000 light-years from Earth. Only 1° away is NGC 5053, a globular cluster with a sparser nucleus of stars. Its total luminosity is the equivalent of about 16,000 suns, one of the lowest luminosities of any globular cluster. It was discovered by William Herschel in 1784. NGC 4147 is a somewhat dimmer globular cluster, with a much-smaller apparent size and an apparent magnitude of 10.7.
Galaxies
Coma Supercluster
The Coma Supercluster, itself part of the Coma Filament, contains the Coma and Leo Clusters of galaxies. The Coma Cluster (Abell 1656) is 230 to 300 million light-years away. It is one of the largest-known clusters, with at least 10,000 galaxies (mainly elliptical, with a few spiral galaxies). Due to its distance from Earth, most of the galaxies are visible only through large telescopes. Its brightest members are NGC 4874 and NGC 4889, both with a magnitude of 13; most others are magnitude 15 or dimmer. NGC 4889 is a giant elliptical galaxy with one of the largest-known black holes (21 billion solar masses), and NGC 4921 is the cluster's brightest spiral galaxy. After observing the Coma Cluster, astronomer Fritz Zwicky first postulated the existence of dark matter during the 1930s. The massive galaxy Dragonfly 44, discovered in 2015, was found to consist almost entirely of dark matter. Its mass is very similar to that of the Milky Way, but it emits only 1% of the light emitted by the Milky Way. NGC 4676, sometimes called the Mice Galaxies, is a pair of interacting galaxies 300 million light-years from Earth. Its progenitor galaxies were spiral, and astronomers estimate that they had their closest approach about 160 million years ago. That approach triggered large regions of star formation in both galaxies, with long "tails" of dust, stars and gas. The two progenitor galaxies are predicted to interact significantly at least one more time before they merge into a larger, probably-elliptical galaxy.
Virgo Cluster
Coma Berenices contains the northern portion of the Virgo Cluster (also known as the Coma–Virgo Cluster), about 60 million light-years away. The portion includes six Messier galaxies. M85 (NGC 4382), considered elliptical or lenticular, is one of the cluster's brighter members at magnitude nine. M85 is interacting with the spiral galaxy NGC 4394 and the elliptical galaxy MCG-3-32-38. However, it is relatively isolated from the rest of the cluster. M88 (NGC 4501) is a multi-arm spiral galaxy seen at about 30° from edge-on. It has a highly-regular shape with well-developed, symmetrical arms. Among the first galaxies recognized as spiral, it has a supermassive black hole in its center. M91 (NGC 4548), a barred spiral galaxy with a bright, diffuse nucleus, is the faintest object in Messier's catalog at magnitude 10.2. M98 (NGC 4192), a bright, elongated spiral galaxy seen nearly edge-on, appears elliptical because of its unusual angle. The magnitude-10 galaxy exhibits a blueshift, indicating that it is approaching rather than receding. M99 (NGC 4254) is a spiral galaxy seen face-on. Like M98, it is of magnitude 10 and has an unusually long arm on its west side. Four supernovae have been observed in the galaxy. M100 (NGC 4321), a magnitude-nine spiral galaxy seen face-on, is one of the cluster's brightest. Photographs reveal a brilliant core, two prominent spiral arms, an array of secondary arms and several dust lanes.
Other galaxies
M64 (NGC 4826) is known as the Black Eye Galaxy because of the prominent dark dust lane in front of the galaxy's bright nucleus. Also known as the Sleeping Beauty and Evil Eye galaxy, it is about 17.3 million light-years away. Recent studies indicate that the interstellar gas in the galaxy's outer regions rotates in the opposite direction from that in the inner regions, leading astronomers to believe that at least one satellite galaxy collided with it less than a billion years ago. All other evidence of the smaller galaxy has been assimilated. At the interface between the clockwise- and counterclockwise-rotating regions are many new nebulae and young stars.
NGC 4314 is a face-on barred spiral galaxy at a distance of 40 million light-years. It is unique for its region of intense star formation, creating a ring around its nucleus which was discovered by the Hubble Space Telescope. The galaxy's prodigious star formation began five million years ago, in a region with a diameter of 1,000 light-years. The core's structure is also unique because the galaxy has spiral arms which feed gas into the bar.
NGC 4414 is an unbarred spiral flocculent galaxy about 62 million light-years away. It is one of the closest flocculent spiral galaxies.
NGC 4565 is an edge-on spiral galaxy which appears superimposed on the Virgo Cluster. NGC 4565 has been nicknamed the Needle Galaxy because when seen in full, it appears as a narrow streak of light. Like many edge-on spiral galaxies, it has a prominent dust lane and a central bulge. NGC 4565 has at least two satellite galaxies, and one of them is interacting with it.
NGC 4651, about the size of the Milky Way, has tidal stellar streams gravitationally stripped from a smaller, satellite galaxy. It is about 62 million light-years away. It is located on the outskirts of the cluster, and is also known as the Umbrella Galaxy. Unlike the other spiral galaxies in the cluster, NGC 4651 is rich in neutral hydrogen, which also extends beyond the optical disk. Its star formation is typical for a galaxy of its type.
The spiral galaxy Malin 1, discovered in 1986, is the first-known giant low-surface-brightness galaxy. With UGC 1382, it is also one of the largest low-surface-brightness galaxies.
In 2006 a dwarf galaxy, also named Coma Berenices, was discovered in the constellation from data obtained by the Sloan Digital Sky Survey. The galaxy is a faint satellite of the Milky Way, one of the faintest known: its integrated luminosity is about times that of the Sun (absolute visible magnitude of about −4.1), which is lower than many globular clusters. A high mass-to-light ratio may mean that the satellite has large amounts of dark matter.
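As an illustrative cross-check (not a figure taken from this article), the luminosity implied by an absolute visual magnitude of about −4.1 can be recovered from the standard magnitude–luminosity relation, taking the Sun's absolute visual magnitude to be roughly +4.83 (an assumed reference value):

\[
\frac{L}{L_\odot} = 10^{\,0.4\,(M_{V,\odot} - M_V)} \approx 10^{\,0.4\,(4.83 + 4.1)} \approx 10^{3.6} \approx 4 \times 10^{3},
\]

i.e., only a few thousand solar luminosities, which is indeed fainter than many globular clusters.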
Quasars
HS 1216+5032 is a bright, gravitationally lensed pair of quasars. W Comae Berenices (or ON 231), a blazar in the constellation's northwest, was originally designated a variable star and later found to be a BL Lacertae object. As of 2009, it had the most intense gamma ray spectrum of the sixty known gamma-ray blazars.
Gamma-ray bursts
Some gamma-ray bursts occurred in Coma Berenices, particularly GRB 050509B on 9 May 2005 and GRB 080607 on 7 June 2008. GRB 050509B, which lasted only 0.03 second, became the first short burst with a detected afterglow.
Meteor shower
The Coma Berenicids meteor shower peaks around 18 January. Despite the shower's low intensity (averaging one or two meteors per hour), its meteors are some of the fastest, with speeds up to .
In culture
Since Callimachus' poem, Coma Berenices has been occasionally featured in culture. Alexander Pope alludes to the legend in the ending of The Rape of the Lock, in which the titular hair is placed among the stars. (The poem would go on to provide the names of some of the moons of Uranus.) In 1886, Spanish artist Luis Ricardo Falero created a mezzotint print personifying Coma Berenices alongside Virgo and Leo. In 1892, the Russian poet Afanasy Fet made the constellation the subject of his short poem, composed for the Countess Natalya Sollogub. The Swedish poet Gunnar Ekelöf wrote the lines "Your friend the comet combed his hair with the Leonids / Berenice let her hair hang down from the sky" in a 1933 poem. American writer and folksinger Richard Fariña mentions Coma Berenices in his 1966 novel Been Down So Long It Looks Like Up To Me, sardonically writing about content typical of upper-level astronomy coursework at Cornell: "It's the advanced courses give you trouble. Relativity principles, spiral nebula in Coma Berenices, that kind of hassle". The Bolivian poet, Pedro Shimose, makes Coma Berenices the home address of his "Señorita NGC 4565" in his poem "Carta a una estrella que vive en otra constelación" ("Letter to a star who lives in another constellation"), included in his 1967 collection, "Sardonia". The Irish poet W. B. Yeats, in his poem "Her Dream", refers to "Berenice's burning hair" being "nailed upon the night". Francisco Guerrero, a 20th-century Spanish composer, wrote an orchestral work on the constellation in 1996. In 1999 Irish artist Alice Maher made a series of four oversize drawings, entitled Coma Berenices, of entwining black hair coils.
Notes
See also
Coma Star Cluster
Coma Berenices in Chinese astronomy
IC 4141
References
External links
The Deep Photographic Guide to the Constellations: Coma Berenices
The clickable Coma Berenices
Northern constellations
Ptolemaic Kingdom in popular culture | Coma Berenices | [
"Astronomy"
] | 5,536 | [
"Coma Berenices",
"Constellations",
"Northern constellations"
] |
181,174 | https://en.wikipedia.org/wiki/Thyristor | A thyristor (from a combination of the Greek θύρα, meaning "door" or "valve", and transistor) is a solid-state semiconductor device that can be thought of as a highly robust, switchable diode: it allows the passage of current in one direction but not the other, often under the control of a gate electrode, and is used in high-power applications such as inverters and radar generators. It usually consists of four layers of alternating P- and N-type materials. It acts as a bistable switch (or a latch). There are two designs, differing in what triggers the conducting state. In a three-lead thyristor, a small current on its gate lead controls the larger current of the anode-to-cathode path. In a two-lead thyristor, conduction begins when the potential difference between the anode and cathode themselves is sufficiently large (breakdown voltage). The thyristor continues conducting until the voltage across the device is reverse-biased or the voltage is removed (by some other means), or until it is turned off through the control gate signal on newer device types.
Some sources define "silicon-controlled rectifier" (SCR) and "thyristor" as synonymous. Other sources define thyristors as more complex devices that incorporate at least four layers of alternating N-type and P-type substrate.
The first thyristor devices were released commercially in 1956. Because thyristors can control a relatively large amount of power and voltage with a small device, they find wide application in control of electric power, ranging from light dimmers and electric motor speed control to high-voltage direct-current power transmission. Thyristors may be used in power-switching circuits, relay-replacement circuits, inverter circuits, oscillator circuits, level-detector circuits, chopper circuits, light-dimming circuits, low-cost timer circuits, logic circuits, speed-control circuits, phase-control circuits, etc. Originally, thyristors relied only on current reversal to turn them off, making them difficult to apply for direct current; newer device types can be turned on and off through the control gate signal. The latter is known as a gate turn-off thyristor, or GTO thyristor.
Unlike transistors, thyristors have a two-valued switching characteristic, meaning that a thyristor can only be fully on or off, while a transistor can lie in between on and off states. This makes a thyristor unsuitable as an analog amplifier, but useful as a switch.
History
The silicon controlled rectifier (SCR) or thyristor proposed by William Shockley in 1950 and championed by Moll and others at Bell Labs was developed in 1956 by power engineers at General Electric (GE), led by Gordon Hall and commercialized by GE's Frank W. "Bill" Gutzwiller. The Institute of Electrical and Electronics Engineers recognized the invention by placing a plaque at the invention site in Clyde, New York, and declaring it an IEEE Historic Milestone.
An earlier gas-filled tube device called a thyratron provided a similar electronic switching capability, where a small control voltage could switch a large current. It is from a combination of "thyratron" and "transistor" that the term "thyristor" is derived.
In recent years, some manufacturers have developed thyristors using silicon carbide (SiC) as the semiconductor material. These have applications in high temperature environments, being capable of operating at temperatures up to 350 °C.
Design
The thyristor is a four-layered, three-terminal semiconductor device, with each layer consisting of alternating N-type or P-type material, for example P-N-P-N. The main terminals, labelled anode and cathode, are across all four layers. The control terminal, called the gate, is attached to p-type material near the cathode. (A variant called an SCS—silicon controlled switch—brings all four layers out to terminals.) The operation of a thyristor can be understood in terms of a pair of tightly coupled bipolar junction transistors, arranged to cause a self-latching action.
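A standard textbook way to make the self-latching action explicit (a sketch, not material from this article) is to write the anode current of the coupled PNP/NPN pair, with common-base current gains \(\alpha_1\) and \(\alpha_2\), gate current \(I_G\), and leakage currents \(I_{CBO1}\) and \(I_{CBO2}\), as

\[
I_A = \frac{\alpha_2 I_G + I_{CBO1} + I_{CBO2}}{1 - (\alpha_1 + \alpha_2)} .
\]

When \(\alpha_1 + \alpha_2\) approaches unity (whether driven there by gate current, by reaching the breakover voltage, or by dv/dt-injected charge), the denominator vanishes and the anode current is limited only by the external circuit, which is exactly the regenerative latch described above.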
Thyristors have three states:
Reverse blocking mode: Voltage is applied in the direction that would be blocked by a diode
Forward blocking mode: Voltage is applied in the direction that would cause a diode to conduct, but the thyristor has not been triggered into conduction
Forward conducting mode: The thyristor has been triggered into conduction and will remain conducting until the forward current drops below a threshold value known as the "holding current"
Gate terminal
The thyristor has three p-n junctions (serially named J1, J2, J3 from the anode).
When the anode is at a positive potential VAK with respect to the cathode with no voltage applied at the gate, junctions J1 and J3 are forward biased, while junction J2 is reverse biased. As J2 is reverse biased, no conduction takes place (Off state). Now if VAK is increased beyond the breakdown voltage VBO of the thyristor, avalanche breakdown of J2 takes place and the thyristor starts conducting (On state).
If a positive potential VG is applied at the gate terminal with respect to the cathode, the breakdown of the junction J2 occurs at a lower value of VAK. By selecting an appropriate value of VG, the thyristor can be switched into the on state quickly.
Once avalanche breakdown has occurred, the thyristor continues to conduct, irrespective of the gate voltage, until: (a) the potential VAK is removed or (b) the current through the device (anode−cathode) becomes less than the holding current specified by the manufacturer. Hence VG can be a voltage pulse, such as the voltage output from a UJT relaxation oscillator.
The gate pulses are characterized in terms of gate trigger voltage (VGT) and gate trigger current (IGT). Gate trigger current varies inversely with gate pulse width, implying that a minimum gate charge is required to trigger the thyristor.
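As a purely illustrative calculation with assumed, datasheet-style numbers (not values from this article): a device that needs roughly \(I_{GT} = 30\) mA for a 20 µs gate pulse implies

\[
Q_{G,\min} \approx I_{GT}\, t_p = 30\ \text{mA} \times 20\ \mu\text{s} = 0.6\ \mu\text{C},
\]

so a 5 µs pulse would need on the order of 120 mA to deliver the same charge, consistent with the inverse relationship noted above.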
Switching characteristics
In a conventional thyristor, once it has been switched on by the gate terminal, the device remains latched in the on-state (i.e. it does not need a continuous supply of gate current to remain in the on state), provided the anode current has exceeded the latching current (IL). As long as the anode remains positively biased, it cannot be switched off unless the current drops below the holding current (IH). In normal working conditions the latching current is always greater than the holding current.
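The latching rules above can be summarised in a small state model. The following Python sketch is an idealisation written only for this description; the current thresholds are made-up example values, not data for any real device:

```python
class Thyristor:
    """Idealised SCR: latches on when gated while the circuit can supply at
    least the latching current; stays on until the anode current falls below
    the holding current or the device is reverse-biased (commutation)."""

    def __init__(self, latching_current=0.08, holding_current=0.05):  # amps (assumed)
        self.i_l = latching_current
        self.i_h = holding_current
        self.on = False

    def step(self, anode_voltage, available_current, gate_pulse=False):
        # available_current: the anode current the external circuit would drive
        # once the device conducts (a simplification of real circuit behaviour).
        if anode_voltage <= 0:                     # reverse-biased: blocks or turns off
            self.on = False
        elif self.on:
            if available_current < self.i_h:       # dropped below holding current
                self.on = False
        elif gate_pulse and available_current >= self.i_l:
            self.on = True                         # latched: gate drive no longer needed
        return self.on


scr = Thyristor()
print(scr.step(12.0, 0.10, gate_pulse=True))  # True: triggered and latched
print(scr.step(12.0, 0.10))                   # True: stays on without gate drive
print(scr.step(12.0, 0.02))                   # False: current fell below the holding level
```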
A thyristor can be switched off if the external circuit causes the anode to become negatively biased (a method known as natural, or line, commutation). In some applications this is done by switching a second thyristor to discharge a capacitor into the anode of the first thyristor. This method is called forced commutation.
Once the current through the thyristor drops below the holding current, there must be a delay before the anode can be positively biased and retain the thyristor in the off-state. This minimum delay is called the circuit commutated turn off time (tQ). Attempting to positively bias the anode within this time causes the thyristor to be self-triggered by the remaining charge carriers (holes and electrons) that have not yet recombined.
For applications with frequencies higher than the domestic AC mains supply (e.g. 50 Hz or 60 Hz), thyristors with lower values of tQ are required. Such fast thyristors can be made by diffusing heavy metal ions such as gold or platinum, which act as charge recombination centers, into the silicon. Today, fast thyristors are more usually made by electron or proton irradiation of the silicon, or by ion implantation. Irradiation is more versatile than heavy metal doping because it permits the dosage to be adjusted in fine steps, even at quite a late stage in the processing of the silicon.
Types
ACS
ACST
AGT: Anode Gate Thyristor: A thyristor with gate on n-type layer near to the anode
ASCR: Asymmetrical SCR
BCT: Bidirectional Control Thyristor: A bidirectional switching device containing two thyristor structures with separate gate contacts
BOD: Breakover Diode: A gateless thyristor triggered by avalanche current
DIAC: Bidirectional trigger device
Dynistor: Unidirectional switching device
Shockley diode: Unidirectional trigger and switching device
SIDAC: Bidirectional switching device
Trisil, SIDACtor: Bidirectional protection devices
BRT: Base Resistance Controlled Thyristor
ETO: Emitter Turn-Off Thyristor
GTO: Gate Turn-Off thyristor
DB-GTO: Distributed buffer gate turn-off thyristor
MA-GTO: Modified anode gate turn-off thyristor
IGCT: Integrated gate-commutated thyristor
Ignitor: Spark generators for fire-lighter circuits
LASCR: Light-activated SCR, or LTT: light-triggered thyristor
LASS: light-activated semiconducting switch
MCT: MOSFET Controlled Thyristor: It contains two additional FET structures for on/off control.
CSMT or MCS: MOS composite static induction thyristor
PUT or PUJT: Programmable Unijunction Transistor: A thyristor with gate on n-type layer near to the anode used as a functional replacement for unijunction transistor
RCT: Reverse Conducting Thyristor
SCS: Silicon Controlled Switch or Thyristor Tetrode: A thyristor with both cathode and anode gates
SCR: Silicon Controlled Rectifier
SITh: Static Induction Thyristor, or FCTh: Field Controlled Thyristor: containing a gate structure that can shut down anode current flow.
TRIAC: Triode for Alternating Current: A bidirectional switching device containing two thyristor structures with common gate contact
Quadrac: special type of thyristor which combines a DIAC and a TRIAC into a single package.
Reverse conducting thyristor
A reverse conducting thyristor (RCT) has an integrated reverse diode, so is not capable of reverse blocking. These devices are advantageous where a reverse or freewheel diode must be used. Because the SCR and diode never conduct at the same time they do not produce heat simultaneously and can easily be integrated and cooled together. Reverse conducting thyristors are often used in frequency changers and inverters.
Photothyristors
Photothyristors are activated by light. The advantage of photothyristors is their insensitivity to electrical signals, which can cause faulty operation in electrically noisy environments. A light-triggered thyristor (LTT) has an optically sensitive region in its gate, into which electromagnetic radiation (usually infrared) is coupled by an optical fiber. Since no electronic boards need to be provided at the potential of the thyristor in order to trigger it, light-triggered thyristors can be an advantage in high-voltage applications such as HVDC. Light-triggered thyristors are available with in-built over-voltage (VBO) protection, which triggers the thyristor when the forward voltage across it becomes too high; they have also been made with in-built forward recovery protection, but not commercially. Despite the simplification they can bring to the electronics of an HVDC valve, light-triggered thyristors may still require some simple monitoring electronics and are only available from a few manufacturers.
Two common photothyristors include the light-activated SCR (LASCR) and the light-activated TRIAC. A LASCR acts as a switch that turns on when exposed to light. Following light exposure, when light is absent, if the power is not removed and the polarities of the cathode and anode have not yet reversed, the LASCR is still in the "on" state. A light-activated TRIAC resembles a LASCR, except that it is designed for alternating currents.
Failure modes
Thyristor manufacturers generally specify a region of safe firing defining acceptable levels of voltage and current for a given operating temperature. The boundary of this region is partly determined by the requirement that the maximum permissible gate power (PG), specified for a given trigger pulse duration, is not exceeded.
As well as the usual failure modes due to exceeding voltage, current or power ratings, thyristors have their own particular modes of failure, including:
Turn on di/dt: in which the rate of rise of on-state current after triggering is higher than can be supported by the spreading speed of the active conduction area (SCRs & triacs).
Forced commutation: in which the transient peak reverse recovery current causes such a high voltage drop in the sub-cathode region that it exceeds the reverse breakdown voltage of the gate cathode diode junction (SCRs only).
Switch on dv/dt: the thyristor can be spuriously fired without trigger from the gate if the anode-to-cathode voltage rise-rate is too great.
Applications
Thyristors are mainly used where high currents and voltages are involved, and are often used to control alternating currents, where the change of polarity of the current causes the device to switch off automatically, referred to as "zero cross" operation. The device can be said to operate synchronously; being that, once the device is triggered, it conducts current in phase with the voltage applied over its cathode to anode junction with no further gate modulation being required, i.e., the device is biased fully on. This is not to be confused with asymmetrical operation, as the output is unidirectional, flowing only from cathode to anode, and so is asymmetrical in nature.
Thyristors can be used as the control elements for phase angle triggered controllers, also known as phase fired controllers.
They can also be found in power supplies for digital circuits, where they are used as a sort of "enhanced circuit breaker" to prevent a failure in the power supply from damaging downstream components. A thyristor is used in conjunction with a Zener diode attached to its gate, and if the output voltage of the supply rises above the Zener voltage, the thyristor will conduct and short-circuit the power supply output to ground (in general also tripping an upstream breaker or fuse). This kind of protection circuit is known as a crowbar, and has the advantage over a standard circuit breaker or fuse in that it creates a high-conductance path to ground from damaging supply voltage and potentially for stored energy (in the system being powered).
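A minimal sketch of the crowbar trip point, using hypothetical part values (none of these numbers come from this article): the SCR fires roughly when the rail exceeds the Zener voltage plus the gate's trigger voltage.

```python
def crowbar_trip_voltage(zener_v, gate_trigger_v=0.7):
    """Approximate rail voltage at which a Zener-gated SCR crowbar fires.
    Ignores the Zener knee current and any gate-resistor drop (simplifications)."""
    return zener_v + gate_trigger_v

# Hypothetical 5 V logic rail protected with a 6.2 V Zener diode:
print(f"Crowbar fires near {crowbar_trip_voltage(6.2):.1f} V")  # about 6.9 V
```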
The first large-scale application of thyristors (with an associated triggering diac) in consumer products was in stabilized power supplies within color television receivers in the early 1970s. The stabilized high voltage DC supply for the receiver was obtained by moving the switching point of the thyristor device up and down the falling slope of the positive going half of the AC supply input (if the rising slope was used the output voltage would always rise towards the peak input voltage when the device was triggered and thus defeat the aim of regulation). The precise switching point was determined by the load on the DC output supply, as well as AC input fluctuations.
Thyristors have been used for decades as light dimmers in television, motion pictures, and theater, where they replaced inferior technologies such as autotransformers and rheostats. They have also been used in photography as a critical part of flashes (strobes).
Snubber circuits
Thyristors can be triggered by a high rise-rate of off-state voltage. As the off-state voltage across the anode and cathode increases, a charging current flows, much like the charging current of a capacitor. The maximum rate of rise of off-state voltage, or dV/dt rating, of a thyristor is an important parameter since it indicates the maximum rate of rise of anode voltage that does not bring the thyristor into conduction when no gate signal is applied. When the charge flow caused by this rising off-state voltage becomes comparable to the charge normally injected when the gate is energized, the thyristor triggers randomly and falsely, which is undesirable.
This is prevented by connecting a resistor-capacitor (RC) snubber circuit between the anode and cathode in order to limit the dV/dt (i.e., rate of voltage change over time). Snubbers are energy-absorbing circuits used to suppress the voltage spikes caused by the circuit's inductance when a switch, electrical or mechanical, opens. The most common snubber circuit is a capacitor and resistor connected in series across the switch (transistor).
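A first-order way to size such a snubber, shown here only as an illustrative approximation with assumed figures: an RC network charged by a voltage step V has its steepest slope equal to V divided by the time constant, so the time constant must be at least V divided by the device's dV/dt rating.

```python
def snubber_rc(step_voltage, dvdt_rating, capacitance):
    """Minimum RC time constant and implied resistor value for a simple RC
    snubber, using the first-order estimate (dv/dt)_max ~ V / (R * C)."""
    tau_min = step_voltage / dvdt_rating   # seconds
    r_min = tau_min / capacitance          # ohms
    return tau_min, r_min

# Assumed figures: a 400 V step, a 50 V/us dV/dt rating, and a 0.1 uF capacitor.
tau, r = snubber_rc(400.0, 50e6, 100e-9)
print(f"RC >= {tau * 1e6:.1f} us, so R >= {r:.0f} ohms")  # about 8 us and 80 ohms
```

In practice the resistor is also chosen to limit the capacitor's discharge current into the thyristor at turn-on, which this simple estimate ignores.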
HVDC electricity transmission
Since modern thyristors can switch power on the scale of megawatts, thyristor valves have become the heart of high-voltage direct current (HVDC) conversion either to or from alternating current. In the realm of this and other very high-power applications, both electrically triggered (ETT) and light-triggered (LTT) thyristors are still the primary choice. Thyristors are arranged into a diode bridge circuit and to reduce harmonics are connected in series to form a 12-pulse converter. Each thyristor is cooled with deionized water, and the entire arrangement becomes one of multiple identical modules forming a layer in a multilayer valve stack called a quadruple valve. Three such stacks are typically mounted on the floor or hung from the ceiling of the valve hall of a long-distance transmission facility.
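For context, standard line-commutated converter theory (not a statement from this article) says that an ideal p-pulse converter draws AC-side current harmonics only of order

\[
n = kp \pm 1, \qquad k = 1, 2, 3, \ldots,
\]

so a 12-pulse arrangement produces the 11th, 13th, 23rd, 25th, and higher harmonics while cancelling the 5th and 7th that a single 6-pulse bridge would inject; this is the sense in which the series connection reduces harmonics.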
Comparisons to other devices
The functional drawback of a thyristor is that, like a diode, it conducts in only one direction, so on its own it can control only one half of an AC waveform. A similar self-latching 5-layer device, called a TRIAC, is able to work in both directions. This added capability, though, can also become a shortcoming. Because the TRIAC can conduct in both directions, reactive loads can cause it to fail to turn off during the zero-voltage instants of the AC power cycle. Because of this, use of TRIACs with (for example) heavily inductive motor loads usually requires the use of a "snubber" circuit around the TRIAC to assure that it will turn off with each half-cycle of mains power. Inverse parallel SCRs can also be used in place of the triac; because each SCR in the pair has an entire half-cycle of reverse polarity applied to it, the SCRs, unlike TRIACs, are sure to turn off. The "price" to be paid for this arrangement, however, is the added complexity of two separate, but essentially identical gating circuits.
Although thyristors are heavily used in megawatt-scale rectification of AC to DC, in low- and medium-power (from few tens of watts to few tens of kilowatts) applications they have virtually been replaced by other devices with superior switching characteristics like power MOSFETs or IGBTs. One major problem associated with SCRs is that they are not fully controllable switches. The GTO thyristor and IGCT are two devices related to the thyristor that address this problem. In high-frequency applications, thyristors are poor candidates due to long switching times arising from bipolar conduction. MOSFETs, on the other hand, have much faster switching capability because of their unipolar conduction (only majority carriers carry the current).
See also
Thyristor-controlled reactor
Insulated-gate bipolar transistor
Latch-up
Quadrac
Thyratron
Thyristor drive
Gate turn-off thyristor
References
Sources
Thyristor Theory and Design Considerations; ON Semiconductor; 240 pages; 2006; HBD855/D. (Free PDF download)
Ulrich Nicolai, Tobias Reimann, Jürgen Petzoldt, Josef Lutz: Application Manual IGBT and MOSFET Power Modules, 1. Edition, ISLE Verlag, 1998, . (Free PDF download)
SCR Manual; 6th edition; General Electric Corporation; Prentice-Hall; 1979.
External links
The Early History of the Silicon Controlled Rectifier by Frank William Gutzwiller (of G.E.)
THYRISTORS from All About Circuits
Universal thyristor driving circuit
Thyristor Resources (simpler explanation)
Thyristors of STMicroelectronics
Thyristor basics
Electric power systems components
High-voltage direct current
Power electronics
Solid state switches | Thyristor | [
"Engineering"
] | 4,402 | [
"Electronic engineering",
"Power electronics"
] |
181,206 | https://en.wikipedia.org/wiki/Wide-body%20aircraft | A wide-body aircraft, also known as a twin-aisle aircraft and in the largest cases as a jumbo jet, is an airliner with a fuselage wide enough to accommodate two passenger aisles with seven or more seats abreast. The typical fuselage diameter is . In the typical wide-body economy cabin, passengers are seated seven to ten abreast, allowing a total capacity of 200 to 850 passengers. Seven-abreast aircraft typically seat 160 to 260 passengers, eight-abreast 250 to 380, nine- and ten-abreast 350 to 480. The largest wide-body aircraft are over wide, and can accommodate up to eleven passengers abreast in high-density configurations.
By comparison, a typical narrow-body aircraft has a diameter of , with a single aisle, and seats between two and six people abreast.
Wide-body aircraft were originally designed for a combination of efficiency and passenger comfort and to increase the amount of cargo space. However, airlines quickly gave in to economic factors, and reduced the extra passenger space in order to insert more seats and increase revenue and profits. Wide-body aircraft are also used by commercial cargo airlines, along with other specialized uses.
By the end of 2017, nearly 8,800 wide-body airplanes had been delivered since 1969, with production peaking at 412 in 2015.
History
1960s
Following the success of the Boeing 707 and Douglas DC-8 in the late 1950s and early 1960s, airlines began seeking larger aircraft to meet the rising global demand for air travel. Engineers were faced with many challenges as airlines demanded more passenger seats per aircraft, longer ranges and lower operating costs.
Early jet aircraft such as the 707 and DC-8 seated passengers along either side of a single aisle, with no more than six seats per row. Larger aircraft would have to be longer, higher (double-deck aircraft), or wider in order to accommodate a greater number of passenger seats.
Engineers realized having two decks created difficulties in meeting emergency evacuation regulations with the technology available at that time. During the 1960s, it was also believed that supersonic airliners would succeed larger, slower planes. Thus, it was believed that most subsonic aircraft would become obsolete for passenger travel and would eventually be converted to freighters. As a result, aircraft manufacturers opted for a wider fuselage rather than a taller one (the 747, and eventually the McDonnell Douglas DC-10 and Lockheed L-1011 TriStar). By adding a second aisle, the wider aircraft could accommodate as many as 10 seats across, but could also be easily converted to a freighter and carry two eight-by-eight freight pallets abreast.
The engineers also opted for creating "stretched" versions of the DC-8 (61, 62 and 63 models), as well as longer versions of Boeing's 707 (-320B and 320C models) and 727 (-200 model); and Douglas' DC-9 (-30, -40, and -50 models), all of which were capable of accommodating more seats than their shorter predecessor versions.
1970s
The wide-body age of jet travel began in 1970 with the entry into service of the first wide-body airliner, the four-engined, partial double-deck Boeing 747. New trijet wide-body aircraft soon followed, including the McDonnell Douglas DC-10 and the L-1011 TriStar. The first wide-body twinjet, the Airbus A300, entered service in 1974. This period came to be known as the "wide-body wars".
L-1011 TriStars were demonstrated in the USSR in 1974, as Lockheed sought to sell the aircraft to Aeroflot. However, in 1976 the Soviet Union launched its own first four-engined wide-body, the Ilyushin Il-86.
After the success of the early wide-body aircraft, several subsequent designs came to market over the next two decades, including the Boeing 767 and 777, the Airbus A330 and Airbus A340, and the McDonnell Douglas MD-11. In the "jumbo" category, the capacity of the Boeing 747 was not surpassed until October 2007, when the Airbus A380 entered commercial service with the nickname "Superjumbo". Both the Boeing 747 and Airbus A380 "jumbo jets" have four engines each (quad-jets), but the upcoming Boeing 777X ("mini jumbo jet") is a twinjet.
In the mid-2000s, rising oil costs in a post-9/11 climate caused airlines to look towards newer, more fuel-efficient aircraft. Two such examples are the Boeing 787 Dreamliner and Airbus A350 XWB. The proposed Comac C929 and C939 may also share this new wide-body market.
The production of the large Boeing 747-8 and Airbus A380 four-engine, long-haul jets has come to an end as airlines are now preferring the smaller, more efficient Airbus A350, Boeing 787 and Boeing 777 twin-engine, long-range airliners.
Design
Fuselage
Although wide-body aircraft have larger frontal areas (and thus greater form drag) than narrow-body aircraft of similar capacity, they have several advantages over their narrow-body counterparts, such as:
Larger cabin space for passengers, giving a more open feeling.
Lower ratio of surface area to volume, and thus lower drag per passenger or cargo volume (see the rough numerical sketch after this list). The only exception to this would be with very long narrow-body aircraft, such as the Boeing 757 and Airbus A321.
Twin aisles that accelerate loading, unloading, and evacuation compared to a single aisle (wide-body airliners typically have 3.5 to 5 seats abreast per aisle, compared to 5–6 on most narrow-body aircraft).
Reduced overall aircraft length for a given capacity, improving ground manoeuvrability and reducing the risk of tail strikes.
Greater under-floor freight capacity.
Better structural efficiency for larger aircraft than would be possible with a narrow-body design.
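The surface-area-to-volume point above can be made concrete with a crude cylinder model. Everything in this sketch is assumed for illustration only (seat counts, diameters, and seat pitch are round numbers, not figures from this article):

```python
import math

def cabin_geometry(seats, abreast, diameter_m, seat_pitch_m=0.81):
    """Crude model: the cabin is a cylinder whose length is rows * seat pitch.
    Returns side (wetted) area per seat, volume per seat, and their ratio."""
    rows = math.ceil(seats / abreast)
    length = rows * seat_pitch_m
    area = math.pi * diameter_m * length                 # cylinder side area
    volume = math.pi * (diameter_m / 2) ** 2 * length    # enclosed volume
    return area / seats, volume / seats, area / volume

# Assumed comparison: 270 seats, 6-abreast narrow-body vs 9-abreast wide-body.
for label, abreast, dia in [("narrow-body", 6, 3.8), ("wide-body", 9, 5.6)]:
    a, v, ratio = cabin_geometry(270, abreast, dia)
    print(f"{label}: {a:.2f} m^2/seat, {v:.2f} m^3/seat, surface/volume = {ratio:.2f}")
```

With these made-up numbers the wide-body encloses roughly 45% more volume per seat for about the same wetted area, which is the sense in which the list claims a lower surface-area-to-volume ratio (and hints at the extra under-floor freight space).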
British and Russian designers had proposed wide-body aircraft similar in configuration to the Vickers VC10 and Douglas DC-9, but with a wide-body fuselage. The British BAC Three-Eleven project did not proceed due to lack of government backing, while the Russian Ilyushin Il-86 wide-body proposal eventually gave way to a more conventional wing-mounted engine design, most likely due to the inefficiencies of mounting such large engines on the aft fuselage.
Engines
As jet engine power and reliability have increased over the last decades, most of the wide-body aircraft built today have only two engines. A twinjet design is more fuel-efficient than a trijet or quadjet of similar size. The increased reliability of modern jet engines also allows aircraft to meet the ETOPS certification standard, which calculates reasonable safety margins for flights across oceans. The trijet design was dismissed due to higher maintenance and fuel costs compared to a twinjet. Most modern wide-body aircraft have two engines, although the heaviest wide-body aircraft, the Airbus A380 and the Boeing 747-8, are built with four engines. The upcoming Boeing 777X-9 twinjet is approaching the capacity of the earlier Boeing 747.
The Boeing 777 twinjet features the most powerful jet engine, the General Electric GE90. The early variants have a fan diameter of , and the larger GE90-115B has a fan diameter of . This is almost as wide as the Fokker 100 fuselage. Complete GE90 engines can only be ferried by outsize cargo aircraft such as the Antonov An-124, presenting logistics problems if a 777 is stranded in a place due to emergency diversions without the proper spare parts. If the fan is removed from the core, then the engines may be shipped on a Boeing 747 Freighter.
The General Electric GE9X, powering the Boeing 777X, is wider than the GE90 by .
The maximum takeoff weight of the Airbus A380 would not have been possible without the engine technology developed for the Boeing 777 such as contra-rotating spools. Its Trent 900 engine has a fan diameter of , slightly smaller than the GE90 engines on the Boeing 777. The Trent 900 is designed to fit into a Boeing 747-400F freighter for easier transport by air cargo.
Interior
The interiors of aircraft, known as the aircraft cabin, have been undergoing evolution since the first passenger aircraft. Today, between one and four classes of travel are available on wide-body aircraft.
Bar and lounge areas which were once installed on wide-body aircraft have mostly disappeared, but a few have returned in first class or business class on the Airbus A340-600, Boeing 777-300ER, and on the Airbus A380. Emirates has installed showers for first-class passengers on the A380; twenty-five minutes are allotted for use of the room, and the shower operates for a maximum of five minutes.
Depending on how the airline configures the aircraft, the size and seat pitch of the airline seats will vary significantly. For example, aircraft scheduled for shorter flights are often configured at a higher seat density than long-haul aircraft. Due to current economic pressures on the airline industry, high seating densities in the economy class cabin are likely to continue.
In some of the largest single-deck wide-body aircraft, such as the Boeing 777, the extra space above the cabin is used for crew rest areas and galley storage.
Jumbo jets
The term "jumbo jet" usually refers to the largest variants of wide-body airliners; examples include the Boeing 747 (the first wide-body and original "jumbo jet"), Airbus A380 ("superjumbo jet"), and Boeing 777-9. The phrase "jumbo jet" derives from Jumbo, a circus elephant in the 19th century.
Wake turbulence and separation
Aircraft are categorized by ICAO according to the wake turbulence they produce. Because wake turbulence is generally related to the weight of an aircraft, aircraft are assigned to one of four weight categories: light, medium, heavy, and super.
Due to their weight, all current wide-body aircraft are categorized as "heavy", or in the case of the A380 in U.S. airspace, "super".
The wake-turbulence category also is used to guide the separation of aircraft. Super- and heavy-category aircraft require greater separation behind them than those in other categories. In some countries, such as the United States, it is a requirement to suffix the aircraft's call sign with the word heavy (or super) when communicating with air traffic control in certain areas.
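As a rough illustration of how the weight bands map to categories, here is a sketch classifier; the thresholds (about 7,000 kg and 136,000 kg MTOW) are quoted from memory as the usual ICAO figures and should be checked against current rules, and the "super" band is effectively type-specific:

```python
def icao_wake_category(mtow_kg, aircraft_type=""):
    """Approximate ICAO wake-turbulence category from maximum takeoff weight."""
    if aircraft_type.upper().startswith("A380"):
        return "Super"
    if mtow_kg >= 136_000:
        return "Heavy"
    if mtow_kg > 7_000:
        return "Medium"
    return "Light"

# Illustrative values: every current wide-body falls into Heavy or Super.
print(icao_wake_category(351_500, "Boeing 777-300ER"))  # Heavy
print(icao_wake_category(575_000, "A380-800"))          # Super
```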
Special uses
Wide-body aircraft are used in science, research, and the military. Some wide-body aircraft are used as flying command posts by the military like the Ilyushin Il-80 or the Boeing E-4, while the Boeing E-767 is used for airborne early warning and control. New military weapons are tested aboard wide-bodies, as in the laser weapons testing on the Boeing YAL-1. Other wide-body aircraft are used as flying research stations, such as the joint German–U.S. Stratospheric Observatory for Infrared Astronomy (SOFIA). Airbus A340, Airbus A380, and Boeing 747 four-engine wide-body aircraft are used to test new generations of aircraft engines in flight. A few aircraft have also been converted for aerial firefighting, such as the DC-10-based Tanker 910 and the 747-200-based Evergreen Supertanker.
Some wide-body aircraft are used as VIP transport. To transport those holding the highest offices, Canada uses the Airbus A310, while Russia uses the Ilyushin Il-96. Germany replaced its Airbus A310 with an Airbus A340 in spring 2011. Specially-modified Boeing 747-200s (Boeing VC-25s) are used to transport the President of the United States.
Outsize cargo
Some wide-body aircraft have been modified to enable transport of oversize cargo. Examples include the Airbus Beluga, Airbus BelugaXL and Boeing Dreamlifter. Two specially modified Boeing 747s were used to transport the U.S. Space Shuttle, while the Antonov An-225 was initially built to carry the Buran shuttle.
Comparison
See also
Aircraft seat map
Competition between Airbus and Boeing
Large aircraft
List of large aircraft
References
External links
Airplane seat pitch and width information in table form
WidebodyAircraft.nl information and chronology
Etihad Airways A340-600 interior
Aircraft configurations
Lists of aircraft by design configuration | Wide-body aircraft | [
"Engineering"
] | 2,586 | [
"Aircraft configurations",
"Aerospace engineering"
] |
181,242 | https://en.wikipedia.org/wiki/Organizational%20commitment | In organizational behavior and industrial and organizational psychology, organizational commitment is an individual's psychological attachment to the organization. Organizational scientists have also developed many nuanced definitions of organizational commitment, and numerous scales to measure them. Exemplary of this work is Meyer and Allen's model of commitment, which was developed to integrate numerous definitions of commitment that had been proliferated in the literature. Meyer and Allen's model has also been critiqued because the model is not consistent with empirical findings. It may also not be fully applicable in domains such as customer behavior. There has also been debate surrounding what Meyers and Allen's model was trying to achieve.
The basis behind many of these studies was to find ways to improve how workers feel about their jobs so that these workers would become more committed to their organizations.
Organizational commitment predicts work variables such as turnover, organizational citizenship behavior, and job performance. Some of the factors such as role stress, empowerment, job insecurity and employability, and distribution of leadership have been shown to be connected to a worker's sense of organizational commitment.
Model of commitment
Meyer and Allen's (1991) three-component model of commitment was created to argue that commitment has three different components that correspond with different psychological states. Meyer and Allen created this model for two reasons: first, to "aid in the interpretation of existing research", and second, "to serve as a framework for future research". Their study was based mainly around previous studies of organizational commitment.
Meyer and Allen's research indicated that there are three "mind sets" which can characterize an employee's commitment to the organization. Mercurio (2015) extended this model by reviewing the empirical and theoretical studies on organizational commitment. Mercurio posits that emotional, or affective commitment is the core essence of organizational commitment.
Affective commitment
Affective Commitment is defined as the employee's positive emotional attachment to the organization. Meyer and Allen pegged AC as the "desire" component of organizational commitment. An employee who is affectively committed strongly identifies with the goals of the organization and desires to remain a part of the organization. This employee commits to the organization because they "want to".
This commitment can be influenced by many different demographic characteristics: age, tenure, sex, and education, but these influences are neither strong nor consistent. The problem with these characteristics is that while they can be seen, they cannot be clearly defined. Meyer and Allen gave the example that "positive relationships between tenure and commitment may be due to tenure-related differences in job status and quality". In developing this concept, Meyer and Allen drew largely on Mowday, Porter, and Steers's (2006) concept of commitment, which in turn drew on earlier work by Kanter (1968). Mercurio (2015) stated that "affective commitment was found to be an enduring, demonstrably indispensable, and central characteristic of organizational commitment".
Continuance commitment
Continuance commitment is the "need" component or the gains versus losses of working in an organization.
"Side bets", or investments, are the gains and losses that may occur should an individual stay or leave an organization. An individual may commit to the organization because he/she perceives a high cost of losing organizational membership (cf. Becker's 1960 "side bet theory").
Things like economic costs (such as pension accruals) and social costs (friendship ties with co-workers) would be costs of losing organizational membership. But an individual does not weigh these costs in isolation when deciding whether to stay with an organization; they must also take into account the availability of alternatives (such as another organization), disrupted personal relationships, and other "side bets" that would be incurred from leaving their organization. The problem with this is that these "side bets" do not occur at once but "accumulate with age and tenure".
Normative commitment
The individual commits to and remains with an organization because of feelings of obligation, the last component of organizational commitment. These feelings may derive from a strain on an individual before and after joining an organization. For example, the organization may have invested resources in training an employee who then feels a 'moral' obligation to put forth effort on the job and stay with the organization to 'repay the debt.' It may also reflect an internalized norm, developed before the person joins the organization through family or other socialization processes, that one should be loyal to one's organization. The employee stays with the organization because he/she "ought to". But generally, if an individual invests a great deal, they will receive "advanced rewards".
Normative commitment is higher in organizations that value loyalty and systematically communicate the fact to employees with rewards, incentives and other strategies. Normative commitment in employees is also high where employees regularly see visible examples of the employer being committed to employee well-being. An employee with greater organizational commitment has a greater chance of contributing to organizational success and will also experience higher levels of job satisfaction. High levels of job satisfaction, in turn, reduces employee turnover and increases the organization's ability to recruit and retain talent. Meyer and Allen based their research in this area more on theoretical evidence rather than empirical, which may explain the lack of depth in this section of their study compared to the others. They drew off Wiener's (2005) research for this commitment component.
Critique to the three-component model
Since the model was made, there has been conceptual critique to what the model is trying to achieve. Specifically from three psychologists, Omar Solinger, Woody Olffen, and Robert Roe. To date, the three-component conceptual model has been regarded as the leading model for organizational commitment because it ties together three aspects of earlier commitment research (Becker, 2005; Buchanan, 2005; Kanter, 1968; Mathieu & Zajac, 1990; Mowday, Porter, & Steers, 1982; Salancik, 2004; Weiner, 2004; Weiner & Vardi, 2005). However, a collection of studies have shown that the model is not consistent with empirical findings. Solinger, Olffen, and Roe use a later model by Alice Eagly and Shelly Chaiken, Attitude-behavior Model (2004), to present that TCM combines different attitude phenomena. They have come to the conclusion that TCM is a model for predicting turnover. In a sense the model describes why people should stay with the organization whether it is because they want to, need to, or ought to. The model appears to mix together an attitude toward a target, that being the organization, with an attitude toward a behavior, which is leaving or staying. They believe the studies should return to the original understanding of organizational commitment as an attitude toward the organization and measure it accordingly. Although the TCM is a good way to predict turnover, these psychologists do not believe it should be the general model. Because Eagly and Chaiken's model is so general, it seems that the TCM can be described as a specific subdivision of their model when looking at a general sense of organizational commitment. It becomes clear that affective commitment equals an attitude toward a target, while continuance and normative commitment are representing different concepts referring to anticipated behavioral outcomes, specifically staying or leaving. This observation backs up their conclusion that organizational commitment is perceived by TCM as combining different target attitudes and behavioral attitudes, which they believe to be both confusing and logically incorrect. The attitude-behavioral model can demonstrate explanations for something that would seem contradictory in the TCM. That is that affective commitment has stronger associations with relevant behavior and a wider range of behaviors, compared to normative and continuance commitment. Attitude toward a target (the organization) is obviously applicable to a wider range of behaviors than an attitude toward a specific behavior (staying). After their research, Sollinger, Olffen, and Roe believe Eagly and Chaiken's attitude-behavior model from 1993 would be a good alternative model to look at as a general organizational commitment predictor because of its approach at organizational commitment as a singular construct, which in turn would help predicting various behaviors beyond turnover.
A five component commitment model
More recently, scholars have proposed a five component model of commitment, though it has been developed in the context of product and service consumption. This model proposes habitual and forced commitment as two additional dimensions which are very germane in consumption settings. It seems, however, that habitual or inertial commitment may also become relevant in many job settings. People get habituated to a job: the routine, the processes, and the cognitive schemas associated with a job can make people develop a latent commitment to the job, just as may occur in a consumption setting. The paper by Keiningham and colleagues also compared applications of the TCM in job settings and in consumption settings to develop additional insights.
Job satisfaction
Job satisfaction is commonly defined as the extent to which employees like their work. Researchers have examined Job satisfaction for the past several decades. Studies have been devoted to figuring out the dimensions of job satisfaction, antecedents of job satisfaction, and the relationship between satisfaction and commitment. Satisfaction has also been examined under various demographics of gender, age, race, education, and work experience. Most research on job satisfaction has been aimed towards the person-environment fit paradigm. Job satisfaction has been found to be an important area of research because one of the top reasons individuals give for leaving a job is dissatisfaction.
Much of the literature on the relationship between commitment and satisfaction with one's job indicates that if employees are satisfied they develop stronger commitment to their work. Kalleberg (1990) studied work attitudes of workers in the US and Japan and found a correlation of 0.73 between job satisfaction and organizational commitment of workers in Japan and a higher significant correlation of 0.81 among Americans. A study conducted by Dirani and Kuchinke produced results indicating a strong correlation between job commitment and job satisfaction and that satisfaction was a reliable predictor of commitment. Job satisfaction among employees—at least in retail settings—can also strengthen the association between customer satisfaction and customer loyalty.
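For interpretation, squaring those coefficients gives the proportion of variance the two attitudes share in each sample, a standard (if purely illustrative) reading of Pearson's r:

\[
r^2_{\text{Japan}} = 0.73^2 \approx 0.53, \qquad r^2_{\text{US}} = 0.81^2 \approx 0.66,
\]

i.e., roughly half to two-thirds of the variability in commitment scores in those samples is statistically shared with job satisfaction.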
Perceiving a "Calling" A study at the University of Florida found a positive correlation between the individual's perception of their career being a "calling" and the level of commitment to the job. This study looked at the relation between work commitment and participant's perception of meaning in their job. Participants were tested in the areas of; perceiving a calling, job satisfaction, and job commitment. Results showed a moderate correlation between participants perceiving a calling and job commitment and a weak correlation between perceiving a calling and job satisfaction.
Other factors
Role Stress
Dysfunctions in role performance have been associated with a large number of consequences, almost always negative, which affect the well-being of workers and the functioning of organizations. An individual's experience of receiving incompatible or conflicting requests (role conflict) and/or lacking enough information to carry out his or her job (role ambiguity) are causes of role stress. Role ambiguity and conflict decrease workers' performance and are positively related to the probability of workers leaving the organization. Role conflict and ambiguity have been proposed as determining factors of workers' job satisfaction and organizational commitment.
Empowerment
Empowerment in the workplace has had several different definitions over the years. It has been considered 'energizing followers through leadership, enhancing self-efficacy by reducing powerlessness and increasing intrinsic task motivation.' A psychological view of empowerment describes it as 'a process of intrinsic motivation, perceived control, competence, and energizing towards achieving goals.' There are two prominent concepts of empowerment. The first is structural empowerment, which comes from organizational/management theory and is described as the ability to get things done and to mobilize resources. The second is psychological empowerment, which comes from social psychological models and is described as the psychological perceptions and attitudes of employees about their work and their organizational roles. A study by Ahmad et al. found support for the relationship between empowerment and job satisfaction and job commitment. The study looked at nurses working in England and nurses working in Malaysia. Taking cultural context into consideration, the study still showed a positive correlation between empowerment and job satisfaction/commitment.
Job Insecurity and Employability
A study conducted by De Cuyper found that workers on fixed-term contracts or considered "temporary workers" reported higher levels of job insecurity than permanent workers. Job insecurity was found to correlate negatively with job satisfaction and affective organizational commitment in permanent workers. The study also found that job satisfaction and organizational commitment were highly correlated with being a permanent worker.
Distribution of Leadership
A study conducted by Hulpia et al. focused on the impact of the distribution of leadership and leadership support among teachers and how that affected job satisfaction and commitment. The study found a strong relationship between organizational commitment and both the cohesion of the leadership team and the amount of leadership support. Previously held beliefs were that job satisfaction and commitment among teachers were negatively correlated with absenteeism and turnover and positively correlated with job effort and job performance. This study also examined how a single leader (usually a principal) affected the job satisfaction and commitment of teachers. It found that when leadership was distributed by the 'leader' out to the teachers as well, workers reported higher job satisfaction and organizational commitment than when most of the leadership fell to one person. Even when it was only the perception of distributed leadership roles, workers still reported high levels of job satisfaction and commitment.
Shift to organizational change commitment
By the end of the 1990s, leaders were finding little value in simply knowing whether or not their people were committed to the organization. It was particularly frustrating that leaders could see that people committed to the organization were not as committed to strategic change initiatives, the majority of which failed to live up to expectations. John Meyer responded to this gap by proposing a model of organizational change commitment.
The new model includes the same three components, but adds a behavioral commitment scale: resistance, passive resistance, compliance, cooperation, and championing. Though Meyer does not cite him, a peer-reviewed source for behavioral commitment is Leon Coetsee in South Africa. Coetsee brought the resistance-to-commitment model of Harvard consultant Arnold Judson into academic research and has continued developing the model as recently as 2011.
Guidelines to enhance
Five rules help to enhance organizational commitment:
Commit to people-first values: put it in writing, hire the right kind of managers, and walk the talk.
Clarify and communicate your mission: clarify the mission and ideology; make it charismatic; use value-based hiring practices; stress values-based orientation and training; build tradition.
Guarantee organizational justice: have a comprehensive grievance procedure; provide for extensive two-way communication.
Create a community of practice: build value-based homogeneity; share and share alike; emphasize barnraising, cross-utilization, and teamwork; get people to work together.
Support employee development: commit to actualizing; provide first-year job challenge; enrich and empower; promote from within; provide developmental activities; provide employee security without guarantees.
See also
Employee engagement
High commitment management
Job satisfaction
Onboarding
Organizational justice
Person-environment fit
Psychological contract
Stigma management
References
Organizational behavior
Motivation
Industrial and organizational psychology | Organizational commitment | [
"Biology"
] | 3,094 | [
"Behavior",
"Motivation",
"Organizational behavior",
"Ethology",
"Human behavior"
] |
181,251 | https://en.wikipedia.org/wiki/Lunar%20day | A lunar day is the time it takes for Earth's Moon to complete one synodic rotation on its axis, that is, with respect to the Sun. Informally, a lunar day and a lunar night each last approximately 14 Earth days. The formal lunar day is therefore the time of a full lunar day-night cycle. Due to tidal locking, this equals the time that the Moon takes to complete one synodic orbit around Earth, a synodic lunar month, returning to the same lunar phase. The synodic period is about 29.5 Earth days, which is about 2.2 days longer than its sidereal period.
Main definition
Relative to the fixed stars on the celestial sphere, the Moon takes 27 Earth days, 7 hours, 43 minutes, 12 seconds to complete one orbit; however, since the Earth–Moon system advances around the Sun at the same time, the Moon must travel farther to return to the same phase. On average, this synodic period lasts 29 days, 12 hours, 44 minutes, 3 seconds, the length of a lunar month on Earth. The exact length varies over time because the speed of the Earth–Moon system around the Sun varies slightly during a year due to the eccentricity of its elliptical orbit, variances in orbital velocity, and a number of other periodic and evolving variations about its observed, relative, mean values, which are influenced by the gravitational perturbations of the Sun and other bodies in the Solar System.
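As a rough check of these figures, the synodic month can be recovered from the sidereal month and the length of the year; the following is a minimal illustrative sketch in Python, using approximate mean period values that are assumptions rather than figures quoted in this article:

```python
# Mean orbital periods in Earth days (approximate, illustrative values).
SIDEREAL_MONTH = 27.321661   # Moon's orbit relative to the fixed stars
YEAR = 365.242190            # Earth's orbit around the Sun

# Because the Earth-Moon system advances around the Sun, the Moon must turn a
# little further to face the Sun again: 1/synodic = 1/sidereal - 1/year.
synodic_month = 1.0 / (1.0 / SIDEREAL_MONTH - 1.0 / YEAR)

print(f"synodic month ~ {synodic_month:.2f} days")                         # ~ 29.53
print(f"excess over sidereal ~ {synodic_month - SIDEREAL_MONTH:.2f} days")  # ~ 2.21
```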
As a result, daylight at a given point on the Moon lasts approximately two weeks from beginning to end, followed by approximately two weeks of lunar night.
Alternate usage
The term lunar day may also refer to the period between moonrises or high moon in a particular location on Earth. This period is typically about 50 minutes longer than a 24-hour Earth day, as the Moon orbits the Earth in the same direction as the Earth's axial rotation.
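The size of that lag can be estimated with a similar back-of-the-envelope calculation; this is again an illustrative sketch, with the approximate 29.53-day synodic month as the only input:

```python
SOLAR_DAY_H = 24.0
SYNODIC_MONTH_H = 29.53 * 24.0   # synodic month expressed in hours (approximate)

# Relative to the Sun the Earth turns once per solar day, while the Moon drifts
# eastward once per synodic month, so moonrises repeat with period
# 1 / (1/solar_day - 1/synodic_month).
moonrise_interval_h = 1.0 / (1.0 / SOLAR_DAY_H - 1.0 / SYNODIC_MONTH_H)

extra_min = (moonrise_interval_h - SOLAR_DAY_H) * 60
print(f"interval between moonrises ~ {moonrise_interval_h:.2f} h "
      f"({extra_min:.0f} min longer than a solar day)")   # ~ 24.84 h, ~ 50 min
```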
The term lunar day is also used in the context of night and day, i.e., opposite to the lunar night. This is common in discussions of the huge difference in temperatures, such as discussion about lunar rovers. For example, "the Soviet Union's Luna missions [...] were designed to survive one lunar day (two Earth weeks)", while China's Yutu-2 rover, which landed in January 2019, was designed to survive lunar nights by shutting down.
Lunar calendars
In some lunar calendars, such as the Vikram Samvat, a lunar day, or tithi, is defined as 1/30 of a lunar month, or the time it takes for the longitudinal angle between the Moon and the Sun to increase by 12 degrees. By this definition, lunar days generally vary in duration.
See also
Lunisolar calendar
Mars sol, the Martian day
Synodic day
References
External links
Lunar days and other lunar data for many different cities. Lunarium.co.uk.
Lunar Standard Time (LST) lunarclock.org.
Day
Day, lunar | Lunar day | [
"Physics",
"Mathematics"
] | 603 | [
"Physical quantities",
"Time",
"Units of time",
"Quantity",
"Spacetime",
"Units of measurement"
] |
11,096,735 | https://en.wikipedia.org/wiki/Static%20light%20scattering | Static light scattering is a technique in physical chemistry that measures the intensity of the scattered light to obtain the average molecular weight Mw of a macromolecule like a polymer or a protein in solution. Measurement of the scattering intensity at many angles allows calculation of the root mean square radius, also called the radius of gyration Rg. By measuring the scattering intensity for many samples of various concentrations, the second virial coefficient, A2, can be calculated.
Static light scattering is also commonly utilized to determine the size of particle suspensions in the sub-μm and supra-μm ranges, via the Lorenz-Mie (see Mie scattering) and Fraunhofer diffraction formalisms, respectively.
For static light scattering experiments, a high-intensity monochromatic light, usually a laser, is launched into a solution containing the macromolecules. One or many detectors are used to measure the scattering intensity at one or many angles. The angular dependence is required to obtain accurate measurements of both molar mass and size for all macromolecules of radius above 1–2% of the incident wavelength. Hence simultaneous measurements at several angles relative to the direction of the incident light, known as multi-angle light scattering (MALS) or multi-angle laser light scattering (MALLS), are generally regarded as the standard implementation of static light scattering. Additional details on the history and theory of MALS may be found in multi-angle light scattering.
To measure the average molecular weight directly without calibration from the light scattering intensity, the laser intensity, the quantum efficiency of the detector, and the full scattering volume and solid angle of the detector need to be known. Since this is impractical, all commercial instruments are calibrated using a strong, known scatterer like toluene since the Rayleigh ratio of toluene and a few other solvents were measured using an absolute light scattering instrument.
Theory
For a light scattering instrument composed of many detectors placed at various angles, all the detectors need to respond the same way. Usually, detectors will have slightly different quantum efficiencies and gains, and will view different geometrical scattering volumes. In this case, a normalization of the detectors is needed. To normalize the detectors, a measurement of a pure solvent is made first. Then an isotropic scatterer is added to the solvent. Since isotropic scatterers scatter the same intensity at any angle, the detector efficiency and gain can be normalized with this procedure. It is convenient to normalize all the detectors to the 90° angle detector.
The normalization factor for each detector can then be written as N(θ) = IR(90)/IR(θ), where IR(90) is the scattering intensity measured for the Rayleigh scatterer by the 90° angle detector and IR(θ) is that measured at angle θ.
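A minimal numerical sketch of this normalization step follows; the angles and intensity values below are invented for illustration and do not come from any particular instrument:

```python
import numpy as np

# Illustrative raw intensities (arbitrary units) at each detector angle for the
# pure solvent and for the solvent plus an isotropic (Rayleigh) scatterer.
angles = np.array([35.0, 50.0, 75.0, 90.0, 105.0, 130.0, 145.0])
I_solvent = np.array([1.10, 1.05, 1.00, 0.98, 1.02, 1.07, 1.12])
I_rayleigh = np.array([5.60, 5.10, 4.70, 4.55, 4.80, 5.00, 5.40])

# Excess scattering of the isotropic standard; an ideal isotropic scatterer gives
# the same signal at every angle, so any residual angular dependence reflects
# detector gain/efficiency and scattering-volume differences.
excess = I_rayleigh - I_solvent

# Normalize every detector to the 90-degree detector.
i90 = np.argmin(np.abs(angles - 90.0))
norm_factors = excess[i90] / excess

# Later sample measurements at each angle would be multiplied by these factors.
print(dict(zip(angles, np.round(norm_factors, 3))))
```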
The most common equation to measure the weight-average molecular weight, Mw, is the Zimm equation (the right-hand side of the Zimm equation is provided incorrectly in some texts, as noted by Hiemenz and Lodge):

Kc/ΔR(θ) = (1/Mw)(1 + q2Rg2/3) + 2A2c

where the optical constant is

K = 4π2n02(dn/dc)2/(NAλ4)

with ΔR(θ) the excess Rayleigh ratio of the solution over that of the pure solvent, and the scattering vector for vertically polarized light is

q = 4πn0sin(θ/2)/λ
with n0 the refractive index of the solvent, λ the wavelength of the light source, NA the Avogadro constant, c the solution concentration, and dn/dc the change in the refractive index of the solution with change in concentration. The intensity of the analyte measured at an angle is IA(θ). In these equations, the subscript A is for analyte (the solution) and T is for the toluene with the Rayleigh ratio of toluene, RT being 1.35×10−5 cm−1 for a HeNe laser. As described above, the radius of gyration, Rg, and the second virial coefficient, A2, are also calculated from this equation. The refractive index increment dn/dc characterizes the change of the refractive index n with the concentration c and can be measured with a differential refractometer.
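For orientation, the optical constant and scattering vector can be evaluated numerically. The sketch below assumes typical values for a protein in aqueous buffer measured with a HeNe laser; these numbers are illustrative assumptions, not values given in the text:

```python
import numpy as np

NA = 6.02214076e23           # Avogadro constant, 1/mol

# Illustrative assumptions: a protein in aqueous buffer, HeNe laser.
n0 = 1.331                   # solvent refractive index
dndc = 0.185                 # refractive index increment, cm^3/g
wavelength = 633e-7          # laser wavelength, cm (633 nm)

# Optical constant K = 4 pi^2 n0^2 (dn/dc)^2 / (NA lambda^4), in mol*cm^2/g^2
K = 4 * np.pi**2 * n0**2 * dndc**2 / (NA * wavelength**4)

# Scattering vector q = 4 pi n0 sin(theta/2) / lambda, in 1/cm
theta = np.radians(90.0)
q = 4 * np.pi * n0 * np.sin(theta / 2) / wavelength

print(f"K ~ {K:.3e} mol*cm^2/g^2, q(90 deg) ~ {q:.3e} 1/cm")
```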
A Zimm plot is built from a double extrapolation to zero angle and zero concentration from many angles and many concentration measurements. In its simplest form, the Zimm equation is reduced to:

Kc/ΔR(θ) = 1/Mw

for measurements made at low angle and infinite dilution, since P(0) = 1.
Several analyses have been developed to interpret the scattering of particles in solution and so derive the physical characteristics named above. A simple static light scattering experiment entails measuring the average intensity of the sample which, once corrected for the scattering of the solvent, yields the Rayleigh ratio R as a function of the angle or the wave vector q.
Data analyses
Guinier plot
The scattered intensity can be plotted as a function of the angle to give information on Rg, which can simply be calculated using the Guinier approximation (developed by André Guinier):

ln(ΔR(θ)) = ln(ΔR(0)) − (Rg2/3)q2

where ΔR(θ) is proportional to the form factor P(θ) at fixed concentration and q = 4πn0sin(θ/2)/λ. Hence a plot of ln(ΔR(θ)) vs q2 will yield a slope of −Rg2/3. However, this approximation is only true for qRg < 1. Note that for a Guinier plot, the value of dn/dc and the concentration are not needed.
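A minimal sketch of a Guinier analysis on synthetic data follows; all numbers are illustrative assumptions, and a real analysis would use measured excess Rayleigh ratios:

```python
import numpy as np

# Synthetic excess Rayleigh ratios for a particle of known radius of gyration
# (illustrative values; qRg stays below 1 as the approximation requires).
n0, wavelength = 1.331, 633e-7                         # solvent index, cm
angles = np.radians([30, 45, 60, 75, 90])
q = 4 * np.pi * n0 * np.sin(angles / 2) / wavelength   # 1/cm

Rg_true = 20e-7                                        # 20 nm expressed in cm
dR = 1.0e-5 * np.exp(-(q * Rg_true) ** 2 / 3)          # Guinier form of dR(theta)

# Guinier analysis: the slope of ln(dR) versus q^2 is -Rg^2/3.
slope, intercept = np.polyfit(q ** 2, np.log(dR), 1)
Rg_fit = np.sqrt(-3 * slope)
print(f"recovered Rg ~ {Rg_fit * 1e7:.1f} nm")         # ~ 20 nm
```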
Kratky plot
The Kratky plot is typically used to analyze the conformation of proteins but can be used to analyze the random walk model of polymers. A Kratky plot can be made by plotting sin2(θ/2)ΔR(θ) vs sin(θ/2) or q2ΔR(θ) vs q.
Zimm plot
For polymers and polymer complexes that are monodisperse, as determined by static light scattering, a Zimm plot is a conventional means of deriving parameters such as Rg, the molecular mass Mw, and the second virial coefficient A2.
One must note that if the material constant K is not implemented, a Zimm plot will only yield Rg. Hence implementing K will yield the equation:

Kc/ΔR(θ) = (1/Mw)(1 + q2Rg2/3) + 2A2c
The analysis performed with the Zimm plot uses a double-extrapolation to zero concentration and zero scattering angle resulting in a characteristic rhomboid plot. As the angular information is available, it is also possible to obtain the radius of gyration (Rg). Experiments are performed at several angles, which satisfy the condition and at least 4 concentrations. Performing a Zimm analysis on a single concentration is known as a partial Zimm analysis and is only valid for dilute solutions of strong point scatterers. The partial Zimm however, does not yield the second virial coefficient, due to the absence of the variable concentration of the sample. More specifically, the value of the second virial coefficient is either assumed to equal zero or is inputted as a known value in order to perform the partial Zimm analysis.
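The double extrapolation can be illustrated with a simple global least-squares fit on synthetic data. This is a minimal sketch with assumed parameter values and no noise or weighting, not a substitute for dedicated Zimm-plot software:

```python
import numpy as np

# Synthetic "measurements" following Kc/dR = (1/Mw)(1 + q^2 Rg^2/3) + 2 A2 c.
Mw_true, Rg_true, A2_true = 3.0e5, 25e-7, 2.0e-4   # g/mol, cm, mol*cm^3/g^2
n0, wavelength = 1.331, 633e-7                     # solvent index, laser wavelength in cm

angles = np.radians([30, 50, 70, 90, 110, 130])
concs = np.array([1.0, 2.0, 3.0, 4.0]) * 1e-3      # concentrations, g/cm^3
q = 4 * np.pi * n0 * np.sin(angles / 2) / wavelength

Q, C = np.meshgrid(q, concs)                       # every (angle, concentration) pair
KcR = (1 / Mw_true) * (1 + Q**2 * Rg_true**2 / 3) + 2 * A2_true * C

# Kc/dR is linear in q^2 and c, with coefficients [1/Mw, Rg^2/(3 Mw), 2 A2].
X = np.column_stack([np.ones(KcR.size), Q.ravel()**2, C.ravel()])
a0, a1, a2 = np.linalg.lstsq(X, KcR.ravel(), rcond=None)[0]

Mw, Rg, A2 = 1 / a0, np.sqrt(3 * a1 / a0), a2 / 2
print(f"Mw ~ {Mw:.3g} g/mol, Rg ~ {Rg * 1e7:.1f} nm, A2 ~ {A2:.2e} mol*cm^3/g^2")
```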
Debye plot
If the measured particles are smaller than λ/20, the form factor P(θ) can be neglected (P(θ) → 1). Therefore, the Zimm equation is simplified to the Debye equation, as follows:

Kc/ΔR(θ) = 1/Mw + 2A2c
Note that this is also the result of an extrapolation to zero scattering angle. By acquiring data on concentration and scattering intensity, the Debye plot is constructed by plotting Kc/ΔR(θ) vs. concentration. The intercept of the fitted line gives the reciprocal of the molecular mass (1/Mw), while the slope corresponds to twice the second virial coefficient (2A2).
As the Debye plot is a simplification of the Zimm equation, the same limitations of the latter apply, i.e., samples should present a monodisperse nature. For polydisperse samples, the resulting molecular mass from a static light-scattering measurement will represent an average value. An advantage of the Debye plot is the possibility to determine the second virial coefficient. This parameter describes the interaction between particles and the solvent. In macromolecule solutions, for instance, it can assume negative (particle-particle interactions are favored), zero, or positive values (particle-solvent interactions are favored).
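A minimal sketch of the corresponding Debye analysis on synthetic single-angle data follows; the parameter values are assumptions chosen purely for illustration:

```python
import numpy as np

# Synthetic Kc/dR values at one low angle for several concentrations,
# following Kc/dR = 1/Mw + 2*A2*c (valid when P(theta) ~ 1).
Mw_true, A2_true = 1.5e4, 5.0e-4          # g/mol, mol*cm^3/g^2
c = np.array([1, 2, 4, 6, 8]) * 1e-3      # concentrations, g/cm^3
KcR = 1 / Mw_true + 2 * A2_true * c

# Linear fit: slope = 2*A2, intercept = 1/Mw.
slope, intercept = np.polyfit(c, KcR, 1)
print(f"Mw ~ {1 / intercept:.3g} g/mol, A2 ~ {slope / 2:.2e} mol*cm^3/g^2")
```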
Multiple scattering
Static light scattering assumes that each detected photon has only been scattered exactly once. Therefore, analysis according to the calculations stated above will only be correct if the sample has been diluted sufficiently to ensure that photons are not scattered multiple times by the sample before being detected. Accurate interpretation becomes exceedingly difficult for systems with non-negligible contributions from multiple scattering. In many commercial instruments where analysis of the scattering signal is automatically performed, the error may never be noticed by the user. Particularly for larger particles and those with high refractive index contrast, this limits the application of standard static light scattering to very low particle concentrations. On the other hand, for soluble macromolecules that exhibit a relatively low refractive index contrast versus the solvent, including most polymers and biomolecules in their respective solvents, multiple scattering is rarely a limiting factor even at concentrations that approach the limits of solubility.
However, as shown by Schaetzel, it is possible to suppress multiple scattering in static light scattering experiments via a cross-correlation approach. The general idea is to isolate singly scattered light and suppress undesired contributions from multiple scattering in a static light scattering experiment. Different implementations of cross-correlation light scattering have been developed and applied. Currently, the most widely used scheme is the so-called 3D-dynamic light scattering method. The same method can also be used to correct dynamic light scattering data for multiple scattering contributions.
Composition-gradient static light scattering
Samples that change their properties after dilution may not be analyzed via static light scattering in terms of the simple model presented here as the Zimm equation. A more sophisticated analysis known as 'composition-gradient static (or multi-angle) light scattering' (CG-SLS or CG-MALS) is an important class of methods to investigate protein–protein interactions, colligative properties, and other macromolecular interactions as it yields, in addition to size and molecular weight, information on the affinity and stoichiometry of molecular complexes formed by one or more associating macromolecular/biomolecular species. In particular, static light scattering from a dilution series may be analyzed to quantify self-association, reversible oligomerization, and non-specific attraction or repulsion, while static light scattering from mixtures of species may be analyzed to quantify hetero-association.
Applications
One of the main applications of static light scattering for molecular mass determination is in the field of macromolecules, such as proteins and polymers, as it is possible to measure the molecular mass of proteins without any assumption about their shape. Static light scattering is usually combined with other particle characterization techniques, such as size-exclusion chromatography (SEC), dynamic light scattering (DLS), and electrophoretic light scattering (ELS).
See also
Differential static light scatter (DSLS)
Dynamic light scattering
Light scattering
Protein–protein interactions
References
External links
Application of static light scattering
Litesizer
Scattering, absorption and radiative transfer (optics)
Scattering
Polymer chemistry
Polymer physics
Physical chemistry | Static light scattering | [
"Physics",
"Chemistry",
"Materials_science",
"Engineering"
] | 2,303 | [
"Polymer physics",
" absorption and radiative transfer (optics)",
"Applied and interdisciplinary physics",
"Materials science",
"Scattering",
"Particle physics",
"Condensed matter physics",
"Nuclear physics",
"nan",
"Polymer chemistry",
"Physical chemistry"
] |
11,097,538 | https://en.wikipedia.org/wiki/Saturn%20V-3 | The Saturn V-3, also known as the Saturn MLV 5-3, was a conceptual heavy-lift launch vehicle that would have utilized new engines and new stages that were never used on the original Saturn V. The Saturn V-3 was studied by the NASA Marshall Space Flight Center in 1965.
The first stage, called the MS-IC-1, was to have used new F-1 engines designated F-1A, which used a pump-fed design and offered an anticipated 20% additional thrust and a six-second improvement in specific impulse over the F-1, with the first stage stretched by 20 feet.
The second and third stages, MS-II-2 and MS-IVB-2, were proposed to use new HG-3 engines in place of the J-2 engines. The HG-3 was never used, although it led to the development of the Space Shuttle Main Engine.
The V-3 booster was one of six Saturn MLV designs that never flew, but if these vehicles had been manufactured, they could possibly have been used for the Apollo Applications Program, Manned Orbiting Research Laboratory, Mars fly-by and Mars landing missions in the 1970s and 1980s.
References
Lowther, Scott, Saturn: Development, Details, Derivatives and Descendants
Saturn V Improvement Study, Final report, NASA Contract NAS8-11359.
Saturn V | Saturn V-3 | [
"Astronomy"
] | 274 | [
"Rocketry stubs",
"Astronomy stubs"
] |
11,097,780 | https://en.wikipedia.org/wiki/Saturn%20V-B | Studied in 1968 by Marshall Space Flight Center, the Saturn V-B was considered an interesting vehicle concept because it nearly represents a single-stage-to-orbit booster, but is actually a stage-and-a-half booster, just like the Atlas. The booster would achieve liftoff via five regular F-1 engines; four of the five engines on the Saturn V-B would be jettisoned and could be fully recoverable, with the sustainer stage on the rocket continuing the flight into orbit. The rocket could have had a launch capability similar to that of the Space Shuttle had it been constructed, but it never flew.
Concept
With use of the Saturn V vehicle during Apollo, NASA began considering plans for a hypothesized evolutionary Saturn V family concept spanning the Earth-orbital payload spectrum from 50,000 to over 500,000 lb. The "B" derivative of the Saturn V was a stage-and-one-half version of the then-current S-IC stage and would become the first stage in an effective and economical assembly of upper stages of the evolutionary Saturn family.
Four of the five F-1 engines would be jettisoned after liftoff and could be fully recoverable, with the sustainer stage continuing the flight into orbit. The vehicle would be capable of a LEO payload of 50,000 lb with a standard S-IC stage length of 138 ft. Increases in the length of the stage could significantly increase this capability.
See also
Saturn-Shuttle
References
Further reading
Saturn V-B refers to Boeing study
Saturn V | Saturn V-B | [
"Astronomy"
] | 324 | [
"Rocketry stubs",
"Astronomy stubs"
] |
11,099,186 | https://en.wikipedia.org/wiki/List%20of%20railway%20museums | A railway museum is a museum that explores the history of all aspects of rail-related transportation, including: locomotives (steam, diesel, and electric), railway cars, trams, and railway signalling equipment. They may also operate historic equipment on museum grounds.
Africa
Egypt
Egyptian Railway Museum (Cairo)
Kenya
Nairobi Railway Museum in Nairobi
Nigeria
NRC/Legacy Railway Museum in Lagos
Senegal
Musée Régional de Thiès in Thiès (Showroom to the Dakar–Niger Railway line)
Sierra Leone
Sierra Leone National Railway Museum in Freetown
South Africa
Outeniqua Transport Museum in George
Sudan
Sudan Railways Museum in Atbarah
Uganda
Uganda Railway Museum in Jinja
Zambia
Railway Museum in Livingstone
Zimbabwe
Bulawayo Railway Museum in Bulawayo
Asia
Armenia
Armenian Railways Museum, Yerevan
Azerbaijan
Azerbaijan Railway Museum, Baku
China
Chan T'ien-yu Memorial Hall, Badaling, Beijing
China Railway Museum, Beijing
Da'anbei Railway Station, Da'an, Jilin Province
Hong Kong Railway Museum
Qingdao-Jinan Railway Museum, Jinan, Shandong Province
Shanghai Railway Museum, Shanghai
Shenyang Railway Museum, Shenyang, Liaoning Province
Wuhan Metro Museum, Wuhan, Hubei Province
Yunnan Railway Museum, Kunming, Yunnan Province
India
Coochbehar Railway Museum. Coochbehar, West Bengal
Ghum Railway Museum, Ghum, West Bengal
Heritage Rail Museum, Bhusawal
Rewari Railway Heritage Museum, Rewari, Haryana
Joshi's Museum of Miniature Railway, Pune, Maharashtra
Kanpur Sangrahalaya, Kanpur, Uttar Pradesh
Gorakhpur Rail Museum, Gorakhpur, Uttar Pradesh,
Rail Museum, Howrah, Kolkata, West Bengal
Mysore Rail Museum, Mysore, Karnataka
Railway Museum, Hubli Junction railway station, Hubballi, Karnataka
National Rail Museum, New Delhi
Patel Chowk Metro Museum, Patel Chowk metro station, New Delhi
Smaranika Tram Museum, Esplanade, West Bengal
Railway Heritage Centre, Tiruchirappalli, Tamil Nadu
Regional Railway Museum, Chennai, Tamil Nadu
Indonesia
Indonesian Railway Museum of Ambarawa, Central Java
Bondowoso Rail and Train Museum, Bondowoso Regency, East Java
Museum of Transport of Taman Mini Indonesia Indah, Jakarta
Sawahlunto Rail and Train Museum, Sawahlunto, West Sumatra
Israel
Turkish Railway Station, Beersheba (South)
Historic Railway Station, Afula (North)
Israel Railway Museum, Haifa (National and North)
Jaffa Railway Station, Tel Aviv (Center)
Kfar Yehoshua Railway Station (North)
Japan
Katakami Heritage Railway, in Misaki, Okayama
Keio Rail-Land, in Hino, Tokyo
Kyoto Railway Museum, in Shimogyō-ku, Kyoto
Kyushu Railway History Museum, in Kitakyushu, Fukuoka
Ome Railway Park, in Ome, Tokyo
Otaru City General Museum, in Otaru, Hokkaido
Railway Museum in Saitama, Saitama
Romance Car Museum in Ebina, Kanagawa
SCMaglev and Railway Park in Nagoya
Shikoku Railway Cultural Center, in Saijō, Ehime
Subway Museum, in Edogawa, Tokyo
Tobu Museum, in Sumida, Tokyo
Tsuyama Railroad Educational Museum, in Tsuyama, Okayama
Train and Bus Museum, in Kawasaki, Kanagawa
Usui Pass Railway Heritage Park, in Annaka, Gunma
Yamanashi Prefectural Maglev Exhibition Center, Tsuru, Yamanashi
Yokohama Tram Museum, in Yokohama, Kanagawa
Closed museums
Modern Transportation Museum next to Bentencho Station, Minato-ku, Osaka (closed 2014)
Sakuma Rail Park, in Hamamatsu, Shizuoka (closed 2009)
Jordan
Hedjaz Jordan Railway Museum, Amman
Malaysia
Gemas Railway Museum, Negeri Sembilan
KTMB Museum, Johor
Kuala Lumpur Railway Station
North Borneo Railway, Sabah
Mongolia
Mongolian Railway History Museum
North Korea
Pyongyang Railway Museum
Pakistan
Pakistan Railways Heritage Museum
Saudi Arabia
Hejaz Railway Museum
Singapore
Singapore Railways Museum
Rail Corridor
Bukit Timah Railway Station
Tanjung Pagar Railway Station
South Korea
Korea Railroad Museum
Sumjingang Railroad Village in Gokseung County
Sri Lanka
National railway museum, Kadugannawa
Taiwan
Changhua Roundhouse in Changhua
Hamasen Museum of Taiwan Railway in Kaohsiung
Miaoli Railway Museum in Miaoli
National Railway Museum in Taipei (by reservation or events, major opening in 2024)
Taiwan High Speed Rail Museum in Zhongli, Taoyuan
Takao Railway Museum in Kaohsiung
Railway Department Park Branch, National Taiwan Museum in Taipei
Thailand
Railway Outdoor Museum, Bangkok
Thailand-Burma Railway Centre, Kanchanaburi
Train Cemetery, Bangkok
Turkey
Atatürk's Residence and Railway Museum, Ankara
Çamlık Railway Museum, Selçuk, İzmir Province
Istanbul Railway Museum, Istanbul
İzmir Railway Museum, İzmir
Rahmi M. Koç Museum, Istanbul
TCDD Open Air Steam Locomotive Museum, Ankara
Uzbekistan
Railway Museum
Vietnam
Heritage Railway, Da Lat Railway Station
Europe
Austria
Austrian Railway Museum at Vienna
Railway and Mining Museum Ampflwang in Upper Austria
Railway Museum Sigmundsherberg in Lower Austria
Southern Railway Museum in Mürzzuschlag
Strasshof Railway Museum in Lower Austria
Belarus
Baranovichy Railway Museum
Brest Railway Museum
Belgium
Chemin de Fer à vapeur des Trois Vallées
Dendermonde–Puurs Steam Railway
Patrimoine Ferroviaire et Tourisme
Stoomcentrum Maldegem
Train World
Bulgaria
National Transport Museum, Bulgaria
Czech Republic
České dráhy Museum in Lužná
National Technical Museum in Prague
Railway Museum KHKD in Kněževes
Railway Museum Jaroměř in Jaroměř (Železniční muzeum výtopna Jaroměř)
Croatia
Croatian Railway Museum, Zagreb, Croatia
Denmark
Bornholm Narrow Gauge Railway Museum
Danish Railway Museum
Djursland Railway Museum
Skjoldenæsholm Tram Museum
Struer Railway Museum
Estonia
Estonian Railway Museum
Finland
Finnish Railway Museum
Jokioinen Museum Railway
Savo Railway Museum
France
Cité du Train
Musée des tramways à vapeur et des chemins de fer secondaires français
Germany
Augsburg Railway Park, Augsburg, Bavaria
Bahnbetriebswerk Hermeskeil, Hermeskeil, Rhineland-Palatinate
Bavarian Localbahn Society, Bayerisch Eisenstein, Bavaria
Bavarian Railway Museum, Nördlingen, Bavaria
Bochum Dahlhausen Railway Museum, Bochum, North Rhine-Westphalia
Dampfbahn Fränkische Schweiz, Ebermannstadt, Bavaria
Darmstadt-Kranichstein Railway Museum, Darmstadt-Kranichstein, Hesse
DB Museum, Nuremberg, Bavaria
DBK Historic Railway, Crailsheim, Baden-Württemberg
Deutsches Museum, Munich, Bavaria
Dieringhausen Railway Museum, Dieringhausen, North Rhine-Westphalia
Dresden Transport Museum, Dresden, Saxony
Eisenbahnfreunde Zollernbahn, Rottweil, Baden-Württemberg
Franconian Museum Railway, Nuremberg, Bavaria
Frankfurt City Junction Line, Frankfurt, Hesse
Freilassing Locomotive World, Freilassing, Bavaria
German Museum of Technology (Berlin)
German Railway Society, Bruchhausen-Vilsen, Lower Saxony
German Steam Locomotive Museum, Neuenmarkt-Wirsberg, Bavaria
Hannoversches Straßenbahn-Museum, Hanover, Lower Saxony
Historic Railway, Frankfurt, Frankfurt, Hesse
Mellrichstadt-Fladungen railway, Fladungen, Bavaria
Munich Steam Locomotive Company, Munich, Bavaria
Neustadt/Weinstraße Railway Museum, Neustadt an der Weinstrasse, Rhineland-Palatinate
Nuremberg Transport Museum, Nuremberg, Bavaria
Railway Vehicle Preservation Society, Stuttgart, Baden-Württemberg
Rügen Railway & Technology Museum, Prora, Mecklenburg-Western Pomerania
Saxon Railway Museum, Chemnitz, Saxony
Selfkantbahn, Gangelt, North Rhine-Westphalia
Sinsheim Auto & Technik Museum, Sinsheim, Baden-Württemberg
South German Railway Museum, Heilbronn, Baden-Württemberg
Stuttgart Straßenbahn Museum Zuffenhausen, Stuttgart, Baden-Württemberg
Technik Museum Speyer, Speyer, Rhineland Palatinate
Traditionsbetriebswerk Staßfurt, Stassfurt, Saxony-Anhalt
Ulmer Eisenbahnfreunde, Ulm, Baden-Württemberg
Greece
Railway Museum of Athens
Railway Museum of Thessaloniki
Hungary
Hungarian Railway Museum
Millennium Underground Museum, Budapest
Széchenyi Railway Museum
Transport Museum of Budapest
Ireland
Castlerea Railway Museum
Cavan and Leitrim Railway
Donegal Railway Heritage Centre
Railway Preservation Society of Ireland
Italy
Costituendo Musero Ferroviario di Primolano, Primolano, Vicenza
Museo delle Industrie e del Lavoro del Saronnese, Saronno, Varese
Museo Europeo dei Trasporti Ogliari, Ranco, Varese
Museo Ferroviario di Trieste Campo Marzio, Trieste
Museo Ferroviario Piemontese, Savigliano, Turin
Museo Nazionale dei Trasporti, La Spezia
Museo Nazionale Ferroviario di Pietrarsa, Naples
Museo Nazionale Scienza e Tecnologia Leonardo da Vinci, Milan
Latvia
Latvian Railway History Museum
Lithuania
Railway Museum, Vilnius and Šiauliai
Aukštaitija Narrow Gauge Railway, Panevėžys and Anykščiai
Luxembourg
Train 1900
Netherlands
Noord-Nederlands Trein & Tram Museum
het Spoorwegmuseum
Stoom Stichting Nederland
Stoomtrein Goes - Borsele
Museumstoomtram Hoorn-Medemblik
Stoomtrein Valkenburgse Meer
Veluwsche Stoomtrein Maatschappij
Zuid-Limburgse Stoomtrein Maatschappij
Stichting Stadskanaal Rail
Museum Buurtspoorweg
Norway
Norwegian Railway Museum, Hamar
Oslo Tramway Museum, Oslo
Trondheim Tramway Museum
Poland
Narrow Gauge Railway Museum in Sochaczew
Narrow Gauge Railway Museum in Wenecja
Railway and Industry Museum in Jaworzyna Śląska (Muzeum Przemysłu i Kolejnictwa w Jaworzynie Śląskiej), Lower Silesian Voivodeship
Railway Museum in Chabówka village, near Rabka, Nowy Targ County
Railway Museum in Kościerzyna, Pomeranian Voivodeship
Roundhouse Skierniewice in Skierniewice
Warsaw Railway Museum (Muzeum Kolejnictwa w Warszawie)
Wolsztyn Steam Locomotive Roundhouse, the only depot in Europe with everyday scheduled passenger trains driven by steam locomotives
Portugal
National Railway Museum Foundation, Entroncamento
Museum Branches of FMNF
Arco de Baúlhe
Bragança
Chaves
Lousado
Macinhata do Vouga
Nine
Santarém
Valença
Disused or Closed Museums
Braga
Estremoz
Lagos
Museological Warehouses
Barreiro
Estremoz
Livração
Pampilhosa
Peso da Régua
Pocinho
Sernada do Vouga
Tua
Romania
Reșița Steam Locomotive Museum
Sibiu Steam Locomotives Museum
Russia
Museum for Railway Technology Novosibirsk at Seyatel station
Museum of the Moscow Railway (Paveletskaya station), Moscow
Rizhsky Rail Terminal, Moscow
Russian Railway Museum, Saint Petersburg
Railway museum at Nizhny Novgorod
Railway museum at Pereslavl-Zalessky (narrow gauge)
Railway museum at Rostov-on-Don
Railway museum at Yekaterinburg
Serbia
Narrow Gauge Museum in Pozega
Railway Museum in Belgrade
Slovenia
Slovenian Railway Museum
Spain
Basque Railway Museum
Gijón Railway Museum (Museo del Ferrocarill de Asturias)
Railway Centre and Museum in Móra la Nova (Centre d'Interpretació del Ferrocarril de Móra la Nova)
Railway Museum in Ponferrada (Museo del Ferrocarril)
Railway Museum in Madrid (Museo del Ferrocarril)
Railway Museum in Monforte de Lemos (Museo del Ferrocarril de Galicia)
Railway Museum in Vilanova (Museu del Ferrocarril de Vilanova)(Vilanova Railway Museum)
Sweden
Åmål-Årjäng Railway Society, Åmål
Anten-Gräfsnäs Narrow-Gauge Steam Railway, Lake Anten
Dalara and Western Farmlands Railway, Åmål
Eastern Götland Railway Museum, Linköping
Eastern Southernlands Railway, Mariefred
Gothenburg Nostalgia Tram, Gothenburg
Gotland Railway Association, Dalhem
Grängesberg Regional Railway Museum, Grängesberg
Hagfors Railway Museum, Hagfors
Järdaås Pine Forest Railway, Jädraås
Kristiantown Regional Museum, Kristianstad
Landeryd's Railway Museum, Landeryd
Nässjö Railway Museum (in Swedish), Nässjö
Nora Mining District Historical Railway, Nora
Northbottom's Railway Museum, Luleå
Nynäshamn's Railway Museum, Nynäshamn
Railway Museum of Ängelholm, Ängelholm
Skara-Lundsbrunn Railways, Skara
Stockholm Streetcar Museum, Stockholm
Swedish National Railway Museum, Gävle
Upsala-Lenna Jernväg, Uppsala
Switzerland
Blonay–Chamby Museum Railway
Ukraine
Kiev Railway Museum
Museum of History and Development of Donetsk Railway in Donetsk
Museum of History and Technology of Southern Railway in Kharkiv
Lviv Railway History Museum in Lviv
United Kingdom and Crown dependencies
North America
Canada
Guatemala
Guatemala City Railway Museum
Quetzaltenango
Honduras
El Progreso railway museum, Yoro
Mexico
Ferrocarril Interoceánico
Old Railway Station and Railway Museum
United States
Oceania
Australia
Archer Park Rail Museum
Canberra Railway Museum
Dorrigo Steam Railway & Museum
Lachlan Valley Railway
National Railway Museum, Port Adelaide, Port Adelaide
Newport Railway Museum, Champion Road, North Williamstown
NSW Rail Museum, Thirlmere
Rail Motor Society, Paterson
Railway Museum, Bassendean, Western Australia
Richmond Vale Railway Museum
Rosewood Railway Museum, Rosewood
Steamtown Heritage Rail Centre
The Workshops Rail Museum, Ipswich, Queensland
Valley Heights Locomotive Depot Heritage Museum
New Zealand
South America
Argentina
Ferroclub Argentino: Centros de Preservacion Remedios de Escalada, Lynch y Tolosa
Museo Ferroviario de la ciudad de Campana (Buenos Aires)
Chile
Museo Nacional Ferroviario Pablo Neruda, Temuco
Museo Ferroviario de Santiago, Quinta Normal
Brazil
Araraquara Railway Museum
Jaguariuna Railway Museum
Juiz de Fora Railway Museum
Museu of Train of Rio de Janeiro
Paranapiacaba Technologic Railway Museum
Rio Claro Railway Museum (future)
Peru
National Railway Museum (Peru)
See also
List of railway museums in the United Kingdom
List of New Zealand railway museums and heritage lines
References
Museum
List of railway museums | List of railway museums | [
"Engineering"
] | 3,027 | [
"Heritage railways",
"Engineering preservation societies"
] |
11,100,132 | https://en.wikipedia.org/wiki/Skin%20flora | Skin flora, also called skin microbiota, refers to microbiota (communities of microorganisms) that reside on the skin, typically human skin.
Many of them are bacteria of which there are around 1,000 species upon human skin from nineteen phyla. Most are found in the superficial layers of the epidermis and the upper parts of hair follicles.
Skin flora is usually non-pathogenic, and either commensal (not harmful to the host) or mutualistic (offering a benefit). The benefits bacteria can offer include preventing transient pathogenic organisms from colonizing the skin surface, either by competing for nutrients, secreting chemicals against them, or stimulating the skin's immune system. However, resident microbes can cause skin diseases and enter the blood system, creating life-threatening diseases, particularly in immunosuppressed people.
A major non-human skin flora is Batrachochytrium dendrobatidis, a chytrid and non-hyphal zoosporic fungus that causes chytridiomycosis, an infectious disease thought to be responsible for the decline in amphibian populations.
Species variety
Bacteria
The estimate of the number of bacterial species present on skin has been radically changed by the use of 16S ribosomal RNA to identify bacterial species present on skin samples directly from their genetic material. Previously, such identification had depended upon microbiological culture, in which many varieties of bacteria did not grow and so were hidden from science.
Staphylococcus epidermidis and Staphylococcus aureus were thought from culture-based research to be dominant. However, 16S ribosomal RNA research finds that while common, these species make up only 5% of skin bacteria. Nevertheless, skin variety provides a rich and diverse habitat for bacteria. Most come from four phyla: Actinomycetota (51.8%), Bacillota (24.4%), Pseudomonadota (16.5%), and Bacteroidota (6.3%).
There are three main ecological areas: sebaceous, moist, and dry. Propionibacteria and Staphylococci species were the main species in sebaceous areas. In moist places on the body, Corynebacteria together with Staphylococci dominate. In dry areas, there is a mixture of species, but Betaproteobacteria and Flavobacteriales are dominant. Ecologically, sebaceous areas had greater species richness than moist and dry ones. The areas with the least similarity between people in species were the spaces between fingers, the spaces between toes, axillae, and the umbilical cord stump. The most similar were beside the nostril, the nares (inside the nostril), and on the back.
{| class="wikitable sortable" border="1"
|+ Frequency of the best studied skin microbes
|-
! Organism
! Observations
! Pathogenicity
|-
| Staphylococcus epidermidis
| Common
| occasionally pathogenic
|-
| Staphylococcus aureus
| Infrequent
| usually pathogenic
|-
|Staphylococcus warneri
|Infrequent
| occasionally pathogenic
|-
|Streptococcus pyogenes
|Infrequent
| usually pathogenic
|-
|Streptococcus mitis
|Frequent
| occasionally pathogenic
|-
|Cutibacterium acnes
| Frequent
| occasionally pathogenic
|-
|Corynebacterium spp.
|Frequent
| occasionally pathogenic
|-
|Acinetobacter johnsonii
| Frequent
| occasionally pathogenic
|-
|Pseudomonas aeruginosa
|Infrequent
| occasionally pathogenic
|}
Fungal
A study of the area between toes in 100 young adults found 14 different genera of fungi. These include yeasts such as Candida albicans, Rhodotorula rubra, Torulopsis and Trichosporon cutaneum, dermatophytes (skin living fungi) such as Microsporum gypseum, and Trichophyton rubrum and nondermatophyte fungi (opportunistic fungi that can live in skin) such as Rhizopus stolonifer, Trichosporon cutaneum, Fusarium, Scopulariopsis brevicaulis, Curvularia, Alternaria alternata, Paecilomyces, Aspergillus flavus and Penicillium species.
A study by the National Human Genome Research Institute in Bethesda, Maryland, researched the DNA of human skin fungi at 14 different locations on the body. These were the ear canal, between the eyebrows, the back of the head, behind the ear, the heel, toenails, between the toes, forearm, back, groin, nostrils, chest, palm, and the crook of the elbow. The study showed a large fungal diversity across the body, the richest habitat being the heel, which hosts about 80 species of fungi. By way of contrast, there are some 60 species in toenail clippings and 40 between the toes. Other rich areas are the palm, forearm and inside the elbow, with from 18 to 32 species. The head and the trunk hosted between 2 and 10 each.
Umbilical microbiome
The umbilicus, or navel, is an area of the body that is rarely exposed to UV light, soaps, or bodily secretions (the navel does not produce any secretions or oils), and because it hosts an almost undisturbed community of bacteria, it is an excellent part of the skin microbiome to study. The navel is a moist habitat (with high humidity and temperature) that contains a large number of bacteria, especially bacteria that favor moist conditions, such as Corynebacterium and Staphylococcus.
The Belly Button Biodiversity Project began at North Carolina State University in early 2011 with two initial groups of 35 and 25 volunteers. Volunteers were given sterile cotton swabs and were asked to insert the cotton swabs into their navels, to turn the cotton swab around three times and then return the cotton swab to the researchers in a vial that contained a 0.5 ml 10% phosphate saline buffer. Researchers at North Carolina State University, led by Jiri Hulcr, then grew the samples in a culture until the bacterial colonies were large enough to be photographed and then these pictures were posted on the Belly Button Biodiversity Project's website (volunteers were given sample numbers so that they could view their own samples online). These samples then were analyzed using 16S rDNA libraries so that strains that did not grow well in cultures could be identified.
The researchers at North Carolina State University discovered that while it was difficult to predict every strain of bacteria in the microbiome of the navel that they could predict which strains would be prevalent and which strains of bacteria would be quite rare in the microbiome. It was found that the navel microbiomes only contained a few prevalent types of bacteria (Staphylococcus, Corynebacterium, Actinobacteria, Clostridiales, and Bacilli) and many different types of rare bacteria. Other types of rare organisms were discovered inside the navels of the volunteers including three types of Archaea, two of which were found in one volunteer who claimed not to have bathed or showered for many years.
Staphylococcus and Corynebacterium were among the most common types of bacteria found in the navels of this project's volunteers and these types of bacteria have been found to be the most common types of bacteria found on the human skin in larger studies of the skin microbiome (of which the Belly Button Biodiversity Project is a part). (In these larger studies it has been found that females generally have more Staphylococcus living in their skin microbiomes (usually Staphylococcus epidermidis) and that men have more Corynebacterium living in their skin microbiomes.)
According to the Belly Button Biodiversity Project at North Carolina State University, there are two types of microorganisms found in the navel and surrounding areas. Transient bacteria (bacteria that do not reproduce there) form the majority of the organisms found in the navel, and an estimated 1,400 various strains were found in 95% of participants of the study.
The Belly Button Biodiversity Project is ongoing and has now taken swabs from over 500 people. The project was designed with the aim of countering the misconception that bacteria are always harmful to humans and that humans are at war with bacteria. In actuality, most strains of bacteria are harmless, if not beneficial, for the human body. Another of the project's goals is to foster public interest in microbiology. Working in concert with the Human Microbiome Project, the Belly Button Biodiversity Project also studies the connections between human microbiomes and the factors of age, sex, ethnicity, location, and overall health.
Relationship to host
Skin microflora can be commensal, mutualistic, or pathogenic. Often they can be all three, depending upon the strength of the person's immune system. Research on the immune system in the gut and lungs has shown that microflora aid immunity development; however, such research has only recently begun to address whether this is the case for the skin. Pseudomonas aeruginosa is an example of a mutualistic bacterium that can turn into a pathogen and cause disease: if it gains entry into the circulatory system, it can result in infections in bone, joint, gastrointestinal, and respiratory systems. It can also cause dermatitis. However, P. aeruginosa produces antimicrobial substances such as pseudomonic acid (exploited commercially, for example as mupirocin). These work against staphylococcal and streptococcal infections. P. aeruginosa also produces substances that inhibit the growth of fungus species such as Candida krusei, Candida albicans, Torulopsis glabrata, Saccharomyces cerevisiae and Aspergillus fumigatus. It can also inhibit the growth of Helicobacter pylori. So important are its antimicrobial actions that it has been noted that "removing P. aeruginosa from the skin, through use of oral or topical antibiotics, may inversely allow for aberrant yeast colonization and infection."
Another aspect of bacteria is the generation of body odor. Sweat is odorless; however, several bacteria may consume it and create byproducts that humans may consider putrid (in contrast to flies, for example, which may find them attractive).
Several examples are:
Propionibacteria in adolescent and adult sebaceous glands can turn its amino acids into propionic acid.
Staphylococcus epidermidis creates body odor by breaking sweat into isovaleric acid (3-methyl butanoic acid).
Bacillus subtilis creates strong foot odor.
Skin defenses
Antimicrobial peptides
The skin creates antimicrobial peptides such as cathelicidins that control the proliferation of skin microbes. Cathelicidins not only reduce microbe numbers directly but also induce cytokine release, which promotes inflammation, angiogenesis, and reepithelialization. Conditions such as atopic dermatitis have been linked to the suppression of cathelicidin production. In rosacea, abnormal processing of cathelicidin causes inflammation. Psoriasis has been linked to self-DNA created from cathelicidin peptides that causes autoinflammation. A major factor controlling cathelicidin is vitamin D3.
Acidity
The superficial layers of the skin are naturally acidic (pH 4–4.5) due to lactic acid in sweat and produced by skin bacteria. At this pH, mutualistic flora such as Staphylococci, Micrococci, Corynebacterium and Propionibacteria grow, but transient bacteria do not, whether Gram-negative species like Escherichia and Pseudomonas or Gram-positive ones such as Staphylococcus aureus. Another factor affecting the growth of pathological bacteria is that the antimicrobial substances secreted by the skin are enhanced in acidic conditions. In alkaline conditions, bacteria cease to be attached to the skin and are more readily shed. It has been observed that the skin also swells under alkaline conditions and opens up, allowing bacterial movement to the surface.
Immune system
If activated, the immune system in the skin produces cell-mediated immunity against microbes such as dermatophytes (skin fungi). One reaction is to increase stratum corneum turnover and so shed the fungus from the skin surface. Skin fungi such as Trichophyton rubrum have evolved to create substances that limit the immune response to them. The shedding of skin is a general means to control the buildup of flora upon the skin surface.
Skin diseases
Microorganisms play a role in noninfectious skin diseases such as atopic dermatitis, rosacea, psoriasis, and acne. Damaged skin can cause nonpathogenic bacteria to become pathogenic. The diversity of species on the skin is related to the later development of dermatitis.
Acne vulgaris
Acne vulgaris is a common skin condition characterised by excessive sebum production by the pilosebaceous unit and inflammation of the skin. Affected areas are typically colonised by Cutibacterium acnes, a member of the commensal microbiota even in those without acne. High populations of C. acnes are linked to acne vulgaris, although only certain strains are strongly associated with acne while others are associated with healthy skin. The relative population of C. acnes is similar between those with acne and those without.
Current treatment includes topical and systemic antibacterial drugs which result in decreased C. acnes colonisation and/or activity. Potential probiotic treatment includes the use of Staphylococcus epidermidis to inhibit C. acnes growth. S. epidermidis produces succinic acid which has been shown to inhibit C. acnes growth. Lactobacillus plantarum has also been shown to act as an anti-inflammatory and improve antimicrobial properties of the skin when applied topically. It was also shown to be effective in reducing acne lesion size.
Atopic dermatitis
Individuals with atopic dermatitis have shown an increase in populations of Staphylococcus aureus in both lesional and nonlesional skin. Atopic dermatitis flares are associated with low bacterial diversity due to colonisation by S. aureus and following standard treatment, bacterial diversity has been seen to increase.
Current treatments include combinations of topical or systemic antibiotics, corticosteroids, and diluted bleach baths. Potential probiotic treatments include using the commensal skin bacterium S. epidermidis to inhibit S. aureus growth. During atopic dermatitis flares, population levels of S. epidermidis have been shown to increase in an apparent attempt to control S. aureus populations.
Low gut microbial diversity in babies has been associated with an increased risk of atopic dermatitis. Infants with atopic eczema have low levels of Bacteroides and high levels of Bacillota. Bacteroides have anti-inflammatory properties which are essential against dermatitis. (See gut microbiota)
Psoriasis vulgaris
Psoriasis vulgaris typically affects drier skin sites such as the elbows and knees. Dry areas of the skin tend to have high microbial diversity and smaller populations than sebaceous sites. A study using swab sampling techniques showed that areas rich in Bacillota (mainly Streptococcus and Staphylococcus) and Actinomycetota (mainly Corynebacterium and Propionibacterium) are associated with psoriasis, while another study using biopsies associated increased levels of Bacillota and Actinomycetota with healthy skin. However, most studies show that individuals affected by psoriasis have lower microbial diversity in the affected areas.
Treatments for psoriasis include topical agents, phototherapy, and systemic agents. Current research on the skin microbiota's role in psoriasis is inconsistent therefore there are no potential probiotic treatments.
Rosacea
Rosacea is typically connected to sebaceous sites of the skin. The skin mite Demodex folliculorum produce lipases that allow them to use sebum as a source of food therefore they have a high affinity for sebaceous skin sites. Although it is a part of the commensal skin microbiota, patients affected with rosacea show an increase in D. folliculorum compared to healthy individuals, suggesting pathogenicity.
Bacillus oleronius, a Demodex-associated microbe, is not typically found in the commensal skin microbiota but initiates inflammatory pathways similar to those seen in rosacea patients. Populations of S. epidermidis have also been isolated from pustules of rosacea patients. However, it is possible that they were moved by Demodex to areas that favour growth, as Demodex has been shown to transport bacteria around the face.
Current treatments include topical and oral antibiotics and laser therapy. As current research has yet to show a clear mechanism for Demodex influence in rosacea, there are no potential probiotic treatments.
Clinical
Infected devices
Skin microbes are a potential source of infected medical devices such as catheters.
Hygiene
The human skin is host to numerous bacterial and fungal species, some of which are known to be harmful, some known to be beneficial and the vast majority unresearched. The use of bactericidal and fungicidal soaps will inevitably lead to bacterial and fungal populations which are resistant to the chemicals employed (see drug resistance).
Contagion
Skin flora do not readily pass between people: 30 seconds of moderate friction and dry hand contact results in a transfer of only 0.07% of natural hand flora from naked hands, with a greater percentage transferred from gloves.
Removal
The most effective (60–80% reduction) antimicrobial washing is with ethanol, isopropanol, and n-propanol. Viruses are most affected by high (95%) concentrations of ethanol, while bacteria are more affected by n-propanol.
Unmedicated soaps are not very effective, as illustrated by the following data. Health care workers washed their hands once in nonmedicated liquid soap for 30 seconds; the students/technicians washed theirs 20 times.
{| class="wikitable" border="1"
|+ Skin flora upon two hospital groups in colony-forming units per ml.
|-
! group and hand skin condition
! unwashed
! washed
|-
| Health care workers healthy
|3.47
|3.15
|-
| Health care workers damaged
|3.33
|3.29
|-
|Students/technicians healthy
|4.39
|3.54
|-
| Students/technicians damaged
|4.58
|4.43
|}
An important use of hand washing is to prevent the transmission of antibiotic-resistant skin flora that cause hospital-acquired infections such as methicillin-resistant Staphylococcus aureus. While such flora have become antibiotic resistant due to antibiotics, there is no evidence that recommended antiseptics or disinfectants select for antibiotic-resistant organisms when used in hand washing. However, many strains of organisms are resistant to some of the substances used in antibacterial soaps, such as triclosan.
One study of bar soaps in dentist clinics found that they all had their own flora, with on average two to five different genera of microorganisms; those used most often were more likely to carry a greater variety of species. Another study of bar soaps in public toilets found even more flora. A further study found that very dry soaps are not colonized, while all those that rest in pools of water are. However, one experiment using soaps inoculated with Pseudomonas aeruginosa and Escherichia coli found that washing with inoculated bar soap did not transmit these bacteria to participants' hands.
Damaged skin
Washing skin repeatedly can damage the protective external layer and cause transepidermal loss of water. This can be seen in roughness characterized by scaling and dryness, itchiness, dermatitis provoked by microorganisms and allergens penetrating the corneal layer and redness. Wearing gloves can cause further problems since it produces a humid environment favoring the growth of microbes and also contains irritants such as latex and talcum powder.
Hand washing can damage skin because the stratum corneum, the top layer of the skin, consists of 15 to 20 layers of keratin disks (corneocytes), each of which is surrounded by a thin film of skin lipids that can be removed by alcohols and detergents.
Damaged skin, defined by extensive cracking of the skin surface, widespread reddening, or occasional bleeding, has also been found to be more frequently colonized by Staphylococcus hominis, and these strains were more likely to be methicillin resistant. Though not related to greater antibiotic resistance, damaged skin was also more likely to be colonized by Staphylococcus aureus, gram-negative bacteria, Enterococci and Candida.
Comparison with other flora
The skin flora is different from that of the gut, which is predominantly Bacillota and Bacteroidota. There is also a low level of variation between people that is not found in gut studies. Both gut and skin flora, however, lack the diversity found in soil flora.
See also
Bacterial disease
Body odor
Gut flora
Human flora
Human microbiome project
Medical microbiology
Microbial ecology
Microflora
Oral microbiology
Skin
Vaginal flora
Zeaspora
References
External links
Cellulitis Skin Infection
Human microbiome project
Todar's Online Textbook of Bacteriology
Hygiene of the Skin: When Is Clean Too Clean?
Bacteriology
Environmental microbiology
Microbiology
Microbiology terms
flora
Microbiomes | Skin flora | [
"Chemistry",
"Biology",
"Environmental_science"
] | 4,615 | [
"Microbiology",
"Microbiology terms",
"Microscopy",
"Microbiomes",
"Environmental microbiology"
] |
11,100,681 | https://en.wikipedia.org/wiki/Space%20Liability%20Convention | The Convention on International Liability for Damage Caused by Space Objects, also known as the Space Liability Convention, is a treaty from 1972 that expands on the liability rules created in the Outer Space Treaty of 1967. In 1978, the crash of the nuclear-powered Soviet satellite Kosmos 954 in Canadian territory led to the only claim filed under the convention.
Status
The Liability Convention was concluded and opened for signature on 29 March 1972. It entered into force on 1 September 1972. As of 1 January 2021, 98 States have ratified the Liability Convention, 19 have signed but not ratified and four international intergovernmental organizations (the European Space Agency, the European Organisation for the Exploitation of Meteorological Satellites, the Intersputnik International Organization of Space Communications, and the European Telecommunications Satellite Organization) have declared their acceptance of the rights and obligations provided for in the Agreement.
Key provisions
States (countries) bear international responsibility for all space objects that are launched within their territory. This means that regardless of who launches the space object, if it was launched from State A's territory, or from State A's facility, or if State A caused the launch to happen, then State A is fully liable for damages that result from that space object.
Joint launches
If two states work together to launch a space object, then both of those states are jointly and severally liable for the damage that object causes. This means that the injured party can sue either of the two states for the full amount of damage.
Claims between states only
Claims under the Liability Convention must be brought by the state against a state. The convention was created to supplement existing and future national laws providing compensation to parties injured by space activities. Whereas under most national legal systems an individual or a corporation may bring a lawsuit against another individual or another corporation, under the Liability Convention claims must be brought on the state level only. This means that if an individual is injured by a space object and wishes to seek compensation under the Liability Convention, the individual must arrange for his or her country to make a claim against the country that launched the space object that caused the damage.
See also
USA-193
Kosmos 954
Skylab
Space law
Outer Space Treaty
Moon Treaty
Rescue Agreement
Space debris
References
External links
United Nations Office for Outer Space Affairs: Convention on International Liability for Damage Caused by Space Objects
Liability treaties
Space treaties
Space law
Treaties concluded in 1972
Treaties entered into force in 1972
Treaties of Algeria
Treaties of Antigua and Barbuda
Treaties of Argentina
Treaties of Australia
Treaties of Austria
Treaties of the Byelorussian Soviet Socialist Republic
Treaties of Belgium
Treaties of Bosnia and Herzegovina
Treaties of Botswana
Treaties of the military dictatorship in Brazil
Treaties of the People's Republic of Bulgaria
Treaties of Canada
Treaties of Chile
Treaties of the People's Republic of China
Treaties of Cuba
Treaties of Cyprus
Treaties of the Czech Republic
Treaties of Czechoslovakia
Treaties of Denmark
Treaties of Ecuador
Treaties of Fiji
Treaties of Finland
Treaties of France
Treaties of Gabon
Treaties of West Germany
Treaties of East Germany
Treaties of the Kingdom of Greece
Treaties of the Hungarian People's Republic
Treaties of India
Treaties of Indonesia
Treaties of Pahlavi Iran
Treaties of Ba'athist Iraq
Treaties of Ireland
Treaties of Israel
Treaties of Italy
Treaties of Japan
Treaties of Kazakhstan
Treaties of Kuwait
Treaties of the Kingdom of Laos
Treaties of Lebanon
Treaties of the Libyan Arab Jamahiriya
Treaties of Mexico
Treaties of the Mongolian People's Republic
Treaties of Montenegro
Treaties of Morocco
Treaties of the Netherlands
Treaties of New Zealand
Treaties of Niger
Treaties of Nigeria
Treaties of Norway
Treaties of Pakistan
Treaties of Papua New Guinea
Treaties of Peru
Treaties of the Polish People's Republic
Treaties of Qatar
Treaties of South Korea
Treaties of the Socialist Republic of Romania
Treaties of the Soviet Union
Treaties of Saint Vincent and the Grenadines
Treaties of Serbia and Montenegro
Treaties of Seychelles
Treaties of Singapore
Treaties of Slovakia
Treaties of Slovenia
Treaties of South Africa
Treaties of Francoist Spain
Treaties of Sri Lanka
Treaties of Sweden
Treaties of Switzerland
Treaties of Syria
Treaties of Tunisia
Treaties of Turkey
Treaties of the Ukrainian Soviet Socialist Republic
Treaties of the United Kingdom
Treaties of the United States
Treaties of Uruguay
Treaties of Yugoslavia
Treaties of Zambia
Treaties of Benin
Treaties of the Dominican Republic
Treaties of Liechtenstein
Treaties of Luxembourg
Treaties of Mali
Treaties of Malta
Treaties of Panama
Treaties of Saudi Arabia
Treaties of Senegal
Treaties of Togo
Treaties of Trinidad and Tobago
Treaties of the United Arab Emirates
Treaties of Venezuela
Treaties of Kenya
Treaties extended to Greenland
Treaties extended to the Faroe Islands
Treaties extended to Aruba
Treaties extended to the Netherlands Antilles
Treaties extended to Saint Christopher-Nevis-Anguilla
Treaties extended to British Antigua and Barbuda
Treaties extended to Bermuda
Treaties extended to the British Virgin Islands
Treaties extended to Brunei (protectorate)
Treaties extended to the Cayman Islands
Treaties extended to British Dominica
Treaties extended to the Falkland Islands
Treaties extended to Gibraltar
Treaties extended to British Grenada
Treaties extended to British Hong Kong
Treaties extended to Montserrat
Treaties extended to the Pitcairn Islands
Treaties extended to Saint Helena, Ascension and Tristan da Cunha
Treaties extended to British Saint Lucia
Treaties extended to British Saint Vincent and the Grenadines
Treaties extended to the British Solomon Islands
Treaties extended to South Georgia and the South Sandwich Islands
Treaties extended to the Turks and Caicos Islands | Space Liability Convention | [
"Astronomy"
] | 1,032 | [
"Space law",
"Outer space"
] |
11,100,726 | https://en.wikipedia.org/wiki/Protocol%20converter | A protocol converter is a device used to convert the standard or proprietary protocol of one device into a protocol suitable for another device or tool, to achieve the desired interoperability. Protocol converters are commonly implemented as software installed on routers or gateways, which convert the data formats, data rates and protocols of one network into those of the network into which the data is being passed. There are varieties of protocols used in different fields like power generation, transmission and distribution, oil and gas, automation, utilities, and remote monitoring applications. The major protocol translation messages involve conversion of data messages, events, commands, and time synchronization.
General architecture
The general architecture of a protocol converter includes an internal master protocol communicating with the external slave devices; the data collected is used to update the internal database of the converter. When the external master requests data, the internal slave collects the data from the database and sends it to the external master. Different schemes exist for handling the spontaneous reporting of events and commands. The physical medium for communication on protocol X and protocol Y can also differ, and may include RS-232, RS-485, Ethernet, etc.
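As a rough illustration of this architecture, the following is a minimal sketch in Python; the class name, the read_all() method on the slave devices, and the point-name scheme are all hypothetical and not part of any real converter product. An internal master loop polls the external slaves and refreshes a shared point database, while an internal slave answers requests from the external master out of that database:

import threading
import time

class ProtocolConverter:
    # Minimal protocol-X to protocol-Y converter skeleton (illustrative only).
    def __init__(self, slave_devices):
        self.slave_devices = slave_devices   # external devices spoken to over protocol X
        self.database = {}                   # internal point database
        self.lock = threading.Lock()

    def poll_loop(self, period_s=1.0):
        # Internal master: cyclically read every slave and refresh the database.
        while True:
            for device in self.slave_devices:
                values = device.read_all()   # hypothetical protocol-X read returning {point: value}
                with self.lock:
                    self.database.update(values)
            time.sleep(period_s)

    def handle_request(self, point_names):
        # Internal slave: serve the external protocol-Y master from the database.
        with self.lock:
            return {name: self.database.get(name) for name in point_names}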
Applications of protocol converters
Protocol Converter applications vary from industry to industry. The protocol converter can be a software converter, hardware converter, or an integrated converter depending on the protocols.
Some of the key applications are:
Substation automation
Building automation
Process automation
The major protocols used in each area of application are listed under List of automation protocols.
Latency and engineering issues in using protocol converters
Protocol Converters are generally used for transforming data and commands from one device or application to another. This necessarily involves transformation of data, commands, their representation, encoding and framing to achieve the conversion.
There are simple and complex types of conversions depending on the application and domain in which this is being used. The simplest and most commonly used conversion is protocol conversion between Modbus RTU and Modbus TCP. In this conversion, there is no change in the overall framing. Hence it is easy to take the Serial Modbus RTU frame and encapsulate it in a TCP/UDP socket and send it over Ethernet. Since both the protocol framings are the same, except for the actual physical layer transmission, both the application layers will interpret data similarly as long as the communication interfaces are made transparent.
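A minimal sketch of that RTU-to-TCP re-encapsulation in Python (illustrative only, with no CRC checking or error handling): a Modbus RTU frame is unit address + PDU + CRC-16, and a Modbus TCP frame is a 7-byte MBAP header followed by the same PDU, so the conversion amounts to dropping the CRC and prepending the header.

import struct

def rtu_to_tcp(rtu_frame: bytes, transaction_id: int = 1) -> bytes:
    # RTU layout: [unit id (1 byte)] [PDU (n bytes)] [CRC-16 (2 bytes)]
    # TCP layout: [MBAP header (7 bytes)] [PDU (n bytes)]
    unit_id = rtu_frame[0]
    pdu = rtu_frame[1:-2]                    # drop the trailing CRC (assumed already verified)
    length = len(pdu) + 1                    # MBAP length field counts unit id + PDU
    mbap = struct.pack(">HHHB", transaction_id, 0, length, unit_id)
    return mbap + pdu

# Classic "read holding registers" request: unit 0x11, start 0x006B, count 3
rtu = bytes([0x11, 0x03, 0x00, 0x6B, 0x00, 0x03, 0x76, 0x87])
print(rtu_to_tcp(rtu).hex())                 # 0001000000061103006b0003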
However, there do exist very complex conversions, for example where the data formatting, the data types supported, the object models, etc., are so different that the conversion engine needs to make modifications not only in framing, but in mapping information for each type of data and command, and in some cases in the object models. User configuration may also be required to define the mapping of supported and unsupported data types.
These transformations, however, bring with their conversion advantages a communication delay and processing latency, giving an overall end-to-end processing time which is finite and needs to be considered in all solution designs.
The latency of end-to-end communication depends on the processing delay of the hardware and/or software being used, the protocol & conversion complexity, and the solution architecture. These latencies can vary for typical industrial and energy automation applications from 10 to 20 milliseconds to as high as 1 second. Solution architectures using protocol converters need to consider this latency and how it will impact the project for which converters are being considered.
Also, the majority of such architectures would involve configuration and mapping which both require considerable engineering effort and engineering time. These need to be considered while defining project schedules.
See also
List of automation protocols
Vehicle bus
List of network protocols
Universal gateway
Cloud storage gateway
External links
Protocol Converter for Real Time management system
White-Paper: Impact of using protocol converters for substation modernization projects using iec 61850
Interoperability
International Electrotechnical Commission | Protocol converter | [
"Engineering"
] | 780 | [
"Electrical engineering organizations",
"Telecommunications engineering",
"International Electrotechnical Commission",
"Interoperability"
] |
11,100,985 | https://en.wikipedia.org/wiki/Equipossibility | Equipossibility is a philosophical concept in possibility theory that is a precursor to the notion of equiprobability in probability theory. It is used to distinguish what can occur in a probability experiment. For example, it is the difference between viewing the possible results of rolling a six-sided die as {1,2,3,4,5,6} rather than {6, not 6}. The former (equipossible) set contains equally possible alternatives, while the latter does not, because there are five times as many alternatives inherent in 'not 6' as in 6. This is true even if the die is biased so that 6 and 'not 6' are equally likely to occur (equiprobability).
The Principle of Indifference of Laplace states that equipossible alternatives may be accorded equal probabilities if nothing more is known about the underlying probability distribution. However, it is a matter of contention whether the concept of equipossibility, also called equispecificity (from equispecific), can truly be distinguished from the concept of equiprobability.
In Bayesian inference, one definition of equipossibility is "a transformation group which leaves invariant one's state of knowledge". Equiprobability is then defined by normalizing the Haar measure of this symmetry group. This is known as the principle of transformation groups.
References
External links
Book Chapter by Henry E. Kyburg Jr. on equipossibility, with the 6/not-6 example above
Quotes on equipossibility in classical probability
Probability interpretations
Possibility | Equipossibility | [
"Mathematics"
] | 337 | [
"Probability interpretations"
] |
11,101,049 | https://en.wikipedia.org/wiki/Obstetrical%20forceps | Obstetrical forceps are a medical instrument used in childbirth. Their use can serve as an alternative to the ventouse (vacuum extraction) method.
Medical uses
Forceps births, like all assisted births, should only be undertaken to help promote the health of the mother or baby. In general, a forceps birth is likely to be safer for both the mother and baby than the alternatives – either a ventouse birth or a caesarean section – although caveats such as operator skill apply.
Advantages of forceps use include avoidance of caesarean section (and the short and long-term complications that accompany this), reduction of delivery time, and general applicability with cephalic presentation (head presentation). Common complications include the possibility of bruising the baby and causing more severe vaginal tears (perineal laceration) than would otherwise be the case. Severe and rare complications (occurring less frequently than 1 in 200) include nerve damage, Descemet's membrane rupture, skull fractures, and cervical cord injury.
Maternal factors for use of forceps:
Maternal exhaustion.
Prolonged second stage of labour.
Maternal illness such as heart disease, hypertension, glaucoma, aneurysm, or other conditions that make pushing difficult or dangerous.
Hemorrhaging.
Analgesic drug-related inhibition of maternal effort (especially with epidural/spinal anaesthesia).
Fetal factors for use of forceps:
Non-reassuring fetal heart tracing.
Fetal distress.
After-coming head in breech delivery.
Complications
Baby
Cuts and bruises.
Increased risk of facial nerve injury (usually temporary).
Increased risk of clavicle fracture (rare).
Increased risk of intracranial hemorrhage - sometimes leading to death: 4/10,000.
Increased risk of damage to cranial nerve VI, resulting in strabismus.
Mother
Increased risk of perineal lacerations, pelvic organ prolapse, and incontinence.
Increased risk of injury to vagina and cervix.
Increased postnatal recovery time and pain.
Increased difficulty evacuating during recovery time.
Structure
Obstetric forceps consist of two branches (blades) that are positioned around the head of the fetus. These branches are defined as left and right depending on which side of the mother's pelvis they will be applied. The branches usually, but not always, cross at a midpoint, which is called the articulation. Most forceps have a locking mechanism at the articulation, but a few have a sliding mechanism instead that allows the two branches to slide along each other. Forceps with a fixed lock mechanism are used for deliveries where little or no rotation is required, as when the fetal head is in line with the mother's pelvis. Forceps with a sliding lock mechanism are used for deliveries requiring more rotation.
The blade of each forceps branch is the curved portion that is used to grasp the fetal head. The forceps should surround the fetal head firmly, but not tightly. The blade characteristically has two curves, the cephalic and the pelvic curves. The cephalic curve is shaped to conform to the fetal head. The cephalic curve can be rounded or rather elongated depending on the shape of the fetal head. The pelvic curve is shaped to conform to the birth canal and helps direct the force of the traction under the pubic bone. Forceps used for rotation of the fetal head should have almost no pelvic curve.
The handles are connected to the blades by shanks of variable lengths. Forceps with longer shanks are used if rotation is being considered.
Anglo-American types
All American forceps are derived from French forceps (long forceps) or English forceps (short forceps). Short forceps are applied on the fetal head already descended significantly in the maternal pelvis (i.e., proximal to the vagina). Long forceps are able to reach a fetal head still in the middle or even in the upper part of the maternal pelvis. At present practice, it is uncommon to use forceps to access a fetal head in the upper pelvis. So, short forceps are preferred in the UK and USA. Long forceps are still in use elsewhere.
Simpson forceps (1848) are the most commonly used among the types of forceps and has an elongated cephalic curve. These are used when there is substantial molding, that is, temporary elongation of the fetal head as it moves through the birth canal.
Elliot forceps (1860) are similar to Simpson forceps but with an adjustable pin in the end of the handles which can be drawn out as a means of regulating the lateral pressure on the handles when the instrument is positioned for use. They are used most often with women who have had at least one previous vaginal delivery because the muscles and ligaments of the birth canal provide less resistance during second and subsequent deliveries. In these cases, the fetal head may remain rounder.
Kielland forceps (1915, Norwegian) are distinguished by having no angle between the shanks and the blades and a sliding lock. The pelvic curve of the blades is identical to all other forceps. The common misperception that there is no pelvic curve has become so entrenched in the obstetric literature that it may never be able to be overcome, but it can be proved by holding a blade of Kielland's against any other forceps of one's choice. Kielland forceps are probably the most common forceps used for rotation. The sliding mechanism at the articulation can be helpful in asynclitic births (when the fetal head is tilted to the side) since it is no longer in line with the birth canal. Because the handles, shanks, and blades are all in the same plane the forceps can be applied in any position to affect rotation. Because the shanks and handles are not angled, the forceps cannot be applied to a high station as readily as those with the angle since the shanks impinge on the perineum.
Wrigley's forceps, named after Arthur Joseph Wrigley, are used in low or outlet deliveries (see explanations below), when the maximum diameter is about above the vulva. Wrigley's forceps were designed for use by general practitioner obstetricians, having the safety feature of an inability to reach high into the pelvis. Obstetricians now use these forceps most commonly in cesarean section delivery where manual traction is proving difficult. The short length results in a lower chance of uterine rupture.
Piper's forceps has a perineal curve to allow application to the after-coming head in breech delivery.
Technique
The cervix must be fully dilated and retracted and the membranes ruptured. The urinary bladder should be empty, perhaps with the use of a catheter. High forceps are never indicated in the modern era. Mid forceps can occasionally be indicated but require operator skill and caution. The station of the head must be at the level of the ischial spines. The woman is placed on her back, usually with the aid of stirrups or assistants to support her legs. A regional anaesthetic (usually either a spinal, epidural or pudendal block) is used to help the mother remain comfortable during the birth. Ascertaining the precise position of the fetal head is paramount, and though historically was accomplished by feeling the fetal skull suture lines and fontanelles, in the modern era, confirmation with ultrasound is essentially mandatory. At this point, the two blades of the forceps are individually inserted, the left blade first for the commonest occipito-anterior position; posterior blade first if a transverse position, then locked. The position on the baby's head is checked. The fetal head is then rotated to the occiput anterior position if it is not already in that position. An episiotomy may be performed if necessary. The baby is then delivered with gentle (maximum 30 lbf or 130 Newton) traction in the axis of the pelvis.
Outlet, low, mid or high
The accepted clinical standard classification system for forceps deliveries according to station and rotation was developed by the American College of Obstetricians and Gynecologists and consists of:
Outlet forceps delivery, where the forceps are applied when the fetal head has reached the perineal floor and its scalp is visible between contractions. This type of assisted delivery is performed only when the fetal head is in a straight forward or backward vertex position or in slight rotation (less than 45 degrees to the right or left) from one of these positions.
Low forceps delivery, when the baby's head is at +2 station or lower. There is no restriction on rotation for this type of delivery.
Midforceps delivery, when the baby's head is above +2 station. There must be head engagement before it can be carried out.
High forceps delivery is not performed in modern obstetrics practice. It would be a forceps-assisted vaginal delivery performed when the baby's head is not yet engaged.
History
The obstetric forceps were invented by the eldest son of the Chamberlen family of surgeons. The Chamberlens were French Huguenots from Normandy who worked in Paris before they migrated to England in 1569 to escape the religious violence in France. William Chamberlen, the patriarch of the family, was most likely a surgeon; he had two sons, both named Pierre, who became maverick surgeons and specialists in midwifery. William and the eldest son practiced in Southampton and then settled in London. The inventor was probably the eldest son, Peter Chamberlen the elder, who became obstetrician-surgeon to Queen Henrietta Maria, wife of King Charles I of England and daughter of Henry IV, King of France. He was succeeded by his nephew, Dr. Peter Chamberlen (barber-surgeons were not doctors in the sense of physician), as royal obstetrician. The success of this dynasty of obstetricians with the royal family and high nobles was related in part to the use of this "secret" instrument, which allowed delivery of a live child in difficult cases.
In fact, the instrument was kept secret for 150 years by the Chamberlen family, although there is evidence for its presence as far back as 1634. Hugh Chamberlen the elder, grandnephew of Peter the eldest, tried to sell the instrument in Paris in 1670, but the demonstration he performed in front of François Mauriceau, responsible for Paris Hotel-Dieu maternity, was a failure which resulted in the death of mother and child. The secret may have been sold by Hugh Chamberlen to Dutch obstetricians at the start of the 18th century in Amsterdam, but there are doubts about the authenticity of what was actually provided to buyers.
The forceps were used most notably in difficult childbirths. The forceps could avoid some infant deaths when previous approaches (involving hooks and other instruments) extracted them in parts. In the interest of secrecy, the forceps were carried into the birthing room in a lined box and would only be used once everyone was out of the room and the mother blindfolded.
Models derived from the Chamberlen instrument finally appeared gradually in England and Scotland in 1735. About 100 years after the invention of the forceps by Peter Chamberlen Sr. a surgeon by the name of Jan Palfijn presented his obstetric forceps to the Paris Academy of Sciences in 1723. They contained parallel blades and were called the Hands of Palfijn.
These "hands" were possibly the instruments described and used in Paris by Gregoire father and son, Dussée, and Jacques Mesnard.
In 1813, Peter Chamberlen's midwifery tools were discovered at Woodham Mortimer Hall near Maldon (UK) in the attic of the house. The instruments were found along with gloves, old coins and trinkets. The tools discovered also contained a pair of forceps that were assumed to have been invented by the father of Peter Chamberlen because of the nature of the design.
The Chamberlen family's forceps were based on the idea of separating the two branches of "sugar clamp" (as those used to remove "stones" from bladder), which were put in place one after another in the birth canal. This was not possible with conventional tweezers previously tested. However, they could only succeed in a maternal pelvis of normal dimensions and on fetal heads already well engaged (i.e. well lowered into maternal pelvis). Abnormalities of pelvis were much more common in the past than today, which complicated the use of Chamberlen forceps. The absence of pelvic curvature of the branches (vertical curvature to accommodate the anatomical curvature of maternal sacrum) prohibited blades from reaching the upper-part of the pelvis and exercising traction in the natural axis of pelvic excavation.
In 1747, the French obstetrician André Levret published Observations on the Causes and Accidents of Several Difficult Deliveries, in which he described his modification of the instrument to follow the curvature of the maternal pelvis; this "pelvic curve" allowed a grip on a fetal head still high in the pelvic excavation, which could assist in more difficult cases.
This improvement was published in 1751 in England by William Smellie in the book A Treatise on the theory and practice of midwifery. After this fundamental improvement, the forceps would become a common obstetrical instrument for more than two centuries.
The last improvement of the instrument was added in 1877 by a French obstetrician, Stéphane Tarnier, in his "description of two new forceps." This instrument featured a traction system misaligned with the instrument itself, sometimes called the "third curvature of the forceps". This particularly ingenious traction system allowed the forceps to exercise traction on the head of the child following the axis of the maternal pelvic excavation, which had never been possible before.
Tarnier's idea was to "split" mechanically the grabbing of the fetal head (between the forceps blades) on which the operator does not intervene after their correct positioning, from a mechanical accessory set on the forceps itself, the "tractor" on which the operator exercises traction needed to pull down the fetal head in the correct axis of the pelvic excavation. Tarnier forceps (and its multiple derivatives under other names) remained the most widely used system in the world until the development of the cesarean section.
Forceps had a profound influence on obstetrics as it allowed for the speedy delivery of the baby in cases of difficult or obstructed labour. Over the course of the 19th century, many practitioners attempted to redesign the forceps, so much so that the Royal College of Obstetrics and Gynecologists' collection has several hundred examples. In the last decades, however, with the ability to perform a cesarean section relatively safely, and the introduction of the ventouse or vacuum extractor, the use of forceps and training in the technique of its use has sharply declined.
Historical role in the medicalisation of childbirth
The introduction of the obstetrical forceps provided huge advances in the medicalisation of childbirth. Before the 18th century, childbirth was thought of as a domestic matter that could be overseen by a female relative. Usually, if a doctor had to get involved, that meant something had gone wrong. Around this era in the 18th century, there were no female doctors. Since males were exclusively called in under extreme circumstances, the act of childbirth was thought to be better known to a midwife or female relative than to a male doctor. Usually the male doctor's job was to save the mother's life if, for example, the baby had become stuck on his or her way exiting the mother.
Before the obstetrical forceps, this had to be done by cutting the baby out piece by piece. In other cases, if the baby was deemed undeliverable, then the doctor would use a tool called a crochet. This was used to crush the baby's skull, allowing the baby to be pulled out of the mother's womb. Still in other cases, a caesarean section (c section) could be performed, but this would almost always result in the mother's death. "In addition, women who had forceps deliveries had shorter after-childbirth complications than those who had caesarean sections performed." These procedures came with various risks to the mother's health, along with the death of the baby.
However, with the introduction of the obstetrical forceps, the male doctor had a more important role. In many cases, they could actually save the baby's life if called early enough. Although the use of the forceps in childbirth came with its own set of risks, the positives included a significant decrease in risk to the mother, a decrease in child morbidity, and a decreased risk to the baby. Since the forceps in childbirth were made public around 1720, they gave male doctors a way to assist and even oversee childbirths.
Around this time, in large cities such as London and Paris, some men would become devoted to obstetrical practices. It became stylish among wealthy women of the era to have their childbirth overseen by male midwives. A notable male midwife was William Hunter. He popularised obstetrics. "In 1762, he was appointed as obstetrician to Queen Charlotte." In addition, with the use of forceps, male doctors established lying-in hospitals to provide safe, somewhat advanced obstetrical care built around the use of the obstetrical forceps.
Historical complications
Childbirth was not considered a medical practice before the 18th century. It was mostly overseen by a midwife, mother, stepmother, neighbor, or other female relative. Around the 19th and 20th centuries, childbirth was considered dangerous for women. The introduction of obstetrical forceps allowed non-medical professionals, such as the aforementioned individuals, to continue to oversee childbirths, and it gave some of the public more comfort in trusting childbirth oversight to common people. However, the introduction of obstetrical forceps also had a negative effect: because there was no oversight of childbirth by any kind of medical professional, the practice was exposed to unnecessary risks and complications for the fetus and mother. These risks could range from minimal effects to lifetime consequences for both individuals. The baby could develop cuts and bruises in various body parts due to the forcible squeezing of his or her body through the mother's vagina. In addition, there could be bruising on the baby's face if the forceps' handler were to squeeze too tightly. In some extreme cases, this could cause temporary or permanent facial nerve injury. Furthermore, if the forceps' handler were to twist his or her wrist while the grip was on the baby's head, this would twist the baby's neck and cause damage to a cranial nerve, resulting in strabismus. In rare cases, a clavicle fracture to the baby could occur. The addition of obstetrical forceps also came with complications for the mother during and after childbirth. The use of the forceps gave rise to an increased risk of cuts and lacerations along the vaginal wall. This, in turn, would increase post-operative recovery time and the pain experienced by the mother. In addition, the use of forceps could cause more difficulty evacuating during the recovery time compared to a mother who did not have a forceps delivery. While some of these risks and complications were very common, in general, many people overlooked them and continued to use the forceps.
See also
Instruments used in general surgery
References
External links
GLOWM video demonstrating forceps delivery technique
Equipment used in childbirth
Obstetrical procedures
Medical equipment
Surgical instruments | Obstetrical forceps | [
"Biology"
] | 4,096 | [
"Medical equipment",
"Medical technology"
] |
11,101,076 | https://en.wikipedia.org/wiki/Inch%20of%20water | Inches of water is a non-SI unit for pressure. It is also given as inches of water gauge (iwg or in.w.g.), inches water column (inch wc, in. WC, " wc, etc. or just wc or WC), inAq, Aq, or inH2O. The units are conventionally used for measurement of certain pressure differentials such as small pressure differences across an orifice, or in a pipeline or shaft, or before and after a coil in an HVAC unit.
It is defined as the pressure exerted by a column of water of 1 inch in height at defined conditions. At a temperature of 4 °C (39.2 °F) pure water has its highest density (1000 kg/m3). At that temperature and assuming the standard acceleration of gravity, 1 inAq is approximately 249.089 pascals.
Alternative standards in less common usage are 60 °F (15.6 °C) or 68 °F (20 °C); the choice depends on industry practice rather than on international standards.
Feet of water is an alternative way to specify pressure as the height of a water column; one foot of water is conventionally equated to about 2.989 kilopascals (0.433 lbf/in2).
In North America, air and other industrial gases are often measured in inches of water when at low pressure. This is in contrast to inches of mercury or pounds per square inch (psi, lbf/in2) for larger pressures. One usage is in the measurement of the air ("wind") that supplies a pipe organ, which is referred to simply as inches. It is also used in natural gas distribution for measuring utilization pressure (U.P., i.e. the residential point of use), which is typically between 6 and 7 inches WC or about 0.25 lbf/in2.
1 inAq ≈ 0.036 lbf/in2, or 27.7 inAq ≈ 1 lbf/in2.
{|
|-
|1 inH2O ||= 249.0889 pascals
|-
|rowspan=7|
|= 2.490889 mbar or hectopascals
|-
|= 2.54 cmH2O
|-
|≈
|-
|≈ 1.868 torr or mmHg
|-
|≈
|-
|≈
|}
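The conversions above follow from the hydrostatic relation p = ρgh. A small Python sketch (assuming ρ = 1000 kg/m3 for water at 4 °C and standard gravity) reproduces the conventional values:

RHO_WATER = 1000.0     # kg/m^3, pure water at 4 degrees C
G_STANDARD = 9.80665   # m/s^2, standard acceleration of gravity
INCH = 0.0254          # m
PSI = 6894.757         # Pa per lbf/in^2

pa_per_in_h2o = RHO_WATER * G_STANDARD * INCH
print(pa_per_in_h2o)           # ~249.089 Pa
print(pa_per_in_h2o / PSI)     # ~0.0361 lbf/in^2
print(PSI / pa_per_in_h2o)     # ~27.7 inches of water per lbf/in^2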
See also
Pressure head
Barometer
Centimetre of water
Inch of mercury
Millimetre of mercury
References
Units of pressure | Inch of water | [
"Mathematics"
] | 477 | [
"Quantity",
"Units of measurement",
"Units of pressure"
] |
11,101,129 | https://en.wikipedia.org/wiki/Film%20temperature | In fluid thermodynamics, the film temperature (Tf) is an approximation of the temperature of a fluid inside a convection boundary layer. It is calculated as the arithmetic mean of the temperature at the surface of the solid boundary wall (Tw) and the free-stream temperature (T∞):
Tf = (Tw + T∞) / 2
The film temperature is often used as the temperature at which fluid properties are calculated when using the Prandtl number, Nusselt number, Reynolds number or Grashof number to calculate a heat transfer coefficient, because it is a reasonable first approximation to the temperature within the convection boundary layer.
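As a minimal sketch of that usage in Python (the numeric temperatures below are arbitrary illustration values, and the kinematic viscosity would in practice be looked up from property tables at the film temperature):

def film_temperature(t_wall: float, t_freestream: float) -> float:
    # Arithmetic mean of the wall and free-stream temperatures.
    return 0.5 * (t_wall + t_freestream)

def reynolds_number(velocity: float, length: float, nu_at_film: float) -> float:
    # Re = U * L / nu, with the kinematic viscosity nu evaluated at the film temperature.
    return velocity * length / nu_at_film

t_f = film_temperature(t_wall=380.0, t_freestream=300.0)   # 340.0 (K)
# nu_at_film would be taken from a fluid property table evaluated at t_f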
Somewhat confusing terminology may be encountered in relation to boilers and heat exchangers, where the same term is used to refer to the limit (hot) temperature of a fluid in contact with a hot surface.
References
Fluid dynamics
Heat transfer | Film temperature | [
"Physics",
"Chemistry",
"Engineering"
] | 162 | [
"Transport phenomena",
"Physical phenomena",
"Heat transfer",
"Chemical engineering",
"Thermodynamics",
"Piping",
"Fluid dynamics"
] |
11,101,338 | https://en.wikipedia.org/wiki/Equiprobability | Equiprobability is a property for a collection of events that each have the same probability of occurring. In statistics and probability theory it is applied in the discrete uniform distribution and the equidistribution theorem for rational numbers. If there are n events under consideration, the probability of each occurring is 1/n.
In philosophy it corresponds to a concept that allows one to assign equal probabilities to outcomes when they are judged to be equipossible or to be "equally likely" in some sense. The best-known formulation of the rule is Laplace's principle of indifference (or principle of insufficient reason), which states that, when "we have no other information than" that exactly n mutually exclusive events can occur, we are justified in assigning each the probability 1/n. This subjective assignment of probabilities is especially justified for situations such as rolling dice and lotteries since these experiments carry a symmetry structure, and one's state of knowledge must clearly be invariant under this symmetry.
A similar argument could lead to the seemingly absurd conclusion that the sun is as likely to rise as to not rise tomorrow morning. However, the conclusion that the sun is equally likely to rise as it is to not rise is only absurd when additional information is known, such as the laws of gravity and the sun's history. Similar applications of the concept are effectively instances of circular reasoning, with "equally likely" events being assigned equal probabilities, which means in turn that they are equally likely. Despite this, the notion remains useful in probabilistic and statistical modeling.
In Bayesian probability, one needs to establish prior probabilities for the various hypotheses before applying Bayes' theorem. One procedure is to assume that these prior probabilities have some symmetry which is typical of the experiment, and then assign a prior which is proportional to the Haar measure for the symmetry group: this generalization of equiprobability is known as the principle of transformation groups and leads to misuse of equiprobability as a model for incertitude.
See also
Principle of indifference
Laplacian smoothing
Uninformative prior
A priori probability
Aequiprobabilism
Uniform probability distributions:
Continuous
Discrete
Information gain
References
External links
Quotes on equiprobability in classical probability
Probability interpretations
Philosophy of statistics | Equiprobability | [
"Mathematics"
] | 467 | [
"Probability interpretations",
"Philosophy of statistics"
] |
11,101,522 | https://en.wikipedia.org/wiki/Isomorphism-closed%20subcategory | In category theory, a branch of mathematics, a subcategory 𝒜 of a category ℬ is said to be isomorphism closed or replete if every ℬ-isomorphism h : A → B with A ∈ 𝒜 belongs to 𝒜. This implies that both B and h⁻¹ belong to 𝒜 as well.
A subcategory that is isomorphism closed and full is called strictly full. In the case of full subcategories it is sufficient to check that every ℬ-object that is isomorphic to an 𝒜-object is also an 𝒜-object.
This condition is very natural. For example, in the category of topological spaces one usually studies properties that are invariant under homeomorphisms, the so-called topological properties. Every topological property corresponds to a strictly full subcategory of Top, the category of topological spaces.
References
Category theory | Isomorphism-closed subcategory | [
"Mathematics"
] | 147 | [
"Functions and mappings",
"Mathematical structures",
"Mathematical objects",
"Fields of abstract algebra",
"Category theory",
"Mathematical relations"
] |
11,101,714 | https://en.wikipedia.org/wiki/Luggin%20capillary | A Luggin capillary (also Luggin probe, Luggin tip, or Luggin-Haber capillary) is a small tube that is used in electrochemistry. The capillary defines a clear sensing point for the reference electrode near the working electrode. This is in contrast to the poorly defined, large reference electrode.
References
External links
Advanced Electrochemistry. Slide 15: Luggin capillary for reference electrode.
Electrodes
Electrochemistry | Luggin capillary | [
"Chemistry"
] | 101 | [
"Physical chemistry stubs",
"Electrochemistry",
"Electrodes",
"Electrochemistry stubs"
] |
11,102,669 | https://en.wikipedia.org/wiki/Automated%20exception%20handling | Automated exception handling is a computing term referring to the computerized handling of errors. Runtime systems (engines) such as those for the Java programming language or .NET Framework lend themselves to an automated mode of exception or error handling. In these environments, software errors do not crash the operating system or runtime engine, but rather generate exceptions. Recent advances in these runtime engines enable specialized runtime engine add-on products to provide automated exception handling that is independent of the source code and provides root-cause information for every exception of interest.
How it works
Upon exception, the runtime engine calls an error interception tool that is attached to the runtime engine (e.g., Java virtual machine (JVM)). Based on the nature of the exception, such as its type and the class and method in which it occurred, and based on user preferences, an exception can be either handled or ignored.
If the preference is to handle the exception, then based on handling preferences such as memory search depth, the error interception utility extracts memory values from heap and stack memories. This snapshot then produces the equivalent of a debugger screen (as if there had been a debugger) at the moment of the exception.
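The mechanism can be sketched in a few lines of Python (an analogy only; the JVM/.NET add-on products described here hook the runtime engine itself rather than a script interpreter): a process-wide hook intercepts any uncaught exception and records the local variables of every stack frame, approximating the "debugger screen" snapshot described above.

import sys
import traceback

def exception_snapshot(exc_type, exc_value, tb):
    # Process-wide hook: report the exception and the locals of each frame on the stack.
    print(f"Intercepted {exc_type.__name__}: {exc_value}")
    for frame, lineno in traceback.walk_tb(tb):
        code = frame.f_code
        print(f"  {code.co_filename}:{lineno} in {code.co_name}")
        for name, value in frame.f_locals.items():   # memory snapshot of this frame
            print(f"    {name} = {value!r}")

sys.excepthook = exception_snapshot   # attach the interception hook to the runtime

def divide(a, b):
    return a / b

divide(1, 0)   # the ZeroDivisionError is reported with its locals instead of a bare crash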
Effects
This mechanism enables the automated handling of software errors independent of the application source code and of its developers. It is a direct artifact of the runtime engine paradigm and it enables unique advantages to the software life cycle that were unavailable before.
References
Java (programming language) | Automated exception handling | [
"Technology"
] | 303 | [
"Computing stubs",
"Computer science",
"Computer science stubs"
] |
11,103,166 | https://en.wikipedia.org/wiki/Pro-oxidant | Pro-oxidants are chemicals that induce oxidative stress, either by generating reactive oxygen species or by inhibiting antioxidant systems. The oxidative stress produced by these chemicals can damage cells and tissues, for example, an overdose of the analgesic paracetamol (acetaminophen) can fatally damage the liver, partly through its production of reactive oxygen species.
Some substances can serve as either antioxidants or pro-oxidants, depending on conditions. Some of the important conditions include the concentration of the chemical and if oxygen or transition metals are present. While thermodynamically very favored, reduction of molecular oxygen or peroxide to superoxide or hydroxyl radical respectively is spin forbidden. This greatly reduces the rates of these reactions, thus allowing aerobic life to exist. As a result, the reduction of oxygen typically involves either the initial formation of singlet oxygen, or spin–orbit coupling through a reduction of a transition-series metal such as manganese, iron, or copper. This reduced metal then transfers the single electron to molecular oxygen or peroxide.
Metals
Transition metals can serve as pro-oxidants. For example, chronic manganism is a classic "pro-oxidant" disease. Another disease associated with the chronic presence of a pro-oxidant transition-series metal is hemochromatosis, associated with elevated iron levels. Similarly, Wilson's disease is associated with elevated tissue levels of copper. Such syndromes tend to be associated with common symptomology. Thus, all are occasional symptoms of (e.g) hemochromatosis, another name for which is "bronze diabetes". The pro-oxidant herbicide paraquat, Wilson's disease, and striatal iron have similarly been linked to human Parkinsonism. Paraquat also produces Parkinsonian-like symptoms in rodents.
Fibrosis
Fibrosis or scar formation is another pro-oxidant-related symptom. E.g., interocular copper or vitreous chalicosis is associated with severe vitreous fibrosis, as is interocular iron. Liver cirrhosis is also a major symptom of Wilson's disease. The pulmonary fibrosis produced by paraquat and the antitumor agent bleomycin is also thought to be induced by the pro-oxidant properties of these agents. It may be that oxidative stress produced by such agents mimics a normal physiological signal for fibroblast conversion to myofibroblasts.
Pro-oxidant vitamins
Vitamins that are reducing agents can be pro-oxidants. Vitamin C has antioxidant activity when it reduces oxidizing substances such as hydrogen peroxide, however, it can also reduce metal ions which leads to the generation of free radicals through the Fenton reaction.
2 Fe2+ + 2 H2O2 → 2 Fe3+ + 2 OH· + 2 OH−
2 Fe3+ + Ascorbate → 2 Fe2+ + Dehydroascorbate
The metal ion in this reaction can be reduced, oxidized, and then re-reduced, in a process called redox cycling that can generate reactive oxygen species.
The relative importance of the antioxidant and pro-oxidant activities of antioxidant vitamins is an area of current research, but vitamin C, for example, appears to have a mostly antioxidant action in the body. However, less data is available for other dietary antioxidants, such as polyphenol antioxidants, zinc, and vitamin E.
Use in medicine
Several important anticancer agents both bind to DNA and generate reactive oxygen species. These include adriamycin and other anthracyclines, bleomycin, and cisplatin. These agents may show specific toxicity towards cancer cells because of the low level of antioxidant defenses found in tumors. Recent research demonstrates that redox dysregulation originating from metabolic alterations and dependence on mitogenic and survival signaling through reactive oxygen species represents a specific vulnerability of malignant cells that can be selectively targeted by pro-oxidant non-genotoxic redox chemotherapeutics.
Photodynamic therapy is used to treat some cancers as well as other conditions. It involves the administration of a photosensitizer followed by exposing the target to appropriate wavelengths of light. The light excites the photosensitizer, causing it to generate reactive oxygen species, which can damage or destroy diseased or unwanted tissue.
See also
Antioxidant
Oxidizing agent
Reducing agent
Methylene blue
DCPIP
References
Antioxidants
Cell biology
Chemical pathology
Physiology
Redox | Pro-oxidant | [
"Chemistry",
"Biology"
] | 983 | [
"Cell biology",
"Redox",
"Physiology",
"Electrochemistry",
"nan",
"Biochemistry",
"Chemical pathology"
] |
11,103,761 | https://en.wikipedia.org/wiki/Ligand%20dependent%20pathway | There are two types of pathway for substitution of ligands in a complex. The ligand dependent pathway is the one whereby the chemical properties of the ligand affect the rate of substitution. Alternatively, there is the ligand independent pathway, which is where the ligand does not have an effect.
This distinction is of central importance in inorganic chemistry and the study of complex ions.
References
Chemical bonding | Ligand dependent pathway | [
"Physics",
"Chemistry",
"Materials_science"
] | 75 | [
"Chemical bonding",
"Condensed matter physics",
"nan"
] |
11,104,113 | https://en.wikipedia.org/wiki/NGC%202808 | NGC 2808 is a globular cluster in the constellation Carina. The cluster currently belongs to the Milky Way, although it was likely stolen from a dwarf galaxy that collided with the Milky Way. NGC 2808 is one of the Milky Way's most massive clusters, containing more than a million stars. It is estimated to be 12.5 billion years old.
The cluster is being disrupted by the galactic tide, trailing a long tidal tail.
Star generations
It had been thought that NGC 2808, like typical globular clusters, contains only one generation of stars formed simultaneously from the same material. In 2007, a team of astronomers led by Giampaolo Piotto of the University of Padua in Italy investigated Hubble Space Telescope images of NGC 2808 taken in 2005 and 2006 with Hubble's Advanced Camera for Surveys. Unexpectedly, they found that this cluster is composed of three generations of stars, all born within 200 million years of the formation of the cluster.
Astronomers have argued that globular clusters can produce only one generation of stars, because the radiation from first generation stars would drive the residual gas not consumed in the first star generation phase out of the cluster. However, the great mass of a cluster such as NGC 2808 may suffice to gravitationally counteract the loss of gaseous matter. Thus, a second and a third generation of stars may form.
An alternative explanation for the three star generations of NGC 2808 is that it may actually be the remnant core of a dwarf galaxy that collided with the Milky Way, called the Sausage Galaxy.
See also
Messier 54 and Messier 79, two other possibly extragalactic globular clusters
Omega Centauri
References
External links
Globular clusters
Carina (constellation)
2808
Canis Major Overdensity | NGC 2808 | [
"Astronomy"
] | 362 | [
"Carina (constellation)",
"Constellations"
] |
11,105,238 | https://en.wikipedia.org/wiki/SL2%28R%29 |
In mathematics, the special linear group SL(2, R) or SL2(R) is the group of 2 × 2 real matrices with determinant one:
SL(2, R) = { (a, b; c, d) : a, b, c, d ∈ R and ad − bc = 1 },
where (a, b; c, d) denotes the matrix with first row (a, b) and second row (c, d).
It is a connected non-compact simple real Lie group of dimension 3 with applications in geometry, topology, representation theory, and physics.
SL(2, R) acts on the complex upper half-plane by fractional linear transformations. The group action factors through the quotient PSL(2, R) (the 2 × 2 projective special linear group over R). More specifically,
PSL(2, R) = SL(2, R) / {±I},
where I denotes the 2 × 2 identity matrix. It contains the modular group PSL(2, Z).
Also closely related is the 2-fold covering group, Mp(2, R), a metaplectic group (thinking of SL(2, R) as a symplectic group).
Another related group is SL±(2, R), the group of real 2 × 2 matrices with determinant ±1; this is more commonly used in the context of the modular group, however.
Descriptions
SL(2, R) is the group of all linear transformations of R2 that preserve oriented area. It is isomorphic to the symplectic group Sp(2, R) and the special unitary group SU(1, 1). It is also isomorphic to the group of unit-length coquaternions. The group SL±(2, R) preserves unoriented area: it may reverse orientation.
The quotient PSL(2, R) has several interesting descriptions, up to Lie group isomorphism:
It is the group of orientation-preserving projective transformations of the real projective line P1(R).
It is the group of conformal automorphisms of the unit disc.
It is the group of orientation-preserving isometries of the hyperbolic plane.
It is the restricted Lorentz group of three-dimensional Minkowski space. Equivalently, it is isomorphic to the indefinite orthogonal group SO+(1,2). It follows that SL(2, R) is isomorphic to the spin group Spin(2,1)+.
Elements of the modular group PSL(2, Z) have additional interpretations, as do elements of the group SL(2, Z) (as linear transforms of the torus), and these interpretations can also be viewed in light of the general theory of SL(2, R).
Homographies
Elements of PSL(2, R) are homographies on the real projective line R ∪ {∞}:
x ↦ (ax + b)/(cx + d).
These projective transformations form a subgroup of PSL(2, C), which acts on the Riemann sphere by Möbius transformations.
When the real line is considered the boundary of the hyperbolic plane, PSL(2, R) expresses hyperbolic motions.
Möbius transformations
Elements of PSL(2, R) act on the complex plane by Möbius transformations:
z ↦ (az + b)/(cz + d), with a, b, c, d ∈ R and ad − bc = 1.
This is precisely the set of Möbius transformations that preserve the upper half-plane. It follows that PSL(2, R) is the group of conformal automorphisms of the upper half-plane. By the Riemann mapping theorem, it is also isomorphic to the group of conformal automorphisms of the unit disc.
These Möbius transformations act as the isometries of the upper half-plane model of hyperbolic space, and the corresponding Möbius transformations of the disc are the hyperbolic isometries of the Poincaré disk model.
The above formula can be also used to define Möbius transformations of dual and double (aka split-complex) numbers. The corresponding geometries are in non-trivial relations to Lobachevskian geometry.
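A small numerical sketch of this action (Python, illustrative only; the particular matrix and point are arbitrary): applying an element of SL(2, R) to a point z with Im z > 0 by z ↦ (az + b)/(cz + d) keeps the imaginary part positive, since Im(g·z) = Im(z)/|cz + d|².

def moebius(g, z):
    # Apply g = ((a, b), (c, d)) in SL(2, R) to z by z -> (a*z + b) / (c*z + d).
    (a, b), (c, d) = g
    return (a * z + b) / (c * z + d)

g = ((2.0, 1.0), (3.0, 2.0))   # determinant 2*2 - 1*3 = 1
z = 0.5 + 1.0j                 # a point in the upper half-plane
w = moebius(g, z)
print(w, w.imag > 0)           # the image stays in the upper half-plane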
Adjoint representation
The group SL(2, R) acts on its Lie algebra sl(2, R) by conjugation (remember that the Lie algebra elements are also 2 × 2 matrices), yielding a faithful 3-dimensional linear representation of PSL(2, R). This can alternatively be described as the action of PSL(2, R) on the space of quadratic forms on R2. The result is the following representation:
The Killing form on sl(2, R) has signature (2,1), and induces an isomorphism between PSL(2, R) and the Lorentz group SO+(2,1). This action of PSL(2, R) on Minkowski space restricts to the isometric action of PSL(2, R) on the hyperboloid model of the hyperbolic plane.
Classification of elements
The eigenvalues of an element A ∈ SL(2, R) satisfy the characteristic polynomial
λ² − tr(A) λ + 1 = 0,
and therefore
λ = ( tr(A) ± √( tr(A)² − 4 ) ) / 2.
This leads to the following classification of elements, with corresponding action on the Euclidean plane:
If |tr(A)| < 2, then A is called elliptic, and is conjugate to a rotation.
If |tr(A)| = 2, then A is called parabolic, and is a shear mapping.
If |tr(A)| > 2, then A is called hyperbolic, and is a squeeze mapping.
The names correspond to the classification of conic sections by eccentricity: if one defines eccentricity as half the absolute value of the trace (ε = ½|tr|; dividing by 2 corrects for the effect of dimension, while absolute value corresponds to ignoring an overall factor of ±1 such as when working in PSL(2, R)), then this yields: ε < 1, elliptic; ε = 1, parabolic; ε > 1, hyperbolic.
The identity element 1 and negative identity element −1 (in PSL(2, R) they are the same), have trace ±2, and hence by this classification are parabolic elements, though they are often considered separately.
The same classification is used for SL(2, C) and PSL(2, C) (Möbius transformations) and PSL(2, R) (real Möbius transformations), with the addition of "loxodromic" transformations corresponding to complex traces; analogous classifications are used elsewhere.
A subgroup that is contained with the elliptic (respectively, parabolic, hyperbolic) elements, plus the identity and negative identity, is called an elliptic subgroup (respectively, parabolic subgroup, hyperbolic subgroup).
The trichotomy of SL(2, R) into elliptic, parabolic, and hyperbolic elements is a classification into subsets, not subgroups: these sets are not closed under multiplication (the product of two parabolic elements need not be parabolic, and so forth). However, each element is conjugate to a member of one of 3 standard one-parameter subgroups (possibly times ±1), as detailed below.
Topologically, as trace is a continuous map, the elliptic elements (excluding ±1) form an open set, as do the hyperbolic elements (excluding ±1). By contrast, the parabolic elements, together with ±1, form a closed set that is not open.
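Under this classification, deciding the type of a given matrix is a one-line check on the trace. A minimal Python sketch (treating traces within a floating-point tolerance of ±2 as parabolic; the example matrices are arbitrary):

def classify_sl2(a, b, c, d, tol=1e-12):
    # Classify an element of SL(2, R) by |trace|: < 2 elliptic, = 2 parabolic, > 2 hyperbolic.
    assert abs(a * d - b * c - 1.0) < tol, "matrix must have determinant 1"
    t = abs(a + d)
    if abs(t - 2.0) < tol:
        return "parabolic"
    return "elliptic" if t < 2.0 else "hyperbolic"

print(classify_sl2(0.0, -1.0, 1.0, 0.0))   # rotation by 90 degrees -> elliptic
print(classify_sl2(1.0, 5.0, 0.0, 1.0))    # shear -> parabolic
print(classify_sl2(2.0, 0.0, 0.0, 0.5))    # squeeze -> hyperbolic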
Elliptic elements
The eigenvalues for an elliptic element are both complex, and are conjugate values on the unit circle. Such an element is conjugate to a rotation of the Euclidean plane – they can be interpreted as rotations in a possibly non-orthogonal basis – and the corresponding element of PSL(2, R) acts as (conjugate to) a rotation of the hyperbolic plane and of Minkowski space.
Elliptic elements of the modular group must have eigenvalues {ω, ω⁻¹}, where ω is a primitive 3rd, 4th, or 6th root of unity. These are all the elements of the modular group with finite order, and they act on the torus as periodic diffeomorphisms.
Elements of trace 0 may be called "circular elements" (by analogy with eccentricity) but this is rarely done; they correspond to elements with eigenvalues ±i, and are conjugate to rotation by 90°, and square to -I: they are the non-identity involutions in PSL(2).
Elliptic elements are conjugate into the subgroup of rotations of the Euclidean plane, the special orthogonal group SO(2); the angle of rotation is arccos of half of the trace, with the sign of the rotation determined by orientation. (A rotation and its inverse are conjugate in GL(2) but not SL(2).)
Parabolic elements
A parabolic element has only a single eigenvalue, which is either 1 or -1. Such an element acts as a shear mapping on the Euclidean plane, and the corresponding element of PSL(2, R) acts as a limit rotation of the hyperbolic plane and as a null rotation of Minkowski space.
Parabolic elements of the modular group act as Dehn twists of the torus.
Parabolic elements are conjugate into the 2-component group of standard shears × ±I: { (1, λ; 0, 1) : λ ∈ R } × {±I}. In fact, they are all conjugate (in SL(2)) to one of the four matrices (1, ±1; 0, 1), (−1, ±1; 0, −1) (in GL(2) or SL±(2), the ± can be omitted, but in SL(2) it cannot).
Hyperbolic elements
The eigenvalues for a hyperbolic element are both real, and are reciprocals. Such an element acts as a squeeze mapping of the Euclidean plane, and the corresponding element of PSL(2, R) acts as a translation of the hyperbolic plane and as a Lorentz boost on Minkowski space.
Hyperbolic elements of the modular group act as Anosov diffeomorphisms of the torus.
Hyperbolic elements are conjugate into the 2-component group of standard squeezes × ±I: $\begin{pmatrix}\lambda & 0 \\ 0 & \lambda^{-1}\end{pmatrix} \cdot \{\pm I\}$; the hyperbolic angle of the hyperbolic rotation is given by arcosh of half of the trace, but the sign can be positive or negative: in contrast to the elliptic case, a squeeze and its inverse are conjugate in SL(2) (by a rotation of the axes; for the standard axes, a rotation by 90°).
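The angle formulas quoted above for the elliptic and hyperbolic cases can be checked numerically; a small sketch (not from the source), taking arccos and arcosh of half the trace:

```python
# Recover the rotation angle of an elliptic element and the hyperbolic angle of a
# hyperbolic element from half the trace (the sign/orientation is not determined here).
import numpy as np

def rotation_angle(g: np.ndarray) -> float:
    """Angle (radians) of the Euclidean rotation an elliptic element is conjugate to."""
    return float(np.arccos(np.trace(g) / 2.0))

def hyperbolic_angle(g: np.ndarray) -> float:
    """Hyperbolic angle of the squeeze a hyperbolic element is conjugate to."""
    return float(np.arccosh(abs(np.trace(g)) / 2.0))

theta = 0.3
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
squeeze = np.array([[np.e, 0.0],
                    [0.0, 1.0 / np.e]])   # eigenvalues e and 1/e, hyperbolic angle 1

print(round(rotation_angle(rot), 3), round(hyperbolic_angle(squeeze), 3))   # 0.3 1.0
```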
Conjugacy classes
By Jordan normal form, matrices are classified up to conjugacy (in GL(n, C)) by eigenvalues and nilpotence (concretely, nilpotence means where 1s occur in the Jordan blocks). Thus elements of SL(2) are classified up to conjugacy in GL(2) (or indeed SL±(2)) by trace (since determinant is fixed, and trace and determinant determine eigenvalues), except if the eigenvalues are equal, so ±I and the parabolic elements of trace +2 and trace -2 are not conjugate (the former have no off-diagonal entries in Jordan form, while the latter do).
Up to conjugacy in SL(2) (instead of GL(2)), there is an additional datum, corresponding to orientation: a clockwise and counterclockwise (elliptical) rotation are not conjugate, nor are a positive and negative shear, as detailed above; thus for absolute value of trace less than 2, there are two conjugacy classes for each trace (clockwise and counterclockwise rotations), for absolute value of the trace equal to 2 there are three conjugacy classes for each trace (positive shear, identity, negative shear), and for absolute value of the trace greater than 2 there is one conjugacy class for a given trace.
Iwasawa or KAN decomposition
The Iwasawa decomposition of a group is a method to construct the group as a product of three Lie subgroups K, A, N.
For SL(2, R), these three subgroups are
$K = \left\{ \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix} : \theta \in \mathbf{R} \right\} \cong SO(2)$,
$A = \left\{ \begin{pmatrix} r & 0 \\ 0 & 1/r \end{pmatrix} : r > 0 \right\}$,
$N = \left\{ \begin{pmatrix} 1 & x \\ 0 & 1 \end{pmatrix} : x \in \mathbf{R} \right\}$.
These three subgroups are the generators of the elliptic, hyperbolic, and parabolic subsets respectively (every elliptic, hyperbolic, or parabolic element is conjugate into K, A·{±I}, or N·{±I} respectively, as described above).
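A numerical sketch of the KAN factorization (not from the source; it assumes the input has determinant 1 and computes the factors by Gram–Schmidt on the first column):

```python
# Iwasawa (KAN) decomposition of g in SL(2, R): g = k @ a @ n with k a rotation,
# a a positive diagonal matrix, and n upper unitriangular.
import numpy as np

def iwasawa(g: np.ndarray):
    """Return (k, a, n) with g = k @ a @ n, for g in SL(2, R)."""
    x, y = g[0, 0], g[1, 0]
    r = np.hypot(x, y)                    # > 0 since det g = 1
    k = np.array([[x / r, -y / r],
                  [y / r,  x / r]])       # rotation in SO(2)
    a = np.diag([r, 1.0 / r])             # positive diagonal, det = 1
    n = np.linalg.inv(a) @ k.T @ g        # remaining upper unitriangular factor
    return k, a, n

g = np.array([[2.0, 1.0],
              [3.0, 2.0]])                # det = 1, so g is in SL(2, R)
k, a, n = iwasawa(g)
print(np.allclose(k @ a @ n, g),          # factorization reproduces g
      np.isclose(n[1, 0], 0.0),           # n is upper triangular
      np.isclose(n[0, 0], 1.0))           # with 1s on the diagonal
```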
Topology and universal cover
As a topological space, PSL(2, R) can be described as the unit tangent bundle of the hyperbolic plane. It is a circle bundle, and has a natural contact structure induced by the symplectic structure on the hyperbolic plane. SL(2, R) is a 2-fold cover of PSL(2, R), and can be thought of as the bundle of spinors on the hyperbolic plane.
The fundamental group of SL(2, R) is the infinite cyclic group Z. The universal covering group, denoted , is an example of a finite-dimensional Lie group that is not a matrix group. That is, admits no faithful, finite-dimensional representation.
As a topological space, is a line bundle over the hyperbolic plane. When imbued with a left-invariant metric, the 3-manifold becomes one of the eight Thurston geometries. For example, is the universal cover of the unit tangent bundle to any hyperbolic surface. Any manifold modeled on is orientable, and is a circle bundle over some 2-dimensional hyperbolic orbifold (a Seifert fiber space).
Under this covering, the preimage of the modular group PSL(2, Z) is the braid group on 3 generators, B3, which is the universal central extension of the modular group. These are lattices inside the relevant algebraic groups, and this corresponds algebraically to the universal covering group in topology.
The 2-fold covering group can be identified as Mp(2, R), a metaplectic group, thinking of SL(2, R) as the symplectic group Sp(2, R).
The aforementioned groups together form a sequence: $\overline{\mathrm{SL}(2,\mathbf{R})} \to \cdots \to \mathrm{Mp}(2,\mathbf{R}) \to \mathrm{SL}(2,\mathbf{R}) \to \mathrm{PSL}(2,\mathbf{R})$.
However, there are other covering groups of PSL(2, R), one for each positive integer n, corresponding to the subgroups nZ < Z ≅ π₁(PSL(2, R)); these form a lattice of covering groups by divisibility, and they cover SL(2, R) if and only if n is even.
Algebraic structure
The center of SL(2, R) is the two-element group {±1}, and the quotient PSL(2, R) is simple.
Discrete subgroups of PSL(2, R) are called Fuchsian groups. These are the hyperbolic analogue of the Euclidean wallpaper groups and Frieze groups. The most famous of these is the modular group PSL(2, Z), which acts on a tessellation of the hyperbolic plane by ideal triangles.
The circle group SO(2) is a maximal compact subgroup of SL(2, R), and the circle SO(2) / {±1} is a maximal compact subgroup of PSL(2, R).
The Schur multiplier of the discrete group PSL(2, R) is much larger than Z, and the universal central extension is much larger than the universal covering group. However these large central extensions do not take the topology into account and are somewhat pathological.
Representation theory
SL(2, R) is a real, non-compact simple Lie group, and is the split-real form of the complex Lie group SL(2, C). The Lie algebra of SL(2, R), denoted sl(2, R), is the algebra of all real, traceless 2 × 2 matrices. It is the Bianchi algebra of type VIII.
The finite-dimensional representation theory of SL(2, R) is equivalent to the representation theory of SU(2), which is the compact real form of SL(2, C). In particular, SL(2, R) has no nontrivial finite-dimensional unitary representations. This is a feature of every connected simple non-compact Lie group. For an outline of the proof, see non-unitarity of representations.
The infinite-dimensional representation theory of SL(2, R) is quite interesting. The group has several families of unitary representations, which were worked out in detail by Gelfand and Naimark (1946), V. Bargmann (1947), and Harish-Chandra (1952).
See also
Linear group
Special linear group
Projective linear group
Modular group
SL(2, C) (Möbius transformations)
Projective transformation
Fuchsian group
Table of Lie groups
Anosov flow
References
Group theory
Lie groups
Projective geometry
Hyperbolic geometry
3-manifolds | SL2(R) | [
"Mathematics"
] | 3,348 | [
"Lie groups",
"Mathematical structures",
"Group theory",
"Fields of abstract algebra",
"Algebraic structures"
] |
11,105,645 | https://en.wikipedia.org/wiki/Condosity | Condosity is a comparative measurement of electrical conductivity of a solution.
The condosity of any given solution is defined as the molar concentration of a sodium chloride (NaCl) solution that has the same specific electrical conductance as the solution under test.
By way of example, for a 2 molar potassium chloride (KCl) solution, the condosity would be expected to be somewhat greater than 2.0, because potassium ions in solution conduct electricity better (have a higher ionic mobility) than sodium ions.
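In practice, condosity can be read off a calibration curve of NaCl conductivity versus concentration. A minimal sketch of that interpolation, assuming such a curve has been measured; the numbers below are illustrative placeholders, not reference data:

```python
# Interpolate a NaCl calibration curve to find the NaCl molarity whose specific
# conductance equals that of the test solution (its condosity).
import numpy as np

nacl_molarity = np.array([0.01, 0.05, 0.1, 0.5, 1.0, 2.0])         # mol/L (placeholder grid)
nacl_conductivity = np.array([1.2, 5.5, 10.7, 46.0, 85.0, 158.0])  # mS/cm (placeholder values)

def condosity(measured_conductivity_mS_cm: float) -> float:
    """NaCl molarity with the same specific conductance as the test solution."""
    return float(np.interp(measured_conductivity_mS_cm, nacl_conductivity, nacl_molarity))

# Example: a test solution measuring 46 mS/cm has a condosity of 0.5 on this placeholder curve.
print(condosity(46.0))
```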
Applications
The measurement is sometimes used in biological systems to provide an assessment of the properties of bodily or cellular liquids, or the properties of solutes in the physical environment. When measuring the properties of bodily fluids such as urine, condosity is expressed in units of millimoles per litre (mM/L).
References
Solutions | Condosity | [
"Chemistry"
] | 162 | [
"Homogeneous chemical mixtures",
"Solutions"
] |
11,105,906 | https://en.wikipedia.org/wiki/Russell%20M.%20Pitzer | Russell Mosher Pitzer (born May 10, 1938) is an American theoretical chemist and educator.
He was born in Berkeley, California and attended public schools in this and the Washington, D.C. area. He received his B.S. in chemistry in 1959 from the California Institute of Technology, his A.M. in physics from Harvard University in 1963, and his Ph.D. in chemical physics from Harvard University in 1963.
At Harvard, Pitzer worked with William N. Lipscomb, Jr. in cooperation with the research group of John C. Slater at M.I.T. to develop computer programs to use Slater orbitals to produce self-consistent field (SCF) molecular orbitals.
The ethane barrier was first calculated accurately by Pitzer and Lipscomb using Hartree–Fock self-consistent field (SCF) theory. Ethane gives a classic, simple example of such a rotational barrier: the minimum energy needed to produce a 360-degree bond rotation of a molecular substructure. The three hydrogens at each end are free to pinwheel about the central carbon–carbon bond, provided that there is sufficient energy to overcome the barrier created by the carbon–hydrogen bonds at each end of the molecule bumping into each other by way of overlap (exchange) repulsion.
Also at Harvard, Pitzer helped formulate the perturbed Hartree–Fock equations in a form suitable for calculating the effects of external electric and magnetic fields on molecules.
He was a postdoctoral fellow at M.I.T. and a faculty member at Caltech before joining the Chemistry Department at Ohio State University in 1968. He was promoted to Professor in 1979 and served as Department Chair from 1989 to 1994.
His group wrote computer software to enable calculation of molecular energies and other properties. In 1979, with John Yates, he published the first Jahn–Teller effect study (on cobalt trifluoride, CoF3) to use a computed energy surface. Relativistic effects could be included. An early application, with A. Chang, was the first assignment of the visible spectrum of uranocene.
During 1986–87 he served as Acting Associate Director of the Ohio Supercomputer Center, cofounding the center and the Ohio Academic Resources Network. During 2001–03 he served as Interim Director of the Ohio Supercomputer Center. In 2004 he received the Faculty Award For Distinguished University Service. He retired in 2008.
His father was former Stanford University president Kenneth Pitzer and his grandfather, Russell K. Pitzer, founded Pitzer College, one of the seven Claremont Colleges in California. Russell M. Pitzer served as a trustee of Pitzer College from 1988 to 2012, and in 2003 received a Doctor of Humane Letters honorary degree in recognition of this service. In 2018 the Ohio Supercomputer Center named their newly purchased supercomputer Pitzer in honor of his role in founding the center.
References
Living people
Harvard University alumni
Massachusetts Institute of Technology fellows
Ohio State University faculty
21st-century American chemists
Theoretical chemists
California Institute of Technology alumni
1938 births
20th-century American chemists
People from Berkeley, California
Scientists from California | Russell M. Pitzer | [
"Chemistry"
] | 647 | [
"Theoretical chemists",
"American theoretical chemists"
] |
11,105,988 | https://en.wikipedia.org/wiki/Antwerp%20Water%20Works | The Antwerp Water Works () or AWW produces water for the city of Antwerp (Belgium) and its surroundings. The AWW has a yearly production of and a revenue of 100 million euro.
History
Between 1832 and 1892, Antwerp was struck every ten to fifteen years by a major cholera epidemic which each time claimed a few thousand lives and lasted for about two years. In 1866 the cholera epidemic infected about 5000 people, of whom about 3000 died. Between 1861 and 1867 several proposals were made for a water supply for Antwerp. In 1873, under mayor Leopold De Wael, it was decided that a concession should be granted to secure the water supply of the city.
On 25 June 1873, a concession of 50 years was granted to the English engineers, Joseph Quick from London, together with John Dick, to organize the water supply of Antwerp. Due to a lack of funds and a dispute between the partners this venture stranded. In 1879, the English engineering company Easton & Anderson took over the yards and the concession. Within two years they succeeded in finishing the work. An exploitation society was established: the Antwerp Waterworks Company Limited, a society according to English law which would be in charge of the exploitation from 1881 up to 1930.
The water was won from the Nete river at the bridge of Walem. It was purified according to an original method: an iron filter. In the period 1881 up to 1908 the system was repaired repeatedly, until eventually a new method of filtration was chosen which was a combination of fast with slow sand filtration. This method of filtration is still being used today for the treatment of a large part of the raw material, now water from the Albert Canal.
In 1930, the concession came to an end, as no agreement could be reached with the English owners concerning a new arrangement in which the municipalities surrounding Antwerp would be included. The city of Antwerp took over the company and founded a mixed intermunicipal company (with private and public participation) in which the English Waterworks kept a minority participation. The remaining shares were in the hands of the city of Antwerp and the surrounding municipalities of Berchem, Boechout, Borgerhout, Deurne, Edegem, Ekeren, Hoboken, Hove, Mortsel, Kontich and Wilrijk. The English withdrew from the company in 1965. In the same year a new production site was established in Oelegem and a new office building was opened in Antwerp. During the dry summer of 1976 it became clear that the reserve capacity needed to be expanded, and in 1982 the reservoir of Broechem was inaugurated. The second concession ended after 53 years, so in 1983 a new concession was granted to the AWW.
In 2003 Brabo Industrial Water Solutions (BIWS) started, a consortium with Ondeo Industrial Solutions, to provide water tailored for the industry. In 2004 the RI-ANT project started (together with Aquafin), which takes over the management and the maintenance of the sewerage network of Antwerp.
See also
EU water policy
Public water supply
Water purification
References
Sources
AWW
AWW History (Dutch)
Water treatment facilities
Water companies of Belgium
Water supply and sanitation in Belgium
Companies based in Antwerp
Antwerp | Antwerp Water Works | [
"Chemistry"
] | 650 | [
"Water treatment",
"Water treatment facilities"
] |
11,106,399 | https://en.wikipedia.org/wiki/Polar%20wind | The polar wind or plasma fountain is a permanent outflow of plasma from the polar regions of Earth's magnetosphere. Conceptually similar to the solar wind, it is one of several mechanisms for the outflow of ionized particles. Ions accelerated by a polarization electric field known as an ambipolar electric field is believed to be the primary cause of polar wind. Similar processes operate on other planets.
History
In 1966 Bauer and, separately, Dessler and Michel noted that since the Earth's geomagnetic field above the poles forms a long tail away from the Sun out beyond the Moon's orbit, ions should flow from the higher-pressure region in the ionosphere out into space.
The term "polar wind" was coined in 1968 in a pair of articles by Banks and Holzer and by Ian Axford. Since the process by which the ionospheric plasma flows away from the Earth along magnetic field lines, is similar to the flow of solar plasma away from the Sun's corona (the solar wind), Axford suggested the term "polar wind."
The earliest experimental characterization of the polar wind came from the 1966 Explorer 33 and especially the 1974 ISIS-2 satellite projects. Additional data from the 1981 Dynamics Explorer led to some uncertainty in the theoretical models about the role of cool O+ ions. This issue was cleared up with the more comprehensive data from 1989 Akebono satellite, and the 1996 Polar satellite.
The idea for the polar wind originated with the desire to solve the paradox of the terrestrial helium budget. This paradox consists of the fact that helium in the Earth's atmosphere seems to be produced (via radioactive decay of uranium and thorium) faster than it is lost by escaping from the upper atmosphere. The realization that some helium could be ionized, and therefore escape the Earth along open magnetic field lines near the magnetic poles (the 'polar wind'), is one possible solution to the paradox.
Causes
After 30 years of research, the "classical" cause of the polar wind has been shown to be ambipolar outflow of thermal plasma: ion acceleration by a polarization electric field in the ionosphere.
The polarization or ambipolar electric field was originally proposed in the 1920s for ionized stellar atmospheres. Gravitational charge separation creates a field amounting to
$E = \bar{m} g / e$,
where $g$ is the gravitational field, $e$ is the elementary charge, and $\bar{m}$ is the mean ionic mass, half the difference between the mass of the singly charged ions and the electron. This simple formula is only applicable in a plasma in hydrostatic equilibrium. More complex models applicable to real plasmas show larger field strengths. In any case the field is very small but, unlike other forces, it points away from gravity. In low-density plasma at high altitude it overwhelms gravity for light ions.
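A rough numerical illustration of the size of this field (not from the source; it assumes a pure hydrogen plasma and standard physical constants):

```python
# Order-of-magnitude estimate of the hydrostatic ambipolar field E = m_bar * g / e
# for an H+ plasma, with m_bar defined as half the ion-electron mass difference.
e = 1.602e-19     # elementary charge, C
m_p = 1.673e-27   # proton mass, kg
m_e = 9.109e-31   # electron mass, kg
g = 8.0           # gravitational acceleration at roughly 1000 km altitude, m/s^2 (approximate)

m_bar = (m_p - m_e) / 2.0
E = m_bar * g / e          # volts per metre

print(f"E = {E:.1e} V/m")  # about 4e-8 V/m: tiny, yet it acts against gravity on light ions
```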
In the region of the polar wind, the ionospheric plasma expands and the low density allows gravity to pull ions down relative to the electrons in the plasma. The charge separation results in the electric field, which then sends some of the ions up and out of the atmosphere. This mechanism is known as "ambipolar outflow" and the field as the "ambipolar electric field" or "polarization electric field". Additional mechanisms include ion acceleration by solar photoelectrons escaping along magnetic field lines.
Ions driven outward by the ambipolar electric field accumulate in the plasmasphere if they follow closed magnetic field lines, but ions following open magnetic field lines exit the Earth system, pushed away from the Sun by the solar wind (anti-solar convection).
Measurements
Numerous investigations of the polar wind have been carried out, using missions including ISIS-2, Dynamics Explorer, the Akebono satellite, and the Polar satellite, covering a variety of altitudes, latitudes, and times relative to the solar cycle. Some of the conclusions include:
the primary ingredients in the polar wind are electrons, hydrogen (H+), helium (He+), and oxygen (O+) ions,
O+ ions dominate below 4,000 km,
all three ion species reach supersonic velocities above 7,000 km, and velocities increase to over Mach 2 above 50,000 km,
polar wind velocity increases with altitude, and is higher on the dayside of the Earth,
The polarization or ambipolar electric field was directly measured in 2022 by a sounding rocket launched from Svalbard. This NASA mission was called Endurance. Comparing the electrical potential at altitude of 250 km to that at 768 km gave a difference of +0.55 volt with an uncertainty of 0.09 volt. The voltage is similar to that used in a wristwatch battery but is sufficient to account for the polar wind.
See also
Ambipolar diffusion
References
External links
Atmosphere
Terrestrial plasmas
Space plasmas | Polar wind | [
"Physics"
] | 981 | [
"Space plasmas",
"Astrophysics"
] |
11,106,681 | https://en.wikipedia.org/wiki/Arsenical | Arsenicals are chemical compounds that contain arsenic. In a military context, the term arsenical refers to toxic arsenic compounds that are used as chemical warfare agents. This includes blister agents, blood agents and vomiting agents. Historically, they were used extensively as insecticides, especially lead arsenate.
Examples
Blister agents
Ethyldichloroarsine
Lewisite
Methyldichloroarsine
Phenyldichloroarsine
Blood agents
Arsine
Vomiting agents
Adamsite
Diphenylchlorarsine
Diphenylcyanoarsine
Phenyldichloroarsine
References
Arsenic compounds
Chemical weapons
Poisons | Arsenical | [
"Chemistry",
"Biology",
"Environmental_science"
] | 127 | [
"Chemical accident",
"Toxicology",
"Chemical weapons",
"Poisons",
"Biochemistry"
] |
11,107,329 | https://en.wikipedia.org/wiki/SN%202006gy | SN 2006gy was an extremely energetic supernova, also referred to as a hypernova, that was discovered on September 18, 2006. It was first observed by Robert Quimby and P. Mondol, and then studied by several teams of astronomers using facilities that included the Chandra, Lick, and Keck Observatories. In May 2007, NASA and several of the astronomers announced the first detailed analyses of the supernova, describing it as the "brightest stellar explosion ever recorded". In October 2007, Quimby announced that SN 2005ap had broken SN 2006gy's record as the brightest-ever recorded supernova, and several subsequent discoveries are brighter still. Time magazine listed the discovery of SN 2006gy as third in its Top 10 Scientific Discoveries for 2007.
Characteristics
SN 2006gy occurred in the galaxy NGC 1260, approximately 238 million light-years (73 megaparsecs) away. The energy radiated by the explosion has been estimated at 10⁵¹ ergs (10⁴⁴ J), making it a hundred times more powerful than the typical supernova explosion, which radiates 10⁴⁹ ergs (10⁴² J) of energy. Although at its peak the SN 2006gy supernova was intrinsically 400 times as luminous as SN 1987A, which was bright enough to be seen by the naked eye, SN 2006gy was more than 1,400 times as far away as SN 1987A, and too far away to be seen without a telescope.
SN 2006gy is classified as a type II supernova because it showed lines of hydrogen in its spectrum, although the extreme brightness indicates that it is different from the typical type II supernova. Several possible mechanisms have been proposed for such a violent explosion, all requiring a very massive progenitor star. The most likely explanations involve the efficient conversion of explosive kinetic energy to radiation by interaction with circumstellar material, similar to a type IIn supernova but on a larger scale. Such a scenario might occur following mass loss of in a luminous blue variable eruption, or through pulsational pair instability ejections. Denis Leahy and Rachid Ouyed, Canadian scientists from the University of Calgary, have proposed that SN 2006gy was a quark-nova, heralding the birth of a quark star.
Similarity to Eta Carinae
Eta Carinae (η Carinae or η Car) is a highly luminous hypergiant star located approximately 7,500 light-years from Earth in the Milky Way galaxy. Since Eta Carinae is 32,000 times closer than SN 2006gy, the light from it will be about a billion-fold brighter. It is estimated to be similar in size to the star which became SN 2006gy. Dave Pooley, one of the discoverers of SN 2006gy, says that if Eta Carinae exploded in a similar fashion, it would be bright enough that one could read by its light on Earth at night, and would even be visible during the daytime. SN 2006gy's apparent magnitude (m) was 15, so a similar event at Eta Carinae will have an m of about −7.5. According to astrophysicist Mario Livio, this could happen at any time, but the risk to life on Earth would be low.
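The brightness comparison in the preceding paragraph follows from the inverse-square law and the magnitude scale; a quick check of the arithmetic (not from the source):

```python
# Inverse-square law: a source 32,000 times closer appears 32,000^2 ~ 1e9 times brighter,
# which corresponds to a magnitude change of 2.5 * log10(1e9) ~ 22.5.
import math

distance_ratio = 32_000                  # Eta Carinae vs. SN 2006gy (figure from the text)
flux_ratio = distance_ratio ** 2         # ~ 1.0e9, the "billion-fold" brighter figure
delta_m = 2.5 * math.log10(flux_ratio)   # ~ 22.5 magnitudes

m_sn2006gy = 15.0                        # apparent magnitude of SN 2006gy (from the text)
m_similar_event_at_eta_car = m_sn2006gy - delta_m   # ~ -7.5, matching the text

print(f"{flux_ratio:.2e} {delta_m:.1f} {m_similar_event_at_eta_car:.1f}")
```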
References
SIMBAD data
External links
Light curves and spectra on the Open Supernova Catalog
Astronomy Picture of the Day 10 May 2007
Giant exploding star outshines previous supernovas (CNN.com)
Space.com article on SN 2006gy.
Star dies in brightest supernova, BBC, Tuesday, 8 May 2007, 03:35 GMT
The Greatest Show in Space, Time magazine Thursday, May 21st, 2007 Pages 56–57
Supernova may offer new view of early universe
Lick Observatory Laser Guide Star Adaptive Optics
Image SN 2006gy
Perseus (constellation)
Supernovae
Hypernovae
20060918 | SN 2006gy | [
"Chemistry",
"Astronomy"
] | 801 | [
"Supernovae",
"Astronomical events",
"Hypernovae",
"Constellations",
"Perseus (constellation)",
"Explosions"
] |
11,107,782 | https://en.wikipedia.org/wiki/Cape%20foot | A Cape foot is a unit of length defined as 1.0330 English feet (and equal to 12.396 English inches, or 0.31485557516 meters) found in documents of belts and diagrams relating to landed property. It was identically equal to the Rijnland voet and was introduced into South Africa by the Dutch settlers in the seventeenth and eighteenth century.
Its relationship to the English foot was clarified in 1859 by an Act of the government of the Cape Colony, South Africa. It was used for land surveying and title deeds in rural areas of South Africa apart from Natal, and was also used for urban surveying and title deeds in the Transvaal. There were 144 square Cape feet in one square Cape rood and 600 square Cape roods (86,400 square Cape feet) in one morgen.
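A small conversion sketch based on the relationships stated above (the constants are taken from this article; the helper function name is ours):

```python
# Convert between Cape feet, Cape roods, morgen, and metric units.
CAPE_FOOT_IN_METERS = 0.31485557516           # from the definition above
SQ_CAPE_FEET_PER_SQ_CAPE_ROOD = 144           # one Cape rood = 12 Cape feet
SQ_CAPE_ROODS_PER_MORGEN = 600
SQ_CAPE_FEET_PER_MORGEN = SQ_CAPE_FEET_PER_SQ_CAPE_ROOD * SQ_CAPE_ROODS_PER_MORGEN  # 86,400

def morgen_to_square_meters(morgen: float) -> float:
    """Convert an area in morgen to square meters via square Cape feet."""
    return morgen * SQ_CAPE_FEET_PER_MORGEN * CAPE_FOOT_IN_METERS ** 2

print(round(morgen_to_square_meters(1), 1))   # ~ 8565.2 m², i.e. about 0.86 hectare per morgen
```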
Its use ceased when South Africa adopted the metric system in 1977, though it has not yet been entirely replaced in pre-existing title deeds.
References
Units of measurement | Cape foot | [
"Mathematics"
] | 196 | [
"Quantity",
"Units of measurement"
] |
11,107,815 | https://en.wikipedia.org/wiki/Press%20brake | A press brake is a type of brake, a machine used for bending sheet metal and metal plate. It forms predetermined bends by clamping the workpiece between a matching top tool and bottom die.
Typically, two C-frames form the sides of the press brake, connected to a table at the bottom and on a movable beam at the top. The bottom tool is mounted on the table, with the top tool mounted on the upper beam.
Types
A brake can be described by basic parameters, such as the force or tonnage and the working length. Additional parameters include the stroke length, the distance between the frame uprights or side housings, distance to the back gauge, and work height. The upper beam usually operates at a speed ranging from 1 to 15 mm/s (in working mode) and up to 200 mm/s (depends on the type) in idle mode (approach and return).
There are several types of press brakes including nut-stop hydraulic, synchro hydraulic, electric and hybrid.
Hydraulic presses operate by means of two synchronized hydraulic cylinders on the C-frames moving the upper beam. Servo-electric brakes use a servo-motor to drive a ballscrew or belt drive to exert tonnage on the ram.
Historically, a mechanical press stored energy in a flywheel that was spun up by an electric motor. A clutch engages the flywheel to power a crank mechanism that moves the ram vertically. Accuracy and speed are two advantages of the mechanical press.
Until the 1950s, mechanical brakes dominated the world market. The advent of better hydraulics and computer controls have led to hydraulic machines being the most popular.
Today's press brakes are controlled by two types of controls: NC (numerically controlled) or CNC (computer numerically controlled). NC is a basic controller, while CNC is the high-end controller. Although the initial outlay might be more than with an NC, a CNC controller can be more effective, keeping costs down in the long run.
Pneumatic and servo-electric machines are typically used in lower tonnage applications. Hydraulic brakes produce accurate, high-quality products; are reliable; use little energy; and are safer because, unlike flywheel-driven presses, the motion of the ram can be easily stopped at any time in response to a safety device, e.g. a light curtain or other presence sensing device.
Back gauge
Recent improvements are mainly in the control and a device called a back gauge. A back gauge is a device that can be used to accurately position a piece of metal so that the brake puts the bend in the correct place. Furthermore, the back gauge can be programmed to move between bends to repeatedly make complex parts. The animation to the right shows the operation of the back gauge, setting the distance from the edge of the material or previous bend to the center of the die.
Press brakes often include multi-axis computer-controlled back gauges. They allow operators to position material correctly and sequence the bends step-by-step until completed. Optical sensors allow operators to make adjustments during the bending process. These sensors send real-time data about the bending angle in the bend cycle to machine controls that adjust process parameters.
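Positioning the back gauge for a given flange length requires knowing how much material is consumed in the bend itself. A minimal sketch of that calculation, assuming the common bend-allowance formula and a typical K-factor (the numbers are illustrative, not from the source):

```python
# Bend allowance BA = (pi / 180) * bend_angle * (inside_radius + k_factor * thickness),
# the length of material consumed along the neutral axis of one bend.
import math

def bend_allowance(angle_deg: float, inside_radius: float, thickness: float,
                   k_factor: float = 0.33) -> float:
    """Material used in one bend, in the same length units as the inputs."""
    return math.radians(angle_deg) * (inside_radius + k_factor * thickness)

# Example: a 90-degree bend in 2 mm sheet with a 2 mm inside radius.
ba = bend_allowance(90, inside_radius=2.0, thickness=2.0)
print(round(ba, 2))   # ~ 4.18 mm consumed in the bend
```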
Dies
Press brakes can be used for many different forming jobs with the right die design. Types of dies include:
V-dies—the most common type of die. The bottom dies can be made with different-sized die openings to handle a variety of materials and bend angles.
Air bending—V-dies can be produced in a variety of angles to suit the sheet metal bend angle required, but more often dies with a smaller than necessary V angle are used, the actual sheet metal bend angle being determined by how closely the upper and lower dies are brought together. In the animation above, ~30 degree dies are being used to produce 90 degree bends. The air gap which remains between the lower die and the sheet metal after the bend is completed is the reason for the term "air bending".
Rotary bending dies—a cylindrical shape with an 88-degree V-notch cut along its axis is seated in the "saddle" of the punch. The die is an anvil over which the rocker bends the sheet.
90 degree dies—largely used for bottoming operations. The die opening dimension depends on material thickness.
Acute angle (air-bending) dies—used in air bending, these can actually be used to produce acute, 90 degree, and obtuse angles by varying how deeply the punch enters the die by adjusting the ram.
Gooseneck (return-flanging) dies—The punch is designed to allow for clearance of already formed flanges
Offset dies—a combination punch and die set that bends two angles in one stroke to produce a Z shape.
Hemming dies—two-stage dies combining an acute angle die with a flattening tool.
Seaming dies—There are a number of ways to build dies to produce seams in sheets and tubes.
Radius dies—A radiussed bend can be produced by a rounded punch. The bottom die may be a V-die or may include a spring pad or rubber pad to form the bottom of the die.
Beading dies—A bead or a "stopped rib" may be a feature that stiffens the resulting part. The punch has a rounded head with flat shoulders on each side of the bead. The bottom die is the inverse of the punch.
Curling dies—The die forms a curled or coiled edge on the sheet.
Tube- and pipe-forming dies—a first operation bends the edges of the sheet to make the piece roll up. Then a die similar to a curling die causes the tube to be formed. Larger tubes are formed over a mandrel.
Four-way die blocks—A single die block may have a V machined into each of four sides for ease of changeover of small jobs.
Channel-forming dies—A punch can be pressed into a die to form two angles at the bottom of the sheet, forming an angular channel.
U-bend dies—Similar to channel forming, but with a rounded bottom. Spring back may be a problem and a means may need to be provided for countering it.
Box-forming dies—While a box may be formed by simple angle bends on each side, the different side lengths of a rectangular box must be accommodated by building the punch in sections. The punch also needs to be high enough to accommodate the height of the resulting box's sides.
Corrugating dies—Such dies have a wavy surface and may involve spring-loaded punch elements.
Multiple-bend dies—A die set may be built in the shape of the desired profile and form several bends on a single stroke of the press.
Rocker-type dies—A rocker insert in the punch may allow for some side-to-side motion, in addition to the up-and-down motion of the press.
See also
Brake (sheet metal bending)
Bending machine (manufacturing)
Bending (mechanics)
Tube bending
Spring Back Compensation
References
External links
Further reading
Benson, Steve D. Press Brake Technology: A Guide to Precision Sheet Metal Bending. Society of Manufacturing Engineers, 1997.
Hydraulic machinery
Press tools
Machine tools
Metalworking tools
Articles containing video clips
Presswork | Press brake | [
"Physics",
"Engineering"
] | 1,475 | [
"Machine tools",
"Physical systems",
"Hydraulics",
"Hydraulic machinery",
"Industrial machinery"
] |
11,107,891 | https://en.wikipedia.org/wiki/Atlantic%20Southeast%20Airlines%20Flight%202311 | Atlantic Southeast Airlines Flight 2311 was a regularly scheduled commuter flight in Georgia in the southeastern United States, from Hartsfield–Jackson Atlanta International Airport to Glynco Jetport (since renamed Brunswick Golden Isles Airport) in Brunswick on April 5, 1991.
The flight, operating a twin-turboprop Embraer EMB 120 Brasilia, crashed north of Brunswick while approaching the airport for landing. All 23 aboard were killed, including astronaut Sonny Carter and former U.S. Senator John Tower.
Four years later, another Embraer Brasilia of ASA crashed in the Georgia countryside in similar circumstances, with nine fatalities.
Background
Aircraft
The aircraft involved was an Embraer EMB 120 Brasilia (registration number N270AS), manufactured on November 30, 1990. Equipped with two Pratt & Whitney PW-118 engines and Hamilton Standard 14RF-9 propellers, it received its U.S. standard airworthiness certificate on December 20. In service less than four months, the aircraft had accumulated about 816 flying hours and 845 cycles prior to the accident.
Only one deferred maintenance item was noted in the maintenance logs, for fuel leaking from the auxiliary power unit (APU) cowling. The circuit breaker for the APU had been pulled while spare parts could be made available to fix the cowling. Because they were not required at the time, the aircraft did not have a cockpit voice recorder or flight data recorder.
Flight crew
Captain Mark Friedline, aged 34, had been hired by Atlantic Southeast Airlines ten years earlier in May 1981, and was fully qualified to fly three different commercial aircraft, including the EMB-120. At the time of the accident, he had accumulated an estimated 11,724 total flying hours, of which 5,720 hours were in the EMB-120. Friedline had been involved in the development of the EMB-120, and its introduction to service in the United States, and was trained to fly the aircraft by the manufacturer. An inspector described his knowledge of aircraft systems "extensive", and his pilot techniques as "excellent".
First Officer Hank Johnston, aged 36, was hired by Atlantic Southeast Airlines in June 1988, and was a qualified flight instructor. Because more than six months had passed since he had undergone an FAA medical inspection and been issued a first-class certificate, it automatically reverted to a second-class certificate, adequate for his duties as a first officer. At the time of the accident, Johnston had accumulated about 3,925 total flying hours, of which 2,795 hours were in the EMB-120.
Accident
On the morning of the accident, the captain and first officer arrived at the Dothan Regional Airport by taxi about 06:15 Eastern Standard Time. The taxi cab driver reported that the crew was in good spirits and readily engaged in conversation. The crew flew first to Atlanta, then performed a round trip to Montgomery, Alabama, before returning to Atlanta. After this round trip, the crew had a scheduled break for around two and a half hours, in which they were described to be well rested and talkative.
Flight 2311 was scheduled initially to be operated by N228AS, another EMB-120. This airplane experienced mechanical problems, so the flight was switched to N270AS. This aircraft had flown four times already on the day of the accident, with no reports of any problems. N270AS departed Atlanta, operating ASA Flight 2311, at 13:47, 23 minutes behind schedule.
Flight 2311 deviated slightly in its flight path to Brunswick to avoid poor weather. Just after 14:48, the flight crew acknowledged to Jacksonville air route traffic control center that the airport was in sight, and Flight 2311 was subsequently cleared for a visual approach to Glynco Jetport on runway 7, which the flight crew acknowledged.
The last transmission received from Flight 2311 was to the ASA manager at the airport, who reported that the flight made an "in-range call" on the company radio frequency, and that the pilot gave no indication that the flight had any mechanical problems. Witnesses reported seeing the aircraft approaching the airport in visual meteorological conditions at a much lower than normal altitude. Several witnesses estimated that the aircraft flew over them at an altitude of above the ground.
According to most of the interviewed witnesses, the airplane suddenly rolled to the left until the wings were perpendicular to the ground. The aircraft then descended in a nose-down attitude and disappeared from sight behind trees near the airport. One witness told investigators that they saw a puff of smoke emanate from the aircraft prior to or subsequent to the airplane rolling to the left. Others reported loud engine noises described as a squeal, whine, or an overspeeding or accelerating engine during the last moments of the flight, although they said that these noises seemed to have stopped, or at least faded before the aircraft impacted with flat ground two miles short of the runway.
One witness interviewed by the NTSB, a pilot driving on a road southwest of the airport, told investigators that he saw the airplane in normal flight at normal altitudes, and that he believed that the approach was not abnormal. The airplane completed a 180° turn from the downwind leg of the approach and continued the turn. He then saw the aircraft pitch slightly, before it rolled to the left until the wings were vertical. The airplane then turned nose-down and smashed into the ground. He saw no fire or smoke during the flight, and he believed both propellers were rotating.
Investigation
An investigation carried out by the National Transportation Safety Board (NTSB) initially determined that a malfunction of the flight control surfaces, including a rudder or ailerons hardover or asymmetric flaps, could not have caused the accident, after multiple pilots in simulators managed to keep the aircraft under control. Engine failure was also ruled out by detailed inspection of the two engines. The investigators found that the "circumstances of this accident indicate that a severe asymmetric thrust condition caused a left roll that led to loss of control of the airplane. The NTSB's investigation examined all the possible events that could have caused the loss of control. The powerplant and propeller examinations indicated that the engines were operating normally, but that a propeller system malfunction occurred", which allowed the left propeller's angles to be oriented nearly perpendicular to the direction of flight, resulting in insufficient thrust and higher drag on the left side.
The NTSB conducted a test flight in an EMB-120 with the left engine having the propeller control mechanism set to a similar mechanical condition, but blocking the propeller blades from moving below 22° to not endanger the flight crew. The flight crew was found to be unable to perceive any problem with the airplane until the propeller blade angle was between 24 and 26°. They stated that the airplane would have "become very difficult to control after the propeller reached the 22° stop, so the pilots of flight 2311 most likely did not notice a problem with the airplane until the propeller began to overspeed and roll control was affected." Thus, the flight crew would have been unable to declare an emergency, as the event was so sudden. The crashed aircraft's left engine propeller blades went to 3° instead of the commanded 79.2° for feathering.
The NTSB's final report, while acknowledging that Atlantic Southeast's practice of overworking pilots (the pilots only received an estimated 5 to 6 hours of sleep in violation of federal aviation regulations) played no direct part in the accident, still raised concerns that the airline, along with other commuter airline corporations, "scheduled reduced rest periods for about 60% of the layovers in its day-to-day operations. The NTSB believes that this practice is inconsistent with the level of safety intended by the regulations, which is to allow reduced rest periods as a contingency to a schedule disruption, and has the potential of adversely affecting pilot fitness and performance."
Probable cause
On April 28, 1992, the NTSB published its final accident report, including its determination of the cause of the crash:
The National Transportation Safety Board determines that, the probable cause of this accident was the loss of control in flight as a result of a malfunction of the left engine propeller control unit, which allowed the propeller blade angles to go below the flight idle position. Contributing to the accident was the deficient design of the propeller control unit by Hamilton Standard and the approval of the design by the Federal Aviation Administration. The design did not correctly evaluate the failure mode that occurred during this flight, which resulted in an uncommanded and uncorrectable movement of the blades of the airplane's left propeller below the flight idle position.
Passengers
There were 20 passengers onboard. Among them were former U.S. Senator John Tower from Texas (head of the Tower Commission for the Iran–Contra affair) and astronaut Manley "Sonny" Carter. All 23 passengers and crew died in the crash.
Depictions in media
The Discovery Channel Canada / National Geographic TV series Mayday (also called Air Crash Investigation, Air Emergency or Air Disasters) dramatized the accident in a 2016 episode titled "Steep Impact".
See also
Atlantic Southeast Airlines Flight 529
List of accidents and incidents involving commercial aircraft
References
External links
Aviation accidents and incidents in the United States in 1991
Airliner accidents and incidents in Georgia (U.S. state)
1991 in Georgia (U.S. state)
Accidents and incidents involving the Embraer EMB 120 Brasilia
2311
Glynn County, Georgia
April 1991 events in the United States
Airliner accidents and incidents caused by mechanical failure
Airliner accidents and incidents caused by engine failure | Atlantic Southeast Airlines Flight 2311 | [
"Materials_science"
] | 1,953 | [
"Airliner accidents and incidents caused by mechanical failure",
"Mechanical failure"
] |
11,108,817 | https://en.wikipedia.org/wiki/Portable%20emissions%20measurement%20system | A portable emissions measurement system (PEMS) is a vehicle emissions testing device that is small and light enough to be carried inside or moved with a motor vehicle that is being driven during testing, rather than on the stationary rollers of a dynamometer that only simulates real-world driving.
Early examples of mobile vehicle emissions equipment were developed and marketed in the early 1990s by the Warren Spring Laboratory in the UK, and were used to measure on-road emissions as part of the UK Environment Research Program. Governmental agencies like the United States Environmental Protection Agency (USEPA) and various states and private entities have begun to use PEMS in order to reduce both the costs and time involved in making mobile emissions decisions.
The European Commission introduced PEMS as a mandatory requirement for light-duty vehicle type approval in 2016 by amending the regulation that was established in 2007.
Introduction of PEMS
Leo Breton of the US EPA invented the Real-time On-road Vehicle Emissions Reporter (ROVER) in 1995. The first commercially available device was invented by Michal Vojtisek-Lom, and developed by David Miller of Clean Air Technologies International (CATI) Inc. in Buffalo, New York in 1999. These early field devices used engine data from either an on-board diagnostics (OBD) port, or directly from an engine sensor array. The first unit was developed for, and sold to, Dr. H. Christopher Frey of North Carolina State University (NCSU) for the first on-road testing project, which was sponsored by the North Carolina Department of Transportation. David W. Miller, who co-founded CATI, first coined the phrase "Portable Emissions Measurement System" and "PEMS" when working on a 2000 New York City Metropolitan Transportation Agency bus project with Dr. Thomas Lanni of the New York State Department of Environmental Conservation, as a short-hand description of the new device. Other governmental groups and universities soon followed, and quickly began to use the equipment due to its balance of accuracy, low cost, light weight, and availability. From 1999 through 2004, research groups such as Virginia Tech, Penn State, the Texas A&M Transportation Institute, Texas Southern University, and others began to use PEMS in border crossing projects, roadway evaluations, traffic control methods, before-and-after scenarios, and ferries, planes, and off-road vehicles, to explore what was possible outside of a lab environment. A project performed in April 2002 by the California Air Resources Board (CARB), using non-1065 PEMS equipment, tested 40 trucks over a period of 2½ days, of which 22 trucks were tested on road in Tulare, California. During this time, a high-profile project performed with early PEMS equipment was the World Trade Center (WTC) Ground Zero Project in lower Manhattan, testing concrete pumpers, bulldozers, graders, and later diesel cranes on Building 7, 40 stories high. Other early PEMS projects, such as Dr. Chris Frey's field work, were used by the USEPA in the development of the MOVES model. However, users such as regulators and vehicle manufacturers had to wait for ROVER to be commercialized to conduct actual measurements of mass emissions, rather than depend on estimates of mass emissions using data from the OBD port or a direct engine measurement, in order to have a more defensible data set. This push led to a new 2005 standard known as 40 CFR Part 1065.
Many governmental entities (such as the USEPA and the United Nations Framework Convention on Climate Change or UNFCCC) have identified target mobile-source pollutants in various mobile standards, including Nitrogen Oxides (NOx), Carbon Dioxide (CO2), Particulate Matter (PM), Carbon Monoxide (CO), and Hydrocarbons (HC), to ensure that emissions standards are being met. Further, these governing bodies have begun adopting in-use testing programs for non-road diesel engines, as well as other types of internal combustion engines, and are requiring the use of PEMS testing. It is important to delineate the various classifications of the latest 'transferable' emissions testing equipment from true PEMS equipment, in order to best understand the desire for portability in field-testing of emissions.
Economic advantage of PEMS equipment
Because a PEMS unit is able to be carried easily by one person from jobsite to jobsite, and can be used without the requirement of 'team lifting', the required emissions testing projects are economically viable. Simply put, more testing can be done more quickly, by less workers, dramatically increasing the amount of testing done in a certain time period. This in turn, significantly reduces the 'cost per test', yet at the same time increases the overall accuracy required in a 'real-world' environment. Because the law of large numbers will create a convergence of results, it means that repeatability, predictability, and accuracy are enhanced, while simultaneously reducing the overall cost of the testing.
On-road emissions patterns identified by PEMS
Nearly all modern engines, when tested new and according to the accepted testing protocols in a laboratory, produce relatively low emissions well within the set standards. As all individual engines of the same series are supposed to be identical, only one or several engines of each series get tested. The tests have shown that:
The bulk of the total emissions can come from relatively short high-emissions episodes
Emissions characteristics can be different even among otherwise identical engines
Emissions outside of the bounds of the laboratory test procedures are often higher than under the operating and ambient conditions comparable to those during laboratory testing
Emissions deteriorate significantly over the useful life of the vehicles
There are large variances among the deterioration rates, with the high emissions rates often attributable to various mechanical malfunctions
These findings are consistent with published literature, and with the data from a myriad of subsequent studies. They are more applicable to spark-ignition engines and considerably less so to diesels, but with the regulation-driven advances in diesel engine technology (comparable to the advances in spark-ignition engines since the 1970s) it can be expected that these findings are likely to be applicable to the new generation of diesel engines. Since 2000, multiple entities have used PEMS data to measure in-use, on-road emissions on hundreds of diesel engines installed in school buses, transit buses, delivery trucks, plow trucks, over-the-road trucks, pickups, vans, forklifts, excavators, generators, loaders, compressors, locomotives, passenger ferries, and other on-road, off-road and non-road applications. All the previously listed findings were demonstrated; in addition, it was noticed that extended idling of engines can have a significant impact on the emissions during subsequent operation.
Also, PEMS testing identified several engine "anomalies" where fuel-specific NOx emissions were two to three times higher than expected during some modes of operation, suggesting deliberate alterations of the engine control unit (ECU) settings. Such a data set can be readily used for developing emissions inventories, as well as to evaluate various improvements in engines, fuels, exhaust after-treatment and other areas. (Data collected on "conventional" fleets then serves as "baseline" data to which various improvements are compared.) This data set can also be examined for compliance with not-to-exceed (NTE) and in-use emissions standards, which are US-based emission standards that require on-road testing.
Accuracy of PEMS
It is often difficult for PEMS to offer the same accuracy and variety of species measured as is possible with top-of-the-line laboratory instrumentation because PEMS are typically limited in size, weight and power consumption. For this reason, objections were raised against using PEMS for compliance verification. But there is also the potential for inaccuracy in fleet emissions deduced from laboratory measurements. For this reason, PEMS results from the European Real Driving Emissions (RDE) on-road test are weighted with a conformity factor of 2.1 (1.5 after 2019), i.e. the emissions measured by the PEMS are allowed to be a factor of 2.1 higher than the laboratory limit.
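A minimal sketch of how such a conformity-factor check works (not from the source; the limit and measured values below are hypothetical):

```python
# RDE-style evaluation: PEMS-measured distance-specific emissions are compared against
# the laboratory limit multiplied by the applicable conformity factor.
def passes_rde(measured_mg_per_km: float, lab_limit_mg_per_km: float,
               conformity_factor: float = 2.1) -> bool:
    """True if on-road emissions stay within conformity_factor times the lab limit."""
    return measured_mg_per_km <= conformity_factor * lab_limit_mg_per_km

# Hypothetical example: 150 mg/km measured on road against an 80 mg/km laboratory limit.
print(passes_rde(150.0, 80.0))                          # True  (150 <= 2.1 * 80 = 168)
print(passes_rde(150.0, 80.0, conformity_factor=1.5))   # False (150 >  1.5 * 80 = 120)
```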
It is expected that a variety of on-board systems will be designed, ranging from bread-box sized PEMS to instrumented trailers towed behind the tested truck. The benefits of each approach need to be considered in light of other sources of errors associated with emissions monitoring, notably vehicle-to-vehicle differences, and the emissions variability within the vehicle itself.
Additional PEMS criteria
PEMS need to be safe enough to use on public roads. During testing, portable emissions systems could attach extensions of the tailpipe, add lines and cables outside the vehicle, carry lead-acid batteries in the passenger compartment, have hot components accessible to bystanders, block emergency exits, interfere with the driver, or have loose components that could be caught in moving parts. Modifications to or disassembly of the tested vehicle such as drilling into the exhaust or removing intake air system need to be examined for their acceptance by both fleet managers and drivers, especially on passenger-carrying vehicles. The test equipment can not draw excessive electrical load from the test vehicle. Instead, sealed lead-acid batteries, fuel cells and generators have been used as external power sources, though they may add other hazards during driving.
The more time and expertise installation of the equipment requires, the greater the cost of testing, limiting the number of vehicles that can be tested. More testing is also possible with equipment that is versatile enough to be used on more than one type of vehicle. The weight and size of the equipment and consumables like calibration gases might limit moving to a sufficient number of locations. Any restrictions on the transport of hazardous materials (e.g., flame ionization detector (FID) fuel or calibration gases) need to be taken into account. The ability of the test crew to repair PEMS in the field using locally available resources can also be essential.
PEMS suitability to application
Ultimately, it should be demonstrated to show that a PEMS is suitable to the desired application. If the ultimate goal is to verify the compliance with in-use emissions requirements, a fleet of vehicles with known characteristics – including engines with dual-mapping and otherwise non-compliant engines – should be made available for testing. It should be then up to the PEMS manufacturers to practically demonstrate how these non-compliant vehicles can be identified using their system.
Testing volume and safe repeatability
In order to achieve the required amount of 'testing volume' needed to validate real-world testing, three points must be considered:
System accuracy
Federal and/or state health and safety guidelines and/or standards
Economic viability based on the first two points.
Once a particular portable emissions system has been identified and pronounced as accurate, the next step is to ensure that the worker(s) are properly protected from work hazards associated with the task(s) being performed in the use of the testing equipment. For example, typical functions for a worker may be to transport the equipment to the jobsite (i.e. car, truck, train, or plane), carry the equipment to the jobsite, and lift the equipment into position.
Advantages of PEMS
On-road vehicle emissions testing is very different from laboratory testing, bringing both considerable benefits and challenges. As the testing can take place during the regular operation of the tested vehicles, a large number of vehicles can be tested within a relatively short period of time and at relatively low cost. Engines that cannot be easily tested otherwise (e.g., ferry boat propulsion engines) can be tested. True real-world emissions data can be obtained. On the other hand, the instruments have to be small and lightweight, must withstand a difficult environment, and must not pose a safety hazard. Emissions data are subject to considerable variances, as real-world conditions are often neither well defined nor repeatable, and significant variances in emissions can exist even among otherwise identical engines.
On-road emissions testing therefore requires a different mindset than the traditional approach of testing in the laboratory and using models to predict real-world performance. In the absence of established methods, use of PEMS requires careful, thoughtful, broad approach. This should be considered when designing, evaluating and selecting PEMS for the desired application.
A recent example of PEMS advantages over laboratory testing is the Volkswagen (VW) Scandal of 2015.
Under a small grant from the International Council on Clean Transportation (ICCT), Daniel K. Carder of West Virginia University (WVU) uncovered on-board software "cheats" that VW had installed on some diesel passenger vehicles (the Dieselgate scandal). The only way the discovery could have been made was by a non-programmed, random, on-road evaluation utilizing a PEMS device. VW is now liable for over US$14 billion in fines. In 2016, these developments led to a global resurgence of interest in smaller, lighter, integrated and cost-effective "non-1065" PEMS, similar to the demonstration on MythBusters 2011 Episode 171, "Bikes and Bazookas", in which a non-1065 PEMS was used to establish the difference between car and motorcycle pollution.
Subcategory: integrated PEMS (iPEMS)
Overview of integrated PEMS (iPEMS) development
In response to Dieselgate, the "Real Driving Emissions" (RDE) standard has been developed in the European Union (EU) which has, in turn, increased the demand for smaller, lighter, more portable, less expensive and integrated PEMS equipment kits. iPEMS equipment is not presently able to be used as a "certification" device in the U.S.
Definition of iPEMS
The following features are common to the smaller and lighter class of iPEMS equipment:
A complete, self-contained, and internally modular Portable Emissions Measurement System (PEMS) kit
including a built-in, on-board power source,
no more than 7 kg in total weight (including carrying case, exhaust connectors, and any additional equipment required for use),
able to be carried by one person,
which can be transported through an airport terminal and stored in the overhead bin of an airplane;
once deployed at a field site, the iPEMS can begin testing vehicles within 30 minutes (assuming that the required onboard power pack has been charged);
the length of testing time capability from the integrated power pack is a minimum of two hours;
minimum pollutant testing capabilities must include: Nitrogen Oxides (NOx), Carbon Dioxide (CO2), and either Particulate Matter (PM) or Particulate Number (PN);
testing accuracy must be within 10% (or better) of a 1065 PEMS.
Advantages of iPEMS over 1065 PEMS equipment
The advantage of iPEMS equipment is that they are designed to both complement 1065 PEMS in addition to providing expanded capabilities, which are being driven by the requirements for quicker decision-making compounded by the 2015 Volkswagen scandal. These devices are presently being pursued by both the European Union (EU) and China for their RDE Programs.
See also
National Ambient Air Quality Standards (US)
Volkswagen emissions scandal
References
External links
Request for grant application by Texas Commission on Environmental Quality for a project that required the use of a PEMS
Study of snowmobile emissions by University of Denver uses PEMS to detect emissions Winter Motor-Vehicle Emissions in Yellowstone National Park
M. J. Bradley & Associates, Inc. monitored emissions from off-road vehicles at the World Trade center reconstruction site Investigation of Diesel Emission Control Technologies on Off-Road Construction Equipment at the World Trade Center and PATH Re-Development Site
Report by North Carolina State University for the USEPA with recommendations for the next generation of PEMS devices to be developed Recommended Strategy For On-Board Emission Data Analysis and Collection For the New Generation Model
Biodiesel study for the North Carolina Department of Transportation performed by North Carolina State University uses a PEMS device to gather emissions comparison data Operational Evaluation of Emissions and Fuel Use of B20 Versus Diesel Fueled Dump Trucks
Environmental Protection Agency information page on PEMS Science for the 21st Century
The Imperial College London (UK) Centre for Transport Studies conducted a vehicle performance study and gathered emissions data along the M42 Motorway using a PEMS device Instantaneous Vehicle Emission Monitoring
Recent uses and presentations of PEMS: University of California (UCR) College of Engineering - Center for Environmental Research and Technology (CE-CERT)
Engineering equipment
Regulators of biotechnology products
Air pollution emissions | Portable emissions measurement system | [
"Engineering",
"Biology"
] | 3,344 | [
"Regulation of biotechnologies",
"Biotechnology products",
"Regulators of biotechnology products",
"nan"
] |
11,108,970 | https://en.wikipedia.org/wiki/Naturescaping | Naturescaping (or nature scaping) is a method of landscape design and landscaping that allows people and nature to coexist. By incorporating certain native plants into one's yard, one can curtail the loss of wildlife habitat and attract beneficial insects, birds, and other creatures.
Origins
Naturescaping takes some of its principles from the US Environmental Protection Agency's (EPA) "GreenScaping" or "Beneficial Landscaping" programs — which strive to reduce water, energy, and chemical usage. Naturescaping is an organic discipline of this practice that is easily adapted to backyards.
History
Most universities throughout the country that have agricultural programs also have university cooperative extensions. These programs include Master Gardeners. The practice of naturescaping is taught at several of these universities.
Current acceptance
The practice has spawned many non-profit groups near universities that teach it. Many include some form of the phrase "naturescaping" in their name. Some states have recognized the benefits to society of this practice and of those who either volunteer or create a naturescaped garden. For instance, Oregon offers a state tax incentive.
See also
Landscape ecology
Sustainable landscaping
External links
Oregon Department of Fish and Wildlife (ODFW) promotes Naturescaping
NatureScaping of Southwest Washington
Nature Scaping for Clean Rivers
North Carolina Division of Pollution Prevention and Environmental Assistance
Horticulture
Organic gardening
Landscape architecture
Types of garden | Naturescaping | [
"Engineering"
] | 291 | [
"Landscape architecture",
"Architecture"
] |
11,109,012 | https://en.wikipedia.org/wiki/Yevgeniy%20Chazov | Yevgeniy Ivanovich Chazov (10 June 1929 – 12 November 2021) was a Soviet and Russian physician specializing in cardiology, chief of the Fourth Directorate of the Ministry of Health, academician of the Russian Academy of Sciences and the Russian Academy of Medical Sciences, and a recipient of numerous Soviet, Russian, and foreign awards and decorations.
Biography
Chazov was born in 1929. He was a graduate of the Kiev Medical Institute. Following his graduation he worked as a clinic surgeon, and later joined the research institute of therapy of the USSR Academy of Medical Sciences. He served as a managing director of the A. L. Myasnikov Research Institute. Chazov was the director of the Moscow cardiological center from 1976; it is one of the largest such centers in the world, comprising 10 separate institutes. As the chief of the Fourth Directorate of the Ministry of Health, which took care of Soviet leaders, he was widely regarded as the person responsible for the health of the Soviet leadership, although he sometimes denied that he was their "personal physician". He served as deputy health minister and was appointed minister of health in 1987. Chazov was a member of the central committee of the Communist Party.
In his book of memoirs, Health and Power he described many circumstances concerning the health of the Soviet leaders and of some leaders of the Soviet satellites.
Nobel Peace Prize
Yevgeniy Chazov was a co-founder and co-president of International Physicians for the Prevention of Nuclear War. Charged with promoting research on the probable medical, psychological, and biospheric effects of nuclear war, the group was awarded the Nobel Peace Prize on 10 December 1985. On the occasion of the award, Chazov gave the acceptance speech in Oslo. At that time the group represented more than 135,000 members from 41 countries. Many groups protested the decision to include Chazov, and alleged that Chazov was responsible for some of the Soviet abuses of psychiatry and medicine and for attacks against a 1975 recipient of the Nobel Peace Prize, the physicist and Soviet dissident Andrei D. Sakharov.
Personal life
Chazov was married three times. He had two daughters, Tatyana and Irina, from the first and second marriage, respectively.
Legacy
On 16 December 2022, a monument to the cardiologist Yevgeniy Chazov, founder of "Kremlin medicine", was erected on the grounds of the Moscow Central Clinical Hospital of the Presidential Administration of the Russian Federation.
References
External links
Acceptance speech on the occasion of the award of the Nobel Peace Prize in Oslo (10 December 1985)
Nobel Lecture (11 December 1985)
1929 births
2021 deaths
Candidates of the Central Committee of the 26th Congress of the Communist Party of the Soviet Union
Members of the Central Committee of the 26th Congress of the Communist Party of the Soviet Union
Members of the Central Committee of the 27th Congress of the Communist Party of the Soviet Union
Russian cardiologists
Soviet cardiologists
People from Nizhny Novgorod
Ministers of health of the Soviet Union
Full Members of the USSR Academy of Sciences
Full Members of the Russian Academy of Sciences
Academicians of the USSR Academy of Medical Sciences
Academicians of the Russian Academy of Medical Sciences
Members of the Serbian Academy of Sciences and Arts
Foreign members of the Bulgarian Academy of Sciences
Members of the Tajik Academy of Sciences
Soviet anti–nuclear weapons activists
Russian anti–nuclear weapons activists
Full Cavaliers of the Order "For Merit to the Fatherland"
Recipients of the Lomonosov Gold Medal
Heroes of Socialist Labour
Recipients of the Order of Lenin
Recipients of the Lenin Prize
Recipients of the USSR State Prize
State Prize of the Russian Federation laureates
Bogomolets National Medical University alumni
Léon Bernard Foundation Prize laureates
Foreign members of the Serbian Academy of Sciences and Arts
Tajikistani physicians
20th-century Tajikistani scientists
21st-century Tajikistani scientists | Yevgeniy Chazov | [
"Technology"
] | 780 | [
"Science and technology awards",
"Recipients of the Lomonosov Gold Medal"
] |
11,109,112 | https://en.wikipedia.org/wiki/Mentalization | In psychology, mentalization is the ability to understand the mental state – of oneself or others – that underlies overt behaviour.
Mentalization can be seen as a form of imaginative mental activity that lets us perceive and interpret human behaviour in terms of intentional mental states (e.g., needs, desires, feelings, beliefs, goals, purposes, and reasons). It is sometimes described as "understanding misunderstanding." Another term that David Wallin has used for mentalization is "Thinking about thinking". Mentalization can occur either automatically or consciously.
Background
While the broader concept of theory of mind has been explored at least since Descartes, the specific term 'mentalization' emerged in psychoanalytic literature in the late 1960s, and became empirically tested in 1983 when Heinz Wimmer and Josef Perner ran the first experiment to investigate when children can understand false belief, inspired by Daniel Dennett's interpretation of a Punch and Judy scene.
The field diversified in the early 1990s when Simon Baron-Cohen and Uta Frith, building on the Wimmer and Perner study, and others merged it with research on the psychological and biological mechanisms underlying autism and schizophrenia. Concomitantly, Peter Fonagy and colleagues applied it to developmental psychopathology in the context of attachment relationships gone awry. More recently, several child mental health researchers such as Arietta Slade, John Grienenberger, Alicia Lieberman, Daniel Schechter, and Susan Coates have applied mentalization both to research on parenting and to clinical interventions with parents, infants, and young children.
Implications
Mentalization has implications for attachment theory and self-development. According to Peter Fonagy, individuals with disorganized attachment style (e.g., due to physical, psychological, or sexual abuse) can have greater difficulty developing the ability to mentalize. Attachment history partially determines the strength of mentalizing capacity of individuals. Securely attached individuals tend to have had a primary caregiver with more complex and sophisticated mentalizing abilities. As a consequence, these children possess more robust capacities to represent the states of their own and other people's minds. Early childhood exposure to mentalization can protect the individual from psychosocial adversity. This early childhood exposure to genuine parental mentalization fosters development of mentalizing capabilities in the child themselves. There is also a suggestion that genuine parental mentalization is beneficial to child learning; when a child feels they are being viewed as an intentional agent, they feel contingently responded to, which promotes epistemic trust and triggers learning in the form of natural pedagogy; this increases the quality of learning in the child. This theory needs further empirical support.
Research
Mentalization, or better mentalizing, has a number of different facets which can be measured with various methods. A prominent method of assessment of Parental Mentalization is the Parental Development Interview (PDI), a 45-question semi-structured interview investigating parents’ representations of their children, themselves as parents, and their relationships with their children. An efficient self-report measure of Parental Mentalization is the Parental Reflective Functioning Questionnaire (PRFQ) created by Patrick Luyten and colleagues. The PRFQ is a brief, multidimensional assessment of parental reflective functioning (mentalization), aimed to be easy to administer to parents in a wide range of socioeconomic populations. The PRFQ is recommended for use as a screening tool for studies with large populations and does not aim to replace more comprehensive measures, such as the PDI or observer-based measures.
Peter Fonagy and colleagues have increasingly underscored that mentalization may be a common factor in psychological treatment, and results from a 2024 systematic review indicate that mentalization may be a mediator and moderator of mental health outcomes across diagnosis and treatment approach, although more studies are needed.
A 2024 study investigated the longitudinal impact of mentalizing on well-being and emotion regulation strategies in a non-clinical sample, finding that impairments in mentalizing negatively predicted well-being and positively predicted emotional suppression over one year. Research has also found a link between dopamine levels and the ability to mentalize. In particular, reducing dopamine activity in healthy individuals using the drug haloperidol impaired their mentalizing abilities, suggesting that dopamine plays a direct role in these social cognitive processes.
Fourfold dimensions
According to the American Psychiatric Association's Handbook of Mentalizing in Mental Health Practice, mentalization takes place along a series of four parameters or dimensions: Automatic/Controlled, Self/Other, Inner/Outer, and Cognitive/Affective.
Each dimension can be exercised in either a balanced or unbalanced way, while effective mentalization also requires a balanced perspective across all four dimensions.
Automatic/Controlled. Automatic (or implicit) mentalizing is a fast-processing unreflective process, calling for little conscious effort or input; whereas controlled mentalization (explicit) is slow, effortful, and demanding of full awareness. In a balanced personality, shifts from automatic to controlled smoothly occur when misunderstandings arise in a conversation or social setting, to put things right. Inability to shift from automatic mentalization can lead to a simplistic, one-sided view of the world, especially when emotions run high; while conversely inability to leave controlled mentalization leaves one trapped in a 'heavy', endlessly ruminative thought-mode.
Self/Other involves the ability to mentalize about one's own state of mind, as well as about that of another. Lack of balance means an overemphasis on either self or other.
Inner/Outer: Here problems can arise from an over-emphasis on external conditions, and a neglect of one's own feelings and experience.
Cognitive/Affective are in balance when both dimensions are engaged, as opposed to either an excessive certainty about one's own one-sided ideas, or an overwhelming of thought by floods of emotion.
See also
References
Sources
Further reading
External links
Anthony Bateman's homepage.
Mentalization factoids – compiled by Frederick Leonhardt. A summary of mentalization.
Developmental psychology
Psychological concepts | Mentalization | [
"Biology"
] | 1,246 | [
"Behavioural sciences",
"Behavior",
"Developmental psychology"
] |
4,265,443 | https://en.wikipedia.org/wiki/Hartwick%20Pines%20State%20Park | Hartwick Pines State Park is a public recreation area covering in Crawford County near Grayling and Interstate 75 on the Lower Peninsula of the U.S. state of Michigan. The state park contains an old-growth forest of white pines and red pines, known as the Hartwick Pines. The Michigan Department of Natural Resources claims that this old-growth area, along with the Red Pine Natural Area Preserve in Roscommon County, resembles the appearance of all of Northern Michigan prior to the logging era. These areas do, however, lack the recurring low-intensity fires which once occurred throughout northern Michigan, a change impacting regeneration of red pine and eastern hemlock, as well as leading to an increased content of hardwood species such as sugar maple and beech.
History
The Hartwick Pines are an old-growth remnant of a pine grove that was withdrawn from logging by a local timbering firm in 1927—a time when very little old-growth pine remained in northern Michigan. Karen Michelson Hartwick, widow of lumberman Major Edward Hartwick, donated the grove, which was then in size, and 8000 surrounding acres (32.4 km2) of cutover land to the state of Michigan as a memorial to the logging industry.
Salling Hansen Lumber Company heavily logged much of the property within Hartwick Pines State Park during the 1880s and 1890s. The Civilian Conservation Corps planted many of the park's trees in the 1930s as part of a massive restoration effort. Hence, this forest is known as "second growth."
On November 11, 1940, the Armistice Day Blizzard badly damaged the Hartwick Pines old-growth pine grove. of old trees were destroyed by windthrow from this and other storms, leaving behind the that remain.
Logging museum
The Hartwick Pines Logging Museum was erected by the Civilian Conservation Corps (CCC) in 1934–1935. It contains recreated exhibit rooms, photographs and artifacts of the lumber boom years of northern Michigan. The museum is located in two replica logging camp buildings and has outdoor exhibits of logging equipment and an enclosed steam-powered sawmill that is operated during summer events. The museum is administered by the Michigan Department of Natural Resources' Michigan History Museum.
Activities and amenities
The Michigan Forest Visitor Center contains an exhibit hall on the history of the forests in Michigan, an auditorium, classroom space, a bookstore operated by the non-profit Friends of Hartwick Pines, and restrooms. The visitor center has an auditorium that can seat 105 people and a nine-projector multi-image slide show. The show is approximately 14 minutes long and shares the story of logging from past until today. Programs and special events are offered throughout the year.
The state park includes a campground, day-use area, and network of four-season trails for summer hiking and winter cross-country skiing. The Old Growth Forest Trail to the pine grove is a loop long. The Old Growth Forest is an even-aged stand of pines estimated to be between 350 and 375 years old. The tallest trees are between 150 and 160 feet tall, and have a girth of more than four feet DBH (Diameter at breast height). These eastern white pine are some of the largest trees in the eastern United States. The last remaining virgin maple and beech hardwood forest in the state is at Warren Woods State Park.
There are two foot trails on the south side of M-93. The wooded Au Sable River foot trail is approximately three miles in length and takes hikers across the East Branch at two different locations. The Mertz Grade Trail winds through forest and field for approximately two miles and was named for the early logging railroad spur it shares for a portion of its distance.
There are four small lakes located within the state park. Two of the lakes were originally named Bright and Star Lake. However, there were too many Star Lakes so they settled on Bright and Glory Lake.
In the news
Although changes in drilling technology make drilling for oil and gas possible under historically nonproductive strata in northern Michigan, including sections of state forests, the state of Michigan decided in 2014 not to auction off mineral rights under Hartwick Pines.
See also
Edward E. Hartwick Memorial Building: a structure within the park.
References
External links
Hartwick Pines State Park Michigan Department of Natural Resources
Hartwick Pines State Park Map Michigan Department of Natural Resources
State parks of Michigan
Protected areas of Crawford County, Michigan
Protected areas established in 1927
1927 establishments in Michigan
Old-growth forests
Civilian Conservation Corps in Michigan
Forestry museums in the United States
Industry museums in Michigan | Hartwick Pines State Park | [
"Biology"
] | 905 | [
"Old-growth forests",
"Ecosystems"
] |
4,265,532 | https://en.wikipedia.org/wiki/List%20of%20old-growth%20forests | This is a list of areas of existing old-growth forest which include at least of old growth. Ecoregion information from "Terrestrial Ecoregions of the World".
(NB: The terms "old growth" and "virgin" may have various definitions and meanings throughout the world. See old-growth forest for more information.)
Africa
Asia
Australia
In Australia, the 1992 National Forest Policy Statement (NFPS) made specific provision for the protection of old growth forests. The NFPS initiated a process for undertaking assessments of forests for conservation values, including old growth values. A working group of state and Australian Government agencies took the NFPS definition into consideration in developing a definition that was accepted by all governments (JANIS 1997).
In 2008, only a relatively small area (15%) of Australia's forests (mostly tall, wet forests) had been assessed for old-growth values.
Of the of forest in Australia assessed for their old-growth status, (22%) is classified as old-growth. Almost half of Australia's identified old-growth forest is in NSW, mostly on public land. More than 73% of Australia's identified old-growth forests are in formal or informal nature conservation reserves.
In 2001, Western Australia became the first state in Australia to cease logging in old-growth forests.
The term "old-growth forests" is rarely used in New Zealand; instead, "The Bush" is used to refer to native forests. There are large contiguous areas of forest cover that are protected areas.
Europe
North America
Canada
United States
Central America
Caribbean
South America
See also
List of oldest trees
Old-Growth Forest Network
Notes
References
Lists of forests
Forestry-related lists | List of old-growth forests | [
"Biology"
] | 345 | [
"Old-growth forests",
"Ecosystems"
] |
4,265,632 | https://en.wikipedia.org/wiki/Ice%20protection%20system | In aeronautics, ice protection systems keep atmospheric moisture from accumulating on aircraft surfaces, such as wings, propellers, rotor blades, engine intakes, and environmental control intakes. Ice buildup can change the shape of airfoils and flight control surfaces, degrading control and handling characteristics as well as performance. An anti-icing, de-icing, or ice protection system either prevents formation of ice, or enables the aircraft to shed the ice before it becomes dangerous.
Effects of icing
Aircraft icing increases weight and drag, decreases lift, and can decrease thrust. Ice reduces engine power by blocking air intakes. When ice builds up by freezing upon impact or freezing as runoff, it changes the aerodynamics of the surface by modifying the shape and the smoothness of the surface, which increases drag and decreases wing lift or propeller thrust. Both the decrease in lift on the wing due to an altered airfoil shape and the increase in weight from the ice load will usually result in having to fly at a greater angle of attack to compensate for the lost lift and maintain altitude. This increases fuel consumption and further reduces speed, making a stall more likely to occur, causing the aircraft to lose altitude.
Ice accumulates on helicopter rotor blades and aircraft propellers causing weight and aerodynamic imbalances that are amplified due to their rotation.
Anti-ice systems installed on jet engines or turboprops help prevent airflow problems and avert the risk of serious internal engine damage from ingested ice. These concerns are most acute with turboprops, which more often have sharp turns in the intake path where ice tends to accumulate.
System types
Pneumatic deicing boots
The pneumatic boot is usually made of layers of rubber or other elastomers, with one or more air chambers between the layers. If multiple chambers are used, they are typically shaped as stripes aligned with the long direction of the boot. It is typically placed on the leading edge of an aircraft's wings and stabilizers. The chambers are rapidly inflated and deflated, either simultaneously, or in a pattern of specific chambers only. The rapid change in shape of the boot is designed to break the adhesive force between the ice and the rubber, and allow the ice to be carried away by the air flowing past the wing. However, the ice must fall away cleanly from the trailing sections of the surface, or it could re-freeze behind the protected area. Re-freezing of ice in this manner was a contributing factor to the crash of American Eagle Flight 4184.
Older pneumatic boots were thought to be subject to ice bridging. Slush could be pushed out of reach of the inflatable sections of the boot before hardening. This was resolved by speeding up the inflation/deflation cycle, and by alternating the timing of adjacent cells. Testing and case studies performed in the 1990s have demonstrated that ice bridging is not a significant concern with modern boot designs.
Pneumatic boots are appropriate for low and medium speed aircraft, without leading edge lift devices such as slats, so this system is most commonly found on smaller turboprop aircraft such as the Saab 340 and Embraer EMB 120 Brasilia. Pneumatic de-icing boots are sometimes found on other types, especially older aircraft. These are rarely used on modern jet aircraft. They were invented by B.F. Goodrich in 1923.
Fluid deicing
Sometimes called a weeping wing, running wet, or evaporative system, these systems use a deicing fluid, typically based on ethylene glycol or isopropyl alcohol, to prevent ice forming and to break up accumulated ice on critical surfaces of an aircraft.
One or two electrically-driven pumps send the fluid to proportioning units that divide the flow between areas to be protected. A second pump is used for redundancy, especially for aircraft certified for flight into known icing conditions, with additional mechanical pumps for the windshield. Fluid is forced through holes in panels on the leading edges of the wings, horizontal stabilizers, fairings, struts, engine inlets, and from a slinger-ring on the propeller and the windshield sprayer. These panels have diameter holes drilled in them, with . The system is self cleaning, and the fluid helps clean the aircraft, before it is blown away by the slipstream. The system was initially used during World War II by the British, having been developed by Tecalemit-Kilfrost-Sheepbridge Stokes (TKS).
Advantages of fluid systems are mechanical simplicity and minimal airflow disruption from the minuscule holes; this made the systems popular in older business jets. Disadvantages are greater maintenance requirements than pneumatic boots, the weight of potentially unneeded fluid aboard the aircraft, the finite supply of fluid when it is needed, and the unpredictable need to refill the fluid, which complicates en route stops.
Bleed air
Bleed air systems are used by most large aircraft with jet engines or turboprops. Hot air is "bled" off one or more engines' compressor sections into tubes routed through wings, tail surfaces, and engine inlets. Spent air is exhausted through holes in the wings' undersides.
A disadvantage of these systems is that supplying an adequate amount of bleed air can negatively affect engine performance. Higher-than-normal power settings are often required during cruise or descent, particularly with one or more inoperative engines. More significantly, use of bleed air affects engine temperature limits and often necessitates reduced power settings during climb, which may cause a substantial loss of climb performance with particularly critical consequences if an engine were to fail. This latter concern has resulted in bleed air systems being uncommon in small turbine aircraft, although they have been successfully implemented on some small aircraft such as the Cessna CitationJet.
Electro-thermal
Electro-thermal systems use heating coils (much like a low output stove element) buried in the airframe structure to generate heat when a current is applied. The heat can be generated continuously, or intermittently.
The Boeing 787 Dreamliner uses electro-thermal ice protection. In this case the heating coils are embedded within the composite wing structure. Boeing claims the system uses half the energy of engine fed bleed-air systems, and reduces drag and noise.
Etched foil heating coils can be bonded to the inside of metal aircraft skins to lower power use compared to embedded circuits as they operate at higher power densities. For general aviation, ThermaWing uses a flexible, electrically conductive, graphite foil attached to a wing's leading edge. Electric heaters heat the foil which melts ice.
Small wires or other conductive materials can be embedded in the windscreen to heat the windscreen. Pilots can turn on the electric heater to provide sufficient heat to prevent the formation of ice on the windscreen. However, windscreen electric heaters may only be used in flight, as they can overheat the windscreen. They can also cause compass deviation errors by as much as 40°.
One proposal used carbon nanotubes formed into thin filaments which are spun into a 10 micron-thick film. The film is a poor electrical conductor, due to gaps between the nanotubes. Instead, current causes a rapid rise in temperature, heating up twice as fast as nichrome, the heating element of choice for in-flight de-icing, while using half the energy at one ten-thousandth the weight. Sufficient material to cover the wings of a 747 weighs and costs roughly 1% of nichrome. Aerogel heaters have also been suggested, which could be left on continuously at low power.
Electro-mechanical
Electro-mechanical expulsion deicing systems (EMEDS) use a percussive force initiated by actuators inside the structure which induce a shock wave in the surface to be cleared.
Hybrid systems have also been developed that combine the EMEDS with heating elements, where a heater prevents ice accumulation on the leading edge of the airfoil and the EMED system removes accumulations aft of the heated portion of the airfoil.
Passive (icephobic coatings)
Passive systems employ icephobic surfaces. Icephobicity is analogous to hydrophobicity and describes a material property that is resistant to icing. The term is not well defined but generally includes three properties: low adhesion between ice and the surface, prevention of ice formation, and a repellent effect on supercooled droplets. Icephobicity requires special material properties but is not identical to hydrophobicity.
To minimize accretion, researchers are seeking icephobic materials. Candidates include carbon nanotubes and slippery liquid infused porous surfaces (SLIPS) which repel water when it forms into ice.
See also
Atmospheric icing
Icing conditions
List of aircraft icing accidents and incidents
References
Bibliography
External links
SAE paper on Electro-Thermal Ice Protection by Strehlow, R. and Moser, R.
Ice in transportation | Ice protection system | [
"Physics"
] | 1,823 | [
"Physical systems",
"Transport",
"Ice in transportation"
] |
4,265,659 | https://en.wikipedia.org/wiki/Julius%20Wess | Julius Erich Wess (5 December 1934 – 8 August 2007) was an Austrian theoretical physicist noted as the co-inventor of the Wess–Zumino model and Wess–Zumino–Witten model in the field of supersymmetry and conformal field theory. He was also a recipient of the Max Planck medal, the Wigner medal, the Gottfried Wilhelm Leibniz Prize, the Heineman Prize, and of several honorary doctorates.
Life and work
Wess was born in Oberwölz Stadt, a small town in the Austrian state of Styria. In 1957 he received his Ph.D. in Vienna, where he was a student of Hans Thirring. His Ph.D. examiner was the acclaimed quantum mechanics physicist Erwin Schrödinger. After working at CERN in Switzerland and at the Courant Institute of New York University, United States, he became a professor at the University of Karlsruhe. In later life, Wess was professor at the Ludwig Maximilian University of Munich. After his retirement he worked at DESY in Hamburg.
His doctoral students include Hermann Nicolai.
Julius Wess died at the age of 72 in Hamburg, following a stroke.
Publications
Scientific articles authored by Julius Wess recorded in INSPIRE-HEP.
References
Further reading
Julius Wess Nachruf
Die Fakultät für Physik trauert um ihren Kollegen Prof. Dr. Julius Wess
Wess Nachruf HU Berlin
1934 births
2007 deaths
People from Murau District
Theoretical physicists
Mathematical physicists
20th-century Austrian physicists
20th-century German physicists
Academic staff of the Ludwig Maximilian University of Munich
Gottfried Wilhelm Leibniz Prize winners
Members of the Austrian Academy of Sciences
Members of the European Academy of Sciences and Arts
Winners of the Max Planck Medal
People associated with CERN | Julius Wess | [
"Physics"
] | 375 | [
"Theoretical physics",
"Theoretical physicists"
] |
4,265,809 | https://en.wikipedia.org/wiki/Perilla%20ketone | Perilla ketone is a natural terpenoid that consists of a furan ring with a six-carbon side chain containing a ketone functional group. It is a colorless oil that is sensitive to oxygen, becoming colored upon standing. The ketone was identified in 1943 by Sebe as the main component of the essential oil of Perilla frutescens. Perilla ketone is present in the leaves and seeds of purple mint (Perilla frutescens), which is toxic to some animals. When cattle and horses consume purple mint when grazing in fields in which it grows, the perilla ketone causes pulmonary edema leading to a condition sometimes called perilla mint toxicosis.
Synthesis
Perilla ketone was synthesized in 1957 by Matsuura from 3-furoyl chloride and an organocadmium compound similar to the Gilman reagent made from an isoamyl Grignard reagent and cadmium chloride. Perilla ketone has also been prepared in 74% yield via the Stille reaction from a 3-furyl-organotin compound and isocaproyl chloride in tetrahydrofuran solvent.
See also
Perillene
References
3-Furyl compounds
Ketones
Monoterpenes | Perilla ketone | [
"Chemistry"
] | 255 | [
"Ketones",
"Functional groups"
] |
4,265,878 | https://en.wikipedia.org/wiki/Groff%20%28software%29 | groff (also called GNU troff) is a typesetting system that creates formatted output when given plain text mixed with formatting commands. It is the GNU replacement for the troff and nroff text formatters, which were both developed from the original roff.
Groff contains a large number of helper programs, preprocessors, and postprocessors including eqn, tbl, pic and soelim. There are also several macro packages included that duplicate, expand on the capabilities of, or outright replace the standard troff macro packages.
Development of new groff features is active, and groff is an important part of free, open source, and UNIX-derived operating systems such as Linux and 4.4BSD derivatives — notably because troff macros are used to create man pages, the standard form of documentation on Unix and Unix-like systems.
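The "plain text mixed with formatting commands" model can be illustrated with a tiny man page. The sketch below is not from the article; it assumes the groff binary is installed and on the PATH, and the page name, content, and Python wrapper are invented purely for illustration. It pipes roff source written with the man macro package through groff to produce plain-text output, which is essentially what the man command does on many Unix-like systems.

```python
# Illustrative sketch (assumption: groff is installed and on PATH).
# Renders a minimal, made-up man page from roff source to plain ASCII text.
import subprocess

MAN_SOURCE = """\
.TH HELLO 1 "January 2024" "hello 1.0" "User Commands"
.SH NAME
hello \\- print a friendly greeting
.SH SYNOPSIS
.B hello
.RI [ name ]
.SH DESCRIPTION
Prints a greeting to standard output.
"""

# -man loads the man macro package; -Tascii selects the plain-text output device.
result = subprocess.run(
    ["groff", "-man", "-Tascii"],
    input=MAN_SOURCE,
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)
```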
OpenBSD has replaced groff with mandoc in the base install, since their 4.9 release, as has macOS Ventura.
History
groff is an original implementation written primarily in C++ by James Clark and is modeled after ditroff, including many extensions. The first version, 0.3.1, was released June 1990. The first stable version, 1.04, was announced in November 1991. groff was developed as free software to provide an easily obtained replacement for the standard AT&T troff/nroff package, which at the time was proprietary, and was not always available even on branded UNIX systems. In 1999, Werner Lemberg and Ted Harding took over maintenance of groff.
See also
TeX
Desktop publishing
Notes
References
External links
groff mailing list archive (searchable)
Groff Forum, hosted by Nabble, archiving the groff mailing list into a searchable forum (sadly none of the emails are visible today).
gives background and examples of troff, including the GNU roff implementation.
Home page of mom macros
GNU Project software
groff
Free typesetting software | Groff (software) | [
"Mathematics"
] | 415 | [
"Troff",
"Mathematical markup languages"
] |
4,265,892 | https://en.wikipedia.org/wiki/Method%20of%20matched%20asymptotic%20expansions | In mathematics, the method of matched asymptotic expansions is a common approach to finding an accurate approximation to the solution to an equation, or system of equations. It is particularly used when solving singularly perturbed differential equations. It involves finding several different approximate solutions, each of which is valid (i.e. accurate) for part of the range of the independent variable, and then combining these different solutions together to give a single approximate solution that is valid for the whole range of values of the independent variable. In the Russian literature, these methods were known under the name of "intermediate asymptotics" and were introduced in the work of Yakov Zeldovich and Grigory Barenblatt.
Method overview
In a large class of singularly perturbed problems, the domain may be divided into two or more subdomains. In one of these, often the largest, the solution is accurately approximated by an asymptotic series found by treating the problem as a regular perturbation (i.e. by setting a relatively small parameter to zero). The other subdomains consist of one or more small regions in which that approximation is inaccurate, generally because the perturbation terms in the problem are not negligible there. These areas are referred to as transition layers in general, and specifically as boundary layers or interior layers depending on whether they occur at the domain boundary (as is the usual case in applications) or inside the domain, respectively.
An approximation in the form of an asymptotic series is obtained in the transition layer(s) by treating that part of the domain as a separate perturbation problem. This approximation is called the inner solution, and the other is the outer solution, named for their relationship to the transition layer(s). The outer and inner solutions are then combined through a process called "matching" in such a way that an approximate solution for the whole domain is obtained.
A simple example
Consider the boundary value problem
ε d²y/dt² + (1 + ε) dy/dt + y = 0,
where y is a function of the independent time variable t, which ranges from 0 to 1, the boundary conditions are y(0) = 0 and y(1) = 1, and ε is a small parameter, such that 0 < ε ≪ 1.
Outer solution, valid for t = O(1)
Since ε is very small, our first approach is to treat the equation as a regular perturbation problem, i.e. make the approximation ε = 0, and hence find the solution to the problem dy/dt + y = 0.
Alternatively, consider that when t and y are both of size O(1), the four terms on the left hand side of the original equation (ε d²y/dt², dy/dt, ε dy/dt and y) are respectively of sizes O(ε), O(1), O(ε) and O(1). The leading-order balance on this timescale, valid in the distinguished limit ε → 0, is therefore given by the second and fourth terms, i.e., dy/dt + y = 0.
This has solution y = A e^(−t)
for some constant A. Applying the boundary condition y(0) = 0, we would have A = 0; applying the boundary condition y(1) = 1, we would have A = e. It is therefore impossible to satisfy both boundary conditions, so ε = 0 is not a valid approximation to make across the whole of the domain (i.e. this is a singular perturbation problem). From this we infer that there must be a boundary layer at one of the endpoints of the domain where ε needs to be included. This region will be where ε is no longer negligible compared to the independent variable t, i.e. t and ε are of comparable size, i.e. the boundary layer is adjacent to t = 0. Therefore, the other boundary condition y(1) = 1 applies in this outer region, so A = e, i.e. y_O(t) = e^(1−t) is an accurate approximate solution to the original boundary value problem in this outer region. It is the leading-order solution.
Inner solution, valid for t = O(ε)
In the inner region, t and ε are both tiny, but of comparable size, so define the new O(1) time variable τ = t/ε. Rescale the original boundary value problem by replacing t with ετ, and the problem becomes
(1/ε) d²y/dτ² + (1 + ε)(1/ε) dy/dτ + y = 0,
which, after multiplying by ε and taking ε → 0, is d²y/dτ² + dy/dτ = 0.
Alternatively, consider that when t has reduced to size O(ε), then y is still of size O(1) (using the expression for y_O), and so the four terms on the left hand side of the original equation are respectively of sizes O(1/ε), O(1/ε), O(1) and O(1). The leading-order balance on this timescale, valid in the distinguished limit ε → 0, is therefore given by the first and second terms, i.e. d²y/dτ² + dy/dτ = 0.
This has solution y = B − C e^(−τ)
for some constants B and C. Since y(0) = 0 applies in this inner region, this gives C = B, so an accurate approximate solution to the original boundary value problem in this inner region (it is the leading-order solution) is
y_I(τ) = B(1 − e^(−τ)).
Matching
We use matching to find the value of the constant B. The idea of matching is that the inner and outer solutions should agree for values of t in an intermediate (or overlap) region, i.e. where ε ≪ t ≪ 1. We need the outer limit of the inner solution to match the inner limit of the outer solution, i.e.,
lim (τ → ∞) y_I(τ) = lim (t → 0) y_O(t),
which gives B = e.
The above problem is the simplest of the simple problems dealing with matched asymptotic expansions. One can immediately calculate that e^(1−t) is the entire asymptotic series for the outer region, whereas the correction to the inner solution enters at O(ε) and its constant of integration must be obtained from inner-outer matching.
Notice that the intuitive idea of matching by taking limits, i.e. lim (τ → ∞) y_I(τ) = lim (t → 0) y_O(t), does not apply at this level. This is simply because the O(ε) correction to the inner solution contains a term proportional to τ, which does not converge to a limit as τ → ∞. The methods to follow in these types of cases are either (a) the method of an intermediate variable or (b) the Van-Dyke matching rule. The former method is cumbersome but always works, whereas the Van-Dyke matching rule is easy to implement but of more limited applicability. A concrete boundary value problem having all the essential ingredients is the following.
Consider the boundary value problem
The conventional outer expansion gives , where must be obtained from matching.
The problem has boundary layers both on the left and on the right. The left boundary layer near has a thickness whereas the right boundary layer near has thickness . Let us first calculate the solution on the left boundary layer by rescaling , then the differential equation to satisfy on the left is
and accordingly, we assume an expansion .
The inhomogeneous condition on the left provides us the reason to start the expansion at . The leading order solution is .
This with van-Dyke matching gives .
Let us now calculate the solution on the right rescaling , then the differential equation to satisfy on the right is
and accordingly, we assume an expansion
The inhomogeneous condition on the right provides us the reason to start the expansion at . The leading order solution is . This with van-Dyke matching gives . Proceeding in a similar fashion, if we calculate the higher-order corrections, we get the solutions as
Composite solution
To obtain our final, matched, composite solution, valid on the whole domain, one popular method is the uniform method. In this method, we add the inner and outer approximations and subtract their overlapping value, which would otherwise be counted twice. The overlapping value is the outer limit of the inner boundary layer solution, and the inner limit of the outer solution; these limits were above found to equal e. Therefore, the final approximate solution to this boundary value problem is
y(t) ≈ y_I + y_O − e = e(1 − e^(−t/ε)) + e^(1−t) − e = e^(1−t) − e^(1−t/ε).
Note that this expression correctly reduces to the expressions for y_I and y_O when t is O(ε) and O(1), respectively.
Accuracy
This final solution satisfies the problem's original differential equation (shown by substituting it and its derivatives into the original equation). Also, the boundary conditions produced by this final solution match the values given in the problem, up to a constant multiple. This implies, due to the uniqueness of the solution, that the matched asymptotic solution is identical to the exact solution up to a constant multiple. This is not necessarily always the case; in general, any remaining terms should go to zero uniformly as ε → 0.
Not only does our solution successfully approximately solve the problem at hand, it closely approximates the problem's exact solution. It happens that this particular problem is easily found to have exact solution
y(t) = (e^(−t) − e^(−t/ε)) / (e^(−1) − e^(−1/ε)),
which has the same form as the approximate solution, differing only by the multiplying constant. The approximate solution is the first term in a binomial expansion of the exact solution in powers of e^(1 − 1/ε).
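As a concrete check of the accuracy discussion above, the leading-order composite solution can be compared numerically with the exact solution. The sketch below is illustrative only and assumes the example as reconstructed above (ε y″ + (1 + ε) y′ + y = 0 with y(0) = 0, y(1) = 1); the function and variable names are arbitrary.

```python
# Illustrative check (assumes the example problem given above):
#   composite:  y_comp(t)  = e^(1-t) - e^(1-t/eps)
#   exact:      y_exact(t) = (e^(-t) - e^(-t/eps)) / (e^(-1) - e^(-1/eps))
import math

def y_comp(t, eps):
    return math.exp(1.0 - t) - math.exp(1.0 - t / eps)

def y_exact(t, eps):
    return (math.exp(-t) - math.exp(-t / eps)) / (math.exp(-1.0) - math.exp(-1.0 / eps))

for eps in (0.2, 0.1, 0.05):
    err = max(abs(y_comp(i / 1000.0, eps) - y_exact(i / 1000.0, eps)) for i in range(1001))
    # The two expressions differ only by the multiplying constant noted in the
    # text, so the discrepancy shrinks exponentially (like e^(1 - 1/eps)) as
    # eps decreases, faster than the generic O(eps) of a leading-order match.
    print(f"eps = {eps:5.2f}: max |y_comp - y_exact| on [0, 1] = {err:.2e}")
```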
Location of boundary layer
Conveniently, we can see that the boundary layer, where dy/dt and d²y/dt² are large, is near t = 0, as we supposed earlier. If we had supposed it to be at the other endpoint and proceeded by making the rescaling τ = (1 − t)/ε, we would have found it impossible to satisfy the resulting matching condition. For many problems, this kind of trial and error is the only way to determine the true location of the boundary layer.
Harder problems
The problem above is a simple example because it is a single equation with only one dependent variable, and there is one boundary layer in the solution. Harder problems may contain several co-dependent variables in a system of several equations, and/or have several boundary and/or interior layers in the solution.
It is often desirable to find more terms in the asymptotic expansions of both the outer and the inner solutions. The appropriate form of these expansions is not always clear: while a power-series expansion in ε may work, sometimes the appropriate form involves fractional powers of ε, functions such as ε log ε, et cetera. As in the above example, we will obtain outer and inner expansions with some coefficients which must be determined by matching.
Second-order differential equations
Schrödinger-like second-order differential equations
A method of matched asymptotic expansions - with matching of solutions in the common domain of validity - has been developed and used extensively by Dingle and Müller-Kirsten for the derivation of asymptotic expansions of the solutions and characteristic numbers (band boundaries) of Schrödinger-like second-order differential equations with periodic potentials - in particular for the Mathieu equation (best example), Lamé and ellipsoidal wave equations, oblate and prolate spheroidal wave equations, and equations with anharmonic potentials.
Convection–diffusion equations
Methods of matched asymptotic expansions have been developed to find approximate solutions to the Smoluchowski convection–diffusion equation, which is a singularly perturbed second-order differential equation. The problem has been studied particularly in the context of colloid particles in linear flow fields, where the variable is given by the pair distribution function around a test particle. In the limit of low Péclet number, the convection–diffusion equation also presents a singularity at infinite distance (where normally the far-field boundary condition should be placed) due to the flow field being linear in the interparticle separation. This problem can be circumvented with a spatial Fourier transform as shown by Jan Dhont.
A different approach to solving this problem was developed by Alessio Zaccone and coworkers and consists in placing the boundary condition right at the boundary layer distance, upon assuming (in a first-order approximation) a constant value of the pair distribution function in the outer layer due to convection being dominant there. This leads to an approximate theory for the encounter rate of two interacting colloid particles in a linear flow field in good agreement with the full numerical solution.
When the Péclet number is significantly larger than one, the singularity at infinite separation no longer occurs and the method of matched asymptotics can be applied to construct the full solution for the pair distribution function across the entire domain.
See also
Asymptotic analysis
Multiple-scale analysis
Activation energy asymptotics
References
Differential equations
Asymptotic analysis | Method of matched asymptotic expansions | [
"Mathematics"
] | 2,349 | [
"Mathematical analysis",
"Mathematical objects",
"Differential equations",
"Equations",
"Asymptotic analysis"
] |
4,266,054 | https://en.wikipedia.org/wiki/NGC%2057 | NGC 57 is an elliptical galaxy in the constellation Pisces. It was discovered by German-British astronomer William Herschel on 8 October 1784.
Supernovae
Three supernovae have been observed in NGC 57:
SN 2010dq (type Ia, mag. 17) was discovered by Kōichi Itagaki on 3 June 2010. It was 17" west and 1" south of the center of NGC 57 at coordinates , .
SN 2011fp (type Ia, mag. 17.9) was discovered by Kōichi Itagaki on 29 August 2011.
SN 2020mza (mag 19.9) was discovered on 18 June 2020.
See also
Elliptical galaxy
List of NGC objects (1–1000)
Pisces (constellation)
References
External links
Discovery image of SN 2010dq (2010-06-03) / Wikisky DSS2 zoom-in of same region
Elliptical galaxies
Pisces (constellation)
0057
00145
001037
17841008
+03-01-031
Discoveries by William Herschel | NGC 57 | [
"Astronomy"
] | 213 | [
"Pisces (constellation)",
"Constellations"
] |
4,266,335 | https://en.wikipedia.org/wiki/Thirty%20Meter%20Telescope | The Thirty Meter Telescope (TMT) is a planned extremely large telescope (ELT) proposed to be built on Mauna Kea, on the island of Hawai'i. The TMT would become the largest visible-light telescope on Mauna Kea.
Scientists have been considering ELTs since the mid 1980s. In 2000, astronomers considered the possibility of a telescope with a light-gathering mirror larger than in diameter, using either small segments that create one large mirror, or a grouping of larger mirrors working as one unit. The US National Academy of Sciences recommended a telescope be the focus of U.S. interests, seeking to see it built within the decade.
Scientists at the University of California, Santa Cruz and Caltech began development of a design that would eventually become the TMT, consisting of a 492-segment primary mirror with nine times the power of the Keck Observatory. Due to its light-gathering power and the optimal observing conditions which exist atop Mauna Kea, the TMT would enable astronomers to conduct research which is infeasible with current instruments. The TMT is designed for near-ultraviolet to mid-infrared (0.31 to 28 μm wavelengths) observations, featuring adaptive optics to assist in correcting image blur. The TMT will be at the highest altitude of all the proposed ELTs. The telescope has government-level support from several nations.
The proposed location on Mauna Kea has been controversial among the Native Hawaiian community. Demonstrations attracted press coverage after October 2014, when construction was temporarily halted due to a blockade of the roadway. When construction of the telescope was set to resume, construction was blocked by further protests each time. In 2015, Governor David Ige announced several changes to the management of Mauna Kea, including a requirement that the TMT's site will be the last new site on Mauna Kea to be developed for a telescope. The Board of Land and Natural Resources approved the TMT project, but the Supreme Court of Hawaii invalidated the building permits in December 2015, ruling that the board had not followed due process. In October 2018, the Court approved the resumption of construction; however, no further construction has occurred due to continued opposition. In July 2023 a new state appointed oversight board, which includes Native Hawaiian community representatives and cultural practitioners, began a five-year transition to assume management over Mauna Kea and its telescope sites, which may be a path forward. In April 2024, TMT's project manager apologized for the organization having "contributed to division in the community", and stated that TMT's approach to construction in Hawai'i is "very different now from TMT in 2019."
An alternate site for the Thirty Meter Telescope has been proposed for La Palma, Canary Islands, Spain, but is considered less scientifically favorable by astronomers. There were no specific timelines or schedules regarding new start or completion dates.
Background
In 2000, astronomers began considering the potential of telescopes larger than in diameter. The technology to build a mirror larger than does not exist; instead scientists considered two methods: either segmented smaller mirrors as used in the Keck Observatory, or a group of 8-meter (26') mirrors mounted to form a single unit. The US National Academy of Sciences made a suggestion that a telescope should be the focus of US astronomy interests and recommended that it be built within the decade.
The University of California, along with Caltech, began development of a 30-meter telescope that same year. The California Extremely Large Telescope (CELT) began development, along with the Giant Segmented Mirror Telescope (GSMT), and the Very Large Optical Telescope (VLOT). These studies would eventually define the Thirty Meter Telescope. The TMT would have nine times the collecting area of the older Keck telescope using slightly smaller mirror segments in a vastly larger group. Another telescope of a large diameter in the works is the Extremely Large Telescope (ELT) being built in northern Chile.
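The factor of nine quoted above follows from the fact that a telescope's light-gathering power scales with the square of its primary-mirror diameter. The back-of-the-envelope sketch below is illustrative and not from the article; the 10 m figure for a Keck-class aperture and the neglect of segment gaps and central obstruction are assumptions.

```python
# Rough scaling check (assumptions: 30 m TMT aperture, 10 m Keck-class
# aperture, ideal circular mirrors with no gaps or obstruction).
import math

def collecting_area(diameter_m: float) -> float:
    return math.pi * (diameter_m / 2.0) ** 2

tmt, keck = 30.0, 10.0
print(f"TMT  ~ {collecting_area(tmt):6.1f} m^2")
print(f"Keck ~ {collecting_area(keck):6.1f} m^2")
print(f"ratio ~ {collecting_area(tmt) / collecting_area(keck):.1f}x")  # ~9x, as stated above
```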
The telescope is designed for observations from near-ultraviolet to mid-infrared (0.31 to 28 μm wavelengths). In addition, its adaptive optics system will help correct for image blur caused by the atmosphere of the Earth, helping it to reach the potential of such a large mirror. Among existing and planned extremely large telescopes, the TMT will have the highest elevation and will be the second-largest telescope once the ELT is built. Both use segments of small hexagonal mirrors—a design vastly different from the large mirrors of the Large Binocular Telescope (LBT) or the Giant Magellan Telescope (GMT). Each night, the TMT would collect 90 terabytes of data. The TMT has government-level support from the following countries: Canada, Japan and India. The United States is also contributing some funding, but less than the formal partnership.
Proposed locations
In cooperation with AURA, the TMT project completed a multi-year evaluation of six sites:
Roque de los Muchachos Observatory, La Palma, Canary Islands, Spain
Cerro Armazones, Antofagasta Region, Republic of Chile
Cerro Tolanchar, Antofagasta Region, Republic of Chile
Cerro Tolar, Antofagasta Region, Republic of Chile
Mauna Kea, Hawaii, United States (This site was chosen and approval was granted in April 2013)
San Pedro Mártir, Baja California, Mexico
Hanle, Ladakh, India
The TMT Observatory Corporation board of directors narrowed the list to two sites, one in each hemisphere, for further consideration: Cerro Armazones in Chile's Atacama Desert and Mauna Kea on Hawaii Island. On July 21, 2009, the TMT board announced Mauna Kea as the preferred site. The final TMT site selection decision was based on a combination of scientific, financial, and political criteria. Chile is also where the European Southern Observatory is building the ELT. If both next-generation telescopes were in the same hemisphere, there would be many astronomical objects that neither could observe. The telescope was given approval by the state Board of Land and Natural Resources in April 2013.
There has been opposition to the building of the telescope, based on potential disruption to the fragile alpine environment of Mauna Kea due to construction, traffic, and noise, which is a concern for the habitat of several species, and because Mauna Kea is a sacred site for the Native Hawaiian culture. The Hawaii Board of Land and Natural Resources conditionally approved the Mauna Kea site for the TMT in February 2011. The approval has been challenged; however, the Board officially approved the site following a hearing on February 12, 2013.
Partnerships and funding
The Gordon and Betty Moore Foundation has committed US$200 million for construction. Caltech and the University of California have committed an additional US$50 million each. Japan, which has its own large telescope at Mauna Kea, the Subaru, is also a partner.
In 2008, the National Astronomical Observatory of Japan (NAOJ) joined TMT as a collaborator institution. The following year, the telescope cost was estimated to be $970 million to $1.4 billion. That same year, the National Astronomical Observatories of the Chinese Academy of Sciences (NAOC) joined TMT as an observer. The observer status is the first step in becoming a full partner in the construction of the TMT and participating in the engineering development and scientific use of the observatory. By 2024, China was not a partner in TMT.
In 2010, a consortium of Indian Astronomy Research Institutes (IIA, IUCAA and ARIES) joined TMT as an observer, subject to approval of funding from the Indian government. Two years later, India and China became partners with representatives on the TMT board. Both countries agreed to share the telescope construction costs, expected to top $1 billion. India became a full member of the TMT consortium in 2014. In 2019 the India-based company Larsen & Toubro (L&T) were awarded the contract to build the segment support assembly (SSA), which "are complex optomechanical sub-assemblies on which each hexagonal mirror of the 30-metre primary mirror, the heart of the telescope, is mounted".
The IndiaTMT Optics Fabricating Facility (ITOFF) will be constructed at the Indian Institute of Astrophysics campus in the city of Hosakote, near Bengaluru. India will supply 80 of the 492 mirror segments for the TMT. A.N. Ramaprakash, the associate programme director of India-TMT, stated: "All sensors, actuators and SSAs for the whole telescope are being developed and manufactured in India, which will be put together in building the heart of TMT", adding: "Since it is for the first time that India is involved in such a technically demanding astronomy project, it is also an opportunity to put to test the abilities of Indian scientists and industries, alike."
The continued financial commitment from the Canadian government had been in doubt due to economic pressures. In April 2015, Prime Minister Stephen Harper announced that Canada would commit $243.5 million over a period of 10 years. The telescope's unique enclosure was designed by Dynamic Structures Ltd. in British Columbia. In a 2019 online petition, a group of Canadian academics called on succeeding Canadian Prime Minister Justin Trudeau, together with Industry Minister Navdeep Bains and Science Minister Kirsty Duncan, to divest Canadian funding from the project. The Canadian astronomy community has named TMT its top facility priority for the decade ahead.
Design
The TMT would be housed in a general-purpose observatory capable of investigating a broad range of astrophysical problems. The dome will be comparable in height to an eighteen-storey building, and the telescope structure is projected to occupy a site within a larger observatory complex.
Telescope
The centerpiece of the TMT Observatory is to be a Ritchey-Chrétien telescope with a 30-metre-diameter primary mirror. This mirror is to be segmented, consisting of 492 smaller individual hexagonal mirrors. The shape of each segment, as well as its position relative to neighboring segments, will be controlled actively.
A secondary mirror is to produce an unobstructed field-of-view of 20 arcminutes in diameter with a focal ratio of 15. A flat tertiary mirror is to direct the light path to science instruments mounted on large Nasmyth platforms. The telescope is to have an alt-azimuth mount. Target acquisition and system configuration need to be achieved within five minutes, or ten minutes if a different instrument must be selected. To achieve these time limits, the TMT will use a software architecture linked by a service-based communications system. The telescope, optics, and instruments will have a large combined moving mass. The design of the facility descends from the Keck Observatory.
Adaptive optics
Integral to the observatory is a Multi-Conjugate Adaptive Optics (MCAO) system. This MCAO system will measure atmospheric turbulence by observing a combination of natural (real) stars and artificial laser guide stars. Based on these measurements, a pair of deformable mirrors will be adjusted many times per second to correct optical wave-front distortions caused by the intervening turbulence.
This system will produce diffraction-limited images over a 30-arc-second diameter field-of-view, which means that the core of the point spread function will have a size of 0.015 arc-second at a wavelength of 2.2 micrometers, almost ten times better than the Hubble Space Telescope.
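This image size is consistent with the diffraction limit of a 30-metre aperture; the following is an illustrative back-of-the-envelope check rather than a figure taken from the project:
\theta \approx \frac{\lambda}{D} = \frac{2.2\times10^{-6}\ \mathrm{m}}{30\ \mathrm{m}} \approx 7.3\times10^{-8}\ \mathrm{rad} \approx 0.015\ \text{arc-second}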
Scientific instrumentation
Early-light capabilities
Three instruments are planned to be available for scientific observations:
Wide Field Optical Spectrometer (WFOS) provides seeing-limited imaging and spectroscopy at optical wavelengths extending down to the ultraviolet (0.3–1.0 μm) over a 40-square-arc-minute field of view. The TMT will use precision-cut focal-plane masks to enable long-slit observations of individual objects as well as short-slit observations of hundreds of different objects at the same time. The spectrometer will use natural (uncorrected) seeing images.
Infrared Imaging Spectrometer (IRIS) mounted on the observatory MCAO system, capable of diffraction-limited imaging and integral-field spectroscopy at near-infrared wavelengths (0.8–2.5 μm). Principal investigators are James Larkin of UCLA and Anna Moore of Caltech. Project scientist is Shelley Wright of UC San Diego.
Infrared Multi-object Spectrometer (IRMS) allowing close to diffraction-limited imaging and slit spectroscopy over a 2 arc-minute diameter field-of-view at near-infrared wavelengths (0.8–2.5 μm).
Approval process and protests
In 2008, the TMT corporation selected two semi-finalists for further study, Mauna Kea and Cerro Armazones. In July 2009, Mauna Kea was selected. Once TMT selected Mauna Kea, the project began a regulatory and community process for approval. Mauna Kea is ranked as one of the best sites on Earth for telescope viewing and is home to 13 other telescopes built at the summit of the mountain, within the Mauna Kea Observatories grounds. Telescopes generate money for the Big Island, with millions of dollars in jobs and subsidies gained by the state. The TMT would be one of the most expensive telescopes ever created.
However, the proposed construction of the TMT on Mauna Kea sparked protests and demonstrations across the state of Hawaii. Mauna Kea is the most sacred mountain in Hawaiian culture as well as conservation land held in trust by the state of Hawaii.
2010-2014: Initial approval, permit and contested case hearing
In 2010 Governor Linda Lingle of the State of Hawaii signed off on an environmental study after 14 community meetings. The BLNR held hearings on December 2 and December 3, 2010, on the application for a permit.
On February 25, 2011, the board granted the permits after multiple public hearings. This approval had conditions, in particular, that a hearing about contesting the approval be heard. A contested case hearing was held in August 2011, which led to a judgment by the hearing officer for approval in November 2012. The telescope was given approval by the state Board of Land and Natural Resources in April 2013. This process was challenged in court with a lower court ruling in May 2014. The Intermediate Court of Appeals of the State of Hawaii declined to hear an appeal regarding the permit until the Hawaii Department of Land and Natural Resources first issued a decision from the contested case hearing that could then be appealed to the court.
2014-2015: First blockade, construction halts, State Supreme Court invalidates permit
The dedication and ground-breaking ceremony was held, but interrupted by protesters on October 7, 2014. The project became the focal point of escalating political conflict, police arrests and continued litigation over the proper use of conservation lands. Native Hawaiian cultural practice and religious rights became central to the opposition, with concerns over the lack of meaningful dialogue during the permitting process. In late March 2015, demonstrators again halted the construction crews. On April 2, 2015, about 300 protesters gathered on Mauna Kea, some of them trying to block the access road to the summit; 23 arrests were made. Once the access road to the summit was cleared by the police, about 40 to 50 protesters began following the heavily laden and slow-moving construction trucks to the summit construction site.
On April 7, 2015, the construction was halted for one week at the request of Hawaii state governor David Ige, after the protest on Mauna Kea continued. Project manager Gary Sanders stated that TMT agreed to the one-week stop for continued dialogue; Kealoha Pisciotta, president of Mauna Kea Anaina Hou, one of the organizations that have challenged the TMT in court, viewed the development as positive but said opposition to the project would continue. On April 8, 2015, Governor Ige announced that the project was being temporarily postponed until at least April 20, 2015. Construction was set to begin again on June 24, though hundreds of protesters gathered on that day, blocking access to the construction site for the TMT. Some protesters camped on the access road to the site, while others rolled large rocks onto the road. The actions resulted in 11 arrests.
The TMT company chairman stated: "T.M.T. will follow the process set forth by the state." A revised permit was approved on September 28, 2017, by the Hawaii Board of Land and Natural Resources.
On December 2, 2015, the Hawaii State Supreme Court ruled the 2011 permit from the State of Hawaii Board of Land and Natural Resources (BLNR) was invalid ruling that due process was not followed when the Board approved the permit before the contested case hearing. The high court stated: "BLNR put the cart before the horse when it approved the permit before the contested case hearing," and "Once the permit was granted, Appellants were denied the most basic element of procedural due process – an opportunity to be heard at a meaningful time and in a meaningful manner. Our Constitution demands more".
2017-2019: BLNR hearings, Court validates revised permit
In March 2017, the BLNR hearing officer, retired judge Riki May Amano, finished six months of hearings in Hilo, Hawaii, taking 44 days of testimony from 71 witnesses.
On July 26, 2017, Amano filed her recommendation that the Land Board grant the construction permit.
On September 28, 2017, the BLNR, acting on Amano's report, approved, by a vote of 5-2, a Conservation District Use Permit (CDUP) for the TMT. Numerous conditions, including the removal of three existing telescopes and an assertion that the TMT is to be the last telescope on the mountain, were attached to the permit.
On October 30, 2018, the Supreme Court of Hawaii ruled 4-1 that the revised permit was acceptable, allowing construction to proceed. On July 10, 2019, Hawaii Gov. David Ige and the Thirty Meter Telescope International Observatory jointly announced that construction would begin the week of July 15, 2019.
2019 blockade and aftermath
On July 15, 2019, renewed protests blocked the access road, again preventing construction from commencing. On July 17, 38 protesters were arrested, all of them kupuna (elders), as the blockade of the access road continued. The blockade lasted four weeks and shut down all 12 observatories on Mauna Kea, the longest shutdown in the 50-year history of the observatories. The full shutdown ended when state officials brokered a deal that included building a new road around the campsite of the demonstrations and providing a complete list of vehicles accessing the road to show they were not associated with the TMT. The protests were labeled a fight for indigenous rights and a field-defining moment for astronomy. While there is both native and non-native Hawaiian support for the TMT, a "substantial percentage of the native Hawaiian population" oppose the construction and see the proposal itself as a continued disregard for their basic rights.
The 50 years of protests against the use of Mauna Kea have called into question the ethics of conducting research with telescopes on the mountain. The controversy is about more than the construction; it reflects generations of conflict between Native Hawaiians, the U.S. Government and private interests. The American Astronomical Society stated through its press officer, Rick Fienberg: "The Hawaiian people have numerous legitimate grievances concerning the way they’ve been treated over the centuries. These grievances have simmered for many years, and when astronomers announced their intention to build a new giant telescope on Maunakea, things boiled over". On July 18, 2019, an online petition titled "Impeach Governor David Ige" was posted to Change.org. The petition gathered over 25,000 signatures. The governor and others in his administration received death threats over the construction of the telescope.
On December 19, 2019, Hawaii Governor David Ige announced that the state would reduce its law enforcement personnel on Mauna Kea. At the same time, the TMT project stated it was not prepared to start construction anytime soon.
2020s
Early in 2020, TMT and the Giant Magellan Telescope (GMT) jointly presented their science and technical readiness to the U.S. National Academies Astro2020 panel. Chile is the site for GMT in the south and Mauna Kea is being considered as the primary site for TMT in the north. The panel has produced a series of recommendations to implement a strategy and vision for the coming decade of U.S. Astronomy & Astrophysics frontier research and to prioritize projects for future funding.
In July 2020, TMT confirmed it would not resume construction until 2021 at the earliest. The COVID-19 pandemic forced TMT's partners around the world to work from home and presented a public health threat as well as travel and logistical challenges.
On August 13, 2020, the Speaker of the Hawaii House of Representatives, Scott Saiki, announced that the National Science Foundation (NSF) had initiated an informal outreach process to engage stakeholders interested in the Thirty Meter Telescope project. After listening to and considering the stakeholders’ viewpoints, the NSF acknowledged a delay in the environmental review process for TMT while seeking to provide a more inclusive, meaningful, and culturally appropriate process.
In November 2021, Fengchuan Liu was appointed the Project Manager of TMT and moved his office to Hilo.
No further construction has been announced or initiated, although work has continued on instrument design, mirror casting and polishing, and other critical technical preparations. In July 2023, a new state-appointed board, the Maunakea Stewardship Oversight Authority, began a five-year transition to assume management over the Mauna Kea site and all telescopes on the mountain. While there are no specific timelines or schedules regarding new start or completion dates, activist Noe Noe Wong-Wilson is quoted by Astronomy magazine as saying, "It's still early in the life of the new authority, but there's actually a pathway forward." The authority includes representatives from Native Hawaiian communities and cultural practitioners as well as astronomers and others. The body will have full control of the site from July 2028.
Opposition in the Canary Islands
In response to the ongoing protests that occurred in July 2019, TMT project officials requested a building permit for a second site choice, the Spanish island of La Palma in the Canary Islands. Rafael Rebolo, the director of the Canary Islands Astrophysics Institute, confirmed that he had received a letter requesting a building permit for the site as a backup in case the Hawaii site cannot be constructed. Some astronomers argue, however, that La Palma is not an adequate site for the telescope because the island's comparatively low elevation would allow water vapor, which absorbs light at mid-infrared wavelengths, to interfere frequently with observations. Such atmospheric interference could impact observing times for research into exoplanets, galactic formation, and cosmology. Other astronomers argue that construction of the telescope in La Palma would disrupt projected international collaboration between the United States and other involved countries such as Japan, Canada, and France.
Environmentalists such as Ben Magec and the environmental advocacy organization Ecologistas en Acción in the Canary Islands are preparing to oppose its construction there as well. According to EEA spokesperson Pablo Bautista, the projected TMT construction area in the Canary Islands lies inside a protected conservation refuge which hosts at least three archeological sites of the indigenous Guanche people, who lived on the islands for thousands of years before Spanish colonization. On July 29, 2021, Judge Roi López Encinas of the High Court of Justice of the Canary Islands revoked the 2017 concession of public lands by local authorities for the TMT construction. Encinas ruled that the land concessions were invalid as they were not covered by an international treaty on scientific research and that the TMT International Observatory consortium did not express concrete intent to build on the La Palma site as opposed to the site in Mauna Kea.
On July 19, 2022, the National Science Foundation announced it would carry out a new environmental survey of the possible impacts of the construction of the Thirty Meter Telescope at proposed building sites at both Mauna Kea and the Canary Islands. Continued funding for the telescope will not be considered prior to the results of the environmental survey, updates on the project's technical readiness, and comments from the public.
By 2023, TIO had addressed the objections and was cleared to build at the La Palma site.
See also
Extremely Large Telescope
Very Large Telescope
Giant Magellan Telescope
List of largest optical reflecting telescopes
References
External links
Mauna Kea page at the Hawai'i Department of Land and Natural Resources
INSIGHTS ON PBS HAWAII Should the Thirty Meter Telescope Be Built? (air-date video; April 30, 2015)
Astronomical observatories in Hawaii
Telescopes
Telescopes under construction | Thirty Meter Telescope | [
"Astronomy"
] | 5,138 | [
"Telescopes",
"Astronomical instruments"
] |
4,266,404 | https://en.wikipedia.org/wiki/Luminary%20%28astrology%29 | The luminaries were what traditional astrologers called the two astrological "planets" which were the brightest and most important objects in the heavens, that is, the Sun and the Moon. Luminary means "source of light". The Sun and Moon, being the most abundant sources of light to the inhabitants of Earth, are known as luminaries. The astrological significance warrants the classification of the Sun and Moon separately from the planets, in that the Sun and Moon have to do with man's spiritual consciousness, while the planetary influences operate through the physical mechanism. The Moon is a luminary in the biblical sense that it affords to Man "light by night". Early, pre-Newtonian astronomers who observed and studied the luminaries include Pythagoras, Aristotle, Claudius Ptolemy, al-Khwarizmi, Nicolaus Copernicus, Tycho Brahe, Galileo Galilei, and Johannes Kepler.
Origins
The Sun and Moon were considered the rulers of day and night, in accordance with the astrological doctrine of sect: diurnal (or daytime) planets, which were ruled by the Sun, and nocturnal (or nighttime) planets, which were ruled by the Moon.
The Sun was also the sect ruler, or luminary of sect, for all charts of events and individuals born in the daytime, when the Sun was above the horizon; and the Moon was the sect ruler, or luminary of sect, for night charts, when the Sun was below the horizon.
Ancient astrologers divided all astrological factors into day and night groups: essential dignities, Arabian Parts (or "Lots") and all planetary characteristics. Even the starry planets themselves each "belonged" to one luminary or the other. The luminary "in charge" of any given chart was called the luminary of sect. (See sect.)
The luminaries can be found in the Bible:
And God made two great lights; the greater light to rule the day, and the lesser light to rule the night: He made the stars also. And God set them in the firmament of the heaven to give light upon the earth, And to rule over the day and over the night, and to divide the light from the darkness: and God saw that it was good. (Genesis 1:16-18, King James Version)
In modern Western astrology, the importance of the Moon and the Sun has come to outweigh all the other celestial factors in the interpretation of chart data. In Hindu astrology, the Moon (and the Ascendant) have that distinction.
Early beliefs of the Sun and Moon
In the early history of all people we find the Sun and Moon regarded as human beings, connected with the daily life of mankind, and influencing in some mysterious way man's existence, and controlling his/her destiny. We find the luminaries alluded to as ancestors, heroes, and benefactors, who, after a life of usefulness on Earth, were transported to the heavens, where they continue to look down on, and, in a measure, rule over earthly affairs. The basis of mythology is the worship of the solar great father, and the lunar great mother. For centuries, people have worshiped and regarded the luminaries as objects of higher powers. Oftentimes, the Sun and Moon have been considered as opposite sexes. For example, the Sun being "the father" and the Moon being "the mother". In Australia, the Moon was considered to be a man, the Sun a woman, who appears at dawn in a coat of red kangaroo skins. Shakespeare speaks of the Moon as "she," while in Peru, the Moon was regarded as a mother who was both sister and wife of the Sun. The sex of each has been disputed and thought of differently over the centuries. This confusion in the sex, ascribed to the Sun and Moon by different nations, may have arisen from the fact that the day is mild and friendly, hence the Sun which rules the day would properly be considered feminine, while the Moon which rules the chill and stern night might appropriately be regarded as a man. On the contrary, in equatorial regions, the day is forbidding and burning, while the night is mild and pleasant. Applying these analogies, it appears that the sex of the Sun and Moon would, by some tribes, be the reverse of those ascribed to them by others, climatic conditions being responsible for the confusion.
Many cultures also believed the Sun and Moon to be husband and wife, or brother and sister. From the conception that the Sun and Moon were husband and wife, many legends concerning them were created, chief among these being the old Persian belief that the stars were the children of the Sun and Moon. As is common with many marriages, it was also thought that the Sun and Moon's marriage was an uneasy one. Many legends of the Sun and Moon relate their disputes and marital troubles, for mythology reveals that as husband and wife the Sun and Moon did not live happily together; some of these legends explain the origins of the seasons and weather. Myths about the relationship between the Sun and Moon and their role in the universe are numerous. Correlations and connections can be drawn among some of these beliefs, but many contradict one another.
Luminaries in medicine
In early modern England, the medical effects of the Sun and the Moon had been traditionally explained by a vast symbolic system of "analogies, correspondences, and relations among apparently discrete elements in man and the universe," which had its conceptual origins in the works of Aristotle, Ptolemy, and Galen. The ultimate causes of planetary emanations had been considered "occult," an Aristotelian and early modern term utilized when distinguishing "qualities which were evident to the senses from those which were hidden". After the Restoration, many physicians attempted to rid the natural world of occult causes and to explain invisible forces such as solar and lunar emanations via mechanical, chemical, and mathematical systems.
To explain the medical effects of the luminaries, the English physicians Richard Mead (1673-1754) and James Gibbs (d. 1724) utilized iatromechanism, which regarded the body as a Cartesian machine, conforming in its functions to mechanical laws. Physiological phenomena could thus be explained in terms of physics. Richard Mead subsequently applied Newton's gravitational theories to Pitcairne's hydraulic iatromechanism and astrological medicine. In De imperio solis ac lunae in corpora humana et morbis inde oriundis [A treatise concerning the influence of the Sun and moon on human bodies and the diseases thereby produced] (1704), Mead stressed the mechanical effects of solar and lunar emanations, especially the gravitational effects of the tides, on the pressure of vessels and fluids within the human body.
Further reading
Luminaries: The Psychology of the Sun and Moon in the Horoscope by Liz Greene, Howard Sasportas
References
History of astrology | Luminary (astrology) | [
"Astronomy"
] | 1,434 | [
"History of astrology",
"History of astronomy"
] |
4,266,780 | https://en.wikipedia.org/wiki/Message%20Signaled%20Interrupts | Message Signaled Interrupts (MSI) are a method of signaling interrupts, using special in-band messages to replace traditional out-of-band signals on dedicated interrupt lines. While message signaled interrupts are more complex to implement in a device, they have some significant advantages over pin-based out-of-band interrupt signalling, such as improved interrupt handling performance. This is in contrast to traditional interrupt mechanisms, such as the legacy interrupt request (IRQ) system.
Message signaled interrupts have been supported by the PCI bus since version 2.2, and by the later PCI Express bus. Some non-PCI architectures also use message signaled interrupts.
Overview
Traditionally, a device has an interrupt line (pin) which it asserts when it wants to signal an interrupt to the host processing environment. This traditional form of interrupt signalling is an out-of-band form of control signalling since it uses a dedicated path to send such control information, separately from the main data path. MSI replaces those dedicated interrupt lines with in-band signalling, by exchanging special messages that indicate interrupts through the main data path. In particular, MSI allows the device to write a small amount of interrupt-describing data to a special memory-mapped I/O address, and the chipset then delivers the corresponding interrupt to a processor.
A common misconception with MSI is that it allows the device to send data to a processor as part of the interrupt. The data that is sent as part of the memory write transaction is used by the chipset to determine which interrupt to trigger on which processor; that data is not available for the device to communicate additional information to the interrupt handler.
As an example, PCI Express does not have separate interrupt pins at all; instead, it uses special in-band messages to allow pin assertion or deassertion to be emulated. Some non-PCI architectures also use MSI; as another example, HP GSC devices do not have interrupt pins and can generate interrupts only by writing directly to the processor's interrupt register in memory space. The HyperTransport protocol also supports MSI.
Advantages
While more complex to implement in a device, message signalled interrupts have some significant advantages over pin-based out-of-band interrupt signalling. On the mechanical side, fewer pins makes for a simpler, cheaper, and more reliable connector. While this is no advantage to the standard PCI connector, PCI Express takes advantage of these savings.
MSI increases the number of interrupts that are possible. While conventional PCI was limited to four interrupts per card (and, because they were shared among all cards, most cards use only one), message signalled interrupts allow dozens of interrupts per card when that is useful.
There is also a slight performance advantage. In software, a pin-based interrupt could race with a posted write to memory. That is, the PCI device would write data to memory and then send an interrupt to indicate the DMA write was complete. However, a PCI bridge or memory controller might buffer the write in order to not interfere with some other memory use. The interrupt could arrive before the DMA write was complete, and the processor could read stale data from memory. To prevent this race, interrupt handlers were required to read from the device to ensure that the DMA write had finished. This read had a moderate performance penalty. An MSI write cannot pass a DMA write, so the race is eliminated.
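As an illustration of this point in driver code, a handler for a shared pin-based (INTx) interrupt conventionally begins with a "flushing" read of a device register, which is exactly the read that MSI makes unnecessary. The sketch below uses the Linux kernel's interrupt API, but the device structure, register offset, and status bit are hypothetical:
#include <linux/interrupt.h>
#include <linux/io.h>
#include <linux/types.h>

/* Hypothetical device; `regs` points at its memory-mapped registers. */
struct example_dev {
    void __iomem *regs;
};

static irqreturn_t example_intx_handler(int irq, void *data)
{
    struct example_dev *dev = data;
    /* Flushing read: forces any posted DMA writes to memory to complete
     * before the CPU inspects the buffer. With MSI, the interrupt message
     * cannot pass the DMA write, so this read could be dropped. */
    u32 status = readl(dev->regs + 0x00 /* hypothetical status register */);

    if (!(status & 0x1))     /* hypothetical "DMA done" bit */
        return IRQ_NONE;     /* shared line: interrupt was not ours */

    /* ...process the now-visible DMA buffer... */
    return IRQ_HANDLED;
}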
MSI types
PCI defines two optional extensions to support Message Signalled Interrupts, MSI and MSI-X. PCI Express defines its own message-based mechanism to emulate legacy PCI interrupts.
MSI
MSI (first defined in PCI 2.2) permits a device to allocate 1, 2, 4, 8, 16 or 32 interrupts. The device is programmed with an address to write to (this address is generally a control register in an interrupt controller), and a 16-bit data word to identify it. The interrupt number is added to the data word to identify the interrupt. Some platforms such as Windows do not use all 32 interrupts but only use up to 16 interrupts.
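For reference, the configuration-space registers that software programs are compact; the sketch below follows the field order of the 64-bit-capable MSI capability described in the PCI specification, although the C struct itself is illustrative rather than taken from any real driver or header:
#include <stdint.h>

/* Illustrative layout of a 64-bit-capable PCI MSI capability structure. */
struct msi_capability {
    uint8_t  cap_id;             /* 0x05 identifies the MSI capability          */
    uint8_t  next_ptr;           /* link to the next capability in the list     */
    uint16_t message_control;    /* enable bit and number of vectors (1-32)     */
    uint32_t message_address_lo; /* address the device writes to                */
    uint32_t message_address_hi; /* upper 32 bits when 64-bit addressing used   */
    uint16_t message_data;       /* base data word; with multi-message MSI the
                                    interrupt number is encoded in its low bits */
};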
MSI-X
MSI-X (first defined in PCI 3.0) permits a device to allocate up to 2048 interrupts. The single address used by original MSI was found to be restrictive for some architectures. In particular, it made it difficult to target individual interrupts to different processors, which is helpful in some high-speed networking applications. MSI-X allows a larger number of interrupts and gives each one a separate target address and data word. Devices with MSI-X do not necessarily support 2048 interrupts.
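By contrast, each MSI-X vector has its own 16-byte entry in a table located in one of the device's memory BARs, which is what provides the per-vector address and data word; an illustrative entry layout follows (field order per the PCI specification, names chosen here):
#include <stdint.h>

/* One MSI-X table entry; a device may expose up to 2048 of these. */
struct msix_table_entry {
    uint32_t message_address_lo; /* per-vector target address          */
    uint32_t message_address_hi;
    uint32_t message_data;       /* per-vector data word               */
    uint32_t vector_control;     /* bit 0 masks this individual vector */
};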
Optional features in MSI (64-bit addressing and interrupt masking) are also mandatory with MSI-X.
PCI Express legacy interrupt emulation
PCI Express does not have physical interrupt pins, but emulates the 4 physical interrupt pins of PCI via dedicated PCI Express Messages such as Assert_INTA and Deassert_INTC. Being message-based (at the PCI Express layer), this mechanism provides some, but not all, of the advantages of the PCI layer MSI mechanism: the 4 virtual pins per device are no longer shared on the bus (although PCI Express controllers may still combine legacy interrupts internally), and interrupt changes no longer inherently suffer from race conditions.
PCI Express permits devices to use these legacy interrupt messages, retaining software compatibility with PCI drivers, but they are required to also support MSI or MSI-X in the PCI layer.
x86 systems
On Intel systems, the LAPIC must be enabled for the PCI (and PCI Express) MSI/MSI-X to work, even on uniprocessor (single core) systems. In these systems, MSIs are handled by writing the interrupt vector directly into the LAPIC of the processor/core that needs to service the interrupt. The Intel LAPICs of 2009 supported up to 224 MSI-based interrupts. According to a 2009 Intel benchmark using Linux, using MSI reduced the latency of interrupts by a factor of almost three when compared to I/O APIC delivery.
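The x86 address and data formats are documented by Intel: the address selects the target local APIC (base 0xFEE00000, destination APIC ID in bits 19:12) and the data word carries the vector and delivery mode. The helper below composes such a pair for fixed, edge-triggered delivery to a single processor; it is an illustrative sketch, not code from any operating system:
#include <stdint.h>

#define MSI_ADDR_BASE 0xFEE00000u

/* Compose an x86 MSI address/data pair: fixed delivery of `vector`
 * (0x10-0xFE) to the local APIC whose physical ID is `apic_id`. */
static void x86_compose_msi(uint8_t apic_id, uint8_t vector,
                            uint32_t *addr, uint16_t *data)
{
    *addr = MSI_ADDR_BASE | ((uint32_t)apic_id << 12); /* destination ID           */
    *data = vector;                /* delivery mode 0 (fixed), edge-triggered */
}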
Operating system support
In the Microsoft family of operating systems, Windows Vista and later versions have support for both MSI and MSI-X. Support was added in the Longhorn development cycle around 2004. MSI is not supported in earlier versions like Windows XP or Windows Server 2003.
Solaris Express 6/05, released in 2005, added support for MSI and MSI-X as part of its new device driver interface (DDI) interrupt framework.
FreeBSD 6.3 and 7.0 released in 2008 added support for MSI and MSI-X.
OpenBSD 5.0 released in 2011 added support for MSI. 6.0 added support for MSI-X.
Linux gained support for MSI and MSI-X around 2003. Linux kernel versions before 2.6.20 are known to have serious bugs and limitations in their implementation of MSI/MSI-X.
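In current Linux kernels, a PCI driver normally requests MSI or MSI-X vectors through the pci_alloc_irq_vectors() interface rather than programming the capability registers directly. A minimal sketch of the pattern follows; the names prefixed with example_ are hypothetical and error unwinding is omitted:
#include <linux/pci.h>
#include <linux/interrupt.h>

/* example_msi_handler() is assumed to be defined elsewhere in the driver. */
static int example_setup_irqs(struct pci_dev *pdev)
{
    int i, ret;
    /* Ask for between 1 and 4 vectors, preferring MSI-X over MSI. */
    int nvec = pci_alloc_irq_vectors(pdev, 1, 4, PCI_IRQ_MSIX | PCI_IRQ_MSI);

    if (nvec < 0)
        return nvec;

    for (i = 0; i < nvec; i++) {
        /* pci_irq_vector() maps vector index i to a Linux IRQ number. */
        ret = request_irq(pci_irq_vector(pdev, i),
                          example_msi_handler, 0, "example", pdev);
        if (ret)
            return ret;
    }
    return 0;
}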
Haiku gained support for MSI around 2010. MSI-X support was added later, in 2013.
NetBSD 8.0 released in 2018 added support for MSI and MSI-X.
VxWorks 7 supports MSI and MSI-X
See also
Interrupt handler
Interrupt request (PC architecture)
References
External links
Introduction to Message-Signalled Interrupts - MSDN
Linux MSI HOWTO
Digital electronics
Interrupts
Peripheral Component Interconnect | Message Signaled Interrupts | [
"Technology",
"Engineering"
] | 1,568 | [
"Electronic engineering",
"Interrupts",
"Events (computing)",
"Digital electronics"
] |
4,266,962 | https://en.wikipedia.org/wiki/Gin%20pole | A gin pole is a mast supported by one or more guy-wires that uses a pulley or block and tackle mounted on its upper end to lift loads. The lower end is braced or set in a shallow hole and positioned so the upper end lies above the object to be lifted. The pole (also known as a mast, boom, or spar) is secured with three or more guys. These are manipulated to move the load laterally, with up and down controlled by the pulley or block. In tower construction, a gin pole can also be “jumped” up the completed sections of a tower to lift the higher sections into place.
Etymology
The gin pole is derived from a gyn, and considered a form of derrick, called a standing derrick or pole derrick, distinguished from sheers (or shear legs) by having a single boom rather than a two-legged one.
Applications
In addition to being used as simple lifting devices in field operations such as construction, logging, loading and unloading boats, and emergency repairs, gin poles are well suited to raising loads above structures too tall to reach with a crane, such as placing an antenna on top of a tower or steeple, and to lifting segments of a tower on top of one another during erection. When used to create a segmented tower, the gin pole can be detached, raised, and re-attached to the just-completed segment in order to lift the next. This process of jumping is repeated until the topmost portion of the tower is completed. A gin pole that is strong enough can also hold a person, which opens up stage uses such as magic shows.
Gin poles are mounted on trucks as a primitive form of mobile crane, used in combination with a typically front-mounted winch for lifting and relocating loads, and in salvage operations in lieu of a more capable wrecker.
References
External links
Notes & drawings
tower contractor's description with diagram
gin pole regulations in California
gin pole failure leads to lawsuit, April 15, 1948, in Philadelphia
Tools
Cranes (machines)
Lifting equipment | Gin pole | [
"Physics",
"Technology",
"Engineering"
] | 411 | [
"Machines",
"Lifting equipment",
"Physical systems",
"Cranes (machines)",
"Architecture stubs",
"Engineering vehicles",
"Architecture"
] |
4,267,366 | https://en.wikipedia.org/wiki/Arcing%20horns | Arcing horns (sometimes arc-horns) are projecting conductors used to protect insulators or switch hardware on high voltage electric power transmission systems from damage during flashover. Overvoltages on transmission lines, due to atmospheric electricity, lightning strikes, or electrical faults, can cause arcs across insulators (flashovers) that can damage them. Alternately, atmospheric conditions or transients that occur during switching can cause an arc to form in the breaking path of a switch during its operation. Arcing horns provide a path for flashover to occur that bypasses the surface of the protected device. Horns are normally paired on either side of an insulator, one connected to the high voltage part and the other to ground, or at the breaking point of a switch contact. They are frequently to be seen on insulator strings on overhead lines, or protecting transformer bushings.
The horns can take various forms, such as simple cylindrical rods, circular guard rings, or contoured curves, sometimes known as 'stirrups'.
Background
High voltage equipment, particularly that which is installed outside, such as overhead power lines, is commonly subject to transient overvoltages, which may be caused by phenomena such as lightning strikes, faults on other equipment, or switching surges during circuit re-energisation. Overvoltage events such as these are unpredictable, and in general cannot be completely prevented. Line terminations, at which a transmission line connects to a busbar or transformer bushing, are at greatest risk to overvoltage due to the change in characteristic impedance at this point.
An electrical insulator serves to provide physical separation of conducting parts, and under normal operating conditions is continuously subject to a high electric field which occupies the air surrounding the equipment. Overvoltage events may cause the electric field to exceed the dielectric strength of air and result in the formation of an arc between the conducting parts and over the surface of the insulator. This is called flashover. Contamination of the surface of the insulator reduces the breakdown strength and increases the tendency to flash over. On an electrical transmission system, protective relays are expected to detect the formation of the arc and automatically open circuit breakers to discharge the circuit and extinguish the arc. Under a worst case, this process may take as long as several seconds, during which time the insulator surface would be in close contact with the highly energetic plasma of the arc. This is very damaging to an insulator, and may shatter brittle glass or ceramic disks, resulting in its complete failure.
Operation
Arcing horns form a spark gap across the insulator with a lower breakdown voltage than the air path along the insulator surface, so an overvoltage will cause the air to break down and the arc to form between the arcing horns, diverting it away from the surface of the insulator. An arc between the horns is more tolerable for the equipment, providing more time for the fault to be detected and the arc to be safely cleared by remote circuit breakers. The geometry of some designs encourages the arc to migrate away from the insulator, driven by rising currents as it heats the surrounding air. As it does so, the path length increases, cooling the arc, reducing the electric field and causing the arc to extinguish itself when it can no longer span the gap. Other designs can utilise the magnetic field produced by the high current to drive the arc away from the insulator. This type of arrangement can be known as a magnetic blowout.
Design criteria and maintenance regimes may treat arcing horns as sacrificial equipment, cheaper and more easily replaced than the insulator, failure of which can result in complete destruction of the equipment it insulates. Failure of insulator strings on overhead lines could result in the parting of the line, with significant safety and cost implications.
Arcing horns thus play a role in the process of correlating system protection with protective device characteristics, known as insulation coordination. The horns should provide, amongst other characteristics, near-infinite impedance during normal operating conditions to minimise conductive current losses, low impedance during the flashover, and physical resilience to the high temperature of the arc.
As operating voltages increase, greater consideration must be given to such design principles. At medium voltages, one of the two horns may be omitted as the horn-to-horn gap can otherwise be small enough to be bridged by an alighting bird. Alternatively, duplex gaps consisting of two sections on opposite sides of the insulator can be fitted. Low voltage distribution systems, in which the risk of arcing is much lower, may not use arcing horns at all.
The presence of the arcing horns necessarily disturbs the normal electric field distribution across the insulator due to their small but significant capacitance. More importantly, a flashover across arcing horns produces an earth fault resulting in a circuit outage until the fault is cleared by circuit breaker operation. For this reason, non-linear resistors known as varistors can replace arcing horns at critical locations.
If the horns are incorrectly seated, damaging resistive heating can occur during arcing.
Switch protection
Arcing horns are sometimes installed on air-insulated switchgear and transformers to protect the switch arm from arc damage. When a high voltage switch breaks a circuit, an arc can establish itself between the switch contacts before the current can be interrupted. The horns are designed to endure the arc rather than the contact surfaces of the switch itself.
Corona and grading rings
Arcing horns are not to be confused with corona rings (or the similar grading rings) which are ring-shaped assemblies surrounding connectors, or other irregular hardware pieces on high potential equipment. Corona rings and grading rings are intended to equalize and redistribute accumulated potential away from components that might be subject to local accumulation and destructive discharges, although sometimes either device may be installed in close proximity to an arcing horn assembly.
References
Electric power systems components
Electric arcs | Arcing horns | [
"Physics"
] | 1,226 | [
"Plasma phenomena",
"Physical phenomena",
"Electric arcs"
] |
4,267,651 | https://en.wikipedia.org/wiki/Parable%20of%20the%20Workers%20in%20the%20Vineyard |
The Parable of the Workers in the Vineyard (also called the Parable of the Laborers in the Vineyard or the Parable of the Generous Employer) is a parable of Jesus which appears in chapter 20 of the Gospel of Matthew in the New Testament. It is not included in the other canonical gospels. It has been described as a difficult parable to interpret.
Text
Interpretations
The parable has often been interpreted to mean that even those who are converted late in life earn equal rewards along with those converted early, and also that people who convert early in life need not feel jealous of those later converts. An alternative interpretation identifies the early laborers as Jews, some of whom resent the late-comers (Gentiles) being welcomed as equals in God's Kingdom. Both of these interpretations are discussed in Matthew Henry's 1706 Commentary on the Bible.
An alternative interpretation is that all Christians can be identified with the eleventh-hour workers. Arland J. Hultgren writes: "While interpreting and applying this parable, the question inevitably arises: Who are the eleventh-hour workers in our day? We might want to name them, such as deathbed converts or persons who are typically despised by those who are longtime veterans and more fervent in their religious commitment. But it is best not to narrow the field too quickly. At a deeper level, we are all the eleventh-hour workers; to change the metaphor, we are all honored guests of God in the kingdom. It is not really necessary to decide who the eleventh-hour workers are. The point of the parable—both at the level of Jesus and the level of Matthew's Gospel—is that God saves by grace, not by our worthiness. That applies to all of us."
A USCCB interpretation is that the parable's "close association with Mt 19:30 suggests that its teaching is the equality of all the disciples in the reward of inheriting eternal life." The USCCB interprets Mt 19:30 as: "[A]ll who respond to the call of Jesus, at whatever time (first or last), will be the same in respect to inheriting the benefits of the kingdom, which is the gift of God." In giving himself via the beatific vision, God is the greatest reward.
Some commentators have used the parable to justify the principle of a "living wage", though generally conceding that this is not the main point of the parable. An example is John Ruskin in the 19th century, who quoted the parable in the title of his book Unto This Last. Ruskin did not discuss the religious meaning of the parable but rather its social and economic implications.
Parallels
Many details of the parable, including when the workers receive their pay at the end of the day, the complaints from those who worked a full day, and the response from the king/landowner are paralleled in a similar parable found in tractate Berakhot in the Jerusalem Talmud:
To what can Rebbi Bun bar Ḥiyya be likened? To a king who hired many workers and there was one worker who was exceptionally productive in his work. What did the king do? He took him and walked with him the long and the short. In the evening, the workers came to receive their wages and he gave him his total wages with them. The workers complained and said: we were toiling the entire day and this one did toil only for two hours and he gave him his total wages with us! The king told them: This one produced in two hours more than what you produced all day long. So Rebbi Bun produced in Torah in 28 years what an outstanding student (or, with the Arabic ותֹיק, “constant, resolute”) cannot learn in a hundred years. (Jerusalem Berakhot 2.8)
See also
Life of Jesus in the New Testament
Ministry of Jesus
References
Workers in the Vineyard, Parable of the
Gospel of Matthew
Works about labor
Fair division | Parable of the Workers in the Vineyard | [
"Mathematics"
] | 815 | [
"Recreational mathematics",
"Game theory",
"Fair division"
] |
4,267,751 | https://en.wikipedia.org/wiki/Balsam%20of%20Peru | Balsam of Peru or Peru balsam, also known and marketed by many other names, is a balsam derived from a tree known as Myroxylon balsamum var. pereirae; it is found in El Salvador, where it is an endemic species.
Balsam of Peru is used in food and drink for flavoring, in perfumes and toiletries for fragrance, and in medicine and pharmaceutical items for healing properties. It has a sweet scent. In some instances, balsam of Peru is listed on the ingredient label of a product by one of its various names, but it may not be required to be listed by its name by mandatory labeling conventions.
It can cause allergic reactions, with numerous large surveys identifying it as being in the "top five" allergens most commonly causing patch test reactions. It may cause inflammation, redness, swelling, soreness, itching, and blisters, including allergic contact dermatitis, stomatitis (inflammation and soreness of the mouth or tongue), cheilitis (inflammation, rash, or painful erosion of the lips, oropharyngeal mucosa, or angles of the mouth), pruritus, hand eczema, generalized or resistant plantar dermatitis, rhinitis, and conjunctivitis.
Harvesting and processing
Balsam of Peru is obtained by using rags to soak up the resin after strips of bark are removed from the trunk of Myroxylon balsamum var. pereirae, boiling the rags and letting the balsam sink in water. The balsam is an aromatic dark-brown oily fluid.
Composition
Balsam of Peru contains 25 or so different substances, including cinnamein, cinnamic acid, cinnamyl cinnamate, benzyl benzoate, benzoic acid, and vanillin. It also contains cinnamyl alcohol, cinnamaldehyde, farnesol, and nerolidol. A minority of it, approximately 30–40%, contains resins or esters of unknown composition.
Uses
Balsam of Peru is used in food and drink for flavoring, in perfumes and toiletries for fragrance, and in medicine and pharmaceutical items for healing properties.
In some cases, it is listed on the ingredient label of a product by one of its various names. Naturally occurring ingredients may contain substances identical to or very closely related to balsam of Peru.
It has four primary uses:
flavoring in foods and drinks such as
caffeinated drinks (flavored coffee, flavored tea)
alcoholic drinks (wine, beer, gin, liqueurs, apéritifs)
soft drinks (cola, root beer)
preserved citrus (candied fruit peel, marmalade)
tomato-containing products (Mexican and Italian foods with red sauces, ketchup)
sauces (chili sauce, barbecue sauce, chutney)
pickled vegetables
sweets (baked goods and pastries, pudding, ice cream, chewing gum, candy)
fragrance in perfumes, cosmetics, and toiletries, including colognes, deodorants, soaps, shampoos, conditioners, after-shave lotions, lipsticks, creams, lotions, ointments, baby powders, sunscreens, suntan lotions
antiseptic and flavoring in medicinal products such as
over-the-counter products (toothpaste, mouthwash, hemorrhoid suppositories and ointment, cough medicine/suppressant and lozenges, diaper rash ointments, oral and lip ointments, tincture of benzoin, wound spray [as it has been reported to inhibit Mycobacterium tuberculosis as well as the common ulcer-causing bacteria H. pylori in test-tube studies], calamine lotion, surgical dressings)
professional dental supplies (dental cement, eugenol used by dentists, some periodontal impression materials, treatment of dry socket)
glue, typically as a mounting medium for microscope specimens because of purified balsam of Peru's optical properties, specifically its transparency and refractive index of 1.597, very close to that of many glasses used in optics
It also can be found in scented tobacco, cleaning products, pesticides, insect repellants, air fresheners and deodorizers, scented candles, and oil paint.
Allergy
A number of national and international surveys have identified balsam of Peru as being in the "top five" allergens most commonly causing patch test reactions in people referred to dermatology clinics. A study in 2001 found that 3.8% of the general population patch tested was allergic to it. Many flavorings and perfumes contain components identical to balsam of Peru. It may cause redness, swelling, itching, and blisters.
People allergic to balsam of Peru or other chemically related substances may experience a contact dermatitis reaction. If they have oral exposure, they may experience stomatitis (inflammation and soreness of the mouth or tongue), and cheilitis (inflammation, rash, or painful erosion of the lips, oropharyngeal mucosa, or angles of their mouth). If they ingest it, they may experience pruritus and contact dermatitis in the perianal region, possibly due to unabsorbed substances in the feces. It can cause a flare-up of hand eczema. Among the other allergic reactions to balsam of Peru are generalized or resistant plantar dermatitis, rhinitis, and conjunctivitis. In a case study in Switzerland, a woman who was allergic to balsam of Peru was allergic to her boyfriend's semen following intercourse after he drank large amounts of Coca-Cola.
A positive patch test is used to diagnose an allergy to balsam of Peru. Positive patch test results indicate that the person may have problems with certain flavorings, medications, and perfumed products. Among foods, the most commonly implicated are spices, citrus, and tomatoes.
People allergic to balsam of Peru may benefit from a diet in which they avoid ingesting foods that contain it. Naturally occurring ingredients may contain substances identical to or very closely related to balsam of Peru, and may cause the same allergic reactions. In some instances, balsam of Peru is listed on the ingredient label of a product by one of its various names, but it may not be required to be listed by its name by mandatory labeling conventions (in fragrances, for example, it may simply be covered by an ingredient listing of "fragrance"). To determine if balsam of Peru is in a product, often doctors have to contact the manufacturer of the products used by the patient.
Before 1977, the main recommended marker for perfume allergy was balsam of Peru, which is still advised. The presence of balsam of Peru in a cosmetic will be denoted by the INCI term Myroxylon pereirae.
Because of allergic reactions, since 1982 crude balsam of Peru has been banned by the International Fragrance Association from use as a fragrance compound, but extracts and distillates are used up to a maximum level of 0.4% in products, and are not covered by mandatory labeling.
In March 2006, the European Commission, Health and Consumer Protection Directorate-General, Scientific Committee on Consumer Products, issued an opinion on balsam of Peru. It confirmed that crude balsam of Peru should not be used as a fragrance ingredient, because of a wide variety of test results on its sensitizing potential, but that extracts and distillates can be used up to a maximum level of 0.4% in products.
History
The name balsam of Peru is a misnomer. In the early period of the Spanish conquest of Central and South America, the balsam was collected in Central America and shipped to Callao (the port of Lima) in Peru, then shipped onward to Europe. It acquired the name of "Peru" because it was shipped by way of Peru. Its export to Europe was first documented in the seventeenth century in the German pharmacopoeia. Today it is extracted by a handicraft process, and is mainly exported from El Salvador. Another balsam, balsam of Tolu, is extracted from Myroxylon balsamum var. balsamum in a different way.
Alternate names
Among the alternate names used for balsam of Peru are:
balsam Peru
Peru balsam
Peruvian balsam
bálsamo del Perú (Spanish)
balsamum Peruvianim (Latin)
baume du Pérou (French)
baume Péruvien (French)
baume de San Salvador (French)
black balsam
China oil
Honduras balsam
Indian balsam
Surinam balsam
References
Amburaneae
Perfume ingredients
Resins
Crops originating from the Americas
Flavors
Spices
Allergology
Effects of external causes | Balsam of Peru | [
"Physics"
] | 1,824 | [
"Amorphous solids",
"Unsolved problems in physics",
"Resins"
] |
4,267,797 | https://en.wikipedia.org/wiki/Jialiang | Jialiang () is an ancient Chinese device for measuring several volume standards.
The term jialiang is mentioned in the Rites of Zhou. The passage describes the construction of one that includes three measures, fu (釜), dou (豆), and sheng (升); furthermore, the instrument weighs one jun (鈞) and its sound is the gong of huangzhong (黃鐘之宮). Known jialiang give standards for the five measures yue (龠), ge (合, equal to two yue), sheng (升, equal to ten ge), dou (斗, equal to ten sheng), and hu (斛, equal to ten dou).
The earliest known jialiang was made in the first year of Wang Mang's short-lived Xin dynasty (9–23 CE), in order to standardize the measurements across the empire. Wang chose bronze to emphasize that it would last and remain as a reference. A 216-character inscription records the process of the casting and shows where each of the five measurements can be found. Wang Mang's jialiang is now in the National Palace Museum, Taipei.
The continuing availability of Wang Mang's device has ensured it an important place in researching historical Chinese metrology for millennia. Investigations on the jialiang were undertaken by such scholars as Liu Hui of the Three Kingdoms period and Zu Chongzhi and Li Chunfeng of the Tang dynasty. According to the research of modern scholar Liu Fu 劉復, at the time of its creation, following its measurements, the standard units correspond to the modern metric system in this way: 1 chi (尺) was 23.1 cm; 1 sheng (升) was 200 mL; and 1 jin (斤) was 226.7 g.
In Beijing's Forbidden City, before the Hall of Supreme Harmony and the Palace of Heavenly Purity, there are two jialiangs cast during the reign of the Qianlong Emperor, a square one in front of the Hall of Supreme Harmony and a round one in front of the Palace of Heavenly Purity. These were cast in 1745 based on an examination of the Wang Mang's round jialiang and the square jialiang made during the time of Emperor Taizong of Tang.
The jialiang and the adjacent sundial (rigui 日晷) have been described as symbolizing the sovereignty and unity of the imperial reign or emphasizing that the emperor is just and fair.
Notes
Chinese inventions
Volumetric instruments | Jialiang | [
"Technology",
"Engineering"
] | 511 | [
"Volumetric instruments",
"Measuring instruments"
] |
4,267,984 | https://en.wikipedia.org/wiki/Rydberg%20state | The Rydberg states of an atom or molecule are electronically excited states with energies that follow the Rydberg formula as they converge on an ionic state with an ionization energy. Although the Rydberg formula was developed to describe atomic energy levels, it has been used to describe many other systems that have electronic structure roughly similar to atomic hydrogen. In general, at sufficiently high principal quantum numbers, an excited electron-ionic core system will have the general character of a hydrogenic system and the energy levels will follow the Rydberg formula. Rydberg states have energies converging on the energy of the ion. The ionization energy threshold is the energy required to completely liberate an electron from the ionic core of an atom or molecule. In practice, a Rydberg wave packet is created by a laser pulse on a hydrogenic atom and thus populates a superposition of Rydberg states. Modern investigations using pump-probe experiments show molecular pathways – e.g. dissociation of (NO)2 – via these special states.
Rydberg series
Rydberg series describe the energy levels associated with partially removing an electron from the ionic core. Each Rydberg series converges on an ionization energy threshold associated with a particular ionic core configuration. These quantized Rydberg energy levels can be associated with the quasiclassical Bohr atomic picture. The closer the energy lies to the ionization threshold, the higher the principal quantum number and the smaller the energy difference between near-threshold Rydberg states. As the electron is promoted to higher energy levels, the spatial excursion of the electron from the ionic core increases and the system is more like the Bohr quasiclassical picture.
Energy of Rydberg states
The energy of Rydberg states can be refined by including a correction called the quantum defect in the Rydberg formula. The "quantum defect" correction is associated with the presence of a distributed ionic core. Even for many electronically excited molecular systems, the ionic core interaction with an excited electron can take on the general aspects of the interaction between the proton and the electron in the hydrogen atom. The spectroscopic assignment of these states follows the Rydberg formula and they are called Rydberg states of molecules.
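In its conventional form, the energy of the n-th member of a Rydberg series can be written relative to the ionization limit of the series as
E_n = E_{\mathrm{IP}} - \frac{R_M}{(n - \delta)^2}
where E_{\mathrm{IP}} is the series ionization energy, R_M is the Rydberg constant corrected for the reduced mass of the electron and the ionic core, n is the principal quantum number, and \delta is the quantum defect, which depends mainly on the orbital angular momentum of the Rydberg electron and approaches zero at high angular momentum, recovering the hydrogenic formula.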
Molecular Rydberg states
Although the energy formula of Rydberg series is a result of hydrogen-like atom structure, Rydberg states are also present in molecules. Wave functions of high Rydberg states are very diffuse and span diameters that approach infinity. As a result, any isolated neutral molecule behaves like a hydrogen-like atom at the Rydberg limit. For molecules with multiple stable monovalent cations, multiple Rydberg series may exist. Because of the complexity of molecular spectra, low-lying Rydberg states of molecules are often mixed with valence states with similar energy and are thus not pure Rydberg states.
See also
Rydberg atom
Rydberg matter
Orbital state
References
Atomic Spectra and Atomic Structure, Gerhard Herzberg, Prentice-Hall, 1937.
Atoms and Molecules, Martin Karplus and Richard N. Porter, Benjamin & Company, Inc., 1970.
External links
Army Creates Quantum Sensor That Detects Entire Radio-Frequency Spectrum; Defense One.
Rydberg Atoms and the Quantum Defect; Physics Department, Davidson College.
Rydberg Transitions; Chemistry and Biochemistiry, Georgia Tech.
Atomic physics
Atomic, molecular, and optical physics | Rydberg state | [
"Physics",
"Chemistry"
] | 669 | [
"Quantum mechanics",
"Atomic physics",
" molecular",
"Atomic",
" and optical physics"
] |
4,268,170 | https://en.wikipedia.org/wiki/Bentorite | Bentorite is a mineral with the chemical formula . It is colored violet to light violet. Its crystals are hexagonal to dihexagonal dipyramidal. It is transparent and has vitreous luster. It has perfect cleavage. It is not radioactive. Bentorite is rated 2 on the Mohs scale.
The mineral was first described in 1980 by Shulamit Gross for an occurrence in the Hatrurim Formation of Danian age along the western margin of the Dead Sea, Israel. Gross named it for Yaakov Ben-Tor (1910–2002), professor at the Hebrew University of Jerusalem and the University of California, San Diego, US, in recognition of his contributions to geology and mineralogy in Israel.
Formation
The only naturally occurring bentorite that has been discovered is in the Hatrurim Formation near the Dead Sea in Israel. The formation consists of a mixture of metamorphosed clays, limestones, and marls. The original sediments were enriched in chromium, and later experienced heating to >1000 °C at atmospheric pressure. This formed a natural Portland cement which has since been hydrated from groundwater and/or rainwater to form a natural concrete. The source of the heat is thought to be due to combustion of coal, oil, or gas. Following this combustion metamorphosis, highly alkaline fluids penetrated and altered the rock to form supergene veins of bentorite.
Applications
When suitably prepared, concrete contains crystals of ettringite that can exchange aluminium for chromium, converting the ettringite to bentorite. This allows concretes to sequester chromium present as an environmental pollutant.
See also
Brownmillerite
Gehlenite
Larnite
Spurrite
Ye'elimite
References
Calcium minerals
Chromium minerals
Aluminium minerals
Sulfate minerals
Hexagonal minerals
Minerals in space group 194 | Bentorite | [
"Chemistry"
] | 399 | [
"Hydrate minerals",
"Hydrates"
] |
4,268,240 | https://en.wikipedia.org/wiki/Resonance-enhanced%20multiphoton%20ionization | Resonance-enhanced multiphoton ionization (REMPI) is a technique applied to the spectroscopy of atoms and small molecules. In practice, a tunable laser can be used to access an excited intermediate state. The selection rules associated with a two-photon or other multiphoton photoabsorption are different from the selection rules for a single photon transition. The REMPI technique typically involves a resonant single or multiple photon absorption to an electronically excited intermediate state followed by another photon which ionizes the atom or molecule. The light intensity to achieve a typical multiphoton transition is generally significantly larger than the light intensity to achieve a single photon photoabsorption. Because of this, subsequent photoabsorption is often very likely. An ion and a free electron will result if the photons have imparted enough energy to exceed the ionization threshold energy of the system. In many cases, REMPI provides spectroscopic information that can be unavailable to single photon spectroscopic methods, for example rotational structure in molecules is easily seen with this technique.
REMPI is usually carried out with a focused, frequency-tunable laser beam that forms a small-volume plasma. In REMPI, m photons are first absorbed simultaneously by an atom or molecule in the sample to bring it to an excited state. Another n photons are then absorbed to generate an electron and ion pair. The so-called (m+n) REMPI is a nonlinear optical process, which can only occur within the focus of the laser beam, so a small-volume plasma is formed near the laser focal region. If the energy of m photons does not match any state, an off-resonant transition can occur with an energy defect ΔE; however, the electron is very unlikely to remain in that state. For large detuning, it resides there only during the time Δt, and the uncertainty principle requires Δt·ΔE ≤ ħ, where ħ = h/2π and h is the Planck constant (6.6261×10⁻³⁴ J·s). Such transitions and states are called virtual, unlike real transitions to states with long lifetimes. The probability of a real transition is many orders of magnitude higher than that of a virtual transition, which is the origin of the resonance enhancement effect.
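As a back-of-the-envelope illustration of the photon-energy bookkeeping in an (m+n) scheme, the sketch below reports the m-photon (resonant-step) energy and checks whether the total (m+n)-photon energy exceeds the ionization energy. The 226 nm wavelength, 1+1 scheme, and 9.26 eV ionization energy are illustrative assumptions (roughly the numbers quoted for one-colour REMPI of NO), not values taken from this article.

```python
# Hedged sketch of the energy bookkeeping behind an (m + n) REMPI scheme.
# Wavelength, scheme, and ionization energy below are illustrative assumptions.

H_PLANCK = 6.62607015e-34   # J*s
C_LIGHT = 2.99792458e8      # m/s
EV = 1.602176634e-19        # J per eV

def photon_energy_ev(wavelength_nm):
    """Single-photon energy in eV for a given vacuum wavelength."""
    return H_PLANCK * C_LIGHT / (wavelength_nm * 1e-9) / EV

def rempi_scheme(wavelength_nm, m, n, ionization_ev):
    e_photon = photon_energy_ev(wavelength_nm)
    e_resonant = m * e_photon          # energy available at the resonant step
    e_total = (m + n) * e_photon       # energy after the ionizing photon(s)
    return e_photon, e_resonant, e_total, e_total > ionization_ev

e_ph, e_res, e_tot, ionizes = rempi_scheme(226.0, m=1, n=1, ionization_ev=9.26)
print(f"photon {e_ph:.2f} eV, resonant step {e_res:.2f} eV, "
      f"total {e_tot:.2f} eV, exceeds IP: {ionizes}")
```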
Rydberg states
High photon intensity experiments can involve multiphoton processes with the absorption of integer multiples of the photon energy. In experiments that involve a multiphoton resonance, the intermediate is often a low-lying Rydberg state, and the final state is often an ion. The initial state of the system, photon energy, angular momentum and other selection rules can help in determining the nature of the intermediate state. This approach is exploited in resonance-enhanced multiphoton ionization spectroscopy (REMPI). The technique is in wide use in both atomic and molecular spectroscopy. An advantage of the REMPI technique is that the ions can be detected with almost complete efficiency and even time resolved for their mass. It is also possible to gain additional information by performing experiments to look at the energy of the liberated photoelectron in these experiments.
Microwave detection
Coherent microwave scattering from electrons in REMPI-induced plasma filaments adds the capability to measure selectively ionized species with high spatial and temporal resolution, allowing nonintrusive determination of concentration profiles without the use of physical probes or electrodes. It has been applied to the detection of species such as argon, xenon, nitric oxide, carbon monoxide, atomic oxygen, and methyl radicals within enclosed cells, in open air, and in atmospheric flames.
Microwave detection is based on homodyne or heterodyne techniques. These can significantly increase the detection sensitivity by suppressing noise, and they can follow sub-nanosecond plasma generation and evolution. The homodyne detection method mixes the detected microwave electric field with its own source to produce a signal proportional to the product of the two. The signal frequency is converted down from tens of gigahertz to below one gigahertz so that the signal can be amplified and observed with standard electronic devices. Because of the high sensitivity associated with the homodyne detection method, the lack of background noise in the microwave regime, and the capability of time gating of the detection electronics synchronous with the laser pulse, very high signal-to-noise ratios (SNRs) are possible even with milliwatt microwave sources. These high SNRs allow the temporal behavior of the microwave signal to be followed on a sub-nanosecond time scale. Thus the lifetime of electrons within the plasma can be recorded. By utilizing a microwave circulator, a single microwave horn transceiver has been built, which significantly simplifies the experimental setup.
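The following toy numerical sketch mimics the down-conversion step described above: a scattered field on an assumed 10 GHz carrier, modulated by a made-up nanosecond-scale decay of the electron density, is multiplied by the local-oscillator reference and low-pass filtered, leaving a baseband signal proportional to the slowly varying scattered-field amplitude. All numbers are illustrative.

```python
# Toy homodyne down-conversion sketch; the carrier frequency, decay constant,
# and sample rate are made-up illustrative values.
import numpy as np

fs = 400e9                        # simulation sample rate (Hz)
t = np.arange(0, 20e-9, 1 / fs)   # 20 ns window
f_mw = 10e9                       # assumed 10 GHz microwave source

envelope = np.exp(-t / 5e-9)                          # assumed electron-density decay
scattered = envelope * np.cos(2 * np.pi * f_mw * t)   # field scattered from the plasma
reference = np.cos(2 * np.pi * f_mw * t)              # local oscillator from the same source

mixed = scattered * reference     # mixer output: baseband term + component at 2*f_mw
# Crude low-pass filter: a moving average over one carrier period removes the 2*f_mw term.
window = int(fs / f_mw)
baseband = np.convolve(mixed, np.ones(window) / window, mode="same")

# Away from the edges, 2 * baseband tracks the envelope, i.e. the electron signal.
core = slice(window, -window)
print(np.allclose(2 * baseband[core], envelope[core], atol=0.05))   # True
```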
Detection in the microwave region has numerous advantages over optical detection. Using homodyne or heterodyne technologies, the electric field rather than the power can be detected, so much better noise rejection can be achieved. In contrast to optical heterodyne techniques, no alignment or mode matching of the reference is necessary. The long wavelength of the microwaves leads to effective point coherent scattering from the plasma in the laser focal volume, so phase matching is unimportant and scattering in the backward direction is strong. Many microwave photons can be scattered from a single electron, so the amplitude of the scattering can be increased by increasing the power of the microwave transmitter. The low energy of the microwave photons corresponds to thousands of times more photons per unit energy than in the visible region, so shot noise is drastically reduced. For weak ionization characteristic of trace species diagnostics, the measured electric field is a linear function of the number of electrons, which is directly proportional to the trace species concentration. Furthermore, there is very little solar or other natural background radiation in the microwave spectral region.
See also
Rydberg ionization spectroscopy
Compare with laser-induced fluorescence (LIF)
References
Spectroscopy
Ionization | Resonance-enhanced multiphoton ionization | [
"Physics",
"Chemistry"
] | 1,198 | [
"Ionization",
"Physical phenomena",
"Molecular physics",
"Spectrum (physical sciences)",
"Instrumental analysis",
"Spectroscopy"
] |
4,268,332 | https://en.wikipedia.org/wiki/Cardinal%20point%20%28optics%29 | In Gaussian optics, the cardinal points consist of three pairs of points located on the optical axis of a rotationally symmetric, focal, optical system. These are the focal points, the principal points, and the nodal points; there are two of each. For ideal systems, the basic imaging properties such as image size, location, and orientation are completely determined by the locations of the cardinal points; in fact, only four points are necessary: the two focal points and either the principal points or the nodal points. The only ideal system that has been achieved in practice is a plane mirror; however, the cardinal points are widely used to approximate the behavior of real optical systems. Cardinal points provide a way to analytically simplify an optical system with many components, allowing the imaging characteristics of the system to be approximately determined with simple calculations.
Explanation
The cardinal points lie on the optical axis of an optical system. Each point is defined by the effect the optical system has on rays that pass through that point, in the paraxial approximation. The paraxial approximation assumes that rays travel at shallow angles with respect to the optical axis, so that sin θ ≈ θ and tan θ ≈ θ. Aperture effects are ignored: rays that do not pass through the aperture stop of the system are not considered in the discussion below.
Focal points and planes
The front focal point of an optical system, by definition, has the property that any ray that passes through it will emerge from the system parallel to the optical axis. The rear (or back) focal point of the system has the reverse property: rays that enter the system parallel to the optical axis are focused such that they pass through the rear focal point.
The front and rear (or back) focal planes are defined as the planes, perpendicular to the optic axis, which pass through the front and rear focal points. An object infinitely far from the optical system forms an image at the rear focal plane. For an object at a finite distance, the image is formed at a different location, but rays that leave the object parallel to one another cross at the rear focal plane.
A diaphragm or "stop" at the rear focal plane of a lens can be used to filter rays by angle, since an aperture centred on the optical axis there will only pass rays that were emitted from the object at a sufficiently small angle from the optical axis. Using a sufficiently small aperture in the rear focal plane will make the lens object-space telecentric.
Similarly, the allowed range of angles on the output side of the lens can be filtered by putting an aperture at the front focal plane of the lens (or a lens group within the overall lens), and a sufficiently small aperture will make the lens image-space telecentric. This is important for DSLR cameras having CCD sensors. The pixels in these sensors are more sensitive to rays that hit them straight on than to those that strike at an angle. A lens that does not control the angle of incidence at the detector will produce pixel vignetting in the images.
Principal planes and points
The two principal planes of a lens have the property that a ray emerging from the lens appears to have crossed the rear principal plane at the same distance from the optical axis at which that ray appeared to cross the front principal plane, as viewed from the front of the lens. This means that the lens can be treated as if all of the refraction happened at the principal planes, and rays travel parallel to the optical axis between the planes. (Linear magnification between the principal planes is +1.) The principal planes are crucial in defining the properties of an optical system, since the magnification of the system is determined by the distance from an object to the front principal plane and the distance from the rear principal plane to the object's image. The principal points are the points where the principal planes cross the optical axis.
If the medium surrounding an optical system has a refractive index of 1 (e.g., air or vacuum), then the distance from each principal plane to the corresponding focal point is just the focal length of the system. In the more general case, the distance to the foci is the focal length multiplied by the index of refraction of the medium.
For a single lens of refractive index n surrounded by air, the locations of the principal points H and H' with respect to the respective lens vertices are given by x_H = -f(n - 1)d/(n r2) and x_H' = -f(n - 1)d/(n r1), where f is the focal length of the lens, d is its thickness, and r1 and r2 are the radii of curvature of its surfaces. Positive signs indicate distances to the right of the corresponding vertex, and negative to the left.
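As a concrete illustration of these relations, the sketch below computes the thick-lens focal length (via the lensmaker's equation) together with the two principal-point offsets for an assumed symmetric biconvex lens; the index, radii, and thickness are made-up example values.

```python
# Hedged sketch: thick-lens focal length and principal-plane locations
# (lensmaker's equation); the lens parameters are illustrative only.

def thick_lens_cardinal_points(n, r1, r2, d):
    """Return (f, x_H, x_Hp) for a lens of index n in air.

    r1, r2: surface radii of curvature (positive if the centre of curvature
            lies to the right of the vertex); d: centre thickness.
    x_H is measured from the front vertex, x_Hp from the rear vertex;
    positive values lie to the right of the corresponding vertex.
    """
    inv_f = (n - 1.0) * (1.0 / r1 - 1.0 / r2 + (n - 1.0) * d / (n * r1 * r2))
    f = 1.0 / inv_f
    x_H = -f * (n - 1.0) * d / (n * r2)   # front principal point offset
    x_Hp = -f * (n - 1.0) * d / (n * r1)  # rear principal point offset
    return f, x_H, x_Hp

# Symmetric biconvex lens with BK7-like index (illustrative numbers, in mm)
f, x_H, x_Hp = thick_lens_cardinal_points(n=1.5168, r1=50.0, r2=-50.0, d=5.0)
print(f"f = {f:.2f} mm, H at {x_H:+.2f} mm from the front vertex, "
      f"H' at {x_Hp:+.2f} mm from the rear vertex")
```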
For a thin lens in air, the principal planes both lie at the location of the lens. The point where they cross the optical axis is sometimes misleadingly called the optical centre of the lens. For a real lens the principal planes do not necessarily pass through the centre of the lens and can even be outside the lens.
Nodal points
The front and rear nodal points of a lens have the property that a ray aimed at one of them will be refracted by the lens such that it appears to have come from the other with the same angle to the optical axis. (Angular magnification between nodal points is +1.) The nodal points therefore do for angles what the principal planes do for transverse distance. If the medium on both sides of an optical system is the same (e.g., air or vacuum), then the front and rear nodal points coincide with the front and rear principal points, respectively.
Gauss's original 1841 paper only discussed the main rays through the focal points. A colleague, Johann Listing, was the first to describe the nodal points in 1845 to evaluate the human eye, where the image is in fluid. The cardinal points were all included in a single diagram as early as 1864 (Donders), with the object in air and the image in a different medium.
The nodal points characterize a ray that goes through the centre of a lens without any angular deviation. For a lens in air with the aperture stop at the principal planes, this would be a chief ray since the nodal points and principal points coincide in this case. This is a valuable addition in its own right to what has come to be called "Gaussian optics", and if the image was in fluid instead, then that same ray would refract into the new medium, as it does in the diagram to the right. A ray through the nodal points has parallel input and output portions (blue). A simple method to find the rear nodal point for a lens with air on one side and fluid on the other is to take the rear focal length and divide it by the image medium index, which gives the effective focal length (EFL) of the lens. The EFL is the distance from the rear nodal point to the rear focal point.
The power of a lens is equal to n'/f', where n' is the refractive index of the image medium and f' is the rear focal length, or equivalently 1/EFL. For collimated light, a lens could be placed in air at the second nodal point of an optical system to give the same paraxial properties as an original lens system with an image in fluid. The power of the entire eye is about 60 dioptres, for example. Similarly, a lens used totally in fluid, like an intraocular lens, has the same definition for power, with an average value of about 21 dioptres.
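A minimal worked example of the power relation, using round textbook-style values for the eye (an assumed image-medium index of about 1.336 and a total power of 60 dioptres):

```python
# Minimal sketch of the relation P = n'/f' = 1/EFL; the 60-dioptre power and
# the image-medium index of 1.336 are assumed round values for the human eye.

P_eye = 60.0        # total power of the eye in dioptres (1/m), assumed
n_image = 1.336     # assumed refractive index of the vitreous humour

EFL = 1.0 / P_eye           # effective focal length (m)
f_rear = n_image * EFL      # rear focal length inside the eye (m)

print(f"EFL ~ {EFL * 1e3:.1f} mm, rear focal length ~ {f_rear * 1e3:.1f} mm")
# -> EFL ~ 16.7 mm, rear focal length ~ 22.3 mm
```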
Nodal points and the eye
The eye itself has a second special use of the nodal point that tends to be obscured by paraxial discussions. The cornea and retina are highly curved, unlike most imaging systems, and the optical design of the eye has the property that a "direction line" that is parallel to the input rays can be used to find the magnification or to scale retinal locations. This line passes approximately through the 2nd nodal point, but rather than being an actual paraxial ray, it identifies the image formed by ray bundles that pass through the centre of the pupil. The terminology comes from Volkmann in 1836, but most discussions incorrectly imply that paraxial properties of rays extend to very large angles, rather than recognizing this as a unique property of the eye's design. This scaling property is well-known, very useful, and very simple: angles drawn with a ruler centred on the posterior pole of the lens on a cross-section of the eye can approximately scale the retina over more than an entire hemisphere. It is only in the 2000s that the limitations of this approximation have become apparent, with an exploration into why some intraocular lens (IOL) patients see dark shadows in the far periphery (negative dysphotopsia, which is probably due to the IOL being much smaller than the natural lens.)
Optical center
Diagram caption: how to find the optical center O of a spherical lens; N and N′ are the lens's nodal points.
The optical center of a spherical lens is a point such that if a ray passes through it, the ray's path after leaving the lens will be parallel to its path before it entered.
In the figure at right, the points A and B are where parallel lines drawn along the radii of curvature R1 and R2 meet the lens surfaces. As a result, the dashed lines tangent to the surfaces at A and B are also parallel. Because the two triangles OBC2 and OAC1 are similar (i.e., their angles are the same), OC2/OC1 = R2/R1. For any choice of A and B, the radii of curvature R1 and R2 are the same and the locations of the curvature centers C1 and C2 are the same. As a result, the optical center location O, defined on the optical axis by the fixed ratio OC2/OC1 = R2/R1, is fixed for a given lens.
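A short sketch of this construction, assuming the front vertex sits at the origin of the axis and using illustrative radii and thickness; it reproduces the expected special cases (the middle of a symmetric biconvex lens, and the curved-surface vertex of a plano-convex lens):

```python
# Hedged sketch of the similar-triangle construction above: locate the optical
# center O on the axis from the surface radii and the lens thickness.
# Positions are measured from the front vertex; the values are illustrative.

def optical_center(r1, r2, d):
    """Axial position of the optical center O (front vertex at x = 0).

    C1 = r1 and C2 = d + r2 are the centres of curvature; O divides the
    axis so that OC2 / OC1 = r2 / r1 (signed distances).
    """
    c1 = r1
    c2 = d + r2
    return (r1 * c2 - r2 * c1) / (r1 - r2)

# Symmetric biconvex lens: O sits at the middle of the lens, d/2.
print(optical_center(r1=50.0, r2=-50.0, d=5.0))   # 2.5
# Plano-convex lens (flat rear surface approximated by a very large radius):
# O sits essentially at the curved front vertex.
print(optical_center(r1=50.0, r2=-1e9, d=5.0))    # ~0
```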
Photography
The nodal points are widely misunderstood in photography, where it is commonly asserted that the light rays "intersect" at "the nodal point", that the iris diaphragm of the lens is located there, and that this is the correct pivot point for panoramic photography, so as to avoid parallax error. These claims generally arise from confusion about the optics of camera lenses, as well as confusion between the nodal points and the other cardinal points of the system. A better choice of the point about which to pivot a camera for panoramic photography can be shown to be the centre of the system's entrance pupil. On the other hand, swing-lens cameras with fixed film position rotate the lens about the rear nodal point to stabilize the image on the film.
Surface vertices
In optics, surface vertices are the points where each optical surface crosses the optical axis. They are important primarily because they are physically measurable parameters for the optical element positions, and so the positions of the cardinal points of the optical system must be known with respect to the surface vertices to describe the system.
In anatomy, the surface vertices of the eye's lens are called the anterior and posterior poles of the lens.
Modeling optical systems as mathematical transformations
In geometrical optics, for each object ray entering an optical system, a single and unique image ray exits from the system. In mathematical terms, the optical system performs a transformation that maps every object ray to an image ray. The object ray and its associated image ray are said to be conjugate to each other. This term also applies to corresponding pairs of object and image points and planes. The object and image rays, points, and planes are considered to be in two distinct optical spaces, object space and image space; additional intermediate optical spaces may be used as well.
Rotationally symmetric optical systems; optical axis, axial points, and meridional planes
An optical system is rotationally symmetric if its imaging properties are unchanged by rotation about some axis. This (unique) axis of rotational symmetry is the optical axis of the system. Optical systems can be folded using plane mirrors; the system is still considered to be rotationally symmetric if it possesses rotational symmetry when unfolded. Any point on the optical axis (in any space) is an axial point.
Rotational symmetry greatly simplifies the analysis of optical systems, which otherwise must be analyzed in three dimensions. Rotational symmetry allows the system to be analyzed by considering only rays confined to a single transverse plane containing the optical axis. Such a plane is called a meridional plane; it is a cross-section through the system.
Ideal, rotationally symmetric, optical imaging system
An ideal, rotationally symmetric, optical imaging system must meet three criteria:
All rays "originating" from object point converge to a single and unique image point; imaging is stigmatic.
Object planes perpendicular to the optical axis are conjugate to image planes perpendicular to the axis.
The image of an object confined to a plane normal to the axis is geometrically similar to the object.
In some optical systems imaging is stigmatic for one or perhaps a few object points, but to be an ideal system imaging must be stigmatic for every object point. In an ideal system, every object point maps to a different image point.
Unlike rays in mathematics, optical rays extend to infinity in both directions. Rays are real when they are in the part of the optical system to which they apply, and are virtual elsewhere. For example, object rays are real on the object side of the optical system, while image rays are real on the image side of the system. In stigmatic imaging, an object ray intersecting any specific point in object space must be conjugate to an image ray intersecting the conjugate point in image space. A consequence is that every point on an object ray is conjugate to some point on the conjugate image ray.
Geometrical similarity implies the image is a scale model of the object. There is no restriction on the image's orientation; the image may be inverted or otherwise rotated with respect to the object.
Focal and afocal systems, focal points
Afocal systems have no focal points, principal points, or nodal points. In such systems an object ray parallel to the optical axis is conjugate to an image ray parallel to the optical axis. A system is focal if an object ray parallel to the axis is conjugate to an image ray that intersects the optical axis. The intersection of the image ray with the optical axis is the focal point in image space. Focal systems also have an axial object point F such that any ray through F is conjugate to an image ray parallel to the optical axis. F is the object space focal point of the system.
Transformation
The transformation between object space and image space is completely defined by the cardinal points of the system, and these points can be used to map any point on the object to its conjugate image point.
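As a sketch of such a transformation for a focal system in air, the following maps an axial object point to its conjugate image point using the cardinal points, in both the Gaussian (principal-plane) and Newtonian (focal-point) forms; the focal length and object distance are illustrative.

```python
# Minimal sketch, assuming a focal system in air so the front and rear focal
# lengths are equal. Distances are measured from the principal planes with a
# "real is positive" sign convention; the numbers are illustrative only.

def conjugate_image(f, s_obj):
    """Return (image distance from the rear principal plane, transverse magnification)."""
    s_img = 1.0 / (1.0 / f - 1.0 / s_obj)   # Gaussian imaging equation
    m = -s_img / s_obj                      # transverse magnification
    return s_img, m

f = 100.0                                   # assumed focal length (mm)
s_img, m = conjugate_image(f, s_obj=300.0)
print(f"image {s_img:.1f} mm behind the rear principal plane, magnification {m:.2f}")
# -> image 150.0 mm behind the rear principal plane, magnification -0.50

# Newtonian form, with distances measured from the focal points: z * z' = -f**2
z = -200.0                  # object 200 mm in front of the front focal point
z_prime = -(f ** 2) / z     # +50 mm: image 50 mm beyond the rear focal point
print(z_prime)
```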
See also
Film plane
Pinhole camera model
Radius of curvature (optics)
Vergence (optics)
Notes and references
Geometrical optics
Geometric centers
Science of photography | Cardinal point (optics) | [
"Physics",
"Mathematics"
] | 3,016 | [
"Point (geometry)",
"Geometric centers",
"Symmetry"
] |
4,268,488 | https://en.wikipedia.org/wiki/Sand%20bath | A sand bath is a common piece of laboratory equipment made from a container filled with heated sand. It is used to evenly heat another container, most often during a chemical reaction.
A sand bath is most commonly used in conjunction with a hot plate or heating mantle. A beaker is filled with sand or metal pellets (called shot) and is placed on the plate or mantle. The reaction vessel is then partially covered by sand or pellets. The sand or shot then conducts the heat from the plate to all sides of the reaction vessel.
This technique allows a reaction vessel to be heated throughout with minimal stirring, as opposed to heating the bottom of the vessel and waiting for convection to heat the remainder, cutting down on both the duration of the reaction and the possibility of side reactions that may occur at higher temperatures.
A variation on this theme is the water bath in which the sand is replaced with water. It can be used to keep a reaction vessel at the temperature of boiling water until all water is evaporated (see Standard enthalpy change of vaporization).
Sand baths are one of the oldest known pieces of laboratory equipment, having been used by the alchemists. In Arabic alchemy, a sand bath was known as a qadr. In Latin alchemy, a sand bath was called balneum siccum, balneum cineritium, or balneum arenosum.
See also
Heat bath
Water bath
Oil bath
Notes
References
External links
https://web.archive.org/web/20110604144037/http://digicoll.library.wisc.edu/cgi-bin/HistSciTech/HistSciTech-idx?type=turn&entity=HistSciTech000900240229&isize=L
Laboratory equipment
Thermodynamics
Alchemical tools | Sand bath | [
"Physics",
"Chemistry",
"Mathematics"
] | 389 | [
"Thermodynamics",
"Dynamical systems"
] |
4,268,565 | https://en.wikipedia.org/wiki/Bookmark%20manager | A bookmark manager is any software program or feature designed to store, organize, and display web bookmarks. The bookmarks feature included in each major web browser is a rudimentary bookmark manager. More capable bookmark managers are available online as web apps, mobile apps, or browser extensions, and may display bookmarks as text links or graphical tiles (often depicting icons). Social bookmarking websites are bookmark managers. Start page browser extensions, new tab page browser extensions, and some browser start pages, also have bookmark presentation and organization features, which are typically tile-based. Some more general programs, such as certain note taking apps, have bookmark management functionality built-in.
See also
Bookmark destinations
Deep links
Home pages
Types of bookmark management
Enterprise bookmarking
Comparison of enterprise bookmarking platforms
Social bookmarking
List of social bookmarking websites
Other weblink-based systems
Search engine
Comparison of search engines with social bookmarking systems
Search engine results page
Web directory
Lists of websites
References
Software engineering | Bookmark manager | [
"Technology",
"Engineering"
] | 209 | [
"Software engineering",
"Systems engineering",
"Information technology",
"Computer engineering"
] |
4,268,620 | https://en.wikipedia.org/wiki/Inflatable%20space%20structures | Inflatable space structures are structures which use pressurized air to maintain shape and rigidity. The approach has been employed from the early days of the space program, with satellites such as Echo, to the impact attenuation system that enabled the successful landing of the Pathfinder spacecraft and rover on Mars in 1997. Inflatable structures are attractive candidates for space structures given their low weight and hence easy transportability.
Application
Inflatable space structures use pressurized air or gas to maintain shape and rigidity. Notable examples of terrestrial inflatable structures include inflatable boats, and some military tents. The airships of the twentieth century are examples of the concept applied in the aviation environment.
NASA has investigated inflatable, deployable structures since the early 1950s. Concepts include inflatable satellites, booms, and antennas. Inflatable heatshields, decelerators, and airbags can be used for entry, descent and landing applications. Inflatable habitats, airlocks, and space stations are possible for in-space living spaces and surface exploration missions.
The Echo 1 satellite, launched in 1960, was a large inflatable satellite with a diameter of 30 meters and coated with reflective material that allowed radio signals to be bounced off its surface. The satellite was sent to orbit in a flat-folded configuration and inflated once in orbit. The airbags used on the Mars Pathfinder mission descent and landing in 1997 are an example of use of an inflatable system for impact attenuation.
Space Solar Power (SSP) solutions employing inflatable structures have been designed and qualified for space by NASA engineers.
NASA is testing a deployable heat shield solution in space as a secondary payload on the launch that will deliver the NASA JPSS-2 satellite in late 2022. The Low-Earth Orbit Flight Test of an Inflatable Decelerator (LOFTID) is designed to demonstrate aerobraking and re-entry from 18,000 miles per hour after separation from the launch vehicle adapter structure.
The space station concepts developed by Bigelow Aerospace are examples of inflatable crewed orbital space habitats.
References
Structural engineering
Spacecraft components | Inflatable space structures | [
"Engineering"
] | 441 | [
"Structural engineering",
"Construction",
"Civil engineering",
"Architecture stubs",
"Architecture"
] |
4,268,863 | https://en.wikipedia.org/wiki/Tetracyanoquinodimethane | Tetracyanoquinodimethane (TCNQ) is an organic compound with the chemical formula . It is an orange crystalline solid. This cyanocarbon, a relative of para-quinone, is an electron acceptor that is used to prepare charge transfer salts, which are of interest in molecular electronics.
Preparation and structure
TCNQ is prepared by the condensation of 1,4-cyclohexanedione with malononitrile, followed by dehydrogenation of the resulting diene with bromine:
The molecule is planar, with D2h symmetry.
Reactions
Like tetracyanoethylene (TCNE), TCNQ is easily reduced to give a blue-coloured radical anion. The reduction potential is about −0.3 V relative to the ferrocene/ferrocenium couple. This property is exploited in the development of charge-transfer salts. TCNQ also forms complexes with electron-rich metal complexes.
Charge transfer salts
TCNQ achieved great attention because it forms charge-transfer salts with high electrical conductivity. These discoveries were influential in the development of organic electronics. Illustrative is the product from treatment of TCNQ with the electron donor tetrathiafulvalene (TTF): TCNQ forms an ion pair, the TTF-TCNQ complex, in which TCNQ is the acceptor. This salt crystallizes as a one-dimensionally stacked structure, consisting of segregated stacks of cations and anions of the donors and the acceptors, respectively. The complex crystal is an organic semiconductor that exhibits metallic electrical conductivity.
Related compounds
Tetracyanoethylene, another cyanocarbon that functions as an electron acceptor.
Tetrathiafulvalene, another organic compound that functions as an electron donor.
References
Nitriles
Cyclohexadienes
Vinylidene compounds
Organic semiconductors
Conjugated dienes | Tetracyanoquinodimethane | [
"Chemistry"
] | 405 | [
"Semiconductor materials",
"Molecular electronics",
"Functional groups",
"Nitriles",
"Organic semiconductors"
] |
4,269,567 | https://en.wikipedia.org/wiki/Dog | The dog (Canis familiaris or Canis lupus familiaris) is a domesticated descendant of the wolf. Also called the domestic dog, it was selectively bred from an extinct population of wolves during the Late Pleistocene by hunter-gatherers. The dog was the first species to be domesticated by humans, over 14,000 years ago and before the development of agriculture. Experts estimate that due to their long association with humans, dogs have gained the ability to thrive on a starch-rich diet that would be inadequate for other canids.
Dogs have been bred for desired behaviors, sensory capabilities, and physical attributes. Dog breeds vary widely in shape, size, and color. They have the same number of bones (with the exception of the tail), powerful jaws that house around 42 teeth, and well-developed senses of smell, hearing, and sight. Compared to humans, dogs have an inferior visual acuity, a superior sense of smell, and a relatively large olfactory cortex. They perform many roles for humans, such as hunting, herding, pulling loads, protection, companionship, therapy, aiding disabled people, and assisting police and the military.
Communication in dogs includes eye gaze, facial expression, vocalization, body posture (including movements of bodies and limbs), and gustatory communication (scents, pheromones, and taste). They mark their territories by urinating on them, which is more likely when entering a new environment. Over the millennia, dogs became uniquely adapted to human behavior; this adaptation includes being able to understand and communicate with humans. As such, the human–canine bond has been a topic of frequent study, and dogs' influence on human society has given them the sobriquet of "man's best friend".
The global dog population is estimated at 700 million to 1 billion, distributed around the world. The dog is the most popular pet in the United States, present in 34–40% of households. Developed countries make up approximately 20% of the global dog population, while around 75% of dogs are estimated to be from developing countries, mainly in the form of feral and community dogs.
Taxonomy
Dogs are domesticated members of the family Canidae. They are classified as a subspecies of Canis lupus, along with wolves and dingoes. Dogs were domesticated from wolves over 14,000 years ago by hunter-gatherers, before the development of agriculture. The remains of the Bonn–Oberkassel dog, buried alongside humans between 14,000 and 15,000 years ago, are the earliest to be conclusively identified as a domesticated dog. Genetic studies show that dogs likely diverged from wolves between 27,000 and 40,000 years ago. The dingo and the related New Guinea singing dog resulted from the geographic isolation and feralization of dogs in Oceania over 8,000 years ago.
Dogs, wolves, and dingoes have sometimes been classified as separate species. In 1758, the Swedish botanist and zoologist Carl Linnaeus assigned the genus name Canis (which is the Latin word for "dog") to the domestic dog, the wolf, and the golden jackal in his book, Systema Naturae. He classified the domestic dog as Canis familiaris and, on the next page, classified the grey wolf as Canis lupus. Linnaeus considered the dog to be a separate species from the wolf because of its upturning tail (cauda recurvata in Latin), which is not found in any other canid. In the 2005 edition of Mammal Species of the World, mammalogist W. Christopher Wozencraft listed the wolf as a wild subspecies of Canis lupus and proposed two additional subspecies: familiaris, as named by Linnaeus in 1758, and dingo, named by Meyer in 1793. Wozencraft included hallstromi (the New Guinea singing dog) as another name (junior synonym) for the dingo. This classification was informed by a 1999 mitochondrial DNA study.
The classification of dingoes is disputed and a political issue in Australia. Classifying dingoes as wild dogs simplifies reducing or controlling dingo populations that threaten livestock. Treating dingoes as a separate species allows conservation programs to protect the dingo population. Dingo classification affects wildlife management policies, legislation, and societal attitudes. In 2019, a workshop hosted by the IUCN/Species Survival Commission's Canid Specialist Group considered the dingo and the New Guinea singing dog to be feral Canis familiaris. Therefore, it did not assess them for the IUCN Red List of threatened species.
Domestication
The earliest remains generally accepted to be those of a domesticated dog were discovered in Bonn-Oberkassel, Germany. Contextual, isotopic, genetic, and morphological evidence shows that this dog was not a local wolf. The dog was dated to 14,223 years ago and was found buried along with a man and a woman, all three having been sprayed with red hematite powder and buried under large, thick basalt blocks. The dog had died of canine distemper. This timing indicates that the dog was the first species to be domesticated in the time of hunter-gatherers, which predates agriculture. Earlier remains dating back to 30,000 years ago have been described as Paleolithic dogs, but their status as dogs or wolves remains debated because considerable morphological diversity existed among wolves during the Late Pleistocene.
DNA sequences show that all ancient and modern dogs share a common ancestry and descended from an ancient, extinct wolf population that was distinct from any modern wolf lineage. Some studies have posited that all living wolves are more closely related to each other than to dogs, while others have suggested that dogs are more closely related to modern Eurasian wolves than to American wolves.
The dog is a domestic animal that likely travelled a commensal pathway into domestication (i.e. humans initially neither benefitted nor were harmed by wild dogs eating refuse from their camps). The questions of when and where dogs were first domesticated remain uncertain. Genetic studies suggest a domestication process commencing over 25,000 years ago, in one or several wolf populations in either Europe, the high Arctic, or eastern Asia. In 2021, a literature review of the current evidence infers that the dog was domesticated in Siberia 23,000 years ago by ancient North Siberians, then later dispersed eastward into the Americas and westward across Eurasia, with dogs likely accompanying the first humans to inhabit the Americas. Some studies have suggested that the extinct Japanese wolf is closely related to the ancestor of domestic dogs.
In 2018, a study identified 429 genes that differed between modern dogs and modern wolves. As the differences in these genes could also be found in ancient dog fossils, these were regarded as being the result of the initial domestication and not from recent breed formation. These genes are linked to neural crest and central nervous system development. These genes affect embryogenesis and can confer tameness, smaller jaws, floppy ears, and diminished craniofacial development, which distinguish domesticated dogs from wolves and are considered to reflect domestication syndrome. The study concluded that during early dog domestication, the initial selection was for behavior. This trait is influenced by those genes which act in the neural crest, which led to the phenotypes observed in modern dogs.
Breeds
There are around 450 official dog breeds, the most of any mammal. Dogs began diversifying in the Victorian era, when humans took control of their natural selection. Most breeds were derived from small numbers of founders within the last 200 years. Since then, dogs have undergone rapid phenotypic change and have been subjected to artificial selection by humans. The skull, body, and limb proportions between breeds display more phenotypic diversity than can be found within the entire order of carnivores. These breeds possess distinct traits related to morphology, which include body size, skull shape, tail phenotype, fur type, and colour. As such, humans have long used dogs for their desirable traits to complete or fulfill a certain work or role. Their behavioural traits include guarding, herding, hunting, retrieving, and scent detection. Their personality traits include hypersocial behavior, boldness, and aggression. Present-day dogs are dispersed around the world. An example of this dispersal is the numerous modern breeds of European lineage during the Victorian era.
Anatomy and physiology
Size and skeleton
Dogs are extremely variable in size, ranging from one of the largest breeds, the Great Dane, to one of the smallest, the Chihuahua. All healthy dogs, regardless of their size and type, have the same number of bones (with the exception of the tail), although there is significant skeletal variation between dogs of different types. The dog's skeleton is well adapted for running; the vertebrae on the neck and back have extensions for back muscles, consisting of epaxial muscles and hypaxial muscles, to connect to; the long ribs provide room for the heart and lungs; and the shoulders are unattached to the skeleton, allowing for flexibility.
Compared to the dog's wolf-like ancestors, selective breeding since domestication has seen the dog's skeleton increase in size for larger types such as mastiffs and miniaturised for smaller types such as terriers; dwarfism has been selectively bred for some types where short legs are preferred, such as dachshunds and corgis. Most dogs naturally have 26 vertebrae in their tails, but some with naturally short tails have as few as three.
The dog's skull has identical components regardless of breed type, but there is significant divergence in terms of skull shape between types. The three basic skull shapes are the elongated dolichocephalic type as seen in sighthounds, the intermediate mesocephalic or mesaticephalic type, and the very short and broad brachycephalic type exemplified by mastiff type skulls. The jaw contains around 42 teeth, and it has evolved for the consumption of flesh. Dogs use their carnassial teeth to cut food into bite-sized chunks, more especially meat.
Senses
Dogs' senses include vision, hearing, smell, taste, touch, and magnetoreception. One study suggests that dogs can feel small variations in Earth's magnetic field. Dogs prefer to defecate with their spines aligned in a north–south position in calm magnetic field conditions.
Dogs' vision is dichromatic; their visual world consists of yellows, blues, and grays. They have difficulty differentiating between red and green, and much like other mammals, the dog's eye is composed of two types of cone cells compared to the human's three. The divergence of the eye axis of dogs ranges from 12 to 25°, depending on the breed, which can have different retina configurations. The fovea centralis area of the eye is attached to a nerve fiber, and is the most sensitive to photons. Additionally, a study found that dogs' visual acuity was up to eight times less effective than a human, and their ability to discriminate levels of brightness was about two times worse than a human.
While the human brain is dominated by a large visual cortex, the dog brain is dominated by a large olfactory cortex. Dogs have roughly forty times more smell-sensitive receptors than humans, ranging from about 125 million to nearly 300 million in some dog breeds, such as bloodhounds. This sense of smell is the most prominent sense of the species; it detects chemical changes in the environment, allowing dogs to pinpoint the location of mating partners, potential stressors, resources, etc. Dogs also have an acute sense of hearing up to four times greater than that of humans. They can pick up the slightest sounds from roughly four times the distance at which humans can hear them.
Dogs have stiff, deeply embedded hairs known as whiskers that sense atmospheric changes, vibrations, and objects not visible in low light conditions. The lowermost part of the whiskers holds more receptor cells than other hair types, which helps alert dogs to objects that could collide with the nose, ears, or jaw. Whiskers likely also facilitate the movement of food towards the mouth.
Coat
The coats of domestic dogs are of two varieties: "double" being common in dogs (as well as wolves) originating from colder climates, made up of a coarse guard hair and a soft down hair, or "single", with the topcoat only. Breeds may have an occasional "blaze", stripe, or "star" of white fur on their chest or underside. Premature graying can occur in dogs as early as one year of age; this is associated with impulsive behaviors, anxiety behaviors, and fear of unfamiliar noise, people, or animals. Some dog breeds are hairless, while others have a very thick corded coat. The coats of certain breeds are often groomed to a characteristic style, for example, the Yorkshire Terrier's "show cut".
Dewclaw
A dog's dewclaw is the fifth digit on its forelimbs and hind legs. Dewclaws on the forelimbs are attached by bone and ligament, while the dewclaws on the hind legs are attached only by skin. Most dogs are not born with dewclaws on their hind legs, and some are without them on their forelimbs. Dogs' dewclaws consist of the proximal phalanges and distal phalanges. Some publications theorize that dewclaws in wolves, which usually do not have them, are a sign of hybridization with dogs.
Tail
A dog's tail is the terminal appendage of the vertebral column, which is made up of a string of 5 to 23 vertebrae enclosed in muscles and skin that support the dog's back extensor muscles. One of the primary functions of a dog's tail is to communicate their emotional state. The tail also helps the dog maintain balance by putting its weight on the opposite side of the dog's tilt, and it can also help the dog spread its anal gland's scent through the tail's position and movement. Dogs can have a violet gland (or supracaudal gland) characterized by sebaceous glands on the dorsal surface of their tails; in some breeds, it may be vestigial or absent. The enlargement of the violet gland in the tail, which can create a bald spot from hair loss, can be caused by Cushing's disease or an excess of sebum from androgens in the sebaceous glands.
A study suggests that dogs show asymmetric tail-wagging responses to different emotive stimuli. "Stimuli that could be expected to elicit approach tendencies seem to be associated with [a] higher amplitude of tail-wagging movements to the right side". Dogs can injure themselves by wagging their tails forcefully; this condition is called kennel tail, happy tail, bleeding tail, or splitting tail. In some hunting dogs, the tail is traditionally docked to avoid injuries. Some dogs can be born without tails because of a DNA variant in the T gene, which can also result in a congenitally short (bobtail) tail. Tail docking is opposed by many veterinary and animal welfare organisations such as the American Veterinary Medical Association and the British Veterinary Association. Evidence from veterinary practices and questionnaires showed that around 500 dogs would need to have their tail docked to prevent one injury.
Health
Numerous disorders have been known to affect dogs. Some are congenital and others are acquired. Dogs can acquire upper respiratory tract diseases including diseases that affect the nasal cavity, the larynx, and the trachea; lower respiratory tract diseases which includes pulmonary disease and acute respiratory diseases; heart diseases which includes any cardiovascular inflammation or dysfunction of the heart; haemopoietic diseases including anaemia and clotting disorders; gastrointestinal disease such as diarrhoea and gastric dilatation volvulus; hepatic disease such as portosystemic shunts and liver failure; pancreatic disease such as pancreatitis; renal disease; lower urinary tract disease such as cystitis and urolithiasis; endocrine disorders such as diabetes mellitus, Cushing's syndrome, hypoadrenocorticism, and hypothyroidism; nervous system diseases such as seizures and spinal injury; musculoskeletal disease such as arthritis and myopathies; dermatological disorders such as alopecia and pyoderma; ophthalmological diseases such as conjunctivitis, glaucoma, entropion, and progressive retinal atrophy; and neoplasia.
Common dog parasites are lice, fleas, fly larvae, ticks, mites, cestodes, nematodes, and coccidia. Taenia is a notable genus with 5 species in which dogs are the definitive host. Additionally, dogs are a source of zoonoses for humans. They are responsible for 99% of rabies cases worldwide; however, in some developed countries such as the UK, rabies is absent from dogs and is instead only transmitted by bats. Other common zoonoses are hydatid disease, leptospirosis, pasteurellosis, ringworm, and toxocariasis. Common infections in dogs include canine adenovirus, canine distemper virus, canine parvovirus, leptospirosis, canine influenza, and canine coronavirus. All of these conditions have vaccines available.
Dogs are the companion animal most frequently reported for exposure to toxins. Most poisonings are accidental and over 80% of reports of exposure to the ASPCA animal poisoning hotline are due to oral exposure. The most common substances people report exposure to are pharmaceuticals, toxic foods, and rodenticides. Data from the Pet Poison Helpline shows that human drugs are the most frequent cause of toxicosis death. The most common household products ingested are cleaning products. Most food-related poisonings involve theobromine poisoning (chocolate). Other common food poisonings include xylitol, Vitis (grapes, raisins, etc.) and Allium (garlic, onions, etc.). Pyrethrin insecticides were the most common cause of pesticide poisoning. Metaldehyde, a common pesticide for snails and slugs, typically causes severe outcomes when ingested by dogs.
Neoplasia is the most common cause of death for dogs. Other common causes of death are heart and renal failure. Their pathology is similar to that of humans, as is their response to treatment and their outcomes. Genes found in humans to be responsible for disorders are investigated in dogs as being the cause and vice versa.
Lifespan
The typical lifespan of dogs varies widely among breeds, but the median longevity (the age at which half the dogs in a population have died and half are still alive) is approximately 12.7 years. Obesity correlates negatively with longevity, with one study finding that obese dogs have a life expectancy approximately a year and a half shorter than dogs with a healthy weight.
In a 2024 UK study analyzing 584,734 dogs, it was concluded that purebred dogs lived longer than crossbred dogs, challenging the previous notion of the latter having the higher life expectancies. The authors noted that their study included "designer dogs" as crossbred and that purebred dogs were typically given better care than their crossbred counterparts, which likely influenced the outcome of the study. Other studies also show that fully mongrel dogs live about a year longer on average than dogs with pedigrees. Furthermore, small dogs with longer muzzles have been shown to have higher lifespans than larger medium-sized dogs with much more depressed muzzles. For free-ranging dogs, less than 1 in 5 reach sexual maturity, and the median life expectancy for feral dogs is less than half of dogs living with humans.
Reproduction
In domestic dogs, sexual maturity happens around six months to one year for both males and females, although this can be delayed until up to two years of age for some large breeds. This is the time at which female dogs will have their first estrous cycle, characterized by their vulvas swelling and producing discharges, usually lasting between 4 and 20 days. They will experience subsequent estrous cycles semiannually, during which the body prepares for pregnancy. At the peak of the cycle, females will become estrous, mentally and physically receptive to copulation. Because the ova survive and can be fertilized for a week after ovulation, more than one male can sire the same litter. Fertilization typically occurs two to five days after ovulation. After ejaculation, the dogs are coitally tied for around 5–30 minutes because of the male's bulbus glandis swelling and the female's constrictor vestibuli contracting; the male will continue ejaculating until they untie naturally due to muscle relaxation. 14–16 days after ovulation, the embryo attaches to the uterus, and after seven to eight more days, a heartbeat is detectable. Dogs bear their litters roughly 58 to 68 days after fertilization, with an average of 63 days, although the length of gestation can vary. An average litter consists of about six puppies.
Neutering
Neutering is the sterilization of animals via gonadectomy, which is an orchidectomy (castration) in dogs and ovariohysterectomy (spay) in bitches. Neutering reduces problems caused by hypersexuality, especially in male dogs. Spayed females are less likely to develop cancers affecting the mammary glands, ovaries, and other reproductive organs. However, neutering increases the risk of urinary incontinence in bitches, prostate cancer in dogs, and osteosarcoma, hemangiosarcoma, cruciate ligament rupture, pyometra, obesity, and diabetes mellitus in either sex.
Neutering is the most common surgical procedure in dogs less than a year old in the US and is seen as a control method for overpopulation. Neutering often occurs as early as 6–14 weeks in shelters in the US. The American Society for the Prevention of Cruelty to Animals (ASPCA) advises that dogs not intended for further breeding should be neutered so that they do not have undesired puppies that may later be euthanized. However, the Society for Theriogenology and the American College of Theriogenologists made a joint statement that opposes mandatory neutering; they said that the cause of overpopulation in the US is cultural.
Neutering is less common in most European countries, especially in Nordic countries—except for the UK, where it is common. In Norway, neutering is illegal unless for the benefit of the animal's health (e.g., ovariohysterectomy in case of ovarian or uterine neoplasia). Some European countries have similar laws to Norway, but their wording either explicitly allows for neutering for controlling reproduction or it is allowed in practice or by contradiction through other laws. Italy and Portugal have passed recent laws that promote it. Germany forbids early age neutering, but neutering is still allowed at the usual age. In Romania, neutering is mandatory except when a pedigree for selected breeds can be shown.
Inbreeding depression
A common breeding practice for pet dogs is to mate them between close relatives (e.g., between half- and full-siblings). In a study of seven dog breeds (the Bernese Mountain Dog, Basset Hound, Cairn Terrier, Brittany, German Shepherd Dog, Leonberger, and West Highland White Terrier), it was found that inbreeding decreases litter size and survival. Another analysis of data on 42,855 Dachshund litters found that as the inbreeding coefficient increased, litter size decreased and the percentage of stillborn puppies increased, thus indicating inbreeding depression. In a study of Boxer litters, 22% of puppies died before reaching 7 weeks of age. Stillbirth was the most frequent cause of death, followed by infection. Mortality due to infection increased significantly with increases in inbreeding.
Behavior
Dog behavior has been shaped by millennia of contact with humans. They have acquired the ability to understand and communicate with humans and are uniquely attuned to human behaviors. Behavioral scientists suggest that a set of social-cognitive abilities in domestic dogs that are not possessed by the dog's canine relatives or other highly intelligent mammals, such as great apes, are parallel to children's social-cognitive skills.
Most domestic animals were initially bred for the production of goods. Dogs, on the other hand, were selectively bred for desirable behavioral traits. In 2016, a study found that only 11 fixed genes showed variation between wolves and dogs. These gene variations indicate the occurrence of artificial selection and the subsequent divergence of behavior and anatomical features. These genes have been shown to affect the catecholamine synthesis pathway, with the majority of the genes affecting the fight-or-flight response (i.e., selection for tameness) and emotional processing. Compared to their wolf counterparts, dogs tend to be less timid and less aggressive, though some of these genes have been associated with aggression in certain dog breeds. Traits of high sociability and lack of fear in dogs may include genetic modifications related to Williams-Beuren syndrome in humans, which cause hypersociability at the expense of problem-solving ability. In a 2023 study of 58 dogs, some dogs classified as attention deficit hyperactivity disorder-like showed lower serotonin and dopamine concentrations. A similar study claims that hyperactivity is more common in male and young dogs. A dog can become aggressive because of trauma or abuse, fear or anxiety, territorial protection, or protecting an item it considers valuable. Acute stress reactions from post-traumatic stress disorder (PTSD) seen in dogs can evolve into chronic stress. Police dogs with PTSD can often refuse to work.
Dogs have a natural instinct called prey drive (the term is chiefly used to describe training dogs' habits) which can be influenced by breeding. These instincts can drive dogs to consider objects or other animals to be prey or drive possessive behavior. These traits have been enhanced in some breeds so that they may be used to hunt and kill vermin or other pests. Puppies or dogs sometimes bury food underground. One study found that wolves outperformed dogs in finding food caches, likely due to a "difference in motivation" between wolves and dogs. Some puppies and dogs engage in coprophagy out of habit, stress, for attention, or boredom; most of them will not do it later in life. A study hypothesizes that the behavior was inherited from wolves, a behavior likely evolved to lessen the presence of intestinal parasites in dens. Most dogs can swim. In a study of 412 dogs, around 36.5% of the dogs could not swim; the other 63.5% were able to swim without a trainer in a swimming pool. A study of 55 dogs found a correlation between swimming and 'improvement' of the hip osteoarthritis joint.
Nursing
The female dog may produce colostrum, a type of milk high in nutrients and antibodies, 1–7 days before giving birth. Milk production lasts for around three months and increases with litter size. The dog may vomit and refuse food during labor contractions. In the later stages of the dog's pregnancy, nesting behavior may occur. Puppies are born with a protective fetal membrane that the mother usually removes shortly after birth. Dogs can have the maternal instincts to groom their puppies, consume their puppies' feces, and protect their puppies, likely due to their hormonal state. While male-parent dogs can show more disinterested behavior toward their own puppies, most will play with the young pups as they would with other dogs or humans. A female dog may abandon or attack her puppies or her male partner dog if she is stressed or in pain.
Intelligence
Researchers have tested dogs' ability to perceive information, retain it as knowledge, and apply it to solve problems. Studies of two dogs suggest that dogs can learn by inference. A study with Rico, a Border Collie, showed that he knew the labels of over 200 different items. He inferred the names of novel things by exclusion learning and correctly retrieved those new items four weeks after the initial exposure. A study of another Border Collie, Chaser, documented that the dog had learned the names of over 1,000 objects and could associate them with verbal commands.
One study of canine cognitive abilities found that dogs' capabilities are similar to those of horses, chimpanzees, or cats. One study of 18 household dogs found that the dogs could not distinguish food bowls at specific locations without distinguishing cues; the study stated that this indicates a lack of spatial memory. Another study stated that dogs have a visual sense of number, showing ratio-dependent activation for numerical values ranging from 1–3 up to more than four.
Dogs demonstrate a theory of mind by engaging in deception. Another experimental study showed evidence that Australian dingos can outperform domestic dogs in non-social problem-solving, indicating that domestic dogs may have lost much of their original problem-solving abilities once they joined humans. Another study showed that dogs stared at humans after failing to complete an impossible version of the same task they had been trained to solve. Wolves, under the same situation, avoided staring at humans altogether.
Communication
Dog communication is the transfer of information between dogs, as well as between dogs and humans. Communication behaviors of dogs include eye gaze, facial expression, vocalization, body posture (including movements of bodies and limbs), and gustatory communication (scents, pheromones, and taste). Dogs mark their territories by urinating on them, which is more likely when entering a new environment. Both sexes may also urinate to communicate anxiety, frustration, or submissiveness, or when in exciting or relaxing situations. Arousal in dogs can result from higher cortisol levels. Dogs begin socializing with other dogs by the time they reach the age of 3 to 8 weeks, and at about 5 to 12 weeks of age, they shift their focus from dogs to humans. Belly exposure in dogs can be a defensive behavior that may precede a bite, or a way of seeking comfort.
Humans communicate with dogs by using vocalization, hand signals, and body posture. With their acute sense of hearing, dogs rely on the auditory aspect of communication for understanding and responding to various cues, including the distinctive barking patterns that convey different messages. A study using functional magnetic resonance imaging (fMRI) has shown that dogs respond to both vocal and nonvocal sounds using a brain region near the temporal pole, similar to the corresponding region in human brains. Most dogs also looked significantly longer at the face whose expression matched the valence of a vocalization. A study of caudate responses shows that dogs tend to respond more positively to social rewards than to food rewards.
Ecology
Population
The dog is the most widely abundant large carnivoran living in the human environment. In 2020, the estimated global dog population was between 700 million and 1 billion. In the same year, a study found the dog to be the most popular pet in the United States, present in 34 out of every 100 homes. About 20% of the dog population lives in developed countries, while an estimated three-quarters of the world's dog population lives in the developing world as feral, village, or community dogs. Most of these dogs live as scavengers and have never been owned by humans, with one study showing that village dogs' most common responses when approached by strangers are to run away (52%) or respond aggressively (11%).
Competitors
Feral and free-ranging dogs' potential to compete with other large carnivores is limited by their strong association with humans. Although wolves are known to kill dogs, wolves tend to live in pairs in areas where they are highly persecuted, giving them a disadvantage when facing large dog groups. In some instances, wolves have displayed an uncharacteristic fearlessness of humans and buildings when attacking dogs, to the extent that they have to be beaten off or killed. Although the numbers of dogs killed each year are relatively low, there is still a fear among humans of wolves entering villages and farmyards to take dogs, and losses of dogs to wolves have led to demands for more liberal wolf hunting regulations.
Coyotes and big cats have also been known to attack dogs. In particular, leopards are known to have a preference for dogs and have been recorded killing and consuming them regardless of their size. Siberian tigers in the Amur River region have killed dogs in the middle of villages. Tigers will not tolerate wolves as competitors within their territories and may regard dogs in the same way. Striped hyenas are known to kill dogs in their range. As introduced predators, dogs have affected the ecology of New Zealand, which lacked indigenous land-based mammals before human settlement. Dogs have made 11 vertebrate species extinct and are identified as a 'potential threat' to at least 188 threatened species worldwide; they have also been linked to the extinction of 156 animal species. Dogs have been documented killing individuals of the kagu, an endangered bird, in New Caledonia.
Diet
Dogs are typically described as omnivores. Compared to wolves, dogs from agricultural societies have extra copies of amylase and other genes involved in starch digestion that contribute to an increased ability to thrive on a starch-rich diet. Similar to humans, some dog breeds produce amylase in their saliva and are classified as having a high-starch diet. Despite being omnivores, dogs can conjugate bile acids only with taurine, and they must get vitamin D from their diet.
Of the twenty-one amino acids common to all life forms (including selenocysteine), dogs cannot synthesize ten: arginine, histidine, isoleucine, leucine, lysine, methionine, phenylalanine, threonine, tryptophan, and valine. Like cats, dogs require arginine to maintain nitrogen balance. These nutritional requirements place dogs halfway between carnivores and omnivores.
Range
As a domesticated or semi-domesticated animal, the dog is present in most human societies. Notable exceptions to its presence include:
The Aboriginal Tasmanians, who were separated from Australia before the arrival of dingos on that continent
The Andamanese peoples, who were isolated when rising sea levels covered the land bridge to Myanmar
The Fuegians, who instead domesticated the Fuegian dog, a now-extinct canid of a different species
Individual Pacific islands whose maritime settlers did not bring dogs or where the dogs died out after original settlement, notably the Mariana Islands, Palau and most of the Caroline Islands with exceptions such as Fais Island and Nukuoro, the Marshall Islands, the Gilbert Islands, New Caledonia, Vanuatu, Tonga, Marquesas, Mangaia in the Cook Islands, Rapa Iti in French Polynesia, Easter Island, the Chatham Islands, and Pitcairn Island (settled by the Bounty mutineers, who killed off their dogs to escape discovery by passing ships).
Dogs were introduced to Antarctica as sled dogs but, beginning in December 1993, were outlawed under the Protocol on Environmental Protection to the Antarctic Treaty, an international agreement, because of the possible risk of their spreading infections.
Roles with humans
The domesticated dog originated as a predator and scavenger. They inherited complex behaviors, such as bite inhibition, from their wolf ancestors, which would have been pack hunters with complex body language. These sophisticated forms of social cognition and communication may account for dogs' trainability, playfulness, and ability to fit into human households and social situations, and probably also their co-existence with early human hunter-gatherers. Dogs perform many roles for people, such as hunting, herding, pulling loads, protection, assisting police and the military, companionship, and aiding disabled individuals. These roles in human society have earned them the nickname "man's best friend" in the Western world. In some cultures, however, dogs are also a source of meat.
Pets
The keeping of dogs as companions, particularly by elites, has a long history. Pet-dog populations grew significantly after World War II as suburbanization increased. In the 1980s, the pet dog's functions changed, with dogs taking an increased role in the emotional support of their human guardians. Two related shifts took place within the second half of the 20th century. The first was a change in social status: more and more dog owners considered their animal to be a part of the family, which allowed the dog to conform to social expectations of personality and behavior. The second was the broadening of the concepts of family and the home to include dogs-as-dogs within everyday routines and practices.
Products such as dog-training books, classes, and television programs target dog owners. Some dog trainers have promoted a dominance model of dog-human relationships. However, the idea of the "alpha dog" trying to be dominant is based on a controversial theory about wolf packs. It has been disputed that "trying to achieve status" is characteristic of dog-human interactions. Human family members have increased their participation in activities in which the dog is an integral partner, such as dog dancing and dog yoga.
According to statistics published by the American Pet Products Manufacturers Association in the National Pet Owner Survey in 2009–2010, an estimated 77.5 million people in the United States have pet dogs. The source shows that nearly 40% of American households own at least one dog, of which 67% own just one dog, 25% own two dogs, and nearly 9% own more than two dogs. The data also shows an equal number of male and female pet dogs; less than one-fifth of the owned dogs come from shelters.
Workers
In addition to their role as companion animals, dogs have been bred for herding livestock (such as collies and sheepdogs); for hunting; for rodent control (such as terriers); as search and rescue dogs; as detection dogs (such as those trained to detect illicit drugs or chemical weapons); as homeguard dogs; as police dogs (sometimes nicknamed "K-9"); as welfare-purpose dogs; as dogs that assist fishermen in retrieving their nets; and as dogs that pull loads (such as sled dogs). In 1957, the dog Laika became one of the first animals to be launched into Earth orbit, aboard the Soviet Sputnik 2; Laika died during the flight from overheating. Various kinds of service dogs and assistance dogs, including guide dogs, hearing dogs, mobility assistance dogs, and psychiatric service dogs, assist individuals with disabilities. A study of 29 dogs found that 9 dogs owned by people with epilepsy were reported to exhibit attention-getting behavior toward their handler 30 seconds to 45 minutes prior to an impending seizure; this behavior showed no significant correlation with the patients' demographics, health, or attitude towards their pets.
Shows and sports
Dogs compete in breed-conformation shows and dog sports (including racing, sledding, and agility competitions). In dog shows, also referred to as "breed shows", a judge familiar with the specific dog breed evaluates individual purebred dogs for conformity with their established breed type as described in a breed standard. Weight pulling, a dog sport involving pulling weight, has been criticized for promoting doping and for its risk of injury.
Dogs as food
Humans have consumed dog meat going back at least 14,000 years. It is unknown to what extent prehistoric dogs were consumed and bred for meat. For centuries, the practice was prevalent in Southeast Asia, East Asia, Africa, and Oceania before cultural changes triggered by the spread of religions resulted in dog meat consumption declining and becoming more taboo. Switzerland, Polynesia, and pre-Columbian Mexico historically consumed dog meat. Some Native American dogs, like the Peruvian Hairless Dog and Xoloitzcuintle, were raised to be sacrificed and eaten. Han Chinese traditionally ate dogs. Consumption of dog meat declined but did not end during the Sui dynasty (581–618) and Tang dynasty (618–907) due in part to the spread of Buddhism and the upper class rejecting the practice. Dog consumption was rare in India, Iran, and Europe.
Eating dog meat is a social taboo in most parts of the world, though some still consume it in modern times. It is still consumed in some East Asian countries, including China, Vietnam, Korea, Indonesia, and the Philippines. An estimated 30 million dogs are killed and consumed in Asia every year. China is the world's largest consumer of dogs, with an estimated 10 to 20 million dogs killed every year for human consumption. In Vietnam, about 5 million dogs are slaughtered annually. In 2024, China, Singapore, and Thailand placed a ban on the consumption of dogs within their borders. In some parts of Poland and Central Asia, dog fat is reportedly believed to be beneficial for the lungs. Proponents of eating dog meat have argued that drawing a distinction between livestock and dogs is Western hypocrisy and that there is no difference between eating the meat of different animals.
There is a long history of dog meat consumption in South Korea, but the practice has fallen out of favor. A 2017 survey found that under 40% of participants supported a ban on the distribution and consumption of dog meat. This increased to over 50% in 2020, suggesting changing attitudes, particularly among younger individuals. In 2018, the South Korean government passed a bill banning restaurants that sell dog meat from doing so during that year's Winter Olympics. On 9 January 2024, the South Korean parliament passed a law banning the distribution and sale of dog meat. The law will take effect in 2027, with plans to assist dog farmers in transitioning to other products. The primary type of dog raised for meat in South Korea has been the Nureongi. In North Korea, where meat is scarce, eating dog is a common and accepted practice, officially promoted by the government.
Health risks
In 2018, the World Health Organization (WHO) reported that 59,000 people died globally from rabies, with 59.6% of the deaths in Asia and 36.4% in Africa. Rabies is a disease for which dogs are the most significant vector. Dog bites affect tens of millions of people globally each year. The primary victims of dog bite incidents are children. They are more likely to sustain more serious injuries from bites, which can lead to death. Sharp claws can lacerate flesh and cause serious infections. In the United States, cats and dogs are a factor in more than 86,000 falls each year. It has been estimated that around 2% of dog-related injuries treated in U.K. hospitals are domestic accidents. The same study concluded that dog-associated road accidents involving injuries more commonly involve two-wheeled vehicles. Some countries and cities have also banned or restricted certain dog breeds, usually for safety concerns.
Toxocara canis (dog roundworm) eggs in dog feces can cause toxocariasis. It is estimated that nearly 14% of people in the United States are infected with Toxocara; about 10,000 cases are reported each year. Untreated toxocariasis can cause retinal damage and decreased vision. Dog feces can also contain hookworms that cause cutaneous larva migrans in humans.
Health benefits
The scientific evidence is mixed as to whether a dog's companionship can enhance human physical and psychological well-being. Studies suggest that there are benefits to physical health and psychological well-being, but they have been criticized for being "poorly controlled". One study states that "the health of elderly people is related to their health habits and social supports but not to their ownership of, or attachment to, a companion animal". Earlier studies have shown that pet-dog or -cat guardians make fewer hospital visits and are less likely to be on medication for heart problems and sleeping difficulties than non-guardians. People with pet dogs took considerably more physical exercise than those with cats or those without pets; these effects are relatively long-term. Pet guardianship has also been associated with increased survival in cases of coronary artery disease. People with a pet dog are significantly less likely to die within one year of an acute myocardial infarction than those who do not own dogs. Studies have found a small to moderate correlation between dog-ownership and increased adult physical-activity levels.
A 2005 paper in the British Medical Journal states: "Recent research has failed to support earlier findings that pet ownership is associated with a reduced risk of cardiovascular disease, a reduced use of general practitioner services, or any psychological or physical benefits on health for community dwelling older people. Research has, however, pointed to significantly less absenteeism from school through sickness among children who live with pets." Health benefits of dogs can result from contact with dogs in general, not solely from having dogs as pets. For example, when in a pet dog's presence, people show reductions in cardiovascular, behavioral, and psychological indicators of anxiety and are exposed to immune-stimulating microorganisms, which can protect against allergies and autoimmune diseases (according to the hygiene hypothesis). Other benefits include dogs as social support.
One study indicated that wheelchair-users experience more positive social interactions with strangers when accompanied by a dog than when they are not. In a 2015 study, it was found that having a pet made people more inclined to foster positive relationships with their neighbors. In one study, new guardians reported a significant reduction in minor health problems during the first month following pet acquisition, which was sustained through the 10-month study.
Using dogs and other animals as a part of therapy dates back to the late-18th century, when animals were introduced into mental institutions to help socialize patients with mental disorders. Animal-assisted intervention research has shown that animal-assisted therapy with a dog can increase smiling and laughing among people with Alzheimer's disease. One study demonstrated that children with ADHD and conduct disorders who participated in an education program with dogs and other animals showed increased attendance, knowledge, and skill-objectives and decreased antisocial and violent behavior compared with those not in an animal-assisted program.
Cultural importance
Artworks have depicted dogs as symbols of guidance, protection, loyalty, fidelity, faithfulness, alertness, and love. In ancient Mesopotamia, from the Old Babylonian period until the Neo-Babylonian period, dogs were the symbol of Ninisina, the goddess of healing and medicine, and her worshippers frequently dedicated small models of seated dogs to her. In the Neo-Assyrian and Neo-Babylonian periods, dogs served as emblems of magical protection. In China, Korea, and Japan, dogs are viewed as kind protectors.
In mythology, dogs often appear as pets or as watchdogs. Stories of dogs guarding the gates of the underworld recur throughout Indo-European mythologies and may originate from Proto-Indo-European traditions. In Greek mythology, Cerberus is a three-headed, dragon-tailed watchdog who guards the gates of Hades. Dogs also feature in association with the Greek goddess Hecate. In Norse mythology, a dog called Garmr guards Hel, a realm of the dead. In Persian mythology, two four-eyed dogs guard the Chinvat Bridge. In Welsh mythology, Cŵn Annwn guards Annwn. In Hindu mythology, Yama, the god of death, owns two watchdogs named Shyama and Sharvara, which each have four eyes—they are said to watch over the gates of Naraka. A black dog is considered to be the vahana (vehicle) of Bhairava (an incarnation of Shiva).
In Christianity, dogs represent faithfulness. Within the Roman Catholic denomination specifically, the iconography of Saint Dominic includes a dog, after the saint's mother dreamt of a dog springing from her womb and became pregnant shortly after that dream. Accordingly, the name of the Dominican Order (Ecclesiastical Latin: Domini canis) can be read as "dog of the Lord" or "hound of the Lord". In Christian folklore, a church grim often takes the form of a black dog to guard Christian churches and their churchyards from sacrilege. Jewish law does not prohibit keeping dogs and other pets but requires Jews to feed dogs (and other animals that they own) before themselves and to make arrangements for feeding them before obtaining them. The view on dogs in Islam is mixed, with some schools of thought viewing them as unclean, although Khaled Abou El Fadl states that this view is based on "pre-Islamic Arab mythology" and "a tradition [...] falsely attributed to the Prophet". Jurists of the Sunni Maliki school disagree with the idea that dogs are unclean.
Terminology
Dog – the species (or subspecies) as a whole, also any male member of the same.
Bitch – any female member of the species (or subspecies).
Puppy or pup – a young member of the species (or subspecies) under 12 months old.
Sire – the male parent of a litter.
Dam – the female parent of a litter.
Litter – all of the puppies resulting from a single whelping.
Whelping – the act of a bitch giving birth.
Whelps – puppies still dependent upon their dam.
See also
Saint Guinefort
References
Bibliography
External links
Biodiversity Heritage Library bibliography for Canis lupus familiaris
Fédération Cynologique Internationale (FCI) – World Canine Organisation
Dogs in the Ancient World, an article on the history of dogs
View the dog genome on Ensembl
Genome of Canis lupus familiaris (version UU_Cfam_GSD_1.0/canFam4), via UCSC Genome Browser
Data of the genome of Canis lupus familiaris, via NCBI
Data of the genome assembly of Canis lupus familiaris (version UU_Cfam_GSD_1.0/canFam4), via NCBI
Wolves
Scavengers
Cosmopolitan mammals
Animal models
Extant Late Pleistocene first appearances
Mammals described in 1758
Taxa named by Carl Linnaeus
English words | Dog | [
"Biology"
] | 10,386 | [
"Model organisms",
"Animal models"
] |
4,269,572 | https://en.wikipedia.org/wiki/Stripline | In electronics, stripline is a transverse electromagnetic (TEM) transmission line medium invented by Robert M. Barrett of the Air Force Cambridge Research Centre in the 1950s. Stripline is the earliest form of planar transmission line.
Description
A stripline circuit uses a flat strip of metal which is sandwiched between two parallel ground planes. The insulating material of the substrate forms a dielectric. The width of the strip, the thickness of the substrate and the relative permittivity of the substrate determine the characteristic impedance of the strip which is a transmission line. As shown in the diagram, the central conductor need not be equally spaced between the ground planes. In the general case, the dielectric material may be different above and below the central conductor. A stripline that uses air as the dielectric material is known as an air stripline.
To prevent the propagation of unwanted modes, the two ground planes must be shorted together. This is commonly achieved by a row of vias running parallel to the strip on each side.
Like coaxial cable, stripline is non-dispersive, and has no cutoff frequency. Good isolation between adjacent traces can be achieved more easily than with microstrip.
Stripline provides enhanced noise immunity against the propagation of radiated RF emissions, at the expense of slower propagation speeds when compared to microstrip lines. The effective permittivity of a stripline equals the relative permittivity of the dielectric substrate because the wave propagates only in the substrate. Hence striplines have a higher effective permittivity than microstrip lines, which in turn reduces the wave propagation speed (see also velocity factor) according to v = c/√εr, where c is the speed of light in vacuum and εr is the relative permittivity of the substrate.
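As a quick numerical illustration of this relation, the sketch below compares propagation speeds for a stripline and a microstrip; the substrate permittivity and the microstrip's effective permittivity are assumed illustrative figures, not values from any particular design:

    C0 = 299_792_458.0          # speed of light in vacuum, m/s

    def propagation_speed(eps_eff):
        # TEM / quasi-TEM propagation speed: v = c / sqrt(eps_eff)
        return C0 / eps_eff ** 0.5

    eps_r = 4.4                 # assumed FR-4-like substrate (illustrative value)
    eps_eff_microstrip = 3.0    # assumed illustrative value between 1 (air) and eps_r

    print(propagation_speed(eps_r))               # stripline: ~1.43e8 m/s
    print(propagation_speed(eps_eff_microstrip))  # microstrip: ~1.73e8 m/s (faster)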
History
Stripline, now used as a generic term, was originally a proprietary brand of Airborne Instruments Laboratory Inc. (AIL). The version as produced by AIL was essentially air insulated (air stripline) with just a thin layer of dielectric material - just enough to support the conducting strip. The conductor was printed on both sides of the dielectric. The more familiar version with the space between the two plates completely filled with dielectric was originally produced by Sanders Associates who marketed it under the brand name of triplate.
Stripline was initially preferred to its rival, microstrip, made by ITT. Transmission in stripline is purely TEM mode and consequently there is no dispersion (provided that the dielectric of the substrate is not itself dispersive). Also, discontinuity elements on the line (gaps, stubs, posts, etc.) present a purely reactive impedance. This is not the case with microstrip; the differing dielectrics above and below the strip result in longitudinal non-TEM components to the wave. This results in dispersion, and discontinuity elements have a resistive component that causes them to radiate. In the 1950s Eugene Fubini, at the time working for AIL, jokingly suggested that a microstrip dipole would make a good antenna. This was intended to highlight the drawbacks of microstrip, but the microstrip patch antenna has become the most popular design of antenna in mobile devices. Stripline remained in the ascendant for its performance advantages through the 1950s and 1960s, but eventually microstrip won out, especially in mass-produced items, because it was easier to assemble and the lack of an upper dielectric meant that components were easier to access and adjust. As the complexity of printed circuits increased, this convenience became more important, until today microstrip is the dominant planar technology. Miniaturisation also favours microstrip because its disadvantages are not so severe in a miniaturised circuit. However, stripline is still chosen where operation over a wide band is required.
Comparison to microstrip
Microstrip is similar to the stripline transmission line except that the microstrip strip is not sandwiched between ground planes; it is on a surface layer, above a single ground plane.
Stripline is more expensive to fabricate than microstrip, and because of the second groundplane, the strip widths are much narrower for a given impedance and board thickness than for microstrip.
Characteristic impedance
An accurate closed form equation for the characteristic impedance of a stripline with a thin centered conductor has been reported as
Where:
Note that when the conductor thickness is small, T<<1 or t<<h, the equations simplify significantly.
Where:
The accuracy of the formula is claimed to be at least 1% for W/(H − T) > 0.05 and T < 0.025.
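Since the closed-form equation and its symbol definitions are not reproduced above, the sketch below instead uses a different, widely quoted narrow-strip approximation for a centered stripline, Zo ≈ (60/√εr)·ln(1.9b/(0.8W + T)), purely to illustrate how such a formula is applied; it is not the 1%-accurate expression referred to in the text, and the example dimensions are assumed illustrative values:

    import math

    def symmetric_stripline_z0(eps_r, b, w, t):
        """Approximate Zo (ohms) of a centered stripline with ground-plane spacing b,
        strip width w, and strip thickness t (all in the same length unit).
        Narrow-strip approximation, generally quoted for w/(b - t) < 0.35 and t/b < 0.25."""
        return 60.0 / math.sqrt(eps_r) * math.log(1.9 * b / (0.8 * w + t))

    # Illustrative FR-4-like geometry in millimetres: roughly a 57-ohm line.
    print(round(symmetric_stripline_z0(eps_r=4.2, b=1.0, w=0.30, t=0.035), 1))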
For thick conductors, Wheeler provides the following more accurate equations
Where:
Where T and W are defined the same as in the above expression.
The accuracy is claimed to be at least 0.5% for C > 0.25.
Non-centered conductor
For stripline conductors that are not centered (that is, where the distance to the upper ground plane differs from the distance to the lower ground plane), the characteristic impedance can be estimated in at least two ways.
Zo estimation using upper and lower capacitance
If the asymmetry of the conductor placement is not large, the lower and upper capacitances per unit length may be estimated for the upper ground plane and the lower ground plane using centered stripline equations and the standard transmission line relations for homogeneous lines, Zo = √(Lo/Co) and vp = 1/√(LoCo) = c/√εr, where c is the speed of light.
The capacitance associated with each ground plane may be evaluated independently, and the results used to estimate the Zo of the asymmetric stripline. Small errors are introduced in the estimation due to the slightly differing capacitance paths to the ground planes between the asymmetric case being estimated and the symmetric cases used to make the estimation, so only a small asymmetric placement of the strip can be expected to produce an acceptable estimate of Zo for the asymmetrically placed strip.
To summarize:
Zo of the asymmetrically placed strip is estimated from the upper and lower capacitances per unit length obtained above.
Where:
c is the speed of light in a vacuum.
The two spacings are measured from the center of the conductor to the lower and upper ground plane, respectively.
Co and Lo are the capacitance and inductance per unit length of the associated transmission line.
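The summary equation itself is not reproduced above; the sketch below shows one plausible reading of the procedure, in which the strip and each ground plane are treated as half of a fictitious centered stripline, the two per-plane capacitances are summed, and Zo follows from the total capacitance and the propagation velocity c/√εr. The decomposition, the reuse of the narrow-strip approximation from the previous sketch, and the example dimensions are all assumptions for illustration, not the article's own equations:

    import math

    C0 = 299_792_458.0   # speed of light in vacuum, m/s

    def symmetric_stripline_z0(eps_r, b, w, t):
        # Same narrow-strip approximation as the previous sketch.
        return 60.0 / math.sqrt(eps_r) * math.log(1.9 * b / (0.8 * w + t))

    def asymmetric_stripline_z0(eps_r, h_lower, h_upper, w, t):
        """Estimate Zo of an off-center strip from the upper and lower capacitances per unit length."""
        v = C0 / math.sqrt(eps_r)          # propagation velocity in the homogeneous dielectric
        c_total = 0.0
        for h in (h_lower, h_upper):
            # Pair the strip with this plane as half of a fictitious centered stripline of spacing 2h + t.
            z0_sym = symmetric_stripline_z0(eps_r, b=2 * h + t, w=w, t=t)
            c_total += 0.5 / (v * z0_sym)  # half of that line's capacitance per metre
        return 1.0 / (v * c_total)

    # Mildly off-center strip, millimetre dimensions (illustrative values only): ~57 ohms.
    print(round(asymmetric_stripline_z0(eps_r=4.2, h_lower=0.45, h_upper=0.55, w=0.30, t=0.035), 1))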
Zo estimation using microstrip metallic cover
If there is no dielectric in the asymmetric stripline, then the stripline looks like a microstrip with a dielectric of air (εr = 1) inside a metallic enclosure. This permits the air characteristic impedance, Zo(air), to be calculated using microstrip metallic-enclosure equations. When Zo(air) is known, Zo of the dielectric-filled stripline may be calculated by dividing by the square root of the relative permittivity, Zo = Zo(air)/√εr. The accuracy of this estimation is quantified and listed in the microstrip metallic-enclosure equations.
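A minimal sketch of that final scaling step, assuming Zo(air) has already been obtained from a covered-microstrip calculation (the placeholder value below is illustrative, not computed):

    import math

    def stripline_z0_from_air(z0_air, eps_r):
        # Uniformly filling a TEM line with dielectric scales its impedance by 1/sqrt(eps_r).
        return z0_air / math.sqrt(eps_r)

    # z0_air is an assumed placeholder result from a covered-microstrip calculation.
    print(round(stripline_z0_from_air(z0_air=116.0, eps_r=4.2), 1))   # ~56.6 ohms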
Losses
Since microstrip loss calculations are not directly a function of the dielectric constant and geometry or metallic cover height, microstrip loss equations may also be used for stripline losses by treating εre as a constant equal to εr.
See also
Printed circuit board
Distributed element filter
Power dividers and directional couplers
Microstrips
References
Bibliography
Arthur A. Oliner, "The evolution of electromagnetic waveguides", chapter 16 in Sarkar et al., History of Wireless, John Wiley and Sons, 2006.
Yarman, Binboga Siddik, Design of Ultra Wideband Antenna Matching Networks, Springer, 2008.
External links
Stripline in Microwave Encyclopedia
Planar transmission lines
Microwave technology
Electronic circuits | Stripline | [
"Engineering"
] | 1,500 | [
"Electronic engineering",
"Electronic circuits"
] |
4,269,882 | https://en.wikipedia.org/wiki/Background%20radiation%20equivalent%20time | Background radiation equivalent time (BRET) or background equivalent radiation time (BERT) is a unit of measurement of ionizing radiation dosage amounting to one day worth of average human exposure to background radiation.
BRET units are used as a measure of low-level radiation exposure. The health hazards of low doses of ionizing radiation are unknown and controversial, because the effects, mainly cancer and genetic damage, take many years to appear, and the incidence due to radiation exposure cannot be statistically separated from the many other causes of these diseases. The purpose of the BRET measure is to allow a low-level dose to be easily compared with a universal yardstick: the average dose of background radiation, mostly from natural sources, that every human unavoidably receives during daily life. Background radiation level is widely used in radiological health fields as a standard for setting exposure limits. Presumably, a dose of radiation which is equivalent to what a person would receive in a few days of ordinary life will not increase their rate of disease measurably.
Definition
The BRET is the creation of Professor J. R. Cameron. The BRET value corresponding to a dose of radiation is the number of days of average natural background dose it is equivalent to. It is calculated from the equivalent dose in sieverts by dividing by the average annual background radiation dose in Sv and multiplying by 365: BRET (in days) = [equivalent dose (Sv) ÷ annual background dose (Sv)] × 365.
The definition of the BRET unit is apparently unstandardized and depends on what value is used for the average annual background radiation dose, which varies greatly across time and location. The 2000 UNSCEAR estimate for the worldwide average natural background radiation dose is 2.4 mSv (240 mrem), with a range from 1 to 13 mSv; a small area in India receives as much as 30 mSv (3 rem). Using the 2.4 mSv value, each BRET unit equals about 6.6 μSv.
BRET values for diagnostic radiography procedures range from 2 BRET for a dental x-ray to around 400 for a barium enema study.
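A short sketch of the conversion: the 2.4 mSv/yr background figure is the UNSCEAR value quoted above, while the example procedure doses are assumed illustrative values rather than sourced figures:

    ANNUAL_BACKGROUND_SV = 2.4e-3    # worldwide average natural background dose (UNSCEAR 2000)

    def bret_days(dose_sv, annual_background_sv=ANNUAL_BACKGROUND_SV):
        # BRET = dose divided by the annual background dose, times 365 days.
        return dose_sv / annual_background_sv * 365

    print(ANNUAL_BACKGROUND_SV / 365 * 1e6)    # one BRET unit in microsievert: ~6.6
    print(bret_days(13e-6))                    # assumed ~13 uSv dental x-ray dose: ~2 BRET
    print(bret_days(0.1e-3))                   # assumed ~0.1 mSv chest x-ray dose: ~15 BRET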
See also
Background radiation
Banana equivalent dose
Flight-time equivalent dose
Radiology
References
Utah Division of Radiation Control: X-ray Dose Comparisons
Radioactivity
Background radiation
Equivalent units | Background radiation equivalent time | [
"Physics",
"Chemistry",
"Mathematics"
] | 443 | [
"Equivalent quantities",
"Quantity",
"Equivalent units",
"Radioactivity",
"Nuclear physics",
"Units of measurement"
] |
4,270,231 | https://en.wikipedia.org/wiki/Ecotechnology | Ecotechnology is an applied science that seeks to fulfill human needs while causing minimal ecological disruption, by harnessing and manipulating natural forces to leverage their beneficial effects. Ecotechnology integrates two fields of study: the 'ecology of technics' and the 'technics of ecology,' requiring an understanding of the structures and processes of ecosystems and societies. All sustainable engineering that can reduce damage to ecosystems, adopt ecology as a fundamental basis, and ensure conservation of biodiversity and sustainable development may be considered as forms of ecotechnology.
Ecotechnology emphasizes approaching a problem from a holistic point of view; for example, holding that environmental remediation of rivers should not only consider one single area but the whole catchment area, which includes the upstream, middle-stream, and downstream sections.
The construction industry can, in the ecotechnology view, reduce its impact on nature by consulting experts on the environment.
Ecotechnics
During Ecotechnics '95 - International Symposium on Ecological Engineering in Östersund, Sweden, the participants agreed on the definition: "Ecotechnics is defined as the method of designing future societies within ecological frames."
Ecotechnics has also been defined as the 'techne' of bodies. In this view, ecotechnics treats the body itself as a technology, which makes possible the inclusion of a whole new range of bodies and gives people more agency and biopower over the use of their own bodies. This makes the concept usable in queer theory and disability studies. Another interpretation reads the term as the craft of the home.
In classifying the body as a technical object, Jean-Luc Nancy explained how it works by partitioning bodies into their own zones and spaces, which also allows such bodies to connect with other bodies. Hence, Nancy claims that technology determines our interactions with other beings in the world. Ecotechnics is also central to Sullivan's and Murray's collection of essays Queering the Technologisation of Bodies. It builds on Bernard Stiegler's work, which sees the body and technology as a double process: the technology and the body are informed by each other. Derrida, who extends both Nancy's and Stiegler's ideas, argues that the 'proper body' implicates interconnections of technical additions. Ecotechnics goes against the essentialist and binary notion of the body as a technological object, which positions it within post-structuralism. The body can only be understood within its environment, and this environment is a technical one.
Nancy also applied the ecotechnics concept to contemporary issues such as war and globalization. He maintained, for instance, that modern conflicts are produced by the dividing lines between North and South, rich and poor, and integrated and excluded. He also believes that ecotechnics is undoing communities due to the elimination of the polis and the prevalence of the oikos, calling for a global sovereignty that would administer the world as a single household.
See also
Afforestation
Agroforestry
Analog forestry
Biomass
Biomass (ecology)
Buffer strip
Collaborative innovation network
Deforestation
Deforestation during the Roman period
Desertification
Ecological engineering
Ecological engineering methods
Energy-efficient landscaping
Forest farming
Forest gardening
Great Plains Shelterbelt
GreenTec Awards
Hedgerow
Home gardens
Human ecology
Institute of Ecotechnics
Macro-engineering
Megaprojects
Mid Sweden University
Permaculture
Permaforestry
Proposed Sahara forest project
Push–pull technology
Sand fence
Seawater Greenhouse
Sustainable agriculture
Sustainable design
Terra preta
Thomas P. Hughes
Wildcrafting
Windbreak
References
Further reading
Allenby, B.R., and D.J. Richards (1994), The Greening of Industrial Ecosystems. National Academy Press, Washington, DC.
Braungart, M., and W. McDonough (2002). Cradle to Cradle: Remaking the Way We Make Things. North Point Press.
Huesemann, Michael H., and Joyce A. Huesemann (2011). Technofix: Why Technology Won't Save Us or the Environment, Chapter 13, "The Design of Environmentally Sustainable and Appropriate Technologies", New Society Publishers, Gabriola Island, British Columbia, Canada, 464 pp.
Von Weizsacker, E.U., C. Hargroves, M.H. Smith, C. Desha, and P. Stasinopoulos (2009). Factor Five: Transforming the Global Economy through 80% Improvements in Resource Productivity, Routledge.
External links
Ecotechnology research at Mid Sweden University, Östersund, Sweden
The Institute of Ecotechnics, London, U.K.
ecoTECHNOLOGY for Vehicles, Transport Canada, Ottawa, Canada
Eco Technology Show, 11-12 June 2015, Brighton, U.K.
Environmental science | Ecotechnology | [
"Environmental_science"
] | 968 | [
"nan"
] |
4,270,544 | https://en.wikipedia.org/wiki/Daniel%20J.%20Shanefield | Daniel Jay Shanefield (April 29, 1930 – November 13, 2013) was a United States ceramic engineer.
Shanefield was born in Orange, New Jersey, and earned a bachelor's degree in chemistry from Rutgers University in 1956; he went on to graduate studies at the same university, receiving his Ph.D. in physical chemistry from Rutgers in 1962. He worked from 1962 to 1967 at ITT Research Laboratories, and from 1967 to 1986 at Bell Laboratories. In 1986 he returned to Rutgers as a Professor II (a professorial rank at Rutgers that is one step above a normal full professor).
At Bell Laboratories, Shanefield was the co-inventor with Richard E. Mistler of the tape casting technique for forming thin ceramic films. He pioneered the development of a phase-change memory system based on an earlier patent of Stanford R. Ovshinsky; Shanefield's work in this area "represented the first proof of the phase change memory concept". Beginning in the mid-1970s, Shanefield was an early proponent of double-blind ABX testing of high-end audio electronics; in 1980 he reported in High Fidelity magazine that there were no audible differences between several different power amplifiers, setting off what became known in audiophile circles as "the great debate".
Shanefield is the author of two books, Organic Additives and Ceramic Processing (Kluwer, 1995; 2nd ed., Kluwer, 1996) and Industrial Electronics for Engineers, Chemists, and Technicians (William Andrew Publishing, 2001).
He was a four-time winner of the AT&T Outstanding Achievement Award and was elected as a Fellow of the American Ceramic Society in 1993.
Shanefield died in Honolulu, Hawaii, aged 83.
References
1930 births
2013 deaths
American engineers
Rutgers University alumni
People from Orange, New Jersey
American physical chemists
Ceramic engineering
Engineers from New Jersey
Fellows of the American Ceramic Society | Daniel J. Shanefield | [
"Engineering"
] | 383 | [
"Ceramic engineering"
] |
4,270,598 | https://en.wikipedia.org/wiki/Handgun%20effectiveness | Handgun effectiveness is a measure of the stopping power of a handgun: its ability to incapacitate a hostile target as quickly and efficiently as possible.
Overview
Most handgun projectiles have significantly lower energy than those of centerfire rifles and shotguns. What handguns lack in power, they make up for by being small and lightweight, which lends them concealability and practicality. Handgun power and the effectiveness of different cartridges are widely debated topics, and experimental research among civilians, law enforcement agencies, militaries, and ammunition companies is ongoing. Factors that can influence handgun effectiveness include handgun design, bullet type, and bullet capabilities (e.g. wound mechanisms, penetration, velocity, and weight).
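The energy gap can be made concrete with the kinetic-energy formula E = ½mv²; the cartridge figures below are rough, assumed typical loads chosen for illustration, not sourced specifications:

    def muzzle_energy_joules(mass_grams, velocity_mps):
        # Kinetic energy E = 1/2 * m * v^2, with bullet mass converted from grams to kilograms.
        return 0.5 * (mass_grams / 1000.0) * velocity_mps ** 2

    print(round(muzzle_energy_joules(8.0, 360.0)))    # assumed 9 mm Luger handgun load: ~520 J
    print(round(muzzle_energy_joules(3.6, 980.0)))    # assumed .223 Remington rifle load: ~1700 J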
Factors
Cavitation
Most handgun projectiles wound primarily through the size of the hole they produce, known as a permanent cavity or simply a bullet hole. Rifles are capable of much higher velocities with similar cartridges and add temporary cavitation for additional lethality. Many handgun bullets move too slowly to cause temporary cavitation, but it may occur if the bullet fragments, strikes inelastic tissue (liver, spleen, kidneys, CNS), or transfers at least of energy into the subject. This last instance usually requires a larger and/or higher velocity projectile than is commonly used with handguns.
Penetration
One factor used to measure a handgun's effectiveness is penetration. The FBI's requirement for all service rounds is penetration in calibrated ballistic gelatin. This generally ensures a bullet will reach the vital human organs from many angles and through many different layers and materials of clothing. Penetration is often argued to be the most important factor in handgun cartridge wounding potential, outside the skill of the shooter.
Ballistic pressure wave/hydrostatic shock
There is a significant body of evidence that hydrostatic shock (more precisely known as the ballistic pressure wave) can contribute to handgun bullet effectiveness.
Recent work published by the scientists M. Courtney and A. Courtney provides compelling support for the role of a ballistic pressure wave in incapacitation and injury. This work builds upon the earlier work of Suneson et al., in which the researchers implanted high-speed pressure transducers into the brains of pigs and demonstrated that a significant pressure wave reaches the brain of pigs shot in the thigh. These scientists observed neural damage in the brain caused by the distant effects of the ballistic pressure wave originating in the thigh. The results of Suneson et al. were confirmed and expanded upon by a later experiment in dogs, which "confirmed that distant effect exists in the central nervous system after a high-energy missile impact to an extremity. A high-frequency oscillating pressure wave with large amplitude and short duration was found in the brain after the extremity impact of a high-energy missile ..." Wang et al. observed significant damage in both the hypothalamus and hippocampus regions of the brain due to remote effects of the ballistic pressure wave.
Caliber
Handgun calibers are a frequently discussed and disputed factor in handgun effectiveness. It is generally agreed that most intermediate handgun calibers will yield similar terminal results when modern, quality ammunition is used. Caliber selection can often be reduced to balancing a handgun's physical features: weapon size, magazine or cylinder capacity, recoil, and ease of use. These features are all largely determined by the cartridge that the weapon fires. A list of many handgun calibers can be found at List of handgun cartridges.
One-shot stops
The only scientifically proven and biologically possible way to guarantee instant incapacitation is the destruction of the central nervous system, particularly the brain. This usually halts all motor-related and voluntary actions. If the central nervous system is not damaged or destroyed, there will be no immediate physiological incapacitation. Since a central nervous system hit is very difficult to achieve in a dynamic situation, some people use expanding ammunition or larger calibers, which can increase the odds of striking a part of the central nervous system.
For example, a popular caliber in the United States is .45 ACP. It is among the largest practical handgun calibers in use, featuring a bullet of .452 in diameter. With well-made expanding ammunition, a .452 bullet often expands to .70 caliber or larger. With a 9 mm Luger cartridge, the normal .355 bullet may expand to .50 or larger. Theoretically, a larger caliber should cause slightly more dangerous wounds. However, the unpredictable and uncontrolled nature of handgun use outside a laboratory environment makes this difficult to determine.
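The practical effect of expansion can be expressed as the change in frontal (cross-sectional) area, which for a round bullet scales with the square of its diameter; the sketch below simply reworks the diameters quoted above:

    import math

    def frontal_area_sq_in(diameter_in):
        # Cross-sectional area of a round bullet, in square inches.
        return math.pi * (diameter_in / 2) ** 2

    print(round(frontal_area_sq_in(0.70) / frontal_area_sq_in(0.452), 1))   # .45 ACP expansion: ~2.4x area
    print(round(frontal_area_sq_in(0.50) / frontal_area_sq_in(0.355), 1))   # 9 mm expansion: ~2.0x area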
Other situations where a single shot stops an attacker are most likely psychologically based. The attacker may be surprised that their intended victim is armed and could flee without ever being struck. Alternatively, an attacker may be frightened after being shot and decide to disengage rather than press the assault.
See also
Pistol-whipping
References
External links
Ballistics By The Inch – relationship between barrel length and bullet velocity.
Effectiveness
Ballistics | Handgun effectiveness | [
"Physics"
] | 984 | [
"Applied and interdisciplinary physics",
"Ballistics"
] |