Dataset schema (field: type, value or length range):
id: int64, 39 to 79M
url: string, lengths 31 to 227
text: string, lengths 6 to 334k
source: string, lengths 1 to 150
categories: list, lengths 1 to 6
token_count: int64, 3 to 71.8k
subcategories: list, lengths 0 to 30
3,118,602
https://en.wikipedia.org/wiki/Juglone
Juglone, also called 5-hydroxy-1,4-naphthalenedione (IUPAC name), is a phenolic organic compound with the molecular formula C10H6O3. In the food industry, juglone is also known as C.I. Natural Brown 7 and C.I. 75500. It is insoluble in benzene but soluble in dioxane, from which it crystallizes as yellow needles. It is an isomer of lawsone, which is the active dye compound in the henna leaf. Juglone occurs naturally in the leaves, roots, husks, fruit (the epicarp), and bark of plants in the Juglandaceae family, particularly the black walnut (Juglans nigra), and is toxic or growth-stunting to many types of plants. It is sometimes used as an herbicide, as a dye for cloth and inks, and as a coloring agent for foods and cosmetics. History The allelopathic effects of walnut trees on other plants were observed as far back as the 1st century CE. Juglone itself was first isolated from black walnut in 1856, and was identified as the compound responsible for the tree's allelopathic effects in 1881. In 1921, a study observed that tomato plants near black walnut trees exhibited wilted leaves, suggesting an adverse interaction. In 1926, instances of apple tree damage caused by both Juglans nigra and Juglans cinerea (butternut) trees were reported in northern Virginia; certain apple tree varieties displayed varying levels of resistance to walnut toxicity. The same year, it was observed that walnut trees in alfalfa fields resulted in crop death, while grass remained unaffected. Subsequent experiments indicated that the toxic compound within walnut trees exhibited limited solubility in water, implying that the compound underwent chemical changes upon leaving the tree. It was only in 1928 that the phytotoxic nature of the compound was identified for other plant species. The scientific community faced controversy when the harmful effects of walnut trees on certain crops and trees were initially reported, following claims that the trees damaging apple trees in northern Virginia were not walnut trees at all. In 1942 it was demonstrated that tomato and alfalfa germination and seedling growth were inhibited by contact with pieces of walnut roots, providing additional scientific evidence of juglone's phytotoxicity. Juglone was used medicinally in America during the early 1900s, prescribed for the treatment of various skin diseases. Chemistry Synthesis Juglone is derived by oxidation of hydrojuglone, its hydroquinone form, after enzymatic hydrolysis; it can also be prepared by oxidation of 1,5-dihydroxynaphthalene. It can likewise be obtained by oxidation of 5,8-dihydroxy-1-tetralone with silver oxide (Ag2O), manganese dioxide (MnO2), or 2,3-dichloro-5,6-dicyano-1,4-benzoquinone (DDQ). Extraction Juglone has been extracted from the husks of walnut fruit, which contain 2–4% juglone by fresh weight. Degradation Before oxidation, juglone exists in plants such as walnuts in the form of colorless hydrojuglone, the hydroquinone analog. This is rapidly oxidized to juglone once exposed to air. The evidence that hydrojuglone is readily oxidized is most apparent in the color change of walnut hulls from yellow to black after being freshly cut. Indigenous bacteria found in the soil around black walnut roots, most notably Pseudomonas putida J1, are able to metabolize juglone and use it as their primary source of energy and carbon. Because of this, juglone is not as active as a cytotoxin in well-aerated soils. Spectral data The IR spectrum of juglone shows bands at 3400, 1662, and 1641 cm−1, which are characteristic respectively of the hydroxyl group and the two carbonyl groups.
The 13C-NMR spectrum shows 10 signals, including signals at 160.6, 183.2, and 189.3 ppm for the hydroxyl-bearing and two carbonyl carbons. Biological effects Juglone is an allelopathic compound, a substance produced by a plant to stunt the growth of another plant. Juglone affects the germination of plants less than it affects the growth of their root and stem systems. At below-average concentrations, it has increased the rate of germination in some coniferous seeds. Juglone exerts its effect by inhibiting certain enzymes needed for metabolic function. This in turn inhibits mitochondrial respiration and photosynthesis in common crops such as maize and soy, at juglone concentrations at or below those common in nature. In addition to these inhibitions, juglone has been shown to alter the relationship between plants and water because of its effect on stomatal functioning. The rise in popularity of alley cropping with black walnut trees and maize in the American Midwest, due to the high value of black walnut trees, has led to studies of the particular relationship between the two species. Research has shown that juglone affects the yield of maize crops; however, the practice of pruning and the use of root barriers greatly reduce these effects. Some plants and trees are resistant to juglone, including some species of maple (Acer), birch (Betula), and beech (Fagus). It is highly toxic to many insect herbivores. However, some of them, for example Actias luna (the luna moth), can detoxify juglone (and related naphthoquinones) to the non-toxic 1,4,5-trihydroxynaphthalene. It has also shown anthelmintic (expelling parasitic worms) activity against mature and immature Hymenolepis nana in mice. Naphthoquinonic compounds also exhibit antimicrobial activity. Uses Juglone is occasionally used as an herbicide. Traditionally, juglone has been used as a natural dye for clothing and fabrics, particularly wool, and as ink. Because of its tendency to create dark orange-brown stains, juglone has also found use as a coloring agent for foods and cosmetics, such as hair dyes. See also Plumbagin References Herbicides Natural dyes Food colorings 1,4-Naphthoquinones Hydroxynaphthoquinones Substances discovered in the 19th century
Juglone
[ "Biology" ]
1,328
[ "Herbicides", "Biocides" ]
3,118,707
https://en.wikipedia.org/wiki/Wetted%20area
In fluid dynamics, the wetted area is the surface area that interacts with the working fluid or gas. In maritime use, the wetted area is the area of the watercraft's hull which is immersed in water. This has a direct bearing on the overall hydrodynamic drag of the ship or submarine. In aeronautics, the wetted area is the area which is in contact with the external airflow. This has a direct bearing on the overall aerodynamic drag of the aircraft. See also: Wetted aspect ratio. In motorsport, such as Formula One, the term wetted surfaces is used to refer to the bodywork, wings and the radiator, which are in direct contact with the airflow, similarly to the term's use in aeronautics. References Intake Aerodynamics (October 1999) by Seddon and Goldsmith, Blackwell Science and the AIAA Educational Series; 2nd edition Naval architecture Aerodynamics
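Since skin-friction drag scales directly with wetted area, a back-of-envelope estimate makes the relationship above concrete. This is a minimal sketch assuming the standard relation D = ½·ρ·v²·S_wet·Cf; the friction coefficient and the example numbers are illustrative assumptions, not values from this article:

```python
def skin_friction_drag(rho, v, wetted_area, cf):
    """Skin-friction drag D = 0.5 * rho * v**2 * S_wet * Cf (newtons).

    Drag grows linearly with the wetted area S_wet; cf is a representative
    average skin-friction coefficient, which in reality depends on the flow
    (Reynolds number, surface roughness, laminar vs. turbulent).
    """
    return 0.5 * rho * v**2 * wetted_area * cf

# Example: small aircraft at sea level (rho = 1.225 kg/m^3) flying at
# 60 m/s with 40 m^2 of wetted area and an assumed cf of 0.003.
print(skin_friction_drag(1.225, 60.0, 40.0, 0.003))  # ~264.6 N
```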
Wetted area
[ "Chemistry", "Engineering" ]
188
[ "Naval architecture", "Aerodynamics", "Aerospace engineering", "Marine engineering", "Fluid dynamics" ]
12,044,399
https://en.wikipedia.org/wiki/List%20decoding
In coding theory, list decoding is an alternative to unique decoding of error-correcting codes for large error rates. The notion was proposed by Elias in the 1950s. The main idea behind list decoding is that the decoding algorithm, instead of outputting a single possible message, outputs a list of possibilities, one of which is correct. This allows for handling a greater number of errors than is allowed by unique decoding. The unique decoding model in coding theory, which is constrained to output a single valid codeword from the received word, cannot tolerate a greater fraction of errors. This resulted in a gap between the error-correction performance for stochastic noise models (proposed by Shannon) and the adversarial noise model (considered by Richard Hamming). Since the mid-1990s, significant algorithmic progress by the coding theory community has bridged this gap. Much of this progress is based on a relaxed error-correction model called list decoding, wherein the decoder outputs a list of codewords for worst-case pathological error patterns, where the actual transmitted codeword is included in the output list. In the case of typical error patterns, though, the decoder outputs a unique single codeword, given a received word, which is almost always the case (however, this is not known to be true for all codes). The improvement here is significant in that the error-correction performance doubles. This is because now the decoder is not confined by the half-the-minimum-distance barrier. This model is very appealing because having a list of codewords is certainly better than just giving up. The notion of list decoding has many interesting applications in complexity theory. The way the channel noise is modeled plays a crucial role in that it governs the rate at which reliable communication is possible. There are two main schools of thought in modeling the channel behavior: the probabilistic noise model, studied by Shannon, in which the channel noise is modeled precisely in the sense that the probabilistic behavior of the channel is well known and the probability of occurrence of too many or too few errors is low; and the worst-case or adversarial noise model, considered by Hamming, in which the channel acts as an adversary that arbitrarily corrupts the codeword subject to a bound on the total number of errors. The highlight of list decoding is that even under adversarial noise conditions, it is possible to achieve the information-theoretic optimal trade-off between the rate and the fraction of errors that can be corrected. Hence, in a sense, this is like improving the error-correction performance to that possible in the case of a weaker, stochastic noise model. Mathematical formulation Let C be an (n, k, d)_q error-correcting code; in other words, C is a code of length n, dimension k and minimum distance d over an alphabet Σ of size q. The list-decoding problem can now be formulated as follows: Input: A received word y ∈ Σ^n and an error bound e. Output: A list of all codewords c ∈ C whose Hamming distance from y is at most e. Motivation for list decoding Given a received word y, which is a noisy version of some transmitted codeword c, the decoder tries to output the transmitted codeword by placing its bet on the codeword that is “nearest” to the received word. The Hamming distance between codewords is used as the metric by the decoder in finding the codeword nearest to the received word. If d is the minimum Hamming distance of a code C, then there exist two codewords c1 and c2 that differ in exactly d positions.
Now, in the case where the received word y is equidistant from the codewords c1 and c2, unambiguous decoding becomes impossible as the decoder cannot decide which one of c1 and c2 to output as the original transmitted codeword. As a result, half the minimum distance, d/2, acts as a combinatorial barrier beyond which unambiguous error-correction is impossible, if we only insist on unique decoding. However, received words such as the one considered above occur only in the worst case, and if one looks at the way Hamming balls are packed in high-dimensional space, even for error patterns e beyond d/2, there is typically only a single codeword within Hamming distance e of the received word. This claim has been shown to hold with high probability for a random code picked from a natural ensemble, and more so for the case of Reed–Solomon codes, which are well studied and quite ubiquitous in real-world applications. In fact, Shannon's proof of the capacity theorem for q-ary symmetric channels can be viewed in light of the above claim for random codes. Under the mandate of list decoding, for worst-case errors, the decoder is allowed to output a small list of codewords. With some context-specific or side information, it may be possible to prune the list and recover the original transmitted codeword. Hence, in general, this seems to be a stronger error-recovery model than unique decoding. List-decoding potential For a polynomial-time list-decoding algorithm to exist, we need the combinatorial guarantee that any Hamming ball of radius pn around a received word (where p is the fraction of errors in terms of the block length n) contains a small number of codewords. This is because the list size itself is clearly a lower bound on the running time of the algorithm. Hence, we require the list size to be polynomial in the block length n of the code. A combinatorial consequence of this requirement is that it imposes an upper bound on the rate of the code. List decoding promises to meet this upper bound. It has been shown non-constructively that codes of rate R exist that can be list decoded up to a fraction of errors approaching 1 − R. The quantity 1 − R is referred to in the literature as the list-decoding capacity. This is a substantial gain compared to the unique decoding model, as we now have the potential to correct twice as many errors. Naturally, we need at least a fraction R of the transmitted symbols to be correct in order to recover the message. This is an information-theoretic lower bound on the number of correct symbols required to perform decoding, and with list decoding we can potentially achieve this information-theoretic limit. However, to realize this potential, we need explicit codes (codes that can be constructed in polynomial time) and efficient algorithms to perform encoding and decoding. (p, L)-list-decodability For any error fraction 0 ≤ p ≤ 1 and an integer L ≥ 1, a code C ⊆ Σ^n is said to be list decodable up to a fraction p of errors with list size at most L, or (p, L)-list-decodable, if for every received word y ∈ Σ^n, the number of codewords c ∈ C within Hamming distance pn of y is at most L. Combinatorics of list decoding The relation between the list decodability of a code and other fundamental parameters such as minimum distance and rate has been fairly well studied. It has been shown that every code can be list decoded using small lists beyond half the minimum distance, up to a bound called the Johnson radius.
This is quite significant because it proves the existence of (p, L)-list-decodable codes of good rate with a list-decoding radius much larger than d/2. In other words, the Johnson bound rules out the possibility of having a large number of codewords in a Hamming ball of radius slightly greater than d/2, which means that it is possible to correct far more errors with list decoding. List-decoding capacity Theorem (List-Decoding Capacity). Let q ≥ 2, 0 ≤ p ≤ 1 − 1/q and ε ≥ 0. The following two statements hold for large enough block length n. i) If R ≤ 1 − H_q(p) − ε, then there exists a (p, O(1/ε))-list-decodable code of rate R. ii) If R ≥ 1 − H_q(p) + ε, then every (p, L)-list-decodable code of rate R has L = q^Ω(n). Here H_q(p) = p·log_q(q − 1) − p·log_q(p) − (1 − p)·log_q(1 − p) is the q-ary entropy function, defined for p ∈ (0, 1) and extended by continuity to [0, 1]. What this means is that for rates approaching the channel capacity, there exist list-decodable codes with polynomial-sized lists, enabling efficient decoding algorithms, whereas for rates exceeding the channel capacity, the list size becomes exponential, which rules out the existence of efficient decoding algorithms. The proof of the list-decoding capacity is significant in that it exactly matches the capacity of a q-ary symmetric channel. In fact, the term "list-decoding capacity" should actually be read as the capacity of an adversarial channel under list decoding. Also, the proof of the list-decoding capacity is an important result that pinpoints the optimal trade-off between the rate of a code and the fraction of errors that can be corrected under list decoding. Sketch of proof The idea behind the proof is similar to that of Shannon's proof of the capacity of the binary symmetric channel: a random code is picked and shown to be (p, L)-list-decodable with high probability as long as the rate R ≤ 1 − H_q(p) − 1/L. For rates exceeding this quantity, it can be shown that the list size L becomes super-polynomially large. A "bad" event is defined as one in which, given a received word y ∈ [q]^n and L + 1 messages m_0, ..., m_L ∈ [q]^k, it so happens that C(m_i) ∈ B(y, pn) for every 0 ≤ i ≤ L, where p is the fraction of errors that we wish to correct and B(y, pn) is the Hamming ball of radius pn with the received word y as the center. Now, the probability that the codeword C(m_i) associated with a fixed message lies in the Hamming ball B(y, pn) is given by Pr[C(m_i) ∈ B(y, pn)] = Vol_q(y, pn)/q^n ≤ q^(−n(1 − H_q(p))), where the quantity Vol_q(y, pn) is the volume of a Hamming ball of radius pn with the received word y as the center. The inequality in the above relation follows from the upper bound on the volume of a Hamming ball. The quantity q^(n·H_q(p)) gives a very good estimate of the volume of a Hamming ball of radius pn centered on any word in [q]^n. Put another way, the volume of a Hamming ball is translation invariant. To continue with the proof sketch, we invoke the union bound in probability theory, which tells us that the probability of a bad event happening for a given (y, m_0, ..., m_L) is upper bounded by q^(−n(L + 1)(1 − H_q(p))). With the above in mind, the probability of "any" bad event happening can be shown to be less than 1. To show this, we work our way over all possible received words y ∈ [q]^n and every possible subset of L + 1 messages in [q]^k. Now, turning to the proof of part (ii), we need to show that there are super-polynomially many codewords around every y ∈ [q]^n when the rate exceeds the list-decoding capacity; that is, |C ∩ B(y, pn)| is super-polynomially large if the rate R ≥ 1 − H_q(p) + ε. Fix a codeword c ∈ C. Now, for every y ∈ [q]^n picked at random, we have Pr[y ∈ B(c, pn)] = Pr[c ∈ B(y, pn)], since Hamming balls are translation invariant.
From the definition of the volume of a Hamming ball and the fact that y is chosen uniformly at random from [q]^n, we also have Pr[y ∈ B(c, pn)] = Vol_q(y, pn)/q^n ≥ q^(−n(1 − H_q(p)) − o(n)). Let us now define an indicator variable X_c such that X_c = 1 if c ∈ B(y, pn) and X_c = 0 otherwise. Taking the expectation of the number of codewords in the Hamming ball, we have E[|C ∩ B(y, pn)|] = Σ_{c ∈ C} E[X_c] = Σ_{c ∈ C} Pr[y ∈ B(c, pn)] ≥ q^(Rn) · q^(−n(1 − H_q(p)) − o(n)), which is q^Ω(n) when R ≥ 1 − H_q(p) + ε. Therefore, by the probabilistic method, we have shown that if the rate exceeds the list-decoding capacity, then the list size becomes super-polynomially large. This completes the proof sketch for the list-decoding capacity. List decodability of Reed–Solomon codes In 2023, building upon three seminal works, coding theorists showed that, with high probability, Reed–Solomon codes defined over random evaluation points are list decodable up to the list-decoding capacity over linear-size alphabets. List-decoding algorithms In the period from 1995 to 2007, the coding theory community developed progressively more efficient list-decoding algorithms. Algorithms exist for Reed–Solomon codes that can decode up to the Johnson radius, which is 1 − √(1 − δ), where δ is the normalised distance or relative distance. However, for Reed–Solomon codes, δ = 1 − R, which means a fraction 1 − √R of errors can be corrected. Some of the most prominent list-decoding algorithms are the following: Sudan '95 – The first known non-trivial list-decoding algorithm for Reed–Solomon codes, developed by Madhu Sudan, which achieved efficient list decoding up to a fraction 1 − √(2R) of errors. Guruswami–Sudan '98 – An improvement on the above algorithm for list decoding Reed–Solomon codes up to a fraction 1 − √R of errors, by Madhu Sudan and his then doctoral student Venkatesan Guruswami. Parvaresh–Vardy '05 – In a breakthrough paper, Farzad Parvaresh and Alexander Vardy presented codes that can be list decoded beyond the 1 − √R radius for low rates R. Their codes are variants of Reed–Solomon codes obtained by evaluating m ≥ 2 correlated polynomials instead of just one, as in the case of usual Reed–Solomon codes. Guruswami–Rudra '06 – In yet another breakthrough, Venkatesan Guruswami and Atri Rudra gave explicit codes that achieve list-decoding capacity, that is, they can be list decoded up to the radius 1 − R − ε for any ε > 0. In other words, this is error correction with optimal redundancy. This answered a question that had been open for about 50 years. This work was invited to the Research Highlights section of the Communications of the ACM (which is “devoted to the most important research results published in Computer Science in recent years”) and was mentioned in an article titled “Coding and Computing Join Forces” in the Sep 21, 2007 issue of Science magazine. The codes they gave are called folded Reed–Solomon codes, which are nothing but plain Reed–Solomon codes viewed as codes over a larger alphabet by careful bundling of codeword symbols. Because of their ubiquity and the nice algebraic properties they possess, list-decoding algorithms for Reed–Solomon codes were a main focus of researchers. The list-decoding problem for Reed–Solomon codes can be formulated as follows: Input: For an [n, k + 1]_q Reed–Solomon code, we are given the pairs (α_i, y_i) for 1 ≤ i ≤ n, where y_i is the ith symbol of the received word and the α_i's are distinct points in the finite field F_q, together with an error parameter e = n − t. Output: The goal is to find all the polynomials P(X) ∈ F_q[X] of degree at most k (the message length is k + 1) such that P(α_i) = y_i for at least t values of i. Here, we would like t to be as small as possible so that a greater number of errors can be tolerated. With the above formulation, the general structure of list-decoding algorithms for Reed–Solomon codes is as follows: Step 1: (Interpolation) Find a non-zero bivariate polynomial Q(X, Y) of suitably bounded weighted degree such that Q(α_i, y_i) = 0 for 1 ≤ i ≤ n.
Step 2: (Root finding/Factorization) Output all polynomials P(X) of degree at most k such that Y − P(X) is a factor of Q(X, Y), i.e. Q(X, P(X)) = 0. For each of these polynomials, check whether P(α_i) = y_i for at least t values of i; if so, include the polynomial in the output list. Given the fact that bivariate polynomials can be factored efficiently, the above algorithm runs in polynomial time. Applications in complexity theory and cryptography Algorithms developed for list decoding of several interesting code families have found interesting applications in computational complexity and the field of cryptography. Following is a sample list of applications outside of coding theory: Construction of hard-core predicates from one-way permutations. Predicting witnesses for NP-search problems. Amplifying hardness of Boolean functions. Average-case hardness of the permanent of random matrices. Extractors and pseudorandom generators. Efficient traitor tracing. References External links A Survey on list decoding by Madhu Sudan Notes from a course taught by Madhu Sudan Notes from a course taught by Luca Trevisan Notes from a course taught by Venkatesan Guruswami Notes from a course taught by Atri Rudra P. Elias, "List decoding for noisy channels," Technical Report 335, Research Laboratory of Electronics, MIT, 1957. P. Elias, "Error-correcting codes for list decoding," IEEE Transactions on Information Theory, vol. 37, pp. 5–12, 1991. J. M. Wozencraft, "List decoding," Quarterly Progress Report, Research Laboratory of Electronics, MIT, vol. 48, pp. 90–95, 1958. Venkatesan Guruswami's PhD thesis Algorithmic Results in List Decoding Folded Reed–Solomon code Coding theory Error detection and correction Computational complexity theory
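To make the problem statement above concrete, here is a brute-force sketch of the list-decoding task: enumerate the code and keep every codeword within radius e. It runs in time proportional to the code size (exponential in the dimension k), so it only illustrates the definition, not the efficient algorithms surveyed above; the toy repetition code in the example is our own choice:

```python
def hamming_distance(a, b):
    """Number of positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

def list_decode(code, received, e):
    """Return every codeword within Hamming distance e of the received word.

    Brute force over the whole code; the list it returns is exactly the
    output demanded by the list-decoding problem formulation.
    """
    return [c for c in code if hamming_distance(c, received) <= e]

# Toy code: the length-4 binary repetition code {0000, 1111}, with d = 4.
code = [(0, 0, 0, 0), (1, 1, 1, 1)]
received = (0, 0, 1, 1)  # equidistant from both codewords

# Unique decoding is stuck below radius d/2 = 2 on this received word,
# but list decoding at e = 2 simply returns both candidates.
print(list_decode(code, received, 2))  # [(0, 0, 0, 0), (1, 1, 1, 1)]
```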
List decoding
[ "Mathematics", "Engineering" ]
3,196
[ "Discrete mathematics", "Coding theory", "Reliability engineering", "Error detection and correction" ]
12,044,574
https://en.wikipedia.org/wiki/FYVE%2C%20RhoGEF%20and%20PH%20domain%20containing
FYVE, RhoGEF and PH domain containing (FGD) is a gene family consisting of FGD1, FGD2, FGD3 and FGD4. Type 1 is associated with Aarskog–Scott syndrome. See also Guanine nucleotide exchange factor RhoGEF References External links Gene families
FYVE, RhoGEF and PH domain containing
[ "Chemistry", "Biology" ]
71
[ "Biochemistry stubs", "Biotechnology stubs", "Biochemistry" ]
12,045,078
https://en.wikipedia.org/wiki/Excision%20repair%20cross-complementing
Excision repair cross-complementing (ERCC) is a set of proteins which are involved in DNA repair. In humans, ERCC proteins are transcribed from the following genes: ERCC1, ERCC2, ERCC3, ERCC4, ERCC5, ERCC6, and ERCC8. Members 1 through 5 are associated with xeroderma pigmentosum. Members 6 and 8 are associated with Cockayne syndrome. References DNA repair
Excision repair cross-complementing
[ "Chemistry", "Biology" ]
97
[ "DNA repair", "Protein stubs", "Biochemistry stubs", "Molecular genetics", "Cellular processes" ]
12,045,114
https://en.wikipedia.org/wiki/Flory%20convention
In polymer science, the Flory convention is a convention for labelling rotational isomers of polymers. It is named after the Nobel Prize-winning chemist Paul Flory. The convention states that for a given bond, when the dihedral angle formed between the previous and subsequent bonds, projected on the plane normal to the bond, is 0 degrees, the state is labelled as "trans", and when the angle is 180 degrees, the state is labelled as "cis". References Biophysics
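The labelling rule stated above reduces to a check on the dihedral angle. A minimal sketch covering only the two cases the text names; the angle normalization, the tolerance parameter, and the function name are our own illustrative additions:

```python
def flory_label(dihedral_deg, tolerance=1e-6):
    """Label a rotational state per the Flory convention as stated above:
    0 degrees -> "trans", 180 degrees -> "cis".

    Angles are first normalized into (-180, 180]. Angles other than 0 and
    180 (e.g. gauche states) are outside the two cases named in the text,
    so None is returned for them here.
    """
    angle = (dihedral_deg + 180.0) % 360.0 - 180.0  # normalize to (-180, 180]
    if abs(angle) <= tolerance:
        return "trans"
    if abs(abs(angle) - 180.0) <= tolerance:
        return "cis"
    return None

print(flory_label(360.0))   # trans (full turn back to 0 degrees)
print(flory_label(-180.0))  # cis
```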
Flory convention
[ "Physics", "Biology" ]
96
[ "Applied and interdisciplinary physics", "Biophysics" ]
12,045,475
https://en.wikipedia.org/wiki/T-box
T-box refers to a group of transcription factors involved in embryonic limb and heart development. Every T-box protein has a relatively large DNA-binding domain, generally comprising about a third of the entire protein, that is both necessary and sufficient for sequence-specific DNA binding. All members of the T-box gene family bind to the "T-box", a DNA consensus sequence of TCACACCT. Members T-box genes are especially important in the development of embryos; their expression has been found in zebrafish oocytes (Bruce et al., 2003) and Xenopus laevis oocytes (Xanthos et al., 2001). They are also expressed in later stages, including in the adult mouse and rabbit (Szabo et al., 2000). Mutations in the first one found caused short tails in mice, and thus the protein encoded was named brachyury, Greek for "short-tail". In mice this gene is named Tbxt, and in humans it is named TBXT. Brachyury has been found in all bilaterian animals that have been screened, and is also present in the cnidaria. The mouse Tbxt gene was cloned and found to encode a 436-amino-acid embryonic nuclear transcription factor. The protein brachyury binds to the T-box through a region at its N-terminus. Protein activity The encoded proteins of TBX5 and TBX4 play a role in limb development, and play a major role in limb bud initiation specifically. For instance, in chickens TBX4 specifies hindlimb status while TBX5 specifies forelimb status. The activation of these proteins by Hox genes initiates signaling cascades that involve the Wnt signaling pathway and FGF signals in limb buds. Ultimately, TBX4 and TBX5 lead to the development of the apical ectodermal ridge (AER) and zone of polarizing activity (ZPA) signaling centers in the developing limb bud, which specify the orientation and growth of the developing limb. Together, TBX5 and TBX4 play a role in patterning the soft tissues (muscles and tendons) of the musculoskeletal system. Defects In humans, and some other animals, defects in TBX5 gene expression are responsible for Holt–Oram syndrome, which is characterized by at least one abnormal wrist bone. Other arm bones are almost always affected, though the severity can vary widely, from the complete absence of a bone to only a reduction in bone length. Seventy-five percent of affected individuals also have heart defects; most often there is no separation between the left and right ventricles of the heart. TBX3 is associated with ulnar–mammary syndrome in humans, but is also responsible for the presence or absence of dun color in horses, and has no deleterious effects whether expressed or not. T-box genes Genes encoding T-box proteins include: TBXT, the first found (in mice); TBR1; TBX1; TBX2; TBX3; TBX4; TBX5; TBX6; TBX10; TBX15; TBX18; TBX19; TBX20; TBX21; TBX22. See also Brachyury Homeobox Development genes in plants: G-box, I-box, W-box, Z-box References Further reading External links Transcription factors
T-box
[ "Chemistry", "Biology" ]
707
[ "Protein stubs", "Gene expression", "Signal transduction", "Biochemistry stubs", "Induced stem cells", "Transcription factors" ]
12,045,930
https://en.wikipedia.org/wiki/Binary%20integer%20decimal
The IEEE 754-2008 standard includes decimal floating-point number formats in which the significand and the exponent (and the payloads of NaNs) can be encoded in two ways, referred to as binary encoding and decimal encoding. Both formats break a number down into a sign bit s, an exponent q (between qmin and qmax), and a p-digit significand c (between 0 and 10^p − 1). The value encoded is (−1)^s × 10^q × c. In both formats the range of possible values is identical, but they differ in how the significand c is represented. In the decimal encoding, it is encoded as a series of p decimal digits (using the densely packed decimal (DPD) encoding). This makes conversion to decimal form efficient, but requires a specialized decimal ALU to process. In the binary integer decimal (BID) encoding, it is encoded as a binary number. Format Using the fact that 2^10 = 1024 is only slightly more than 10^3 = 1000, 3n-digit decimal numbers can be efficiently packed into 10n binary bits. However, the IEEE formats have significands of 3n+1 digits, which would generally require 10n+4 binary bits to represent. This would not be efficient, because only 10 of the 16 possible values of the additional four bits are needed. A more efficient encoding can be designed using the fact that the exponent range is of the form 3×2^k, so the exponent never starts with 11. Using the Decimal32 encoding (with a significand of 3×2+1 = 7 decimal digits) as an example (e stands for exponent, m for mantissa, i.e. significand): If the significand starts with 0mmm, omitting the leading 0 bit lets the significand fit into 23 bits: s 00eeeeee (0)mmm mmmmmmmmmm mmmmmmmmmm s 01eeeeee (0)mmm mmmmmmmmmm mmmmmmmmmm s 10eeeeee (0)mmm mmmmmmmmmm mmmmmmmmmm If the significand starts with 100m, omitting the leading 100 bits lets the significand fit into 21 bits. The exponent is shifted over 2 bits, and the bit pair 11 shows that this form is being used: s 1100eeeeee (100)m mmmmmmmmmm mmmmmmmmmm s 1101eeeeee (100)m mmmmmmmmmm mmmmmmmmmm s 1110eeeeee (100)m mmmmmmmmmm mmmmmmmmmm Infinity, quiet NaN and signaling NaN use encodings beginning with s 1111: s 11110 xxxxxxxxxxxxxxxxxxxxxxxxxx s 111110 xxxxxxxxxxxxxxxxxxxxxxxxx s 111111 xxxxxxxxxxxxxxxxxxxxxxxxx The bits shown in parentheses are implicit: they are not included in the 32 bits of the Decimal32 encoding, but are implied by the two bits after the sign bit. The Decimal64 and Decimal128 encodings have larger exponent and significand fields, but operate in a similar fashion. For the Decimal128 encoding, 113 bits of significand is actually enough to encode 34 decimal digits, and the second form is never actually required. Cohort A decimal floating-point number can be encoded in several ways, the different ways representing different precisions; for example, 100.0 is encoded as 1000×10^−1, while 100.00 is encoded as 10000×10^−2. The set of possible encodings of the same numerical value is called a cohort in the standard. If the result of a calculation is inexact, the largest amount of significant data is preserved by selecting the cohort member with the largest integer that can be stored in the significand along with the required exponent. Range The proposed IEEE 754r standard limits the range of numbers to a significand of at most 10^n − 1, where n is the number of whole decimal digits that can be stored in the bits available, so that decimal rounding is effected correctly. Performance A binary encoding is inherently less efficient for conversions to or from decimal-encoded data, such as strings (ASCII, Unicode, etc.) and BCD.
A binary encoding is therefore best chosen only when the data are binary rather than decimal. IBM has published some unverified performance data. See also IEEE 754 References Further reading Computer arithmetic
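The two Decimal32 forms above can be unpacked with a few shifts and masks. A minimal decoding sketch, assuming the bit layout shown and the standard Decimal32 exponent bias of 101; the infinity/NaN encodings (leading 1111) are omitted, and the function name is our own:

```python
def decode_decimal32_bid(word: int):
    """Decode a 32-bit BID-encoded decimal32 word.

    Returns (sign, exponent, significand), with the represented value
    being (-1)**sign * significand * 10**exponent.
    """
    sign = (word >> 31) & 1
    if ((word >> 29) & 0b11) != 0b11:
        # First form: significand starts 0mmm; 8-bit exponent field in
        # bits 30-23, 23-bit significand in bits 22-0.
        exp_field = (word >> 23) & 0xFF
        significand = word & ((1 << 23) - 1)
    else:
        # Second form (11 marker): exponent field shifted down 2 bits,
        # 21 stored significand bits with an implicit 100 prefix.
        # (Words starting s 1111 encode infinity/NaN: not handled here.)
        exp_field = (word >> 21) & 0xFF
        significand = (0b100 << 21) | (word & ((1 << 21) - 1))
    exponent = exp_field - 101  # decimal32 exponent bias
    if significand > 10**7 - 1:  # non-canonical encodings represent zero
        significand = 0
    return sign, exponent, significand

# Example: 0x32000055 has exponent field 100 (q = -1) and significand 85,
# so it encodes 85 * 10**-1 = 8.5.
print(decode_decimal32_bid(0x32000055))  # (0, -1, 85)
```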
Binary integer decimal
[ "Mathematics" ]
948
[ "Computer arithmetic", "Arithmetic" ]
12,046,704
https://en.wikipedia.org/wiki/Phosphatidylglycerol
Phosphatidylglycerol is a glycerophospholipid found in pulmonary surfactant and in the plasma membrane, where it directly activates lipid-gated ion channels. The general structure of phosphatidylglycerol consists of an L-glycerol 3-phosphate backbone ester-bonded to either saturated or unsaturated fatty acids on carbons 1 and 2. The head group substituent glycerol is bonded through a phosphomonoester. It is the precursor of surfactant, and its presence (>0.3) in the amniotic fluid of the newborn indicates fetal lung maturity. Approximately 98% of alveolar wall surface area is due to the presence of type I cells, with type II cells producing pulmonary surfactant covering around 2% of the alveolar walls. Once surfactant is secreted by the type II cells, it must be spread over the remaining type I cellular surface area. Phosphatidylglycerol is thought to be important in the spreading of surfactant over the type I cellular surface area. The major surfactant deficiency in premature infants relates to the lack of phosphatidylglycerol, even though it comprises less than 5% of pulmonary surfactant phospholipids. It is synthesized by head group exchange of a phosphatidylcholine-enriched phospholipid using the enzyme phospholipase D. Biosynthesis Phosphatidylglycerol (PG) is formed via a complex sequential pathway whereby phosphatidic acid (PA) is first converted to CDP-diacylglyceride by the enzyme CDP-diacylglyceride synthase. A PGP synthase enzyme then exchanges glycerol-3-phosphate (G3P) for cytidine monophosphate (CMP), forming the temporary intermediate phosphatidylglycerolphosphate (PGP). PG is finally synthesized when a PGP phosphatase enzyme catalyzes the immediate dephosphorylation of the PGP intermediate to form PG. In bacteria, another membrane phospholipid known as cardiolipin can be synthesized by condensing two molecules of phosphatidylglycerol, a reaction catalyzed by the enzyme cardiolipin synthase. In eukaryotic mitochondria, phosphatidylglycerol is converted to cardiolipin by reacting with a molecule of cytidine diphosphate diglyceride in a reaction catalyzed by cardiolipin synthase. See also Glycerol Cardiolipin Lipid-gated ion channels References Hostetler KY, van den Bosch H, van Deenen LL. The mechanism of cardiolipin biosynthesis in liver mitochondria. Biochim Biophys Acta. 1972 Mar 23;260(3):507-13. PMID 4556770. External links Phospholipids Membrane biology
Phosphatidylglycerol
[ "Chemistry" ]
663
[ "Phospholipids", "Molecular biology", "Membrane biology", "Signal transduction" ]
12,047,145
https://en.wikipedia.org/wiki/BlogHer
BlogHer is an American media company founded by Elisa Camahort Page, Jory des Jardins, and Lisa Stone in 2005. It is an online blogger community and holds a yearly conference for women bloggers. BlogHer is owned by SHE Media which is a division of Penske Media Corporation. History BlogHer began as a conference in 2005 in San Jose, California, founded by Elisa Camahort Page, Jory des Jardins, and Lisa Stone. It was originally planned to be a blog, but grew to a 300-person conference on women and blogging once announced. In 2006, BlogHer started a group blog featuring over 60 women blogging on a variety of topics. The second BlogHer conference was held in San Jose and was much larger than the first, with at least 750 attendees. In 2007, the company expanded to include BlogHers Act, a political blogging network by and for women. Dan Gillmor quoted the site's community guideline "We embrace the spirit of civil disagreement" as an ideal. On July 16, 2008, iVillage, a network of online media outlets owned by NBC Universal, announced that it had reached a partnership with the BlogHer network to provide content for sites across the iVillage network. The same year, BlogHer received $5 million in funding from Peacock Ventures, NBC Universal's venture investment arm. Also in 2008, the BlogHer cofounders were honored with the Social Impact ABIE Award from the Anita Borg Institute. By 2010, BlogHer had 76,000 registered bloggers and 80 paid contributing editors. It also had 2,500 affiliated bloggers with revenue-sharing agreements, with 20 million unique monthly visitors. On November 3, 2014, BlogHer was purchased by SHE Media. In 2018, SHE Media was purchased by Penske Media Corporation. BlogHer conference and online community BlogHer holds annual conferences designed to give women bloggers exposure. Its first conference was held in San Jose, California, in 2005. Its community is described as an "ecosystem of blogs where each feeds off the others." It rotates headlines from all bloggers in the community to allow smaller bloggers to benefit from traffic of a larger website. References External links (https://web.archive.org/web/20050413024013/http://www.blogher.org/) "Building Companies the Women's Way", New York Times, 6/28/2007. Cooper Monroe. BlogHers Act: How 11,000 Women Bloggers are Organizing to Save the World. The Huffington Post. American women's websites Web-related conferences Conventions in the United States Organizations for women in science and technology Recurring events established in 2005 Conventions (meetings) Internet properties established in 2006 2005 establishments in California Women's conferences
BlogHer
[ "Technology" ]
576
[ "Organizations for women in science and technology", "Women in science and technology" ]
12,047,634
https://en.wikipedia.org/wiki/Building%20engineer
A building engineer is recognised as being expert in the use of technology for the design, construction, assessment and maintenance of the built environment. Commercial building engineers are concerned with the planning, design, construction, operation, renovation, and maintenance of buildings, as well as with their impacts on the surrounding environment. By country Australia In Australia building engineers, also known as architectural engineers, may work on new building projects or renovations of existing structures. Areas of study include: architectural history and design of buildings air conditioning, lighting and electrical power distribution water supply and distribution fire and life safety systems sustainable building systems design building structures and building construction technology construction planning. As a multifaceted built environment professional, a building engineer can provide important leadership in the design and construction of the built environment, collaborating with architects, engineers, builders and other design professionals. Canada In Canada, building engineers follow an interdisciplinary program that integrates pertinent knowledge from different disciplines. The building engineer explores all phases of the life cycle of a building and develops an appreciation of the building as an advanced technological system. Problems are identified and appropriate solutions found to improve the performance of the building in areas such as: Energy efficiency, passive solar engineering, lighting and acoustics; Indoor air quality; Construction management HVAC and control systems Advanced building materials, building envelope Earthquake resistance, wind effects on buildings, computer-aided design. A building engineer can be a licensed professional, and in some countries the role is synonymous with that of an architect. Building engineers are licensed to perform whole-building design with architect-engineer teams, or as practitioners in structural, mechanical, or electrical fields of building design. Europe Within the European Community, building engineers are affiliated with the AEEBC (Association of European Experts in Building and Construction), one of the biggest pan-European bodies for construction professionals. The titles, tasks and duties of a building engineer may vary from one European state to another. Hong Kong In Hong Kong, building engineers can become members of The Hong Kong Institution of Engineers (registered under the building discipline). They can also register with the Engineers Registration Board after they have become chartered, becoming Registered Professional Engineers (Building). Nigeria In Nigeria, a building engineer is simply known as a builder. A builder is an academically trained, statutorily registered professional responsible for the production, management, construction and maintenance of buildings for the use and protection of mankind. A builder must be a member of the Nigerian Institute of Building (NIOB) and the Council of Registered Builders of Nigeria (CORBON). The major consultancy services rendered by building engineers are listed below: Building production management Building maintenance management Project management Building surveying Feasibility and viability studies Facilities monitoring and evaluation Arbitration, mediation and expert witness.
Functions of building engineers before plan approval, according to the National Building Code: Contract drawings and specifications prepared by registered architects and registered engineers; Priced bills of quantities prepared by registered quantity surveyors; Construction programme, project quality management plan, project health and safety plan, and buildability and maintenance analysis prepared by a registered builder; Conditions of contract. All-risk insurance for the building works, personnel and equipment. United Kingdom and Ireland In the United Kingdom and Ireland, the title of "building engineer" is regulated by the CABE (Chartered Association of Building Engineers). The 'Chartered Association of Building Engineers' was founded as the 'Incorporated Association of Architects and Surveyors' (IAAS) in 1925 in London. The Incorporated Association of Architects and Surveyors became the 'Association of Building Engineers' (ABE) in 1993 and, on obtaining its Royal Charter, became the 'Chartered Association of Building Engineers' (CABE) in 2014, its current name. The CABE accredits university qualifications to use the title Chartered Building Engineer. In the United Kingdom and Ireland, Chartered Building Engineers are involved in the design, planning, engineering, construction, legal compliance, fire safety, and maintenance of buildings. Their roles may vary from one practice to another. Building engineers may offer specialised services within phases of the construction process or for different engineering aspects of the construction industry, such as fire safety, conservation, sustainability, planning or construction. In England and Wales, building engineers may also be state-authorised "Approved Inspectors" (or be employed by a state-authorised "Corporate Approved Inspector") and thus may provide UK "Building Regulations" compliance and approval. Many building engineers are employed by Scottish, English and Welsh local authorities to enforce and apply the Building Regulations and other Building Act 1984 related functions, including public safety in respect of "Dangerous Structures", etc. In the Republic of Ireland, where a semi-public building control system was implemented in 2014, Chartered Building Engineers may register as Building Surveyors in order to act as certifiers for compliance with building regulations at the design and construction stages. The tasks and duties of the building engineer are: Design and preparation of plans Preparation of planning applications Remedial works Dilapidation schedules / negotiations Site inspections and certification of works Repair and maintenance of buildings Energy auditing and energy performance certificates Measured survey of land and buildings Advice on energy-saving designs Access audits Rehabilitation of properties Conservation Construction management Quantity surveying Fire safety and health and safety Demolition Negotiating, obtaining and implementing claims and grants Building variations Building inspections and surveys Advice on building law/party wall issues Checking building designs for compliance with the 'Building Regulations' and related 'Building' & 'Fire' Legislation. Enforcement of breaches of the 'Building Regulations' and related 'Building' & 'Fire' Legislation, including 'law court' proceedings. United States In the United States of America, building engineering, also known as architectural engineering, is the application of engineering principles and technology to building design and construction.
The term architectural engineer may refer to: An engineer in the structural, mechanical, electrical, construction or other engineering fields of building design and construction. A licensed engineering professional in parts of the United States. Architectural engineers work with other engineers and architects on the design and construction of buildings. The architectural engineer applies the knowledge and skills of broader engineering disciplines to the design, construction, operation, maintenance, and renovation of buildings and their component systems while paying careful attention to their effects on the surrounding environment. With the establishment of a specific "Architectural Engineering" NCEES Professional Engineering registration examination in the 1990s, and its first offering in April 2003, Architectural Engineering is now recognized as a distinct engineering discipline in the United States. See also Architect Architectural engineering Architectural technologist Building services engineering Civil engineer Engineer Structural engineer References Architecture occupations Architectural design
Building engineer
[ "Engineering" ]
1,307
[ "Building engineering", "Civil engineering", "Architectural design", "Architecture occupations", "Design", "Architecture" ]
12,047,914
https://en.wikipedia.org/wiki/Transaction%20Processing%20over%20XML
Transaction Processing over XML (TPoX) is a computing benchmark for XML database systems. As a benchmark, TPoX is used for the performance testing of database management systems that are capable of storing, searching, modifying and retrieving XML data. The goal of TPoX is to allow database designers, developers and users to evaluate the performance of XML database features, such as the XML query languages XQuery and SQL/XML, XML storage, XML indexing, XML Schema support, XML updates, transaction processing and logging, and concurrency control. TPoX includes XML update tests based on the XQuery Update Facility. The TPoX benchmark exercises the processing of data-centric XML, in contrast to content- or document-centric XML. TPoX was originally developed and tested by IBM and Intel, but became an open source project on SourceForge in January 2007. TPoX 1.1 was released in June 2007. TPoX 2.0 was released in July 2009. The TPoX benchmark package contains the following: XML Schemas that define the XML data used in the benchmark. An XML data generation tool to generate an arbitrary number of XML documents with well-defined value distributions and referential integrity across documents. The XML data is generated conforming to industry schemas such as FIXML to model real-world applications. Workloads, which are executed on the generated data. A workload is a set of transactions. A transaction can be a query in XQuery or SQL/XML notation or an insert, update or delete operation. A Java application which acts as a workload driver. It is configurable and can spawn 1 to n parallel threads to simulate concurrent database users. Each user connects to the database and executes a random sequence of transactions defined in the workload. Parameter markers in the transactions are replaced by real values that are drawn from random value distributions. The workload driver collects and reports performance metrics, such as the transaction throughput as well as minimum, maximum and average response times. Documentation. The TPoX workload consists of seven XML queries, two inserts, two deletes, and six XML update operations. The primary performance metric of the benchmark is TTPS (TPoX Transactions Per Second), which is the throughput of the multi-user read/write workload at a given scale factor. The smallest TPoX scale factor uses 10 GB of raw XML documents; the largest uses 1 PB of raw XML documents. References Ron Bourret's list of XML database benchmarks An XML transaction processing benchmark, Proceedings of the 2007 ACM SIGMOD International Conference on Management of Data The CEO of Marklogic describes TPoX as a data-centric as opposed to content-centric XML scenario. TPoX is included in the list of XML Benchmarks in the Encyclopedia of Database Systems. TPoX is used in section 7.2 of an article from Oracle Corporation. TPoX is used in a research study from the University of Kaiserslautern, Germany. TPoX has been used in a research project to evaluate the efficiency of solid state disks. DB2 9.5 pureXML Performance Trends on the Next Generation Quad-Core Intel Xeon Processor DB2 9 pureXML Scalability on Intel Xeon MP Platforms Using IBM N Series Storage Taming a Terabyte of XML Data External links Transaction Processing over XML (TPoX) TPoX 1.1 TPoX 1.2 TPoX 1.3 TPoX 2.0 TPoX Benchmark Results Benchmarks (computing)
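The workload-driver mechanism described above (n parallel threads, each simulating a user running a random sequence of transactions, with throughput and response-time reporting) can be sketched in miniature. The real driver is a Java application; this Python sketch with stand-in transaction functions only illustrates the mechanism, and none of the names below come from the TPoX package:

```python
import random
import threading
import time

def run_user(workload, duration_s, results, lock):
    """One simulated user: execute randomly chosen transactions until time is up."""
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        txn = random.choice(workload)  # a real driver weights the transaction mix
        start = time.monotonic()
        txn()  # would submit an XQuery/SQL/XML query or update to the database
        elapsed = time.monotonic() - start
        with lock:
            results.append(elapsed)

def run_benchmark(workload, users=4, duration_s=5.0):
    """Spawn `users` threads, then report throughput and average response time."""
    results, lock = [], threading.Lock()
    threads = [threading.Thread(target=run_user,
                                args=(workload, duration_s, results, lock))
               for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(f"transactions/s: {len(results) / duration_s:.1f}")
    print(f"avg response:   {sum(results) / len(results) * 1000:.2f} ms")

# Stand-in transactions; a real run would execute the TPoX queries/updates.
run_benchmark([lambda: time.sleep(0.001), lambda: time.sleep(0.002)])
```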
Transaction Processing over XML
[ "Technology" ]
754
[ "Benchmarks (computing)", "Computing comparisons", "Computer performance" ]
12,048,648
https://en.wikipedia.org/wiki/Canberra%20Glassworks
Canberra Glassworks is an Australian gallery and glass art studio in Canberra, open to the general public to view glass artists at work. Opened in May 2007 by Jon Stanhope, it is the largest dedicated glass studio facility in Australia. Planning and building It is located in the Kingston Powerhouse, which was designed by John Smith Murdoch, constructed from 1913 to 1915, and is a historical landmark. The power station generated electricity until 1957 and is Canberra's oldest public building. Particular effort was made to preserve the original building and surroundings where possible, and the project was developed within a framework of Ecologically Sustainable Design (ESD). artsACT and Jon Stanhope, Canberra's Chief Minister and Minister for the Arts, announced the name of the centre in late 2005, choosing 'Canberra' to highlight the city's potential national and international reputation for studio glass, and the term 'glassworks' to be clear about what equipment and facilities were available at the centre to artists as well as to the general public. The centre is strongly linked with the ANU School of Art Glass Workshop, whose founding workshop head Klaus Moje was pivotal in establishing the centre. The centre was originally scheduled to open in September 2006, but opened in May 2007. The creation of the Glassworks and the renovation of this building are part of the redevelopment of the lake foreshore surrounding Kingston. Public outreach The studio contains a public viewing gallery above the main hotshop areas as well as public access walkways around all glass working areas. The Glassworks also offers courses to non-practicing artists and members of the public, as well as students. The studio also allows members of the general public to commission works through artists working at the studio. In addition there are rotating art displays featuring multiple different styles of glassworking. Notes External links Canberra Glassworks website Glassworks at artsACT website ANU School of Art Glass Workshop ANU Glass Australia Database Museums in Canberra Art museums and galleries in Canberra 2007 establishments in Australia Art museums and galleries established in 2007 Glass museums and galleries
Canberra Glassworks
[ "Materials_science", "Engineering" ]
403
[ "Glass engineering and science", "Glass museums and galleries" ]
12,049,028
https://en.wikipedia.org/wiki/Hypercarnivore
A hypercarnivore is an animal whose diet is more than 70% meat, obtained either via active predation or by scavenging. The remaining non-meat portion of the diet may consist of non-animal foods such as fungi, fruits or other plant material. Some extant examples of hypercarnivorous animals include crocodilians, owls, shrikes, eagles, vultures, felids, most wild canids, the polar bear, odontocete cetaceans (toothed whales), snakes, spiders, scorpions, mantises, marlins, groupers, piranhas and most sharks. Every species in the family Felidae, including the domesticated cat, is a hypercarnivore in its natural state. The term is also used in paleobiology to describe taxa of animals which have an increased slicing component of their dentition relative to the grinding component. In domestic settings, cats, for example, may be given a diet designed from only plant and synthetic sources using modern processing methods. Feeding farmed animals such as alligators and crocodiles mostly or fully plant-based feed is sometimes done to save costs or as an environmentally friendly alternative. Hypercarnivores are not necessarily apex predators. For example, salmon are exclusively carnivorous, yet they are prey at all stages of life for a variety of organisms. Many prehistoric mammals of the clade Carnivoramorpha (Carnivora and Miacoidea without Creodonta), along with the early order Creodonta and some mammals of the even earlier order Cimolesta, were hypercarnivores. The earliest carnivorous mammal is considered to be Cimolestes, which existed during the Late Cretaceous and early Paleogene periods in North America about 66 million years ago. Theropod dinosaurs such as Tyrannosaurus rex, which existed during the late Cretaceous, although not mammals, were obligate carnivores. Large hypercarnivores evolved frequently in the fossil record, often in response to an ecological opportunity afforded by the decline or extinction of previously dominant hypercarnivorous taxa. While the evolution of large size and carnivory may be favored at the individual level, it can lead to a macroevolutionary decline, wherein such extreme dietary specialization results in reduced population densities and a greater vulnerability to extinction. As a result of these opposing forces, the fossil record of carnivores is dominated by successive clades of hypercarnivores that diversify and decline, only to be replaced by new hypercarnivorous clades. As an example of related species with differing diets, even though they diverged only 150,000 years ago, the polar bear is the most highly carnivorous bear (more than 90% of its diet is meat) while the grizzly bear is one of the least carnivorous in many locales, with less than 10% of its diet being meat. When the genomes of the Tasmanian devil, killer whale, polar bear, leopard, lion, tiger, cheetah and domestic cat were analysed, shared positive selection for two genes related to bone development and repair (DMP1, PTN) was found, which is not seen in omnivores or herbivores. This indicates that a stronger bone structure is a crucial requirement and drives selection towards a predatory hypercarnivore lifestyle in mammals. Positive selection of one gene related to enhanced bone mineralisation has been found in the scimitar-toothed cat (Homotherium latidens). See also Hypocarnivore List of feeding behaviours Mesocarnivore References External links Carnivory
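The 70% threshold above, together with the hypo- and mesocarnivore categories linked under See also, amounts to a simple classification rule. A minimal sketch: the >70% hypercarnivore cut-off comes from the text, while the <30% (hypocarnivore) and 30–70% (mesocarnivore) bands are assumed from conventional usage and are not stated in this article:

```python
def carnivory_class(meat_fraction: float) -> str:
    """Classify a diet by its meat fraction (0.0 to 1.0).

    >0.7 -> hypercarnivore (per the definition above); the 0.3 boundary
    for hypo-/mesocarnivores is an assumption from conventional usage.
    """
    if not 0.0 <= meat_fraction <= 1.0:
        raise ValueError("meat_fraction must be between 0 and 1")
    if meat_fraction > 0.7:
        return "hypercarnivore"
    if meat_fraction >= 0.3:
        return "mesocarnivore"
    return "hypocarnivore"

print(carnivory_class(0.9))  # polar bear (>90% meat): hypercarnivore
print(carnivory_class(0.1))  # grizzly bear in many locales: hypocarnivore
```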
Hypercarnivore
[ "Biology" ]
763
[ "Eating behaviors", "Carnivory" ]
12,049,489
https://en.wikipedia.org/wiki/Barry%20Trost
Barry M. Trost (born June 13, 1941, in Philadelphia) is an American chemist who is the Job and Gertrud Tamaki Professor Emeritus in the School of Humanities and Sciences at Stanford University. The Tsuji–Trost reaction and the Trost ligand are named after him. He is prominent for advancing the concept of atom economy. Early life and education Trost was born in Philadelphia, Pennsylvania on June 13, 1941. He studied at the University of Pennsylvania and obtained his B.A. in 1962. He then attended the Massachusetts Institute of Technology for graduate school, where he worked with Herbert O. House on enolate anions, the Mannich reaction, and the Robinson annulation. Trost graduated with his Ph.D. in 1965. Independent career Trost moved to the University of Wisconsin–Madison to begin his independent career; he was promoted to professor of chemistry in 1969 and named Vilas Research Professor in 1982. In 1987, he moved to Stanford University as professor of chemistry, and was appointed the Job and Gertrud Tamaki Professor of Humanities and Sciences in 1990. He previously served as chair of the department of chemistry. Trost has an h-index of 161 according to Google Scholar and of 140 (1,040 documents) according to Scopus. Research Trost's research focused on chemical synthesis. In order to build complex target molecules from simple molecules, Trost developed new reactions and reagents, and utilized cascade reactions and tandem reactions. Target molecules have potential applications as novel catalysts, as well as antibiotic and anti-tumor therapeutics. References Sources Biographical Data at Trost's Home Page External links 21st-century American chemists American organic chemists 1941 births Scientists from Philadelphia University of Pennsylvania alumni Living people Members of the United States National Academy of Sciences Massachusetts Institute of Technology alumni Stanford University faculty University of Wisconsin–Madison faculty
Barry Trost
[ "Chemistry" ]
383
[ "Organic chemists", "American organic chemists" ]
12,049,745
https://en.wikipedia.org/wiki/Mater%20Dei%20Hospital
Mater Dei Hospital (MDH), also known simply as Mater Dei, is an acute general and teaching hospital in Msida, Malta. It was opened in 2007, replacing St. Luke's Hospital. It is a public hospital affiliated with the University of Malta, offering general and specialist services. History The hospital opened on 29 June 2007, replacing St. Luke's Hospital as the main public general hospital. The 250,000 square metre complex includes 825 beds and 25 operating theatres. It was built by Skanska Malta JV, a subsidiary of the Swedish construction firm Skanska. The project was planned to cost Lm 50,000,000 (around €116,000,000), but the cost skyrocketed to more than Lm 250,000,000 (around €582,000,000). Skanska was entrusted with the building of a new general hospital in Malta, and the "state-of-the-art" Mater Dei Hospital ultimately cost over €700,000,000. Later, however, it was discovered that Skanska had used lower-quality cement of the kind generally used to build pavements. As a result, the hospital could not add further floors or build a helipad on the roof. University of Malta affiliation The hospital is located adjacent to the University of Malta, and contains the faculties of Health Sciences, Medicine and Surgery, and Dental Surgery in a purpose-built Medical School wing. Prior to the COVID-19 pandemic, the hospital housed the Health Sciences Library, a branch library of the University of Malta Library. Sir Anthony Mamo Oncology Centre The Sir Anthony Mamo Oncology Centre welcomed its first 50 outpatients on 22 December 2014. Excavation for the centre began in 2010 and construction started in 2012. It is named after the first president of Malta, Sir Anthony Mamo. It cost €52 million, and an estimated €8 million a year is required to run it. The centre offers more advanced radiotherapy with two machines commissioned from the Leeds Spencer Centre, where they were introduced in 2013. The machines enable more precise radiotherapy and stronger doses, reducing the length and frequency of sessions. The Maltese Government has also considered expanding radiotherapy services to include autologous transplants, as well as developing a clinical trials unit through which Maltese patients would be able to benefit from new medicines not yet on the market. Beds at the new centre increased from the 78 at Boffa Hospital to 113, and the outpatient clinics from two to 12. The type of chemotherapy provided is more advanced. Palliative care beds were also increased from the 10 at Boffa to 16. A new MRI machine will help reduce waiting lists. Patients and their families are followed before, during and after treatment, and more training is provided for staff. A total of 47 new professionals had been recruited by its opening day. See also List of hospitals in Malta References Hospital buildings completed in 2007 Hospitals in Malta Hospitals established in 2007 Msida Architectural controversies Controversies in Malta 21st-century controversies 2007 establishments in Malta
Mater Dei Hospital
[ "Engineering" ]
637
[ "Architectural controversies", "Architecture" ]
12,049,925
https://en.wikipedia.org/wiki/ATP2A2
ATP2A2, also known as sarcoplasmic/endoplasmic reticulum calcium ATPase 2 (SERCA2), is an ATPase associated with Darier's disease and acrokeratosis verruciformis. This gene encodes one of the SERCA Ca(2+)-ATPases, which are intracellular pumps located in the sarcoplasmic or endoplasmic reticula of muscle cells. This enzyme catalyzes the hydrolysis of ATP coupled with the translocation of calcium from the cytosol to the sarcoplasmic reticulum lumen, and is involved in the calcium sequestration associated with muscular excitation and contraction. Alternative splicing results in multiple transcript variants encoding different isoforms. References External links
ATP2A2
[ "Chemistry", "Biology" ]
167
[ "Biochemistry stubs", "Biotechnology stubs", "Biochemistry" ]
12,050,205
https://en.wikipedia.org/wiki/Pell%20Frischmann
Pell Frischmann (PF) is a multi-disciplinary engineering consultancy based in London that provides structural and civil engineering, planning, design, and consulting services. Pell Frischmann employs over 1,000 staff worldwide with eight offices across the UK and international offices in India, the Middle East, Turkey and Romania. The original company was founded by Cecil Pell in the 1920s, who entered into partnership with Wilem Frischmann in the early 1970s, forming Pell Frischmann and Partners. In 2003, the umbrella company became Pell Frischmann Consulting Engineers. Major subsidiaries of the company include Frischmann Prabhu, operating in the Asia-Pacific region, and Conseco, operating in the Middle East. Key areas of business include buildings, building services, land development and regeneration, traffic and transportation, highways and bridges, railways, environment and process technology, water and wastewater, power, fire engineering, and IT and telecommunications. In April 2015, Pell Frischmann received The Queen's Award for International Trade. History The original company was founded in 1926 by Cecil Pell, who subsequently formed Pell Frischmann and Partners with Wilem Frischmann in 1972. The Group became a limited company in 1984. Until the late 1960s, the Group's activities centred on structural engineering and mechanical and electrical building services. The Pell Frischmann Group has acquired a number of companies. These include the Department for Transport's West Yorkshire Road Construction Sub-Unit (1981); 70 staff from the Infrastructure Directorate of the Milton Keynes Development Corporation, who joined the Group in 1988 to form Pell Frischmann Milton Keynes Ltd; and Pell Frischmann Water Ltd, formed in 1990 as a joint venture between the Pell Frischmann Group and South West Water plc. EPD Consultants was acquired from Balfour Beatty (1998), and De Leuw Rothwell Ltd (2000) was a new company set up to safeguard the jobs of a number of staff following the collapse and subsequent liquidation of the former De Leuw Rothwell. In 2002, the various offices were consolidated under one umbrella company, and in September 2003 the company was restructured and changed its trading name to Pell Frischmann Consulting Engineers (PFCE). Major subsidiaries include Frischmann Prabhu, founded by Sudhakar Prabhu and based in Mumbai, providing expertise in the Asia-Pacific region, and Conseco International, which focuses on projects in the Middle East and was awarded the Queen's Award for Enterprise in 2006 for work in the reconstruction of Iraq. In October 2015, it was announced that RAG-Stiftung (Foundation) Investment Company had bought a majority stake in the company. Notable projects Centre Point, London The Centre Point tower was constructed between 1963 and 1966. It is a 32-storey office building above Tottenham Court Road tube station. Pell Frischmann provided structural engineering design and construction supervision services. The external columns, load-bearing façade and floors are prefabricated off-site from concrete which was highly polished to give it the appearance of marble or granite. Centre Point achieved a record building time in the 1960s, in which a complete floor cycle was achieved in seven days without the use of exterior scaffolding. United Kingdom Tower 42 (formerly National Westminster Tower), London; the 52-storey former NatWest Tower office building rises to about 183 metres above ground level.
Liverpool One, Liverpool 62 Buckingham Gate; the building was designed by Pelli Clarke Pelli and Swanke Hayden & Connell, with Pell Frischmann working as structural engineer. It was completed in May 2013. Pell Frischmann was shortlisted in the 2013 ACE awards 'Building Structures Large' category for its work on the groundscraper. Home Office Headquarters (2 Marsham Street), London New Street Square, London 7–10 Old Bailey, London University of Oxford, Oxford; Pell Frischmann has worked on a number of projects in the university, including the following: New Bodleian Library; provided structural engineering services for the new structure. Earth Sciences Building; a teaching and research facility for the Earth Sciences Department designed by Wilkinson Eyre in partnership with Hoare Lea. Pell Frischmann won the ACE Engineering Excellence award for Building Structures in 2011 for its work on the department. Blavatnik School of Government; Pell Frischmann provided civil and structural engineering services for the new-build structure. The building includes a 200-person-capacity lecture theatre, teaching spaces, and seminar rooms. St Cross College; extensions to workspaces and accommodation. Pell Frischmann provided structural engineering services while Hoare Lea provided mechanical and electrical engineering. Hunts House, King's College London Cornwall House, King's College London Paddington Central, London; Paddington Central Development (Phase 1) is some 12 storeys high and comprises prime office accommodation, together with further retail and leisure space, and over 200 residential units. The Point, Paddington, London Middle East Al Zeina Beach Development, Abu Dhabi; provides 1,200 beach-front apartments. References External links Pell Frischmann Website Frischmann Prabhu Engineering consulting firms of the United Kingdom International engineering consulting firms British companies established in 1926 Construction and civil engineering companies of the United Kingdom 1926 establishments in England Consulting firms established in 1926 Construction and civil engineering companies established in 1926
Pell Frischmann
[ "Engineering" ]
1,094
[ "Engineering consulting firms", "International engineering consulting firms" ]
12,051,099
https://en.wikipedia.org/wiki/Mobile%20identity%20management
Mobile identity is a development of online authentication and digital signatures, where the SIM card of one's mobile phone works as an identity tool. Mobile identity enables legally binding authentication and transaction signing for online banking, payment confirmation, corporate services, and consuming online content. The user's certificates are maintained on the telecom operator's SIM card and, in order to use them, the user has to enter a personal, secret PIN code. When using mobile identity, no separate card reader is needed, as the phone itself already performs both functions. In contrast to other approaches, the mobile phone in conjunction with a mobile signature-enabled SIM card aims to offer the same security and ease of use as, for example, smart cards in existing digital identity management systems. Smart card-based digital identities can only be used in conjunction with a card reader and a PC. In addition, distributing and managing the cards can be logistically difficult, exacerbated by the lack of interoperability between services relying on such a digital identity. There are a number of private company stakeholders that have an inherent interest in setting up a mobile signature service infrastructure to offer mobile identity services. These stakeholders are mobile network operators and, to a certain extent, financial institutions or service providers with an existing large customer base, which could leverage the use of mobile signatures across several applications. By country Finland The Finnish government has supervised the deployment of a common derivative of the ETSI-based mobile signature service standard, thus allowing the Finnish mobile operators to offer mobile signature services. The Finnish government certificate authority (CA) also issues the certificates that link the digital keys on the SIM card to the person's real-world identity. Islamic Republic of Iran Through the national mobile registration program, the Iranian customs administration and the Ministry of ICT maintain a database of the IMEI numbers of legally imported phones. Iranian citizens are given full access to the national roaming networks of Iranian mobile phone operators only if their national ID is linked both to their SIM cards and to a non-contraband (non-smuggled) IMEI number. Sweden In the Nordic region, governments, the public sector and financial institutions increasingly offer online and mobile channels for accessing their services. In Sweden the WPK consortium, owned by banks and mobile operators, specifies a mobile signature service infrastructure that is used by banks to authenticate online banking users. Telenor Sverige has provided technology for the company's mobile signature services in Sweden since 2009. Telenor enables its customers to log in securely to online services, using their mobile phone for authentication and digital signing. Estonia The Estonian government issues all citizens with a smart card and digital identity called the Estonian ID card. Additionally, Sertifitseerimiskeskus, the certificate authority of Estonia, issues special SIM cards for mobile phones which act as a national personal identification method. The service is called m-id. Turkey In 2007, the mobile operator Turkcell bought a mobile signature service infrastructure from Gemalto and launched MobilImza, the world's first mobile security solution. Turkcell has partnered with over 200 businesses, including many banks, to enable them to use mobile signatures for online user authentication.
Other services relying on mobile signatures in Turkey include securing the withdrawal of small loans from an ATM, and processing custom workflow processes by enabling applicants to use mobile signatures. Austria The Austrian government allows private sector companies to propose means for storing the government-controlled digital identity. Since 2006, the Austrian government has explicitly mentioned mobile phones as one of the likely devices to be used for storing and managing a digital identity. Eight Austrian savings banks will launch a pilot allowing online user authentication with mobile signatures. Ukraine In Ukraine, the Mobile ID project started in 2015 and was later declared one of the Government of Ukraine's priorities, supported by the EU. At the beginning of 2018, Ukrainian cell operators were evaluating proposals and testing platforms from different local and foreign developers, with platform selection to be followed by a comprehensive certification process. Ukrainian IT and cryptography work on Mobile ID is largely represented by Innovation Development HUB LLC, whose Mobile ID platform was the sole solution to have passed certification and was considered the most likely to be implemented in Ukraine. As of September 2019, all of the 'big three' cell operators in Ukraine have launched a Mobile ID service: Vodafone - commercial launch in August 2018. Kyivstar - commercial launch in December 2018. Lifecell - commercial launch in August 2019. The Vodafone and Lifecell operators implemented a Mobile ID solution of Ukrainian origin designed by Innovation Development HUB LLC. See also Identity management Location-based service Mobile computing Mobile security References Identity management Mobile telecommunication services
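The signing flow described above can be made concrete with a short sketch. The following Python fragment is illustrative only: the endpoint URL, field names and response format are hypothetical stand-ins for an ETSI TS 102 204-style signature request, not a real MSSP API. An application provider submits the user's mobile number and a text to be signed; the MSSP pushes a prompt to the SIM, the user confirms with the PIN, and a signature is returned.

import requests  # third-party HTTP library

MSSP_URL = "https://mssp.example.com/MSS_Signature"  # hypothetical endpoint

def request_mobile_signature(msisdn, dtbs_text):
    """Ask the user's MSSP for a signature over dtbs_text.

    The user sees dtbs_text on their handset and confirms with the SIM PIN;
    the signature itself is computed on the SIM card, never on this server.
    """
    payload = {
        "MobileUser": msisdn,             # user's phone number (illustrative field name)
        "DataToBeSigned": dtbs_text,      # text displayed on the handset
        "SignatureProfile": "authentication",
    }
    response = requests.post(MSSP_URL, json=payload, timeout=120)
    response.raise_for_status()
    # Base64 signature, to be verified against the user's SIM certificate.
    return response.json()["Signature"]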
Mobile identity management
[ "Technology" ]
912
[ "Mobile telecommunications", "Mobile telecommunication services" ]
12,051,875
https://en.wikipedia.org/wiki/Mobile%20signature%20roaming
In mobile telecommunications technology, the concept of mobile signature roaming means that an access point (AP) should be able to get a mobile signature from any end-user, even if the AP and the end-user have not contracted a commercial relationship with the same MSSP. Otherwise, an AP would have to build commercial terms with as many MSSPs as possible, and this might be a cost burden. This means that a mobile signature transaction issued by an application provider should be able to reach the appropriate MSSP, and this should be transparent to the AP. Mobile signature roaming itself requires commercial agreements between the entities that facilitate it. In this respect, it is assumed that various entities (including MSSPs) will join in order to define common commercial terms and rules corresponding to a mobile signature roaming service. This is the concept of a mobile signature roaming service. Entities involved Acquiring Entity (AE): an entity performing this role is one of the entry points of the mesh, and handles commercial agreements with APs. The entry point in the mesh may be, for instance, an MSSP, or an aggregator of application providers in the context of particular communities of interest (e.g. payment associations, banks, MNOs etc.); this is the reason for defining this more abstract role. An Acquiring Entity implements the Web Service Interface specified in TS 102 204 [8]; Home MSSP (HMSSP): the MSSP that is able to deal with the current end-user and the current transaction; Routing Entity (RE): any entity that facilitates the communication between the AE and the Home MSSP; Attribute Provider: this role is described by Liberty Alliance [3]. One or several mesh members may undertake this role and store relevant attributes in order to facilitate the discovery of the Home MSSP by other mesh members; Identity Issuer: an entity that is able to make a link between a mobile signature and an end user's identity. Within a PKI system, this is typically the certificate authority (CA) and/or a registration authority (RA); Verifying Entity (VE): an entity that can verify a mobile signature. An MSSP may be a Verifying Entity as well; Acquiring MSSP (AMSSP): an MSSP acting as an entry point in the mesh. A commercial model for a mobile signature roaming service can thus be imagined as a mesh of MSSPs which are fully or partially connected to each other. First mobile signature roaming transactions 13 April 2005 "Finnet and TeliaSonera Finland Performed Successful MSS Roaming" 7 February 2007 "World's first international Mobile Signature Roaming - with ETSI-MSS by Valimo (Finland) and BBS (Norway)" References Identity management Mobile telecommunications standards
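The roles above can be illustrated with a minimal sketch. The following Python fragment is an assumption-laden illustration, not part of any MSS standard: the registry structure, prefixes, URLs and function names are invented. It shows how an acquiring entity might resolve the home MSSP for a given mobile number (using data an attribute provider could hold) before forwarding a signature request.

# Hypothetical attribute-provider data: MSISDN prefix -> home MSSP address.
HOME_MSSP_REGISTRY = {
    "+35840": "https://mssp-a.example.fi",
    "+4790": "https://mssp-b.example.no",
}

def resolve_home_mssp(msisdn):
    """Return the home MSSP for a subscriber, or None if no mesh member serves them."""
    for prefix, mssp in HOME_MSSP_REGISTRY.items():
        if msisdn.startswith(prefix):
            return mssp
    return None

def forward(mssp_url, request):
    # Transport stub: a real AE would make the TS 102 204 web-service call here.
    return {"routed_to": mssp_url, "request": request}

def route_signature_request(msisdn, request):
    """Acquiring-entity logic: forward the request to the user's home MSSP."""
    home_mssp = resolve_home_mssp(msisdn)
    if home_mssp is None:
        raise ValueError("No roaming agreement covers this subscriber")
    return forward(home_mssp, request)

print(route_signature_request("+358401234567", {"dtbs": "Login to example.org?"}))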
Mobile signature roaming
[ "Technology" ]
563
[ "Mobile telecommunications", "Mobile telecommunications standards" ]
12,052,294
https://en.wikipedia.org/wiki/Electronic%20differential
In automotive engineering the electronic differential is a form of differential, which provides the required torque for each driving wheel and allows different wheel speeds. It is used in place of the mechanical differential in multi-drive systems. When cornering, the inner and outer wheels rotate at different speeds, because the inner wheels describe a smaller turning radius. The electronic differential uses the steering wheel command signal and the motor speed signals to control the power to each wheel so that all wheels are supplied with the torque they need. Functional description The classical automobile drivetrain is composed of a single internal combustion engine providing torque to one or more driving wheels. The most common solution is to use a mechanical device to distribute torque to the wheels. This mechanical differential allows different wheel speeds when cornering. With the emergence of electric vehicles, new drivetrain configurations are possible. Multi-drive systems become easy to implement because of the large power density of electric motors. These systems, usually with one motor per driving wheel, need an additional top-level controller which performs the same task as a mechanical differential. The ED scheme has several advantages over a mechanical differential: simplicity - it avoids additional mechanical parts such as a gearbox or clutch; independent torque for each wheel allows additional capabilities (e.g., traction control, stability control); reconfigurability - it can be reprogrammed to include new features or tuned according to the driver's preferences; it allows distributed regenerative braking; the torque is not limited by the wheel with least traction, as it is with a mechanical differential; faster response times; accurate knowledge of the traction torque per wheel. However, the ED scheme also comes with disadvantages and drawbacks: sensor and controller errors and glitches can occur, producing inaccurate readings and outputs compared with a conventional differential, with the result that wheels receive either too much or too little power and torque; premature tyre wear increases as a result of such inaccurate outputs; the electronic systems cost more to manufacture and maintain. Applications Several applications of this technology have proven successful and have increased vehicle performance. The application range is wide and includes the huge T 282B from Liebherr, which is considered the world's largest truck. This earth-hauling truck is driven by an electric propulsion system composed of two independent electric motors. These motors, providing a maximum power of 2,700 kW, are controlled so as to adjust their speeds when cornering, thus increasing traction and reducing tyre wear. The Eliica is also equipped with an electronic differential; this eight-wheeled electric vehicle is capable of driving up to 370 km/h whilst maintaining precise torque control on each wheel. Smaller vehicles for traction purposes and system-on-chip controllers for generic vehicular applications are also available. References Automotive transmission technologies Vehicle technology Mechanical power control
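The speed-reference calculation described in the functional description can be sketched briefly. The following Python fragment is a minimal illustration assuming a simple Ackermann steering model; the geometry constants and function names are invented for the sketch and are not drawn from any particular vehicle or controller.

from math import tan, radians

WHEELBASE = 2.6   # metres (illustrative)
TRACK = 1.5       # metres between left and right driving wheels (illustrative)

def wheel_speed_references(vehicle_speed, steering_angle_deg):
    """Return (left, right) wheel speed set-points in m/s.

    A positive steering angle means a left turn, so the left wheel is the
    inner wheel and must be slowed relative to the outer one.
    """
    if steering_angle_deg == 0:
        return vehicle_speed, vehicle_speed  # straight ahead: equal speeds
    # Turning radius of the vehicle centreline (simple Ackermann geometry).
    radius = WHEELBASE / tan(radians(steering_angle_deg))
    left = vehicle_speed * (1 - TRACK / (2 * radius))
    right = vehicle_speed * (1 + TRACK / (2 * radius))
    return left, right

# Example: 15 m/s with a 10-degree left-turn command
# gives roughly 14.2 m/s (inner) and 15.8 m/s (outer).
print(wheel_speed_references(15.0, 10.0))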
Electronic differential
[ "Physics", "Engineering" ]
556
[ "Vehicle technology", "Mechanics", "Mechanical power control", "Mechanical engineering by discipline" ]
12,053,305
https://en.wikipedia.org/wiki/Cepaea
Cepaea is a genus of large air-breathing land snails, terrestrial pulmonate gastropod molluscs in the family Helicidae. The shells are often brightly coloured and patterned with brown stripes. The two species in this genus, C. nemoralis and C. hortensis, are widespread and common in Western and Central Europe and have been introduced to North America. Both have been influential model species for ongoing studies of genetics and natural selection. Like many Helicidae, these snails use love darts during mating. Species For a long time, four species were classified in the genus Cepaea. However, molecular phylogenetic studies suggested that two of them should be placed in the genera Macularia and Caucasotachea, which are not immediate relatives of either Cepaea or each other: Cepaea hortensis (O. F. Müller, 1774) – white-lipped snail or garden banded snail Cepaea nemoralis (Linnaeus, 1758) – brown-lipped snail or grove snail Cepaea sylvatica (Draparnaud, 1801), now Macularia sylvatica Cepaea vindobonensis (Férussac, 1821), now Caucasotachea vindobonensis Interspecific relations The range of C. hortensis extends further north than that of C. nemoralis in Scotland and Scandinavia and it is the only one of the two species in Iceland. Likewise in the Swiss Alps C. hortensis is found as high as 2050 m, but C. nemoralis only up to 1600 m. Conversely, the southern edge of the range lies further north in C. hortensis; unlike C. nemoralis it does not occur in Italy, and in Spain it has a more restricted distribution (in the north-east corner). Where the ranges overlap C. hortensis prefers cooler sites with longer and damper vegetation. But the two species often co-occur at a site, in which situation the densities of both affect each other's growth, fecundity and mortality. However, they differ somewhat in their behaviour: C. hortensis is more active at lower temperatures, aestivates higher on the vegetation and is more diurnal, although this appears to be independent of whether the other species is present or not. When given no choice of partner in the laboratory, the two Cepaea species can form hybrids, which will backcross with the parental species, but the fertility is very low. Shell polymorphism Description and genetics The two Cepaea species share a genetic polymorphism for the colour and banding pattern of the shell. The background colour of the shell ranges from dark brown, through pink to yellow or even approaching white. This variation is continuous, but there are peaks in the distribution corresponding to brown, pink and yellow morphs. The colour is mainly determined by alleles at a single locus with brown dominant to pink, which is dominant to yellow. Up to five bands (very rarely more) run spirally around the shell, numbered 1 to 5 with the larger numbers further from the shell apex. The conventional scoring annotation is to write 12345 if all bands are present and separated, but to replace a number with 0 if a band is absent from its usual position and to enclose numbers in parentheses if bands are fused with their neighbours. Thus 003(45) would mean that the top two bands are absent and the lower two fused. A dominant allele at one locus causes the absence of all bands, a dominant allele at another locus causes the loss of all bands except band 3, and a dominant allele at a third locus causes the loss of just bands 1 and 2. 
The first of these three loci is closely linked to the locus determining shell colour, to another influencing the spread of the band pigment, and to one determining the colour of the lip and bands. This collection of linked loci is part of a supergene. A consequence of this arrangement is that the shells of different background colours within a population often exhibit different ratios of banded to unbanded shells: this is an example of linkage disequilibrium. The bands are usually dark brown, but this is affected by genes influencing intensity and colouration (e.g. black or orange). Another locus (part of the supergene) determines whether the band is continuous or forms a sequence of spots. The genetics underlying the fusion of adjacent bands is not well understood. Evolutionary explanations In both species, most populations exhibit polymorphism in one or more of these shell characters. Nevertheless, systematic variation can be detected statistically at continental scales, between habitats, and at various scales down to a few tens of metres. There is also statistical evidence of change over time, based both on comparisons between sub-fossil and modern shells, and on resampling the same sites some decades apart, although the latter has more often found little change over the period (stasis). A great deal of research in ecological genetics has addressed the reasons for both the variation and the systematic trends. The two selection pressures that might most feasibly act on the appearance of shells are climatic selection and predation. Darker shells heat up more quickly in the sun, which might well be advantageous for cold-blooded animals in shaded woodland but risks causing overheating and death in open habitats. This trade-off is also presumed to be responsible for the greater proportion of yellow C. nemoralis to the south, but it is curious that the trend is not present in C. hortensis. Contrary to predictions, recent global warming has not led to a detectable increase in yellow morphs on a continental scale. The use of photosensitive paint has shown that paler morphs spend more time exposed to the sun, which may imply that the shell polymorphism allows different morphs to coexist at a site by occupying different microhabitats. Both temperature regulation and predation make the same prediction of pale shells in open habitats and dark shells in woodland, so, although the prediction has often been confirmed, it is difficult to test which is the more important explanation. However, song thrushes (Turdus philomelos) break open Cepaea shells on stones ("anvils"), allowing a comparison of those they predate with those present in the local environment. Besides the directional selection favouring camouflaged individuals, visually searching predators might cause apostatic selection. The hypothesis is that they form a search image for the commonest morphs, favouring whichever morphs are locally rare, thus promoting diversity. As well as its visual effect, the shell pigments are associated with differences in shell strength, so may affect predation by predators searching non-visually, for instance at night. Several studies have demonstrated a predicted evolutionary response of shell appearance to a change of habitat. However, the association of shell appearance and habitat is not always consistent, especially in more disturbed environments, so it is believed that random effects are also influential, particularly founder effects.
The two Cepaea species colonised much of Europe only within the last 4000 generations, so the time available for selection to act has been limited, and local anthropogenic disturbances must often have reversed which morphs are optimal. Moreover, snails disperse more slowly than many other animals, so the most suitable genes may be locally absent. For instance, biologists were at one time puzzled by the phenomenon of "area effects": the same morph of Cepaea may be found consistently over a wide area, but in adjacent areas of similar habitat a different set of morphs predominates instead, with a sharp transition between them. The explanation accepted nowadays is that, relatively recently, a change of habitat allowed the rapid colonisation of vacant areas by descendants of a few founder individuals until the colony had expanded out to areas occupied by other populations; subsequently, intraspecific competition slowed the dispersal of genes into the neighbouring, occupied areas. Nevertheless, occasional transfer of genes between areas of different habitat is proposed to be important in maintaining the local diversity of phenotypes. References Further reading Helicidae Gastropod genera Model organisms Ecological genetics Polymorphism (biology)
Cepaea
[ "Biology" ]
1,677
[ "Model organisms", "Biological models" ]
12,054,042
https://en.wikipedia.org/wiki/Megan%20and%20Morag
Megan and Morag, two domestic sheep, were the first mammals to have been successfully cloned from differentiated cells. They are not to be confused with Dolly the sheep, which was the first animal to be successfully cloned from an adult somatic cell, or Polly the sheep, which was the first cloned and transgenic animal. Megan and Morag, like Dolly and Polly, were cloned at the Roslin Institute in Edinburgh, Scotland in 1995. Background The team at the Roslin Institute were seeking a way to modify the genetic constitution of sheep and cattle more effectively than the hit-and-miss method that was the only one available at the time – microinjection. In microinjection, DNA is injected into the pronuclei of fertilized oocytes. However, only a small proportion of the animals integrate the injected DNA into their genome, and in the rare cases that they do integrate this new genetic information, the pattern of expression of the injected gene is very variable because of the random integration. The team chose to combine two approaches – microinjection and embryonic stem cells. In order to achieve this, they decided to try to transfer the nucleus from one cell to another and stimulate this new cell to grow and become an animal, a process known as nuclear transfer. The team at the Roslin Institute tried to make immortalized and undifferentiated embryonic stem cell lines in sheep, but failed. As a result, they decided to work with cultured blastocyst cells. The nuclear material of these blastocyst cells would be transferred into an unfertilized sheep egg cell, an oocyte from which the nucleus had been removed. To optimize the chances of successful nuclear transfer, they put the cultured cells into a state of quiescence, a state similar to that of the unfertilized egg cell. Nuclear transfer was done using electrical stimuli, both to fuse the cultured cell with the enucleated egg and to kick-start embryonic development. From 244 nuclear transfers, 34 developed to a stage where they could be placed in the uteri of surrogate mothers. In the summer of 1995, five lambs were born, of which two – Megan and Morag – survived to become healthy fertile adults. These were the first mammals cloned from differentiated cells. They were born with the names 5LL2 and 5LL5 in June 1995. The production of Megan and Morag demonstrated that viable sheep can be produced by nuclear transfer from cells which have been cultured in vitro. They signified the technical breakthrough that made Dolly the sheep possible. The birth of Megan and Morag, a year before Dolly and with its huge beneficial potential, made relatively few headlines. At last report, Megan was still alive and was the oldest cloned animal at the time. See also List of cloned animals References 1995 animal births Cloning Cloned sheep Vertebrate developmental biology Cell biology Stem cells Biotechnology Individual animals in the United Kingdom Animal duos Individual sheep
Megan and Morag
[ "Engineering", "Biology" ]
617
[ "Cell biology", "Cloning", "Genetic engineering", "Biotechnology", "nan" ]
12,054,085
https://en.wikipedia.org/wiki/Nanmu
Nanmu () is a precious wood that is unique to China and South Asia, and was historically used for boat building, architectural woodworking, furniture and sculptural carving in China. Ming dynasty-era writings describe this wood as a superior, durable softwood. A recent excavation of a tomb in Lija village in Jing'an County, Jiangxi Province, found 47 coffins made of nanmu wood that are reported to be about 2500 years old, dating back to the Eastern Zhou dynasty period and belonging to the Dongyi State of Xu. The trees that produce nanmu wood are evergreens with long, straight trunks that grow to 35 meters in height and one meter in diameter. More than 30 varieties exist, which are found south of the Yangtze River. There are also nanmu trees on Hainan Island and in Vietnam. Yangmu nanmu is found in Sichuan. Zinan nanmu is found in southeast and south central China. Zinanmu nanmu is found in Jiangsu, Anhui, Zhenjiang. Zhennan nanmu is found in Guizhou and Sichuan. The finest of all the nanmu woods is Hongmao Nanmu (Phoebe hungmaoensis) from Hainan Island. Nanmu wood comes from several species of tree, including: Litsea cubeba Machilus nanmu Phoebe hungmaoensis Phoebe zhennan It is trees of the Phoebe genus, however, that produce the highest grades of nanmu wood. Nanmu is a knotty wood that frequently shows a wavy or quilted grain figure. It reacts little to humidity and temperature in the way of expansion or contraction, and so makes superior furniture which tends not to loosen or crack with changes in climate. Nanmu woods that are lighter in color and have loose grain are considered inferior. Nanmu was used in architectural woodworking and boatbuilding because of its resistance to decay. The wood dries with little splitting or warping. After drying, the wood is of medium density and does not change shape. Nanmu can be sanded to a mirror finish. The highest grade of nanmu wood has a bright golden color, a pleasant fragrance, and exhibits impressive chatoyancy (an optical effect), which is why it is often referred to as jīnsī (金丝, "golden thread" or "golden silk") in China. Although recently harvested nanmu and zhennan lumber and newly made articles could still be obtained (at extremely high prices) as recently as 2019, the wood is now officially protected in China and worldwide. Due to deforestation, disease, and pollution, these species are almost exhausted. Some small semi-natural forest stands and protected artificial forests still exist, but the trees are otherwise now limited to small growths of aging decorative trees in temples, cottages, parks, and courtyards. Footnotes External links Wood used in Chinese Antique Furniture Beijing Exhibition of Rare Wood Furniture Chinese culture Wood Plant common names
Nanmu
[ "Biology" ]
594
[ "Plant common names", "Common names of organisms", "Plants" ]
12,054,294
https://en.wikipedia.org/wiki/Behavior%20Tech%20Computer
Behavior Tech Computer (BTC) manufactures many computer accessories, including keyboards, webcams and mouse devices; it used to manufacture optical drives but decided to leave that market and sold its Optical Division to Foxconn. The company was founded in 1982, launching a capacitance keyboard the following year. International expansion followed rapidly: 1986 BTC USA 1988 BTC Korea 1989 BTC Europe - a sales hub based in the Netherlands 1991 BTC France 1992 BTC Latin - based in Florida, USA 1993 BTC Germany - with a focus on Germany and Eastern Europe The expansion has led to BTC being described by the Taiwanese Business Weekly magazine as the "third most internationalized" company in Taiwan. See also List of companies of Taiwan References 1982 establishments in Taiwan Computer hardware companies Computer peripheral companies Electronics companies of Taiwan Companies listed on the Taiwan Stock Exchange Taiwanese brands
Behavior Tech Computer
[ "Technology" ]
173
[ "Computer hardware companies", "Computers" ]
12,054,908
https://en.wikipedia.org/wiki/Layered%20double%20hydroxides
Layered double hydroxides (LDH) are a class of ionic solids characterized by a layered structure with the generic layer sequence [AcB Z AcB]n, where c represents layers of metal cations, A and B are layers of hydroxide (OH−) anions, and Z are layers of other anions and neutral molecules (such as water). Lateral offsets between the layers may result in longer repeating periods. The intercalated anions (Z) are weakly bound, often exchangeable; their intercalation properties have scientific interest and industrial applications. LDHs occur in nature as minerals, as byproducts of metabolism of certain bacteria, and also unintentionally in man-made contexts, such as the products of corrosion of metal objects. Structure and formulas LDHs can be seen as derived from hydroxides of divalent cations (d) with the brucite (Mg(OH)2) layer structure [AdB AdB]n, by cation (c) replacement (Mg2+ → Al3+), or by cation oxidation (Fe2+ → Fe3+ in the case of green rust, Fe(OH)2), in the metallic divalent (d) cation layers, so as to give them an excess positive electric charge; and intercalation of extra anion layers (Z) between the hydroxide layers (A,B) to neutralize that charge, resulting in the structure [AcB Z AcB]n. LDHs can be formed with a wide variety of anions in the intercalated layers (Z), such as Cl−, Br−, NO3−, CO32−, SO42− and SeO42−. This structure is unusual in solid-state chemistry, since many materials with similar structure (such as montmorillonite and other clay minerals) have negatively charged main metal layers (c) and positive ions in the intercalated layers (Z). In the most studied class of LDHs, the positive layer (c) consists of divalent and trivalent cations, and can be represented by the generic formula: [M2+1−xM3+x(OH)2]x+ [(Xn−)x/n · yH2O]x−, where Xn− is the intercalating anion compensating the excess of positive charge (x+) present in the metal hydroxide layer. Most commonly, M2+ = Ca2+, Mg2+, Mn2+, Fe2+, Co2+, Ni2+, Cu2+ or Zn2+, and M3+ is another trivalent cation (Al3+, Cr3+), or possibly of the same element as in the case of green rust with Fe3+. Fixed-composition phases have been shown to exist over the range 0.2 ≤ x ≤ 0.33. However, phases with variable x are also known, and in some cases, x > 0.5. Another class of Li/Al LDH is known where the main metal layer (c) consists of Li+ and Al3+ cations in a molar ratio Li:Al = 1:2, so that the metal hydroxide layer only bears one unit of positive charge in excess, with the generic formula: [LiAl2(OH)6]+ (Xn−)1/n · yH2O. In some cases, the pH value of the solution used during the synthesis and the high drying temperature of the LDH can eliminate the presence of the OH− groups in the LDH. For example, in the synthesis of the (BiO)4(OH)2CO3 compound, a low pH value of the aqueous solution or a higher annealing temperature of the solid can induce the formation of (BiO)2CO3, which is thermodynamically more stable than the LDH compound, by exchanging OH− groups for CO32− groups. Applications The anions located in the interlayer regions can, in general, be replaced easily. A wide variety of anions may be incorporated, ranging from simple inorganic anions (e.g. CO32−) through organic anions (e.g. benzoate, succinate) to complex biomolecules, including DNA. This has led to an intense interest in the use of LDH intercalates for advanced applications. Drug molecules such as ibuprofen may be intercalated; the resulting nanocomposites have potential for use in controlled-release systems, which could reduce the frequency of doses of medication needed to treat a disorder.
Further effort has been expended on the intercalation of agrochemicals, such as the chlorophenoxyacetates, and important organic synthons, such as terephthalate and nitrophenols. Agrochemical intercalates are of interest because of the potential to use LDHs to remove agrochemicals from polluted water, reducing the likelihood of eutrophication. LDHs exhibit shape-selective intercalation properties. For instance, treating LiAl2-Cl with a 50:50 mixture of terephthalate (1,4-benzenedicarboxylate) and phthalate (1,2-benzenedicarboxylate) results in intercalation of the 1,4-isomer with almost 100% preference. The selective intercalation of ions such as benzenedicarboxylates and nitrophenols is important because these are produced in isomeric mixtures from crude oil residues, and it is often desirable to isolate a single form, for instance in the production of polymers. LDH-TiO2 intercalates are used in suspensions for self-cleaning of surfaces (especially for materials in cultural heritage), because of the photocatalytic properties of TiO2 and the good compatibility of LDHs with inorganic materials. Minerals Naturally occurring (i.e., mineralogical) examples of LDH are classified as members of the hydrotalcite supergroup, named after the Mg-Al carbonate hydrotalcite, which is the longest-known example of a natural LDH phase. More than 40 mineral species are known to fall within this supergroup. The dominant divalent cations, M2+, that have been reported in hydrotalcite supergroup minerals are: Mg, Ca, Mn, Fe, Ni, Cu and Zn; the dominant trivalent cations, M3+, are: Al, Mn, Fe, Co and Ni. The most common intercalated anions are [CO3]2−, [SO4]2− and Cl−; OH−, S2− and [Sb(OH)6]− have also been reported. Some species contain intercalated cationic or neutral complexes such as [Na(H2O)6]+ or [MgSO4]0. The International Mineralogical Association's 2012 report on hydrotalcite supergroup nomenclature defines eight groups within the supergroup on the basis of a combination of criteria. These groups are: the hydrotalcite group, with M2+:M3+ = 3:1 (layer spacing ~7.8 Å); the quintinite group, with M2+:M3+ = 2:1 (layer spacing ~7.8 Å); the fougèrite group of natural 'green rust' phases, with M2+ = Fe2+, M3+ = Fe3+ in a range of ratios, and with O2− replacing OH− in the brucite module to maintain charge balance (layer spacing ~7.8 Å); the woodwardite group, with variable M2+:M3+ and interlayer [SO4]2−, leading to an expanded layer spacing of ~8.9 Å; the cualstibite group, with interlayer [Sb(OH)6]− and a layer spacing of ~9.7 Å; the glaucocerinite group, with interlayer [SO4]2− as in the woodwardite group, and with additional interlayer H2O molecules that further expand the layer spacing to ~11 Å; the wermlandite group, with a layer spacing of ~11 Å, in which cationic complexes occur with anions between the brucite-like layers; and the hydrocalumite group, with M2+ = Ca2+ and M3+ = Al3+, which contains brucite-like layers in which the Ca:Al ratio is 2:1 and the large cation, Ca2+, is coordinated to a seventh ligand of 'interlayer' water. The IMA Report also presents a concise systematic nomenclature for synthetic LDH phases that are not eligible for a mineral name. This uses the prefix LDH, and characterises components by the numbers of the octahedral cation species in the chemical formula, the interlayer anion, and the Ramsdell polytype symbol (number of layers in the repeat of the structure, and crystal system).
For example, the 3R polytype of Mg6Al2(OH)16(CO3)·4H2O (hydrotalcite sensu stricto) is described by "LDH 6Mg2Al·CO3-3R". This simplified nomenclature does not capture all the possible types of structural complexity in LDH materials. Elsewhere, the Report discusses examples of: long-range order of different cations within a brucite-like layer, which may produce sharp superstructure peaks in diffraction patterns and a and b periodicities that are multiples of the basic 3 Å repeat, or short-range order producing diffuse scattering; the wide variety of c periodicities that can occur due to relative displacements or rotations of the brucite-like layers, producing multiple polytypes with the same compositions, intergrowths of polytypes and variable degrees of stacking disorder; different periodicities arising from order of different interlayer species, either within an interlayer or by alternation of different anion types from interlayer to interlayer. See also Maalox, magnesium-aluminium oxide used as antacid References External links LDH, DNA and Hydrothermal Vents – Science Daily Mindat entry for Hydrotalcite Supergroup Materials Minerals Hydroxides Antacids
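The charge-balance arithmetic behind the generic formula [M2+1−xM3+x(OH)2]x+ [(Xn−)x/n · yH2O]x− can be illustrated with a short Python sketch. This is a rough illustration only; the function name and interface are invented for this sketch, not a crystallographic tool.

from fractions import Fraction

def ldh_composition(m2_to_m3_ratio, anion_charge):
    """Layer charge x and interlayer anion content per formula unit.

    x is the fraction of trivalent cations, which equals the positive layer
    charge per formula unit; x/n anions of charge n- are needed to balance it.
    """
    ratio = Fraction(m2_to_m3_ratio)
    x = 1 / (ratio + 1)                 # M3+ fraction of total cations
    anions_per_unit = x / anion_charge  # x/n anions neutralize the layer charge
    return x, anions_per_unit

# Hydrotalcite-like composition: Mg:Al = 3:1 with carbonate (charge 2-).
# Gives x = 1/4 and 1/8 carbonate per formula unit, i.e. one CO3 per
# 8 cations, consistent with Mg6Al2(OH)16(CO3)·4H2O.
print(ldh_composition(3, 2))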
Layered double hydroxides
[ "Physics" ]
2,108
[ "Materials", "Bases (chemistry)", "Hydroxides", "Matter" ]
12,055,125
https://en.wikipedia.org/wiki/Bombieri%20norm
In mathematics, the Bombieri norm, named after Enrico Bombieri, is a norm on homogeneous polynomials with coefficients in R or C (there is also a version for non-homogeneous univariate polynomials). This norm has many remarkable properties, the most important being listed in this article. Bombieri scalar product for homogeneous polynomials To start with the geometry, the Bombieri scalar product for homogeneous polynomials with N variables can be defined as follows using multi-index notation: by definition different monomials are orthogonal, so that ⟨X^α | X^β⟩ = 0 if α ≠ β, while by definition ⟨X^α | X^α⟩ = α! / |α|!. In the above definition and in the rest of this article the following notation applies: if α = (α1, …, αN), write |α| = α1 + ⋯ + αN, α! = α1! ⋯ αN! and X^α = X1^α1 ⋯ XN^αN. Bombieri inequality The fundamental property of this norm is the Bombieri inequality: let P, Q be two homogeneous polynomials respectively of degree d(P) and d(Q) with N variables; then the following inequality holds: (d(P)! d(Q)! / (d(P) + d(Q))!) ‖P‖² ‖Q‖² ≤ ‖P·Q‖² ≤ ‖P‖² ‖Q‖². Here the Bombieri inequality is the left-hand side of the above statement, while the right side means that the Bombieri norm is an algebra norm. The left-hand side is meaningless without that constraint, because in that case the same result could be achieved with any norm by multiplying the norm by a well-chosen factor. This multiplicative inequality implies that the product of two polynomials is bounded from below by a quantity that depends on the multiplicand polynomials. Thus, this product cannot be arbitrarily small. This multiplicative inequality is useful in metric algebraic geometry and number theory. Invariance by isometry Another important property is that the Bombieri norm is invariant under composition with an isometry: let P, Q be two homogeneous polynomials of degree d with N variables and let h be an isometry of R^N (or C^N). Then we have ⟨P∘h | Q∘h⟩ = ⟨P | Q⟩. When P = Q this implies ‖P∘h‖ = ‖P‖. This result follows from a nice integral formulation of the scalar product: ⟨P | Q⟩ = C(N + d − 1, d) ∫_S P(Z) conj(Q(Z)) dσ(Z), where S is the unit sphere of C^N with its canonical normalized measure dσ and C(N + d − 1, d) is a binomial coefficient. Other inequalities Let P be a homogeneous polynomial of degree d with N variables and let x be a point of R^N (or C^N). We have: |P(x)| ≤ ‖P‖ · |x|^d and |∇P(x)| ≤ d ‖P‖ · |x|^(d−1), where |·| denotes the Euclidean norm. The Bombieri norm is useful in polynomial factorization, where it has some advantages over the Mahler measure, according to Knuth (Exercises 20–21, pages 457–458 and 682–684). See also Grassmann manifold Hardy space Homogeneous polynomial Plücker embedding References Norms (mathematics) Analytic number theory Polynomials Homogeneous polynomials Complex analysis Several complex variables
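For concreteness, the norm can be computed directly from the definition above. The following Python sketch (an illustration, not library code) takes a homogeneous polynomial as a mapping from exponent tuples to real coefficients; each monomial X^α with coefficient a contributes a² · α!/|α|! to the squared norm.

from math import factorial, sqrt, prod

def bombieri_norm(poly):
    """Bombieri norm of a homogeneous polynomial.

    poly maps multi-indices (tuples of exponents) to real coefficients.
    """
    total = 0.0
    for alpha, coeff in poly.items():
        degree = sum(alpha)                                # |alpha|
        multi_factorial = prod(factorial(a) for a in alpha)  # alpha!
        total += coeff ** 2 * multi_factorial / factorial(degree)
    return sqrt(total)

# Example: P(X, Y) = X^2 + 2XY + Y^2 = (X + Y)^2, homogeneous of degree 2.
# Squared norm = 1 + 4*(1/2) + 1 = 4, so the norm is 2.0; note that
# ||X + Y|| = sqrt(2), so ||(X+Y)^2|| = ||X+Y||^2 here, matching the
# upper (algebra-norm) bound of the Bombieri inequality with equality.
p = {(2, 0): 1.0, (1, 1): 2.0, (0, 2): 1.0}
print(bombieri_norm(p))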
Bombieri norm
[ "Mathematics" ]
467
[ "Functions and mappings", "Mathematical analysis", "Analytic number theory", "Several complex variables", "Polynomials", "Mathematical objects", "Number theory", "Mathematical relations", "Norms (mathematics)", "Algebra" ]
12,055,540
https://en.wikipedia.org/wiki/Shapley%E2%80%93Shubik%20power%20index
The Shapley–Shubik power index was formulated by Lloyd Shapley and Martin Shubik in 1954 to measure the powers of players in a voting game. The constituents of a voting system, such as legislative bodies, executives, shareholders, individual legislators, and so forth, can be viewed as players in an n-player game. Players with the same preferences form coalitions. Any coalition that has enough votes to pass a bill or elect a candidate is called winning. The power of a coalition (or a player) is measured by the fraction of the possible voting sequences in which that coalition casts the deciding vote, that is, the vote that first guarantees passage or failure. The power index is normalized between 0 and 1. A power of 0 means that a coalition has no effect at all on the outcome of the game; and a power of 1 means a coalition determines the outcome by its vote. Also the sum of the powers of all the players is always equal to 1. There are some algorithms for calculating the power index, e.g., dynamic programming techniques, enumeration methods and Monte Carlo methods. Since Shapley and Shubik published their paper, several axiomatic approaches have been used to mathematically study the Shapley–Shubik power index, with the anonymity axiom, the null player axiom, the efficiency axiom and the transfer axiom being the most widely used. Examples Suppose decisions are made by majority rule in a body consisting of A, B, C, D, who have 3, 2, 1 and 1 votes, respectively. The majority vote threshold is 4. There are 4! = 24 possible orders for these members to vote. For each voting sequence the pivot voter is the voter who first raises the cumulative sum to 4 or more. Here, A is pivotal in 12 of the 24 sequences. Therefore, A has an index of power 1/2. The others have an index of power 1/6. Curiously, B has no more power than C and D. Considering that A's vote determines the outcome unless the others unite against A, it becomes clear that B, C, D play identical roles. This is reflected in the power indices. Suppose now another majority-rule voting body with n + 1 members, in which a single strong member has k votes and the remaining n members have one vote each. In this case the strong member has a power index of k/(n + 1) (unless k > n, in which case the power index is simply 1). Note that this is more than the fraction of votes which the strong member commands. Indeed, this strong member has only a fraction k/(n + k) of the votes. Consider, for instance, a company which has 1000 outstanding shares of voting stock. One large shareholder holds 400 shares, while 600 other shareholders hold 1 share each. This corresponds to n = 600 and k = 400. In this case the power index of the large shareholder is approximately 0.666 (or 66.6%), even though this shareholder holds only 40% of the stock. The remaining 600 shareholders each have a power index of less than 0.0006 (or 0.06%). Thus, the large shareholder holds over 1000 times more voting power than each other shareholder, while holding only 400 times as much stock. The above can be mathematically derived as follows. Note that a majority is reached if at least t(n, k) = ⌊(n + k)/2⌋ + 1 votes are cast in favor. If k ≥ t(n, k), the strong member clearly holds all the power, since in this case the votes of the strong member alone meet the majority threshold. Suppose now that k < t(n, k) and that in a randomly chosen voting sequence, the strong member votes as the rth member. This means that after the first r − 1 members have voted, r − 1 votes have been cast in favor, while after the first r members have voted, r − 1 + k votes have been cast in favor.
The vote of the strong member is pivotal if the former count does not meet the majority threshold, while the latter does. That is, r − 1 < t(n, k) and r − 1 + k ≥ t(n, k). We can rewrite this condition as t(n, k) − k + 1 ≤ r < t(n, k) + 1. Note that our condition of k < t(n, k) ensures that 1 ≤ t(n, k) − k + 1 and t(n, k) + 1 ≤ n + 2 (i.e., all of the permitted values of r are feasible). Thus, the strong member is the pivotal voter if r takes on one of the k values from t(n, k) − k + 1 up to but not including t(n, k) + 1. Since each of the n + 1 possible values of r is associated with the same number of voting sequences, this means that the strong member is the pivotal voter in a fraction k/(n + 1) of the voting sequences. That is, the power index of the strong member is k/(n + 1). Applications The index has been applied to the analysis of voting in the Council of the European Union. The index has been applied to the analysis of voting in the United Nations Security Council. The UN Security Council is made up of fifteen member states, of which five (the United States of America, Russia, China, France and the United Kingdom) are permanent members of the council. For a motion to pass in the Council, it needs the support of every permanent member and the support of four non-permanent members. This is equivalent to a voting body where the five permanent members have eight votes each, the ten other members have one vote each and there is a quota of forty-four votes, as then there would be fifty total votes, so that a motion needs all five permanent members and then four other votes to pass. Note that a non-permanent member is pivotal in a permutation if and only if they are in the ninth position to vote and all five permanent members have already voted. Suppose that we have a permutation in which a non-permanent member is pivotal. Then there are three non-permanent members and five permanent members that have to come before this pivotal member in this permutation. Therefore, there are C(9, 3) ways of choosing these three non-permanent members, and so 8! × C(9, 3) different orders of the members before the pivotal voter. There would then be 6! ways of choosing the remaining voters after the pivotal voter. As there are a total of 15! permutations of 15 voters, the Shapley–Shubik power index of a non-permanent member is: C(9, 3) × 8! × 6! / 15! = 4/2145 ≈ 0.00187. Hence the power index of a permanent member is (1 − 10 × 4/2145)/5 = 421/2145 ≈ 0.196. Python implementation This is a simple implementation of the above example in Python.

from math import factorial, floor

def normalize(values):
    # Scale the raw permutation counts so that the power indices sum to 1.
    total = sum(values)
    return [float(v) / total for v in values]

def enumerate_coalitions(n):
    # Yield every subset of {1, ..., n} as a list, the empty set included.
    if n == 0:
        yield []
    else:
        for coalition in enumerate_coalitions(n - 1):
            yield coalition
            yield coalition + [n]

def power_index(seats, threshold=None):
    if threshold is None:
        threshold = floor(sum(seats) / 2) + 1
    result = [0] * len(seats)
    # For each candidate pivot, every subset of the *other* players is a
    # possible set of predecessors; offsets 1..len(seats)-1 from the pivot
    # index reach each other player exactly once.
    for coalition in enumerate_coalitions(len(seats) - 1):
        for pivot in range(len(seats)):
            coalition_seats = sum(seats[(pivot + i) % len(seats)] for i in coalition)
            # The pivot is decisive when the predecessors fall short of the
            # threshold but reach it once the pivot's seats are added.
            if coalition_seats < threshold <= coalition_seats + seats[pivot]:
                # Count the orderings: the predecessors can vote in any
                # order, and so can the players who come after the pivot.
                result[pivot] += (factorial(len(coalition))
                                  * factorial(len(seats) - len(coalition) - 1))
    return normalize(result)

print(power_index([3, 2, 1, 1]))

See also Shapley value Arrow theorem Banzhaf power index References External links Online Power Index Calculator (by Tomomi Matsui) Computer Algorithms for Voting Power Analysis Web-based algorithms for voting power analysis Power Index Calculator Computes various indices for (multiple) weighted voting games online. Includes some examples. Computing Shapley-Shubik power index and Banzhaf power index with Python and R (by Frank Huettner) Game theory Cooperative games Voting theory Lloyd Shapley
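The same power_index function from the listing above can be used to check the Security Council figures, using the article's equivalent weighted game of five 8-vote members, ten 1-vote members and a quota of 44 (the call enumerates 2^14 coalitions, which is still fast):

unsc = power_index([8] * 5 + [1] * 10, threshold=44)
print(unsc[0])   # a permanent member: 421/2145, about 0.196
print(unsc[-1])  # a non-permanent member: 4/2145, about 0.00187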
Shapley–Shubik power index
[ "Mathematics" ]
1,598
[ "Game theory", "Cooperative games" ]
176,571
https://en.wikipedia.org/wiki/Small%20appliance
A small domestic appliance, also known as a small electric appliance or minor appliance or simply a small appliance, small domestic or small electric, is a portable or semi-portable machine, generally used on table-tops, counter-tops or other platforms, to accomplish a household task. Examples include microwave ovens, kettles, toasters, humidifiers, food processors and coffeemakers. They contrast with major appliances (known as "white goods" in the UK), such as refrigerators and washing machines, which cannot be easily moved and are generally placed on the floor. Small appliances also contrast with consumer electronics (British "brown goods"), which are for leisure and entertainment rather than purely practical tasks. Uses Some small appliances perform the same or a similar function as their larger counterparts. For example, a toaster oven is a small appliance that performs a similar function to an oven. Small appliances often have a home version and a commercial version; for example, waffle irons, food processors, and blenders are small household appliances. A food processor performs the tasks of a chopper, slicer, mixer and juicer, so a single food processor can replace several separate appliances. The commercial, or industrial, version is designed to be used nearly continuously in a restaurant or other similar setting. Commercial appliances are typically connected to a more powerful electrical outlet, are larger and stronger, have more user-serviceable parts, and cost significantly more. These commercial versions are capable of performing heavy tasks in one go. Types and examples Small appliances include those used for: Beverage-making, such as electric kettles, coffeemakers or iced tea-makers Cleaning, such as vacuum cleaners Cooking, such as on a hot plate or with a microwave oven Lighting, using light fixtures Thermal comfort, such as an electric heater or fan Many small appliances perform a combination of the above processes, such as the mixing and heating performed by a bread machine Prices Small appliances can be very inexpensive, such as an electric can opener, hot pot, toaster, or coffee maker, which may cost only a few U.S. dollars, or very expensive, such as an elaborate espresso maker, which may cost several thousand U.S. dollars. Most homes in developed economies contain several cheaper home appliances, with perhaps a few more expensive appliances, such as a high-end microwave oven or mixer. Small appliances such as choppers, juicers, grinders and mixers may each cost only a few dollars; a single food processor can be less expensive than buying these units separately and also saves space. Powering Many small appliances are powered by electricity. The appliance may use a permanently attached cord that is plugged into a wall outlet or a detachable cord. The appliance may have a cord storage feature. A few hand-held appliances use batteries, which may be disposable or rechargeable. Some appliances consist of an electrical motor upon which various attachments are mounted so as to constitute several individual appliances, such as a blender, a food processor, or a juicer. Many stand mixers, while functioning primarily as a mixer, have attachments that can perform additional functions. A few gasoline- and gas-powered appliances exist for use in situations where electricity is not expected to be available, but these are typically larger and not as portable as most small appliances.
Items that perform the same function as small appliances but are hand-powered are generally referred to as tools or gadgets, for example a hand-cranked egg beater, a grater, a mandoline, or a hand-powered meat grinder. Safety Small appliances which are defective or improperly used or maintained may cause house fires and other property damage, or may harbor bacteria if not properly cleaned. It is important that users read the instructions carefully and that appliances that use a grounded cord be attached to a grounded outlet. Because of the risk of fire, some appliances have a short detachable cord that is connected to the appliance magnetically. If the appliance is moved further than the cord length from the wall, the cord will detach from the appliance. Designations and regulations Designations and regulations of "small appliances" vary by country and are not determined simply by physical size. For instance, United States Environmental Protection Agency regulations mandate that small appliances must meet two standards: Completely manufactured, charged, and sealed in a factory Contains five pounds or less of refrigerant The designation and regulation of "small appliances" is important in order to assure the safety of the public. Rules and regulations for small appliances include: Altering the design of certified recovered small appliances in a way that would affect the equipment's ability to meet the certification standards is strictly prohibited The equipment must meet the minimum requirements for certification See also Appliance (disambiguation) Domestic technology List of cooking appliances List of home appliances Standby power Yellow goods (retail classification) References
Small appliance
[ "Physics", "Technology" ]
1,017
[ "Physical systems", "Machines", "Home appliances" ]
176,619
https://en.wikipedia.org/wiki/Pinner%20reaction
The Pinner reaction refers to the acid catalysed reaction of a nitrile with an alcohol to form an imino ester salt (alkyl imidate salt); this is sometimes referred to as a Pinner salt. The reaction is named after Adolf Pinner, who first described it in 1877. Pinner salts are themselves reactive and undergo additional nucleophilic additions to give various useful products: With an excess of alcohol to form an orthoester With ammonia or an amine to form an amidine (di-nitriles may form imidines, for instance succinimidine from succinonitrile) With water to form an ester With hydrogen sulfide to form a thionoester Commonly, the Pinner salt itself is not isolated, with the reaction being continued to give the desired functional group (orthoester etc.) in one go. The imidium chloride salt is thermodynamically unstable, and low temperatures help prevent elimination to an amide and alkyl chloride. The Pinner reaction refers specifically to an acid catalysed process, but similar results can often be achieved using base catalysis. The two approaches can be complementary, with nitriles which are unreactive under acid conditions often giving better results in the presence of base, and vice versa. The determining factor is typically how electron-rich or poor the nitrile is. For example: an electron-poor nitrile is a good electrophile (readily susceptible to attack from alkoxides etc.) but a poor nucleophile, and therefore difficult to protonate; it would hence be expected to react more readily under basic rather than acidic conditions. See also Hoesch reaction Overman rearrangement Isocyanide Imidoyl chloride Stephen aldehyde synthesis – essentially the same reaction but including a reduction and with water as the nucleophile; generates the aldehyde. References Addition reactions Name reactions
Pinner reaction
[ "Chemistry" ]
423
[ "Name reactions" ]
176,622
https://en.wikipedia.org/wiki/Degrees%20of%20freedom
In many scientific fields, the degrees of freedom of a system is the number of parameters of the system that may vary independently. For example, a point in the plane has two degrees of freedom for translation: its two coordinates; a non-infinitesimal object on the plane might have additional degrees of freedom related to its orientation. In mathematics, this notion is formalized as the dimension of a manifold or an algebraic variety. When degrees of freedom is used instead of dimension, this usually means that the manifold or variety that models the system is only implicitly defined. See: Degrees of freedom (mechanics), number of independent motions that are allowed to the body or, in case of a mechanism made of several bodies, number of possible independent relative motions between the pieces of the mechanism Degrees of freedom (physics and chemistry), a term used in explaining dependence on parameters, or the dimensions of a phase space Degrees of freedom (statistics), the number of values in the final calculation of a statistic that are free to vary Degrees of freedom problem, the problem of controlling motor movement given abundant degrees of freedom See also Six degrees of freedom Dimension Broad-concept articles
Degrees of freedom
[ "Physics" ]
234
[ "Geometric measurement", "Dimension", "Physical quantities", "Theory of relativity" ]
176,644
https://en.wikipedia.org/wiki/Hyperion%20%28moon%29
Hyperion, also known as Saturn VII, is the eighth-largest moon of Saturn. It is distinguished by its highly irregular shape, chaotic rotation, low density, and its unusual sponge-like appearance. It was the first non-rounded moon to be discovered. Discovery Hyperion was independently discovered by William Cranch Bond and his son George Phillips Bond in the United States, and William Lassell in the United Kingdom in September 1848. Name The moon is named after Titan Hyperion, the god of watchfulness and observation, and the elder brother of Cronus (the Greek equivalent of the Roman god Saturn). It is also designated Saturn VII. The adjectival form of the name is Hyperionian. Hyperion's discovery came shortly after John Herschel had suggested names for the seven previously known satellites of Saturn in his 1847 publication Results of Astronomical Observations made at the Cape of Good Hope. William Lassell, who saw Hyperion two days after William Bond, had already endorsed Herschel's naming scheme and suggested the name Hyperion in accordance with it. He also beat Bond to publication. Physical characteristics Shape Hyperion is one of the largest bodies known to be highly irregularly shaped (non-ellipsoidal, i.e. not in hydrostatic equilibrium) in the Solar System. The only larger moon known to be irregular in shape is Neptune's moon Proteus. Hyperion has about 15% of the mass of Mimas, the least massive known ellipsoidal body. The largest crater on Hyperion is approximately in diameter and deep. A possible explanation for the irregular shape is that Hyperion is a fragment of a larger body that was broken up by a large impact in the distant past. A proto-Hyperion could have been in diameter (which ranges from a little below the size of Mimas to a little below the size of Tethys). Over about 1,000 years, ejecta from a presumed Hyperion breakup would have impacted Titan at low speeds, building up volatiles in the atmosphere of Titan. Composition Like most of Saturn's moons, Hyperion's low density indicates that it is composed largely of water ice with only a small amount of rock. It is thought that Hyperion may be similar to a loosely accreted pile of rubble in its physical composition. However, unlike most of Saturn's moons, Hyperion has a low albedo (0.2–0.3), indicating that it is covered by at least a thin layer of dark material. This may be material from Phoebe (which is much darker) that got past Iapetus. Hyperion is redder than Phoebe and closely matches the color of the dark material on Iapetus. Hyperion has a porosity of about 0.46. Although Hyperion is the eighth-largest moon of Saturn, it is only the ninth-most massive. Phoebe has a smaller radius, but it is more massive than Hyperion and thus denser. Surface features Voyager 2 passed through the Saturn system, but photographed Hyperion only from a distance. It discerned individual craters and an enormous ridge, but was not able to make out the texture of Hyperion's surface. Early images from the Cassini orbiter suggested an unusual appearance, but it was not until Cassini's first targeted flyby of Hyperion on 25 September 2005 that Hyperion's oddness was revealed in full. Hyperion's surface is covered with deep, sharp-edged craters that give it the appearance of a giant sponge. Dark material fills the bottom of each crater. The reddish substance contains long chains of carbon and hydrogen and appears very similar to material found on other Saturnian satellites, most notably Iapetus.
Scientists attribute Hyperion's unusual, sponge-like appearance to the fact that it has an unusually low density for such a large object. Its low density makes Hyperion quite porous, with a weak surface gravity. These characteristics mean impactors tend to compress the surface, rather than excavating it, and most material that is blown off the surface never returns. The latest analyses of data obtained by Cassini during its flybys of Hyperion in 2005 and 2006 show that about 40 percent of it is empty space. It was suggested in July 2007 that this porosity allows craters to remain nearly unchanged over the eons. The new analyses also confirmed that Hyperion is composed mostly of water ice with very little rock. Static charge Hyperion's surface is electrically charged; it was the first surface other than the Moon's discovered to be so. Orbit and rotation The Voyager 2 images and subsequent ground-based photometry indicated that Hyperion's rotation is chaotic, that is, its axis of rotation wobbles so much that its orientation in space is unpredictable. Its Lyapunov time is around 30 days. Hyperion, together with Pluto's moons Nix and Hydra, is among only a few moons in the Solar System known to rotate chaotically, although it is expected to be common in binary asteroids. It is also the only regular planetary natural satellite in the Solar System known to not be tidally locked. Hyperion is unique among the large moons in that it is very irregularly shaped, has a fairly eccentric orbit, and is near a much larger moon, Titan. These factors combine to restrict the set of conditions under which a stable rotation is possible. The 3:4 orbital resonance between Titan and Hyperion may also make a chaotic rotation more likely. The fact that its rotation is not locked probably accounts for the relative uniformity of Hyperion's surface, in contrast to many of Saturn's other moons, which have contrasting trailing and leading hemispheres. Exploration Hyperion has been imaged several times from moderate distances by the Cassini orbiter. The first close targeted flyby occurred at a distance of on 26 September 2005. Cassini made another close approach to Hyperion on 25 August 2011 when it passed from Hyperion, and a third close approach was on 16 September 2011, with closest approach of . Cassini's last flyby was on 31 May 2015 at a distance of about . See also Chaotic rotation Hyperion in fiction Moons of Saturn List of geological features on Hyperion Notes References External links Cassini mission Hyperion page at NASA's Solar System Exploration site The Planetary Society: Hyperion NASA: Saturn's Hyperion, A Moon With Odd Craters Cassini images of Hyperion Images of Hyperion at JPL's Planetary Photojournal Hyperion nomenclature from the USGS planetary nomenclature page Moons of Saturn Discoveries by William Cranch Bond 18480916 Chaotic maps Moons with a prograde orbit
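The chaotic rotation described above is usually modelled with a planar spin–orbit equation (the treatment associated with Wisdom, Peale and Mignard). The following Python sketch is illustrative only: the eccentricity, the asphericity parameter and the initial spin state are assumed round values rather than fitted ones, and the point is simply to show two almost identical spin states diverging, as expected from a Lyapunov time on the order of tens of days.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Assumed (approximate) parameters for Hyperion's spin-orbit model
E = 0.1     # orbital eccentricity (roughly 0.1 for Hyperion)
W0 = 0.89   # asphericity, omega_0 = sqrt(3(B - A)/C), assumed value

def kepler_E(M, e=E, iters=20):
    """Solve Kepler's equation M = E - e sin E by Newton iteration."""
    Ecc = M + e * np.sin(M)  # starting guess
    for _ in range(iters):
        Ecc -= (Ecc - e * np.sin(Ecc) - M) / (1.0 - e * np.cos(Ecc))
    return Ecc

def orbit(t):
    """Orbital radius r/a and true anomaly f at time t (mean motion n = 1)."""
    Ecc = kepler_E(t)
    r = 1.0 - E * np.cos(Ecc)
    f = 2.0 * np.arctan2(np.sqrt(1 + E) * np.sin(Ecc / 2),
                         np.sqrt(1 - E) * np.cos(Ecc / 2))
    return r, f

def rhs(t, y):
    """Spin-orbit equation: theta'' = -(w0^2 / 2 r^3) sin 2(theta - f)."""
    theta, dtheta = y
    r, f = orbit(t)
    return [dtheta, -(W0**2 / 2.0) / r**3 * np.sin(2.0 * (theta - f))]

# Two spin states differing by one part in 10^8
t_end = 100.0 * 2.0 * np.pi  # 100 orbital periods
sols = [solve_ivp(rhs, (0.0, t_end), [0.0, 1.2 + d],
                  rtol=1e-10, atol=1e-12, dense_output=True)
        for d in (0.0, 1e-8)]
for orbits in (10, 30, 60, 100):
    t = orbits * 2.0 * np.pi
    gap = abs(sols[0].sol(t)[0] - sols[1].sol(t)[0])
    print(f"after {orbits:3d} orbits, separation in theta = {gap:.2e}")
```

In the chaotic regime the printed separation grows by orders of magnitude, whereas for a tidally locked, synchronously rotating moon the two trajectories would stay close.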
Hyperion (moon)
[ "Mathematics" ]
1,363
[ "Functions and mappings", "Mathematical objects", "Mathematical relations", "Chaotic maps", "Dynamical systems" ]
176,670
https://en.wikipedia.org/wiki/Operon
In genetics, an operon is a functioning unit of DNA containing a cluster of genes under the control of a single promoter. The genes are transcribed together into an mRNA strand and either translated together in the cytoplasm, or undergo splicing to create monocistronic mRNAs that are translated separately, i.e. several strands of mRNA that each encode a single gene product. The result of this is that the genes contained in the operon are either expressed together or not at all. Several genes must be co-transcribed to define an operon. Originally, operons were thought to exist solely in prokaryotes (which includes organelles like plastids that are derived from bacteria), but they were discovered in eukaryotes in the early 1990s, where they are considered rare. In general, expression of prokaryotic operons leads to the generation of polycistronic mRNAs, while eukaryotic operons lead to monocistronic mRNAs. Operons are also found in viruses such as bacteriophages. For example, T7 phages have two operons. The first operon codes for various products, including a special T7 RNA polymerase which can bind to and transcribe the second operon. The second operon includes a lysis gene meant to cause the host cell to burst. History The term "operon" was first proposed in a short paper in the Proceedings of the French Academy of Science in 1960. From this paper, the so-called general theory of the operon was developed. This theory suggested that in all cases, genes within an operon are negatively controlled by a repressor acting at a single operator located before the first gene. Later, it was discovered that genes could be positively regulated and also regulated at steps that follow transcription initiation. Therefore, it is not possible to talk of a general regulatory mechanism, because different operons have different mechanisms. Today, the operon is simply defined as a cluster of genes transcribed into a single mRNA molecule. Nevertheless, the development of the concept is considered a landmark event in the history of molecular biology. The first operon to be described was the lac operon in E. coli. The 1965 Nobel Prize in Physiology or Medicine was awarded to François Jacob, André Michel Lwoff and Jacques Monod for their discoveries concerning the operon and virus synthesis. Overview Operons occur primarily in prokaryotes but also rarely in some eukaryotes, including nematodes such as C. elegans and the fruit fly, Drosophila melanogaster. rRNA genes often exist in operons that have been found in a range of eukaryotes including chordates. An operon is made up of several structural genes arranged under a common promoter and regulated by a common operator. It is defined as a set of adjacent structural genes, plus the adjacent regulatory signals that affect transcription of the structural genes. The regulators of a given operon, including repressors, corepressors, and activators, are not necessarily coded for by that operon. The location and condition of the regulators, promoter, operator and structural DNA sequences can determine the effects of common mutations. Operons are related to regulons, stimulons and modulons; whereas operons contain a set of genes regulated by the same operator, regulons contain a set of genes under regulation by a single regulatory protein, and stimulons contain a set of genes under regulation by a single cell stimulus. According to its authors, the term "operon" is derived from the verb "to operate".
As a unit of transcription An operon contains one or more structural genes which are generally transcribed into one polycistronic mRNA (a single mRNA molecule that codes for more than one protein). However, the definition of an operon does not require the mRNA to be polycistronic, though in practice, it usually is. Upstream of the structural genes lies a promoter sequence which provides a site for RNA polymerase to bind and initiate transcription. Close to the promoter lies a section of DNA called an operator. Operons versus clustering of prokaryotic genes All the structural genes of an operon are turned ON or OFF together, due to a single promoter and operator upstream of them, but sometimes more control over the gene expression is needed. To achieve this, some bacterial genes are located near one another, but each has its own specific promoter; this is called gene clustering. Usually these genes encode proteins which will work together in the same pathway, such as a metabolic pathway. Gene clustering helps a prokaryotic cell to produce metabolic enzymes in the correct order. In one study, it has been posited that in the Asgard archaea, ribosomal protein coding genes occur in clusters that are less conserved in their organization than in other Archaea; the closer an Asgard archaeon is to the eukaryotes, the more dispersed is the arrangement of the ribosomal protein coding genes. General structure An operon is made up of 3 basic DNA components: Promoter – a nucleotide sequence that enables a gene to be transcribed. The promoter is recognized by RNA polymerase, which then initiates transcription. In RNA synthesis, promoters indicate which genes should be used for messenger RNA creation – and, by extension, control which proteins the cell produces. Operator – a segment of DNA to which a repressor binds. It is classically defined in the lac operon as a segment between the promoter and the genes of the operon. The main operator (O1) in the lac operon is located slightly downstream of the promoter; two additional operators, O2 and O3, are located at -82 and +412, respectively. In the case of a repressor, the repressor protein physically obstructs the RNA polymerase from transcribing the genes. Structural genes – the genes that are co-regulated by the operon. Not always included within the operon, but important in its function, is a regulatory gene, a constantly expressed gene which codes for repressor proteins. The regulatory gene does not need to be in, adjacent to, or even near the operon to control it. An inducer (small molecule) can displace a repressor (protein) from the operator site (DNA), resulting in an uninhibited operon. Alternatively, a corepressor can bind to the repressor to allow its binding to the operator site. A good example of this type of regulation is seen for the trp operon. Regulation Control of an operon is a type of gene regulation that enables organisms to regulate the expression of various genes depending on environmental conditions. Operon regulation can be either negative or positive by induction or repression. Negative control involves the binding of a repressor to the operator to prevent transcription. In negative inducible operons, a regulatory repressor protein is normally bound to the operator, which prevents the transcription of the genes on the operon. If an inducer molecule is present, it binds to the repressor and changes its conformation so that it is unable to bind to the operator. This allows for expression of the operon.
The lac operon is a negatively controlled inducible operon, where the inducer molecule is allolactose. In negative repressible operons, transcription of the operon normally takes place. Repressor proteins are produced by a regulator gene, but they are unable to bind to the operator in their normal conformation. However, certain molecules called corepressors are bound by the repressor protein, causing a conformational change to the active site. The activated repressor protein binds to the operator and prevents transcription. The trp operon, involved in the synthesis of tryptophan (which itself acts as the corepressor), is a negatively controlled repressible operon. Operons can also be positively controlled. With positive control, an activator protein stimulates transcription by binding to DNA (usually at a site other than the operator). In positive inducible operons, activator proteins are normally unable to bind to the pertinent DNA. When an inducer is bound by the activator protein, it undergoes a change in conformation so that it can bind to the DNA and activate transcription. Examples of positive inducible operons include the MerR family of transcriptional activators. In positive repressible operons, the activator proteins are normally bound to the pertinent DNA segment. However, when an inhibitor is bound by the activator, it is prevented from binding the DNA. This stops activation and transcription of the system. The lac operon The lac operon of the model bacterium Escherichia coli was the first operon to be discovered and provides a typical example of operon function. It consists of three adjacent structural genes, a promoter, a terminator, and an operator. The lac operon is regulated by several factors including the availability of glucose and lactose. It can be activated by allolactose. Lactose binds to the repressor protein and prevents it from repressing gene transcription. This is an example of the derepressible (from above: negative inducible) model. So it is a negative inducible operon, induced by the presence of lactose or allolactose. The trp operon Discovered in 1953 by Jacques Monod and colleagues, the trp operon in E. coli was the first repressible operon to be discovered. While the lac operon can be activated by a chemical (allolactose), the tryptophan (Trp) operon is inhibited by a chemical (tryptophan). This operon contains five structural genes: trp E, trp D, trp C, trp B, and trp A, which encode tryptophan synthetase. It also contains a promoter which binds to RNA polymerase and an operator which blocks transcription when bound to the protein synthesized by the repressor gene (trp R) that binds to the operator. In the lac operon, lactose binds to the repressor protein and prevents it from repressing gene transcription, while in the trp operon, tryptophan binds to the repressor protein and enables it to repress gene transcription. Also unlike the lac operon, the trp operon contains a leader peptide and an attenuator sequence which allows for graded regulation. This is an example of the corepressible model. Predicting the number and organization of operons The number and organization of operons have been studied most critically in E. coli. As a result, predictions can be made based on an organism's genomic sequence. One prediction method uses the intergenic distance between reading frames as a primary predictor of the number of operons in the genome. The separation merely changes the frame and guarantees that the read-through is efficient.
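A minimal sketch of the intergenic-distance heuristic just mentioned is given below. The gene coordinates are illustrative (loosely modelled on the E. coli thr region) and the 50 bp cutoff is an assumed threshold, not a published constant.

```python
# Each gene: (name, strand, start, end), sorted by start coordinate.
# Coordinates are illustrative, loosely based on the E. coli thr region.
genes = [
    ("thrL", "+", 190, 255),
    ("thrA", "+", 337, 2799),
    ("thrB", "+", 2801, 3733),
    ("thrC", "+", 3734, 5020),
    ("yaaX", "+", 5234, 5530),  # separated by a longer intergenic stretch
]

MAX_GAP = 50  # assumed intergenic-distance threshold in base pairs

def predict_operons(genes, max_gap=MAX_GAP):
    """Group adjacent same-strand genes whose intergenic gap is small."""
    operons, current = [], [genes[0]]
    for prev, gene in zip(genes, genes[1:]):
        same_strand = gene[1] == prev[1]
        gap = gene[2] - prev[3]          # next start minus previous end
        if same_strand and gap <= max_gap:
            current.append(gene)         # likely co-transcribed
        else:
            operons.append(current)      # boundary: start a new operon
            current = [gene]
    operons.append(current)
    return operons

for op in predict_operons(genes):
    print([g[0] for g in op])
# -> ['thrL'], then ['thrA', 'thrB', 'thrC'], then ['yaaX']
```

Real predictors refine this raw distance signal with conserved gene order, strand information and the functional class of the gene products.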
Longer stretches exist where operons start and stop, often up to 40–50 bases. An alternative method to predict operons is based on finding gene clusters where gene order and orientation is conserved in two or more genomes. Operon prediction is even more accurate if the functional class of the molecules is considered. Bacteria have clustered their reading frames into units, sequestered by co-involvement in protein complexes, common pathways, or shared substrates and transporters. Thus, accurate prediction would involve all of these data, a difficult task indeed. Pascale Cossart's laboratory was the first to experimentally identify all operons of a microorganism, Listeria monocytogenes. The 517 polycistronic operons are listed in a 2009 study describing the global changes in transcription that occur in L. monocytogenes under different conditions. See also Evolutionary developmental biology Genetic code Gene regulatory network L-arabinose operon Protein biosynthesis TATA box Umu Chromotest References External links Mycobacterium tuberculosis H37Rv Operon Correlation Browser OBD - Operon database (a bit awkward to use though) Gene expression
Operon
[ "Chemistry", "Biology" ]
2,545
[ "Gene expression", "Molecular genetics", "Cellular processes", "Molecular biology", "Biochemistry", "Operons" ]
176,673
https://en.wikipedia.org/wiki/Well-posed%20problem
In mathematics, a well-posed problem is one for which the following properties hold: the problem has a solution; the solution is unique; and the solution's behavior changes continuously with the initial conditions. Examples of archetypal well-posed problems include the Dirichlet problem for Laplace's equation, and the heat equation with specified initial conditions. These might be regarded as 'natural' problems in that there are physical processes modelled by these problems. Problems that are not well-posed in the sense above are termed ill-posed. A simple example is a global optimization problem, because the location of the optima is generally not a continuous function of the parameters specifying the objective, even when the objective itself is a smooth function of those parameters. Inverse problems are often ill-posed; for example, the inverse heat equation, deducing a previous distribution of temperature from final data, is not well-posed in that the solution is highly sensitive to changes in the final data. Continuum models must often be discretized in order to obtain a numerical solution. While solutions may be continuous with respect to the initial conditions, they may suffer from numerical instability when solved with finite precision, or with errors in the data. Conditioning Even if a problem is well-posed, it may still be ill-conditioned, meaning that a small error in the initial data can result in much larger errors in the answers. Problems in nonlinear complex systems (so-called chaotic systems) provide well-known examples of instability. An ill-conditioned problem is indicated by a large condition number. If the problem is well-posed, then it stands a good chance of solution on a computer using a stable algorithm. If it is not well-posed, it needs to be re-formulated for numerical treatment. Typically this involves including additional assumptions, such as smoothness of solution. This process is known as regularization. Tikhonov regularization is one of the most commonly used techniques for the regularization of linear ill-posed problems. Existence of local solutions The existence of local solutions is often an important part of the well-posedness problem, and it is the foundation of many estimate methods, for example the energy method below. There are many results on this topic. For example, the Cauchy–Kowalevski theorem for Cauchy initial value problems essentially states that if the terms in a partial differential equation are all made up of analytic functions and a certain transversality condition is satisfied (the hyperplane or more generally hypersurface where the initial data are posed must be non-characteristic with respect to the partial differential operator), then on certain regions, there necessarily exist solutions which are likewise analytic functions. This is a fundamental result in the study of analytic partial differential equations. Surprisingly, the theorem does not hold in the setting of smooth functions; an example discovered by Hans Lewy in 1957 consists of a linear partial differential equation whose coefficients are smooth (i.e., have derivatives of all orders) but not analytic, for which no solution exists. So the Cauchy–Kowalevski theorem is necessarily limited in its scope to analytic functions. Energy method The energy method is useful for establishing both uniqueness and continuity with respect to initial conditions (i.e. it does not establish existence). The method is based upon deriving an upper bound of an energy-like functional for a given problem.
Example: Consider the diffusion equation $u_t = \alpha\,u_{xx}$ on the unit interval $(0,1)$ with homogeneous Dirichlet boundary conditions $u(0,t) = u(1,t) = 0$ and suitable initial data $u(x,0) = f(x)$ (e.g. for which $\|f\|_2 < \infty$). Multiply the equation by $u$ and integrate in space over the unit interval to obtain $\tfrac{1}{2}\tfrac{d}{dt}\int_0^1 u^2\,dx = \alpha\int_0^1 u\,u_{xx}\,dx = -\alpha\int_0^1 u_x^2\,dx \le 0$, after integrating by parts and using the boundary conditions. This tells us that $\|u\|_2$ (the $L^2$-norm) cannot grow in time. By multiplying by two and integrating in time, from $0$ up to $t$, one finds $\|u(\cdot,t)\|_2^2 + 2\alpha\int_0^t \|u_x(\cdot,s)\|_2^2\,ds = \|f\|_2^2$. This result is the energy estimate for this problem. To show uniqueness of solutions, assume there are two distinct solutions to the problem, call them $u$ and $v$, each satisfying the same initial data. Upon defining $w = u - v$, then, via the linearity of the equations, one finds that $w$ satisfies the same problem with initial data $w(x,0) = 0$. Applying the energy estimate tells us $\|w(\cdot,t)\|_2^2 \le 0$, which implies $u = v$ (almost everywhere). Similarly, to show continuity with respect to initial conditions, assume that $u$ and $v$ are solutions corresponding to different initial data $u(x,0) = f(x)$ and $v(x,0) = g(x)$. Considering $w = u - v$ once more, one finds that $w$ satisfies the same equations as above but with $w(x,0) = f(x) - g(x)$. This leads to the energy estimate $\|w(\cdot,t)\|_2^2 \le \|f - g\|_2^2$, which establishes continuity (i.e. as $f$ and $g$ become closer, as measured by the $L^2$-norm of their difference, then $\|w(\cdot,t)\|_2 \to 0$). The maximum principle is an alternative approach to establish uniqueness and continuity of solutions with respect to initial conditions for this example. The existence of solutions to this problem can be established using Fourier series. Semi-group theory If it is possible to denote the solution to a Cauchy problem (1) $\tfrac{du}{dt} = Au$, $u(0) = u_0$, where $A$ is a linear operator mapping a dense linear subspace $D(A)$ of $X$ into $X$, as $u(t) = S(t)u_0$, where $(S(t))_{t \ge 0}$ is a family of linear operators on $X$ satisfying $S(0) = I$; $S(a+b) = S(a)S(b) = S(b)S(a)$ for any $a, b \ge 0$; $t \mapsto S(t)w$ is continuous on $[0,\infty)$ for every $w$ in $X$; and $S(t)w \to w$ as $t \to 0^{+}$ for every $w$ in $X$, then (1) is well-posed. The Hille–Yosida theorem states the criteria on $A$ for such an $S$ to exist. See also Total absorption spectroscopy – an example of an inverse problem or ill-posed problem in a real-life situation that is solved by means of the expectation–maximization algorithm Notes References Numerical analysis Partial differential equations
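The non-growth of the $L^2$ norm asserted by the energy estimate is easy to check numerically. The sketch below is an assumed forward-Euler, central-difference discretization chosen for brevity (not one prescribed by any reference here); the time step obeys $\Delta t \le \Delta x^2 / 2$ because the explicit scheme is itself only conditionally stable, echoing the remark above that discretized problems can suffer numerical instability even when the continuous problem is well-posed.

```python
import numpy as np

# u_t = u_xx on (0,1), u(0,t) = u(1,t) = 0, explicit finite differences
N = 100
dx = 1.0 / N
dt = 0.4 * dx**2                 # satisfies the stability bound dt <= dx^2/2
x = np.linspace(0.0, 1.0, N + 1)
u = np.sin(np.pi * x)            # smooth initial data vanishing at the ends

def l2_norm(v):
    return np.sqrt(np.sum(v**2) * dx)   # discrete L2 norm

for step in range(1, 5001):
    u[1:-1] += dt * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    if step % 1000 == 0:
        # exact solution for this initial datum: sin(pi x) exp(-pi^2 t),
        # so the norm should shrink by the factor printed alongside it
        t = step * dt
        print(f"t = {t:.4f}  ||u||_2 = {l2_norm(u):.6f}  "
              f"exact decay factor = {np.exp(-np.pi**2 * t):.6f}")
```

The printed norm decreases monotonically, exactly as the energy estimate demands; running the same loop with a time step violating the stability bound would show the discrete norm exploding even though the continuous problem remains well-posed.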
Well-posed problem
[ "Mathematics" ]
1,110
[ "Computational mathematics", "Mathematical relations", "Approximations", "Numerical analysis" ]
176,695
https://en.wikipedia.org/wiki/Audio%20feedback
Audio feedback (also known as acoustic feedback or simply feedback) is a positive feedback situation that may occur when an acoustic path exists between an audio output (for example, a loudspeaker) and its audio input (for example, a microphone or guitar pickup). In this example, a signal received by the microphone is amplified and passed out of the loudspeaker. The sound from the loudspeaker can then be received by the microphone again, amplified further, and then passed out through the loudspeaker again. The frequency of the resulting howl is determined by resonance frequencies in the microphone, amplifier, and loudspeaker, the acoustics of the room, the directional pick-up and emission patterns of the microphone and loudspeaker, and the distance between them. The principles of audio feedback were first discovered by the Danish scientist Søren Absalon Larsen, hence it is also known as the Larsen effect. Feedback is almost always considered undesirable when it occurs with a singer's or public speaker's microphone at an event using a sound reinforcement system or PA system. Audio engineers typically use directional microphones with cardioid pickup patterns and various electronic devices, such as equalizers and, since the 1990s, automatic feedback suppressors, to prevent feedback, which detracts from the audience's enjoyment of the event and may damage equipment or hearing. Since the 1960s, electric guitar players in rock music bands using loud guitar amplifiers, speaker cabinets and distortion effects have intentionally created guitar feedback to create different sounds, including long sustained tones that cannot be produced using standard playing techniques. The sound of guitar feedback is considered to be a desirable musical effect in heavy metal music, hardcore punk and grunge. Jimi Hendrix was an innovator in the intentional use of guitar feedback in his guitar solos to create unique musical sounds. History and theory The conditions for feedback follow the Barkhausen stability criterion, namely that, with sufficiently high gain, a stable oscillation can (and usually will) occur in a feedback loop whose frequency is such that the phase delay is an integer multiple of 360 degrees and the gain at that frequency is equal to 1. If the small-signal gain is greater than 1 for some frequency, then the system will start to oscillate at that frequency, because noise at that frequency will be amplified. Sound will be produced without anyone actually playing. The sound level will increase until the output starts clipping, reducing the loop gain to exactly unity. This is the principle upon which electronic oscillators are based; in that case, although the feedback loop is purely electronic, the principle is the same. If the gain is large but slightly less than 1, then ringing will be introduced, but only when at least some input sound is already being sent through the system. Early academic work on acoustical feedback was done by Dr. C. Paul Boner. Boner was responsible for establishing basic theories of acoustic feedback, room-ring modes, and room-sound system equalizing techniques. Boner reasoned that when feedback happened, it did so at one precise frequency. He also reasoned that it could be stopped by inserting a very narrow notch filter at that frequency in the loudspeaker's signal chain. He worked with Gifford White, founder of White Instruments, to hand-craft notch filters for specific feedback frequencies in specific rooms.
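A toy digital model makes the Barkhausen-style behaviour concrete: with loop gain above unity, a signal grows out of the noise floor until clipping pins the effective gain at one. Everything in this sketch is an assumption chosen for demonstration — a single pure delay stands in for the acoustic path, and hard clipping for the amplifier; a real room has a frequency-dependent transfer function rather than one fixed delay.

```python
import numpy as np

fs = 48000     # sample rate, Hz
gain = 1.1     # assumed small-signal loop gain, slightly above unity
delay = 240    # acoustic path delay in samples (~5 ms at 48 kHz)

rng = np.random.default_rng(0)
noise = 1e-4 * rng.standard_normal(fs)   # ambient noise entering the mic
buf = np.zeros(delay)                    # circular buffer = sound in flight
out = np.zeros(fs)

for n in range(fs):
    # microphone picks up ambient noise plus the delayed loudspeaker output
    mic = noise[n] + buf[n % delay]
    # amplifier: linear below full scale, hard-clipped at +/- 1
    out[n] = np.clip(gain * mic, -1.0, 1.0)
    buf[n % delay] = out[n]

# level rises from the noise floor until clipping holds the loop gain at 1;
# the howl settles near a frequency whose period divides the loop delay
for ms in (50, 200, 500, 990):
    i = int(ms * fs / 1000)
    print(f"t = {ms:4d} ms   peak level = {np.abs(out[i:i + 480]).max():.4f}")
```

With the assumed 5 ms round trip, the level multiplies by the loop gain roughly 200 times per second, so the printed peaks climb from around 10^-4 toward full scale within half a second, after which clipping limits the oscillation exactly as described above.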
Distance To maximize gain before feedback, the amount of sound energy that is fed back to the microphones must be reduced as much as is practical. As sound pressure falls off with 1/r with respect to the distance r in free space, or up to a distance known as the reverberation distance in closed spaces (and the energy density with 1/r²), it is important to keep the microphones at a large enough distance from the speaker systems. As well, microphones should not be positioned in front of speakers, and individuals using mics should be asked to avoid pointing the microphone at speaker enclosures. Directivity Additionally, the loudspeakers and microphones should have non-uniform directivity and should stay out of the maximum sensitivity of each other, ideally in a direction of cancellation. Public address speakers often achieve directivity in the mid and treble region (and good efficiency) via horn systems. Sometimes the woofers have a cardioid characteristic. Professional setups circumvent feedback by placing the main speakers away from the band or artist, and then having several smaller speakers known as monitors pointing back at each band member, but in the opposite direction to that in which the microphones are pointing, taking advantage of microphones with a cardioid pickup pattern, which are common in sound reinforcement applications. This configuration reduces the opportunities for feedback and allows independent control of the sound pressure levels for the audience and the performers. Frequency response Almost always, the natural frequency response of a sound reinforcement system is not ideally flat, as this leads to acoustical feedback at the frequency with the highest loop gain, which may be a resonance with a gain much higher than the average over all frequencies. It is therefore helpful to apply some form of equalization to reduce the gain at this frequency. Feedback can be reduced manually by ringing out a sound system prior to a performance. The sound engineer can increase the level of a microphone until feedback occurs. The engineer can then attenuate the relevant frequency on an equalizer, preventing feedback at that frequency but allowing sufficient volume at other frequencies. Many professional sound engineers can identify feedback frequencies by ear, but others use a real-time analyzer to identify the ringing frequency. To avoid feedback, automatic feedback suppressors can be used. Some of these work by shifting the frequency slightly, with the upshift resulting in a chirp sound instead of the howling sound of unaddressed feedback. Other devices use sharp notch filters to filter out offending frequencies. Adaptive algorithms are often used to automatically tune these notch filters. Deliberate uses To intentionally create feedback, an electric guitar player needs a guitar amplifier with very high gain (amplification), or the guitar must be brought near the speaker. The guitarist then allows the strings to vibrate freely and brings the guitar close to the loudspeaker of the guitar amp. The use of distortion effects units adds additional gain and facilitates the creation of intentional feedback. Early examples in popular music A deliberate use of acoustic feedback was pioneered by blues and rock and roll guitarists such as Willie Johnson, Johnny Watson and Link Wray. According to AllMusic's Richie Unterberger, the very first use of feedback on a commercial rock record is the introduction of the song "I Feel Fine" by the Beatles, recorded in 1964.
Jay Hodgson agrees that this feedback, created by John Lennon leaning a semi-acoustic guitar against an amplifier, was the first chart-topper to showcase feedback distortion. The Who's 1965 hits "Anyway, Anyhow, Anywhere" and "My Generation" featured feedback manipulation by Pete Townshend, with an extended solo in the former and the shaking of his guitar in front of the amplifier to create a throbbing noise in the latter. Canned Heat's "Fried Hockey Boogie" also featured guitar feedback produced by Henry Vestine during his solo to create a highly amplified, distorted boogie style of feedback. In 1963, the teenage Brian May and his father custom-built his signature guitar Red Special, which was purposely designed to feed back. Feedback was used extensively after 1965 by the Monks, Jefferson Airplane, the Velvet Underground and the Grateful Dead, who included in many of their live shows a segment named Feedback, a several-minute-long feedback-driven improvisation. Feedback has since become a striking characteristic of rock music, as electric guitar players such as Jeff Beck, Pete Townshend, Dave Davies, Steve Marriott and Jimi Hendrix deliberately induced feedback by holding their guitars close to the amplifier's speaker. An example of feedback can be heard on Hendrix's performance of "Can You See Me?" at the Monterey Pop Festival. The entire guitar solo was created using amplifier feedback. Jazz guitarist Gábor Szabó was one of the earliest jazz musicians to use controlled feedback in his music, which is prominent on his live album The Sorcerer (1967). Szabó's method included the use of a flat-top acoustic guitar with a magnetic pickup. Lou Reed created his album Metal Machine Music (1975) entirely from loops of feedback played at various speeds. Introductions, transitions, and fade-outs In addition to "I Feel Fine", feedback was used on the introduction to songs including Jimi Hendrix's "Foxy Lady", the Beatles' "It's All Too Much", Hendrix's "Crosstown Traffic", David Bowie's "Little Wonder", the Strokes' "New York City Cops", Ben Folds Five's "Fair", Midnight Juggernauts' "Road to Recovery", Nirvana's "Radio Friendly Unit Shifter", the Jesus and Mary Chain's "Tumbledown" and "Catchfire", the Stone Roses' "Waterfall", Porno for Pyros' "Tahitian Moon", Tool's "Stinkfist", and the Cure's "Prayer For Rain". Examples of feedback combined with a quick volume swell used as a transition include Weezer's "My Name Is Jonas" and "Say It Ain't So"; the Strokes' "Reptilia", "New York City Cops", and "Juicebox"; Dream Theater's "As I Am"; as well as numerous tracks by Meshuggah and Tool. Cacophonous feedback fade-outs ending a song are most often used to generate rather than relieve tension, often cross-faded in after a thematic and musical release. Examples include Modwheelmood's remix of Nine Inch Nails' "The Great Destroyer"; and the Jesus and Mary Chain's "Teenage Lust", "Tumbledown", "Catchfire", "Sundown", and "Frequency".
Hugh Davies and Alvin Lucier both use feedback in their works. Roland Kayn based much of his compositional oeuvre, which he termed "cybernetic music", on audio systems incorporating feedback. More recent examples can be found in the work of, for example, Lara Stanic, Paul Craenen, Anne Wellmer, Adam Basanta, Lesley Flanigan, Ronald Boersen and Erfan Abdi. Pitched feedback Pitched melodies may be created entirely from feedback by changing the angle between a guitar and amplifier after establishing a feedback loop. Examples include Tool's "Jambi", Robert Fripp's guitar on David Bowie's "Heroes" (album version), and Jimi Hendrix's "Third Stone from the Sun" and his live performance of "Wild Thing" at the Monterey Pop Festival. Contemporary uses Audio feedback became a signature feature of many underground rock bands during the 1980s. American noise-rockers Sonic Youth melded the rock-feedback tradition with a compositional and classical approach (notably covering Reich's "Pendulum Music"), and guitarist/producer Steve Albini's group Big Black also worked controlled feedback into the makeup of their songs. With the alternative rock movement of the 1990s, feedback again saw a surge in popular usage by suddenly mainstream acts like Nirvana, the Red Hot Chili Peppers, Rage Against the Machine and the Smashing Pumpkins. The use of the "no-input-mixer" method for sound generation, by feeding a mixing console back into itself, has been adopted in experimental electronic and noise music by practitioners such as Toshimaru Nakamura. Devices The principle of feedback is used in many guitar sustain devices. Examples include handheld devices like the EBow, built-in guitar pickups that increase the instrument's sonic sustain, and sonic transducers mounted on the head of a guitar. Intended closed-circuit feedback can also be created by an effects unit, such as a delay pedal or effect fed back into a mixing console. The feedback can be controlled by using the fader to determine a volume level. The Boss DF-2 Super Feedbacker and Distortion pedal is an electronic effect unit that helps electric guitarists create feedback effects. The halldorophone is an electro-acoustic string instrument specifically made to work with string-based feedback. See also Circuit bending Comb filter Echo suppression and cancellation Video feedback References External links Audio effects Audio electronics Rock music Guitar performance techniques Feedback
Audio feedback
[ "Engineering" ]
2,638
[ "Audio electronics", "Audio engineering" ]
176,733
https://en.wikipedia.org/wiki/Cylindrical%20coordinate%20system
A cylindrical coordinate system is a three-dimensional coordinate system that specifies point positions by the distance from a chosen reference axis, the direction from the axis relative to a chosen reference direction, and the distance from a chosen reference plane perpendicular to the axis. The latter distance is given as a positive or negative number depending on which side of the reference plane faces the point. The origin of the system is the point where all three coordinates can be given as zero. This is the intersection between the reference plane and the axis. The axis is variously called the cylindrical or longitudinal axis, to differentiate it from the polar axis, which is the ray that lies in the reference plane, starting at the origin and pointing in the reference direction. Other directions perpendicular to the longitudinal axis are called radial lines. The distance from the axis may be called the radial distance or radius, while the angular coordinate is sometimes referred to as the angular position or as the azimuth. The radius and the azimuth are together called the polar coordinates, as they correspond to a two-dimensional polar coordinate system in the plane through the point, parallel to the reference plane. The third coordinate may be called the height or altitude (if the reference plane is considered horizontal), longitudinal position, or axial position. Cylindrical coordinates are useful in connection with objects and phenomena that have some rotational symmetry about the longitudinal axis, such as water flow in a straight pipe with round cross-section, heat distribution in a metal cylinder, electromagnetic fields produced by an electric current in a long, straight wire, accretion disks in astronomy, and so on. They are sometimes called cylindrical polar coordinates or polar cylindrical coordinates, and are sometimes used to specify the position of stars in a galaxy (galactocentric cylindrical polar coordinates). Definition The three coordinates ($\rho$, $\varphi$, $z$) of a point are defined as: The radial distance $\rho$ is the Euclidean distance from the $z$-axis to the point $P$. The azimuth $\varphi$ is the angle between the reference direction on the chosen plane and the line from the origin to the projection of $P$ on the plane. The axial coordinate or height $z$ is the signed distance from the chosen plane to the point $P$. Unique cylindrical coordinates As in polar coordinates, the same point with cylindrical coordinates $(\rho, \varphi, z)$ has infinitely many equivalent coordinates, namely $(\rho, \varphi \pm n \times 360^{\circ}, z)$ and $(-\rho, \varphi \pm (2n+1) \times 180^{\circ}, z)$, where $n$ is any integer. Moreover, if the radius $\rho$ is zero, the azimuth is arbitrary. In situations where someone wants a unique set of coordinates for each point, one may restrict the radius to be non-negative ($\rho \ge 0$) and the azimuth $\varphi$ to lie in a specific interval spanning 360°, such as $(-180^{\circ}, +180^{\circ}]$ or $[0^{\circ}, 360^{\circ})$. Conventions The notation for cylindrical coordinates is not uniform. The ISO standard 31-11 recommends $(\rho, \varphi, z)$, where $\rho$ is the radial coordinate, $\varphi$ the azimuth, and $z$ the height. However, the radius is also often denoted $r$ or $s$, the azimuth by $\theta$ or $t$, and the third coordinate by $h$ or (if the cylindrical axis is considered horizontal) $x$, or any context-specific letter. In concrete situations, and in many mathematical illustrations, a positive angular coordinate is measured counterclockwise as seen from any point with positive height. Coordinate system conversions The cylindrical coordinate system is one of many three-dimensional coordinate systems. The following formulae may be used to convert between them.
Cartesian coordinates For the conversion between cylindrical and Cartesian coordinates, it is convenient to assume that the reference plane of the former is the Cartesian $xy$-plane (with equation $z = 0$), and the cylindrical axis is the Cartesian $z$-axis. Then the $z$-coordinate is the same in both systems, and the correspondence between cylindrical $(\rho, \varphi)$ and Cartesian $(x, y)$ is the same as for polar coordinates, namely $x = \rho\cos\varphi$, $y = \rho\sin\varphi$ in one direction, and $\rho = \sqrt{x^2 + y^2}$ with $\varphi = \arcsin(y/\rho)$ if $x \ge 0$ and $\varphi = \pi - \arcsin(y/\rho)$ if $x < 0$ (and $\varphi$ arbitrary if $\rho = 0$) in the other. The arcsine function is the inverse of the sine function, and is assumed to return an angle in the range $[-\tfrac{\pi}{2}, +\tfrac{\pi}{2}]$. These formulas yield an azimuth $\varphi$ in the range $(-\tfrac{\pi}{2}, \tfrac{3\pi}{2}]$. By using the arctangent function, which also returns an angle in a half-turn range, one may also compute $\varphi$ without computing $\rho$ first. For other formulas, see the article Polar coordinate system. Many modern programming languages provide a function that will compute the correct azimuth $\varphi$, in the range $(-\pi, \pi]$, given $x$ and $y$, without the need to perform a case analysis as above. For example, this function is called atan2(y, x) in the C programming language, and atan in Common Lisp. Spherical coordinates Spherical coordinates (radius $r$, elevation or inclination $\theta$, azimuth $\varphi$) may be converted to or from cylindrical coordinates, depending on whether $\theta$ represents elevation or inclination, by the following: with $\theta$ the inclination measured from the axis, $\rho = r\sin\theta$ and $z = r\cos\theta$; with $\theta$ the elevation measured from the reference plane, $\rho = r\cos\theta$ and $z = r\sin\theta$; the azimuth $\varphi$ is the same in both systems. Line and volume elements In many problems involving cylindrical polar coordinates, it is useful to know the line and volume elements; these are used in integration to solve problems involving paths and volumes. The line element is $\mathrm{d}\boldsymbol{r} = \mathrm{d}\rho\,\hat{\boldsymbol{\rho}} + \rho\,\mathrm{d}\varphi\,\hat{\boldsymbol{\varphi}} + \mathrm{d}z\,\hat{\boldsymbol{z}}$. The volume element is $\mathrm{d}V = \rho\,\mathrm{d}\rho\,\mathrm{d}\varphi\,\mathrm{d}z$. The surface element in a surface of constant radius $\rho$ (a vertical cylinder) is $\mathrm{d}S_\rho = \rho\,\mathrm{d}\varphi\,\mathrm{d}z$. The surface element in a surface of constant azimuth $\varphi$ (a vertical half-plane) is $\mathrm{d}S_\varphi = \mathrm{d}\rho\,\mathrm{d}z$. The surface element in a surface of constant height $z$ (a horizontal plane) is $\mathrm{d}S_z = \rho\,\mathrm{d}\rho\,\mathrm{d}\varphi$. The del operator in this system leads to the following expressions for gradient, divergence, curl and Laplacian: $\nabla f = \frac{\partial f}{\partial \rho}\hat{\boldsymbol{\rho}} + \frac{1}{\rho}\frac{\partial f}{\partial \varphi}\hat{\boldsymbol{\varphi}} + \frac{\partial f}{\partial z}\hat{\boldsymbol{z}}$; $\nabla \cdot \boldsymbol{A} = \frac{1}{\rho}\frac{\partial (\rho A_\rho)}{\partial \rho} + \frac{1}{\rho}\frac{\partial A_\varphi}{\partial \varphi} + \frac{\partial A_z}{\partial z}$; $\nabla \times \boldsymbol{A} = \left(\frac{1}{\rho}\frac{\partial A_z}{\partial \varphi} - \frac{\partial A_\varphi}{\partial z}\right)\hat{\boldsymbol{\rho}} + \left(\frac{\partial A_\rho}{\partial z} - \frac{\partial A_z}{\partial \rho}\right)\hat{\boldsymbol{\varphi}} + \frac{1}{\rho}\left(\frac{\partial (\rho A_\varphi)}{\partial \rho} - \frac{\partial A_\rho}{\partial \varphi}\right)\hat{\boldsymbol{z}}$; $\nabla^2 f = \frac{1}{\rho}\frac{\partial}{\partial \rho}\left(\rho\frac{\partial f}{\partial \rho}\right) + \frac{1}{\rho^2}\frac{\partial^2 f}{\partial \varphi^2} + \frac{\partial^2 f}{\partial z^2}$. Cylindrical harmonics The solutions to the Laplace equation in a system with cylindrical symmetry are called cylindrical harmonics. Kinematics In a cylindrical coordinate system, the position of a particle can be written as $\boldsymbol{r} = \rho\,\hat{\boldsymbol{\rho}} + z\,\hat{\boldsymbol{z}}$. The velocity of the particle is the time derivative of its position, $\boldsymbol{v} = \dot{\rho}\,\hat{\boldsymbol{\rho}} + \rho\dot{\varphi}\,\hat{\boldsymbol{\varphi}} + \dot{z}\,\hat{\boldsymbol{z}}$, where the term $\rho\dot{\varphi}\,\hat{\boldsymbol{\varphi}}$ comes from the Poisson formula $\frac{\mathrm{d}\hat{\boldsymbol{\rho}}}{\mathrm{d}t} = \dot{\varphi}\,\hat{\boldsymbol{z}} \times \hat{\boldsymbol{\rho}}$. Its acceleration is $\boldsymbol{a} = (\ddot{\rho} - \rho\dot{\varphi}^2)\,\hat{\boldsymbol{\rho}} + (2\dot{\rho}\dot{\varphi} + \rho\ddot{\varphi})\,\hat{\boldsymbol{\varphi}} + \ddot{z}\,\hat{\boldsymbol{z}}$. See also List of canonical coordinate transformations Vector fields in cylindrical and spherical coordinates Del in cylindrical and spherical coordinates References Further reading External links MathWorld description of cylindrical coordinates Cylindrical Coordinates Animations illustrating cylindrical coordinates by Frank Wattenberg Three-dimensional coordinate systems Orthogonal coordinate systems
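The conversions above translate directly into code. In this short sketch (the helper names are invented for illustration), the atan2-style function mentioned in the text — math.atan2 in Python — removes the need for a case analysis on the sign of x, and a round trip recovers the original point.

```python
import math

def cartesian_to_cylindrical(x, y, z):
    """Return (rho, phi, z) with rho >= 0 and phi in (-pi, pi]."""
    rho = math.hypot(x, y)   # sqrt(x^2 + y^2)
    phi = math.atan2(y, x)   # correct quadrant, no case analysis needed
    return rho, phi, z

def cylindrical_to_cartesian(rho, phi, z):
    """Inverse conversion: x = rho cos(phi), y = rho sin(phi)."""
    return rho * math.cos(phi), rho * math.sin(phi), z

p = (-3.0, 4.0, 2.5)
cyl = cartesian_to_cylindrical(*p)
back = cylindrical_to_cartesian(*cyl)
print(cyl)   # (5.0, 2.2142..., 2.5) -- a second-quadrant azimuth
print(all(math.isclose(a, b) for a, b in zip(p, back)))  # True
```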
Cylindrical coordinate system
[ "Mathematics" ]
1,196
[ "Orthogonal coordinate systems", "Coordinate systems" ]
176,735
https://en.wikipedia.org/wiki/Neolithic%20architecture
Neolithic architecture refers to structures encompassing housing and shelter from approximately 10,000 to 2,000 BC, the Neolithic period. In southwest Asia, Neolithic cultures appear soon after 10,000 BC, initially in the Levant (Pre-Pottery Neolithic A and Pre-Pottery Neolithic B) and from there into the east and west. Early Neolithic structures and buildings can be found in southeast Anatolia, Syria, and Iraq by 8,000 BC, with agricultural societies first appearing in southeast Europe by 6,500 BC and central Europe by ca. 5,500 BC; the earliest cultural complexes there include the Starčevo-Koros (Cris), Linearbandkeramik, and Vinča. Architectural advances are an important part of the Neolithic period (10,000-2000 BC), during which some of the major innovations of human history occurred. The domestication of plants and animals, for example, led to both new economics and a new relationship between people and the world, an increase in community size and permanence, a massive development of material culture, and new social and ritual solutions to enable people to live together in these communities. New styles of individual structures and their combination into settlements provided the buildings required for the new lifestyle and economy, and were also an essential element of change. Housing The Neolithic people in the Levant, Anatolia, Syria, northern Mesopotamia and central Asia were great builders, utilising mud-brick to construct houses and villages. At Çatalhöyük, houses were plastered and painted with elaborate scenes of humans and animals. In Europe, the Neolithic long house with a timber frame, pitched, thatched roof, and walls finished in wattle and daub could be very large, presumably housing a whole extended family. Villages might comprise only a few such houses. Neolithic pile dwellings have been excavated in Sweden (Alvastra pile dwelling) and in the circum-Alpine area, with remains being found at the Mondsee and Attersee lakes in Upper Austria. Early archaeologists like Ferdinand Keller thought they formed artificial islands, much like the Scottish crannogs, but today it is clear that the majority of settlements were located on the shores of lakes and were only inundated later on. Reconstructed pile dwellings are shown in open-air museums in Unteruhldingen and Zürich (Pfahlbauland). In Romania, Moldova, and Ukraine, Neolithic settlements included wattle-and-daub structures with thatched roofs and floors made of logs covered in clay. This is also when the burdei pit-house (below-ground) style of house construction was developed, which was still used by Romanians and Ukrainians until the 20th century. Neolithic settlements and "cities" include: Göbekli Tepe in Turkey, ca. 9,000 BC Tell es-Sultan (Jericho) in the Levant, Neolithic from around 8,350 BC, arising from the earlier Epipaleolithic Natufian culture Nevali Cori in Turkey, ca. 8,000 BC Çatalhöyük in Turkey, 7,500 BC Mehrgarh in Pakistan, 7,000 BC Knap of Howar and Skara Brae, the Orkney Islands, Scotland, from 3,500 BC over 3,000 settlements of the Cucuteni-Trypillian culture, some with populations up to 15,000 residents, flourished in present-day Romania, Moldova and Ukraine from 5,400 to 2,800 BC. Tombs and ritual monuments Elaborate tombs for the dead were also built. These tombs are particularly numerous in Ireland, where there are many thousand still in existence. Neolithic people in the British Isles built long barrows and chamber tombs for their dead and causewayed camps, henges and cursus monuments.
Megalithic architecture Megaliths found in Europe and the Mediterranean were also erected in the Neolithic period. These monuments include megalithic tombs, temples and several structures of unknown function. Tomb architecture is normally easily distinguished by the presence of human remains that had originally been buried, often with recognizable intent. Other structures may have had a mixed use, now often characterised as religious, ritual, astronomical or political. The modern distinction between the various architectural functions with which we are familiar today makes it difficult for us to think of some megalithic structures as multi-purpose socio-cultural centre points. Such structures would have served a mixture of socio-economic, ideological and political functions, and indeed aesthetic ideals. The megalithic structures of Ġgantija, Tarxien, Ħaġar Qim, Mnajdra, Ta' Ħaġrat, Skorba and smaller satellite buildings on Malta and Gozo, first appearing in their current form around 3600 BC, represent one of the earliest examples of a fully developed architectural statement in which aesthetics, location, design and engineering fused into free-standing monuments. Stonehenge, the other well-known building from the Neolithic, would later (between 2600 and 2400 BC for the sarsen stones, and perhaps 3000 BC for the bluestones) be transformed into the form that we know so well. At its height, Neolithic architecture marked geographic space; its durable monumentality embodied a past, perhaps made up of memories and remembrance. In the Central Mediterranean, Malta also became home to a subterranean skeuomorphised form of architecture around 3600 BC. At the Ħal-Saflieni Hypogeum, the inhabitants of Malta carved out an underground burial complex in which surface architectural elements were used to embellish a series of chambers and entrances. It is at the Neolithic Ħal-Saflieni Hypogeum that the earliest known skeuomorphism in the world occurred. This architectural device served to define the aesthetics of the underworld in terms that were well known from the larger megaliths. On Malta and Gozo, surface and subterranean architecture defined two worlds, which later, in the Greek world, would manifest themselves in the myth of Hades and the world of the living. In Malta, therefore, we encounter Neolithic architecture which was demonstrably not purely functional, but conceptual in design and purpose. Other structures Early Neolithic water wells from the Linear Pottery culture have been found in central Germany near Leipzig. These structures were built in timber with complicated woodworking joints at the edges and are dated between 5,200 and 5,100 BC. The world's oldest known engineered roadway, the Sweet Track in England, also dates from this time. See also Ancestral Puebloans Architectural history Megalithic Temples of Malta Ħal-Saflieni Hypogeum Womb tomb Proto-city References External links Russian Architecture: Pre-History Architectural history architecture
Neolithic architecture
[ "Engineering" ]
1,352
[ "Architectural history", "Architecture" ]
176,801
https://en.wikipedia.org/wiki/Vojvodina
Vojvodina, officially the Autonomous Province of Vojvodina, is an autonomous province that occupies the northernmost part of Serbia, located in Central Europe. It lies within the Pannonian Basin, bordered to the south by the national capital Belgrade and the Sava and Danube Rivers. The administrative centre, Novi Sad, is the second-largest city in Serbia. The historic regions of Banat, Bačka, Syrmia and the northernmost part of Mačva overlap the province. Modern Vojvodina is multi-ethnic and multi-cultural, with some 26 ethnic groups and six official languages. Less than two million people, nearly 27% of Serbia's population, live in the province. Name Vojvodina is also the Serbian word for voivodeship, a type of duchy overseen by a voivode. The Serbian Voivodeship, a precursor to modern Vojvodina, was an Austrian province from 1849 to 1860. Its official name is the Autonomous Province of Vojvodina, a name rendered in each of the province's six official languages: Serbian, Hungarian, Croatian, Pannonian Rusyn, Slovak, and Romanian. History Pre-Roman times and Roman rule In the Neolithic period, two important archaeological cultures flourished in this area: the Starčevo culture and the Vinča culture. Indo-European peoples first settled in the territory of present-day Vojvodina in 3200 BC. During the Eneolithic period, the Bronze Age and the Iron Age, several Indo-European archaeological cultures were centered in or around Vojvodina, including the Vučedol culture, the Vatin culture, and the Bosut culture, among others. Before the Roman conquest in the 1st century BC, Indo-European peoples of Illyrian, Thracian and Celtic origin inhabited this area. The first states organized in this area were the Celtic State of the Scordisci (3rd century BC-1st century AD), with its capital at Singidunum (Belgrade), and the Dacian Kingdom of Burebista (1st century BC). During Roman rule, Sirmium (modern Sremska Mitrovica) was one of the four capital cities of the Roman Empire, and six Roman Emperors were born in this city or in its surroundings. The city was also the capital of several Roman administrative units, including Pannonia Inferior, Pannonia Secunda, the Diocese of Pannonia, and the Praetorian prefecture of Illyricum. Roman rule lasted until the 5th century, after which the region came into the possession of various peoples and states. While Banat was a part of the Roman province of Dacia, Syrmia belonged to the Roman province of Pannonia. Bačka was not part of the Roman Empire and was populated and ruled by the Sarmatian Iazyges. Early Middle Ages and Slavic settlement After the Romans were driven away from this region, various Indo-European and Turkic peoples and states ruled in the area. These peoples included Goths, Sarmatians, Huns, Gepids and Avars. Of greatest importance for regional history was the Gepid state, which had its capital in Sirmium. According to the 7th-century Miracles of Saint Demetrius, Avars gave the region of Syrmia to a Bulgar leader named Kuber circa 680. The Bulgars of Kuber moved south with Maurus to Macedonia, where they co-operated with Tervel in the 8th century. Slavs settled today's Vojvodina in the 6th and 7th centuries, before some of them crossed the rivers Sava and Danube and settled in the Balkans. Slavic tribes that lived in the territory of present-day Vojvodina included the Abodrites, Severans, Braničevci and Timočani. In the 9th century, after the fall of the Avar state, the first forms of Slavic statehood emerged in this area.
The first Slavic states that ruled over this region included the Bulgarian Empire, Great Moravia and Ljudevit's Pannonian Duchy. During the Bulgarian administration (9th century), local Bulgarian dukes, Salan and Glad, ruled over the region. Salan's residence was Titel, while that of Glad was possibly in the rumoured rampart of Galad, or perhaps at Kladovo (Gladovo) in eastern Serbia. Glad's descendant was the duke Ahtum, another local ruler from the 11th century who opposed the establishment of Hungarian rule over the region. In the village of Čelarevo archaeologists have also found traces of people who practised the Judaic religion. Bunardžić dated Avar-Bulgar graves excavated in Čelarevo, containing skulls with Mongolian features and Judaic symbols, to the late 8th and 9th centuries. Erdely and Vilkhnovich consider the graves to belong to the Kabars, who eventually broke ties with the Khazar Empire between the 830s and 862. (Three other Khazar tribes joined the Magyars and took part in the Magyar conquest of the Carpathian basin, including what is now Vojvodina, in 895–907.) Hungarian rule Following territorial disputes with the Byzantine and Bulgarian states, most of Vojvodina became part of the Kingdom of Hungary between the 10th and 12th centuries and remained under Hungarian administration until the 16th century (following periods of Ottoman and Habsburg administration, Hungarian political dominance over most of the region was established again in 1867, and over the entire region in 1882, after the abolition of the Habsburg Military Frontier). The regional demographic balance started changing in the 11th century, when Magyars began to replace the local Slavic population. But from the 14th century the balance changed again in favour of the Slavs, when Serbian refugees fleeing from territories conquered by the Ottoman army settled in the area. Most of the Hungarians left the region during the Ottoman conquest and the early period of Ottoman administration, so the population of Vojvodina in Ottoman times was predominantly Serb (Serbs comprised an absolute majority of Vojvodina at the time), with a significant presence of Muslims of various ethnic backgrounds. Ottoman rule After the defeat of the Kingdom of Hungary at Mohács by the Ottoman Empire, the region fell into a period of anarchy and civil wars. In 1526 Jovan Nenad, a leader of Serb mercenaries, established his rule in Bačka, northern Banat and a small part of Syrmia. He created an ephemeral independent state, with Subotica as its capital. At the peak of his power, Jovan Nenad proclaimed himself Serbian Emperor in Subotica. Taking advantage of the extremely confused military and political situation, the Hungarian noblemen from the region joined forces against him and defeated the Serbian troops in the summer of 1527. Emperor Jovan Nenad was assassinated and his state collapsed. After the fall of the emperor's state, the supreme military commander of Jovan Nenad's army, Radoslav Čelnik, established his own temporary state in the region of Syrmia, where he ruled as an Ottoman vassal. A few decades later, the whole region was added to the Ottoman Empire, which ruled over it until the end of the 17th and the first half of the 18th century, when it was incorporated into the Habsburg monarchy. The Treaty of Karlowitz of 1699, between the Holy League and the Ottoman Empire, marked the withdrawal of the Ottoman forces from Central Europe and the supremacy of the Habsburg monarchy in that part of the continent. 
According to the treaty, the western part of Vojvodina passed to the Habsburgs. The eastern part (eastern Syrmia and the Province of Tamışvar) remained in Ottoman hands until the Austrian conquest in 1716. This new border change was ratified by the Treaty of Passarowitz in 1718. Habsburg rule Hungarian Crown land (1699–1849) During the Great Serb Migration, Serbs from Ottoman territories settled in the Habsburg monarchy at the end of the 17th century (in 1690). Most settled in what is now Hungary, with the lesser part settling in western Vojvodina. All Serbs in the Habsburg monarchy gained the status of a recognized nation with extensive rights, in exchange for providing a border militia (in the Military Frontier) that could be mobilized against invaders from the south (such as the Ottomans), as well as in case of civil unrest in the Habsburg Kingdom of Hungary. Wallachian Right became the point of reference in the 18th century for military settlement in the lowland region. The Vlachs who settled there were actually mainly Serbs, although there were also Romanians, while Aromanians lived in the urban areas. At the beginning of Habsburg rule, most of the region was integrated into the Military Frontier, while the western parts of Bačka were put under civil administration within the County of Bač. Later, the civil administration was expanded to other (mostly northern) parts of the region, while the southern parts remained under military administration. The eastern part of this area was held again by the Ottoman Empire between 1787 and 1788, during the Russo-Turkish War. In 1716, Vienna temporarily forbade settlement by Hungarians and Jews in the area, while large numbers of German speakers were settled in the region from Bavaria and southern areas, in order to repopulate it and develop agriculture. From 1782, Protestant Hungarians and ethnic Germans settled in larger numbers. During the 1848–49 revolutions, Vojvodina was the site of a war between Serbs and Hungarians, due to the opposing national conceptions of the two peoples. At the May Assembly in Sremski Karlovci (13–15 May 1848), Serbs declared the constitution of the Serbian Voivodship (Serbian Duchy), a Serbian autonomous region within the Austrian Empire. The Serbian Voivodship consisted of Srem, Bačka, Banat, and Baranja. The head of the metropolitanate of Sremski Karlovci, Josif Rajačić, was elected patriarch, while Stevan Šupljikac was chosen as the first voivod (duke). The ethnic war erupted violently in the area, with both sides committing atrocities against the civilian populations. Austrian Crown land (1849–1860) Following the Habsburg-Russian and Serb victory over the Hungarians in 1849, a new administrative territory was created in the region in November 1849, in accordance with a decision made by the Austrian emperor. By this decision, the Serbian autonomous region created in 1848 was transformed into the new Austrian crown land known as the Voivodeship of Serbia and Banat of Temeschwar. It consisted of Banat, Bačka and Srem, excluding the southern parts of these regions, which were part of the Military Frontier and had significant Serbian populations. An Austrian governor seated in Temeschwar ruled the area, while the title of Voivod belonged to the emperor himself. The full title of the emperor was "Grand Voivod of the Voivodship of Serbia" (German: Großwoiwode der Woiwodschaft Serbien). German and Serbian were the official languages of the crown land. 
In 1860, the new province was abolished and most of it (with the exception of Syrmia) was again integrated into the Habsburg Kingdom of Hungary. Hungarian rule Hungarian Crown land (1860–1867) Vojvodina remained Austrian Crown land until 1860, when Emperor Franz Joseph decided that it would be Hungarian Crown land again. After 1867, the Kingdom of Hungary became one of two self-governing parts of Austria-Hungary, and the territory was returned to Hungarian administration. Counties in the Kingdom of Hungary (1867–1920) In 1867, a new county system was introduced. This territory was organized among the Bács-Bodrog, Torontál and Temes counties. The era following the Austro-Hungarian Compromise of 1867 was a period of economic flourishing. The Kingdom of Hungary had the second-fastest growing economy in Europe between 1867 and 1913, but ethnic relations were strained. According to the 1910 census, the last census conducted in Austria-Hungary, the population of Vojvodina included 510,754 (33.8%) Serbs; 425,672 (28.1%) Hungarians; and 324,017 (21.4%) Germans. Kingdom of Yugoslavia At the end of World War I, the Austro-Hungarian Empire collapsed. On 29 October 1918, Syrmia became a part of the State of Slovenes, Croats and Serbs. On 31 October 1918, the Banat Republic was proclaimed in Timișoara. The government of Hungary recognized its independence, but it was short-lived. On 25 November 1918, the Great People's Assembly of Serbs, Bunjevci and other Slavs in Banat, Bačka and Baranja in Novi Sad proclaimed the unification of Vojvodina (Banat, Bačka and Baranja) with the Kingdom of Serbia (the assembly numbered 757 deputies, of whom 578 were Serbs, 84 Bunjevci, 62 Slovaks, 21 Rusyns, 6 Germans, 3 Šokci, 2 Croats and 1 Hungarian). One day before this, on 24 November, the Assembly of Syrmia had also proclaimed the unification of Syrmia with Serbia. On 1 December 1918, Vojvodina (as part of the Kingdom of Serbia) officially became part of the Kingdom of Serbs, Croats and Slovenes. Between 1929 and 1941, the region was part of the Danube Banovina, a province of the Kingdom of Yugoslavia. Its capital city was Novi Sad. Apart from the core territories of Vojvodina and Baranja, it included significant parts of the Šumadija and Braničevo regions south of the Danube (but not the capital city of Belgrade). World War II and immediate aftermath Between 1941 and 1944, during World War II, Nazi Germany and its allies, Hungary and the Independent State of Croatia, occupied Vojvodina and divided it. Bačka and Baranja were annexed by Hungary, and Syrmia was included in the Independent State of Croatia. A smaller Danube Banovina (including Banat, Šumadija, and Braničevo) was designated as part of the area governed by the Military Administration in Serbia. The administrative center of this smaller province was Smederevo. Banat, however, was a separate autonomous region ruled by its ethnic German minority. The occupying powers committed numerous crimes against the civilian population, especially against Serbs, Jews and Roma; the Jewish population of Vojvodina was almost completely killed or deported. In total, Axis occupation authorities killed about 50,000 citizens of Vojvodina (mostly Serbs, Jews and Roma), while more than 280,000 people were interned, arrested, violated or tortured. Such crimes in various regions of Vojvodina were carried out by Nazi Germans, Ustaše and Hungarian Axis forces. Many historians and authors, including Raphael Lemkin, describe the Ustashe regime's mass killings as a genocide of the Serbs. 
In 1942, the Királyi Honvédség, the armed forces of Hungary, carried out the Novi Sad Raid, a military operation in the occupied and annexed former Yugoslav territories that resulted in the deaths of 3,000–4,000 civilians in the southern Bačka (Bácska) region. Under Hungarian authority, 19,573 people were killed in Bačka, the majority of them of Serb, Jewish or Romani origin. When Axis occupation ended in 1944, the region was temporarily placed under a military administration (1944–45) run by the new communist authorities. During and after the military administration, several thousand citizens were killed. The victims were mostly ethnic Germans, but Hungarians and Serbs were also killed. Both the war-time Axis occupation authorities and the post-war communist authorities ran concentration/prison camps in the territory of Vojvodina (see List of concentration and internment camps). While war-time prisoners in these camps were mostly Jews, Serbs and communists, post-war camps were formed for ethnic Germans (historically known as Danube Swabians). Most Vojvodina ethnic Germans (about 200,000) fled the region in 1944, together with the defeated German army. Most of those who remained in the region (about 150,000) were sent to villages that had been cordoned off as prison camps. It is estimated that some 48,447 Germans died in the camps from disease, hunger, malnutrition, mistreatment, and cold. Some 8,049 Germans were killed by partisans during the military administration in Vojvodina after October 1944. It has also been estimated that the post-war communist authorities killed some 15,000–20,000 Hungarians and some 23,000–24,000 Serbs during the Communist purges in Serbia in 1944–45. According to Dragoljub Živković, some 47,000 ethnic Serbs were murdered in Vojvodina between 1941 and 1948. About half were killed by the occupying Axis forces and the other half by the post-war communist authorities. The region was politically restored in 1944 (incorporating Syrmia, Banat, Bačka and Baranja) and became an autonomous province of Serbia in 1945. Instead of the previous name (Danube Banovina), the region regained its historical name of Vojvodina, while its capital city remained Novi Sad. When the final borders of Vojvodina were defined, Baranja was assigned to Croatia, while the northern part of the Mačva region was assigned to Vojvodina. Socialist Yugoslavia For decades, the province enjoyed only a small level of autonomy within Serbia. Under the 1974 Yugoslav constitution, it gained extensive rights of self-rule, as both Kosovo and Vojvodina were given de facto veto power in the Serbian and Yugoslav parliaments. Changes to their status could not be made without the consent of the two Provincial Assemblies. The 1974 Serbian constitution, adopted at the same time, reiterated that "the Socialist Republic of Serbia comprises the Socialist Autonomous Province of Vojvodina and the Socialist Autonomous Province of Kosovo, which originated in the common struggle of nations and nationalities of Yugoslavia in the National Liberation War (the Second World War) and socialist revolution". In the 1990s, during the war in Croatia, persecution of Croats in Serbia was organized as part of the Yugoslav Wars, including the expulsion of Croats from some places in Vojvodina. 
Based on an investigation by the Humanitarian Law Fund from Belgrade, in the course of June, July, and August 1992 more than 10,000 Croats from Vojvodina exchanged their property for the property of Serbs from Croatia, and altogether about 20,000 Croats left Serbia. According to other estimates, the number of Croats who left Serbia under the political pressure of Milošević's regime might be between 20,000 and 40,000. According to Petar Kuntić of the Democratic Alliance of Croats in Vojvodina, 50,000 Croats were pressured to move out of Serbia during the Yugoslav wars. Under the rule of Serbian president Slobodan Milošević, a series of protests against Vojvodina's party leadership took place during the summer and autumn of 1988, which forced it to resign. Eventually Vojvodina and Kosovo had to accept Serbia's constitutional amendments, which practically abolished the autonomy of the provinces in Serbia. Vojvodina and Kosovo lost elements of statehood in September 1990, when the new constitution of the Republic of Serbia was adopted. Vojvodina was still referred to as an autonomous province of Serbia, but most of its autonomous powers – including, crucially, its vote on the Yugoslav collective presidency – were transferred to the control of Belgrade, the capital. The province still had its own parliament and government, and some other autonomous functions as well. According to Đorđe Tomić, this is an example of a phantom border. Contemporary period The fall of Milošević in 2000 created a new political climate in Vojvodina. Following talks between the political parties, the level of the province's autonomy was somewhat increased by the omnibus law in 2002. The Vojvodina provincial assembly adopted a new statute on 15 October 2008, which, after being partially amended, was approved by the Parliament of Serbia. On 28 January 2013, in response to a proposal by the Third Serbia political organization from Novi Sad to abolish the autonomy of Vojvodina, the pro-autonomist Vojvodina's Party ran a campaign that involved putting up "Republic of Vojvodina" posters in Novi Sad. Geography Vojvodina is situated in the northern quarter of Serbia, in Central Europe, in the southeastern part of the Pannonian Plain, the plain that remained when the Pliocene Pannonian Sea dried out. As a consequence of this, Vojvodina is rich in fertile loamy loess soil, covered with a layer of chernozem. The region is divided by the Danube and Tisa rivers into: Bačka in the northwest, Banat in the east and Syrmia (Srem) in the southwest. A small part of the Mačva region is also located in Vojvodina, south of the Sava river, in the Srem District. Today, the western part of Syrmia is in Croatia, the northern part of Bačka is in Hungary, the eastern part of Banat is in Romania (with a small piece in Hungary), while Baranja (which is between the Danube and the Drava) is in Hungary and Croatia. Vojvodina has a total surface area of . Vojvodina is also part of the Danube-Criș-Mureș-Tisa euroregion. The Gudurica peak (Gudurički vrh) in the Vršac Mountains is the highest point in Vojvodina, at an altitude of 641 m above sea level. The climate of the area is moderate continental, with cold winters and hot, humid summers. The Vojvodina climate is characterized by a wide range of extreme temperatures and a very irregular monthly distribution of rainfall. Politics The Assembly of Vojvodina is the provincial legislature, composed of 120 proportionally elected members. 
The current members were elected in the 2020 provincial elections. The Government of Vojvodina is the executive administrative body, composed of a president and cabinet ministers. The current ruling coalition in the Vojvodina parliament is composed of the following political parties: the Serbian Progressive Party, the Socialist Party of Serbia and the Alliance of Vojvodina Hungarians. The current president of the provincial government is Igor Mirović (Serbian Progressive Party), while the president of the provincial Assembly is Bálint Juhász (Alliance of Vojvodina Hungarians). Administrative divisions Vojvodina is divided into seven districts. They are regional centers of state authority, but have no powers of their own; they are purely administrative divisions. The seven districts are further subdivided into 37 municipalities and the 8 cities of Kikinda, Novi Sad, Subotica, Zrenjanin, Pančevo, Sombor, Sremska Mitrovica, and Vršac. Demographics Vojvodina is more diverse than the rest of Serbia, with more than 25 ethnic groups and six languages in official use by the provincial administration. Population by ethnicity: Population by mother tongue: Population by religion: Largest cities Culture There are two daily newspapers published in Vojvodina, Dnevnik in Serbian and Magyar Szó in Hungarian. Monthly and weekly publications in minority languages include Hrvatska riječ ("Croatian Word") in Croatian, Hlas Ľudu ("The Voice of the People") in Slovak, Libertatea ("Freedom") in Romanian, and Руске слово ("Rusyn Word") in Rusyn. There is also Bunjevačke novine ("The Bunjevac newspaper") in Bunjevac. The Public Broadcasting Service of Vojvodina was founded in 1974 as Radio Television of Novi Sad, an equal member of JRT, the Yugoslav Radio Television association. Radio Novi Sad's first broadcast was on 29 November 1949. During the NATO bombing in the spring of 1999, the 20,000-square-metre RT Novi Sad building was completely destroyed, along with its basic production and technical premises. The Venac terrestrial broadcasting site was heavily damaged. The Radio-Television of Vojvodina produces and broadcasts regional programming on two channels, RTV1 (Serbian language) and RTV2 (minority languages), and three radio frequencies: Radio Novi Sad 1 (Serbian), Radio Novi Sad 2 (Hungarian), Radio Novi Sad 3 (other minority communities). Economy The economy of Vojvodina is largely based on a developed food industry and fertile agricultural soil. Agriculture is a priority sector in Vojvodina. Traditionally, it has been a significant part of the local economy and a generator of positive results, owing to the abundance of fertile agricultural land, which makes up 84% of the province's territory. Agribusiness accounts for 40% of total industrial production and 30% of Vojvodina's total exports. The metal industry of Vojvodina has a long tradition and consists of smaller metal processing companies for component manufacturing and, to a lesser extent, of original equipment manufacturers (OEMs) with their own brand-name products. The Vojvodina Metal Cluster gathers 116 companies with 6,300 employees. Other branches of industry are also developed, such as the chemical, electrical, oil and construction industries. In the past decade, the ICT sector has grown rapidly and has taken a significant role in Vojvodina's economic development; high tech is one of the province's fastest-growing sectors. 
Software development represents the main source of revenue, particularly the development of ERP solutions, Java applications and mobile applications. IT companies mainly deal with software outsourcing based on the demands of international clients, or with the development of their own software products for the domestic and international markets. Vojvodina pays particular attention to interregional and cross-border economic cooperation, as well as to the implementation of priorities defined within the EU Strategy for the Danube Region. Some of the companies from Vojvodina: Naftna Industrija Srbije Srbijagas HIP Petrohemija Apatin Brewery Novosadski sajam Vojvodina promotes its investment potential through the Vojvodina Investment Promotion (VIP) agency, which was founded by the Parliament of the Autonomous Province of Vojvodina. Transport Many important roads pass through Vojvodina. Foremost among them is the A1 motorway, which runs from Central Europe and the Horgoš border crossing with Hungary, via Novi Sad, to Belgrade and further southeast toward Niš, where it branches: one way leads east to the border with Bulgaria, the other south towards Greece. The A3 motorway in Srem branches off to the west, towards neighboring Croatia and further on to Western Europe. There is also a network of regional and local roads and railway lines. The largest rivers in Vojvodina are navigable: the Danube, with a length of 588 kilometers, and its tributaries the Tisa (168 km), Sava (206 km) and Bega (75 km). Between them an extensive network of canals for irrigation, drainage and transport has been dug, with a total length of , of which navigable. Tourism Tourist destinations in Vojvodina include the well-known Orthodox monasteries on the Fruška Gora mountain, numerous hunting grounds, cultural-historical monuments, diverse folklore, interesting galleries and museums, plain landscapes with a lot of greenery, big rivers, canals and lakes, the sandy terrain of Deliblatska Peščara ("the European Sahara"), etc. In the last few years, Exit has been a popular music festival. Media Oko sveta, television news magazine Radio Television of Vojvodina, regional public broadcaster See also Vojvodina Autonomist Movement Rinflajš References External links Government of the Autonomous Province of Vojvodina Assembly of the Autonomous Province of Vojvodina Provincial Secretariat for Regional and International Cooperation Tourism organization of Vojvodina Useful information about Vojvodina, Parks of Nature, River Expedition, Wine Trails, Cities, Etno, Adventure and more Interactive map of Novi Sad Atlas of Vojvodina (Wikimedia Commons) Statistical information about municipalities of Vojvodina List of largest cities of Vojvodina The encyclopedia of Vojvodina Official symbols of AP Vojvodina Autonomous provinces of Serbia Statistical regions of Serbia States and territories established in 1944 Autonomous provinces Historical regions in Serbia Autonomous regions Countries and territories where Serbian is an official language Countries and territories where Romanian is an official language Countries and territories where Hungarian is an official language Countries and territories where Croatian is an official language Regions of Europe with multiple official languages 1944 establishments in Yugoslavia Rusyn communities
Vojvodina
[ "Mathematics" ]
5,905
[ "Statistical regions of Serbia", "Statistical concepts", "Statistical regions" ]
176,813
https://en.wikipedia.org/wiki/Physical%20geodesy
Physical geodesy is the study of the physical properties of Earth's gravity and its potential field (the geopotential), with a view to their application in geodesy. Measurement procedure Traditional geodetic instruments such as theodolites rely on the gravity field for orienting their vertical axis along the local plumb line or local vertical direction with the aid of a spirit level. After that, vertical angles (zenith angles or, alternatively, elevation angles) are obtained with respect to this local vertical, and horizontal angles in the plane of the local horizon, perpendicular to the vertical. Levelling instruments, in turn, are used to obtain geopotential differences between points on the Earth's surface. These can then be expressed as "height" differences by conversion to metric units. Units Gravity is commonly measured in units of m·s−2 (metres per second squared), which can equivalently be expressed as newtons per kilogram of attracted mass. Potential is expressed as gravity times distance, m2·s−2; equivalently, as joules per kilogram of attracted mass. Travelling one metre in the direction of a gravity vector of strength 1 m·s−2 will increase your potential by 1 m2·s−2. A more convenient unit is the GPU, or geopotential unit: it equals 10 m2·s−2. This means that travelling one metre in the vertical direction, i.e., the direction of the 9.8 m·s−2 ambient gravity, will approximately change your potential by 1 GPU. It also means that the difference in geopotential between a point and sea level, expressed in GPU, can be used as a rough measure of that point's height "above sea level" in metres. Gravity Potential fields Geoid Due to the irregularity of the Earth's true gravity field, the equilibrium figure of sea water, or the geoid, will also be of irregular form. In some places, like west of Ireland, the geoid (mathematical mean sea level) sticks out as much as 100 m above the regular, rotationally symmetric reference ellipsoid of GRS80; in other places, like close to Sri Lanka, it dives under the ellipsoid by nearly the same amount. The separation between the geoid and the reference ellipsoid is called the undulation of the geoid, symbol N. The geoid, or mathematical mean sea surface, is defined not only on the seas, but also under land; it is the equilibrium water surface that would result if sea water were allowed to move freely (e.g., through tunnels) under the land. Technically, it is an equipotential surface of the true geopotential, chosen to coincide (on average) with mean sea level. As mean sea level is physically realized by tide gauge bench marks on the coasts of different countries and continents, a number of slightly incompatible "near-geoids" will result, with differences of several decimetres to over one metre between them, due to the dynamic sea surface topography. These are referred to as vertical datums or height datums. For every point on Earth, the local direction of gravity or vertical direction, materialized by the plumb line, is perpendicular to the geoid (see astrogeodetic leveling). Gravity anomalies Above we already made use of gravity anomalies Δg. These are computed as the differences between true (observed) gravity g and calculated (normal) gravity γ. (This is an oversimplification; in practice the location in space at which γ is evaluated will differ slightly from that where g has been measured.) We thus get Δg = g − γ. These anomalies are called free-air anomalies, and are the ones to be used in the Stokes equation. In geophysics, these anomalies are often further reduced by removing from them the attraction of the topography, which for a flat, horizontal plate (Bouguer plate) of thickness H is given by a_B = 2πGρH, where G is the gravitational constant and ρ the density of the plate material. The Bouguer reduction is applied as follows: Δg_B = Δg_FA − a_B, giving the so-called Bouguer anomalies. Here, Δg_FA is our earlier Δg, the free-air anomaly. In case the terrain is not a flat plate (the usual case!) we use for H the local terrain height value but apply a further correction called the terrain correction. 
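To make the reduction chain concrete, here is a minimal Python sketch of the free-air and Bouguer computations described above. It is an illustration, not a survey-grade implementation: the 1980 international gravity formula coefficients, the 0.3086 mGal/m free-air gradient, and the 2670 kg/m³ plate density are conventional textbook values assumed here, not figures taken from this article.

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
RHO = 2670.0    # conventional crustal density for the Bouguer plate, kg/m^3

def normal_gravity(lat_deg):
    """Approximate normal gravity on the ellipsoid (1980 international formula), m/s^2."""
    phi = math.radians(lat_deg)
    return 9.780327 * (1 + 0.0053024 * math.sin(phi) ** 2
                         - 0.0000058 * math.sin(2 * phi) ** 2)

def free_air_anomaly(g_obs, lat_deg, h):
    """Free-air anomaly: observed g minus normal gravity continued up to station height h (m)."""
    gamma_h = normal_gravity(lat_deg) - 0.3086e-5 * h  # free-air gradient ~0.3086 mGal per metre
    return g_obs - gamma_h

def bouguer_anomaly(g_obs, lat_deg, h):
    """Bouguer anomaly: free-air anomaly minus the plate attraction a_B = 2*pi*G*rho*H."""
    a_b = 2.0 * math.pi * G * RHO * h
    return free_air_anomaly(g_obs, lat_deg, h) - a_b

# Example: a station at 45 degrees latitude, 300 m elevation, observed g = 9.805 m/s^2.
# Multiplying by 1e5 expresses the result in mGal, the customary unit (1 mGal = 1e-5 m/s^2).
print(free_air_anomaly(9.805, 45.0, 300.0) * 1e5)  # free-air anomaly in mGal
print(bouguer_anomaly(9.805, 45.0, 300.0) * 1e5)   # Bouguer anomaly in mGal
```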
See also Deflection of the vertical Dynamic height Friedrich Robert Helmert Geophysics Gravity of Earth Gravimetry LAGEOS Mikhail Molodenskii Normal height Orthometric height Satellite geodesy References Further reading B. Hofmann-Wellenhof and H. Moritz, Physical Geodesy, Springer-Verlag Wien, 2005. (This text is an updated edition of the 1967 classic by W.A. Heiskanen and H. Moritz). Geodesy Gravity Geophysics Gravimetry
Physical geodesy
[ "Physics", "Mathematics" ]
1,010
[ "Applied mathematics", "Applied and interdisciplinary physics", "Geodesy", "Geophysics" ]
176,827
https://en.wikipedia.org/wiki/White%20Christmas%20%28weather%29
A white Christmas is a Christmas with the presence of snow, either on Christmas Eve or on Christmas Day, depending on local tradition. The phenomenon is most common in the northern countries of the Northern Hemisphere. December falls at the beginning of the Southern Hemisphere summer, so white Christmases there are extremely rare, the exceptions being Antarctica, the Southern Alps of New Zealand's South Island, and parts of the Andes in South America. History The notion of a "white Christmas" was popularized by the writings of Charles Dickens. The depiction of a snow-covered Christmas season found in The Pickwick Papers (1836), A Christmas Carol (1843), and his short stories was apparently influenced by memories of his childhood, which coincided with the coldest decade in England in more than a century. The song "White Christmas", written by Irving Berlin, sung by Bing Crosby (with the Ken Darby Singers and John Scott Trotter and his orchestra) and featured in the 1942 Paramount Pictures film Holiday Inn, is the best-selling single of all time; it speaks nostalgically of a traditional white Christmas and has since become a seasonal standard. Definition The criteria for a "white Christmas" vary. In most countries, it simply means that the ground is covered by snow at Christmas, but some countries have stricter definitions. In the United States, the official definition of a white Christmas is that there has to be a snow depth of at least on the ground on 25 December in the contiguous United States, and in Canada the official definition is that there has to be more than on the ground on Christmas Day. In the United Kingdom, although for many a white Christmas simply means a complete covering of snow on Christmas Day, the official definition by the British Met Office and British bookmakers is for snow to be observed falling, however little (even if it melts before it reaches the ground), in the 24 hours of 25 December. Consequently, according to the Met Office and British bookmakers, even of snow on the ground at Christmas, because of a heavy snowfall a few days before, will not constitute a white Christmas, but a few snowflakes mixed with rain will, even if they never reach the ground. In the United Kingdom, the most likely places to see snowfall on Christmas Day are northern and north-eastern Scotland: Aberdeen, Aberdeenshire and the Highlands. Although the term white Christmas usually refers to snow, if a significant hail accumulation occurs in an area on Christmas Day, as happened in parts of Melbourne on 25 December 2011, this can also be described as a white Christmas, due to the resulting white appearance of the landscape resembling snow cover. By country Croatia In recent decades, white Christmases in Zagreb have been observed in 1996, 1998, 1999, 2001 and, most recently as of 2018, in 2007. The probability of a white Christmas in Zagreb has been estimated at 25%. In continental Croatia, the estimated probability of a white Christmas is highest in Gorski Kotar and Lika (50-70%), followed by northwestern Croatia (40-60%) and Slavonia (20-40%). Climate projections suggest white Christmases in Croatia might become rarer in the future. Ireland In Ireland, the prospect of early winter snow is always remote due to the country's mild and wet climate (snowfall is most common in January and February). 
Bookmakers offer odds every year for a white Christmas, defined officially as lying snow recorded at 09:00 local time on Christmas Day at either Dublin Airport or Cork Airport (bets are offered for each airport). Since 1961, countrywide, snow has fallen on 17 Christmas Days (1961, 1962, 1964, 1966, 1970, 1980, 1984, 1990, 1993, 1995, 1998, 1999, 2000, 2001, 2004, 2009 and 2010), with nine of these having snow lying on the ground at 09:00 (1964, 1970, 1980, 1993, 1995, 1999, 2004, 2009 and 2010). The maximum amount of lying snow ever recorded on Christmas Day was at Casement Aerodrome in 2010. At Dublin Airport, there have been 12 Christmas Days with snowfall since 1941 (1950, 1956, 1962, 1964, 1970, 1984, 1990, 1993, 1995, 1999, 2000 and 2004). The statistical likelihood of snow falling on Christmas Day at Dublin Airport is approximately once every 5.9 years. However, the only Christmas Day at the airport ever to have lying snow at 09:00 was 2010 (although no snow actually fell that day), with recorded. North America Canada Most parts of Canada are likely to have a white Christmas in most years, except for the coast and southern interior valleys of British Columbia, southern Alberta, southern Ontario, and parts of Atlantic Canada – in those places, Christmas without snow is not uncommon in warmer years, with the British Columbia coast the least likely place to have a white Christmas. The definition of a white Christmas in Canada is of snow-cover or more on Christmas morning at 7 am. Environment Canada analyzed data from 1955 to 2017, a total of 63 years, showing the chance of a white Christmas for several Canadian cities. The year 2006 saw some of the warmest weather on record, with such places as Quebec City experiencing their first green Christmas in recorded history. In 2008, Canada experienced the first nationwide white Christmas in 37 years, as a series of pre-Christmas storms hit the nation, including the normally rainy BC Pacific coast. In 2015, the weather was mild in the east, with record-setting warm temperatures in Toronto and southern Ontario making it a green Christmas there, while the west saw cold and snowy weather. United States In the United States, there is often, but not always, snow on the ground at Christmas in the northern states, except in the Pacific Northwest, with Alaska the most likely to see snow on the ground at Christmas. In the contiguous United States, the highest probabilities are in the Upper Midwest and parts of northern New England, along with the higher elevations of the Rockies. Some of the least likely white Christmases on record include the 2004 Christmas Eve snowstorm, which brought the first white Christmas in 50 years to New Orleans. The 2004 storm also brought the first measurable snow of any kind since 1895 to Brownsville, Texas, and its twin city of Matamoros, Mexico. The Florida winter storm of 1989 also occurred immediately before Christmas, causing a white Christmas for cities like Pensacola and Jacksonville. The same storm buried Wilmington, North Carolina, and the rest of southeastern North Carolina under of snow. In the United States, the notion of a white Christmas is often associated in the American popular consciousness with a Christmas celebration that includes traditional observances in the company of friends and family. "White Christmas" is an Irving Berlin song reminiscing about an old-fashioned Christmas setting. 
According to research by the CDIAC, the United States during the second half of the 20th century experienced declining frequencies of white Christmases, especially in the northeastern region. The National Climatic Data Center based the probability of a white Christmas ( of snow on the ground) at selected cities upon the 1981–2010 numbers from stations with at least 25 years of data. Tables for North America Poland The last white Christmas in Kraków was in 2010. Rzeszów had a white Christmas in 2016, the first since 2011. Romania White Christmases in Romania have been rare in recent times, and this is likely to continue due to the geographic position of Romania and climate change. In recent years, blizzards and snowfalls have usually started in January and ended at the beginning of March. However, at high altitudes, white Christmases are common. United Kingdom In the United Kingdom, white Christmases were more common from the 1550s to the 1850s, during the Little Ice Age; the last frost fair on the River Thames, however, was in the winter of 1813–14. The shift from the Julian to the Gregorian calendar in 1752 also slightly reduced the chance of a white Christmas, by moving Christmas earlier in the winter. Before 2006, for betting purposes, a Met Office employee was required to record whether any snow fell on the London Weather Centre over the 24 hours of Christmas Day; after 2006, computers were used. An "official" white Christmas is defined by the Met Office as "one snowflake to be observed falling in the 24 hours of 25 December somewhere in the UK", but formerly the snow had to be observed at the Met Office building in London. By the newer measure, over half of all years have white Christmases, with snow observed 38 times in 54 years. A more "traditional" idea of snow-covered ground is less common, however: on only 4 occasions in 51 years was snow on the ground at 9 am reported at more than 40% of weather stations. Although most places in the UK do tend to see some snow in the winter, it generally falls in January and February. However, white Christmases do occur, on average every six years. Christmas 2009 was a white Christmas in some parts of Britain, with thick lying snow which easterly winds had brought over the previous week. Travel over much of Britain was badly affected by ice and snow on roads, and was made more slippery by a partial daytime thaw followed by overnight refreezing. It was the first white Christmas anywhere in the United Kingdom since 2004. There was another white Christmas in 2010, which was also the coldest Christmas Day ever recorded in the United Kingdom. In 2014, parts of the Northern Isles had a white Christmas, and in 2017 northern England and southern Scotland had one as well. Other parts of Europe In Europe, snow at Christmas is common in Norway, Sweden, Finland, the Baltic states, Russia, Slovakia, Ukraine, Belarus, and northeastern Poland. In general, due to the influence of the warm Gulf Stream on European climate, the chances of a white Christmas are lower farther west. For example, in southern France a white Christmas is rare, while in Bucharest, Romania, which is at a similar latitude, it is much more likely. Northern Italy and the mountain regions of central-southern Italy may also have a white Christmas. In cities such as Turin, Milan or Bologna, a Christmas with falling snow or snow on the ground is not a rarity. White Christmases are also common in the Carpathian Mountains, as well as the Alps. 
Southern Hemisphere Because Christmas occurs during the summer in the Southern Hemisphere, white Christmases are especially rare events there, apart from Antarctica, which is generally uninhabited. A white Christmas elsewhere in the Southern Hemisphere is approximately equivalent to having snow in the Northern Hemisphere on 25 June. Some places, like Ushuaia, Argentina, and Stanley, Falkland Islands, have received measurable snowfall on Christmas Day on numerous occasions. In 2006, a snowstorm hit the Snowy Mountains in New South Wales and Victoria, Australia, arriving on Christmas morning and bringing nearly of snow in higher areas. In New Zealand's Southern Alps, snow can fall on any day of the year, and a white Christmas is possible. The same situation can be seen in the Andes at elevations above , with some locations on the Bolivian altiplano, such as El Alto, having the theoretical possibility of a white Christmas. References External links Weather lore Christmas Snow in culture
White Christmas (weather)
[ "Physics" ]
2,277
[ "Weather", "Physical phenomena", "Weather lore" ]
176,828
https://en.wikipedia.org/wiki/Gravity%20Research%20Foundation
The Gravity Research Foundation is an organization established in 1948 by businessman Roger Babson (founder of Babson College) to find ways to implement gravitational shielding. Over time, the foundation turned away from trying to block gravity and began trying to understand it. It holds an annual contest rewarding essays by scientific researchers on gravity-related topics. The contest, which awards prizes of up to $4,000, has been won by at least six people who later won the Nobel Prize in Physics. The foundation held conferences and conducted operations in New Boston, New Hampshire, through the late 1960s, but that aspect of its operation ended following Babson's death in 1967. It is mentioned on stone monuments, donated by Babson, at more than a dozen American universities. History Thomas Edison apparently suggested the creation of the Gravity Research Foundation to Babson, who established it in several scattered buildings in the small town of New Boston, New Hampshire. Babson said he chose that location because he thought it was far enough from big cities to survive a nuclear war. Babson wanted to put up a sign declaring New Boston to be the safest place in North America if World War III came, but town fathers toned it down to say merely that New Boston was a safe place. In an essay titled Gravity – Our Enemy Number One, Babson indicated that his wish to overcome gravity dated from the childhood drowning of his sister. "She was unable to fight gravity, which came up and seized her like a dragon and brought her to the bottom", he wrote. The foundation held occasional conferences that drew such people as Clarence Birdseye of frozen-food fame and Igor Sikorsky, inventor of the helicopter. Sometimes, attendees sat in chairs with their feet higher than their heads, to counterbalance gravity. Most of the foundation's work, however, involved sponsoring essays by researchers on gravity-related topics. It had only a couple of employees in New Boston. The physical Gravity Research Foundation disappeared some time after Babson's death in 1967. Its only remnant in New Boston is a granite slab in a traffic island that celebrates the foundation's "active research for antigravity and a partial gravity insulator". The building that held the foundation's meetings has long housed a restaurant, and for a time had a bar called Gravity Tavern, since renamed. The essay award lives on, offering prizes of up to $4,000. As of 2020, it is still administered out of Wellesley, Massachusetts, by George Rideout, Jr., son of the foundation's original director. Over time, the foundation shed its crankish air, turning its attention from trying to block gravity to trying to understand it. The annual essay prize has drawn respected researchers, including physicist Stephen Hawking, who won in 1971, mathematician and author Roger Penrose (Nobel Prize in Physics, 2020), who won in 1975, and astrophysicist and Nobel laureate George Smoot, who won in 1993. Other notable award winners include Jacob Bekenstein, Sidney Coleman, Bryce DeWitt, Julian Schwinger (Nobel Prize in Physics, 1965), Martin Perl (Nobel Prize in Physics, 1995), Demetrios Christodoulou, Dennis Sciama, Gerard 't Hooft (Nobel Prize in Physics, 1999), Arthur E. Fischer, Jerrold E. Marsden, Robert Wald, John Archibald Wheeler and Frank Wilczek (Nobel Prize in Physics, 2004). Monuments In the 1960s, Babson gave grants to a number of colleges that were accompanied by stone monuments. 
The monuments are inscribed with a variety of similar sayings, such as "It is to remind students of the blessings forthcoming when a semi-insulator is discovered in order to harness gravity as a free power and reduce airplane accidents" and "It is to remind students of the blessings forthcoming when science determines what gravity is, how it works, and how it may be controlled." Colleges that received monuments include: Bethune-Cookman College in Daytona Beach, Florida Colby College in Waterville, Maine Eastern Baptist College in St. Davids, Pennsylvania Eastern Nazarene College in Quincy, Massachusetts Emory University in Atlanta, Georgia Gordon College in Wenham, Massachusetts Hobart and William Smith Colleges in Geneva, New York Keene State College in Keene, New Hampshire Middlebury College in Middlebury, Vermont Trinity College in Hartford, Connecticut Tufts University in Medford, Massachusetts Tuskegee Institute in Tuskegee, Alabama University of Tampa in Tampa, Florida Wheaton College in Wheaton, Illinois. Hobart College's "H-Book" contains a description of the circumstances surrounding the placement of its Gravity Monument: "The location of the stone on campus was linked to a gift to the Colleges of 'gravity grant' stocks, now totaling more than $1 million, from Roger Babson, the founder of Babson College. The eccentric Babson was intrigued by the notion of anti-gravity and inclined to further scientific research in this area. The Colleges used these funds to help construct Rosenberg Hall in 1994. Two trees that shade the stone are said to be direct descendants of Newton’s famous apple tree." The stone at Colby College was once in front of the Keyes Building on the main academic quadrangle but was moved to a more obscure location near the Schair-Swenson-Watson Alumni Center. Students would often knock it over in an ironic testament to gravity's power. At Tufts, the monument is the site of an "inauguration ceremony" for students who receive PhDs in cosmology, in which a thesis advisor drops an apple on the student's head. See also Louis Witten (theoretical physicist associated with the foundation) References External links (archive 2013-05-29) Article about Gravity Research Foundation on the New Boston Historical Society website Babson, Roger W. (1950). Chapter XXXV – Playing with Gravity, Actions and Reactions [Second Revised Edition]. New York: Harper & Brothers Publishers Page over to Chapter XXXV for Roger W. Babson's description of the Gravity Research Foundation. Links about monument stones The story of the Emory Gravity Monument Colby's Gravity Monument Article about Colby College monument Scientific research foundations in the United States Anti-gravity Scientific organizations established in 1948 1948 establishments in the United States
Gravity Research Foundation
[ "Astronomy" ]
1,278
[ "Astronomical hypotheses", "Anti-gravity" ]
176,855
https://en.wikipedia.org/wiki/Psychological%20testing
Psychological testing refers to the administration of psychological tests. Psychological tests are administered or scored by trained evaluators. A person's responses are evaluated according to carefully prescribed guidelines. Scores are thought to reflect individual or group differences in the construct the test purports to measure. The science behind psychological testing is psychometrics. Psychological tests According to Anastasi and Urbina, psychological tests involve observations made on a "carefully chosen sample [emphasis authors] of an individual's behavior." A psychological test is often designed to measure unobserved constructs, also known as latent variables. Psychological tests can include a series of tasks, problems to solve, and characteristics (e.g., behaviors, symptoms) whose presence the respondent affirms or denies to varying degrees. Psychological tests can include questionnaires and interviews. Questionnaire- and interview-based scales typically differ from psychoeducational tests, which ask for a respondent's maximum performance; questionnaire- and interview-based scales, by contrast, ask for the respondent's typical behavior. Symptom and attitude tests are more often called scales. A useful psychological test or scale must be both valid, i.e., show evidence that it measures what it is purported to measure, and reliable, i.e., show evidence of consistency across items, across raters, and over time. It is important that people who are equal on the measured construct (e.g., mathematics ability, depression) have an approximately equal probability of answering a test item accurately or acknowledging the presence of a symptom. An example of an item on a mathematics test that might be used in the United Kingdom but not the United States could be the following: "In a football match two players get a red card; how many players are left on the pitch?" This item requires knowledge of football (soccer) to be answered correctly, not just mathematical ability. Thus, group membership can influence the probability of correctly answering items, as encapsulated in the concept of differential item functioning. Often tests are constructed for a specific population, and the nature of that population should be taken into account when administering tests outside that population. A test should be invariant between relevant subgroups (e.g., demographic groups) within a larger population. For example, for a test to be used in the United Kingdom, the test and its items should have approximately the same meaning for British males and females. That invariance does not necessarily apply to similar groups in another population, such as males and females in the United States, or between populations, for example the populations of the UK and the US. In test construction, it is important to establish invariance at least for the subgroups of the population of interest. Psychological assessment is similar to psychological testing but usually involves a more comprehensive assessment of the individual. According to the American Psychological Association, psychological assessment involves the collection and integration of data for the purpose of evaluating an individual's "behavior, abilities, and other characteristics." Each assessment is a process that involves integrating information from multiple sources, such as personality inventories, ability tests, symptom scales, interest inventories, and attitude scales, as well as information from personal interviews. 
Collateral information can also be collected from occupational records or medical histories; information can also be obtained from parents, spouses, teachers, friends, or past therapists or physicians. One or more psychological tests are sources of information used within the process of assessment. Many psychologists conduct assessments when providing services. Psychological assessment is a complex, detailed, in-depth process. Examples of assessments include providing a diagnosis, identifying a learning disability in schoolchildren, determining if a defendant is mentally competent, and selecting job applicants. History The first large-scale tests may have been part of the imperial examination system in China. The tests, an early form of psychological testing, assessed candidates based on their proficiency in topics such as civil law and fiscal policies. Early tests of intelligence were made for entertainment rather than analysis. Modern mental testing began in France in the 19th century. It contributed to identifying individuals with intellectual disabilities for the purpose of humanely providing them with an alternative form of education. Englishman Francis Galton coined the terms psychometrics and eugenics. He developed a method for measuring intelligence based on nonverbal sensory-motor tests. The test was initially popular but was abandoned. In 1905 French psychologists Alfred Binet and Théodore Simon published the Échelle métrique de l'Intelligence (Metric Scale of Intelligence), known in English-speaking countries as the Binet–Simon test. The test focused heavily on verbal ability. Binet and Simon intended that the test be used to aid in identifying schoolchildren who were intellectually challenged, which in turn would pave the way for providing the children with professional help. The Binet-Simon test became the foundation for the later-developed Stanford–Binet Intelligence Scales. The origins of personality testing date back to the 18th and 19th centuries, when phrenology was the basis for assessing personality characteristics. Phrenology, a pseudoscience, involved assessing personality by way of skull measurement. Early pseudoscientific techniques eventually gave way to empirical methods. One of the earliest modern personality tests was the Woodworth Personal Data Sheet, a self-report inventory developed during World War I to be used by the United States Army for the purpose of screening potential soldiers for mental health problems and identifying victims of shell shock (the instrument was completed too late to be used for the purposes it was designed for). The Woodworth Inventory, however, became the forerunner of many later personality tests and scales. Principles The development of a psychological test requires careful research. Some of the elements of test development involve the following: Standardization - All procedures and steps must be conducted with consistency from one testing site/testing occasion to another. Examiner subjectivity is minimized (see objectivity next). Major standardized tests are normed on large try-out samples in order to understand what constitutes high, low, and intermediate scores. Objectivity - Scoring such that subjective judgments and biases are minimized; scores are obtained in a similar manner for every test taker (see below). 
Discrimination - Scores on a test should discriminate between members of extreme groups; for example, each subscale of the original MMPI distinguished hospitalized patients suffering from mental illness from members of a well comparison group. Test Norms - Part of the standardization of large-scale tests (see above). Norms help psychologists learn about individual differences. For example, a normed personality scale can help psychologists understand how some people are high in negative affectivity (NA) and others are low or intermediate in NA. With many psychoeducational tests, test norms allow educators and psychologists to obtain an age- or grade-referenced percentile rank, for example, in reading achievement. Reliability - Refers to test or scale consistency. It is important that individuals score about the same if they take a test and an alternate form of the test, or if they take the same test twice within a short time window. Reliability also refers to response consistency from test item to test item. Validity - Refers to evidence that demonstrates that a test or scale measures what it is purported to measure. Sample of behavior The term sample of behavior refers to an individual's performance on tasks that have usually been prescribed beforehand. For example, a spelling test for middle school students cannot include all the words in the vocabularies of middle schoolers, because there are thousands of words in their lexicon; a middle school spelling test must include only a sample of words from their vocabulary. The samples of behavior must be reasonably representative of the behavior in question. The samples of behavior that make up a paper-and-pencil test, the most common type of psychological test, are written into the test items. Total performance on the items produces a test score. A score on a well-constructed test is believed to reflect a psychological construct such as achievement in a school subject like vocabulary or mathematics knowledge, cognitive ability, dimensions of personality such as introversion/extraversion, etc. Differences in test scores are thought to reflect individual differences in the construct the test is purported to measure. Types There are several broad categories of psychological tests: Achievement tests Achievement tests assess an individual's knowledge in a subject domain. Some academic achievement tests are designed to be administered by a trained evaluator. By contrast, group achievement tests are often administered by a teacher. A score on an achievement test is believed to reflect the individual's knowledge of a subject area. There are generally two types of achievement tests: norm-referenced and criterion-referenced tests. Most achievement tests are norm-referenced. The individual's responses are scored according to standardized protocols, and the results can be compared to the results of a norming group. Norm-referenced tests can be used to highlight individual differences, that is to say, to compare each test-taker to every other test-taker. By contrast, the purpose of criterion-referenced achievement tests is to ascertain whether the test-taker mastered a predetermined body of knowledge, rather than to compare the test-taker to everyone else who took the test. These types of tests are often a component of a mastery-based classroom. The Kaufman Test of Educational Achievement is an example of an individually administered achievement test for students. 
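To illustrate the norm-referencing just described, here is a minimal Python sketch that rescales a raw score against a norming group, yielding a T-score (norm-group mean set to 50, standard deviation to 10, the same convention the MMPI example in the Clinical tests section below uses) and a normal-curve percentile rank. The norm-group mean and standard deviation in the example are invented for illustration.

```python
from statistics import NormalDist

def t_score(raw, norm_mean, norm_sd):
    """Rescale a raw score so the norming group has mean 50 and SD 10."""
    return 50 + 10 * (raw - norm_mean) / norm_sd

def percentile_rank(raw, norm_mean, norm_sd):
    """Share of the norming group expected to score at or below `raw`,
    assuming scores are approximately normally distributed."""
    z = (raw - norm_mean) / norm_sd
    return 100 * NormalDist().cdf(z)

# A raw score of 32 against a (hypothetical) norm group with mean 25, SD 7:
print(t_score(32, 25, 7))          # 60.0 -> one standard deviation above the mean
print(percentile_rank(32, 25, 7))  # ~84 -> roughly the 84th percentile
```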
Aptitude tests Psychological tests have been designed to measure abilities, both specific (e.g., clerical skill like the Minnesota Clerical Test) and general abilities (e.g., traditional IQ tests such as the Stanford-Binet or the Wechsler Adult Intelligence Scale). A widely used but brief aptitude test in business is the Wonderlic Test. Aptitude tests have been used in assessing specific abilities or the general ability of potential new employees (the Wonderlic was once used by the NFL). Aptitude tests have also been used for career guidance. Evidence suggests that aptitude tests like IQ tests are sensitive to past learning and are not pure measures of untutored ability. The SAT, which used to be called the Scholastic Aptitude Test, had its name changed because performance on the test is sensitive to training. Attitude scales An attitude scale assesses an individual's disposition regarding an event (e.g., a Supreme Court decision), person (e.g., a governor), concept (e.g., wearing face masks during a pandemic), organization (e.g., the Boy Scouts), or object (e.g., nuclear weapons) on a unidimensional favorable-unfavorable attitude continuum. Attitude scales are used in marketing to determine individuals' preferences for brands. Historically, social psychologists have developed attitude scales to assess individuals' attitudes toward the United Nations and race relations. Typically, Likert scales are used in attitude research. Historically, the Thurstone scale was used prior to the development of the Likert scale. The Likert scale has largely supplanted the Thurstone scale. Biographical Information Blank The Biographical Information Blank (BIB) is a paper-and-pencil form that includes items that ask about detailed personal and work history. It is used to aid in the hiring of employees by matching the backgrounds of individuals to the requirements of the job. Clinical tests The purpose of clinical tests is to assess the presence of symptoms of psychopathology. Examples of clinical assessments include the Minnesota Multiphasic Personality Inventory (MMPI), Millon Clinical Multiaxial Inventory-IV, Child Behavior Checklist, Symptom Checklist 90, and the Beck Depression Inventory. Many large-scale clinical tests are normed. For example, scores on the MMPI are rescaled such that 50 represents the mean score on the MMPI Depression scale and 60 is a score that places the individual one standard deviation above the mean for depressive symptoms; 40 represents a symptom level that is one standard deviation below the mean. Criterion-referenced A criterion-referenced test is an achievement test in a specific knowledge domain. An individual's performance on the test is compared to a criterion. Test-takers are not compared to each other. A passing score, i.e., the criterion performance, is established by the teacher or an educational institution. Criterion-referenced tests are part and parcel of mastery-based education. Direct observation Psychological assessment can involve the observation of people as they engage in activities. This type of assessment is usually conducted with families in a laboratory or at home. Sometimes the observation can involve children in a classroom or the schoolyard. The purpose may be clinical, such as to establish a pre-intervention baseline of a child's hyperactive or aggressive classroom behaviors or to observe the nature of parent-child interaction in order to understand a relational disorder. Time sampling methods are also part of direct observational research. 
The reliability of observers in direct observational research can be evaluated using Cohen's kappa. The Parent-Child Interaction Assessment-II (PCIA) is an example of a direct observation procedure that is used with school-age children and parents. The parents and children are video recorded playing at a make-believe zoo. The Parent-Child Early Relational Assessment is used to study parents and young children and involves a feeding and a puzzle task. The MacArthur Story Stem Battery (MSSB) is used to elicit narratives from children. The Dyadic Parent-Child Interaction Coding System-II tracks the extent to which children follow the commands of parents and vice versa and is well suited to the study of children with Oppositional Defiant Disorder and their parents. Interest inventories Psychological tests include interest inventories. These tests are used primarily for career counseling. Interest inventories include items that ask about the preferred activities and interests of people seeking career counseling. The rationale is that if the individual's activities and interests are similar to the modal pattern of activities and interests of people who are successful in a given occupation, then the chances are high that the individual would find satisfaction in that occupation. A widely used instrument is the Strong Interest Inventory, which is used in career assessment, career counseling, and educational guidance. Neuropsychological tests Neuropsychological tests are designed to assess behaviors that are linked to brain structure and function. An examiner, following strict pre-set procedures, administers the test to a single person in a quiet room largely free of distractions. An example of a widely used neuropsychological test is the Stroop test. Norm-referenced tests Items on norm-referenced tests have been tried out on a norming group, and scores on the test can be classified as high, medium, or low, with gradations in between. These tests allow for the study of individual differences. Scores on norm-referenced achievement tests are associated with percentile ranks vis-à-vis other individuals who are the test-taker's age or grade. Personality tests Personality tests assess constructs that are thought to be the constituents of personality. Examples of personality constructs include traits in the Big Five, such as introversion/extraversion and conscientiousness. Personality constructs are thought to be dimensional. Personality measures are used in research and in the selection of employees. They include self-report and observer-report scales. Examples of norm-referenced personality tests include the NEO-PI, the 16PF Questionnaire, the Occupational Personality Questionnaires, and the Five-Factor Personality Inventory. The International Personality Item Pool (IPIP) scales assess the same traits that the NEO and other personality scales assess. All IPIP scales and items are in the public domain and, therefore, are available free of charge. Projective tests Projective testing originated in the first half of the 1900s. The idea animating projective tests is that the examinee projects hidden aspects of his or her personality, including unconscious content, onto the ambiguous stimuli presented in the test. Examples of projective tests include the Rorschach test, the Thematic Apperception Test, and the Draw-A-Person test. Available evidence, however, suggests that projective tests have limited validity. 
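Cohen's kappa, mentioned above as a standard index of observer reliability, corrects the raw agreement rate between two observers for the agreement expected by chance. A minimal sketch follows; the coded behavior stream is invented for demonstration.

```python
# Illustrative sketch of Cohen's kappa for two observers coding the same
# behavior stream in a direct observation study. Ratings are invented.
from collections import Counter

rater_a = ["on-task", "off-task", "on-task", "on-task", "off-task", "on-task"]
rater_b = ["on-task", "off-task", "off-task", "on-task", "off-task", "on-task"]

def cohens_kappa(a, b):
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n           # raw agreement
    counts_a, counts_b = Counter(a), Counter(b)
    labels = set(a) | set(b)
    expected = sum(counts_a[l] * counts_b[l] for l in labels) / (n * n)
    return (observed - expected) / (1 - expected)              # chance-corrected

print(f"kappa = {cohens_kappa(rater_a, rater_b):.2f}")         # 0.67 here
```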
Psychological symptom scales Beck Depression Inventory (BDI-II); there is a fee to use the BDI. Beck Hopelessness Scale; there is a fee to use the scale. Bortner Type A Scale Center for Epidemiologic Studies Depression Scale (CES-D) Children's Depression Inventory (CDI & CDI-2) Depression Anxiety Stress Scales (DASS) General Health Questionnaire (GHQ) Generalized Anxiety Disorder scale (GAD-7) Hamilton Rating Scale for Anxiety (HAM-A) Unlike most other psychological symptom scales listed in this section, clinicians use this scale to help evaluate the mental health of people, usually under treatment, who have been diagnosed with an anxiety disorder; it is not used with general population samples. Hamilton Rating Scale for Depression (HAM-D) Unlike most other psychological symptom scales listed in this section, clinicians use this scale to help evaluate the mental health of people, usually under treatment, who have been diagnosed with a depressive disorder; it is not used with general population samples. Harburg Anger-In/Anger-Out Scale Hopkins Symptom Checklist (HSCL) Hospital Anxiety and Depression Scale (HADS) Jenkins Activity Survey (JAS) Assesses Type A/B behavior Kessler Psychological Distress Scale (K6 and K10, 6- and 10-item symptom scales) Midtown Study Screening Instrument Multidimensional Anger Inventory (MAI) Occupational Depression Inventory Perceived Stress Scale Patient Health Questionnaire–nine-item depression scale (PHQ-9) Penn State Worry Questionnaire Positive and Negative Affect Schedule (PANAS) Profile of Mood States (POMS) Psychiatric Epidemiology Research Interview (PERI) Psychosomatic Complaints Scale Psychotic Symptoms Subscale PTSD Checklist for DSM-5 (PCL-5) Rosenberg Self-Esteem Scale Although first designed for adolescents, the scale has been extensively used with adults. UCLA Loneliness Scale Zung Self-Rating Anxiety Scale Zung Self-Rating Depression Scale Public safety employment tests Applicants for vocations within the public safety field (e.g., fire service, law enforcement, corrections, emergency medical services) are often required to take industrial or organizational psychological tests for initial employment and promotion. The National Firefighter Selection Inventory, the National Criminal Justice Officer Selection Inventory, and the Integrity Inventory are prominent examples of these tests. Sources of psychological tests Thousands of psychological tests have been developed. Some were produced by commercial testing companies that charge for their use. Others have been developed by researchers and can be found in the academic research literature. Tests to assess specific psychological constructs can be found by conducting a database search. Some databases are open access, for example, Google Scholar (although many tests found in the Google Scholar database are not free of charge). Other databases are proprietary, for example, PsycINFO, but are available through university libraries and many public libraries (e.g., the Brooklyn Public Library and the New York Public Library). There are online archives available that contain tests on various topics. APA PsycTests - requires a subscription. Mental Measurements Yearbook - a non-profit that provides independent reviews of thousands of distinct psychological tests. Assessment Psychology Online has links to dozens of tests for clinical assessment. International Personality Item Pool (IPIP) contains items to assess more than 100 personality traits, including those of the Five Factor Model. 
Organization of Work: Measurement Tools for Research and Practice - a NIOSH site devoted to occupational health and safety. Test security Many psychological and psychoeducational tests are not available to the public. Test publishers put restrictions on who has access to the tests. Psychology licensing boards also restrict access to the tests used in licensing psychologists. Test publishers hold that both copyright and professional ethics require them to protect the tests. Publishers sell tests only to people who have proved their educational and professional qualifications. Purchasers are legally bound not to give test answers or the tests themselves to members of the public unless permitted by the publisher. The International Test Commission (ITC), an international association of national psychological societies and test publishers, publishes the International Guidelines for Test Use, which prescribes measures to take to "protect the integrity" of the tests by not publicly describing test techniques and by not "coaching individuals" in ways that "might unfairly influence their test performance." See also References External links American Psychological Association webpage on testing and assessment British Psychological Society Psychological Testing Centre Guidelines of the International Test Commission International Personality Item Pool, an alternative and free source of items available for research on personality List of mental health tests - a web directory with links to many assessments related to mental health and substance abuse Clinical psychology
In computer architecture, cache coherence is the uniformity of shared resource data that is stored in multiple local caches. In a cache coherent system, if multiple clients have a cached copy of the same region of a shared memory resource, all copies are the same. Without cache coherence, a change made to the region by one client may not be seen by others, and errors can result when the data used by different clients is mismatched. A cache coherence protocol is used to maintain cache coherency. The two main types are snooping and directory-based protocols. Cache coherence is of particular relevance in multiprocessing systems, where each CPU may have its own local cache of a shared memory resource. Overview In a shared memory multiprocessor system with a separate cache memory for each processor, it is possible to have many copies of shared data: one copy in the main memory and one in the local cache of each processor that requested it. When one of the copies of data is changed, the other copies must reflect that change. Cache coherence is the discipline which ensures that the changes in the values of shared operands (data) are propagated throughout the system in a timely fashion. The following are the requirements for cache coherence: Write Propagation Changes to the data in any cache must be propagated to other copies (of that cache line) in the peer caches. Transaction Serialization Reads/Writes to a single memory location must be seen by all processors in the same order. Theoretically, coherence can be performed at the load/store granularity. However, in practice it is generally performed at the granularity of cache blocks. Definition Coherence defines the behavior of reads and writes to a single address location. In a multiprocessor system, consider that more than one processor has cached a copy of the memory location X. The following conditions are necessary to achieve cache coherence: In a read made by a processor P to a location X that follows a write by the same processor P to X, with no writes to X by another processor occurring between the write and the read instructions made by P, X must always return the value written by P. In a read made by a processor P1 to location X that follows a write by another processor P2 to X, with no other writes to X made by any processor occurring between the two accesses and with the read and write being sufficiently separated, X must always return the value written by P2. This condition defines the concept of coherent view of memory. Propagating the writes to the shared memory location ensures that all the caches have a coherent view of the memory. If processor P1 reads the old value of X, even after the write by P2, we can say that the memory is incoherent. The above conditions satisfy the Write Propagation criteria required for cache coherence. However, they are not sufficient as they do not satisfy the Transaction Serialization condition. To illustrate this better, consider the following example: A multi-processor system consists of four processors - P1, P2, P3 and P4, all containing cached copies of a shared variable S whose initial value is 0. Processor P1 changes the value of S (in its cached copy) to 10, following which processor P2 changes the value of S in its own cached copy to 20. If we ensure only write propagation, then P3 and P4 will certainly see the changes made to S by P1 and P2. However, P3 may see the change made by P1 after seeing the change made by P2 and hence return 10 on a read to S. P4, on the other hand, may see the changes made by P1 and P2 in the order in which they are made and hence return 20 on a read to S. The processors P3 and P4 now have an incoherent view of the memory. Therefore, in order to satisfy Transaction Serialization, and hence achieve cache coherence, the following condition, along with the previous two mentioned in this section, must be met: Writes to the same location must be sequenced. In other words, if location X received two different values A and B, in this order, from any two processors, the processors can never read location X as B and then read it as A. The location X must be seen with values A and B in that order. 
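A toy sketch of the failure just described: with write propagation but no transaction serialization, P3 and P4 apply the same two writes in different orders and end up with permanently divergent values. This models only the failure mode, not any real hardware; the class and variable names are invented for illustration.

```python
# Toy model of write propagation without transaction serialization:
# two observers apply the same pair of writes (S=10 by P1, S=20 by P2)
# in different orders and diverge, exactly as in the example above.

class Observer:
    def __init__(self, name):
        self.name = name
        self.s = 0                    # local cached copy of S

    def apply(self, updates):
        for value in updates:         # apply propagated writes in arrival order
            self.s = value

p3, p4 = Observer("P3"), Observer("P4")
p3.apply([20, 10])                    # P3 sees P2's write first, then P1's
p4.apply([10, 20])                    # P4 sees the writes in issue order

print(p3.name, "reads S =", p3.s)     # 10
print(p4.name, "reads S =", p4.s)     # 20 -> incoherent view of memory
```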
The alternative definition of a coherent system is via the definition of sequential consistency memory model: "the cache coherent system must appear to execute all threads’ loads and stores to a single memory location in a total order that respects the program order of each thread". Thus, the only difference between the cache coherent system and sequentially consistent system is in the number of address locations the definition talks about (single memory location for a cache coherent system, and all memory locations for a sequentially consistent system). Another definition is: "a multiprocessor is cache consistent if all writes to the same memory location are performed in some sequential order". Rarely, but especially in algorithms, coherence can instead refer to the locality of reference. Multiple copies of the same data can exist in different caches simultaneously, and if processors are allowed to update their own copies freely, an inconsistent view of memory can result. Coherence mechanisms The two most common mechanisms of ensuring coherency are snooping and directory-based, each having their own benefits and drawbacks. Snooping-based protocols tend to be faster, if enough bandwidth is available, since all transactions are a request/response seen by all processors. The drawback is that snooping is not scalable. Every request must be broadcast to all nodes in a system, meaning that as the system gets larger, the size of the (logical or physical) bus and the bandwidth it provides must grow. Directories, on the other hand, tend to have longer latencies (with a three-hop request/forward/respond sequence) but use much less bandwidth since messages are point-to-point and not broadcast. For this reason, many of the larger systems (>64 processors) use this type of cache coherence. Snooping First introduced in 1983, snooping is a process where the individual caches monitor address lines for accesses to memory locations that they have cached. The write-invalidate protocols and write-update protocols make use of this mechanism. For the snooping mechanism, a snoop filter reduces the snooping traffic by maintaining multiple entries, each representing a cache line that may be owned by one or more nodes. When replacement of one of the entries is required, the snoop filter selects for replacement the entry representing the cache line or lines owned by the fewest nodes, as determined from a presence vector in each of the entries. A temporal or other type of algorithm is used to refine the selection if more than one cache line is owned by the fewest nodes. 
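As a concrete illustration of the write-invalidate snooping approach described above, the following toy simulation keeps each cache line in one of the states of the MSI protocol named later in this article, M(odified), S(hared), or I(nvalid), and broadcasts an invalidation on every write. It is a sketch under simplifying assumptions (a single address, write-through to memory), not a faithful hardware implementation.

```python
# Toy write-invalidate snooping simulation in the MSI style. One cached
# address ("x"), write-through memory for simplicity; real protocols also
# handle write-back, ownership transfer, and many more transitions.

class Bus:
    def __init__(self):
        self.caches = []

    def broadcast_invalidate(self, sender):
        for cache in self.caches:
            if cache is not sender:
                cache.snoop_invalidate()

class SnoopingCache:
    def __init__(self, name, bus):
        self.name, self.bus = name, bus
        self.state, self.value = "I", None     # start Invalid
        bus.caches.append(self)

    def read(self, memory):
        if self.state == "I":                  # miss: fetch, become Shared
            self.value, self.state = memory["x"], "S"
        return self.value

    def write(self, memory, value):
        self.bus.broadcast_invalidate(sender=self)
        self.value, self.state = value, "M"    # exclusive dirty copy
        memory["x"] = value                    # write-through simplification

    def snoop_invalidate(self):
        self.state, self.value = "I", None

memory = {"x": 0}
bus = Bus()
c1, c2 = SnoopingCache("C1", bus), SnoopingCache("C2", bus)
print(c2.read(memory))    # 0; the line is now Shared in C2
c1.write(memory, 42)      # broadcast invalidates C2's stale copy
print(c2.state)           # 'I'
print(c2.read(memory))    # 42; re-fetched after the invalidation
```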
Directory-based In a directory-based system, the data being shared is placed in a common directory that maintains the coherence between caches. The directory acts as a filter through which the processor must ask permission to load an entry from the primary memory to its cache. When an entry is changed, the directory either updates or invalidates the other caches with that entry. Distributed shared memory systems mimic these mechanisms in an attempt to maintain consistency between blocks of memory in loosely coupled systems. Coherence protocols Coherence protocols apply cache coherence in multiprocessor systems. The intention is that two clients must never see different values for the same shared data. The protocol must implement the basic requirements for coherence. It can be tailor-made for the target system or application. Protocols can also be classified as snoopy or directory-based. Typically, early systems used directory-based protocols where a directory would keep track of the data being shared and the sharers. In snoopy protocols, the transaction requests (to read, write, or upgrade) are sent out to all processors. All processors snoop the request and respond appropriately. Write propagation in snoopy protocols can be implemented by either of the following methods: Write-invalidate When a write operation is observed to a location that a cache has a copy of, the cache controller invalidates its own copy of the snooped memory location, which forces a read from main memory of the new value on its next access. Write-update When a write operation is observed to a location that a cache has a copy of, the cache controller updates its own copy of the snooped memory location with the new data. If the protocol design states that whenever any copy of the shared data is changed, all the other copies must be "updated" to reflect the change, then it is a write-update protocol. If the design states that a write to a cached copy by any processor requires other processors to discard or invalidate their cached copies, then it is a write-invalidate protocol. However, scalability is one shortcoming of broadcast protocols. Various models and protocols have been devised for maintaining coherence, such as MSI, MESI (aka Illinois), MOSI, MOESI, MERSI, MESIF, write-once, Synapse, Berkeley, Firefly, and Dragon protocols. In 2011, ARM Ltd proposed the AMBA 4 ACE for handling coherency in SoCs. The AMBA CHI (Coherent Hub Interface) specification from ARM Ltd, which belongs to the AMBA5 group of specifications, defines the interfaces for the connection of fully coherent processors. See also Consistency model Directory-based coherence Memory barrier Non-uniform memory access (NUMA) False sharing References Further reading Cache coherency Parallel computing Concurrent computing Consistency models
A major appliance, also known as a large domestic appliance or large electric appliance, or simply a large appliance, large domestic, or large electric, is a non-portable or semi-portable machine used for routine housekeeping tasks such as cooking, washing laundry, or food preservation. Such appliances are sometimes collectively known as white goods, as the products were traditionally white in colour, although a variety of colours are now available. An appliance differs from a plumbing fixture because it uses electricity or fuel. Major appliances differ from small appliances because they are bigger and not portable. They are often considered fixtures and part of real estate, and as such they are often supplied to tenants as part of otherwise unfurnished rental properties. Major appliances may have special electrical connections, connections to gas supplies, or special plumbing and ventilation arrangements that may be permanently connected to the appliance. This limits where they can be placed in a home. Since major appliances in a home consume a significant amount of energy, they have become the focus of programs in many countries to improve their energy efficiency. Increasing energy efficiency is often described as an important element of climate change mitigation, alongside other improvements like retrofitting buildings to increase building performance. Energy efficiency improvements may require changes in the construction of the appliances, or improved control systems. Brands In the early days of electrification, many major consumer appliances were made by the same companies that made the generation and distribution equipment. While some of these brand names persist to the present day, even if only as licensed use of old popular brand names, today many major appliances are manufactured by companies or divisions of companies that specialize in particular appliances. Types Major appliances may be roughly divided as follows: Refrigeration equipment Freezer Refrigerator Water cooler Ice maker Cooking Kitchen stove, also known as a range, cooker, oven, cooking plate, or cooktop Wall oven Steamer oven Microwave oven Washing and drying equipment Washing machine Clothes dryer Drying cabinet Dishwasher Heating and cooling Air conditioner Furnace Water heater Whole house ventilator Mechanical air ventilator Efficiency See also Small appliances Domestic technology Home automation (Domotics) E-waste Household chore List of cooking appliances List of home appliances List of stoves Yellow, red and orange goods Black goods References External links Energy Star Appliances Electromechanical engineering Consumer goods Hardlines (retail)
Henry Eyring (February 20, 1901 – December 26, 1981) was a Mexican-born American theoretical chemist whose primary contribution was in the study of chemical reaction rates and intermediates. Eyring developed the Absolute Rate Theory, or transition state theory, of chemical reactions, connecting the fields of chemistry and physics through atomic theory, quantum theory, and statistical mechanics. History Eyring, a third-generation member of the Church of Jesus Christ of Latter-day Saints (LDS Church), was reared on a cattle ranch in Colonia Juárez, Chihuahua, a Mormon colony, for the first 11 years of his life. His father, Edward Christian Eyring, practiced plural marriage; Edward married Caroline Romney (1893) and her sister Emma Romney (1903), both daughters of Miles Park Romney, the great-grandfather of Mitt Romney. In July 1912, the Eyrings and about 4,200 other immigrants were driven out of Mexico by violent insurgents during the Mexican Revolution and moved to El Paso, Texas. After living in El Paso for approximately one year, the Eyrings relocated to Pima, Arizona, where he completed high school and showed a special aptitude for mathematics and science. He also studied at Gila Academy in Thatcher, Arizona, now Eastern Arizona College. One of the pillars at the front of the main building still bears his name, along with that of his sister Camilla's husband, Spencer W. Kimball, later president of the LDS Church. Eyring earned a BS in mining engineering at the University of Arizona, supporting himself by working in a copper mine. He then received a US Bureau of Mines fellowship and earned his M.Sc. in metallurgy. Having seen the high rate of accidents in the mines, and having breathed sulfur fumes from blast furnaces at a smelter, he chose to pursue a Ph.D. in chemistry. He received his doctoral degree in chemistry from the University of California, Berkeley, in 1927 for a thesis on A Comparison of the Ionization by, and Stopping Power for, Alpha Particles of Elements and Compounds. Princeton University recruited Eyring as an instructor in 1931. He would continue his work at Princeton until 1946. In 1946 he was offered a position as dean of the graduate school at the University of Utah, with professorships in chemistry and metallurgy. The chemistry building on the University of Utah campus is now named in his honor. A prolific writer, Eyring authored more than 600 scientific articles, ten scientific books, and a few books on the subject of science and religion. He received the Wolf Prize in Chemistry in 1980 and the National Medal of Science in 1966 for developing the Absolute Rate Theory, or transition state theory, of chemical reactions, one of the most important developments of 20th-century chemistry. Several other chemists later received the Nobel Prize for work based on the Absolute Rate Theory, and his failure to receive the Nobel was a matter of surprise to many. The Nobel Prize organization admitted that "Strangely, Eyring never received a Nobel Prize"; the Royal Swedish Academy of Sciences apparently did not understand Eyring's theory until it was too late to award him the Nobel. The academy awarded him the Berzelius Medal in 1977 as partial compensation. Sterling M. McMurrin believed Eyring should have received the Nobel Prize but was not awarded it because of his religion. Eyring was elected president of the American Chemical Society in 1963 and of the American Association for the Advancement of Science in 1965. 
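For reference, the central result of that transition state theory is now known as the Eyring equation (listed under See also below). The following is its standard textbook form, not a transcription from Eyring's own papers:

```latex
% Eyring equation (transition state theory), standard notation:
%   k : rate constant            \kappa : transmission coefficient
%   k_B, h : Boltzmann and Planck constants    T : absolute temperature
%   \Delta G^{\ddagger} : Gibbs free energy of activation    R : gas constant
k = \kappa \, \frac{k_{\mathrm{B}} T}{h} \, e^{-\Delta G^{\ddagger}/(RT)}
```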
Personal life Eyring married Mildred Bennion. She was a native of Granger, Utah, who had a degree from the University of Utah and served as head of the physical education department there. She met Eyring while pursuing a doctorate at the University of Wisconsin. They had three sons together. The oldest, Edward M. "Ted" Eyring, was an emeritus professor of chemistry at the University of Utah. The second son, Henry B. Eyring, is a general authority of the LDS Church, while the youngest son, Harden B. Eyring, is a higher education administrator for the State of Utah. His wife, Mildred, died June 25, 1969, in Salt Lake City, Utah. On August 13, 1971, he married Winifred Brennan in the LDS Church's Salt Lake Temple. Eyring was a member of the LDS Church throughout his life. His views of science and religion were captured in this quote: "Is there any conflict between science and religion? There is no conflict in the mind of God, but often there is conflict in the minds of men." Eyring also feared that overeager defenders of faith would discard new scientific findings because of apparent contradictions. He encouraged parents and teachers to distinguish between "what they know to be true and what they think may be true," to avoid clumping them together and "throwing the baby out with the bath." As a member of the LDS Church, Eyring served as a branch president, district president, and, for over twenty years, a member of the general board of the Deseret Sunday School Union. As of 2024, his son, Henry B. Eyring, is an apostle and member of the church's First Presidency. Awards AAAS Newcomb Cleveland Prize (1932) Bingham Medal (1949) of the Society of Rheology Peter Debye Award in Physical Chemistry (1964) National Medal of Science (1966) Irving Langmuir Award (1967) Linus Pauling Award (1969) Elliott Cresson Medal (1969) from the Franklin Institute Golden Plate Award of the American Academy of Achievement (1974) T. W. Richards Medal (1975) Priestley Medal (1975) Berzelius Medal (1979) Wolf Prize (1980) Member of International Academy of Quantum Molecular Science Member of U.S. National Academy of Sciences Member of the American Philosophical Society Member of the American Academy of Arts and Sciences Scientific publications: books Henry Eyring authored, co-authored, or edited the following books or journals: A generalized theory of plasticity involving the virial theorem The activated complex in chemisorption and catalysis An examination into the origin, possible synthesis, and physical properties of diamonds Annual Review of Physical Chemistry Basic chemical kinetics Deformation Kinetics with Alexander Stephen Krausz Electrochemistry Kinetic evidence of phase structure Modern Chemical Kinetics Non-classical reaction kinetics Physical Chemistry, an Advanced Treatise (1970) Quantum Chemistry Reactions in condensed phases The significance of isotopic reactions in rate theory Significant Liquid Structures Some aspects of catalytic hydrogenation Statistical Mechanics Statistical Mechanics and Dynamics Theoretical Chemistry: Advances and Perspectives. Volume 2 The Theory of Rate Processes in Biology and Medicine with Frank H. Johnson and Betsy Jones Stover Theory of Optical Activity (Monographs on Chemistry series) with D.J. Caldwell Time and Change Valency Religious publications: books Reflections of a Scientist (1983) The Faith of a Scientist. Bookcraft, Inc. (1967); Science and Your Faith in God: A Selected Compilation of Writings and Talks by Prominent Latter-Day Saints Scientists on the Subjects of Science and Religion. Bookcraft, Inc. 
(1958); compiled by Paul R. Green and featuring the writings of Henry Eyring and others See also Eyring equation Mormon Scientist: The Life and Faith of Henry Eyring References External links The Chemistry Department:1946-2000 by Edward M. Eyring, April K. Heiselt, & Kelly Erickson (University of Utah) Mini-Biography of Henry Eyring Henry Eyring papers, MS 8 at the University of Utah Henry Eyring speeches, MSS SC 586 at L. Tom Perry Special Collections, Brigham Young University "The Reconciliation of Faith and Science: Henry Eyring's Achievement" - 1982 article on Eyring as an LDS scientist from Dialogue: A Journal of Mormon Thought 1901 births 1981 deaths American leaders of the Church of Jesus Christ of Latter-day Saints Eastern Arizona College alumni Members of the International Academy of Quantum Molecular Science Mexican chemists Mexican emigrants to the United States Mexican leaders of the Church of Jesus Christ of Latter-day Saints National Medal of Science laureates People from Colonia Juárez, Chihuahua American physical chemists Romney family Princeton University faculty Rheologists Theoretical chemists University of Arizona alumni UC Berkeley College of Chemistry alumni University of Utah faculty Wolf Prize in Chemistry laureates Sunday School (LDS Church) people Latter Day Saints from Arizona Latter Day Saints from Utah Latter Day Saints from New Jersey Annual Reviews (publisher) editors Members of the American Philosophical Society 20th-century American chemists
Computational geometry is a branch of computer science devoted to the study of algorithms which can be stated in terms of geometry. Some purely geometrical problems arise out of the study of computational geometric algorithms, and such problems are also considered to be part of computational geometry. While modern computational geometry is a recent development, it is one of the oldest fields of computing, with a history stretching back to antiquity. Computational complexity is central to computational geometry, with great practical significance if algorithms are used on very large datasets containing tens or hundreds of millions of points. For such sets, the difference between O(n²) and O(n log n) may be the difference between days and seconds of computation. The main impetus for the development of computational geometry as a discipline was progress in computer graphics and computer-aided design and manufacturing (CAD/CAM), but many problems in computational geometry are classical in nature, and may come from mathematical visualization. Other important applications of computational geometry include robotics (motion planning and visibility problems), geographic information systems (GIS) (geometrical location and search, route planning), integrated circuit design (IC geometry design and verification), computer-aided engineering (CAE) (mesh generation), and computer vision (3D reconstruction). The main branches of computational geometry are: Combinatorial computational geometry, also called algorithmic geometry, which deals with geometric objects as discrete entities. A foundational book in the subject by Preparata and Shamos dates the first use of the term "computational geometry" in this sense to 1975. Numerical computational geometry, also called machine geometry, computer-aided geometric design (CAGD), or geometric modeling, which deals primarily with representing real-world objects in forms suitable for computer computations in CAD/CAM systems. This branch may be seen as a further development of descriptive geometry and is often considered a branch of computer graphics or CAD. The term "computational geometry" in this meaning has been in use since 1971. Although most algorithms of computational geometry have been developed (and are being developed) for electronic computers, some algorithms were developed for unconventional computers (e.g., optical computers). Combinatorial computational geometry The primary goal of research in combinatorial computational geometry is to develop efficient algorithms and data structures for solving problems stated in terms of basic geometrical objects: points, line segments, polygons, polyhedra, etc. Some of these problems seem so simple that they were not regarded as problems at all until the advent of computers. Consider, for example, the Closest pair problem: Given n points in the plane, find the two with the smallest distance from each other. One could compute the distances between all the pairs of points, of which there are n(n-1)/2, then pick the pair with the smallest distance. This brute-force algorithm takes O(n²) time; i.e. its execution time is proportional to the square of the number of points. A classic result in computational geometry was the formulation of an algorithm that takes O(n log n) time. Randomized algorithms that take O(n) expected time, as well as a deterministic algorithm that takes O(n log log n) time, have also been discovered. 
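The brute-force procedure just described translates directly into code. The sketch below is illustrative, using invented sample points; the O(n log n) divide-and-conquer algorithm mentioned above is the classic improvement over this quadratic baseline.

```python
# Brute-force closest pair: check all n(n-1)/2 candidate pairs, O(n^2) time.
from itertools import combinations
from math import dist

def closest_pair_brute_force(points):
    """Return the pair of points with the smallest Euclidean distance."""
    return min(combinations(points, 2), key=lambda pair: dist(*pair))

pts = [(0, 0), (5, 4), (1, 1), (9, 9), (1, 2)]
a, b = closest_pair_brute_force(pts)
print(a, b, f"distance = {dist(a, b):.3f}")   # (1, 1) (1, 2) distance = 1.000
```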
Problem classes The core problems in computational geometry may be classified in different ways, according to various criteria. The following general classes may be distinguished. Static problems In the problems of this category, some input is given and the corresponding output needs to be constructed or found. Some fundamental problems of this type are: Convex hull: Given a set of points, find the smallest convex polyhedron/polygon containing all the points. Line segment intersection: Find the intersections between a given set of line segments. Delaunay triangulation Voronoi diagram: Given a set of points, partition the space according to which points are closest to the given points. Linear programming Closest pair of points: Given a set of points, find the two with the smallest distance from each other. Farthest pair of points Largest empty circle: Given a set of points, find a largest circle with its center inside of their convex hull and enclosing none of them. Euclidean shortest path: Connect two points in a Euclidean space (with polyhedral obstacles) by a shortest path. Polygon triangulation: Given a polygon, partition its interior into triangles. Mesh generation Boolean operations on polygons The computational complexity for this class of problems is estimated by the time and space (computer memory) required to solve a given problem instance. Geometric query problems In geometric query problems, commonly known as geometric search problems, the input consists of two parts: the search space part and the query part, which varies over the problem instances. The search space typically needs to be preprocessed, in a way that multiple queries can be answered efficiently. Some fundamental geometric query problems are: Range searching: Preprocess a set of points, in order to efficiently count the number of points inside a query region. Point location problem: Given a partitioning of the space into cells, produce a data structure that efficiently tells in which cell a query point is located. Nearest neighbor: Preprocess a set of points, in order to efficiently find which point is closest to a query point. Ray tracing: Given a set of objects in space, produce a data structure that efficiently tells which object a query ray intersects first. If the search space is fixed, the computational complexity for this class of problems is usually estimated by: the time and space required to construct the data structure to be searched in the time (and sometimes extra space) to answer queries. For the case when the search space is allowed to vary, see "Dynamic problems". Dynamic problems Yet another major class is the dynamic problems, in which the goal is to find an efficient algorithm for finding a solution repeatedly after each incremental modification of the input data (addition or deletion of input geometric elements). Algorithms for problems of this type typically involve dynamic data structures. Any of the computational geometric problems may be converted into a dynamic one, at the cost of increased processing time. For example, the range searching problem may be converted into the dynamic range searching problem by providing for addition and/or deletion of the points. The dynamic convex hull problem is to keep track of the convex hull, e.g., for the dynamically changing set of points, i.e., while the input points are inserted or deleted. The computational complexity for this class of problems is estimated by: the time and space required to construct the data structure to be searched in the time and space to modify the searched data structure after an incremental change in the search space the time (and sometimes extra space) to answer a query. 
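The static convex hull problem listed above, which a dynamic hull structure maintains under insertions and deletions, can be solved in O(n log n) time. Below is a sketch using Andrew's monotone chain algorithm; the sample points are invented.

```python
# Andrew's monotone chain convex hull: sort the points, then build the
# lower and upper hulls with a stack, O(n log n) overall.
def cross(o, a, b):
    """z-component of (a - o) x (b - o); > 0 means a counter-clockwise turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                       # lower hull, left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # upper hull, right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]      # endpoints are shared; drop duplicates

print(convex_hull([(0, 0), (2, 2), (1, 1), (2, 0), (0, 2), (1, 3)]))
# [(0, 0), (2, 0), (2, 2), (1, 3), (0, 2)]
```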
Variations Some problems may be treated as belonging to either of the categories, depending on the context. For example, consider the following problem. Point in polygon: Decide whether a point is inside or outside a given polygon. In many applications this problem is treated as a single-shot one, i.e., belonging to the first class. For example, in many applications of computer graphics a common problem is to find which area on the screen is clicked by a pointer. However, in some applications, the polygon in question is invariant, while the point represents a query. For example, the input polygon may represent a border of a country and a point is a position of an aircraft, and the problem is to determine whether the aircraft violated the border. Finally, in the previously mentioned example of computer graphics, in CAD applications the changing input data are often stored in dynamic data structures, which may be exploited to speed up point-in-polygon queries. In some contexts of query problems there are reasonable expectations on the sequence of the queries, which may be exploited either for efficient data structures or for tighter computational complexity estimates. For example, in some cases it is important to know the worst case for the total time for the whole sequence of N queries, rather than for a single query. See also "amortized analysis". Numerical computational geometry This branch is also known as geometric modelling and computer-aided geometric design (CAGD). Core problems are curve and surface modelling and representation. The most important instruments here are parametric curves and parametric surfaces, such as Bézier curves, spline curves and surfaces. An important non-parametric approach is the level-set method. Application areas of computational geometry include shipbuilding, aircraft, and automotive industries. List of algorithms See also List of combinatorial computational geometry topics List of numerical computational geometry topics CAD/CAM/CAE Solid modeling Computational topology Computer representation of surfaces Digital geometry Discrete geometry (combinatorial geometry) Space partitioning Tricomplex number Robust geometric computation Wikiversity:Topic:Computational geometry Wikiversity:Computer-aided geometric design References Further reading List of books in computational geometry Journals Combinatorial/algorithmic computational geometry Below is a list of the major journals that have been publishing research in geometric algorithms. Note that with the appearance of journals specifically dedicated to computational geometry, the share of geometric publications in general-purpose computer science and computer graphics journals has decreased. 
ACM Computing Surveys ACM Transactions on Graphics Acta Informatica Advances in Geometry Algorithmica Ars Combinatoria Computational Geometry: Theory and Applications Communications of the ACM Computer Aided Geometric Design Computer Graphics and Applications Computer Graphics World Computing in Geometry and Topology Discrete & Computational Geometry Geombinatorics Geometriae Dedicata IEEE Transactions on Graphics IEEE Transactions on Computers IEEE Transactions on Pattern Analysis and Machine Intelligence Information Processing Letters International Journal of Computational Geometry and Applications Journal of Combinatorial Theory, Series B Journal of Computational Geometry Journal of Differential Geometry Journal of the ACM Journal of Algorithms Journal of Computer and System Sciences Management Science Pattern Recognition Pattern Recognition Letters SIAM Journal on Computing SIGACT News, which featured the "Computational Geometry Column" by Joseph O'Rourke Theoretical Computer Science The Visual Computer External links Computational Geometry Computational Geometry Pages Geometry In Action "Strategic Directions in Computational Geometry—Working Group Report" (1996) Journal of Computational Geometry (Annual) Winter School on Computational Geometry Computational Geometry Lab Computational fields of study Geometry processing
The Internet Archive is an American non-profit organization founded in 1996 by Brewster Kahle that runs a digital library website, archive.org. It provides free access to collections of digitized media including websites, software applications, music, audiovisual, and print materials. The Archive also advocates for a free and open Internet. Its stated mission is to provide "universal access to all knowledge". The Internet Archive allows the public to upload and download digital material to its data cluster, but the bulk of its data is collected automatically by its web crawlers, which work to preserve as much of the public web as possible. Its web archive, the Wayback Machine, contains hundreds of billions of web captures. The Archive also oversees numerous book digitization projects, collectively one of the world's largest book digitization efforts. History Brewster Kahle founded the Archive in May 1996, around the same time that he began the for-profit web crawling company Alexa Internet. The earliest known archived page on the site was saved on May 10, 1996, at 2:42 pm UTC (7:42 am PDT). By October of that year, the Internet Archive had begun to archive and preserve the World Wide Web in large amounts. The archived content became more easily available to the general public in 2001, through the Wayback Machine. In late 1999, the Archive expanded its collections beyond the web archive, beginning with the Prelinger Archives. Now, the Internet Archive includes texts, audio, moving images, and software. It hosts a number of other projects: the NASA Images Archive, the contract crawling service Archive-It, and the wiki-editable library catalog and book information site Open Library. Soon after that, the Archive began working to provide specialized services relating to the information access needs of the print-disabled; publicly accessible books were made available in a protected Digital Accessible Information System (DAISY) format. In August 2012, the Archive announced that it had added BitTorrent to its file download options for more than 1.3 million existing files, and all newly uploaded files. This method is the fastest means of downloading media from the Archive, as files are served from two Archive data centers, in addition to other torrent clients which have downloaded and continue to serve the files. On November 6, 2013, the Internet Archive's headquarters in San Francisco's Richmond District caught fire, destroying equipment and damaging some nearby apartments. According to the Archive, it lost a side-building housing one of 30 of its scanning centers; cameras, lights, and scanning equipment worth hundreds of thousands of dollars; and "maybe 20 boxes of books and film, some irreplaceable, most already digitized, and some replaceable". The nonprofit Archive sought donations to cover the estimated $600,000 in damage. An overhaul of the site was launched as a beta in November 2014, and the legacy layout was removed in March 2016. In November 2016, Kahle announced that the Internet Archive was building the Internet Archive of Canada, a copy of the Archive to be based somewhere in Canada. The announcement received widespread coverage due to the implication that the decision to build a backup archive in a foreign country was because of the upcoming presidency of Donald Trump. Beginning in 2017, OCLC and the Internet Archive have collaborated to make the Archive's records of digitized books available in WorldCat. 
Since 2018, the Internet Archive visual arts residency, which is organized by Amir Saber Esfahani and Andrew McClintock, has helped connect artists with the Archive's over 48 petabytes of digitized materials. Over the course of the yearlong residency, visual artists create a body of work which culminates in an exhibition. The hope is to connect digital history with the arts and create something for future generations to appreciate online or off. Previous artists in residence include Taravat Talepasand, Whitney Lynn, and Jenny Odell. The Internet Archive acquires most materials from donations, such as hundreds of thousands of 78 rpm discs from Boston Public Library in 2017, a donation of 250,000 books from Trent University in 2018, and the entire collection of Marygrove College's library after it closed in 2020. All material is then digitized and retained in digital storage, while a digital copy is returned to the original holder and the Internet Archive's copy, if not in the public domain, is lent to patrons worldwide one at a time under the controlled digital lending (CDL) theory of the first-sale doctrine. On June 1, 2020, four large publishing houses – Hachette Book Group, Penguin Random House, HarperCollins, and John Wiley – filed a lawsuit against the Internet Archive before the United States District Court for the Southern District of New York, claiming that the Internet Archive's practice of controlled digital lending constituted copyright infringement. On March 25, 2023, the court found in favor of the publishers. The negotiated judgment of August 11, 2023, barred the Internet Archive from digitally lending books for which electronic copies are on sale. Also on August 11, 2023, the music industry giants Universal Music Group, Sony Music and Concord (together with their respective labels Capitol Records, Arista Records and CMGI Recorded Music Assets) sued the Internet Archive before the same United States District Court for the Southern District of New York over the Internet Archive's Great 78 Project, seeking $621 million in damages for alleged copyright infringement. In September 2024, Google and the Internet Archive signed a partnership that allows people to see previous versions of websites on Google Search via the Wayback Machine, following Google's retirement of its Google Cache feature. Cyberattacks During the week of May 27, 2024, the Internet Archive suffered a series of distributed denial of service (DDoS) attacks that made its services unavailable intermittently, sometimes for hours at a time, over a period of several days. The attack was claimed on May 28 by a hacker group called SN_BLACKMETA, with possible links to Anonymous Sudan. The incident drew a comparison with the 2023 British Library cyberattack, which affected the UK Web Archive. Beginning October 9, 2024, the Internet Archive's team, including archivist Jason Scott and security researcher Scott Helme, confirmed DDoS attacks, site defacement, and a data breach. The purported hacktivist group SN_BLACKMETA again claimed responsibility. A pop-up on the defaced site claimed that there was a "catastrophic" security breach, stating "Have you ever felt like the Internet Archive runs on sticks and is constantly on the verge of suffering a catastrophic security breach? It just happened. See 31 million of you on HIBP!" It was reported that about 31 million user accounts were affected; the compromised data circulated in a file called "ia_users.sql", dated September 28, 2024. The attackers stole users' email addresses and Bcrypt-hashed passwords. 
As of October 15, 2024, the website was still mostly offline, "prioritizing keeping data safe at the expense of service availability." On October 11, Kahle said that the data was safe and that the Archive would bring the service back to normal "in days, not weeks." On October 13, the Wayback Machine was restored in a read-only format, while archiving web pages was temporarily disabled. On October 14, Brewster Kahle said "[the Wayback Machine] volume is back to normal: 1,500 requests per second". On October 20, threat actors stole unrotated API tokens and breached the Internet Archive on its Zendesk email support platform; they also claimed responsibility for the other breaches, yet stated that SN_BLACKMETA was behind just the DDoS attacks. On October 21, the Internet Archive went back online in a read-only manner. On October 22, all Internet Archive services temporarily went offline, but later that same day, only the Wayback Machine, Archive-It, and blog.archive.org were resumed. On October 23, archive.org, the Wayback Machine, Archive-It, and the Open Library services all resumed, though some features, such as logging in, remained unavailable until the staff announced their restoration over the following day or two. On October 25, the login feature became available again and the site was active. Operations The Archive is a 501(c)(3) nonprofit operating in the United States. In 2019, it had an annual budget of $37 million, derived from revenue from its Web crawling services, various partnerships, grants, donations, and the Kahle-Austin Foundation. The Internet Archive also manages periodic funding campaigns. For instance, a December 2019 campaign had a goal of reaching $6 million in donations. It uses Ubuntu as the operating system for its website servers. The Archive is headquartered in San Francisco, California. From 1996 to 2009, its headquarters were in the Presidio of San Francisco, a former U.S. military base. Since 2009, its headquarters have been at 300 Funston Avenue in San Francisco, a former Christian Science church. At one time, most of its staff worked in its book-scanning centers; as of 2019, scanning is performed by 100 paid operators worldwide. The Archive also has data centers in three Californian cities: San Francisco, Redwood City, and Richmond. To reduce the risk of data loss, the Archive creates copies of parts of its collection at more distant locations, including the Bibliotheca Alexandrina in Egypt and a facility in Amsterdam. Since 2016, the Internet Archive has worked to create a decentralized prototype of the digital library. From 2020, content from the Internet Archive started to be stored in Filecoin. By October 2023, one petabyte of data had been uploaded to the Filecoin network. The Archive is a member of the International Internet Preservation Consortium and was officially designated as a library by the state of California in 2007. Web archiving Wayback Machine The Wayback Machine is a service that allows archives of the World Wide Web to be searched and accessed. It can be used to see what previous versions of web sites used to look like or to visit web sites that no longer exist. The Wayback Machine was created as a joint effort between Alexa Internet (owned by Amazon.com) and the Internet Archive. Hundreds of billions of web sites and their associated data (images, source code, documents, etc.) are saved in a database. 
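Snapshots in that database can be queried programmatically. The sketch below uses the Wayback Machine's public availability endpoint (https://archive.org/wayback/available) and only the Python standard library; the JSON field names follow the endpoint's documented response, but treat the exact layout as an assumption to verify.

```python
# Illustrative sketch: ask the Wayback Machine for the snapshot of a URL
# closest to a given timestamp (YYYYMMDD...). Standard library only.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

def closest_snapshot(url, timestamp="20060101"):
    query = urlencode({"url": url, "timestamp": timestamp})
    with urlopen(f"https://archive.org/wayback/available?{query}") as resp:
        data = json.load(resp)
    return data.get("archived_snapshots", {}).get("closest")

snap = closest_snapshot("example.com")
if snap and snap.get("available"):
    print(snap["timestamp"], snap["url"])   # e.g. a 2006-era capture URL
```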
At last count, the Internet Archive held over 866 billion web pages, more than 42.5 million print materials, 13 million videos, 3 million TV news broadcasts, 1.2 million software programs, 14 million audio files, 5 million images, and 272,660 concerts in its Wayback Machine. Archive-It Created in early 2006, Archive-It is a web archiving subscription service that allows institutions and individuals to build and preserve collections of digital content and create digital archives. Archive-It allows the user to customize their capture or exclusion of web content they want to preserve for cultural heritage reasons. Through a web application, Archive-It partners can harvest, catalog, manage, browse, search, and view their archived collections. In terms of accessibility, the archived web sites are full-text searchable within seven days of capture. Content collected through Archive-It is captured and stored as a WARC file. Primary and backup copies are stored at the Internet Archive data centers. A copy of the WARC file can be given to subscribing partner institutions for geo-redundant preservation and storage purposes, to their best-practice standards. Periodically, the data captured through Archive-It is indexed into the Internet Archive's general archive. At last count, Archive-It had more than 275 partner institutions in 46 U.S. states and 16 countries that have captured more than 7.4 billion URLs for more than 2,444 public collections. Archive-It partners are universities and college libraries, state archives, federal institutions, museums, law libraries, and cultural organizations, including the Electronic Literature Organization, North Carolina State Archives and Library, Stanford University, Columbia University, American University in Cairo, Georgetown Law Library, and many others. Internet Archive Scholar In September 2020, Internet Archive announced a new initiative to archive and preserve open access academic journals, called Internet Archive Scholar. Its full-text search index includes over 25 million research articles and other scholarly documents preserved in the Internet Archive. The collection spans from digitized copies of eighteenth-century journals through the latest open access conference proceedings and pre-prints crawled from the World Wide Web. General Index In 2021, the Internet Archive announced the initial version of the General Index, a publicly available index to a collection of 107 million academic journal articles. Items and collections The Archive stores files inside so-called items, which are similar to directories in that they can contain multiple files, but can have additional metadata such as a description and tags which make them more searchable. Some file types can be previewed directly on the site, whereas others have to be downloaded in order to be opened. If multiple multimedia files exist in an item, the website generates a playlist for video or audio files, or a slide show for pictures. If an item contains at least one video or picture, the Archive generates a preview thumbnail that can be seen on collection pages and in searches. Items can contain mixed data such as music files with an album cover picture, in which case the picture is used as the thumbnail. Staff members of the Internet Archive organize items by placing them into so-called collections, which are pages listing multiple items. 
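Items and their metadata can also be inspected programmatically. The sketch below uses the third-party internetarchive Python package (pip install internetarchive), the Archive's own Python and command-line client; "some-item-id" is a placeholder identifier, and which metadata fields exist varies from item to item.

```python
# Illustrative sketch: fetching an item's metadata and file listing with
# the `internetarchive` client. "some-item-id" is a placeholder.
from internetarchive import get_item

item = get_item("some-item-id")
print(item.metadata.get("title"))        # item-level metadata dictionary
print(item.metadata.get("collection"))   # collection(s) containing the item

for f in item.files:                     # one dict per file in the item
    print(f["name"], f.get("format"))
```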
Book collections Text collection The scanning performed by the Internet Archive is financially supported by libraries and foundations. When the collection held approximately 1 million texts, its total size was greater than 500 terabytes, which included raw camera images, cropped and deskewed images, PDFs, and raw OCR data. At one point, the Internet Archive was operating 33 scanning centers in five countries, digitizing about 1,000 books a day for a total of more than 2 million books, in a total collection of 4.4 million books, including material digitized by others and fed into the Internet Archive; at that time, users were performing more than 15 million downloads per month. The material digitized by others includes more than 300,000 books that were contributed to the collection, between about 2006 and 2008, by Microsoft through its Live Search Books project, which also included financial support and scanning equipment donated directly to the Internet Archive. On May 23, 2008, Microsoft announced it would be ending its Live Book Search project and would no longer be scanning books, donating its remaining scanning equipment to its former partners. Around October 2007, Archive users began uploading public domain books from Google Book Search. Eventually, there were more than 900,000 Google-digitized books in the Archive's collection; the books are identical to the copies found on Google, except without the Google watermarks, and are available for unrestricted use and download. Brewster Kahle revealed in 2013 that this archival effort was coordinated by Aaron Swartz, who, with a "bunch of friends", downloaded the public domain books from Google slowly enough and from enough computers to stay within Google's restrictions. They did this to ensure public access to the public domain. The Archive ensured the items were attributed and linked back to Google, which never complained, while libraries "grumbled". According to Kahle, this is an example of Swartz's "genius" to work on what could give the most to the public good for millions of people. In addition to books, the Archive offers free and anonymous public access to more than four million court opinions, legal briefs, and exhibits uploaded from the United States Federal Courts' PACER electronic document system via the RECAP web browser plugin. These documents had been kept behind a federal court paywall. On the Archive, they had been accessed by more than six million people by 2013. The Archive's BookReader web app, built into its website, has features such as single-page, two-page, and thumbnail modes; fullscreen mode; page zooming of high-resolution images; and flip page animation. In October 2024, the Internet Archive struck a deal with the Leiden University Library to accept the paper copies of 400,000 uncatalogued foreign dissertations held at the Library that were to be pulped, with a view to digitizing them and making them accessible online. The collection includes theses by Niels Bohr, Marie Curie, Émile Durkheim, Albert Einstein, Otto Hahn, Carl Jung, J. Robert Oppenheimer, Max Planck, Luigi Pirandello, Gustav Stresemann and Max Weber. Open Library The Open Library is another project of the Internet Archive. The project seeks to include a web page for every book ever published: it holds 25 million catalog records of editions.
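Those catalog records are retrievable through Open Library's public Books API; the following is a minimal sketch (the ISBN is an arbitrary example).

```python
# Minimal sketch: fetch an edition record from the Open Library Books API.
# Requires the third-party "requests" package; the ISBN is an arbitrary example.
import requests

bibkey = "ISBN:0451526538"
resp = requests.get(
    "https://openlibrary.org/api/books",
    params={"bibkeys": bibkey, "format": "json", "jscmd": "data"},
    timeout=30,
)
resp.raise_for_status()
record = resp.json().get(bibkey, {})
print(record.get("title"), "-", record.get("publish_date"))
```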
Open Library also seeks to be a web-accessible public library: it contains the full texts of approximately 1,600,000 public domain books (out of the more than five million in the main texts collection), as well as in-print and in-copyright books, many of which are fully readable, downloadable and full-text searchable. After free registration on the web site, it offers two-week loans of e-books in its controlled digital lending program for over 647,784 books not in the public domain, in partnership with over 1,000 library partners from six countries. Open Library is a free and open-source software project, with its source code freely available on GitHub. The Open Library faces objections from some authors and the Society of Authors, who hold that the project is distributing books without authorization and is thus in violation of copyright laws, and four major publishers initiated a copyright infringement lawsuit against the Internet Archive in June 2020 to stop the Open Library project. Digitizing sponsors for books Many large institutional sponsors have helped the Internet Archive provide millions of scanned publications (text items). Some sponsors that have digitized large quantities of texts include the University of Toronto's Robarts Library, University of Alberta Libraries, University of Ottawa, Library of Congress, Boston Library Consortium member libraries, Boston Public Library, Princeton Theological Seminary Library, and many others. In 2017, the MIT Press authorized the Internet Archive to digitize and lend books from the press's backlist, with financial support from the Arcadia Fund. A year later, the Internet Archive received further funding from the Arcadia Fund to invite other university presses to partner with the Internet Archive to digitize books, a project called "Unlocking University Press Books". The Library of Congress created numerous Handle System identifiers that pointed to free digitized books in the Internet Archive. The Internet Archive and Open Library are listed on the Library of Congress website as a source of e-books. Media collections In addition to web archives, the Internet Archive maintains extensive collections of digital media that are attested by the uploader to be in the public domain in the United States or licensed under a license that allows redistribution, such as Creative Commons licenses. Media are organized into collections by media type (moving images, audio, text, etc.), and into sub-collections by various criteria. Each of the main collections includes a "Community" sub-collection (formerly named "Open Source") where general contributions by the public are stored. Audio Audio Archive The Audio Archive includes music, audiobooks, news broadcasts, old time radio shows, podcasts, and a wide variety of other audio files. The collection holds more than 15,000,000 free digital recordings. The subcollections include audio books and poetry, podcasts, non-English audio, and many others. The sound collections are curated by B. George, director of the ARChive of Contemporary Music. Digital Library of Amateur Radio and Communications The Digital Library of Amateur Radio and Communications is a project to preserve recordings of amateur radio transmissions, with funding from the Amateur Radio Digital Communications foundation.
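These audio collections can be searched programmatically through the Archive's public search API. A minimal sketch with the community internetarchive client follows; the query uses standard search facets (mediatype, collection), and "etree" is the identifier of the Live Music Archive collection described below.

```python
# Minimal sketch: list the first few identifiers in an audio collection
# using the community "internetarchive" client. "etree" is the Live Music
# Archive's collection identifier; swap in any other collection name.
from itertools import islice
from internetarchive import search_items

results = search_items("mediatype:audio AND collection:etree")
for result in islice(results, 10):
    print(result["identifier"])
```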
Live Music Archive The Live Music Archive sub-collection includes more than 170,000 concert recordings from independent musicians, as well as more established artists and musical ensembles with permissive rules about recording their concerts, such as the Grateful Dead and, more recently, The Smashing Pumpkins. Also, Jordan Zevon has allowed the Internet Archive to host a definitive collection of his father Warren Zevon's concert recordings. The Zevon collection ranges from 1976 to 2001 and contains 126 concerts including 1,137 songs. The Great 78 Project The Great 78 Project aims to digitize 250,000 78 rpm singles (500,000 songs) from the period between 1880 and 1960, donated by various collectors and institutions. It has been developed in collaboration with the Archive of Contemporary Music and George Blood Audio, the latter responsible for the audio digitization. Netlabels The Archive has a collection of freely distributable music that is streamed and available for download via its Netlabels service. This collection holds the catalogs of virtual record labels ("netlabels"), which generally release their music under Creative Commons licenses. Images collection This collection contains more than 3.5 million items. Cover Art Archive, Metropolitan Museum of Art – Gallery Images, NASA Images, Occupy Wall Street Flickr Archive, and USGS Maps are some sub-collections of the Images collection. Cover Art Archive The Cover Art Archive is a joint project between the Internet Archive and MusicBrainz, whose goal is to make cover art images freely available on the Internet. The collection contains more than 1,400,000 items. Metropolitan Museum of Art images The images of this collection are from the Metropolitan Museum of Art. This collection contains more than 140,000 items. NASA Images The NASA Images archive was created through a Space Act Agreement between the Internet Archive and NASA to bring public access to NASA's image, video, and audio collections in a single, searchable resource. The Internet Archive NASA Images team worked closely with all of the NASA centers to keep adding to the ever-growing collection. The nasaimages.org site launched in July 2008 and had more than 100,000 items online at the end of its hosting in 2012. Occupy Wall Street Flickr archive This collection contains Creative Commons-licensed photographs from Flickr related to the Occupy Wall Street movement. This collection contains more than 15,000 items. USGS Maps This collection contains more than 59,000 items from the Libre Map Project. Machinima Archive One of the sub-collections of the Internet Archive's Video Archive is the Machinima Archive. This small section hosts many machinima videos. Machinima is a digital art form in which computer games, game engines, or software engines are used in a sandbox-like mode to create motion pictures, recreate plays, or even publish presentations or keynotes. The archive collects a range of machinima films from internet publishers such as Rooster Teeth and Machinima.com as well as independent producers. The sub-collection is a collaborative effort among the Internet Archive, the How They Got Game research project at Stanford University, the Academy of Machinima Arts and Sciences, and Machinima.com. Microfilm collection This collection contains approximately 160,000 microfilmed items from a variety of libraries, including the University of Chicago Libraries, the University of Illinois at Urbana-Champaign, the University of Alberta, the Allen County Public Library, and the National Technical Information Service.
Moving image collection The Internet Archive holds a collection of 3,863 feature films. Additionally, the Internet Archive's Moving Image collection includes newsreels, classic cartoons, pro- and anti-war propaganda, The Video Cellar Collection, Skip Elsheimer's "A.V. Geeks" collection, early television, and ephemeral material from Prelinger Archives, such as advertising, educational, and industrial films, as well as amateur and home movie collections. Subcategories of this collection include: IA's Brick Films collection, which contains stop-motion animation filmed with Lego bricks, some of which are "remakes" of feature films. IA's Election 2004 collection, a non-partisan public resource for sharing video materials related to the 2004 United States presidential election. IA's FedFlix collection, a joint venture (NTIS-1832) between the National Technical Information Service and Public.Resource.Org that features "the best movies of the United States Government, from training films to history, from our national parks to the U.S. Fire Academy and the Postal Inspectors". IA's Independent News collection, which includes sub-collections such as the Internet Archive's World At War competition from 2001, in which contestants created short films demonstrating "why access to history matters". Among their most-downloaded video files are eyewitness recordings of the devastating 2004 Indian Ocean earthquake. IA's September 11 Television Archive, which contains archival footage from the world's major television networks of the terrorist attacks of September 11, 2001, as they unfolded on live television. Open Educational Resources Open Educational Resources is a digital collection at archive.org. This collection contains hundreds of free courses, video lectures, and supplemental materials from universities in the United States and China. Contributors to this collection include ArsDigita University, the Hewlett Foundation, MIT, the Monterey Institute, and Naropa University. TV News Search & Borrow In September 2012, the Internet Archive launched the TV News Search & Borrow service for searching U.S. national news programs. The service is built on closed captioning transcripts and allows users to search and stream 30-second video clips. Upon launch, the service contained "350,000 news programs collected over 3 years from national U.S. networks and stations in San Francisco and Washington D.C." According to Kahle, the service was inspired by the Vanderbilt Television News Archive, a similar library of televised network news programs. In contrast to Vanderbilt, which limits access to streaming video to individuals associated with subscribing colleges and universities, TV News Search & Borrow allows open access to its streaming video clips. In 2013, the Archive received an additional donation of "approximately 40,000 well-organized tapes" from the estate of a Philadelphia woman, Marion Stokes. Stokes "had recorded more than 35 years of TV news in Philadelphia and Boston with her VHS and Betamax machines." Miscellaneous collections The Brooklyn Museum collection contains approximately 3,000 items from the Brooklyn Museum. In December 2020, the film research library of Lillian Michelson was donated to the archive. Other services and endeavors Physical media Voicing a strong reaction to the idea of books simply being thrown away, and inspired by the Svalbard Global Seed Vault, Kahle now envisions collecting one copy of every book ever published. "We're not going to get there, but that's our goal", he said.
Alongside the books, Kahle plans to store the Internet Archive's old servers, which were replaced in 2010. Software The Internet Archive has "the largest collection of historical software online in the world", spanning 50 years of computer history in terabytes of computer magazines and journals, books, shareware discs, FTP sites, video games, etc. The Internet Archive has created an archive of what it describes as "vintage software" as a way to preserve it. The project advocated for an exemption from the United States Digital Millennium Copyright Act to permit bypassing copy protection, which the United States Copyright Office approved in 2003 for a period of three years. The Archive does not offer the software for download, as the exemption is solely "for the purpose of preservation or archival reproduction of published digital works by a library or archive." The Library of Congress renewed the exemption in 2006, and in 2009 indefinitely extended it pending further rulemakings. The Library reiterated the exemption as a "Final Rule" with no expiration date in 2010. In 2013, the Internet Archive began to make select video games browser-playable via MESS, for instance the Atari 2600 game E.T. the Extra-Terrestrial. Since December 23, 2014, the Internet Archive has presented, via browser-based DOSBox emulation, thousands of DOS/PC games for "scholarship and research purposes only". In November 2020, the Archive introduced a new emulator for Adobe Flash called Ruffle, and began archiving Flash animations and games ahead of the December 31, 2020, end-of-life for the Flash plugin across all computer systems. Table Top Scribe System A combined hardware and software system has been developed that digitizes content safely, without damaging the original material. Credit Union From 2012 to November 2015, the Internet Archive operated the Internet Archive Federal Credit Union, a federal credit union based in New Brunswick, New Jersey, with the goal of providing financial services to low- and middle-income people. Throughout its short existence, the IAFCU experienced significant conflicts with the National Credit Union Administration, which severely limited the IAFCU's loan portfolio, as well as concerns over serving Bitcoin firms. At the time of its dissolution, it had 395 members and was worth $2.5 million. Decentralization Since 2019, the Internet Archive has organized an annual event called Decentralized Web Camp (DWeb Camp), which brings together a diverse global community of contributors in a natural setting. The camp aims to tackle real-world challenges facing the web and to co-create decentralized technologies for a better internet, fostering collaboration, learning, and fun while promoting principles of trust, human agency, mutual respect, and ecological awareness. Wayforward Machine On 30 September 2021, as a part of its 25th anniversary celebration, Internet Archive launched the "Wayforward Machine", a satirical, fictional website covered with pop-ups asking for personal information. The site was intended to depict a fictional dystopian timeline of events leading to such a future, such as the repeal of Section 230 of the United States Code in 2022 and the introduction of advertising implants in 2041. Ceramic archivists collection The Great Room of the Internet Archive features a collection of more than 100 ceramic figures representing employees of the Internet Archive, with the 100th statue immortalizing Aaron Swartz.
This collection, inspired by the statues of the Xi'an warriors in China, was commissioned by Brewster Kahle, sculpted by Nuala Creed, and, as of 2014, was ongoing. Artists in residence The Internet Archive visual arts residency, organized by Amir Saber Esfahani, is designed to connect emerging and mid-career artists with the millions of items in the Archive's collections and to show what is possible when open access to information intersects with the arts. During this one-year residency, selected artists develop a body of work that responds to and utilizes the Archive's collections in their own practice. 2019 Residency Artists: Caleb Duarte, Whitney Lynn, and Jeffrey Alan Scudder 2018 Residency Artists: Mieke Marple, Chris Sollars, and Taravat Talepasand 2017 Residency Artists: Laura Kim, Jeremiah Jenkins, and Jenny Odell Controversies, legal disputes, and activism Opposition to national security letters, bills, and settlements On May 8, 2008, it was revealed that the Internet Archive had successfully challenged an FBI national security letter asking for logs on an undisclosed user. On November 28, 2016, it was revealed that a second FBI national security letter, asking for logs on another undisclosed user, had also been successfully challenged. The Internet Archive blacked out its web site for 12 hours on January 18, 2012, in protest of the Stop Online Piracy Act and the PROTECT IP Act bills, two pieces of legislation in the United States Congress that it argued would "negatively affect the ecosystem of web publishing that led to the emergence of the Internet Archive". This occurred in conjunction with the English Wikipedia blackout, as well as numerous other protests across the Internet. The Internet Archive is a member of the Open Book Alliance, which has been among the most outspoken critics of the Google Book Settlement; the Archive advocates an alternative digital library project. Hosting of disputed media On October 9, 2016, the Internet Archive was temporarily blocked in Turkey after it was used (among other file hosting services) by hackers to host 17 GB of leaked government emails. Because the Internet Archive only lightly moderates uploads, it includes resources that may be valued by extremists, and the site may be used by them to evade block listing. In February 2018, the Counter Extremism Project said that the Archive hosted terrorist videos, including the beheading of Alan Henning, and had declined to respond to requests about the videos. In May 2018, a report published by the cyber-security firm Flashpoint stated that the Islamic State was using the Internet Archive to share its propaganda. Chris Butler, from the Internet Archive, responded that they regularly spoke to the US and EU governments about sharing information on terrorism. In April 2019, Europol, acting on a referral from French police, asked the Internet Archive to remove 550 sites of "terrorist propaganda". The Archive rejected the request, saying that the reports were wrong about the content they pointed to, or were too broad for the organization to comply with. On July 14, 2021, the Internet Archive held a joint "Referral Action Day" with Europol to target terrorist videos. A 2021 article said that jihadists regularly used the Internet Archive for "dead drops" of terrorist videos. In January 2022, a former UCLA lecturer's 800-page manifesto, containing racist ideas and threats against UCLA staff, was uploaded to the Internet Archive.
The manifesto was removed by the Internet Archive after a week, amid discussion about whether archivists should preserve such documents. Another 2022 paper found "an alarming volume of terrorist, extremist, and racist material on the Internet Archive". A 2023 paper reported that neo-Nazis collect links to online, publicly available resources to be shared with new recruits. As the Internet Archive hosts uploaded texts that are not allowed on other websites, Nazi and neo-Nazi books in the Archive (e.g., The Turner Diaries) frequently appear on these lists. These lists also feature older, public domain material created when white supremacist views were more mainstream. 2020 National Emergency Library In the midst of the COVID-19 pandemic, which closed many schools, universities, and libraries, the Archive announced on March 24, 2020, that it was creating the National Emergency Library by removing the lending restrictions it had in place for 1.4 million digitized books in its Open Library, while still limiting the number of books users could check out and enforcing their return; normally, the site would allow only one digital loan for each physical copy of a book it held, by use of an encrypted file that would become unusable after the lending period was completed. The National Emergency Library was to remain in place until at least June 30, 2020, or until the US national emergency was over, whichever came later. At launch, the Internet Archive allowed authors and rightsholders to submit opt-out requests for their works to be omitted from the National Emergency Library. The Internet Archive said the National Emergency Library addressed an "unprecedented global and immediate need for access to reading and research material" due to the closures of physical libraries worldwide. It justified the move in a number of ways. Legally, it said that promoting access to these otherwise inaccessible resources was an exercise of fair use principles; the Archive continued implementing the controlled digital lending policy that predated the National Emergency Library, meaning it still encrypted the lent copies, and it was no easier for users to create new copies of the books than before. An ultimate determination of whether or not the National Emergency Library constituted fair use could only be made by a court. Morally, it also pointed out that the Internet Archive was a registered library like any other, that it either paid for the books itself or received them as donations, and that lending through libraries predated copyright restrictions. The Archive had already been criticized by authors and publishers for its prior lending approach, and upon announcement of the National Emergency Library, authors, publishers, and groups representing both took further issue with the Archive and its Open Library project, equating the move to copyright infringement and digital piracy and accusing the Archive of using the COVID-19 pandemic as a reason to push the boundaries of copyright. After the works of some of these authors were ridiculed in responses, the Internet Archive's Jason Scott requested that supporters of the National Emergency Library not denigrate anyone's books: "I realize there's strong debate and disagreement here, but books are life-giving and life-changing and these writers made them." Copyright issues In November 2005, free downloads of Grateful Dead concerts were removed from the site, following what seemed to be disagreements between some of the former band members.
John Perry Barlow identified Bob Weir, Mickey Hart, and Bill Kreutzmann as the instigators of the change, according to an article in The New York Times. Phil Lesh, a founding member of the band, commented on the change in a November 30, 2005, posting to his personal web site. A November 30 forum post from Brewster Kahle summarized what appeared to be the compromise reached among the band members: audience recordings could be downloaded or streamed, but soundboard recordings were to be available for streaming only. Concerts have since been re-added. In February 2016, Internet Archive users began archiving digital copies of Nintendo Power, Nintendo's official magazine for their games and products, which ran from 1988 to 2012. The first 140 issues had been collected before Nintendo had the archive removed on August 8, 2016. In response to the take-down, Nintendo told gaming website Polygon, "[Nintendo] must protect our own characters, trademarks and other content. The unapproved use of Nintendo's intellectual property can weaken our ability to protect and preserve it, or to possibly use it for new projects". In August 2017, the Department of Telecommunications of the Government of India blocked the Internet Archive along with other file-sharing websites, in accordance with two court orders issued by the Madras High Court, citing piracy concerns after copies of two Bollywood films were allegedly shared via the service. The HTTP version of the Archive was blocked, but it remained accessible using the HTTPS protocol. In 2023, the Internet Archive became a popular site for Indians to watch the first episode of India: The Modi Question, a BBC documentary released on January 17 and banned in India by January 20. The video was reported to have been removed by the Archive on January 23. The Internet Archive then stated, on January 27, that it had removed the video in response to a BBC request under the Digital Millennium Copyright Act. Book publishers' lawsuit The operation of the National Emergency Library was among the practices challenged in a lawsuit filed against the Internet Archive by four major book publishers—Hachette, HarperCollins, John Wiley & Sons, and Penguin Random House—in June 2020, contesting the legality of the controlled digital lending program. In response to the lawsuit, the Internet Archive closed the National Emergency Library on June 16, 2020, rather than on the planned June 30, 2020. The plaintiffs, supported by the Copyright Alliance, claimed in their lawsuit that the Internet Archive's actions constituted "willful mass copyright infringement." U.S. District Judge John G. Koeltl ruled on March 24, 2023, against the Internet Archive in the case, saying the National Emergency Library concept was not fair use, so the Archive infringed the publishers' copyrights by lending out the books without the waitlist restriction. An agreement was then reached for the Internet Archive to pay an undisclosed amount to the publishers. The Internet Archive appealed the ruling. On September 4, 2024, the U.S. Court of Appeals for the Second Circuit upheld the district court's ruling, calling the Internet Archive's argument that it was shielded by the fair use doctrine "unpersuasive". Music publishers' lawsuit In August 2023, the music industry corporations Universal Music Group (UMG), Sony Music and Concord sued the Internet Archive over its Great 78 Project, asserting that the project was engaged in copyright infringement.
The Great 78 Project stores digitized versions of pre-1972 songs and albums from 78 rpm phonograph records, for "the preservation, research and discovery of 78rpm records." The project had started in 2016, when pre-1972 recordings were not covered by federal copyright; in 2018, the U.S. Congress passed the Music Modernization Act (MMA), which enabled legal remedies for unauthorized use of pre-1972 recordings until 2067, thus effectively covering them with copyright. UMG and Sony had been the two largest companies in this sector for more than a decade, with respective market shares of 31.8% and 22.1% in 2023. Concord was a rapidly expanding music business, closely partnered with UMG since its transformation into Concord Music Group in 2004 and backed since at least 2000 by J.P. Morgan. It was the first music company to perform an asset-backed securitization, led by Apollo Global Management, in December 2022. Its assets consisted of over 1 million copyrights to music older than 18 months. According to its CEO Bob Valentine, Concord derived about 85% of its revenue "from catalog, rather than newly-developed, music". As Valentine stated in his first interview, "The phenomenon of artists' IP has never been more liquid; it is now a real and proven asset class. Investment bankers are focused on it, financiers are financing it, and then there's entities like us, that know how to buy rights, but also know how to manage them and have the relationships to do so." The share of catalog music in total album-equivalent consumption in the United States rose from 62.8% to 72.6% between 2019 and 2023. The publishers are seeking statutory damages for the 4,142 recordings named in the suit, with a maximum possible fine of $621 million (a figure consistent with the $150,000 statutory maximum per willfully infringed work). The Internet Archive has argued that digitizing the original recordings, with their primitive sound quality, for preservation falls within the doctrine of "fair use"; that the number of downloads is so small it has almost no impact on the publishers' revenue; and that over 95% of the collection is not readily available anywhere else. The plaintiffs said in response, "if ever there were a theory of fair use invented for litigation, this is it." According to a legal source at Mayer Brown, the music publishers' case could be challenged as unconstitutional, since the granting of copyright to pre-1972 works in the MMA only benefitted record companies without having a systemic effect. See also List of online image archives Public domain music Similar projects archive.today Internet Memory Foundation LibriVox National Digital Information Infrastructure and Preservation Program (NDIIPP) National Digital Library Program (NDLP) Project Gutenberg UK Government Web Archive at The National Archives (United Kingdom) UK Web Archive WebCite Other Anna's Archive Archive Team Digital dark age Digital preservation Heritrix Library Genesis Link rot List of web archives Memory hole PetaBox Search engine cache Notes References Further reading External links Internet Archive Scholar 1996 establishments in California 1996 in San Francisco 501(c)(3) organizations Access to Knowledge movement Articles containing video clips Charities based in California Foundations based in the United States Internet properties established in 1996 Online archives of the United States Organizations established in 1996 Public libraries in California Richmond District, San Francisco Sound archives Web archiving initiatives Tor onion services Webby Award winners File sharing communities Libraries established in 1996
Internet Archive
[ "Technology" ]
8,859
[ "File sharing communities", "Computing websites" ]
176,997
https://en.wikipedia.org/wiki/Blindsight
Blindsight is the ability of people who are cortically blind to respond to visual stimuli that they do not consciously see due to lesions in the primary visual cortex, also known as the striate cortex or Brodmann Area 17. The term was coined by Lawrence Weiskrantz and his colleagues in a paper published in a 1974 issue of Brain. A previous paper studying the discriminatory capacity of a cortically blind patient was published in Nature in 1973. The assumed existence of blindsight is controversial, with some arguing that it is merely degraded conscious vision. Type classification The majority of studies on blindsight are conducted on patients who are hemianopic, i.e. blind in one-half of their visual field. Following the destruction of the left or right striate cortex, patients are asked to detect, localize, and discriminate amongst visual stimuli that are presented to their blind side, often in a forced-response or guessing situation, even though they may not consciously recognize the visual stimulus. Research shows that such blind patients may achieve a higher accuracy than would be expected from chance alone. Type 1 blindsight is the term given to this ability to guess—at levels significantly above chance—aspects of a visual stimulus (such as location or type of movement) without any conscious awareness of any stimuli. Type 2 blindsight occurs when patients claim to have a feeling that there has been a change within their blind area—e.g. movement—but that it was not a visual percept. The re-classification of blindsight into Type 1 and Type 2 was made after it was shown that the most celebrated blindsight patient, "GY", was usually conscious of stimuli presented to his blind field if the stimuli had certain specific characteristics, namely being of high contrast and moving fast (at speeds in excess of 20 degrees per second). In the aftermath of the First World War, a neurologist, George Riddoch, had described patients who had been blinded by gunshot wounds to V1, who could not see stationary objects but who were, as he reported, "conscious" of seeing moving objects in their blind field. It is for this reason that the phenomenon has more recently also been called the Riddoch syndrome. Since then it has become apparent that such subjects can also become aware of visual stimuli belonging to other visual domains, such as color and luminance, when presented to their blind fields. The ability of such hemianopic subjects to become consciously aware of stimuli presented to their blind field is also commonly referred to as "residual" or "degraded" vision. As originally defined, blindsight challenged the common belief that perceptions must enter consciousness to affect our behavior, by showing that our behavior can be guided by sensory information of which we have no conscious awareness. Since the demonstration that blind patients can experience some visual stimuli consciously, and the consequent redefinition of blindsight into Type 1 and Type 2, a more nuanced view of the phenomenon has developed. Blindsight may be thought of as a converse of the form of anosognosia known as Anton syndrome, in which there is full cortical blindness along with the confabulation of visual experience. History Much of our current understanding of blindsight can be attributed to early experiments on monkeys. One monkey, named Helen, could be considered the "star monkey in visual research" because she was the original blindsight subject. 
Helen was a macaque monkey that had been decorticated; specifically, her primary visual cortex (V1) was completely removed, blinding her. Nevertheless, under certain specific situations, Helen exhibited sighted behavior. Her pupils would dilate and she would blink at stimuli that threatened her eyes. Furthermore, under certain experimental conditions, she could detect a variety of visual stimuli, such as the presence and location of objects, as well as shape, pattern, orientation, motion, and color. In many cases, she was able to navigate her environment and interact with objects as if she were sighted. A similar phenomenon was also discovered in humans. Subjects who had suffered damage to their visual cortices due to accidents or strokes reported partial or total blindness. Despite this, when prompted they could "guess" the presence and details of objects with above-chance accuracy and, much like animal subjects, could catch objects tossed at them. The subjects never developed any kind of confidence in their abilities. Even when told of their successes, they would not begin to spontaneously make "guesses" about objects, but instead still required prompting. Furthermore, blindsight subjects rarely express the amazement about their abilities that sighted people would expect them to express. Describing blindsight Patients with blindsight have damage to the system that produces visual perception (the visual cortex of the brain and some of the nerve fibers that bring information to it from the eyes) rather than to the underlying brain system controlling eye movements. The phenomenon was originally thought to show how, after the more complex perception system is damaged, people can use the underlying control system to guide hand movements towards an object even though they cannot see what they are reaching for. Hence, visual information can control behavior without producing a conscious sensation. This ability of those with blindsight to act as if able to see objects that they are unconscious of suggested that consciousness is not a general property of all parts of the brain, but is produced by specialized parts of it. Blindsight patients show awareness of single visual features, such as edges and motion, but cannot gain a holistic visual percept. This suggests that perceptual awareness is modular and that—in sighted individuals—there is a "binding process that unifies all information into a whole percept", which is interrupted in patients with such conditions as blindsight and visual agnosia. Therefore, object identification and object recognition are thought to be separate processes and occur in different areas of the brain, working independently from one another. The modular theory of object perception and integration would account for the "hidden perception" experienced in blindsight patients. Research has shown that visual stimuli with the single visual features of sharp borders, sharp onset/offset times, motion, and low spatial frequency contribute to, but are not strictly necessary for, an object's salience in blindsight. Cause There are three theories for the explanation of blindsight. The first states that after damage to area V1, other branches of the optic nerve deliver visual information to the superior colliculus, pulvinar and several other areas, including parts of the cerebral cortex. In turn, these areas might then control the blindsight responses.
Another explanation for the phenomenon of blindsight is that even though the majority of a person's visual cortex may be damaged, tiny islands of functioning tissue remain. These islands are not large enough to provide conscious perception, but nevertheless enough for some unconscious visual perception. A third theory is that the information required to determine the distance to and velocity of an object in object space is determined by the lateral geniculate nucleus (LGN) before the information is projected to the visual cortex. In a normal subject, these signals are used to merge the information from the eyes into a three-dimensional representation (which includes the position and velocity of individual objects relative to the organism), extract a vergence signal to benefit the precision (previously auxiliary) optical system, and extract a focus control signal for the lenses of the eyes. The stereoscopic information is attached to the object information passed to the visual cortex. More recently, with the demonstration of a direct input from the LGN to area V5 (MT), which delivers signals from fast-moving stimuli at latencies of about 30 ms, another explanation has emerged. This one proposes that the delivery of these signals is sufficient to arouse a conscious experience of fast visual motion, without implying that it is V5 alone that is responsible, since once signals reach V5, they may be propagated to other areas of the brain. The latter account would seem to exclude the possibility that signals are "pre-processed" by V1 or "post-processed" by it (through return connections from V5 back to V1), as has been suggested. The pulvinar nucleus of the thalamus also sends direct, V1-bypassing signals to V5, but their precise role in generating a conscious visual experience of motion has not yet been determined. Evidence of blindsight can be indirectly observed in children as young as two months, although there is difficulty in determining the type in a patient who is not old enough to answer questions. Evidence in animals In a 1995 experiment, researchers attempted to show that monkeys with lesioned or even wholly removed striate cortices also experienced blindsight. To study this, they had the monkeys complete tasks similar to those commonly used for human subjects. The monkeys were placed in front of a monitor and taught to indicate whether a stationary object or nothing was present in their visual field when a tone was played. Then the monkeys performed the same task except the stationary objects were presented outside of their visual field. The monkeys performed very similarly to human participants and were unable to perceive the presence of stationary objects outside of their visual field. Another 1995 study by the same group sought to prove that monkeys could also be conscious of movement in their deficit visual field despite not being consciously aware of the presence of an object there. To do this, researchers used another standard test for humans, which was similar to the previous study except that moving objects were presented in the deficit visual field. Starting from the center of the deficit visual field, the object would either move up, down, or to the right. The monkeys performed identically to humans on the test, responding correctly almost every time.
This showed that the monkeys' ability to detect movement is separate from their ability to consciously detect an object in their deficit visual field, and gave further evidence for the claim that damage to the striate cortex plays a large role in causing the disorder. Several years later, another study compared and contrasted the data collected from monkeys with that of a specific human patient with blindsight, GY. GY's striate cortical region was damaged through trauma at the age of eight; though for the most part he retained full functionality, GY was not consciously aware of anything in his right visual field. In the monkeys, the striate cortex of the left hemisphere was surgically removed. By comparing the test results of both GY and the monkeys, the researchers concluded that similar patterns of responses to stimuli in the "blind" visual field can be found in both species. Research Lawrence Weiskrantz and colleagues showed in the early 1970s that if forced to guess about whether a stimulus is present in their blind field, some observers do better than chance. This ability to detect stimuli that the observer is not conscious of can extend to discrimination of the type of stimulus (for example, whether an 'X' or 'O' has been presented in the blind field). Electrophysiological evidence from the late 1970s had shown that there is no direct retinal input from S-cones to the superior colliculus, implying that the perception of color information should be impaired. However, more recent evidence points to a pathway from S-cones to the superior colliculus, opposing previous research and supporting the idea that some chromatic processing mechanisms are intact in blindsight. Patients shown images of people expressing emotions on their blind side correctly guessed the emotion most of the time. The movement of facial muscles used in smiling and frowning was measured, and these muscles reacted in ways that matched the kind of emotion in the unseen image. Therefore, the emotions were recognized without involving conscious sight. A 2011 study found that a young woman with a unilateral lesion of area V1 could scale her grasping movement as she reached out to pick up objects of different sizes placed in her blind field, even though she could not report the sizes of the objects. Similarly, another patient with a unilateral lesion of area V1 could avoid obstacles placed in his blind field when he reached toward a target that was visible in his intact visual field. Even though he avoided the obstacles, he never reported seeing them. A study reported in 2008 asked patient GY to misstate where in his visual field a distinctive stimulus was presented. If the stimulus was in the upper part of his visual field, he was to say it was in the lower part, and vice versa. He was able to misstate, as requested, stimuli in his left visual field (where he has normal conscious vision), but he tended to fail at the task—reporting the location correctly instead—when the stimulus was in his blindsight (right) visual field. This failure rate worsened when the stimulus was clearer, indicating that the failure was not simply due to the unreliability of blindsight. Case studies "DB" Researchers applied the same type of tests that were used to study blindsight in animals to a patient referred to as "DB". The normal techniques used to assess visual acuity in humans involved asking them to verbally describe some visually recognizable aspect of an object or objects. DB was given forced-choice tasks to complete instead.
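In such forced-choice paradigms, "better than chance" is a statistical claim, typically tested against the guessing rate. As an illustrative sketch (the trial counts are invented, not DB's actual data), a one-sided binomial test gives the probability of observing at least a given number of correct responses under pure guessing:

```python
# Illustrative sketch of how above-chance forced-choice performance is
# assessed. The counts below are invented for illustration, not DB's data.
# Requires SciPy >= 1.7 for binomtest.
from scipy.stats import binomtest

n_trials = 200    # two-alternative forced-choice trials ('X' or 'O')
n_correct = 130   # hypothetical number of correct guesses

# Chance performance for two alternatives is 50%; test for accuracy above it.
result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"accuracy = {n_correct / n_trials:.2f}, p = {result.pvalue:.2g}")
```

A small p-value here indicates performance unlikely to arise from guessing alone, which is the sense in which blindsight patients are said to "do better than chance".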
The results of DB's guesses showed that DB was able to determine shape and detect movement at some unconscious level, despite not being visually aware of this. DB himself chalked the accuracy of his guesses up to mere coincidence. The discovery of the condition known as blindsight raised questions about how different types of visual information, even unconscious information, may be affected, and sometimes even unaffected, by damage to different areas of the visual cortex. Previous studies had already demonstrated that even without conscious awareness of visual stimuli, humans could still determine certain visual features such as presence in the visual field, shape, orientation, and movement. A newer study, however, provided evidence that if damage to the visual cortex occurs in areas above the primary visual cortex, conscious awareness of visual stimuli itself is not damaged. Blindsight shows that even when the primary visual cortex is damaged or removed, a person can still perform actions guided by unconscious visual information. Despite damage occurring in the area necessary for conscious awareness of visual information, other functions of the processing of these visual percepts are still available to the individual. The same also goes for damage to other areas of the visual cortex: if an area of the cortex that is responsible for a certain function is damaged, only that particular function or aspect is lost, while functions that other parts of the visual cortex are responsible for remain intact. Alexander and Cowey Alexander and Cowey investigated how contrasting stimulus brightness affects blindsight patients' ability to discern movement. Prior studies had already shown that blindsight patients are able to detect motion even though they claim they do not see any visual percepts in their blind fields. The study subjects were two patients who suffered from hemianopsia, blindness in half of their visual field. Both subjects had previously displayed the ability to accurately determine the presence of visual stimuli in their blind hemifields without acknowledging an actual visual percept. To test the effect of brightness on the subjects' ability to determine motion, the researchers used a white background with a series of colored dots. The contrast between the brightness of the dots and the white background was altered in each trial, to determine whether the participants performed better or worse when there was a larger discrepancy in brightness. The subjects focused on the display for two equal-length time intervals and were asked whether they thought the dots were moving during the first or the second interval. When the contrast in brightness between the background and the dots was higher, both of the subjects could discern motion more accurately than they would have statistically through guesswork. However, one subject was not able to accurately determine whether blue dots were moving regardless of the brightness contrast, though he or she was able to do so with dots of every other color. When the contrast was highest, subjects were able to tell whether the dots were moving with very high rates of accuracy. Even when the dots were white, but still of a different brightness from the background, subjects could still determine whether they were moving. But, regardless of the dots' color, subjects could not tell when they were in motion when the white background and the dots were of similar brightness.
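The "brightness contrast" varied in displays like this is conventionally quantified as luminance contrast; whether Alexander and Cowey used this exact measure is not stated here, but the standard Michelson definition illustrates the quantity involved:

```latex
% Michelson contrast between the dots and the background, where
% L_max and L_min are the larger and smaller of the two luminances:
C = \frac{L_{\max} - L_{\min}}{L_{\max} + L_{\min}}, \qquad 0 \le C \le 1.
```

On this definition, C approaches 0 as the dot and background luminances converge, the condition under which both subjects' motion discrimination fell to chance, and approaches 1 at the maximal luminance difference, where discrimination was most accurate.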
Kentridge, Heywood, and Weiskrantz Kentridge, Heywood, and Weiskrantz used the phenomenon of blindsight to investigate the connection between visual attention and visual awareness. They wanted to see if their subject, who had exhibited blindsight in other studies, could react more quickly when their attention was cued, without the ability to be visually aware of the cue. The researchers aimed to show that being conscious of a stimulus and paying attention to it are not the same thing. To test the relationship between attention and awareness, they had the participant try to determine where a target was and whether it was oriented horizontally or vertically on a computer screen. The target line would appear at one of two different locations and would be oriented in one of two directions. Before the target appeared, an arrow would become visible on the screen, sometimes pointing to the correct position of the target line and less frequently not; this arrow was the cue for the subject. The participant would press a key to indicate whether the line was horizontal or vertical, and could then also indicate to an observer whether they had any feeling that an object was there, even if they could not see anything. The participant was able to accurately determine the orientation of the line when the target was cued by an arrow before its appearance, even though the cue produced no visual awareness in the subject, who had no vision in that area of the visual field. The study showed that even without the ability to be visually aware of a stimulus, the participant could still focus their attention on it. "TN" In 2003, a patient known as "TN" lost use of his primary visual cortex, area V1. He had two successive strokes, which knocked out the region in both his left and right hemispheres. After his strokes, ordinary tests of TN's sight turned up nothing. He could not even detect large objects moving right in front of his eyes. Researchers eventually began to notice that TN exhibited signs of blindsight, and in 2008 they decided to test their theory. They took TN into a hallway and asked him to walk through it without using the cane he always carried after having the strokes. TN was not aware at the time, but the researchers had placed various obstacles in the hallway to test if he could avoid them without conscious use of his sight. To the researchers' delight, he moved around every obstacle with ease, at one point even pressing himself up against the wall to squeeze past a trashcan placed in his way. After navigating the hallway, TN reported that he was just walking the way he wanted to, not because he knew anything was there. "Mr. J" In another case study, a girl brought her grandfather in to see a neuropsychologist. The girl's grandfather, Mr. J., had suffered a stroke that had left him completely blind apart from a tiny spot in the middle of his visual field. The neuropsychologist, Dr. M., performed an exercise with him. The doctor helped Mr. J. to a chair, had him sit down, and then asked to borrow his cane. The doctor then asked, "Mr. J., please look straight ahead. Keep looking that way, and don't move your eyes or turn your head. I know that you can see a little bit straight ahead of you, and I don't want you to use that piece of vision for what I'm going to ask you to do. Fine. Now, I'd like you to reach out with your right hand [and] point to what I'm holding." Mr. J. then replied, "But I don't see anything—I'm blind!"
The doctor then said, "I know, but please try, anyway." Mr. J. then shrugged and pointed, and was surprised when his finger encountered the end of the cane, which the doctor was pointing toward him. After this, Mr. J. said that "it was just luck". The doctor then turned the cane around so that the handle side was pointing towards Mr. J. and asked him to grab hold of the cane. Mr. J. reached out with an open hand and grabbed hold of the cane. After this, the doctor said, "Good. Now put your hand down, please." The doctor then rotated the cane 90 degrees, so that the handle was oriented vertically, and asked Mr. J. to reach for the cane again. Mr. J. did this, turning his wrist so that his hand matched the orientation of the handle. This case study shows that, although (on a conscious level) Mr. J. was completely unaware of any visual abilities he may have had, he was able to orient his grasping motions as if he had no visual impairments. Other cases Other reported cases include the patients SL, GY, and GR. Brain regions involved Destruction of the primary visual cortex leads to blindness in the part of the visual field that corresponds to the damaged cortical representation. The area of blindness – known as a scotoma – is in the visual field opposite the damaged hemisphere and can vary from a small area up to the entire hemifield. Visual processing occurs in the brain in a hierarchical series of stages (with much crosstalk and feedback between areas). The route from the retina through V1 is not the only visual pathway into the cortex, though it is by far the largest; it is commonly thought that the residual performance of people exhibiting blindsight is due to preserved pathways into the extrastriate cortex that bypass V1. However, both physiological evidence in monkeys and behavioral and imaging evidence in humans show that activity in these extrastriate areas, and especially in V5, is apparently sufficient to support visual awareness in the absence of V1. In more detail, recent physiological findings suggest that visual processing takes place along several independent, parallel pathways. One system processes information about shape, one about color, and one about movement, location and spatial organization. This information moves through an area of the brain called the lateral geniculate nucleus, located in the thalamus, and on to be processed in the primary visual cortex, area V1 (also known as the striate cortex because of its striped appearance). People with damage to V1 report no conscious vision, no visual imagery, and no visual images in their dreams. However, some of these people still experience the blindsight phenomenon, though this too is controversial, with some studies showing a limited amount of consciousness without V1 or projections relating to it. The superior colliculus and prefrontal cortex also have a major role in awareness of a visual stimulus. Lateral geniculate nucleus Mosby's Dictionary of Medicine, Nursing & Health Professions defines the LGN as "one of two elevations of the lateral posterior thalamus receiving visual impulses from the retina via the optic nerves and tracts and relaying the impulses to the calcarine (visual) cortex". What is seen in the left and right visual fields is taken in by each eye and brought back to the optic disc via the nerve fibers of the retina. From the optic disc, visual information travels through the optic nerve and into the optic chiasm.
Visual information then enters the optic tract and travels to four different areas of the brain, including the superior colliculus, the pretectum of the midbrain, the suprachiasmatic nucleus of the hypothalamus, and the lateral geniculate nucleus (LGN). Most axons from the LGN then travel to the primary visual cortex. Injury to the primary visual cortex, including lesions and other trauma, leads to the loss of visual experience. However, the residual vision that is left cannot be attributed to V1. According to Schmid et al., the "thalamic lateral geniculate nucleus has a causal role in V1-independent processing of visual information". This was found through experiments using fMRI during activation and inactivation of the LGN, measuring the contribution the LGN makes to visual experience in monkeys with a V1 lesion. The researchers concluded that the magnocellular system of the LGN is less affected by the removal of V1, which suggests that it is because of this system in the LGN that blindsight occurs. Furthermore, once the LGN was inactivated, virtually all of the extrastriate areas of the brain no longer showed a response on the fMRI. A quantitative assessment indicated that scotoma stimulation with the LGN intact produced fMRI activation of roughly 20% of that under normal conditions. This finding agrees with the information obtained from, and fMRI images of, patients with blindsight. The same study also supported the conclusion that the LGN plays a substantial role in blindsight: while injury to V1 does create a loss of vision, the LGN is less affected and may support the residual vision that remains, causing the "sight" in blindsight. Functional magnetic resonance imaging has also been employed to conduct brain scans in normal, healthy human volunteers to attempt to demonstrate that visual motion can bypass V1, through a connection from the LGN to the human middle temporal complex. The findings concluded that there was indeed a connection for visual motion information that went directly from the LGN to V5/hMT+, bypassing V1 completely. Evidence also suggests that, following a traumatic injury to V1, there is still a direct pathway from the retina through the LGN to the extrastriate visual areas. The extrastriate visual areas include parts of the occipital lobe that surround V1. In non-human primates, these often include V2, V3, and V4. In a study conducted in primates, after partial ablation of area V1, areas V2 and V3 were still excited by visual stimuli. Other evidence suggests that "the LGN projections that survive V1 removal are relatively sparse in density, but are nevertheless widespread and probably encompass all extrastriate visual areas," including V2, V4, V5 and the inferotemporal cortex region. Controversy The results of some experiments suggest that blindsighted people may be preserving some kind of conscious experience and thus are not fully blind. The criteria for blindsight have repeatedly changed based on findings that challenge the original definition, which has led some scientists to cast doubt on the existence of blindsight. See also Koniocellular cell Riddoch syndrome Two-streams hypothesis Visual agnosia References Further reading External links Blind man navigates maze Blind man avoids obstacles when reaching Consciousness Neurology Neuroscience Vision
Blindsight
[ "Biology" ]
5,514
[ "Neuroscience" ]
177,052
https://en.wikipedia.org/wiki/Immortality
Immortality is the concept of eternal life. Some species possess "biological immortality" due to an apparent lack of the Hayflick limit. From at least the time of the ancient Mesopotamians, there has been a conviction that gods may be physically immortal, and that this is also a state that the gods at times offer humans. In Christianity, the conviction that God may offer physical immortality with the resurrection of the flesh at the end of time has traditionally been at the center of its beliefs. What form an unending human life would take, or whether an immaterial soul exists and possesses immortality, has been a major point of focus of religion, as well as the subject of speculation and debate. In religious contexts, immortality is often stated to be one of the promises of divinities to human beings who perform virtue or follow divine law. Some scientists, futurists and philosophers have theorized about the immortality of the human body, with some suggesting that human immortality may be achievable in the first few decades of the 21st century with the help of certain speculative technologies such as mind uploading (digital immortality).

Definitions

Scientific
Life extension technologies claim to be developing a path to complete rejuvenation. Cryonics holds out the hope that the dead can be revived in the future, following sufficient medical advancements. While, as shown by creatures such as hydra and planarian worms, it is indeed possible for a creature to be biologically immortal, these animals are physiologically very different from humans, and it is not known whether something comparable will ever be possible for humans.

Religious
Immortality in religion usually refers either to belief in physical immortality or to a more spiritual afterlife. In traditions such as ancient Egyptian, Mesopotamian and ancient Greek belief, the immortal gods were consequently considered to have physical bodies. In Mesopotamian and Greek religion, the gods also made certain men and women physically immortal, whereas in Christianity many believe that all true believers will be resurrected to physical immortality. Similar beliefs that physical immortality is possible are held by Rastafarians and Rebirthers.

Physical immortality
Physical immortality is a state of life that allows a person to avoid death and maintain conscious thought. It can also mean the unending existence of a person from a physical source other than organic life, such as a computer. Pursuit of physical immortality before the advent of modern science included alchemists, who sought to create the Philosopher's Stone, and legends in various cultures, such as the Fountain of Youth or the Peaches of Immortality, which inspired attempts at discovering an elixir of life. Modern scientific approaches, such as cryonics, digital immortality, breakthroughs in rejuvenation, and predictions of an impending technological singularity, must still overcome all causes of death if they are to achieve genuine human physical immortality.

Causes of death
There are three main causes of death: natural aging, disease, and injury. Achieving physical immortality would require overcoming each of these in turn; the approaches explored below remain speculative and have not yet been unified into a single program.
Aging
See also: DNA damage theory of aging
Aubrey de Grey, a leading researcher in the field, defines aging as "a collection of cumulative changes to the molecular and cellular structure of an adult organism, which result from essential metabolic processes, but which also, once they progress far enough, increasingly disrupt metabolism, resulting in pathology and death." The proposed causes of aging in humans are cell loss (without replacement), DNA damage, oncogenic nuclear mutations and epimutations, cell senescence, mitochondrial mutations, lysosomal aggregates, extracellular aggregates, random extracellular cross-linking, immune system decline, and endocrine changes. Eliminating aging would require finding a solution to each of these causes, a program de Grey calls strategies for engineered negligible senescence (SENS). There is also a large body of evidence indicating that aging is characterized by the loss of molecular fidelity.

Disease
Disease is theoretically surmountable by technology. In short, it is an abnormal condition affecting the body of an organism, something the body does not typically have to deal with given its natural makeup. Human understanding of genetics is leading to cures and treatments for a myriad of previously incurable diseases. The mechanisms by which other diseases do damage are becoming better understood. Sophisticated methods of detecting diseases early are being developed. Preventative medicine is becoming better understood. Neurodegenerative diseases like Parkinson's and Alzheimer's may soon be curable with the use of stem cells. Breakthroughs in cell biology and telomere research are leading to treatments for cancer. Vaccines are being researched for AIDS and tuberculosis. Genes associated with type 1 diabetes and certain types of cancer have been discovered, allowing for new therapies to be developed. Artificial devices attached directly to the nervous system may restore sight to the blind. Drugs are being developed to treat a myriad of other diseases and ailments.

Trauma
Physical trauma would remain a threat to perpetual physical life, as an otherwise immortal person would still be subject to unforeseen accidents or catastrophes. The speed and quality of paramedic response remains a determining factor in surviving severe trauma. A body that could automatically repair itself from severe trauma, such as through speculated uses of nanotechnology, would mitigate this factor. The brain could not be risked to trauma if a continuous physical life is to be maintained. This aversion to trauma risk to the brain would naturally result in significant behavioral changes that would render physical immortality undesirable for some people.

Environmental change
Organisms otherwise unaffected by these causes of death would still face the problem of obtaining sustenance (whether from currently available agricultural processes or from hypothetical future technological processes) in the face of changing availability of suitable resources as environmental conditions change. After avoiding aging, disease, and trauma, death through resource limitation, such as hypoxia or starvation, would still be possible. If there is no limitation on the degree of gradual mitigation of risk, then the cumulative probability of death over an infinite horizon can be less than certainty, even when the risk of fatal trauma in any finite period is greater than zero. For example, if the chance of dying in year n shrinks geometrically, say p/2^n for some initial risk p, then the probability of surviving forever, the product of the survival chances (1 − p/2^n) over all years, remains greater than zero, because the total risk p/2 + p/4 + p/8 + ... sums to the finite value p. Mathematically, this is an aspect of achieving 'actuarial escape velocity'.

Biological immortality
Biological immortality is an absence of aging.
Specifically it is the absence of a sustained increase in rate of mortality as a function of chronological age. A cell or organism that does not experience aging, or ceases to age at some point, is biologically immortal. Biologists have chosen the word "immortal" to designate cells that are not limited by the Hayflick limit, where cells no longer divide because of DNA damage or shortened telomeres. The first and still most widely used immortal cell line is HeLa, developed from cells taken from the malignant cervical tumor of Henrietta Lacks without her consent in 1951. Prior to the 1961 work of Leonard Hayflick, there was the erroneous belief fostered by Alexis Carrel that all normal somatic cells are immortal. By preventing cells from reaching senescence one can achieve biological immortality; telomeres, a "cap" at the end of DNA, are thought to be the cause of cell aging. Every time a cell divides the telomere becomes a bit shorter; when it is finally worn down, the cell is unable to split and dies. Telomerase is an enzyme which rebuilds the telomeres in stem cells and cancer cells, allowing them to replicate an infinite number of times. No definitive work has yet demonstrated that telomerase can be used in human somatic cells to prevent healthy tissues from aging. On the other hand, scientists hope to be able to grow organs with the help of stem cells, allowing organ transplants without the risk of rejection, another step in extending human life expectancy. These technologies are the subject of ongoing research, and are not yet realized. Biologically immortal species Life defined as biologically immortal is still susceptible to causes of death besides aging, including disease and trauma, as defined above. Notable immortal species include: Bacteria – Bacteria reproduce through binary fission. A parent bacterium splits itself into two identical daughter cells which eventually then split themselves in half. This process repeats, thus making the bacterium essentially immortal. A 2005 PLoS Biology paper suggests that after each division the daughter cells can be identified as the older and the younger, and the older is slightly smaller, weaker, and more likely to die than the younger. Turritopsis dohrnii, a jellyfish (phylum Cnidaria, class Hydrozoa, order Anthoathecata), after becoming a sexually mature adult, can transform itself back into a polyp using the cell conversion process of transdifferentiation. Turritopsis dohrnii repeats this cycle, meaning that it may have an indefinite lifespan. Its immortal adaptation has allowed it to spread from its original habitat in the Caribbean to "all over the world". Hydra is a genus belonging to the phylum Cnidaria, the class Hydrozoa and the order Anthomedusae. They are simple fresh-water predatory animals possessing radial symmetry. Evolution of aging As the existence of biologically immortal species demonstrates, there is no thermodynamic necessity for senescence: a defining feature of life is that it takes in free energy from the environment and unloads its entropy as waste. Living systems can even build themselves up from seed, and routinely repair themselves. Aging is therefore presumed to be a byproduct of evolution, but why mortality should be selected for remains a subject of research and debate. Programmed cell death and the telomere "end replication problem" are found even in the earliest and simplest of organisms. This may be a tradeoff between selecting for cancer and selecting for aging. 
Modern theories on the evolution of aging include the following: Mutation accumulation is a theory formulated by Peter Medawar in 1952 to explain how evolution would select for aging. Essentially, aging is never selected against, as organisms have offspring before the mortal mutations surface in an individual. Antagonistic pleiotropy is a theory proposed as an alternative by George C. Williams, a critic of Medawar, in 1957. In antagonistic pleiotropy, genes carry effects that are both beneficial and detrimental. In essence this refers to genes that offer benefits early in life, but exact a cost later on, i.e. decline and death. The disposable soma theory was proposed in 1977 by Thomas Kirkwood, which states that an individual body must allocate energy for metabolism, reproduction, and maintenance, and must compromise when there is food scarcity. Compromise in allocating energy to the repair function is what causes the body gradually to deteriorate with age, according to Kirkwood. Immortality of the germline Individual organisms ordinarily age and die, while the germlines which connect successive generations are potentially immortal. The basis for this difference is a fundamental problem in biology. The Russian biologist and historian Zhores A. Medvedev considered that the accuracy of genome replicative and other synthetic systems alone cannot explain the immortality of germlines. Rather Medvedev thought that known features of the biochemistry and genetics of sexual reproduction indicate the presence of unique information maintenance and restoration processes at the different stages of gametogenesis. In particular, Medvedev considered that the most important opportunities for information maintenance of germ cells are created by recombination during meiosis and DNA repair; he saw these as processes within the germ cells that were capable of restoring the integrity of DNA and chromosomes from the types of damage that cause irreversible aging in somatic cells. Prospects for human biological immortality Life-extending substances Some scientists believe that boosting the amount or proportion of telomerase in the body, a naturally forming enzyme that helps maintain the protective caps at the ends of chromosomes, could prevent cells from dying and so may ultimately lead to extended, healthier lifespans. A team of researchers at the Spanish National Cancer Centre (Madrid) tested the hypothesis on mice. It was found that those mice which were "genetically engineered to produce 10 times the normal levels of telomerase lived 50% longer than normal mice". In normal circumstances, without the presence of telomerase, if a cell divides repeatedly, at some point all the progeny will reach their Hayflick limit. With the presence of telomerase, each dividing cell can replace the lost bit of DNA, and any single cell can then divide unbounded. While this unbounded growth property has excited many researchers, caution is warranted in exploiting this property, as exactly this same unbounded growth is a crucial step in enabling cancerous growth. If an organism can replicate its body cells faster, then it would theoretically stop aging. Embryonic stem cells express telomerase, which allows them to divide repeatedly and form the individual. In adults, telomerase is highly expressed in cells that need to divide regularly (e.g., in the immune system), whereas most somatic cells express it only at very low levels in a cell-cycle dependent manner. 
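The telomere arithmetic described above can be illustrated with a toy calculation. The following sketch (in Python; the starting telomere length, loss per division, and senescence threshold are invented round numbers, not measured biological values) counts how many times a cell lineage can divide with and without telomerase:

# Toy model of the Hayflick limit: telomeres shorten with each division;
# below a critical length the cell becomes senescent and stops dividing.
# All parameter values are illustrative, not empirical.

TELOMERE_START = 60    # arbitrary "units" of telomere length at the outset
LOSS_PER_DIVISION = 1  # units lost each time the cell divides
CRITICAL_LENGTH = 10   # below this, the cell can no longer divide

def divisions_until_senescence(telomerase_active: bool, max_steps: int = 1000) -> int:
    """Count successful divisions before the lineage hits the Hayflick limit."""
    length = TELOMERE_START
    divisions = 0
    while length > CRITICAL_LENGTH and divisions < max_steps:
        length -= LOSS_PER_DIVISION
        if telomerase_active:
            length += LOSS_PER_DIVISION  # telomerase rebuilds the lost cap
        divisions += 1
    return divisions

print(divisions_until_senescence(False))  # 50: a finite Hayflick limit
print(divisions_until_senescence(True))   # 1000: stops only at the safety cap

With these illustrative numbers the lineage halts after 50 divisions, a finite Hayflick limit, whereas with telomerase active the loop ends only at the arbitrary safety cap, mirroring the unbounded replication described above.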
Technological immortality, biological machines, and "swallowing the doctor" Technological immortality is the prospect for much longer life spans made possible by scientific advances in a variety of fields: nanotechnology, emergency room procedures, genetics, biological engineering, regenerative medicine, microbiology, and others. Contemporary life spans in the advanced industrial societies are already markedly longer than those of the past because of better nutrition, availability of health care, standard of living and bio-medical scientific advances. Technological immortality predicts further progress for the same reasons over the near term. An important aspect of current scientific thinking about immortality is that some combination of human cloning, cryonics or nanotechnology will play an essential role in extreme life extension. Robert Freitas, a nanorobotics theorist, suggests tiny medical nanorobots could be created to go through human bloodstreams, find dangerous things like cancer cells and bacteria, and destroy them. Freitas anticipates that gene-therapies and nanotechnology will eventually make the human body effectively self-sustainable and capable of living indefinitely in empty space, short of severe brain trauma. This supports the theory that we will be able to continually create biological or synthetic replacement parts to replace damaged or dying ones. Future advances in nanomedicine could give rise to life extension through the repair of many processes thought to be responsible for aging. K. Eric Drexler, one of the founders of nanotechnology, postulated cell repair devices, including ones operating within cells and using as yet hypothetical biological machines, in his 1986 book Engines of Creation. Raymond Kurzweil, a futurist and transhumanist, stated in his book The Singularity Is Near that he believes that advanced medical nanorobotics could completely remedy the effects of aging by 2030. According to Richard Feynman, it was his former graduate student and collaborator Albert Hibbs who originally suggested to him (circa 1959) the idea of a medical use for Feynman's theoretical micromachines (see biological machine). Hibbs suggested that certain repair machines might one day be reduced in size to the point that it would, in theory, be possible to (as Feynman put it) "swallow the doctor". The idea was incorporated into Feynman's 1959 essay There's Plenty of Room at the Bottom. Cryonics Cryonics, the practice of preserving organisms (either intact specimens or only their brains) for possible future revival by storing them at cryogenic temperatures where metabolism and decay are almost completely stopped, can be used to 'pause' for those who believe that life extension technologies will not develop sufficiently within their lifetime. Ideally, cryonics would allow clinically dead people to be brought back in the future after cures to the patients' diseases have been discovered and aging is reversible. Modern cryonics procedures use a process called vitrification which creates a glass-like state rather than freezing as the body is brought to low temperatures. This process reduces the risk of ice crystals damaging the cell-structure, which would be especially detrimental to cell structures in the brain, as their minute adjustment evokes the individual's mind. Mind-to-computer uploading One idea that has been advanced involves uploading an individual's habits and memories via direct mind-computer interface. 
The individual's memory may be loaded to a computer or to a new organic body. Extropian futurists like Moravec and Kurzweil have proposed that, thanks to exponentially growing computing power, it will someday be possible to upload human consciousness onto a computer system, and exist indefinitely in a virtual environment. This could be accomplished via advanced cybernetics, where computer hardware would initially be installed in the brain to help sort memory or accelerate thought processes. Components would be added gradually until the person's entire brain functions were handled by artificial devices, avoiding sharp transitions that would lead to issues of identity, thus running the risk of the person to be declared dead and thus not be a legitimate owner of his or her property. After this point, the human body could be treated as an optional accessory and the program implementing the person could be transferred to any sufficiently powerful computer. Another possible mechanism for mind upload is to perform a detailed scan of an individual's original, organic brain and simulate the entire structure in a computer. What level of detail such scans and simulations would need to achieve to emulate awareness, and whether the scanning process would destroy the brain, is still to be determined. It is suggested that achieving immortality through this mechanism would require specific consideration to be given to the role of consciousness in the functions of the mind. An uploaded mind would only be a copy of the original mind, and not the conscious mind of the living entity associated in such a transfer. Without a simultaneous upload of consciousness, the original living entity remains mortal, thus not achieving true immortality. Research on neural correlates of consciousness is yet inconclusive on this issue. Whatever the route to mind upload, persons in this state could then be considered essentially immortal, short of loss or traumatic destruction of the machines that maintained them. Cybernetics Transforming a human into a cyborg can include brain implants or extracting a human processing unit and placing it in a robotic life-support system. Even replacing biological organs with robotic ones could increase life span (e.g. pace makers) and depending on the definition, many technological upgrades to the body, like genetic modifications or the addition of nanobots would qualify an individual as a cyborg. Some people believe that such modifications would make one impervious to aging and disease and theoretically immortal unless killed or destroyed. Digital immortality Religious views As late as 1952, the editorial staff of the Syntopicon found in their compilation of the Great Books of the Western World, that "The philosophical issue concerning immortality cannot be separated from issues concerning the existence and nature of man's soul." Thus, the vast majority of speculation on immortality before the 21st century was regarding the nature of the afterlife. Abrahamic religion The viewpoints of Christianity, Islam, and Judaism regarding the concept of immortality diverge as each faith system encapsulates unique theological interpretations and doctrines on the enduring human nature soul or spirit. Christianity Christian theology holds that Adam and Eve lost physical immortality for themselves and all their descendants through the Fall, although this initial "imperishability of the bodily frame of man" was "a preternatural condition". 
Christians who profess the Nicene Creed believe that every dead person, whether they believed in Christ or not, will be resurrected from the dead at the Second Coming; this belief is known as universal resurrection. Paul the Apostle, consistent with his past as a Pharisee (a Jewish movement that held to a future physical resurrection), proclaims an amalgamated view of resurrected believers in which both the physical and the spiritual are rebuilt in the likeness of the post-resurrection Christ, who "will transform our lowly body to be like his glorious body" (ESV). This thought mirrors Paul's depiction of believers having been "buried therefore with him [that is, Christ] by baptism into death" (ESV). N. T. Wright, a theologian and former Bishop of Durham, has said many people forget the physical aspect of what Jesus promised. He told Time: "Jesus' resurrection marks the beginning of a restoration that he will complete upon his return. Part of this will be the resurrection of all the dead, who will 'awake', be embodied and participate in the renewal. Wright says John Polkinghorne, a physicist and a priest, has put it this way: 'God will download our software onto his hardware until the time he gives us new hardware to run the software again for ourselves.' That gets to two things nicely: that the period after death (the Intermediate state) is a period when we are in God's presence but not active in our own bodies, and also that the more important transformation will be when we are again embodied and administering Christ's kingdom." This kingdom will consist of Heaven and Earth "joined together in a new creation", he said.

Christian apocrypha include immortal human figures such as Cartaphilus, who were cursed with physical immortality for various transgressions against Christ during the Passion. The medieval Waldensians believed in the immortality of the soul, and leaders of sects such as John Asgill and John Wroe taught followers that physical immortality was possible. Many Patristic writers have connected the immortal rational soul to the image of God found in Genesis 1:26. Among them are Athanasius of Alexandria and Clement of Alexandria, who say that the immortal rational soul itself is the image of God. Even early Christian liturgies exhibit this connection between the immortal rational soul and the creation of humanity in the image of God.

Islam
Islamic belief carries within it the concept of spiritual immortality: after death, each individual is judged according to their beliefs and actions and passes to the everlasting abode where they will remain. The Muslim who upholds the five pillars of Islam will enter Jannah, where they will dwell eternally. Al-Baqarah (2:25): "But give glad tidings to those who believe and work righteousness, that their portion is gardens, beneath which rivers flow. Every time they are fed with fruits therefrom, they say, 'Why, this is what we were fed with before,' for they are given things in similitude; and they have therein companions pure (and holy); and they abide therein forever." The kafir, by contrast, are held to abide in Jahannam perpetually. Angels are reckoned as immortal from the perspective of Islam, though a common view is that the angels will die, and that even the Angel of Death will die; there is no clear text concerning this.
Rather, there are texts which may indicate this, and there is the well-known hadeeth (narration) about the "trumpet", which is a munkar hadeeth (rejected report). Jinn, by contrast, are held to have a long lifespan of between 1,000 and 1,500 years. Among some Muslim Sufi mystics, Khidr is said to have been granted a long life, though not immortality; his eventual death remains a matter of debate, and the widely circulated story that Khidr drank from the fountain of life is regarded as a fabrication. In Islam, Jesus was raised to the sky by Allah's sanction to preserve him from the cross and endow him with long life until the advent of the Dajjal. The Dajjal, too, is given a long life: he is said to remain on earth for forty days, one like a year, one like a month, one like a week, and the rest like ordinary days, before Jesus dispatches him. The Qur'an states that it is the ultimate fate of all life, including humans, to die eventually.

Judaism
The traditional concept of an immaterial and immortal soul distinct from the body was not found in Judaism before the Babylonian exile, but developed as a result of interaction with Persian and Hellenistic philosophies. Accordingly, the Hebrew word nephesh, although translated as "soul" in some older English-language Bibles, actually has a meaning closer to "living being". Nephesh was rendered in the Septuagint as ψυχή (psūchê), the Greek word for "soul". The only Hebrew word traditionally translated "soul" (nephesh) in English-language Bibles refers to a living, breathing conscious body, rather than to an immortal soul. In the New Testament, the Greek word traditionally translated "soul" (psūchê) has substantially the same meaning as the Hebrew, without reference to an immortal soul. "Soul" may also refer to the whole person, the self, as when "three thousand souls" were converted in Acts 2:41. The Hebrew Bible speaks about Sheol (שאול), originally a synonym of the grave – the repository of the dead or the cessation of existence, until the resurrection of the dead. This doctrine of resurrection is mentioned explicitly only in the Book of Daniel, although it may be implied in several other texts. New theories concerning Sheol arose during the intertestamental period.

The views about immortality in Judaism are perhaps best exemplified by the various references to it in Second Temple literature. The concept of resurrection of the physical body is found in 2 Maccabees, according to which it will happen through recreation of the flesh. Resurrection of the dead is specified in detail in the extra-canonical books of Enoch and in the Apocalypse of Baruch. According to the British scholar of ancient Judaism P. R. Davies, there is "little or no clear reference ... either to immortality or to resurrection from the dead" in the Dead Sea Scrolls texts. Both Josephus and the New Testament record that the Sadducees did not believe in an afterlife, but the sources vary on the beliefs of the Pharisees. The New Testament claims that the Pharisees believed in the resurrection, but does not specify whether this included the flesh or not. According to Josephus, who was himself a Pharisee, the Pharisees held that only the soul was immortal and that the souls of good people will be reincarnated and "pass into other bodies", while "the souls of the wicked will suffer eternal punishment". The Book of Jubilees seems to refer to the resurrection of the soul only, or to a more general idea of an immortal soul.
Rabbinic Judaism claims that the righteous dead will be resurrected in the Messianic Age, with the coming of the messiah. They will then be granted immortality in a perfect world. The wicked dead, on the other hand, will not be resurrected at all. This is not the only Jewish belief about the afterlife; the Tanakh is not specific about the afterlife, so there are wide differences in views and explanations among believers.

Indian religions
The perspectives on immortality within Hinduism and Buddhism exhibit nuanced differences, with each spiritual tradition offering distinctive theological interpretations and doctrines concerning the eternal essence of the human soul or consciousness.

Hinduism
Hindus believe in an immortal soul which is reincarnated after death. According to Hinduism, people repeat a process of life, death, and rebirth in a cycle called samsara. If they live their life well, their karma improves and their station in the next life will be higher, and conversely lower if they live their life poorly. After many lifetimes of perfecting its karma, the soul is freed from the cycle and lives in perpetual bliss. There is no place of eternal torment in Hinduism, although if a soul consistently lives very evil lives, it could work its way down to the very bottom of the cycle. There are explicit renderings in the Upanishads alluding to a physically immortal state brought about by purification and sublimation of the five elements that make up the body. For example, in the Shvetashvatara Upanishad (Chapter 2, Verse 12), it is stated: "When earth, water, fire, air and sky arise, that is to say, when the five attributes of the elements, mentioned in the books on yoga, become manifest, then the yogi's body becomes purified by the fire of yoga and he is free from illness, old age and death."

Another view of immortality is traced to the Vedic tradition by the interpretation of Maharishi Mahesh Yogi:

That man indeed whom these (contacts)
do not disturb, who is even-minded in
pleasure and pain, steadfast, he is fit
for immortality, O best of men.

To Maharishi Mahesh Yogi, the verse means, "Once a man has become established in the understanding of the permanent reality of life, his mind rises above the influence of pleasure and pain. Such an unshakable man passes beyond the influence of death and into the permanent phase of life: he attains eternal life ... A man established in the understanding of the unlimited abundance of absolute existence is naturally free from existence of the relative order. This is what gives him the status of immortal life."

An Indian Tamil saint known as Vallalar claimed to have achieved immortality before disappearing forever from a locked room in 1874.

Buddhism
One of the three marks of existence in Buddhism is anattā, "non-self". This teaching states that the body does not have an eternal soul but is composed of five skandhas or aggregates. Another mark of existence is impermanence, called anicca, which runs directly counter to concepts of immortality or permanence. According to one Tibetan Buddhist teaching, Dzogchen, individuals can transform the physical body into an immortal body of light called the rainbow body.

Ancient religions

Ancient Greek religion
Immortality in ancient Greek religion originally always included an eternal union of body and soul, as can be seen in Homer, Hesiod, and various other ancient texts. The soul was considered to have an eternal existence in Hades, but without the body the soul was considered dead.
Although almost everybody had nothing to look forward to but an eternal existence as a disembodied dead soul, a number of men and women were considered to have gained physical immortality and been brought to live forever in either Elysium, the Islands of the Blessed, heaven, the ocean, or literally right under the ground. Among those humans made immortal were Amphiaraus, Ganymede, Ino, Iphigenia, Menelaus, Peleus, and a great number of those who fought in the Trojan and Theban wars. Asclepius, killed by Zeus, was likewise immortalized as a star at Apollo's request (Emma and Ludwig Edelstein, Asclepius: Collection and Interpretation of the Testimonies, Volume 1, p. 51; Theony Condos, Star Myths of the Greeks and Romans, p. 141).

In ancient Greek religion a number of men and women have been interpreted as being resurrected and made immortal. Achilles, after being killed, was snatched from his funeral pyre by his divine mother Thetis and brought to an immortal existence in either Leuce, the Elysian plains or the Islands of the Blessed. Memnon, who was killed by Achilles, seems to have received a similar fate. Alcmene, Castor, Heracles, and Melicertes are also among the figures interpreted to have been resurrected to physical immortality. According to Herodotus's Histories, the seventh-century BC sage Aristeas of Proconnesus was first found dead, after which his body disappeared from a locked room; he reappeared alive years later. However, Greek attitudes towards resurrection were generally negative, and the idea of resurrection was considered neither desirable nor possible. For example, Asclepius was killed by Zeus for using herbs to resurrect the dead, but at his father Apollo's request was subsequently immortalized as a star.

Writing his Lives of Illustrious Men (Parallel Lives) in the first century, the Middle Platonic philosopher Plutarch, in his chapter on Romulus, gave an account of the king's mysterious disappearance and subsequent deification, comparing it to Greek tales such as the physical immortalization of Alcmene and Aristeas the Proconnesian, "for they say Aristeas died in a fuller's work-shop, and his friends coming to look for him, found his body vanished; and that some presently after, coming from abroad, said they met him traveling towards Croton". Plutarch openly scorned such beliefs held in ancient Greek religion, writing, "many such improbabilities do your fabulous writers relate, deifying creatures naturally mortal." Likewise, he writes that while something within humans comes from the gods and returns to them after death, this happens "only when it is most completely separated and set free from the body, and becomes altogether pure, fleshless, and undefiled."

The parallel between these traditional beliefs and the later resurrection of Jesus was not lost on early Christians, as Justin Martyr argued: "when we say ... Jesus Christ, our teacher, was crucified and died, and rose again, and ascended into heaven, we propose nothing different from what you believe regarding those whom you consider sons of Zeus." The philosophical idea of an immortal soul was a belief that first appeared with either Pherecydes or the Orphics, and most importantly was advocated by Plato and his followers. This, however, never became the general norm in Hellenistic thought.
As may be witnessed even into the Christian era, not least by the complaints of various philosophers over popular beliefs, many or perhaps most traditional Greeks maintained the conviction that certain individuals were resurrected from the dead and made physically immortal, and that others could only look forward to an existence as disembodied and dead, though everlasting, souls.

Zoroastrianism
Zoroastrians believe that on the fourth day after death, the human soul leaves the body and the body remains as an empty shell. Souls go to either heaven or hell; these concepts of the afterlife in Zoroastrianism may have influenced Abrahamic religions. The Persian word for "immortal" is associated with the month "Amurdad", meaning "deathless" in Persian, in the Iranian calendar (near the end of July). The month of Amurdad or Ameretat is celebrated in Persian culture, as ancient Persians believed the "Angel of Immortality" won over the "Angel of Death" in this month.

Philosophical religions
Several traditions approach immortality less as a revealed doctrine about the afterlife than as a subject of philosophical inquiry into the nature of life and the self.

Taoism
It is repeatedly stated in the Lüshi Chunqiu that death is unavoidable. Henri Maspero noted that many scholarly works frame Taoism as a school of thought focused on the quest for immortality. Isabelle Robinet asserts that Taoism is better understood as a way of life than as a religion, and that its adherents do not approach or view Taoism the way non-Taoist historians have done. In the Tractate of Actions and their Retributions, a traditional teaching, spiritual immortality can be granted to people who perform a certain number of good deeds and live a simple, pure life. A list of good deeds and sins is tallied to determine whether or not a mortal is worthy. Spiritual immortality in this definition allows the soul to leave the earthly realms of the afterlife and go to pure realms in Taoist cosmology.

Philosophical arguments for the immortality of the soul

Alcmaeon of Croton
Alcmaeon of Croton argued that the soul is continuously and ceaselessly in motion. The exact form of his argument is unclear, but it appears to have influenced Plato, Aristotle, and other later writers.

Plato
Plato's Phaedo advances four arguments for the soul's immortality:
The Cyclical Argument, or Opposites Argument, explains that Forms are eternal and unchanging, and since the soul always brings life, it must not die and is necessarily "imperishable". As the body is mortal and subject to physical death, the soul must be its indestructible opposite. Plato then suggests the analogy of fire and cold. If the form of cold is imperishable, then fire, its opposite, coming within close proximity, would have to withdraw intact, as the soul does during death. This could be likened to the idea of the opposite charges of magnets.
The Theory of Recollection explains that we possess some non-empirical knowledge (e.g. the Form of Equality) at birth, implying the soul existed before birth to carry that knowledge. Another account of the theory is found in Plato's Meno, although in that case Socrates implies anamnesis (previous knowledge of everything), whereas he is not so bold in Phaedo.
The Affinity Argument explains that invisible, immortal, and incorporeal things are different from visible, mortal, and corporeal things.
Our soul is of the former, while our body is of the latter, so when our bodies die and decay, our soul will continue to live. The Argument from Form of Life or The Final Argument explains that the Forms, incorporeal and static entities, are the cause of all things in the world, and all things participate in Forms. For example, beautiful things participate in the Form of Beauty; the number four participates in the Form of the Even, etc. The soul, by its very nature, participates in the Form of Life, which means the soul can never die. Plotinus Plotinus offers a version of the argument that Kant calls "The Achilles of Rationalist Psychology". Plotinus first argues that the soul is simple, then notes that a simple being cannot decompose. Many subsequent philosophers have argued both that the soul is simple and that it must be immortal. The tradition arguably culminates with Moses Mendelssohn's Phaedon. Metochites Theodore Metochites argues that part of the soul's nature is to move itself, but that a given movement will cease only if what causes the movement is separated from the thing moved – an impossibility if they are one and the same. Avicenna Avicenna argued for the distinctness of the soul and the body, and the incorruptibility of the former. Aquinas The full argument for the immortality of the soul and Thomas Aquinas' elaboration of Aristotelian theory is found in Question 75 of the First Part of the Summa Theologica. Descartes René Descartes endorses the claim that the soul is simple, and also that this entails that it cannot decompose. Descartes does not address the possibility that the soul might suddenly disappear. Leibniz In early work, Gottfried Wilhelm Leibniz endorses a version of the argument from the simplicity of the soul to its immortality, but like his predecessors, he does not address the possibility that the soul might suddenly disappear. In his monadology he advances a sophisticated novel argument for the immortality of monads. Moses Mendelssohn Moses Mendelssohn's Phaedon is a defense of the simplicity and immortality of the soul. It is a series of three dialogues, revisiting the Platonic dialogue Phaedo, in which Socrates argues for the immortality of the soul, in preparation for his own death. Many philosophers, including Plotinus, Descartes, and Leibniz, argue that the soul is simple, and that because simples cannot decompose they must be immortal. In the Phaedon, Mendelssohn addresses gaps in earlier versions of this argument (an argument that Kant calls the Achilles of Rationalist Psychology). The Phaedon contains an original argument for the simplicity of the soul, and also an original argument that simples cannot suddenly disappear. It contains further original arguments that the soul must retain its rational capacities as long as it exists. Ethics The possibility of clinical immortality raises a host of medical, philosophical, and religious issues and ethical questions. These include persistent vegetative states, the nature of personality over time, technology to mimic or copy the mind or its processes, social and economic disparities created by longevity, and survival of the heat death of the universe. Undesirability Physical immortality has also been imagined as a form of eternal torment, as in the myth of Tithonus, or in Mary Shelley's short story The Mortal Immortal, where the protagonist lives to witness everyone he cares about die around him. For additional examples in fiction, see Immortality in fiction. 
Kagan (2012) argues that any form of human immortality would be undesirable. Kagan's argument takes the form of a dilemma. Either our characters remain essentially the same in an immortal afterlife, or they do not: If our characters remain basically the same – that is, if we retain more or less the desires, interests, and goals that we have now – then eventually, over an infinite stretch of time, we will get bored and find eternal life unbearably tedious. If, on the other hand, our characters are radically changed – e.g., by God periodically erasing our memories or giving us rat-like brains that never tire of certain simple pleasures – then such a person would be too different from our current self for us to care much what happens to them. Either way, Kagan argues, immortality is unattractive. The best outcome, Kagan argues, would be for humans to live as long as they desired and then to accept death gratefully as rescuing us from the unbearable tedium of immortality. Sociology If human beings were to achieve immortality, there would most likely be a change in the world's social structures. Sociologists argue that human beings' awareness of their own mortality shapes their behavior. With the advancements in medical technology in extending human life, there may need to be serious considerations made about future social structures. The world is already experiencing a global demographic shift of increasingly ageing populations with lower replacement rates. The social changes that are made to accommodate this new population shift may be able to offer insight on the possibility of an immortal society. Sociology has a growing body of literature on the sociology of immortality, which details the different attempts at reaching immortality (whether actual or symbolic) and their prominence in the 21st century. These attempts include renewed attention to the dead in the West, practices of online memorialization, and biomedical attempts to increase longevity. These attempts at reaching immortality and their effects in societal structures have led some to argue that we are becoming a "Postmortal Society". Foreseen changes to societies derived from the pursuit of immortality would encompass societal paradigms and worldviews, as well as the institutional landscape. Similarly, different forms of reaching immortality might entail a significant reconfiguration of societies, from becoming more technologically oriented to becoming more aligned with nature. Immortality would increase population growth, bringing with it many consequences as for example the impact of population growth on the environment and planetary boundaries. Politics Although some scientists state that radical life extension, delaying and stopping aging are achievable, there are no international or national programs focused on stopping aging or on radical life extension. In 2012 in Russia, and then in the United States, Israel and the Netherlands, pro-immortality political parties were launched. They aimed to provide political support to anti-aging and radical life extension research and technologies and at the same time transition to the next step, radical life extension, life without aging, and finally, immortality and aim to make possible access to such technologies to most currently living people. Some scholars critique the increasing support for immortality projects. 
Panagiotis Pentaris speculates that defeating ageing as the cause of death comes with a cost: "heightened stratification of humans in society and a wider gap between social classes". Others suggest that immortality projects such as transhumanist digital immortality, radical life extension and cryonics are part of a capitalist fabric of exploitation and control that aims to extend the privileged lives of the economic elite. In this sense, immortality could become a political-economic battleground of the twenty-first century between the haves and have-nots.

Symbols
There are numerous symbols representing immortality. The ankh is an Egyptian symbol of life that holds connotations of immortality when depicted in the hands of the gods and pharaohs, who were seen as having control over the journey of life. The Möbius strip in the shape of a trefoil knot is another symbol of immortality. Most symbolic representations of infinity or the life cycle are often used to represent immortality, depending on the context in which they are placed. Other examples include the Ouroboros, the Chinese fungus of longevity, the ten kanji, the phoenix, the peacock in Christianity, and the colors amaranth (in Western culture) and peach (in Chinese culture).

See also
Afterlife
Ambrosia
Amrita
Bioethics
Biogerontology
Brooke Greenberg
Crown of Immortality
Dyson's eternal intelligence
Elixir of life
Eternal return
Eternal youth
Ghost
Immortal DNA strand hypothesis
Immortalist Society
Immortality in fiction
Lich
List of people claimed to be immortal in myth and legend
Methuselah Mouse Prize
Molecular nanotechnology
Negligible senescence
Neidan
Organlegging
Posthuman
Queen Mother of the West
Regeneration (theology)
Resurrection
Simulated reality
Suspended animation
Tipler's Omega Point
Undead
Immortality
[ "Physics" ]
9,727
[ "Spacetime", "Physical quantities", "Time", "Duration" ]
177,089
https://en.wikipedia.org/wiki/Atomic%20energy
Atomic energy or energy of atoms is energy carried by atoms. The term originated in 1903 when Ernest Rutherford began to speak of the possibility of atomic energy. H. G. Wells popularized the phrase "splitting the atom" before the discovery of the atomic nucleus.

Atomic energy includes:
Nuclear binding energy, the energy required to split a nucleus of an atom (a worked example is given below).
Nuclear potential energy, the potential energy of the particles inside an atomic nucleus.
Nuclear reaction, a process in which nuclei or nuclear particles interact, resulting in products different from the initial ones; see also nuclear fission and nuclear fusion.
Radioactive decay, the set of various processes by which unstable atomic nuclei (nuclides) emit subatomic particles.

Atomic energy is the source of nuclear power, which uses sustained nuclear fission to generate heat and electricity. It is also the source of the explosive force of an atomic bomb.

See also
Atomic Age
Index of environmental articles
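As the worked example of nuclear binding energy referenced above, the following sketch (in Python; the particle masses are standard reference values, rounded) computes the binding energy of a helium-4 nucleus from its mass defect via E = Δm c²:

# Binding energy of helium-4 from the mass defect, E = (Δm) c².
# Masses in unified atomic mass units (u); 1 u is equivalent to 931.494 MeV.

M_PROTON = 1.007276       # u
M_NEUTRON = 1.008665      # u
M_HE4_NUCLEUS = 4.001506  # u (the alpha particle)
MEV_PER_U = 931.494       # energy equivalent of 1 u, in MeV

mass_defect = 2 * M_PROTON + 2 * M_NEUTRON - M_HE4_NUCLEUS
binding_energy = mass_defect * MEV_PER_U

print(f"mass defect    = {mass_defect:.6f} u")            # ~0.030377 u
print(f"binding energy = {binding_energy:.1f} MeV")       # ~28.3 MeV total
print(f"per nucleon    = {binding_energy / 4:.2f} MeV")   # ~7.07 MeV

The result, about 28.3 MeV in total (roughly 7.1 MeV per nucleon), is the energy that would be needed to pull an alpha particle apart into its constituent protons and neutrons.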
Atomic energy
[ "Physics", "Chemistry" ]
190
[ "Physical quantities", "Forms of energy", "Energy (physics)", "Nuclear energy", "Nuclear physics", "Radioactivity" ]
177,092
https://en.wikipedia.org/wiki/Active%20galactic%20nucleus
An active galactic nucleus (AGN) is a compact region at the center of a galaxy that emits a significant amount of energy across the electromagnetic spectrum, with characteristics indicating that this luminosity is not produced by the stars. Such excess, non-stellar emissions have been observed in the radio, microwave, infrared, optical, ultra-violet, X-ray and gamma ray wavebands. A galaxy hosting an AGN is called an active galaxy. The non-stellar radiation from an AGN is theorized to result from the accretion of matter by a supermassive black hole at the center of its host galaxy. Active galactic nuclei are the most luminous persistent sources of electromagnetic radiation in the universe and, as such, can be used as a means of discovering distant objects; their evolution as a function of cosmic time also puts constraints on models of the cosmos. The observed characteristics of an AGN depend on several properties such as the mass of the central black hole, the rate of gas accretion onto the black hole, the orientation of the accretion disk, the degree of obscuration of the nucleus by dust, and presence or absence of jets. Numerous subclasses of AGN have been defined on the basis of their observed characteristics; the most powerful AGN are classified as quasars. A blazar is an AGN with a jet pointed toward the Earth, in which radiation from the jet is enhanced by relativistic beaming. History During the first half of the 20th century, photographic observations of nearby galaxies detected some characteristic signatures of AGN emission, although there was not yet a physical understanding of the nature of the AGN phenomenon. Some early observations included the first spectroscopic detection of emission lines from the nuclei of NGC 1068 and Messier 81 by Edward Fath (published in 1909), and the discovery of the jet in Messier 87 by Heber Curtis (published in 1918). Further spectroscopic studies by astronomers including Vesto Slipher, Milton Humason, and Nicholas Mayall noted the presence of unusual emission lines in some galaxy nuclei. In 1943, Carl Seyfert published a paper in which he described observations of nearby galaxies having bright nuclei that were sources of unusually broad emission lines. Galaxies observed as part of this study included NGC 1068, NGC 4151, NGC 3516, and NGC 7469. Active galaxies such as these are known as Seyfert galaxies in honor of Seyfert's pioneering work. The development of radio astronomy was a major catalyst to understanding AGN. Some of the earliest detected radio sources are nearby active elliptical galaxies such as Messier 87 and Centaurus A. Another radio source, Cygnus A, was identified by Walter Baade and Rudolph Minkowski as a tidally distorted galaxy with an unusual emission-line spectrum, having a recessional velocity of 16,700 kilometers per second. The 3C radio survey led to further progress in discovery of new radio sources as well as identifying the visible-light sources associated with the radio emission. In photographic images, some of these objects were nearly point-like or quasi-stellar in appearance, and were classified as quasi-stellar radio sources (later abbreviated as "quasars"). Soviet Armenian astrophysicist Viktor Ambartsumian introduced Active Galactic Nuclei in the early 1950s. At the Solvay Conference on Physics in 1958, Ambartsumian presented a report arguing that "explosions in galactic nuclei cause large amounts of mass to be expelled. 
For these explosions to occur, galactic nuclei must contain bodies of huge mass and unknown nature. From this point forward Active Galactic Nuclei (AGN) became a key component in theories of galactic evolution." His idea was initially accepted skeptically. A major breakthrough was the measurement of the redshift of the quasar 3C 273 by Maarten Schmidt, published in 1963. Schmidt noted that if this object was extragalactic (outside the Milky Way, at a cosmological distance) then its large redshift of 0.158 implied that it was the nuclear region of a galaxy about 100 times more powerful than other radio galaxies that had been identified. Shortly afterward, optical spectra were used to measure the redshifts of a growing number of quasars including 3C 48, even more distant at redshift 0.37. The enormous luminosities of these quasars as well as their unusual spectral properties indicated that their power source could not be ordinary stars. Accretion of gas onto a supermassive black hole was suggested as the source of quasars' power in papers by Edwin Salpeter and Yakov Zeldovich in 1964. In 1969 Donald Lynden-Bell proposed that nearby galaxies contain supermassive black holes at their centers as relics of "dead" quasars, and that black hole accretion was the power source for the non-stellar emission in nearby Seyfert galaxies. In the 1960s and 1970s, early X-ray astronomy observations demonstrated that Seyfert galaxies and quasars are powerful sources of X-ray emission, which originates from the inner regions of black hole accretion disks. Today, AGN are a major topic of astrophysical research, both observational and theoretical. AGN research encompasses observational surveys to find AGN over broad ranges of luminosity and redshift, examination of the cosmic evolution and growth of black holes, studies of the physics of black hole accretion and the emission of electromagnetic radiation from AGN, examination of the properties of jets and outflows of matter from AGN, and the impact of black hole accretion and quasar activity on galaxy evolution. Models Since the late 1960s it has been argued that an AGN must be powered by accretion of mass onto massive black holes (106 to 1010 times the Solar mass). AGN are both compact and persistently extremely luminous. Accretion can potentially give very efficient conversion of potential and kinetic energy to radiation, and a massive black hole has a high Eddington luminosity, and as a result, it can provide the observed high persistent luminosity. Supermassive black holes are now believed to exist in the centres of most if not all massive galaxies since the mass of the black hole correlates well with the velocity dispersion of the galactic bulge (the M–sigma relation) or with bulge luminosity. Thus, AGN-like characteristics are expected whenever a supply of material for accretion comes within the sphere of influence of the central black hole. Accretion disc In the standard model of AGN, cold material close to a black hole forms an accretion disc. Dissipative processes in the accretion disc transport matter inwards and angular momentum outwards, while causing the accretion disc to heat up. The expected spectrum of an accretion disc peaks in the optical-ultraviolet waveband; in addition, a corona of hot material forms above the accretion disc and can inverse-Compton scatter photons up to X-ray energies. The radiation from the accretion disc excites cold atomic material close to the black hole and this in turn radiates at particular emission lines. 
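The Eddington luminosity invoked above can be evaluated directly. For accretion of ionized hydrogen it is L_Edd = 4πGMm_p c/σ_T, the luminosity at which outward radiation pressure on the infalling gas balances gravity. The short sketch below (in Python; the choice of a 10^8 solar-mass black hole is an illustrative AGN-scale value, not a measurement of any particular object) computes it:

# Eddington luminosity: L_Edd = 4*pi*G*M*m_p*c / sigma_T, the luminosity at
# which radiation pressure on ionized hydrogen balances gravitational pull.
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_P = 1.673e-27      # proton mass, kg
C = 2.998e8          # speed of light, m/s
SIGMA_T = 6.652e-29  # Thomson scattering cross-section, m^2
M_SUN = 1.989e30     # solar mass, kg
L_SUN = 3.828e26     # solar luminosity, W

def eddington_luminosity(mass_kg: float) -> float:
    return 4 * math.pi * G * mass_kg * M_P * C / SIGMA_T

L = eddington_luminosity(1e8 * M_SUN)   # illustrative 1e8 solar-mass hole
print(f"{L:.2e} W")            # ~1.26e39 W
print(f"{L / L_SUN:.2e} L_sun")  # ~3e12 solar luminosities

A black hole of 10^8 solar masses can thus sustain a luminosity of order 10^39 W, several trillion times that of the Sun, consistent with AGN being the most luminous persistent sources of electromagnetic radiation in the universe.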
A large fraction of the AGN's radiation may be obscured by interstellar gas and dust close to the accretion disc, but (in a steady-state situation) this will be re-radiated at some other waveband, most likely the infrared. Relativistic jets Some accretion discs produce twin jets: highly collimated, fast outflows that emerge in opposite directions from close to the disc. The direction of the jet ejection is determined either by the angular momentum axis of the accretion disc or the spin axis of the black hole. The jet production mechanism, and indeed the jet composition on very small scales, are not understood at present because astronomical instruments lack the resolution to probe them. The jets have their most obvious observational effects in the radio waveband, where very-long-baseline interferometry can be used to study the synchrotron radiation they emit at resolutions of sub-parsec scales. However, they radiate in all wavebands from the radio through to the gamma-ray range via synchrotron radiation and inverse-Compton scattering, and so AGN jets are a second potential source of any observed continuum radiation. Radiatively inefficient AGN There exists a class of "radiatively inefficient" solutions to the equations that govern accretion. Several theories exist, but the most widely known of these is the advection-dominated accretion flow (ADAF). In this type of accretion, which is important for accretion rates well below the Eddington limit, the accreting matter does not form a thin disc and consequently does not efficiently radiate away the energy that it acquired as it moved close to the black hole. Radiatively inefficient accretion has been used to explain the lack of strong AGN-type radiation from massive black holes at the centres of elliptical galaxies in clusters, where otherwise we might expect high accretion rates and correspondingly high luminosities. Radiatively inefficient AGN would be expected to lack many of the characteristic features of standard AGN with an accretion disc. Particle acceleration AGN are a candidate source of high- and ultra-high-energy cosmic rays (see also Centrifugal mechanism of acceleration). Observational characteristics AGN display many notable observational characteristics: very high luminosities, making them visible out to very high redshifts; small emitting regions, in some cases only milliparsecs in diameter; strong evolution of their luminosity functions with cosmic time; and detectable emission across the entire electromagnetic spectrum. Types of active galaxy It is convenient to divide AGN into two classes, conventionally called radio-quiet and radio-loud. Radio-loud objects have emission contributions from both the jet(s) and the lobes that the jets inflate. These emission contributions dominate the luminosity of the AGN at radio wavelengths and possibly at some or all other wavelengths. Radio-quiet objects are simpler, since the jet and any jet-related emission can be neglected at all wavelengths. AGN terminology is often confusing, since the distinctions between different types of AGN sometimes reflect historical differences in how the objects were discovered or initially classified, rather than real physical differences. Radio-quiet AGN Low-ionization nuclear emission-line regions (LINERs). As the name suggests, these systems show only weak nuclear emission-line regions, and no other signatures of AGN emission. It is debatable whether all such systems are true AGN (powered by accretion on to a supermassive black hole). If they are, they constitute the lowest-luminosity class of radio-quiet AGN. 
Some may be radio-quiet analogues of the low-excitation radio galaxies (see below). Seyfert galaxies. Seyferts were the earliest distinct class of AGN to be identified. They show nuclear continuum emission in the optical range; narrow and occasionally broad emission lines; occasionally strong nuclear X-ray emission; and sometimes a weak, small-scale radio jet. Originally they were divided into two types known as Seyfert 1 and 2: Seyfert 1s show strong broad emission lines while Seyfert 2s do not, and Seyfert 1s are more likely to show strong low-energy X-ray emission. Various forms of elaboration on this scheme exist: for example, Seyfert 1s with relatively narrow broad lines are sometimes referred to as narrow-line Seyfert 1s. The host galaxies of Seyferts are usually spiral or irregular galaxies. Radio-quiet quasars/QSOs. These are essentially more luminous versions of Seyfert 1s: the distinction is arbitrary and is usually expressed in terms of a limiting optical magnitude. Quasars originally appeared 'quasi-stellar' in optical images because their optical luminosities were greater than those of their host galaxies. They always show strong optical continuum emission, X-ray continuum emission, and broad and narrow optical emission lines. Some astronomers use the term QSO (Quasi-Stellar Object) for this class of AGN, reserving 'quasar' for radio-loud objects, while others talk about radio-quiet and radio-loud quasars. The host galaxies of quasars can be spirals, irregulars or ellipticals. There is a correlation between the quasar's luminosity and the mass of its host galaxy, in that the most luminous quasars inhabit the most massive galaxies (ellipticals). 'Quasar 2s'. By analogy with Seyfert 2s, these are objects with quasar-like luminosities but without strong optical nuclear continuum emission or broad line emission. They are scarce in surveys, though a number of possible candidate quasar 2s have been identified. Radio-loud AGN There are several subtypes of radio-loud active galactic nuclei. Radio-loud quasars behave exactly like radio-quiet quasars with the addition of emission from a jet. Thus they show strong optical continuum emission, broad and narrow emission lines, and strong X-ray emission, together with nuclear and often extended radio emission. Blazars (BL Lac objects and OVV quasars). These classes are distinguished by rapidly variable, polarized optical, radio, and X-ray emission. BL Lac objects show no optical emission lines, broad or narrow, so that their redshifts can only be determined from features in the spectra of their host galaxies. The emission-line features may be intrinsically absent or simply swamped by the additional variable component. In the latter case, emission lines may become visible when the variable component is at a low level. OVV quasars behave more like standard radio-loud quasars with the addition of a rapidly variable component. In both classes of source, the variable emission is believed to originate in a relativistic jet oriented close to the line of sight. Relativistic effects amplify both the luminosity of the jet and the amplitude of variability. Radio galaxies. These objects show nuclear and extended radio emission. Their other AGN properties are heterogeneous. They can broadly be divided into low-excitation and high-excitation classes. Low-excitation objects show no strong narrow or broad emission lines, and the emission lines they do have may be excited by a different mechanism. 
Their optical and X-ray nuclear emission is consistent with originating purely in a jet. They may be the best current candidates for AGN with radiatively inefficient accretion. By contrast, high-excitation objects (narrow-line radio galaxies) have emission-line spectra similar to those of Seyfert 2s. The small class of broad-line radio galaxies, which show relatively strong nuclear optical continuum emission, probably includes some objects that are simply low-luminosity radio-loud quasars. The host galaxies of radio galaxies, whatever their emission-line type, are essentially always ellipticals. Unification of AGN species Unified models propose that different observational classes of AGN are a single type of physical object observed under different conditions. The currently favoured unified models are 'orientation-based unified models', meaning that they propose that the apparent differences between different types of objects arise simply because of their different orientations to the observer. However, they are debated (see below). Radio-quiet unification At low luminosities, the objects to be unified are Seyfert galaxies. The unification models propose that in Seyfert 1s the observer has a direct view of the active nucleus. In Seyfert 2s the nucleus is observed through an obscuring structure which prevents a direct view of the optical continuum, broad-line region or (soft) X-ray emission. The key insight of orientation-dependent unification models is that the two types of object can be the same, observed at different angles to the line of sight. The standard picture is of a torus of obscuring material surrounding the accretion disc. It must be large enough to obscure the broad-line region but not so large as to obscure the narrow-line region, which is seen in both classes of object. Seyfert 2s are seen through the torus. Outside the torus there is material that can scatter some of the nuclear emission into our line of sight, allowing us to see some optical and X-ray continuum and, in some cases, broad emission lines; these are strongly polarized, showing that they have been scattered and proving that some Seyfert 2s really do contain hidden Seyfert 1s. Infrared observations of the nuclei of Seyfert 2s also support this picture. At higher luminosities, quasars take the place of Seyfert 1s, but, as already mentioned, the corresponding 'quasar 2s' are elusive at present. If they do not have the scattering component of Seyfert 2s they would be hard to detect except through their luminous narrow-line and hard X-ray emission. Radio-loud unification Historically, work on radio-loud unification has concentrated on high-luminosity radio-loud quasars. These can be unified with narrow-line radio galaxies in a manner directly analogous to the Seyfert 1/2 unification (but without the complication of much in the way of a reflection component: narrow-line radio galaxies show no nuclear optical continuum or reflected X-ray component, although they do occasionally show polarized broad-line emission). The large-scale radio structures of these objects provide compelling evidence that the orientation-based unified models are genuinely correct. X-ray evidence, where available, supports the unified picture: radio galaxies show evidence of obscuration from a torus, while quasars do not, although care must be taken since radio-loud objects also have a soft unabsorbed jet-related component, and high resolution is necessary to separate out thermal emission from the sources' large-scale hot-gas environment. 
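The blazar geometry invoked in the next paragraph rests on relativistic beaming. As a hedged sketch of the standard textbook relations (not specific to any particular object discussed here):

```latex
% Doppler factor for emission from a jet moving with bulk Lorentz
% factor \Gamma (speed \beta = v/c) at angle \theta to the line of sight:
\delta = \frac{1}{\Gamma \left( 1 - \beta \cos\theta \right)}

% For a discrete emitting blob with spectral index \alpha
% (S_{\nu} \propto \nu^{-\alpha}), the observed flux is boosted as:
S_{\mathrm{obs}} = \delta^{\,3+\alpha}\, S_{\mathrm{rest}}
```

With a bulk Lorentz factor of order 10 and a viewing angle of a few degrees, the Doppler factor can reach of order 20, so the observed flux of a compact blob can be boosted by several orders of magnitude; this is why jets pointed nearly at the Earth can dominate the observed emission.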
At very small angles to the line of sight, relativistic beaming dominates, and we see a blazar of some variety. However, the population of radio galaxies is completely dominated by low-luminosity, low-excitation objects. These show no strong nuclear emission lines, broad or narrow; their optical continua appear to be entirely jet-related, and their X-ray emission is also consistent with coming purely from a jet, with no heavily absorbed nuclear component in general. These objects cannot be unified with quasars, even though some of them are high-luminosity objects in terms of radio emission, since the torus can never hide the narrow-line region to the required extent, and since infrared studies show that they have no hidden nuclear component: in fact there is no evidence for a torus in these objects at all. Most likely, they form a separate class in which only jet-related emission is important. At small angles to the line of sight, they will appear as BL Lac objects. Criticism of the radio-quiet unification In the recent literature, an increasing set of observations appears to conflict with some key predictions of the unified model, which has become the subject of intense debate; for example, the prediction that each Seyfert 2 has an obscured Seyfert 1 nucleus (a hidden broad-line region). Therefore, one cannot know whether the gas in all Seyfert 2 galaxies is ionized due to photoionization from a single, non-stellar continuum source in the center or due to shock-ionization from, for example, intense nuclear starbursts. Spectropolarimetric studies reveal that only 50% of Seyfert 2s show a hidden broad-line region, and thus split Seyfert 2 galaxies into two populations. The two populations appear to differ in luminosity, with the Seyfert 2s lacking a hidden broad-line region being generally less luminous. This suggests that the absence of a broad-line region is connected to a low Eddington ratio, and not to obscuration. The covering factor of the torus might play an important role. Some torus models predict how Seyfert 1s and Seyfert 2s can obtain different covering factors through a luminosity and accretion-rate dependence of the torus covering factor, a picture supported by X-ray studies of AGN. The models also suggest an accretion-rate dependence of the broad-line region, provide a natural evolution from more active engines in Seyfert 1s to more "dead" Seyfert 2s, and can explain the observed breakdown of the unified model at low luminosities and the evolution of the broad-line region. While studies of single AGN show important deviations from the expectations of the unified model, results from statistical tests have been contradictory. The most important shortcoming of statistical tests based on direct comparisons of samples of Seyfert 1s and Seyfert 2s is the introduction of selection biases due to anisotropic selection criteria. Early studies of neighbouring galaxies, rather than of the AGN themselves, suggested that the numbers of neighbours were larger for Seyfert 2s than for Seyfert 1s, in contradiction with the unified model. Today, having overcome the previous limitations of small sample sizes and anisotropic selection, studies of the neighbours of hundreds to thousands of AGN have shown that the neighbours of Seyfert 2s are intrinsically dustier and more star-forming than those of Seyfert 1s, and have revealed a connection between AGN type, host-galaxy morphology, and collision history. 
Moreover, angular clustering studies of the two AGN types confirm that they occupy different environments and show that they reside within dark matter halos of different masses. The AGN environment studies are in line with evolution-based unification models in which Seyfert 2s transform into Seyfert 1s during mergers, supporting earlier models of merger-driven activation of Seyfert 1 nuclei. While controversy about the soundness of each individual study still prevails, these studies all agree that the simplest viewing-angle-based models of AGN unification are incomplete. Seyfert 1s and Seyfert 2s seem to differ in star formation and AGN engine power. While it still might be valid that an obscured Seyfert 1 can appear as a Seyfert 2, not all Seyfert 2s must host an obscured Seyfert 1. Understanding whether it is the same engine driving all Seyfert 2s, the connection to radio-loud AGN, the mechanisms of the variability of some AGN that vary between the two types at very short time scales, and the connection of the AGN type to small- and large-scale environment remain important issues to incorporate into any unified model of active galactic nuclei. A study of Swift/BAT AGN published in July 2022 adds support to the "radiation-regulated unification model" outlined in 2017. In this model, the relative accretion rate (termed the "Eddington ratio") of the black hole has a significant impact on the observed features of the AGN. Black holes with higher Eddington ratios appear to be more likely to be unobscured, having cleared away locally obscuring material on a very short timescale. Cosmological uses and evolution For a long time, active galaxies held all the records for the highest-redshift objects known either in the optical or the radio spectrum, because of their high luminosity. They still have a role to play in studies of the early universe, but it is now recognised that an AGN gives a highly biased picture of the "typical" high-redshift galaxy. The most luminous classes of AGN (radio-loud and radio-quiet) seem to have been much more numerous in the early universe. This suggests that massive black holes formed early on and that the conditions for the formation of luminous AGN were more common in the early universe, such as a much higher availability of cold gas near the centre of galaxies than at present. It also implies that many objects that were once luminous quasars are now much less luminous, or entirely quiescent. The evolution of the low-luminosity AGN population is much less well understood due to the difficulty of observing these objects at high redshifts.
Active galactic nucleus
[ "Physics", "Astronomy" ]
5,132
[ "Concepts in astronomy", "Astronomical classification systems" ]
177,098
https://en.wikipedia.org/wiki/Space%20Telescope%20Science%20Institute
The Space Telescope Science Institute (STScI) is the science operations center for the Hubble Space Telescope (HST), the science operations and mission operations center for the James Webb Space Telescope (JWST), and the science operations center for the Nancy Grace Roman Space Telescope. STScI was established in 1981 as a community-based science center that is operated for NASA by the Association of Universities for Research in Astronomy (AURA). STScI's offices are located on the Johns Hopkins University Homewood Campus and in the Rotunda building in Baltimore, Maryland. In addition to performing continuing science operations of HST and preparing for scientific exploration with JWST and Roman, STScI manages and operates the Mikulski Archive for Space Telescopes (MAST), which holds data from numerous active and legacy missions, including HST, JWST, Kepler, TESS, Gaia, and Pan-STARRS. Most of the funding for STScI activities comes from contracts with NASA's Goddard Space Flight Center, but there are smaller activities funded by NASA's Ames Research Center, NASA's Jet Propulsion Laboratory, and the European Space Agency (ESA). The staff at STScI consists of scientists (mostly astronomers and astrophysicists), spacecraft engineers, software engineers, data management personnel, education and public outreach experts, and administrative and business support personnel. There are approximately 200 Ph.D. scientists working at STScI, 15 of whom are ESA staff on assignment to the HST and JWST projects. The total STScI staff consists of about 850 people as of 2021. STScI operates its missions on behalf of NASA and the worldwide astronomy community, and for the benefit of the public. The science operations activities directly serve the astronomy community, primarily in the form of HST and JWST (and eventually Roman) observations and grants, but also include distributing data from other NASA and ground-based missions via MAST. The ground system development activities create and maintain the software systems that are needed to provide these services to the astronomy community. STScI's public outreach activities provide a wide range of resources for media, informal education venues such as planetariums and science museums, and the general public. STScI also serves as a source of guidance to NASA on a range of optical and UV space astrophysics issues. The STScI staff interacts and communicates with the professional astronomy community through a number of channels, including participation at the biannual meetings of the American Astronomical Society, publication of regular STScI newsletters and the STScI website, hosting user committees and science working groups, and holding several scientific and technical symposia and workshops each year. These activities enable STScI to disseminate information to the telescope user community as well as enabling the STScI staff to maximize the scientific productivity of the facilities they operate by responding to the needs of the community and of NASA. STScI activities Telescope science proposal selection The STScI conducts all activities required to select, schedule, and implement the science programs of the Hubble Space Telescope. The first step in this process is to support the annual community-led selection of the scientific programs that will be performed with HST. 
This begins with the publishing of the annual Call for Proposals, which specifies the currently supported science instrument capabilities, proposal requirements and the submission deadline. Anyone is eligible to submit a proposal. All proposals are critically peer-reviewed by the Time Allocation Committee (TAC). The TAC consists of about 100 members of the U.S. and international astronomical community, selected to represent a broad range of research expertise needed to evaluate the proposals. Each proposal cycle typically involves reviewing 700 to 1100 proposals. Only 15–20% of these proposals will eventually be selected for implementation. The TAC reviews several categories of observing time, as well as proposals for archival, theoretical, and combined research projects between HST and other space-based or ground-based observatories (e.g., the Chandra X-ray Observatory and the National Optical Astronomy Observatories). STScI provides all technical and logistical support for these activities. The annual cycle of proposal calls was occasionally altered in duration in years when an HST servicing mission was scheduled. Proposers fortunate enough to be awarded telescope time, referred to as General Observers (GOs), must then provide detailed requirements needed to schedule and implement their observing programs. This information is provided to STScI on what is called a Phase II proposal. The Phase II proposal specifies instrument operation modes, exposure times, telescope orientations, and so on. The STScI staff provides web-based software tools called Exposure Time Calculators (ETCs) that allow GOs to estimate how much observing time a given onboard detector will need to accumulate the amount of light required to accomplish their scientific objectives. In addition, the STScI staff carries out all the steps necessary to implement each specific program, as well as to plan the entire ensemble of programs for the year. For HST, this includes finding guide stars, checking on bright-object constraints, implementing specific scheduling requirements, and working with observers to understand and factor in any specific or non-standard requirements they may have. Observation scheduling Once the Phase II information is gathered, a long-range observing plan is developed that covers the entire year, finding appropriate times to schedule individual observations while ensuring effective and efficient use of the telescope through the year. Detailed observing schedules are created each week, including, in the case of HST operations, scheduling the data communication paths via the Tracking and Data Relay Satellite System (TDRSS) and generating the binary command loads for uplink to the spacecraft. Adjustments can be made to both long-range and weekly plans in response to Targets of Opportunity (e.g., for transient events like supernovae, or for coordination with one-of-a-kind events such as a spacecraft's impact with a comet). The STScI uses the min-conflicts algorithm to schedule observation time on the telescope; a generic sketch of this approach appears below. The STScI is currently developing similar processes for JWST, although the operational details will be very different due to its different instrumentation and spacecraft constraints, and its location at the Sun-Earth L2 Lagrange point (~1.5 million km from Earth) rather than the low Earth orbit (~565 km) used by HST. 
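As noted above, HST scheduling uses a min-conflicts heuristic. The following is a minimal, generic sketch of that algorithm, not STScI's actual scheduler; the observation and slot names are hypothetical:

```python
import random

def min_conflicts(variables, domains, conflicts, max_steps=10_000):
    """Generic min-conflicts local search: start from a random complete
    assignment, then repeatedly pick a conflicted variable and move it
    to the value in its domain that causes the fewest conflicts."""
    assignment = {v: random.choice(domains[v]) for v in variables}
    for _ in range(max_steps):
        conflicted = [v for v in variables
                      if conflicts(v, assignment[v], assignment) > 0]
        if not conflicted:
            return assignment          # conflict-free schedule found
        var = random.choice(conflicted)
        assignment[var] = min(domains[var],
                              key=lambda val: conflicts(var, val, assignment))
    return None                        # step budget exhausted

# Toy scheduling problem: three observations (hypothetical names) compete
# for three time slots; a conflict is sharing a slot with another observation.
observations = ["obs_a", "obs_b", "obs_c"]
slots = {obs: [1, 2, 3] for obs in observations}

def clashes(var, val, assignment):
    return sum(1 for other in observations
               if other != var and assignment[other] == val)

print(min_conflicts(observations, slots, clashes))
```

A real scheduler's conflict function would also encode guide-star availability, bright-object constraints, and spacecraft pointing restrictions, but the repair-based structure is the same: it scales well because each step only re-examines variables that currently violate a constraint.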
Flight operations Flight operations consists of the direct support and monitoring of HST functions in real time. Real-time daily flight operations for HST include about 4 command load uplinks, about 10 data downlinks, and near-continuous health and safety monitoring of the observatory. Real-time operations are staffed around the clock. Flight operations activities for HST are conducted at NASA's GSFC in Greenbelt, Maryland. Science data processing Science data from HST arrive at the STScI a few hours after being downlinked from TDRSS and subsequently passing through a data capture facility at NASA's Goddard Space Flight Center. Once at STScI, the data are processed by a series of computer algorithms that convert their format into an internationally accepted standard (known as FITS: Flexible Image Transport System), correct for missing data, and perform final calibration of the data by removing instrumental artifacts. The calibration steps are different for each HST instrument, but as a general rule they include cosmic-ray removal, correction for instrument/detector non-uniformities, flux calibration, and application of world coordinate system information (which tells the user precisely where on the sky the detector was pointed). The calibrations applied are the best available at the time the data pass through the pipeline. The STScI is working with instrument developers to define similar processes for Kepler and JWST data. Science data archiving and distribution All HST science data are permanently archived after passing through the calibration pipeline. NASA policy mandates a one-year proprietary period on all data, which means that only the initial proposal team can access the data for the first year after they have been obtained. Subsequent to that year, the data become available to anyone who wishes to access them. Data sets retrieved from the archive are automatically re-calibrated to ensure that the most up-to-date calibration factors and software are applied. The STScI serves as the archive center for all of NASA's optical/UV space missions. In addition to archiving and storing HST science data, STScI holds data from 13 other missions, including the International Ultraviolet Explorer (IUE), the Extreme Ultraviolet Explorer (EUVE), the Far Ultraviolet Spectroscopic Explorer (FUSE), and the Galaxy Evolution Explorer (GALEX). Kepler and JWST science data will be archived and retrieved in a similar fashion. The internet serves as the primary user interface to the data archives at STScI (http://archive.stsci.edu). The archive currently holds over 30 terabytes of data. Each day about 11 gigabytes of new data are ingested and about 85 gigabytes of data are distributed to users. The Hubble Legacy Archive (HLA; http://hla.stsci.edu/), currently in development, will act as a more integrated and user-friendly archive. It will provide raw Hubble data as well as higher-level science products (color images, mosaics, etc.). Science instrument calibration and characterization STScI is responsible for in-flight calibration of the science instruments on HST and JWST. For HST, a calibration plan for the observatory is developed each year. This plan is designed to support the selected GO observation programs for that cycle, as well as to provide a basic calibration that spans the lifetime of each instrument. The calibration program includes measurements made relative to on-board calibration sources, measurements to assess internal detector noise levels, and observations of astronomical standard stars and fields, needed to determine absolute flux conversions and astrometric transformations. 
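As a hedged illustration of what the flux calibration described above means for an end user, the sketch below uses the community astropy library (not STScI's internal pipeline). The filename is hypothetical; PHOTFLAM (the inverse sensitivity) and EXPTIME are standard HST header keywords, although which extension carries PHOTFLAM varies by instrument:

```python
from astropy.io import fits

# "example_image.fits" stands in for any calibrated HST image.
with fits.open("example_image.fits") as hdul:
    sci = hdul[1].data                           # calibrated science array
    hdr0, hdr1 = hdul[0].header, hdul[1].header
    photflam = hdr1.get("PHOTFLAM", hdr0.get("PHOTFLAM"))
    exptime = hdr0["EXPTIME"]                    # exposure time in seconds

# If the image is in total counts, convert to a count rate, then use the
# inverse sensitivity to obtain mean flux density in erg cm^-2 s^-1 A^-1:
flux_density = (sci / exptime) * photflam
```

This is the sense in which the pipeline's reference files matter to observers: an updated PHOTFLAM in the header directly changes the physical fluxes derived from the same raw counts.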
The external calibrations on HST typically total 5–10% of the GO observing program, with more time required when an instrument is still relatively new. HST has had a total of 12 science instruments to date, 6 of which are currently active. Two new instruments were installed during the May 2009 HST servicing mission STS-125. Electronic failures in STIS (in 2001) and in the ACS Wide-Field Channel (in 2007) were also repaired on-orbit in May 2009, bringing these instruments back to active status. All 12 HST instruments plus the 4 planned for JWST are summarized in the table below. HST instruments can detect light with wavelengths from the ultraviolet through the near infrared. JWST instruments will operate from the red end of optical wavelengths (~6000 Angstroms) to the mid-infrared (5 to 27 micrometres). Instruments listed as decommissioned are no longer on board. STScI staff develop the calibration proposals, shepherd them through the scheduling process, and analyze the data they produce. These programs provide updated calibration and reference files to be used in the data processing pipeline. The calibration files are also archived so users can retrieve them if they need to manually recalibrate their data. All calibration activity and results are documented, usually in the form of Instrument Science Reports posted to the public website, and occasionally in the form of published papers. Results are also incorporated into the Data Handbooks and Instrument Handbooks. In addition to calibration of the instruments, STScI staff characterize and document the performance of the instruments, so users can better understand how to interpret their data. These are generally effects that are not automatically corrected for in the pipeline (because they vary with time or depend on the brightness of the source). They include global effects, such as charge transfer efficiency in the charge-coupled devices, as well as effects specific to modes and filters, such as filter "ghosts" (caused by subtle scattering of light within an instrument). Awareness of these effects can come from STScI staff as they analyze calibration programs, or from observers who find oddities in their data and provide feedback to STScI. The STScI staff also performs the characterization and calibration of the telescope itself. In the case of HST, this has evolved to be primarily a matter of monitoring and adjusting focus, and monitoring and measuring point spread functions. (In the early 1990s, the STScI was responsible for accurate measurement of the spherical aberration, necessary for the corrective optics of all subsequent instruments.) In the case of JWST, the STScI will be responsible for using the wavefront sensor system developed by JPL and Northrop Grumman Space Technology (NGST, the NASA contractor building the observatory) to monitor and adjust the segmented telescope. Post observation support Post-observation support includes a HelpDesk that users can contact with questions about any aspect of observing, from how to submit a proposal to how to analyze the data. Science community service The STScI performs large HST science programs on behalf of the community. These are programs with broad scientific applications. To date, these programs include the Hubble Deep Field (HDF), the Hubble Deep Field South (HDFS), and the Ultra Deep Field (UDF). The raw and processed data for these observations are made available to the astronomy community nearly immediately. 
These products have then been used by many astronomers in pursuit of their own research topics, and have motivated a great deal of follow-up work (see, for example, http://www.stsci.edu/ftp/science/hdf/clearinghouse/clearinghouse.html and http://www.stsci.edu/hst/udf/index_html). Ground systems STScI is responsible for developing, enhancing, and maintaining most of the ground systems used to carry out the Hubble science operations described above. These systems originally (1980s, early 1990s) came from several sources, including in-house STScI developments and work done under NASA contracts with various vendors. Over HST's lifetime substantial work has been done on these systems, even while they were supporting Hubble's daily operations. They have been integrated into a more effective and easier-to-operate end-to-end system. They have been through major technology upgrades (e.g., improved operating systems and computer hardware, higher-capacity archive storage media). They have also been modified to support the succession of instruments installed in the telescope. In the last several years, they have been modified to support WFC3 and COS, the two new instruments that will be installed during the next HST servicing mission, and to support the two-gyroscope mode of HST operations. STScI also provides subsets of ground system services to other astronomy missions, including FUSE, Kepler, and JWST. STScI's software engineers maintain about 7,900,000 source lines of code. Mission development and operations support STScI routinely participates with NASA and industry system engineers and scientists in developing the overall mission architecture. For HST, this includes helping to determine and prioritize servicing mission activities and development of the servicing strategy. For JWST, this includes participating in the definition of high-level science requirements and the overall architecture for the mission. In both cases, the STScI focuses on the scientific capabilities of the mission, as well as the requirements for smooth and efficient operations of the observatory. Scientific research activities STScI manages the selection of the Hubble Fellowship Program. Since 1990, Hubble Fellowships have supported outstanding postdoctoral scientists whose research is broadly related to the scientific mission of the Hubble Space Telescope. In 2009, the program was combined with the Spitzer Fellowship, which since 2002 had been associated with the Spitzer Space Telescope and its science program. It now supports fellows undertaking research associated with all missions within the Cosmic Origins theme: the Herschel Space Observatory, Hubble Space Telescope (HST), James Webb Space Telescope (JWST), Stratospheric Observatory for Infrared Astronomy (SOFIA), and the Spitzer Space Telescope. The research may be theoretical, observational, or instrumental. Each year since HST's launch in 1990, 8 to 12 fellowships have been awarded; since 2009 the number has hovered around 16. STScI also sponsors a summer student intern program that allows talented undergraduate students from around the world to work with the institute's scientific staff, providing these students with hands-on experience in state-of-the-art astronomical research. STScI's full-time scientific staff conducts original research spanning a broad range of astrophysics, including investigations of the Solar System, exoplanet detection and characterization, star formation, galaxy evolution, and physical cosmology. 
STScI hosts an annual scientific symposium held each spring as well as several smaller scientific workshops. The employment of an active scientific staff at STScI helps to ensure that HST, and eventually JWST, perform at peak capability. Public outreach STScI's Office of Public Outreach (OPO) provides a wide array of products and services designed to share and communicate the science and discoveries of HST, JWST, Roman, and astronomy in general with the public. OPO's efforts focus on meeting the needs of the media, the informal science education community, and the general public. OPO produces approximately 40 new press releases each year featuring HST discoveries and science results. These media packages include news stories, Hubble images, explanatory artwork, animations, and supplementary information for use by print, broadcast, and online media. OPO also participates in press conferences for particularly newsworthy discoveries, and conducts science writers' workshops for in-depth sessions with scientists working on current astrophysical research problems. In addition to news releases, OPO develops a variety of astronomy-related products and features for use by the general public and informal education venues, including museums, science centers, planetariums, and libraries. These include background articles, telescope imagery, illustrations, diagrams, infographics, videos, scientific visualizations, virtual reality, and interactives. Most of these resources are distributed via websites developed and managed by STScI, including Hubblesite, Webbtelescope, ViewSpace, and Illuminated Universe. Content is also distributed via social media platforms, including Facebook, Twitter, Instagram, and YouTube. OPO also conducts outreach via live events in person and online, including a regular Public Lecture Series as well as attendance at various local and national STEM events. In addition, OPO provides support to informal education venues in the form of print materials, program/event resources, and professional development. OPO's outreach efforts are conducted in partnership with the Hubble, Webb, and Roman mission offices and with other institutions under NASA's Universe of Learning.
Space Telescope Science Institute
[ "Astronomy" ]
3,874
[ "Space telescopes", "James Webb Space Telescope" ]
177,165
https://en.wikipedia.org/wiki/Malayan%20Emergency
The Malayan Emergency, also known as the Anti–British National Liberation War, was a guerrilla war fought in Malaya between communist pro-independence fighters of the Malayan National Liberation Army (MNLA) and the military forces of the Federation of Malaya and the Commonwealth (British Empire). The communists fought to win independence for Malaya from the British Empire and to establish a communist state, while the Malayan Federation and Commonwealth forces fought to combat communism and protect British economic and colonial interests. The term "Emergency" was used by the British to characterise the conflict in order to avoid referring to it as a war, because London-based insurers would not pay out in instances of civil wars. The war began on 17 June 1948, after Britain declared a state of emergency in Malaya following attacks on plantations, which had been revenge attacks for the killing of left-wing activists. The leader of the Malayan Communist Party (MCP), Chin Peng, and his allies fled into the jungles and formed the MNLA to wage a war for national liberation against British colonial rule. Many MNLA fighters were veterans of the Malayan Peoples' Anti-Japanese Army (MPAJA), a communist guerrilla army previously trained, armed and funded by the British to fight against Japan during World War II. The communists gained support from many civilians, mainly those from the Chinese community. The communists' belief in class consciousness, and in both ethnic and gender equality, inspired many women and indigenous people to join both the MNLA and its undercover supply network, the Min Yuen. Additionally, hundreds of former Japanese soldiers joined the MNLA. After establishing a series of jungle bases, the MNLA began raiding British colonial police and military installations. Mines, plantations, and trains were attacked by the MNLA with the goal of gaining independence for Malaya by bankrupting the British occupation. The British attempted to starve the MNLA using scorched-earth policies involving food rationing, the killing of livestock, and aerial spraying of the herbicide Agent Orange. The British engaged in extrajudicial killings of unarmed villagers, in violation of the Geneva Conventions. The most infamous example is the Batang Kali massacre, which the press has referred to as "Britain's My Lai". The Briggs Plan forcibly relocated between 400,000 and 1,000,000 civilians into concentration camps called "new villages". Many Orang Asli indigenous communities were also targeted for internment because the British believed that they were supporting the communists. The widespread decapitation of people suspected to have been guerrillas led to the 1952 British Malayan headhunting scandal. Similar scandals relating to atrocities committed by British forces included the public display of corpses. Although the emergency was declared over in 1960, communist leader Chin Peng renewed the insurgency against the Malaysian government in 1968. This second phase of the insurgency lasted until 1989. Origins Socioeconomic issues (1941–1948) The economic disruption caused by World War II (WWII) in British Malaya led to widespread unemployment, low wages, and high levels of food price inflation. The weak economy was a factor in the growth of trade union movements and caused a rise in communist party membership, with considerable labour unrest and a large number of strikes occurring between 1946 and 1948. Malayan communists organised a successful 24-hour general strike on 29 January 1946, before organising 300 strikes in 1947. 
To combat rising trade union activity, the British used police and soldiers as strikebreakers, while employers carried out mass dismissals, forcibly evicted striking workers from their homes, pursued legal harassment, and cut workers' wages. Colonial police responded to rising trade union activity through arrests, deportations, and by beating striking workers to death. Responding to the attacks against trade unions, communist militants began assassinating strikebreakers and attacking anti-union estates. These attacks were used by the colonial occupation as a pretext to conduct mass arrests of left-wing activists. On 12 June 1948, the British colonial occupation banned the Pan-Malayan Federation of Trade Unions (PMFTU), Malaya's largest trade union. Malaya's rubber and tin resources were used by the British to pay war debts to the United States and to recover from the damage of WWII. Malayan rubber exports to the United States were of greater value than all domestic exports from Britain to America, causing Malaya to be viewed by the British as a vital asset. Britain had prepared for Malaya to become an independent state, but only by handing power to a government which would be subservient to Britain and allow British businesses to keep control of Malaya's natural resources. Sungai Siput incident (1948) The first shots of the Malayan Emergency were fired during the Sungai Siput incident, on 17 June 1948, in the office of the Elphil Estate near the town of Sungai Siput. Three European plantation managers were killed by three young Chinese men suspected to have been communists. The deaths of these European plantation managers were used by the British colonial occupation as justification to arrest or kill many of Malaya's communist and trade union leaders. These mass arrests and killings saw many left-wing activists going into hiding and fleeing into the Malayan jungles. Origin and formation of the MNLA (1949) Although the Malayan communists had begun preparations for a guerrilla war against the British, the emergency measures and mass arrest of communists and left-wing activists in 1948 took them by surprise. Led by Chin Peng, the remaining Malayan communists retreated to rural areas and formed the Malayan National Liberation Army (MNLA) on 1 February 1949. The MNLA was partly a re-formation of the Malayan Peoples' Anti-Japanese Army (MPAJA), the communist guerrilla force which had been the principal resistance in Malaya against the Japanese occupation during WWII. The British had secretly helped form the MPAJA in 1942 and trained its members in the use of explosives, firearms and radios. Chin Peng was a veteran anti-fascist and trade unionist who had played an integral role in the MPAJA's resistance. Disbanded in December 1945, the MPAJA officially turned in its weapons to the British Military Administration, although many MPAJA soldiers secretly hid stockpiles of weapons in jungle hideouts. Members who agreed to disband were offered economic incentives. Around 4,000 members rejected these incentives and went underground. The MNLA began its war for Malayan independence from the British Empire by targeting the colonial resource extraction industries, namely the tin mines and rubber plantations which were the main sources of income for the British occupation of Malaya. The MNLA attacked these industries in the hope of bankrupting the British and winning independence by making the colonial administration too expensive to maintain. 
Communist guerrilla strategies The Malayan National Liberation Army (MNLA) employed guerrilla tactics, attacking military and police outposts, sabotaging rubber plantations and tin mines, and destroying transport and communication infrastructure. Support for the MNLA mainly came from the 3.12 million ethnic Chinese living in Malaya, many of whom were farmers living on the edges of the Malayan jungles and had been politically influenced by both the Chinese Communist Revolution and the resistance against Japan during WWII. Their support allowed the MNLA to supply itself with food, medicine, and information, and provided a source of new recruits. The ethnic Malay population supported them in smaller numbers. The MNLA gained the support of the Chinese because the Chinese were denied the equal right to vote in elections, had no land rights to speak of, and were usually very poor. The MNLA's supply organisation was called the Min Yuen (People's Movement). It had a network of contacts within the general population. Besides supplying material, especially food, it was also important to the MNLA as a source of intelligence. The MNLA and its supporters referred to the conflict as the Anti-British National Liberation War. The MNLA's camps and hideouts were in the inaccessible tropical jungle and had limited infrastructure. Almost 90% of MNLA guerrillas were ethnic Chinese, though there were some Malays, Indonesians and Indians among its members. The MNLA was organised into regiments, although these had no fixed establishments and each included all communist forces operating in a particular region. The regiments had political sections, commissars, instructors and secret service. In the camps, the soldiers attended lectures on Marxism–Leninism, and produced political newsletters to be distributed to civilians. In the early stages of the conflict, the guerrillas envisaged establishing control in "liberated areas" from which the government forces had been driven, but did not succeed in this. British and Commonwealth strategies During the first two years of the Emergency, British forces conducted a 'counter-terror' characterised by high levels of state coercion against civilian populations, including sweeps, cordons, large-scale deportation, and capital charges against suspected guerrillas. Police corruption and the British military's widespread destruction of farmland and burning of homes belonging to villagers rumoured to be helping communists led to a sharp increase in civilians joining the MNLA and communist movement. However, these tactics also prevented the communists from establishing liberated areas (the MCP's first and foremost objective), successfully broke up larger guerrilla formations, and shifted the MNLA's plan from securing territory to one of widespread sabotage. Commonwealth forces struggled to fight guerrillas who moved freely in the jungle and enjoyed support from rural Chinese populations. British planters and miners, who bore the brunt of the communist attacks, began to talk about government incompetence and being betrayed by Whitehall. The initial government strategy was primarily to guard important economic targets, such as mines and plantation estates. In April 1950, General Sir Harold Briggs was appointed Director of Operations in Malaya, where he devised the strategy that became known as the Briggs Plan. The central tenet of the Briggs Plan was to segregate MNLA guerrillas from their supporters among the population. 
A major component of the Briggs Plan involved targeting the MNLA's food supplies, which came from three main sources: food grown by the MNLA in the jungle, food supplied by the Orang Asli aboriginal people living in the deep jungle, and MNLA supporters within the 'squatter' communities on the jungle fringes. The Briggs Plan also included the forced relocation of some 500,000 rural Malayans, including 400,000 Chinese civilians, into internment camps called "new villages". These internment camps were surrounded by barbed wire, police posts, and floodlit areas, all designed to stop the inmates from contacting and supplying MNLA guerrillas in the jungles, segregating the communists from their civilian supporters. In 1948, the British had 13 infantry battalions in Malaya, including seven partly formed Gurkha battalions, three British battalions, two battalions of the Royal Malay Regiment and a Royal Artillery Regiment being used as infantry. The Permanent Secretary of Defence for Malaya, Sir Robert Grainger Ker Thompson, had served in the Chindits in Burma during World War II. Thompson's in-depth experience of jungle warfare proved invaluable during this period, as he was able to build effective civil-military relations and was one of the chief architects of the counter-insurgency plan in Malaya. In 1951, the British High Commissioner in Malaya, Sir Henry Gurney, was killed near Fraser's Hill during an MNLA ambush. General Gerald Templer was chosen to become the new High Commissioner in January 1952. During Templer's two-year command, "two-thirds of the guerrillas were wiped out and lost over half their strength, the incident rate fell from 500 to less than 100 per month and the civilian and security force casualties from 200 to less than 40." Orthodox historiography suggests that Templer changed the situation in the Emergency and that his actions and policies were a major part of British success during his period in command. Revisionist historians have challenged this view and frequently support the ideas of Victor Purcell, a Sinologist who as early as 1954 claimed that Templer merely continued policies begun by his predecessors. Control of anti-guerrilla operations At all levels of the Malayan government (national, state, and district levels), military and civil authority was assumed by a committee of military, police and civilian administration officials. This allowed intelligence from all sources to be rapidly evaluated and disseminated, and also allowed all anti-guerrilla measures to be co-ordinated. Each of the Malay states had a State War Executive Committee, which included the State Chief Minister as chairman, the Chief Police Officer, the senior military commander, the state home guard officer, the state financial officer, the state information officer, the executive secretary, and up to six selected community leaders. The Police, Military, and Home Guard representatives and the Secretary formed the operations subcommittee responsible for the day-to-day direction of emergency operations. The operations subcommittees as a whole made joint decisions. Agent Orange During the Malayan Emergency, Britain became the first nation in history to make use of herbicides and defoliants as a military weapon. They were used to destroy bushes, food crops, and trees to deprive the guerrillas of both food and cover, playing a role in Britain's food-denial campaign of the early 1950s. A variety of herbicides were used to clear lines of communication and destroy food crops as part of this strategy. 
One of the herbicides was a 50:50 mixture of the butyl esters of 2,4,5-T and 2,4-D, sold under the brand name Trioxone. This mixture was virtually identical to the later Agent Orange, though Trioxone likely had a heavier contamination of the health-damaging dioxin impurity. In 1952, Trioxone and mixtures of the aforementioned herbicides were sprayed along a number of key roads. From June to October 1952, roadside vegetation at possible ambush points was sprayed with defoliant, described as a policy of "national importance". The experts advised that the use of herbicides and defoliants for clearing the roadside could be effectively replaced by removing vegetation by hand, and the spraying was stopped. However, after that strategy failed, the use of herbicides and defoliants in the effort to fight the guerrillas was restarted under the command of Gerald Templer in February 1953, as a means of destroying food crops grown by communist forces in jungle clearings. Helicopters and fixed-wing aircraft despatched sodium trichloroacetate and Trioxone, along with pellets of chlorophenyl N,N-dimethyl-1-naphthylamine, onto crops such as sweet potatoes and maize. Many Commonwealth personnel who handled and/or used Trioxone during the conflict suffered serious exposure to dioxin. An estimated 10,000 civilians and guerrillas in Malaya also suffered from the effects of the defoliant, but many historians think the number is much larger, since Trioxone was used on a large scale in the Malayan conflict and, unlike the US, the British government limited information about its use to avoid negative global public opinion. The prolonged absence of vegetation caused by defoliation also resulted in major soil erosion. Following the end of the Emergency, US Secretary of State Dean Rusk advised US President John F. Kennedy that the precedent of using herbicide in warfare had been established by the British through their use of aircraft to spray herbicide and thus destroy enemy crops and thin the thick jungle of northern Malaya. Nature of warfare The British Army soon realised that clumsy sweeps by large formations were unproductive. Instead, platoons or sections carried out patrols and laid ambushes, based on intelligence from various sources, including informers, surrendered MNLA personnel, aerial reconnaissance and so on. An operation named "Nassau", carried out in the Kuala Langat swamp, is described in The Guerrilla – and How to Fight Him. MNLA guerrillas had numerous advantages over Commonwealth forces: they lived in closer proximity to villagers, they sometimes had relatives or close friends in the village, and they were not afraid to threaten violence or to torture and murder village leaders as an example to the others, which forced villagers to assist them with food and information. British forces thus faced a dual threat: the MNLA guerrillas and the silent network in villages who supported them. British troops often described the terror of jungle patrols. In addition to watching out for MNLA guerrillas, they had to navigate difficult terrain and avoid dangerous animals and insects. Many patrols would stay in the jungle for days, even weeks, without encountering the MNLA guerrillas. That strategy led to the infamous Batang Kali massacre, in which 24 unarmed villagers were executed by British troops. Royal Air Force activities, grouped under "Operation Firedog", included ground attacks in support of troops and the transport of supplies. 
The RAF used a wide mixture of aircraft to attack MNLA positions, from the new Avro Lincoln heavy bomber to Short Sunderland flying boats. Jets entered the conflict when de Havilland Vampires replaced the Spitfires of No. 60 Squadron RAF in 1950 and were used for ground attack. Jet bombers arrived with the English Electric Canberra in 1955. The Casualty Evacuation Flight was formed in early 1953 to bring the wounded out of the jungles; it used early helicopters such as the Westland Dragonfly, landing in small clearings. The RAF progressed to using Westland Whirlwind helicopters to deploy troops in the jungle. The MNLA was vastly outnumbered by the British forces and their Commonwealth and colonial allies in terms of regular full-time soldiers. Siding with the British occupation were a maximum of 40,000 British and other Commonwealth troops, 250,000 Home Guard members, and 66,000 police agents. Supporting the communists were 7,000+ communist guerrillas (1951 peak), an estimated 1,000,000 sympathisers, and an unknown number of civilian Min Yuen supporters and Orang Asli sympathisers. Commonwealth contribution Commonwealth forces from Africa and the Pacific fought on the side of the British-backed Federation of Malaya during the Malayan Emergency. These forces included troops from Australia, New Zealand, Fiji, Kenya, Nyasaland, and Northern and Southern Rhodesia. Australia and Pacific Commonwealth forces Australian ground forces first joined the Malayan Emergency in 1955 with the deployment of the 2nd Battalion, Royal Australian Regiment (2 RAR). The 2 RAR was later replaced by 3 RAR, which in turn was replaced by 1 RAR. The Royal Australian Air Force contributed No. 1 Squadron (Avro Lincoln bombers) and No. 38 Squadron (C-47 transports). In 1955, the RAAF extended Butterworth air base, from which Canberra bombers of No. 2 Squadron (replacing No. 1 Squadron) and CAC Sabres of No. 78 Wing carried out ground attack missions against the guerrillas. Royal Australian Navy destroyers joined the force in June 1955. Between 1956 and 1960, Australian aircraft carriers and destroyers were attached to the Commonwealth Strategic Reserve forces for three to nine months at a time. Several of the destroyers fired on communist positions in Johor. New Zealand's first contribution came in 1949, when Douglas C-47 Dakotas of RNZAF No. 41 Squadron were attached to the Royal Air Force's Far East Air Force. New Zealand became more directly involved in the conflict in 1955; from May, RNZAF de Havilland Vampires and Venoms began to fly strike missions. In November 1955, 133 soldiers of what was to become the Special Air Service of New Zealand arrived from Singapore for training in-country with the British SAS, beginning operations by April 1956. The Royal New Zealand Air Force continued to carry out strike missions with Venoms of No. 14 Squadron and later with English Electric Canberra bombers of No. 75 Squadron, as well as supply-dropping operations in support of anti-guerrilla forces, using the Bristol Freighter. A total of 1,300 New Zealanders were stationed in Malaya between 1948 and 1964, and fifteen lost their lives. Approximately 1,600 Fijian troops were involved in the Malayan Emergency from 1952 to 1956. The experience was captured in the documentary Back to Batu Pahat. African Commonwealth forces Southern Rhodesia and its successor, the Federation of Rhodesia and Nyasaland, contributed two units to Malaya. 
Between 1951 and 1953, white Southern Rhodesian volunteers formed "C" Squadron of the Special Air Service. The Rhodesian African Rifles, comprising black soldiers and warrant officers led by white officers, were stationed in Johor between 1956 and 1958. The King's African Rifles from Nyasaland, Northern Rhodesia and Kenya were also deployed to Malaya. Iban mercenaries The British Empire hired thousands of mercenaries from the Iban people (a subgroup of the Dayak people) of Borneo to fight against the Malayan National Liberation Army. During their service they were widely praised for their jungle and bushcraft skills, though their military effectiveness and behaviour during the war have been brought into question. Their deployment received a large amount of both positive and negative attention in British media. They were also responsible for a number of atrocities, most notably the decapitation and scalping of suspected pro-independence guerrillas. Photographs of this practice were leaked in 1952, sparking the British Malayan headhunting scandal. In 1953 most Ibans in Malaya joined the reformed Sarawak Rangers, transitioning from mercenaries into regular soldiers. According to a former member of the Sarawak Rangers, Ibans served with at least 42 separate battalions in the Malayan Emergency belonging to either British or Commonwealth militaries. Iban mercenaries were first deployed to British Malaya by the British Empire to fight in the Malayan Emergency on 8 August, serving with the Ferret Force. Many were motivated to fight by the hope that they could collect the heads and scalps of their enemies. Their deployment was supported by the British politician Arthur Creech Jones, then serving as Secretary of State for the Colonies, who agreed to deploy Ibans to the Malayan Emergency for three months. Amid rumours that the Iban mercenaries were practised headhunters, all Ibans serving with the British were briefly removed from British Malaya; they were quietly redeployed in 1949 and served for the remainder of the war until its end in 1960. Some historians have argued that the British military's use of Ibans stemmed from stereotypes that "primitive" people enjoyed a closer relationship with nature than Europeans. Others have argued that the British army's deployment and treatment of the Ibans during the Malayan Emergency reflected the British military's history regarding what they perceived as 'martial races'. The deployment of Iban mercenaries recruited to fight in the Malayan Emergency was a widely publicised topic in the British press. Many newspaper articles carried titles referring to the Iban cultural practice of headhunting and portrayed Ibans as violent and primitive, yet friendly towards white Europeans. While many newspaper articles incorrectly argued that Ibans deployed to Malaya were no longer headhunters, others put forward arguments that Ibans in Malaya should be allowed to openly decapitate and scalp members of the MNLA. The Iban mercenaries deployed to Malaya were widely praised for their jungle bushcraft skills, although some British and Commonwealth officers found that Ibans were outperformed in this role by recruits from Africa and certain parts of the Commonwealth. The behaviour of Iban mercenaries serving in Malaya was also the subject of criticism, as some Iban recruits were found to have looted corpses and others had threatened their commanding officers with weapons.
Due to fears of racial tensions with ethnic Malays, the Iban mercenaries that Britain deployed to Malaya were denied access to automatic weapons. There were also communication difficulties, as virtually all the Iban recruits in Malaya were illiterate and most British troops serving alongside them had no prior experience with Asian languages. Some Iban mercenaries refused to go on patrol after receiving bad omens in their dreams. Iban society had no social classes, making it difficult for Iban recruits to adhere to military ranks. Some Royal Marines complained that their Iban allies were inaccurate with firearms, and Ibans were both the victims and perpetrators of an unusual number of friendly fire incidents. The first Iban casualty of the war was a man called Jaweng ak Jugah, who was shot dead after being mistaken for a "communist terrorist". At the beginning of the Malayan Emergency, the Ibans serving the British were classified as civilians and were thus awarded British and Commonwealth medals reserved for civilians. In one example, the Iban mercenary Awang anak Raweng was awarded the George Cross in 1951 after he allegedly repelled an attack by 50 MNLA guerrillas. Another example is Menggong anak Panggit, who was awarded the George Medal in 1953. In 1953 Ibans in Malaya were given their own regiment, the Sarawak Rangers. Many would go on to fight during the Second Malayan Emergency. The October Resolution In 1951 the MNLA implemented the October Resolution. A response to the Briggs Plan, the resolution marked a change of tactics: the MNLA reduced attacks on economic targets and civilian collaborators, redirected its efforts towards political organisation and subversion, and bolstered its supply network through the Min Yuen as well as jungle farming. Amnesty declaration On 8 September 1955, the Government of the Federation of Malaya issued a declaration of amnesty to the communists. The Government of Singapore issued an identical offer at the same time. Tunku Abdul Rahman, as Chief Minister, offered amnesty but rejected negotiations with the MNLA. The amnesty read: Those of you who come in and surrender will not be prosecuted for any offence connected with the Emergency, which you have committed under Communist direction, either before this date or in ignorance of this declaration. You may surrender now and to whom you like including to members of the public. There will be no general "ceasefire" but the security forces will be on alert to help those who wish to accept this offer and for this purpose local "ceasefire" will be arranged. The Government will conduct investigations on those who surrender. Those who show that they are genuinely intent to be loyal to the Government of Malaya and to give up their Communist activities will be helped to regain their normal position in society and be reunited with their families. As regards the remainder, restrictions will have to be placed on their liberty but if any of them wish to go to China, their request will be given due consideration. Following this amnesty declaration, an intensive publicity campaign was launched by the government. Alliance ministers in the Federal Government travelled extensively across Malaya exhorting civilians to call upon communist forces to surrender their weapons and accept the amnesty. Despite the campaign, few Communist guerrillas chose to surrender. Some political activists criticised the amnesty for being too restrictive and for being a rewording of earlier, well-established surrender offers.
These critics advocated direct negotiations with the communist guerrillas of the MNLA and MCP to work towards a peace settlement. Leading officials of the Labour Party had not excluded the possibility of recognising the MCP as a political organisation as part of such a settlement. Within the Alliance itself, influential elements in both the MCA and UMNO were endeavouring to persuade the Chief Minister, Tunku Abdul Rahman, to hold negotiations with the MCP. Baling Talks and their consequences In 1955 Chin Peng indicated that he would be willing to meet with British officials alongside senior Malayan politicians. The result was the Baling Talks, a meeting between communist and Commonwealth representatives to discuss a peace settlement. The Baling Talks took place in an English school in Baling on 28 December 1955. The MCP and MNLA were represented by Chin Peng, Rashid Maidin, and Chen Tien. The Commonwealth side was represented by Tunku Abdul Rahman, Tan Cheng-Lock and David Saul Marshall. Although the meeting itself was conducted without incident, the British were worried that a peace treaty with the MCP would lead to communist activists regaining influence in society. As a result, many of Chin Peng's demands were dismissed. Following the failure of the talks, Tunku Abdul Rahman withdrew the amnesty offer for MNLA members on 8 February 1956, five months after it had been made, stating he was unwilling to meet the communists again unless they indicated beforehand their intention to make "a complete surrender". Following the failure of the Baling Talks, the MCP made various efforts to resume peace negotiations with the Malayan government, all without success. Meanwhile, discussions began in the new Emergency Operations Council to intensify the "People's War" against the guerrillas. In July 1957, a few weeks before independence, the MCP made another attempt at peace talks, suggesting two conditions for a negotiated peace: that its members should be given the privileges enjoyed by citizens, and a guarantee that political as well as armed members of the MCP would not be punished. Tunku Abdul Rahman did not respond to the MCP's proposals. The failure of the talks affected MCP policy: the strength of the MNLA and Min Yuen declined to 1,830 members by August 1957, and those who remained faced exile, or death in the jungle. Following the declaration of Malaya's independence in August 1957, the MNLA lost its rationale as a force of colonial liberation. The last serious resistance from MNLA guerrillas ended with a surrender in the Telok Anson marsh area in 1958. The remaining MNLA forces fled to the Thai border and further east. On 31 July 1960 the Malayan government declared the state of emergency over, and Chin Peng left south Thailand for Beijing, where he was accommodated by the Chinese authorities in the International Liaison Bureau, which housed many other Southeast Asian communist party leaders. Casualties During the conflict, security forces killed 6,710 MNLA guerrillas and captured 1,287, while 2,702 guerrillas surrendered during the conflict and approximately 500 more did so at its conclusion. A total of 226 guerrillas were executed. 1,346 Malayan troops and police were killed during the fighting. 1,443 British personnel died, in what remains the largest loss of life among UK armed forces since the Second World War. 2,478 civilians were killed, with another 810 recorded as missing.
Atrocities Commonwealth Torture During the Malayan conflict, there were instances during operations to find MNLA guerrillas where British troops detained and allegedly tortured villagers who were suspected of aiding the MNLA. Socialist historian Brian Lapping said that there was "some vicious conduct by the British forces, who routinely beat up Chinese squatters when they refused, or possibly were unable, to give information" about the MNLA. The Scotsman newspaper lauded these tactics as a good practice since "simple-minded peasants are told and come to believe that the communist leaders are invulnerable". Some civilians and detainees were also allegedly shot, either because they attempted to flee (and were suspected of aiding the MNLA) or simply because they refused to give intelligence to British forces. Widespread use of arbitrary detention, punitive actions against villages, and use of torture by the police "created animosity" between Chinese squatters and British forces in Malaya, which was counterproductive to gathering good intelligence. Batang Kali Massacre During the Batang Kali massacre, 24 unarmed civilians were executed by the Scots Guards near a rubber plantation at Sungai Rimoh near Batang Kali in Selangor in December 1948. All the victims were male, ranging in age from young teenage boys to elderly men. Many of the victims' bodies were found to have been mutilated, and their village of Batang Kali was burned to the ground. No weapons were found when the village was searched. The only survivor of the massacre was a man named Chong Hong, who was in his 20s at the time. He fainted and was presumed dead. Soon afterwards the British colonial government staged a cover-up of British military abuses, which served to obfuscate the exact details of the massacre. The massacre later became the focus of decades of legal battles between the UK government and the families of the civilians executed by British troops. According to Christi Silver, Batang Kali was notable in that it was the only incident of mass killings by Commonwealth forces during the war, which Silver attributes to the unique subculture of the Scots Guards and poor enforcement of discipline by junior officers. Internment camps As part of the Briggs Plan devised by British General Sir Harold Briggs, 500,000 people (roughly ten percent of Malaya's population) were forced from their homes by British forces. Tens of thousands of homes were destroyed, and many people were imprisoned in British internment camps called "new villages". During the Malayan Emergency, 450 new villages were created. The policy aimed to inflict collective punishment on villages where people were thought to support communism, and also to isolate civilians from guerrilla activity. Many of the forced evictions involved the destruction of existing settlements, which went beyond the justification of military necessity. This practice is now prohibited by Article 17(1) of Additional Protocol II to the Geneva Conventions, which forbids the displacement of the civilian population unless its own security or imperative military reasons demand it. Collective punishment A key British war measure was inflicting collective punishments on villages whose population were deemed to be aiding MNLA guerrillas. At Tanjong Malim in March 1952, Templer imposed a twenty-two-hour house curfew, banned everyone from leaving the village, closed the schools, stopped bus services, and reduced the rice rations for 20,000 people.
The last measure prompted the London School of Hygiene and Tropical Medicine to write to the Colonial Office to note that the "chronically undernourished Malayan" might not be able to survive as a result. "This measure is bound to result in an increase, not only of sickness but also of deaths, particularly amongst the mothers and very young children". Some people were fined for leaving their homes to use external latrines. In another collective punishment, at Sengei Pelek the following month, measures included a house curfew, a reduction of 40 percent in the rice ration, and the construction of a chain-link fence 22 yards outside the existing barbed wire fence around the town. Officials explained that the measures were being imposed upon the 4,000 villagers "for their continually supplying food" to the MNLA and "because they did not give information to the authorities". Deportations Over the course of the war, some 30,000 mostly ethnic Chinese were deported by the British authorities to mainland China. This would have been a war crime under Article 17(2) of Additional Protocol II to the Geneva Conventions, which states: "Civilians shall not be compelled to leave their own territory for reasons connected with the conflict." Public display of corpses During the Emergency it was common practice for British forces and their allies to publicly display the corpses of suspected communists and anti-colonial guerrillas. This was often done in the centres of towns and villages. Oftentimes British and Commonwealth troops would round up local children and force them to look at the corpses, monitoring their emotional reactions for clues as to whether they knew the dead. Many of the corpses publicly displayed by British forces belonged to guerrillas who had previously been allies of Britain during WWII. A notable victim of these public corpse displays was MNLA guerrilla leader Liew Kon Kim, whose corpse was publicly displayed in locations around British Malaya. At least two instances of public corpse displays by British forces in Malaya gained notable media attention in Britain, and were later dubbed "The Telok Anson Tragedy" and "The Kulim Tragedy". Headhunting and scalping During the war British and Commonwealth forces hired over 1,000 Iban (Dyak) mercenaries from Borneo to act as jungle trackers. With a tradition of headhunting, they decapitated suspected MNLA members; the authorities held that taking the heads was the only means of later identification. Iban headhunters were permitted by British military leaders to keep the scalps of corpses as trophies. After the headhunting had been exposed to the public, the Foreign Office first tried to deny the practice, then attempted to justify it while conducting damage control in the press. Privately, the Colonial Office noted that "there is no doubt that under international law a similar case in wartime would be a war crime". Skull fragments from a trophy head were later found to have been displayed in a British regimental museum. Headhunting exposed to British public In April 1952, the British communist newspaper the Daily Worker (later known as the Morning Star) published a photograph of British Royal Marines inside a British military base openly posing with severed human heads. By republishing these images the British communists had hoped to turn public opinion against the war. Initially, British government spokespersons from the Admiralty and the Colonial Office claimed the photograph was a fake.
In response to the accusations that their headhunting photograph was fake, the Daily Worker released another photograph taken in Malaya showing British soldiers posing with a severed head. Later the Colonial Secretary, Oliver Lyttelton, confirmed to Parliament that the Daily Worker headhunting photographs were indeed genuine. In response to the Daily Worker articles exposing the decapitation of MNLA suspects, the practice was banned by Winston Churchill, who feared that such photographs would expose British brutality. However, Churchill's order to discontinue the decapitations was widely ignored by Iban trackers, who continued to behead suspected guerrillas. Despite the shocking imagery of the photographs of soldiers posing with severed heads in Malaya, the Daily Worker was the only newspaper to publish them, and the photographs were virtually ignored by the mainstream British press. Comparisons with Vietnam This war had similarities with the First Indochina War in Vietnam: both the French and the British returned to re-establish their colonial rule after Japanese occupation; both granted a high degree of autonomy to their own indigenous states (Vietnam on 8 March 1949 and Malaya on 1 February 1948); both had to fight communist anti-colonial rebellions as part of ideological conflicts; the headquarters of the communists in both Vietnam and Malaya were in the jungle; both pro-colonial native states were granted full independence, within the French Union (4 June 1954) or the British Commonwealth (31 August 1957), at the end of the war; and both Vietnam and Malaysia had to continue to fight the communist side after independence. Differences This conflict and the Vietnam War (following the First Indochina War) have often been compared. However, the two conflicts differ in the following ways:
The MNLA never numbered more than about 8,000 full-time insurgents, but the People's Army of (North) Vietnam fielded a quarter of a million regular troops, in addition to roughly 100,000 National Liberation Front (or Viet Cong) partisans.
The Viet Cong were agents of the government of North Vietnam and could count on substantial support from their government; the MNLA had no such domestic state support.
North Korea, Cuba and the People's Republic of China (PRC) provided military hardware, logistical support, personnel and training to North Vietnam, whereas the MNLA received no material support, weapons or training from any foreign government.
North Vietnam's shared border with its ally China (PRC) allowed for continuous assistance and provided a safe haven for communist forces, but Malaya's only land border is with non-communist Thailand.
Britain did not approach the Emergency as a conventional conflict and quickly implemented an effective intelligence strategy, led by the Malayan Police Special Branch, and a systematic hearts-and-minds operation, both of which proved effective against the largely political aims of the guerrilla movement.
The British military recognised that in a low-intensity war, individual soldiers' skill and endurance were of far greater importance than overwhelming firepower (artillery, air support, etc.). Even though many British soldiers were conscripted National Servicemen, the necessary skills and attitudes were taught at a Jungle Warfare School, which also developed the optimum tactics based on experience gained in the field.
Vietnam was less ethnically fragmented than Malaya.
During the Emergency, most MNLA members were ethnically Chinese and drew support from sections of the Chinese community, whereas the more numerous indigenous Malays, many of whom were animated by anti-Chinese sentiment, largely remained loyal to the government and enlisted in high numbers in the security services. Similarities The United States' conduct in Vietnam was highly influenced by Britain's military strategies during the Malayan Emergency, and the two wars shared many similarities. Some examples are listed below.
Both countries used Agent Orange. Britain pioneered the use of Agent Orange as a weapon of war during the Malayan Emergency, a fact the United States used as justification for using Agent Orange in Vietnam.
Both the Royal Air Force and the United States Air Force used widespread saturation bombing.
Both countries frequently used internment camps. In Malaya, internment camps called "new villages" were built by the British colonial occupation to imprison approximately 400,000 rural peasants. The United States attempted to replicate the new villages with their Strategic Hamlet Program. However, the Strategic Hamlets were unsuccessful in segregating communist guerrillas from their civilian supporters.
Both countries made use of incendiary weapons, including flamethrowers and incendiary grenades.
Both the Malayan and Vietnamese communists recruited women as fighters due to their beliefs in gender equality. Women served as generals in both communist armies, with notable examples being Lee Meng in Malaya and Nguyễn Thị Định in Vietnam.
Both the Malayan and Vietnamese communists were led by veterans of WWII who had been trained by their future enemies. The British trained and funded the Malayan Peoples' Anti-Japanese Army, whose veterans would go on to resist the British colonial occupation, and the United States trained Vietnamese communists to fight against Japan during WWII.
Legacy The Indonesia–Malaysia confrontation of 1963–1966 arose from tensions between Indonesia and the new British-backed Federation of Malaysia that was conceived in the aftermath of the Malayan Emergency. In the late 1960s, the coverage of the My Lai massacre during the Vietnam War prompted the initiation of investigations in the UK concerning war crimes perpetrated by British forces during the Emergency, such as the Batang Kali massacre. No charges have yet been brought against the British forces involved, and the claims have been repeatedly dismissed by the British government as propaganda, despite evidence suggestive of a cover-up. Following the end of the Malayan Emergency in 1960, the predominantly ethnic Chinese Malayan National Liberation Army, the armed wing of the MCP, retreated to the Malaysia–Thailand border, where it regrouped and retrained for future offensives against the Malaysian government. A new phase of communist insurgency began in 1968. It was triggered when the MCP ambushed security forces in Kroh–Betong, in the northern part of Peninsular Malaysia, on 17 June 1968. The new conflict coincided with renewed tensions between ethnic Malays and Chinese following the 13 May incident of 1969, and the ongoing Vietnam War. Communist leader Chin Peng spent much of the 1990s and early 2000s working to promote his perspective of the Emergency. In a collaboration with Australian academics, he met with historians and former Commonwealth military personnel at a series of meetings which led to the publication of Dialogues with Chin Peng: New Light on the Malayan Communist Party.
Chin Peng also travelled to England and teamed up with conservative journalist Ian Ward and his wife Norma Miraflor to write his autobiography Alias Chin Peng: My Side of History. Many colonial documents, possibly relating to British atrocities in Malaya, were either destroyed or hidden by the British colonial authorities as part of Operation Legacy. Traces of these documents were rediscovered during a legal battle in 2011 involving the victims of rape and torture by the British military during the Mau Mau uprising. List of battles/incidents during the Malayan Emergency Assassination of Sir Henry Gurney Batang Kali massacre Battle of Semur River Bukit Kepong incident Labis incident Operation Termite Penang ambush Sungai Siput incident In popular culture In popular Malaysian culture, the Emergency has frequently been portrayed as a primarily Malay struggle against the communists. This perception has been criticised by some, such as Information Minister Zainuddin Maidin, for not recognising Chinese and Indian efforts. A number of films were set during the Emergency, including: The Planter's Wife (1952) Windom's Way (1957) The 7th Dawn (1964) The Virgin Soldiers (1969) Stand Up, Virgin Soldiers (1977) Bukit Kepong (1981) The Garden of Evening Mists (2019) Other media: Mona Brand's stage production Strangers in the Land (1952) was created as political commentary to criticise the occupation, depicting plantation owners as burning down villages and collecting the heads of murdered Malayans as trophies. The play was only performed in the UK at the tiny activist-run Unity Theatre because the British government had banned the play from commercial stages. The Malayan Trilogy series of novels (1956–1959) by Anthony Burgess is set during the Malayan Emergency. In The Sweeney episode "The Bigger They Are" (series 4, episode 8; 26 October 1978), the tycoon Leonard Gold is being blackmailed by Harold Collins, who has a photo of him present at a massacre of civilians in Malaya when he was in the British Army twenty-five years earlier. Throughout the series Porridge, there are references to Fletcher having served in Malaya, probably as a result of National Service. He regales his fellow inmates with stories of his time there, and in one episode it is revealed that Prison Officer Mackay had also served in Malaya. Pennyworth See also Batang Kali massacre Battle of Semur River Briggs Plan British Far East Command Bukit Kepong incident Chin Peng Cold War in Asia Communist insurgency in Malaysia (1968–89) Far East Strategic Reserve (FESR) History of Malaysia List of weapons in Malayan Emergency Malayan Peoples' Anti-Japanese Army New village Notes References Sources Further reading Newsinger, John (2016). British Counterinsurgency (Springer, 2016); compares British measures in Malaya, Palestine, Kenya, Cyprus, South Yemen, Dhofar, and Northern Ireland. Short, Anthony (1975). The Communist Insurrection in Malaya 1948–1960. London and New York: Frederick Muller. Reprinted (2000) as In Pursuit of Mountain Rats. Singapore. Sullivan, Michael D. "Leadership in Counterinsurgency: A Tale of Two Leaders" Military Review (Sep/Oct 2007) 897#5 pp 119–123. External links Australian War Memorial (Malayan Emergency 1950–1960) Far East Strategic Reserve Navy Association (Australia) Inc.
(Origins of the FESR – Navy) Malayan Emergency (AUS/NZ Overview) Britain's Small Wars (Malayan Emergency) PsyWar.Org (Psychological Operations during the Malayan Emergency) www.roll-of-honour.com (Searchable database of Commonwealth Soldiers who died) A personal account of flying the Bristol Brigand aircraft with 84 Squadron RAF during the Malayan Emergency – Terry Stringer The Malayan Emergency 1948 to 1960 Anzac Portal 1948 in military history Cold War conflicts Cold War rebellions Communism in Malaysia Communism in Singapore Communist rebellions History of the Royal Marines Insurgencies in Asia Rebellions against the British Empire Guerrilla wars Cold War history of Australia Civil wars in Malaysia British Empire Decolonization Military operations involving chemical weapons Wars involving Australia Wars involving pre-independence Malaysia Wars involving New Zealand Wars involving Rhodesia Wars of independence
The Orion Nebula (also known as Messier 42, M42, or NGC 1976) is a diffuse nebula situated in the Milky Way, south of Orion's Belt in the constellation of Orion, and is known as the middle "star" in the "sword" of Orion. It is one of the brightest nebulae and is visible to the naked eye in the night sky with an apparent magnitude of 4.0. It is away and is the closest region of massive star formation to Earth. The M42 nebula is estimated to be 25 light-years across (so its apparent size from Earth is approximately 1 degree). It has a mass of about 2,000 times that of the Sun. Older texts frequently refer to the Orion Nebula as the Great Nebula in Orion or the Great Orion Nebula. The Orion Nebula is one of the most scrutinized and photographed objects in the night sky and is among the most intensely studied celestial features. The nebula has revealed much about the process of how stars and planetary systems are formed from collapsing clouds of gas and dust. Astronomers have directly observed protoplanetary disks and brown dwarfs within the nebula, intense and turbulent motions of the gas, and the photo-ionizing effects of massive nearby stars in the nebula. Physical characteristics The Orion Nebula is visible with the naked eye even from areas affected by light pollution. It is seen as the middle "star" in the "sword" of Orion, which are the three stars located south of Orion's Belt. The "star" appears fuzzy to sharp-eyed observers, and the nebulosity is obvious through binoculars or a small telescope. The peak surface brightness of the central region of M42 is about 17 mag/arcsec² and the outer bluish glow has a peak surface brightness of 21.3 mag/arcsec². The Orion Nebula contains a very young open cluster, known as the Trapezium Cluster due to the asterism of its primary four stars within a diameter of 1.5 light years. Two of these can be resolved into their component binary systems on nights with good seeing, giving a total of six stars. The stars of the Trapezium Cluster, along with many other stars, are still in their early years. The Trapezium Cluster is a component of the much larger Orion Nebula cluster, an association of about 2,800 stars within a diameter of 20 light years. The Orion Nebula is in turn surrounded by the much larger Orion molecular cloud complex, which is hundreds of light years across, spanning the whole Orion constellation. Two million years ago the Orion Nebula cluster may have been the home of the runaway stars AE Aurigae, 53 Arietis, and Mu Columbae, which are currently moving away from the nebula at speeds greater than . Coloration Observers have long noted a distinctive greenish tint to the nebula, in addition to regions of red and of blue-violet. The red hue is a result of the Hα recombination line radiation at a wavelength of 656.3 nm. The blue-violet coloration is the reflected radiation from the massive O-class stars at the core of the nebula. The green hue was a puzzle for astronomers in the early part of the 20th century because none of the known spectral lines at that time could explain it. There was some speculation that the lines were caused by a new element, and the name nebulium was coined for this mysterious material. With better understanding of atomic physics, however, it was later determined that the green spectrum was caused by a low-probability electron transition in doubly ionized oxygen, a so-called "forbidden transition".
This radiation was impossible to reproduce in the laboratory at the time, because it depended on the quiescent and nearly collision-free environment found in the high vacuum of deep space. History There has been speculation that the Mayans of Central America may have described the nebula within their "Three Hearthstones" creation myth; if so, the three hearthstones would correspond to Rigel and Saiph, two stars at the base of Orion, and Alnitak at the tip of the "belt" of the imagined hunter: the vertices of a nearly perfect equilateral triangle, with Orion's Sword (including the Orion Nebula) in the middle of the triangle, seen either as the smudge of smoke from copal incense in a modern myth or, in one suggested translation of an ancient one, as the literal or figurative embers of a fiery creation. Neither Ptolemy's Almagest nor al Sufi's Book of Fixed Stars noted this nebula, even though they both listed patches of nebulosity elsewhere in the night sky; nor did Galileo mention it, even though he also made telescopic observations surrounding it in 1610 and 1617. This has led to some speculation that a flare-up of the illuminating stars may have increased the brightness of the nebula. The first discovery of the diffuse nebulous nature of the Orion Nebula is generally credited to French astronomer Nicolas-Claude Fabri de Peiresc, on November 26, 1610, when he made a record of observing it with a refracting telescope purchased by his patron Guillaume du Vair. The first published observation of the nebula was by the Jesuit mathematician and astronomer Johann Baptist Cysat of Lucerne in his 1619 monograph on the comets (describing observations of the nebula that may date back to 1611). He made comparisons between it and a bright comet seen in 1618, and described how the nebula appeared through his telescope. His description of the center stars as different from a comet's head, in that they formed a "rectangle", may have been an early description of the Trapezium Cluster. (The first detection of three of the four stars of this cluster is credited to Galileo Galilei on February 4, 1617.) The nebula was independently "discovered" (though visible to the naked eye) by several other prominent astronomers in the following years, including Giovanni Battista Hodierna (whose sketch was the first published, in De systemate orbis cometici, deque admirandis coeli characteribus). In 1659, Dutch scientist Christiaan Huygens published the first detailed drawing of the central region of the nebula in Systema Saturnium. Charles Messier observed the nebula on March 4, 1769, and he also noted three of the stars in the Trapezium. Messier published the first edition of his catalog of deep sky objects in 1774 (completed in 1771). As the Orion Nebula was the 42nd object in his list, it became identified as M42. In 1865, English amateur astronomer William Huggins used his visual spectroscopy method to examine the nebula, showing that it, like other nebulae he had examined, was made up of "luminous gas". On September 30, 1880, Henry Draper used the new dry plate photographic process with an 11-inch (28 cm) refracting telescope to make a 51-minute exposure of the Orion Nebula, the first instance of astrophotography of a nebula in history.
Another breakthrough in astronomical photography occurred in 1883, when amateur astronomer Andrew Ainslie Common used the dry plate process to record several images in exposures up to 60 minutes with a 36-inch (91 cm) reflecting telescope that he constructed in the backyard of his home in Ealing, west London. These images, for the first time, showed stars and nebula detail too faint to be seen by the human eye. In 1902, Vogel and Eberhard discovered differing velocities within the nebula, and by 1914 astronomers at Marseilles had used the interferometer to detect rotation and irregular motions. Campbell and Moore confirmed these results using the spectrograph, demonstrating turbulence within the nebula. In 1931, Robert J. Trumpler noted that the fainter stars near the Trapezium formed a cluster, and he was the first to name them the Trapezium cluster. Based on their magnitudes and spectral types, he derived a distance estimate of 1,800 light years. This was three times farther than the commonly accepted distance estimate of the period but was much closer to the modern value. In 1993, the Hubble Space Telescope first observed the Orion Nebula. Since then, the nebula has been a frequent target for HST studies. The images have been used to build a detailed model of the nebula in three dimensions. Protoplanetary disks have been observed around most of the newly formed stars in the nebula, and the destructive effects of high levels of ultraviolet energy from the most massive stars have been studied. In 2005, the Advanced Camera for Surveys instrument of the Hubble Space Telescope finished capturing the most detailed image of the nebula yet taken. The image was taken through 104 orbits of the telescope, capturing over 3,000 stars down to the 23rd magnitude, including infant brown dwarfs and possible brown dwarf binary stars. A year later, scientists working with the HST announced the first ever masses of a pair of eclipsing binary brown dwarfs, 2MASS J05352184–0546085. The pair are located in the Orion Nebula and have approximate masses of and respectively, with an orbital period of 9.8 days. Surprisingly, the more massive of the two also turned out to be the less luminous. In October 2023, astronomers, based on observations of the Orion Nebula with the James Webb Space Telescope, reported the discovery of pairs of rogue planets, similar in mass to the planet Jupiter, and called JuMBOs (short for Jupiter Mass Binary Objects). Structure The entirety of the Orion Nebula extends across a 1° region of the sky, and includes neutral clouds of gas and dust, associations of stars, ionized volumes of gas, and reflection nebulae. The Nebula is part of a much larger nebula that is known as the Orion molecular cloud complex. The Orion molecular cloud complex extends throughout the constellation of Orion and includes Barnard's Loop, the Horsehead Nebula, M43, M78, and the Flame Nebula. Stars are forming throughout the entire Cloud Complex, but most of the young stars are concentrated in dense clusters like the one illuminating the Orion Nebula. The current astronomical model for the nebula consists of an ionized (H II) region, roughly centered on Theta1 Orionis C, which lies on the side of an elongated molecular cloud in a cavity formed by the massive young stars. (Theta1 Orionis C emits 3-4 times as much photoionizing light as the next brightest star, Theta2 Orionis A.) The H II region has a temperature ranging up to 10,000 K, but this temperature falls dramatically near the edge of the nebula. 
The nebulous emission comes primarily from photoionized gas on the back surface of the cavity. The H II region is surrounded by an irregular, concave bay of more neutral, high-density cloud, with clumps of neutral gas lying outside the bay area. This in turn lies on the perimeter of the Orion Molecular Cloud. The gas in the molecular cloud displays a range of velocities and turbulence, particularly around the core region. Relative movements are up to 10 km/s (22,000 mi/h), with local variations of up to 50 km/s and possibly more. Observers have given names to various features in the Orion Nebula. The dark bay that extends from the north into the bright region is known as "Sinus Magnus", also called the "Fish's Mouth". The illuminated regions to both sides are called the "Wings". Other features include "The Sword", "The Thrust", and "The Sail". Star formation The Orion Nebula is an example of a stellar nursery where new stars are being born. Observations of the nebula have revealed approximately 700 stars in various stages of formation within the nebula. In 1979 observations with the Lallemand electronic camera at the Pic-du-Midi Observatory showed six unresolved high-ionization sources near the Trapezium Cluster. These sources were interpreted as partly ionized globules (PIGs). The idea was that these objects are being ionized from the outside by M42. Later observations with the Very Large Array showed solar-system-sized condensations associated with these sources. This led to the idea that these objects might be low-mass stars surrounded by evaporating protostellar accretion disks. In 1993, observations with the Hubble Space Telescope yielded major confirmation of protoplanetary disks within the Orion Nebula, which have been dubbed proplyds. HST has revealed more than 150 of these within the nebula, and they are considered to be systems in the earliest stages of solar system formation. Their sheer numbers have been used as evidence that the formation of planetary systems is fairly common in the universe. Stars form when clumps of hydrogen and other gases in an H II region contract under their own gravity. As the gas collapses, the central clump grows denser and the gas heats to extreme temperatures by converting gravitational potential energy to thermal energy. If the temperature gets high enough, nuclear fusion will ignite and form a protostar. The protostar is 'born' when it begins to emit enough radiative energy to balance out its gravity and halt gravitational collapse. Typically, a cloud of material remains a substantial distance from the star before the fusion reaction ignites. This remnant cloud is the protostar's protoplanetary disk, where planets may form. Recent infrared observations show that dust grains in these protoplanetary disks are growing, beginning on the path towards forming planetesimals. Once the protostar enters its main-sequence phase, it is classified as a star. Even though most planetary disks can form planets, observations show that intense stellar radiation should have destroyed any proplyds that formed near the Trapezium group, if the group is as old as the low mass stars in the cluster. Since proplyds are found very close to the Trapezium group, it can be argued that those stars are much younger than the rest of the cluster members. Stellar wind and effects Once formed, the stars within the nebula emit a stream of charged particles known as a stellar wind. Massive stars and young stars have much stronger stellar winds than the Sun.
The wind forms shock waves or hydrodynamical instabilities when it encounters the gas in the nebula, which then shapes the gas clouds. The shock waves from stellar wind also play a large part in stellar formation by compacting the gas clouds, creating density inhomogeneities that lead to gravitational collapse of the cloud. There are three different kinds of shocks in the Orion Nebula. Many are featured in Herbig–Haro objects:
Bow shocks are stationary and are formed when two particle streams collide with each other. They are present near the hottest stars in the nebula, where the stellar wind speed is estimated to be thousands of kilometers per second, and in the outer parts of the nebula, where the speeds are tens of kilometers per second. Bow shocks can also form at the front end of stellar jets when the jet hits interstellar particles.
Jet-driven shocks are formed from jets of material sprouting off newborn T Tauri stars. These narrow streams are traveling at hundreds of kilometers per second, and become shocks when they encounter relatively stationary gases.
Warped shocks appear bow-like to an observer. They are produced when a jet-driven shock encounters gas moving in a cross-current.
The interaction of the stellar wind with the surrounding cloud also forms "waves" which are believed to be due to the hydrodynamical Kelvin-Helmholtz instability. The dynamic gas motions in M42 are complex, but are trending out through the opening in the bay and toward the Earth. The large neutral area behind the ionized region is currently contracting under its own gravity. There are also supersonic "bullets" of gas piercing the hydrogen clouds of the Orion Nebula. Each bullet is ten times the diameter of Pluto's orbit and tipped with iron atoms glowing in the infrared. They were probably formed one thousand years earlier from an unknown violent event. Evolution Interstellar clouds like the Orion Nebula are found throughout galaxies such as the Milky Way. They begin as gravitationally bound blobs of cold, neutral hydrogen, intermixed with traces of other elements. The cloud can contain hundreds of thousands of solar masses and extend for hundreds of light years. The tiny force of gravity that could compel the cloud to collapse is counterbalanced by the very faint pressure of the gas in the cloud. Whether due to collisions with a spiral arm, or through the shock wave emitted from supernovae, the atoms are precipitated into heavier molecules and the result is a molecular cloud. This presages the formation of stars within the cloud, usually thought to be within a period of 10–30 million years, as regions pass the Jeans mass and the destabilized volumes collapse into disks. The disk concentrates at the core to form a star, which may be surrounded by a protoplanetary disk. This is the current stage of evolution of the nebula, with additional stars still forming from the collapsing molecular cloud. The youngest and brightest stars we now see in the Orion Nebula are thought to be less than 300,000 years old, and the brightest may be only 10,000 years in age. Some of these collapsing stars can be particularly massive, and can emit large quantities of ionizing ultraviolet radiation. An example of this is seen with the Trapezium cluster. Over time the ultraviolet light from the massive stars at the center of the nebula will push away the surrounding gas and dust in a process called photoevaporation.
This process is responsible for creating the interior cavity of the nebula, allowing the stars at the core to be viewed from Earth. The largest of these stars have short life spans and will evolve to become supernovae. Within about 100,000 years, most of the gas and dust will be ejected. The remains will form a young open cluster, a cluster of bright, young stars surrounded by wispy filaments from the former cloud. See also Barnard's Loop Kleinmann–Low Nebula Flame Nebula (NGC 2024) Horsehead Nebula Hubble 3D (2010), an IMAX film with an elaborate CGI fly-through of the Orion Nebula List of diffuse nebulae List of Messier objects Messier 43, which is part of the Orion Nebula Messier 78, a reflection nebula New General Catalogue Orion correlation theory Orion molecular cloud complex Orion OB1 Notes 1,344 × tan(65 arcmin / 2) = 12.7 ly radius (a short numerical check of this appears just after this article's end matter) From temperate zones in the Northern Hemisphere, the nebula appears below the Belt of Orion; from temperate zones in the Southern Hemisphere the nebula appears above the Belt. C. Robert O'Dell commented about this Wikipedia article, "The only egregious error is the last sentence in the Stellar Formation section. It should actually read 'Even though most planetary disks can form planets, observations show that intense stellar radiation should have destroyed any proplyds that formed near the Trapezium group, if the group is as old as the low mass stars in the cluster. Since proplyds are found very close to the Trapezium group, it can be argued that those stars are much younger than the rest of the cluster members.'" References External links Orion Nebula photographs taken by Andrew Ainslie Common in 1883, part of the London Science Museum's collection, University of South Wales Orion Nebula observed by Chandra/HST Orion Nebula observed by Gemini Observatory Orion Nebula at ESA/Hubble Messier 42, SEDS Messier pages and specifically NGC 1976. January 2006 Hubble Space Telescope image of the Orion Nebula January 2006 Hubble Space Telescope image of the Trapezium cluster Orion Nebula M42, Hubble Images Remarkable new views captured of Orion Nebula, SpaceFlight Now, 2001. NightSkyInfo.com – The Great Orion Nebula Astronomy Picture of the Day Spitzer's Orion 2010 April 10 Planetary Systems Now Forming in Orion 2009 December 22 Great Orion Nebulae 2008 October 23 ESO: Hidden Secrets of Orion's Clouds incl. Photos and animations Messier objects H II regions NGC objects Orion molecular cloud complex Orion (constellation) Orion–Cygnus Arm Diffuse nebulae Astronomical objects known since antiquity Articles containing video clips Star-forming regions
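The 12.7 light-year radius in the notes above is just the small-angle relation applied to the quoted distance (1,344 light-years) and apparent diameter (about 65 arcminutes). A quick check in Python, included here only as an illustration of the arithmetic:

import math

# Figures quoted in the article's notes.
distance_ly = 1344.0               # distance to the nebula, in light-years
apparent_diameter_arcmin = 65.0    # apparent size on the sky

# The physical radius subtends half the apparent diameter at that distance.
half_angle = math.radians(apparent_diameter_arcmin / 60.0 / 2.0)
radius_ly = distance_ly * math.tan(half_angle)

print(round(radius_ly, 1))    # 12.7, the radius given in the notes
print(round(2 * radius_ly))   # 25, the diameter quoted in the opening text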
VoiceXML (VXML) is a digital document standard for specifying interactive media and voice dialogs between humans and computers. It is used for developing audio and voice response applications, such as banking systems and automated customer service portals. VoiceXML applications are developed and deployed in a manner analogous to how a web browser interprets and visually renders the Hypertext Markup Language (HTML) it receives from a web server. VoiceXML documents are interpreted by a voice browser, and in common deployment architectures, users interact with voice browsers via the public switched telephone network (PSTN). The VoiceXML document format is based on Extensible Markup Language (XML). It is a standard developed by the World Wide Web Consortium (W3C). Usage VoiceXML applications are commonly used in many industries and segments of commerce. These applications include order inquiry, package tracking, driving directions, emergency notification, wake-up, flight tracking, voice access to email, customer relationship management, prescription refilling, audio news magazines, voice dialing, real-estate information and national directory assistance applications. VoiceXML has tags that instruct the voice browser to provide speech synthesis, automatic speech recognition, dialog management, and audio playback. The following is an example of a VoiceXML document:

<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <form>
    <block>
      <prompt>
        Hello world!
      </prompt>
    </block>
  </form>
</vxml>

When interpreted by a VoiceXML interpreter this will output "Hello world" with synthesized speech. Typically, HTTP is used as the transport protocol for fetching VoiceXML pages. Some applications may use static VoiceXML pages, while others rely on dynamic VoiceXML page generation using an application server like Tomcat, WebLogic, IIS, or WebSphere (a minimal sketch of this approach appears below). Historically, VoiceXML platform vendors have implemented the standard in different ways and added proprietary features, but the VoiceXML 2.0 standard, adopted as a W3C Recommendation on 16 March 2004, clarified most areas of difference. The VoiceXML Forum, an industry group promoting the use of the standard, provides a conformance testing process that certifies vendors' implementations as conformant. History AT&T Corporation, IBM, Lucent, and Motorola formed the VoiceXML Forum in March 1999 in order to develop a standard markup language for specifying voice dialogs. By September 1999 the Forum released VoiceXML 0.9 for member comment, and in March 2000 it published VoiceXML 1.0. Soon afterwards, the Forum turned over control of the standard to the W3C. The W3C produced several intermediate versions of VoiceXML 2.0, which reached the final "Recommendation" stage in March 2004. VoiceXML 2.1 added a relatively small set of additional features to VoiceXML 2.0, based on feedback from implementations of the 2.0 standard. It is backward compatible with VoiceXML 2.0 and reached W3C Recommendation status in June 2007. Future versions of the standard VoiceXML 3.0 was slated to be the next major release of VoiceXML, with major new features. However, with the disbanding of the VoiceXML Forum in May 2022, development of the new standard was scrapped.
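Dynamic page generation, mentioned above, amounts to an HTTP endpoint that returns VoiceXML instead of HTML. The following Python sketch, using only the standard library, illustrates the idea; the port and the fixed hello-world document are illustrative choices, not details of any particular product:

from http.server import BaseHTTPRequestHandler, HTTPServer

# The same hello-world document shown earlier, served over HTTP the way
# an application server would deliver it to a voice browser.
VXML = """<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <form>
    <block>
      <prompt>Hello world!</prompt>
    </block>
  </form>
</vxml>"""

class VoiceXMLHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = VXML.encode("utf-8")
        self.send_response(200)
        # application/voicexml+xml is the registered media type for VoiceXML.
        self.send_header("Content-Type", "application/voicexml+xml")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # A voice browser pointed at http://localhost:8080/ would fetch this
    # document and speak the prompt with synthesized speech.
    HTTPServer(("", 8080), VoiceXMLHandler).serve_forever()

A real application would build the markup per caller, for example by inserting account details fetched from a backend, rather than returning a fixed document.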
Implementations As of December 2022, there are few VoiceXML 2.0/2.1 platform implementations being offered. These include:
Hewlett-Packard (OCMP)
OnMobile (Ozone Speech Platform)
Alvaria
Avaya (Avaya Experience Portal)
OpenVXI
Cisco
Genesys
Nuance Communications
Phonologies
Plum Voice
Telesoft Technologies
Related standards The W3C's Speech Interface Framework also defines these other standards closely associated with VoiceXML. SRGS and SISR The Speech Recognition Grammar Specification (SRGS) is used to tell the speech recognizer what sentence patterns it should expect to hear: these patterns are called grammars. Once the speech recognizer determines the most likely sentence it heard, it needs to extract the semantic meaning from that sentence and return it to the VoiceXML interpreter. This semantic interpretation is specified via the Semantic Interpretation for Speech Recognition (SISR) standard. SISR is used inside SRGS to specify the semantic results associated with the grammars, i.e., the set of ECMAScript assignments that create the semantic structure returned by the speech recognizer. SSML The Speech Synthesis Markup Language (SSML) is used to decorate textual prompts with information on how best to render them in synthetic speech, for example which speech synthesizer voice to use or when to speak louder or softer. PLS The Pronunciation Lexicon Specification (PLS) is used to define how words are pronounced. The generated pronunciation information is meant to be used by both speech recognizers and speech synthesizers in voice browsing applications. CCXML The Call Control eXtensible Markup Language (CCXML) is a complementary W3C standard. A CCXML interpreter is used on some VoiceXML platforms to handle the initial call setup between the caller and the voice browser, and to provide telephony services like call transfer and disconnect to the voice browser. CCXML can also be used in non-VoiceXML contexts. MSML, MSCML, MediaCTRL In media server applications, it is often necessary for several call legs to interact with each other, for example in a multi-party conference. Some deficiencies were identified in VoiceXML for this application, and so companies designed specific scripting languages to deal with this environment. The Media Server Markup Language (MSML) was Convedia's solution, and the Media Server Control Markup Language (MSCML) was Snowshore's solution. Snowshore is now owned by Dialogic and Convedia by Radisys. These languages also contain 'hooks' so that external scripts (like VoiceXML) can run on call legs where IVR functionality is required. There was an IETF working group called mediactrl ("media control") that worked on a successor to these scripting systems, which it was hoped would progress to an open and widely adopted standard. The mediactrl working group concluded in 2013. See also ECMAScript – the scripting language used in VoiceXML OpenVXI – an open source VoiceXML interpreter library SCXML – State Chart XML References External links W3C's Voice Browser Working Group, Official VoiceXML Standards VoiceXML Forum, VoiceXML Trademark Holder VoiceXML tutorials World Wide Web Consortium standards XML-based standards Markup languages Speech synthesis XML-based programming languages VoIP protocols 2000 software
Photomultiplier tubes (photomultipliers or PMTs for short) are extremely sensitive detectors of light in the ultraviolet, visible, and near-infrared ranges of the electromagnetic spectrum. They are members of the class of vacuum tubes, more specifically vacuum phototubes. These detectors multiply the current produced by incident light by as much as 100 million times, or 10⁸ (i.e., 160 dB), in multiple dynode stages, enabling (for example) individual photons to be detected when the incident flux of light is low. The combination of high gain, low noise, high frequency response (or, equivalently, ultra-fast response) and large area of collection has maintained an essential place for photomultipliers in low light level spectroscopy, confocal microscopy, Raman spectroscopy, fluorescence spectroscopy, nuclear and particle physics, astronomy, medical diagnostics including blood tests, medical imaging, motion picture film scanning (telecine), radar jamming, and high-end image scanners known as drum scanners. Elements of photomultiplier technology, when integrated differently, are the basis of night vision devices. Research that analyzes light scattering, such as the study of polymers in solution, often uses a laser and a PMT to collect the scattered light data. Semiconductor devices, particularly silicon photomultipliers and avalanche photodiodes, are alternatives to classical photomultipliers; however, photomultipliers are uniquely well-suited for applications requiring low-noise, high-sensitivity detection of light that is imperfectly collimated. Structure and operating principles Photomultipliers are typically constructed with an evacuated glass housing (using an extremely tight and durable glass-to-metal seal like other vacuum tubes), containing a photocathode, several dynodes, and an anode. Incident photons strike the photocathode material, which is usually a thin vapor-deposited conducting layer on the inside of the entry window of the device. Electrons are ejected from the surface as a consequence of the photoelectric effect. These electrons are directed by the focusing electrode toward the electron multiplier, where electrons are multiplied by the process of secondary emission. The electron multiplier consists of a number of electrodes called dynodes. Each dynode is held at a more positive potential, by ≈100 volts, than the preceding one. A primary electron leaves the photocathode with the energy of the incoming photon, or about 3 eV for "blue" photons, minus the work function of the photocathode. A small group of primary electrons is created by the arrival of a group of initial photons. (In Fig. 1, the number of primary electrons in the initial group is proportional to the energy of the incident high energy gamma ray.) The primary electrons move toward the first dynode because they are accelerated by the electric field. They each arrive with ≈100 eV kinetic energy imparted by the potential difference. Upon striking the first dynode, more low energy electrons are emitted, and these electrons are in turn accelerated toward the second dynode. The geometry of the dynode chain is such that a cascade occurs, with an exponentially increasing number of electrons being produced at each stage. For example, if at each stage an average of 5 new electrons are produced for each incoming electron, and if there are 12 dynode stages, then at the last stage one expects for each primary electron about 5¹² ≈ 10⁸ electrons. This last stage is called the anode.
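The arithmetic of the cascade is easy to check. A short Python sketch using the example figures from the text (a per-dynode yield of 5 over 12 stages; these are illustrative values, not the parameters of any particular tube):

import math

# Example values from the text: ~5 secondary electrons per incident
# electron at each dynode, compounded over 12 dynode stages.
secondary_yield = 5
stages = 12

gain = secondary_yield ** stages
print(f"{gain:,}")   # 244,140,625, i.e. on the order of 10^8

# Expressed as a current (amplitude) ratio in decibels: 20*log10(gain).
# A gain of exactly 10^8 gives the 160 dB quoted above.
print(f"{20 * math.log10(gain):.0f} dB")   # about 168 dB for 5**12

# Energy of a "blue" photon (~450 nm), E = hc/lambda, in electronvolts;
# this is the text's "about 3 eV" from which the photocathode work
# function is subtracted.
h, c, eV = 6.626e-34, 2.998e8, 1.602e-19
print(f"{h * c / 450e-9 / eV:.1f} eV")     # about 2.8 eV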
This large number of electrons reaching the anode results in a sharp current pulse that is easily detectable, for example on an oscilloscope, signaling the arrival of the photon(s) at the photocathode ≈50 nanoseconds earlier. The necessary distribution of voltage along the series of dynodes is created by a voltage-divider chain, as illustrated in Fig. 2. In the example, the photocathode is held at a negative high voltage on the order of 1000 V, while the anode is very close to ground potential. The capacitors across the final few dynodes act as local reservoirs of charge to help maintain the voltage on the dynodes while electron avalanches propagate through the tube. Many variations of design are used in practice; the design shown is merely illustrative. There are two common photomultiplier orientations: the head-on or end-on (transmission-mode) design, as shown above, where light enters the flat, circular top of the tube and passes through the photocathode; and the side-on design (reflection mode), where light enters at a particular spot on the side of the tube and impacts on an opaque photocathode. The side-on design is used, for instance, in the type 931, the first mass-produced PMT. Besides the different photocathode materials, performance is also affected by the transmission of the window material that the light passes through and by the arrangement of the dynodes. Many photomultiplier models are available with various combinations of these, and other, design variables. The manufacturers' manuals provide the information needed to choose an appropriate design for a particular application. History The invention of the photomultiplier is predicated upon two prior achievements: the separate discoveries of the photoelectric effect and of secondary emission. Photoelectric effect The first demonstration of the photoelectric effect was carried out in 1887 by Heinrich Hertz using ultraviolet light. Significantly for practical applications, Elster and Geitel two years later demonstrated the same effect using visible light striking alkali metals (potassium and sodium). The addition of caesium, another alkali metal, later permitted the range of sensitive wavelengths to be extended towards longer wavelengths in the red portion of the visible spectrum. Historically, the photoelectric effect is associated with Albert Einstein, who relied upon the phenomenon to establish a fundamental principle of quantum mechanics in 1905, an accomplishment for which Einstein received the 1921 Nobel Prize. It is worthwhile to note that Hertz, working 18 years earlier, had not recognized that the kinetic energy of the emitted electrons increases with the frequency of the light but is independent of the optical intensity. It was this fact that first implied the discrete nature of light, i.e. the existence of quanta. Secondary emission The phenomenon of secondary emission (the ability of electrons in a vacuum tube to cause the emission of additional electrons by striking an electrode) was, at first, limited to purely electronic phenomena and devices (which lacked photosensitivity). The effect was first reported by Villard in 1899. In 1902, Austin and Starke reported that metal surfaces impacted by electron beams emitted a larger number of electrons than were incident. The application of the newly discovered secondary emission to the amplification of signals was first proposed after World War I by Westinghouse scientist Joseph Slepian in a 1919 patent. 
The race towards a practical electronic television camera The ingredients for inventing the photomultiplier were coming together during the 1920s as the pace of vacuum tube technology accelerated. For many, if not most, workers the primary goal was a practical television camera technology. Television had been pursued with primitive prototypes for decades prior to the 1934 introduction of the first practical video camera (the iconoscope). Early prototype television cameras lacked sensitivity. Photomultiplier technology was pursued to enable television camera tubes, such as the iconoscope and (later) the orthicon, to be sensitive enough to be practical. So the stage was set to combine the dual phenomena of photoemission (i.e., the photoelectric effect) with secondary emission, both of which had already been studied and adequately understood, to create a practical photomultiplier. First photomultiplier, single-stage (early 1934) The first documented photomultiplier demonstration dates to the early 1934 accomplishments of an RCA group based in Harrison, NJ. Harley Iams and Bernard Salzberg were the first to integrate a photoelectric-effect cathode and a single secondary-emission amplification stage in one vacuum envelope, and the first to characterize its performance as a photomultiplier with electron amplification gain. These accomplishments were finalized prior to June 1934, as detailed in the manuscript submitted to Proceedings of the Institute of Radio Engineers (Proc. IRE). The device consisted of a semi-cylindrical photocathode, a secondary emitter mounted on the axis, and a collector grid surrounding the secondary emitter. The tube had a gain of about eight and operated at frequencies well above 10 kHz. Magnetic photomultipliers (mid 1934–1937) Higher gains were sought than those available from the early single-stage photomultipliers. However, it is an empirical fact that the yield of secondary electrons is limited in any given secondary emission process, regardless of acceleration voltage. Thus, any single-stage photomultiplier is limited in gain. At the time the maximum first-stage gain that could be achieved was approximately 10 (very significant developments in the 1960s permitted gains above 25 to be reached using negative-electron-affinity dynodes). For this reason, multiple-stage photomultipliers, in which the photoelectron yield could be multiplied successively in several stages, were an important goal. The challenge was to cause the photoelectrons to impinge on successively higher-voltage electrodes rather than to travel directly to the highest-voltage electrode. Initially this challenge was overcome by using strong magnetic fields to bend the electrons' trajectories. Such a scheme had earlier been conceived by inventor J. Slepian by 1919 (see above). Accordingly, leading international research organizations turned their attention towards improving photomultipliers to achieve higher gain with multiple stages. In the USSR, RCA-manufactured radio equipment was introduced on a large scale by Joseph Stalin to construct broadcast networks, and the newly formed All-Union Scientific Research Institute for Television was gearing up a research program in vacuum tubes that was advanced for its time and place. Numerous visits were made by RCA scientific personnel to the USSR in the 1930s, prior to the Cold War, to instruct Soviet customers on the capabilities of RCA equipment and to investigate customer needs. 
During one of these visits, in September 1934, RCA's Vladimir Zworykin was shown the first multiple-dynode photomultiplier, or photoelectron multiplier. This pioneering device had been proposed by Leonid A. Kubetsky in 1930, and he subsequently built it in 1934. The device achieved gains of 1000x or more when demonstrated in June 1934. The work was submitted for print publication only two years later, in July 1936, as emphasized in a 2006 publication of the Russian Academy of Sciences (RAS), which terms it "Kubetsky's Tube". The Soviet device used a magnetic field to confine the secondary electrons and relied on the Ag-O-Cs photocathode which had been demonstrated by General Electric in the 1920s. By October 1935, Vladimir Zworykin, George Ashmun Morton, and Louis Malter of RCA in Camden, NJ had submitted their manuscript describing the first comprehensive experimental and theoretical analysis of a multiple-dynode tube — the device later called a photomultiplier — to Proc. IRE. The RCA prototype photomultipliers also used an Ag-O-Cs (silver oxide-caesium) photocathode. They exhibited a peak quantum efficiency of 0.4% at 800 nm. Electrostatic photomultipliers (1937–present) Whereas these early photomultipliers used the magnetic-field principle, electrostatic photomultipliers (with no magnetic field) were demonstrated by Jan Rajchman of RCA Laboratories in Princeton, NJ in the late 1930s and became the standard for all future commercial photomultipliers. The first mass-produced photomultiplier, the Type 931, was of this design and is still commercially produced today. Improved photocathodes Also in 1936, a much improved photocathode, Cs3Sb (caesium-antimony), was reported by P. Görlich. The caesium-antimony photocathode had a dramatically improved quantum efficiency of 12% at 400 nm, and was used in the first commercially successful photomultipliers manufactured by RCA (i.e., the 931-type), both as a photocathode and as a secondary-emitting material for the dynodes. Different photocathodes provided differing spectral responses. Spectral response of photocathodes In the early 1940s, JEDEC (the Joint Electron Device Engineering Council), an industry committee on standardization, developed a system of designating spectral responses. The philosophy included the idea that the product's user need only be concerned about the response of the device rather than how the device may be fabricated. Various combinations of photocathode and window materials were assigned "S-numbers" (spectral numbers) ranging from S-1 through S-40, which are still in use today. For example, S-11 uses the caesium-antimony photocathode with a lime-glass window, S-13 uses the same photocathode with a fused-silica window, and S-25 uses a so-called "multialkali" photocathode (Na-K-Sb-Cs, or sodium-potassium-antimony-caesium) that provides extended response in the red portion of the visible light spectrum. No suitable photoemissive surfaces have yet been reported to detect wavelengths longer than approximately 1700 nanometers, which can be approached by a special (InP/InGaAs(Cs)) photocathode. RCA Corporation For decades, RCA was responsible for performing the most important work in developing and refining photomultipliers. RCA was also largely responsible for the commercialization of photomultipliers. The company compiled and published an authoritative and widely used Photomultiplier Handbook. RCA provided printed copies free upon request. 
The handbook, which continues to be made available online at no cost by the successors to RCA, is considered an essential reference. Following a corporate break-up in the late 1980s, involving the acquisition of RCA by General Electric and the disposition of RCA's divisions to numerous third parties, RCA's photomultiplier business became an independent company. Lancaster, Pennsylvania facility The Lancaster, Pennsylvania facility was opened by the U.S. Navy in 1942 and operated by RCA for the manufacture of radio and microwave tubes. Following World War II, the naval facility was acquired by RCA. RCA Lancaster, as it became known, was the base for the development and production of commercial television products. In subsequent years other products were added, such as cathode-ray tubes, photomultiplier tubes, motion-sensing light control switches, and closed-circuit television systems. Burle Industries Burle Industries, as a successor to the RCA Corporation, carried the RCA photomultiplier business forward after 1986, based in the Lancaster, Pennsylvania facility. The 1986 acquisition of RCA by General Electric resulted in the divestiture of the RCA Lancaster New Products Division. Thus, 45 years after the facility was founded by the U.S. Navy, its management team, led by Erich Burlefinger, purchased the division and in 1987 founded Burle Industries. In 2005, after eighteen years as an independent enterprise, Burle Industries and a key subsidiary were acquired by the Photonis Group, a European holding company. Following the acquisition, Photonis was composed of Photonis Netherlands, Photonis France, Photonis USA, and Burle Industries. Photonis USA operates the former Galileo Corporation Scientific Detector Products Group (Sturbridge, Massachusetts), which had been purchased by Burle Industries in 1999. The group is known for microchannel plate (MCP) electron multipliers—an integrated micro-vacuum-tube version of photomultipliers. MCPs are used for imaging and scientific applications, including night vision devices. On 9 March 2009, Photonis announced that it would cease all production of photomultipliers at both the Lancaster, Pennsylvania and the Brive, France plants. Hamamatsu The Japan-based company Hamamatsu Photonics (also known as Hamamatsu) has emerged since the 1950s as a leader in the photomultiplier industry. Hamamatsu, in the tradition of RCA, has published its own handbook, which is available without cost on the company's website. Hamamatsu uses different designations for particular photocathode formulations and introduces modifications to these designations based on its proprietary research and development. Photocathode materials The photocathodes can be made of a variety of materials with different properties. Typically the materials have a low work function and are therefore prone to thermionic emission, causing noise and dark current, especially in the materials sensitive in the infrared; cooling the photocathode lowers this thermal noise. The most common photocathode materials are:
Ag-O-Cs (also called S1): transmission-mode; sensitive from 300–1200 nm; high dark current; used mainly in the near-infrared, with the photocathode cooled.
GaAs:Cs (caesium-activated gallium arsenide): flat response from 300 to 850 nm, fading towards the ultraviolet and towards 930 nm.
InGaAs:Cs (caesium-activated indium gallium arsenide): higher infrared sensitivity than GaAs:Cs; between 900 and 1000 nm, a much higher signal-to-noise ratio than Ag-O-Cs.
Sb-Cs (caesium-activated antimony, also called S11): used for reflective-mode photocathodes; response range from ultraviolet to visible; widely used.
Bialkali (Sb-K-Cs, Sb-Rb-Cs; caesium-activated antimony-potassium or antimony-rubidium alloy): similar to Sb-Cs, with higher sensitivity and lower noise; can be used in transmission mode; a favorable response to NaI:Tl scintillator flashes makes them widely used in gamma spectroscopy and radiation detection.
High-temperature bialkali (Na-K-Sb): can operate up to 175 °C; used in well logging; low dark current at room temperature.
Multialkali (Na-K-Sb-Cs, also called S20): wide spectral response from the ultraviolet to the near-infrared; special cathode processing can extend the range to 930 nm; used in broadband spectrophotometers.
Solar-blind (Cs-Te, Cs-I): sensitive to vacuum-UV and ultraviolet; insensitive to visible light and infrared (Cs-Te has a cutoff at 320 nm, Cs-I at 200 nm).
Window materials The windows of photomultipliers act as wavelength filters; this may be irrelevant if the cutoff wavelengths are outside the application range or outside the photocathode sensitivity range, but special care has to be taken for uncommon wavelengths. Common window materials are:
Borosilicate glass: commonly used from the near-infrared down to about 300 nm. High-borate borosilicate glasses also exist in high-UV-transmission versions with high transmission at 254 nm. Glass with a very low content of potassium can be used with bialkali photocathodes to lower the background radiation from the potassium-40 isotope.
Ultraviolet glass: transmits visible and ultraviolet light down to 185 nm; used in spectroscopy.
Synthetic silica: transmits down to 160 nm and absorbs less UV than fused silica. Because its thermal expansion differs from that of kovar (and from borosilicate glass, which is expansion-matched to kovar), a graded seal is needed between the window and the rest of the tube; the seal is vulnerable to mechanical shocks.
Magnesium fluoride: transmits ultraviolet down to 115 nm; hygroscopic, though less so than other alkali halides usable for UV windows.
Usage considerations Photomultiplier tubes typically utilize 1000 to 2000 volts to accelerate electrons within the chain of dynodes. (See the figure near the top of the article.) The most negative voltage is connected to the cathode, and the most positive voltage is connected to the anode. Negative high-voltage supplies (with the positive terminal grounded) are often preferred, because this configuration enables the photocurrent to be measured at the low-voltage side of the circuit for amplification by subsequent electronic circuits operating at low voltage. However, with the photocathode at high voltage, leakage currents sometimes result in unwanted "dark current" pulses that may affect the operation. Voltages are distributed to the dynodes by a resistive voltage divider, although variations such as active designs (with transistors or diodes) are possible. The divider design, which influences frequency response or rise time, can be selected to suit varying applications. Some instruments that use photomultipliers have provisions to vary the anode voltage to control the gain of the system. 
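As a rough illustration of the biasing scheme just described, the sketch below (Python) computes the dynode potentials for a negative-supply configuration. It assumes an equal-ratio resistive divider and an arbitrary count of nine dynodes; real divider chains may be tapered toward the final stages, so the equal steps here are purely illustrative:

```python
# Dynode potentials for a PMT run from a negative high-voltage supply:
# the cathode sits at -V_supply, the anode near ground, and an
# equal-ratio resistor chain divides the difference evenly.
def dynode_potentials(v_supply: float, n_dynodes: int) -> list[float]:
    n_gaps = n_dynodes + 1                  # cathode->dyn1, ..., dynN->anode
    step = v_supply / n_gaps                # volts dropped per divider stage
    return [-v_supply + step * (i + 1) for i in range(n_dynodes)]

for i, v in enumerate(dynode_potentials(1000.0, 9), start=1):
    print(f"dynode {i}: {v:+.0f} V")        # -900 V, -800 V, ..., -100 V
```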
While powered (energized), photomultipliers must be shielded from ambient light to prevent their destruction through overexcitation. In some applications this protection is accomplished mechanically by electrical interlocks or shutters that protect the tube when the photomultiplier compartment is opened. Another option is to add overcurrent protection in the external circuit, so that when the measured anode current exceeds a safe limit, the high voltage is reduced. If used in a location with strong magnetic fields, which can curve electron paths, steering the electrons away from the dynodes and causing loss of gain, photomultipliers are usually magnetically shielded by a layer of soft iron or mu-metal. This magnetic shield is often maintained at cathode potential. When this is the case, the external shield must also be electrically insulated because of the high voltage on it. Photomultipliers with large distances between the photocathode and the first dynode are especially sensitive to magnetic fields. Applications Photomultipliers were the first electric-eye devices, being used to measure interruptions in beams of light. Photomultipliers are used in conjunction with scintillators to detect ionizing radiation by means of hand-held and fixed radiation protection instruments, and particle radiation in physics experiments. Photomultipliers are used in research laboratories to measure the intensity and spectrum of light-emitting materials such as compound semiconductors and quantum dots. Photomultipliers are used as the detector in many spectrophotometers. This allows an instrument design that escapes the thermal noise limit on sensitivity, and which can therefore substantially increase the dynamic range of the instrument. Photomultipliers are used in numerous medical equipment designs. For example, blood analysis devices used by clinical medical laboratories, such as flow cytometers, utilize photomultipliers to determine the relative concentration of various components in blood samples, in combination with optical filters and incandescent lamps. An array of photomultipliers is used in a gamma camera. Photomultipliers are typically used as the detectors in flying-spot scanners. High-sensitivity applications After 50 years, during which solid-state electronic components have largely displaced the vacuum tube, the photomultiplier remains a unique and important optoelectronic component. Perhaps its most useful quality is that it acts, electronically, as a nearly perfect current source, owing to the high voltage utilized in extracting the tiny currents associated with weak light signals. There is no Johnson noise associated with photomultiplier signal currents, even though they are greatly amplified, e.g., by 100 thousand times (i.e., 100 dB) or more. The photocurrent still contains shot noise. Photomultiplier-amplified photocurrents can be electronically amplified by a high-input-impedance electronic amplifier (in the signal path, subsequent to the photomultiplier), thus producing appreciable voltages even for vanishingly small photon fluxes. Photomultipliers offer the best possible opportunity to exceed the Johnson noise for many configurations. The aforementioned refers to measurement of light fluxes that, while small, nonetheless amount to a continuous stream of multiple photons. For smaller photon fluxes, the photomultiplier can be operated in photon-counting, or Geiger, mode (see also Single-photon avalanche diode). 
In Geiger mode the photomultiplier gain is set so high (using high voltage) that a single photoelectron resulting from a single photon incident on the primary surface generates a very large current at the output circuit. However, owing to the avalanche of current, a reset of the photomultiplier is required. In either case, the photomultiplier can detect individual photons. The drawback, however, is that not every photon incident on the primary surface is counted, either because of the less-than-perfect efficiency of the photomultiplier, or because a second photon can arrive at the photomultiplier during the "dead time" associated with a first photon and never be noticed. A photomultiplier will produce a small current even without incident photons; this is called the dark current. Photon-counting applications generally demand photomultipliers designed to minimise dark current. Nonetheless, the ability to detect single photons striking the primary photosensitive surface itself reveals the quantization principle that Einstein put forth. Photon counting (as it is called) reveals that light is not only a wave but also consists of discrete particles (i.e., photons). Temperature range At cryogenic temperatures, photomultipliers are known to exhibit an increase in burst-like electron emission as the temperature is lowered. This phenomenon is still unexplained by any physics theory. See also Lucas cell Scintillation counter Silicon photomultiplier Total absorption spectroscopy References Bibliography Wright, A.G., The Photomultiplier Handbook, 616 pp., Oxford University Press, Oxford, England (2017). https://global.oup.com/academic/product/the-photomultiplier-handbook-9780199565092 Engstrom, Ralph W., Photomultiplier Handbook, RCA/Burle (1980). Photomultiplier Tubes: Basics and Applications (Second Edition), Hamamatsu Photonics, Hamamatsu City, Japan (1999). Flyckt, S.O. and Marmonier, C., Photomultiplier Tubes: Principles and Applications, Philips Photonics, Brive, France (2002). External links Molecular Expressions – Java-based simulation and tutorial on photomultiplier tubes Photomultiplier Handbook (4MB PDF) from Burle Industries, essentially the Engstrom-RCA Handbook reprinted Photomultiplier technical papers from ET-Enterprises Photomultiplier tubes basics and applications from Hamamatsu Photonics Electron Multiplier – simulation of an electron multiplier tube Ionising radiation detectors Medical imaging Particle detectors Photodetectors Sensors Vacuum tubes Photomultipliers Single-photon detectors
Photomultiplier tube
[ "Physics", "Technology", "Engineering" ]
5,675
[ "Radioactive contamination", "Vacuum tubes", "Vacuum", "Particle detectors", "Measuring instruments", "Ionising radiation detectors", "Sensors", "Matter" ]
177,259
https://en.wikipedia.org/wiki/Enschede%20fireworks%20disaster
The Enschede fireworks disaster was a catastrophic fireworks explosion on 13 May 2000 in Enschede, Netherlands. The explosion killed 23 people, including four firefighters, and injured 950 others. A total of 400 homes were destroyed and 1,500 buildings were subsequently damaged. The first explosion had a strength in the order of , while the strength of the final explosion was in the range of . The biggest blast was felt as far as the city of Deventer, away. Fire crews were called in from across the border in Germany to help battle the blaze; it was brought under control by the end of the day. S.E. Fireworks was a major supplier to pop concerts and major festive events in the Netherlands. Prior to the disaster it had a good safety record and had passed all safety audits. Cause The fire which triggered the explosion is believed to have started inside the central building of the S.E. Fireworks depot, in a work area where some of fireworks were stored. It then spread outside the building to two full shipping containers that were being used to illegally store more display incendiaries. When of fireworks exploded, it destroyed the surrounding residential area. One theory to explain the large scale of the disaster was that internal fire doors in the central complex—which might otherwise have contained the fire—had been left open. In theory, an explosion was considered highly unlikely because the fireworks were stored in sealed bunkers specifically designed to minimize such a risk. However, the illegal use of shipping containers reduced safety, particularly as they had been arranged close together at ground level and had not been separated either by earthworks or by other partitioning. One week prior to the explosion, S.E. Fireworks had been audited. The company was judged to have met all official safety regulations, while the legally imported fireworks had been inspected by Dutch authorities and deemed safe. However, after the explosion, residents from the affected district of Roombeek—a poor, working-class neighbourhood—complained of government inaction and lack of interest, saying that the disaster had been waiting to happen. When it was built in 1977, the warehouse was outside the town, but as new residential areas were built it became surrounded by low-income housing. Residents and town councillors stated that they did not even know that there was a fireworks warehouse in their area. Later, in the court case, the judge said that city officials had failed to take steps even when they knew laws had been broken. They acted "completely incomprehensibly" by allowing the company to expand, for fear that the city would have to pay the cost of moving S.E. Fireworks to another location. Damage A area around the warehouse was destroyed by the blast. The S.E. Fireworks factory was the only one in the Netherlands to be located in a residential area. The blast destroyed around 400 houses, incinerated 15 streets and damaged a total of 1,500 homes, leaving 1,250 people homeless and essentially obliterating the neighbourhood of Roombeek. Ten thousand residents were evacuated, and damages eventually neared 1 billion guilders (€454 million). The Dutch government warned that potentially harmful asbestos had been released into the air by the explosion. The fire spread to the nearby Grolsch beer brewery, which had an asbestos roof. The brewery was later closed and demolished; a replacement opened in nearby Boekelo in 2004. 
Death toll By 22 May 2000, 18 people had been confirmed dead, including an elderly woman who died from her injuries in hospital on 21 May. On 24 May, the search for victims ended with three people still missing and presumed dead. A 19th victim died from her injuries in hospital on 19 October 2000. The addition of the three missing people to the death toll brought the total to 22. In 2005, Dutch media reported that a 26-year-old Enschede resident who had died in a local hospital on 2 October 2000 was unofficially being recognised as the 23rd victim of the disaster. Her widower clarified in an interview that while she had sustained only minor injuries on the day of the disaster, she appeared to develop an eating disorder in the months thereafter, and that her doctor had reported a "direct connection" between her death and the disaster, without specifying whether she became physically ill as a result of the disaster or whether illness was a factor in her death. A memorial at the site of the disaster lists her name along with the 22 victims included in the official death toll. Healthcare impact In 2005, a survey was published based on the electronic medical records of general practitioners, which compared 9,329 victims with a control sample of 7,329 over a period ranging from 16 months before to 2.5 years after the disaster. The study highlighted that "two and a half years post-disaster the prevalence of psychological problems in victims, who had to relocate, was about double and in the non-relocated victims one-third more than controls". Relocated victims showed the highest increase in medically unexplained physical symptoms (tiredness, headache, nausea, and abdominal pain) and in gastrointestinal morbidity. Legal proceedings On 20 May, Dutch authorities issued an international arrest warrant for the two managers of the company, Rudi Bakker and Willie Pater, after they fled their homes, which were empty when searched. Pater handed himself in on the same day, and Bakker the day after. In April 2002, the owners were sentenced to six months' imprisonment each for violation of environmental and safety regulations and dealing in illegal fireworks, along with a fine of €2,250 each. However, they were both acquitted of the more serious charge of negligence for the fire. In May 2003, Arnhem Appeals Court acquitted 36-year-old André de Vries of arson. Almelo Court had originally (May 2002) tried and convicted De Vries of arson and sentenced him to 15 years' imprisonment. In February 2005, after a four-and-a-half-year legal battle, the six-month sentence of each of the owners was increased to twelve months. A total of €8.5m in compensation was awarded to the victims of the explosion, according to the UPV, the organisation in charge of distributing the compensation, which assessed 3,519 claims. Three hundred people received cash for incurring extra costs, 136 people received money for loss of income, and 1,477 people received compensation for health problems. Dutch fire safety regulations The S.E. Fireworks disaster led to intensified safety regulations in the Netherlands concerning the storage and sale of fireworks. The Roombeek area that was destroyed by the blast has since been rebuilt. Since the catastrophe, three illegal fireworks depots have been closed down in the Netherlands. Memorials There have been annual public memorial services in Roombeek since 2000, led by mayor Jan Mans and commonly ending on the Stroinksbleekweg. The theme of the 2004 service was "homecoming". 
In 2005 a memorial by Moldavian artist Pavel Balta, titled 'Het verdwenen huis tussen hemel en aarde' (The vanished house between heaven and earth), was unveiled at the site of the disaster. Part of the memorial consists of the floor of the former storage depot, crushed into the crater created by the explosion. In popular culture The events of the disaster served as the inspiration for the mission "Explosion in fireworks factory" from Emergency 4: Global Fighters for Life. See also 1654 Delft gunpowder explosion Leiden gunpowder disaster (1807) 2000 in the Netherlands List of fireworks accidents and incidents List of industrial disasters Notes References External links In pictures: Firework blaze – BBC News Satellite Map Imagery – Google Maps 2000 disasters in the Netherlands 2000 fires in Europe 2000 in the Netherlands 2000 industrial disasters Building and structure fires in the Netherlands Events in Enschede Explosions in 2000 Explosions in the Netherlands Fireworks accidents and incidents History of Overijssel Industrial fires and explosions May 2000 events in Europe Warehouse fires
Enschede fireworks disaster
[ "Chemistry" ]
1,594
[ "Industrial fires and explosions", "Explosions" ]
177,309
https://en.wikipedia.org/wiki/Lighting%20design
In theatre, a lighting designer (or LD) works with the director, choreographer, set designer, costume designer, and sound designer to create the lighting, atmosphere, and time of day for the production in response to the text, while keeping in mind issues of visibility, safety, and cost. The LD also works closely with the stage manager or show control programming, if show control systems are used in that production. Outside stage lighting, the job of a lighting designer can be much more diverse, and they can be found working on rock and pop tours, corporate launches, art installations, or lighting effects at sporting events. During pre-production The role of the lighting designer varies greatly within professional and amateur theater. For a Broadway show, a touring production, and most regional and small productions, the LD is usually an outside freelance specialist hired early in the production process. Smaller theater companies may have a resident lighting designer responsible for most of the company's productions or rely on a variety of freelance or even volunteer help to light their productions. At the off-Broadway or off-off-Broadway level, the LD will occasionally be responsible for much of the hands-on technical work (such as hanging instruments, programming the light board, etc.) that would be the work of the lighting crew in a larger theater. The LD will read the script carefully and make notes on changes of place and time between scenes, and will have meetings (called design or production meetings) with the director, designers, stage manager, and production manager to discuss ideas for the show and establish budget and scheduling details. The LD will also attend several later rehearsals to observe the way the actors are being directed to use the stage area ('blocking') during different scenes and will receive updates from the stage manager on any changes that occur. The LD will also ensure that they have an accurate plan of the theatre's lighting positions and a list of its equipment, as well as an accurate copy of the set design, especially the ground plan and section. The LD must consider the show's mood and the director's vision in creating a lighting design. To help communicate the artistic vision, the LD may employ renderings, storyboards, photographs, reproductions of artwork, or mockups of effects to help communicate how the lighting should look. Various forms of paperwork are essential for the LD to successfully communicate the design to the various members of the production team. Examples of typical paperwork include cue sheets, light plots, instrument schedules, shop orders, and focus charts. Cue sheets communicate the placement of the cues that the LD has created for the show, described in artistic terminology rather than technical language, together with information on exactly when each cue is called, so that the stage manager and the assistants know when and where to call each cue. Cue sheets are of most value to stage management. The light plot is a scale drawing that communicates the location of lighting fixtures and lighting positions so that a team of electricians can independently install the lighting system. Next to each instrument on the plan will be information for any color gel, gobo, or other accessories that need to go with it, along with its channel number. Often, paperwork listing all of this information is also generated using a program such as Lightwright. 
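The relationship among these documents is essentially the same tabular data sorted in different ways. As a loose illustration (plain Python; the instruments, positions, and channel numbers below are invented for the example, and this is not the data model of Lightwright or of any real program), an instrument schedule can be derived from the same records that annotate the plot:

```python
# Each record carries the data that appears next to an instrument on the
# light plot: hanging position, unit number, channel, color gel, and gobo.
instruments = [
    {"position": "1st Electric", "unit": 3, "channel": 12, "color": "R80",  "gobo": None},
    {"position": "1st Electric", "unit": 1, "channel": 7,  "color": "L201", "gobo": "breakup"},
    {"position": "Box Boom L",   "unit": 2, "channel": 21, "color": "R02",  "gobo": None},
]

# An instrument schedule is the same data re-sorted by hanging position
# and unit number, so the crew can work down each pipe during load-in.
for inst in sorted(instruments, key=lambda i: (i["position"], i["unit"])):
    gobo = inst["gobo"] or "-"
    print(f'{inst["position"]:<14} unit {inst["unit"]}  ch {inst["channel"]:>2}  {inst["color"]:<5} {gobo}')
```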
The lighting designer uses this paperwork not only to aid in visualizing ideas but also to provide simple lists that assist the master electrician during load-in, focus, and technical rehearsals. Professional LDs generally use special computer-aided design packages to create accurate and easily readable drafted plots that can be swiftly updated as necessary. The LD will discuss the plot with the show's production manager and the theatre's master electrician or technical director to make sure there are no unforeseen problems during load-in. The lighting designer is responsible, in conjunction with the production's independently hired production electrician, who will interface with the theater's master electrician, for directing the theater's electrics crew in the realization of the design during the technical rehearsals. After the electricians have hung, circuited, and patched the lighting units, the LD will direct the focusing (pointing, shaping and sizing of the light beams) and gelling (coloring) of each unit. After focus has occurred, the LD usually sits at a temporary desk (tech table) in the theater (typically on the center line in the middle of the house) where they have a good view of the stage, and works with the light board operator, who will either be seated alongside them at a portable control console or talk via headset to the control room. At the tech table, the LD will generally use a magic sheet, which is a pictorial layout of how the lights relate to the stage, so they can have quick access to the channel numbers that control particular lighting instruments. The LD may also have a copy of the light plot and channel hookup, a remote lighting console, a computer monitor connected to the light board (so they can see what the board op is doing), and a headset, though in smaller theatres this is less common. There may be a time allowed for pre-lighting or "pre-cueing", a practice that is often done with people known as light walkers, who stand in for performers so the LD can see what the light looks like on bodies. At an arranged time, the performers arrive and the production is worked through in chronological order, with occasional stops to correct sound, lighting, entrances, etc.; this is known as a "cue-to-cue" or tech rehearsal. The lighting designer will work constantly with the board operator to refine the lighting states as the technical rehearsal continues, but because the focus of a "tech" rehearsal is the production's technical aspects, the LD may require the performers to pause ("hold") frequently. Nevertheless, any errors of focusing or changes to the lighting plan are corrected only when the performers are not onstage. These changes take place during 'work' or 'note' calls. The LD only attends these notes calls if units are hung or rehung and require additional focusing. The LD or assistant lighting designer (also known as the ALD; see below for a description) will be in charge if in attendance. If the only work to be done is maintenance (e.g., changing a lamp or a burnt-out gel), then the production or master electrician will be in charge and will direct the electrics crew. After the tech process, the performance may (or may not, depending on time constraints) go into dress rehearsal without a ticketed audience or previews with a ticketed audience. During this time, if the cueing is finished, the LD will sit in the audience and take notes on what works and what needs changing. 
At this point, the stage manager will begin to take over the work of calling cues for the light board op to follow. Generally, the LD will stay on the headset, and may still have a monitor connected to the light board in case of problems, or will be in the control booth with the board operator when a monitor is not available. Changes will often occur during the notes call, but if serious problems occur, the performance may be halted while the issue is resolved. Once the show is open to the public, the lighting designer will often stay and watch several performances of the show, making notes each night and making desired changes the next day during the notes call. If the show is still in previews, then the LD will make changes, but once the production officially opens, the lighting designer will normally not make further changes. Changes should not be made after the lighting design is finished, and never without the LD's approval. There may be times when changes are necessary after the production has officially opened. Reasons for changes after opening night include: casting changes; significant changes in blocking; addition, deletion or rearrangement of scenes; or a tech and/or preview period (if there was a preview period) that was too short to accommodate as thorough a cueing as was needed (this is particularly common in dance productions). If significant changes need to be made, the LD will come in and make them; however, if only smaller changes are needed, the LD may opt to send the ALD. If a show runs for a particularly long time, the LD may come in periodically to check the focus of each lighting instrument and whether the gels are retaining their color (some gel, especially saturated gel, loses its richness and can fade or 'burn out' over time). The LD may also sit in on a performance to make sure that the cues are still being called at the right place and time. The goal is often to finish by the opening of the show, but what is most important is that the LD and the director believe that the design is finished to their satisfaction. If that happens by opening night, then after opening no changes are normally made to that particular production run at that venue. The general maintenance of the lighting rig then becomes the responsibility of the master electrician. In small theatres It is uncommon for a small theatre to have a very large technical crew, as there is less work to do. Many times, the lighting crew of a small theater will consist of a single lighting designer and one to three people, who collectively are in charge of hanging, focusing, and patching all lighting instruments. The lighting designer, in this situation, commonly works directly with this small team, fulfilling the roles of both master electrician and lighting designer. Many times the designer will directly participate in the focusing of lights. The same crew will generally also program cues and operate the light board during rehearsals and performances. In some cases, the light board and sound board are operated by the same person, depending on the complexity of the show. The lighting designer may also take on other roles in addition to lights when they are finished hanging lights and programming cues on the board. Advances in visualization and presentation As previously mentioned, it is difficult to fully communicate the intent of a lighting design before all the lights are installed and all the cues are written. 
With advances in computer processing and visualization software, lighting designers are now able to create computer-generated images (CGI) that represent their ideas. The lighting designer enters the light plot into the visualization software and then enters the ground plan of the theater and the set design, giving as much three-dimensional data as possible (which helps in creating complete renderings). This creates a 3D model in computer space that can be lit and manipulated. Using the software, the LD can use the lights from their plot to create actual lighting in the 3D model, with the ability to define parameters such as color, focus, gobo, beam angle, etc. The designer can then take renderings or "snapshots" of various looks that can be printed out and shown to the director and other members of the design team. Mockups and lighting scale models In addition to computer visualization, either full-scale or small-scale mockups are a good method for depicting a lighting designer's ideas. Fiber optic systems such as LightBox or Luxam allow users to light a scale model of the set. For example, a set designer can create a model of the set in 1/4" scale, and the lighting designer can then take the fiber optic cables and attach them to scaled-down lighting units that can accurately replicate the beam angles of specified lighting fixtures. These 'mini lights' can then be attached to cross pieces simulating different lighting positions. Fiber optic fixtures have the capacity to simulate attributes of full-scale theatrical lighting fixtures, including color, beam angle, intensity, and gobos. The most sophisticated fiber optic systems are controllable through computer software or a DMX-controlled light board. This gives the lighting designer the ability to mock up real-time lighting effects as they will look during the show. Additional members of the lighting design team If the production is large or especially complex, the lighting designer may hire additional lighting professionals to help execute the design. Associate lighting designer The associate lighting designer (associate LD) will assist the lighting designer in creating and executing the lighting design. While the duties that an LD may expect the associate LD to perform may differ from person to person, usually the associate LD will do the following:
Attend design and production meetings with or in place of the LD.
Attend rehearsals with or in place of the LD and take notes of specific design ideas and tasks that the lighting department needs to accomplish.
Assist the LD in generating the light plot, channel hookup and sketches.
If needed, take the set drawings and put them into a CAD program to be manipulated by the LD (however, this job is usually given to the assistant LD if there is one).
The assistant LD may be in charge of running focus, and may even direct where the lights are to be focused. The associate is generally authorized to speak on behalf of the LD and can make creative and design decisions when needed (and when authorized by the LD). This is one of the biggest differences between the associate and the assistant. Assistant lighting designer The assistant lighting designer (assistant LD) assists the lighting designer and the associate lighting designer. Depending on the particular arrangement, the ALD may report directly to the LD, or they may in essence be the associate's assistant. There also may be more than one assistant on a show, depending on the size of the production. 
The ALD will usually:
Attend design and production meetings with the LD or the associate LD.
Attend rehearsals with the LD or the associate LD.
Assist the LD in generating the light plot and channel hookup; if the plot is to be computer generated, the ALD is the one who physically enters the information into the computer.
Run errands for the LD, such as picking up supplies or getting the light plot printed in large format.
Help the associate LD in running focus.
Take focus charts during focus.
Track and coordinate followspots (if any exist for the production) and generate paperwork to aid in their cueing and color changes.
In rare instances, serve as the light board operator.
See also List of lighting designers Architectural lighting design Landscape lighting Master electrician Professional Lighting and Sound Association References Stage Lighting Design: The Art, the Craft, the Life, by Richard Pilbrow, on books.google.com Stage Lighting Design: A Practical Guide, by Neil Fraser, on books.google.com A Practical Guide to Stage Lighting, by Steven Louis Shelley, on books.google.com The Lighting Art: The Aesthetics of Stage Lighting Design, by Richard H. Palmer, on books.google.com Stage lighting design in Britain: the emergence of the lighting designer, 1881–1950, by Nigel H. Morgan, on books.google.com Scene Design and Stage Lighting, by R. Wolf and Dick Block, on books.google.com External links stagelightingprimer.com, Stage Lighting for Students northern.edu, A brief history of stage lighting Broadcasting occupations Design occupations Filmmaking occupations Landscape and garden designers Mass media occupations Television terminology Theatrical occupations Stage crew Stage lighting
Lighting design
[ "Engineering" ]
3,075
[ "Design occupations", "Design" ]
177,318
https://en.wikipedia.org/wiki/List%20of%20data%20structures
This is a list of well-known data structures. For a wider list of terms, see list of terms relating to algorithms and data structures. For a comparison of running times for a subset of this list, see comparison of data structures.
Data types
Primitive types
Boolean, true or false
Character
Floating-point representation of a finite subset of the rationals, including single-precision and double-precision IEEE 754 floats, among others
Fixed-point representation of the rationals
Integer, a direct representation of either the integers or the non-negative integers
Reference (sometimes erroneously referred to as a pointer or handle), a value that refers to another value, possibly including itself
Symbol, a unique identifier
Enumerated type, a set of symbols
Complex, representation of complex numbers
Composite types or non-primitive types
Array, a sequence of elements of the same type stored contiguously in memory
Record (also called a structure or struct), a collection of fields
Product type (also called a tuple), a record in which the fields are not named
String, a sequence of characters representing text
Union, a datum which may be one of a set of types
Tagged union (also called a variant, discriminated union or sum type), a union with a tag specifying which type the data is
Abstract data types
Container, List, Tuple, Associative array (Map), Multimap, Set, Multiset (bag), Stack, Queue (for example, Priority queue), Double-ended queue, Graph (for example, Tree, Heap)
Some properties of abstract data types: "Ordered" means that the elements of the data type have some kind of explicit order to them, where an element can be considered "before" or "after" another element. This order is usually determined by the order in which the elements are added to the structure, but the elements can be rearranged in some contexts, such as sorting a list. For a structure that isn't ordered, on the other hand, no assumptions can be made about the ordering of the elements (although a physical implementation of these data types will often apply some kind of arbitrary ordering). "Uniqueness" means that duplicate elements are not allowed. Depending on the implementation of the data type, attempting to add a duplicate element may either be ignored, overwrite the existing element, or raise an error. The detection of duplicates is based on some inbuilt (or, alternatively, user-defined) rule for comparing elements.
Linear data structures
A data structure is said to be linear if its elements form a sequence.
Arrays: Array, Associative array, Bit array, Bit field, Bitboard, Bitmap, Circular buffer, Control table, Image, Dope vector, Dynamic array, Gap buffer, Hashed array tree, Lookup table, Matrix, Parallel array, Sorted array, Sparse matrix, Iliffe vector, Variable-length array
Lists: Doubly linked list, Array list, Linked list (also known as a singly linked list), Association list, Self-organizing list, Skip list, Unrolled linked list, VList, Conc-tree list, Xor linked list, Zipper, Doubly connected edge list (also known as half-edge), Difference list, Free list
Trees
Trees are a subset of directed acyclic graphs. 
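As a concrete reference point for the tree structures enumerated next, here is a minimal sketch (Python) of one of them, the binary search tree; it is a bare-bones illustration for orientation, not a production implementation:

```python
# A node of a binary search tree: keys smaller than node.key live in the
# left subtree, larger keys in the right subtree.
class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root  # duplicate keys are ignored

def contains(root, key):
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root is not None

root = None
for k in [8, 3, 10, 1, 6]:
    root = insert(root, k)
print(contains(root, 6), contains(root, 7))  # True False
```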
Binary trees: AA tree, AVL tree, Binary search tree, Binary tree, Cartesian tree, Conc-tree list, Left-child right-sibling binary tree, Order statistic tree, Pagoda, Randomized binary search tree, Red–black tree, Rope, Scapegoat tree, Self-balancing binary search tree, Splay tree, T-tree, Tango tree, Threaded binary tree, Top tree, Treap, WAVL tree, Weight-balanced tree, Zip tree
B-trees: B-tree, B+ tree, B*-tree, Dancing tree, 2–3 tree, 2–3–4 tree, Queap, Fusion tree, Bx-tree
Heaps: Heap, Min-max heap, Binary heap, B-heap, Weak heap, Binomial heap, Fibonacci heap, AF-heap, Leonardo heap, 2–3 heap, Soft heap, Pairing heap, Leftist heap, Treap, Beap, Skew heap, Ternary heap, D-ary heap, Brodal queue
Bit-slice trees (in these data structures each tree node compares a bit slice of key values): Radix tree, Suffix tree, Suffix array, Compressed suffix array, FM-index, Generalised suffix tree, B-tree, Judy array, Trie, X-fast trie, Y-fast trie, Merkle tree
Multi-way trees: Ternary search tree, Ternary tree, K-ary tree, And–or tree, (a,b)-tree, Link/cut tree, SPQR-tree, Spaghetti stack, Disjoint-set data structure (union-find data structure), Fusion tree, Enfilade, Exponential tree, Fenwick tree, Van Emde Boas tree, Rose tree
Space-partitioning trees (data structures used for space partitioning or binary space partitioning): Segment tree, Interval tree, Range tree, Bin, K-d tree, Implicit k-d tree, Min/max k-d tree, Relaxed k-d tree, Adaptive k-d tree, Quadtree, Octree, Linear octree, Z-order, UB-tree, R-tree, R+ tree, R* tree, Hilbert R-tree, X-tree, Metric tree, Cover tree, M-tree, VP-tree, BK-tree, Bounding interval hierarchy, Bounding volume hierarchy, BSP tree, Rapidly exploring random tree
Application-specific trees: Abstract syntax tree, Parse tree, Decision tree, Alternating decision tree, Minimax tree, Expectiminimax tree, Finger tree, Expression tree, Log-structured merge-tree, PQ tree
Hash-based structures: Approximate membership query filter, Bloom filter, Cuckoo filter, Quotient filter, Count–min sketch, Distributed hash table, Double hashing, Dynamic perfect hash table, Hash array mapped trie, Hash list, Hash table, Hash tree, Hash trie, Koorde, Prefix hash tree, Rolling hash, MinHash, Ctrie
Graphs
Many graph-based data structures are used in computer science and related fields: Graph, Adjacency list, Adjacency matrix, Graph-structured stack, Scene graph, Decision tree, Binary decision diagram, Zero-suppressed decision diagram, And-inverter graph, Directed graph, Directed acyclic graph, Propositional directed acyclic graph, Multigraph, Hypergraph
Other: Lightmap, Winged edge, Quad-edge, Routing table, Symbol table, Piece table, E-graph
See also
List of algorithms
Purely functional data structure
Blockchain, a hash-based chained data structure that can persist state history over time
External links
Tommy Benchmarks: Comparison of several data structures.
Data Structures
List of data structures
[ "Technology" ]
1,266
[ "Computing-related lists" ]
177,320
https://en.wikipedia.org/wiki/Spectral%20line
A spectral line is a weaker or stronger region in an otherwise uniform and continuous spectrum. It may result from emission or absorption of light in a narrow frequency range, compared with the nearby frequencies. Spectral lines are often used to identify atoms and molecules. These "fingerprints" can be compared to the previously collected fingerprints of atoms and molecules, and are thus used to identify the atomic and molecular components of stars and planets, which would otherwise be impossible. Types of line spectra Spectral lines are the result of interaction between a quantum system (usually atoms, but sometimes molecules or atomic nuclei) and a single photon. When a photon has about the right amount of energy (related to its frequency by the Planck relation, E = hν) to allow a change in the energy state of the system (in the case of an atom this is usually an electron changing orbitals), the photon is absorbed. Then the energy will be spontaneously re-emitted, either as one photon at the same frequency as the original one or in a cascade, where the sum of the energies of the photons emitted will be equal to the energy of the one absorbed (assuming the system returns to its original state). A spectral line may be observed either as an emission line or an absorption line. Which type of line is observed depends on the type of material and its temperature relative to another emission source. An absorption line is produced when photons from a hot, broad-spectrum source pass through a cooler material. The intensity of light, over a narrow frequency range, is reduced due to absorption by the material and re-emission in random directions. By contrast, a bright emission line is produced when photons from a hot material are detected, perhaps in the presence of a broad spectrum from a cooler source. The intensity of light, over a narrow frequency range, is increased due to emission by the hot material. Spectral lines are highly atom-specific and can be used to identify the chemical composition of any medium. Several elements, including helium, thallium, and caesium, were discovered by spectroscopic means. Spectral lines also depend on the temperature and density of the material, so they are widely used to determine the physical conditions of stars and other celestial bodies that cannot be analyzed by other means. Depending on the material and its physical conditions, the energy of the involved photons can vary widely, with spectral lines observed across the electromagnetic spectrum, from radio waves to gamma rays. Nomenclature Strong spectral lines in the visible part of the electromagnetic spectrum often have a unique Fraunhofer line designation, such as K for a line at 393.366 nm emerging from the singly ionized calcium atom, Ca+, though some of the Fraunhofer "lines" are blends of multiple lines from several different species. In other cases, the lines are designated according to the level of ionization by adding a Roman numeral to the designation of the chemical element. Neutral atoms are denoted with the Roman numeral I, singly ionized atoms with II, and so on, so that, for example, Cu II denotes a copper ion with +1 charge (Cu1+) and Fe III an iron ion with +2 charge (Fe2+). More detailed designations usually include the line wavelength and may include a multiplet number (for atomic lines) or band designation (for molecular lines). Many spectral lines of atomic hydrogen also have designations within their respective series, such as the Lyman series or Balmer series. 
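For hydrogen, the wavelengths of these series follow from the Rydberg formula, 1/λ = R(1/n1² − 1/n2²) with n2 > n1. A small sketch (Python; R is the Rydberg constant for hydrogen, and the vacuum wavelengths it prints are approximate):

```python
# Vacuum wavelengths of the first few Balmer lines of hydrogen (n1 = 2).
R_H = 1.0967758e7  # Rydberg constant for hydrogen, in 1/m

def wavelength_nm(n1: int, n2: int) -> float:
    inv_lambda = R_H * (1 / n1**2 - 1 / n2**2)
    return 1e9 / inv_lambda

for n2 in (3, 4, 5):
    print(f"n = {n2} -> 2: {wavelength_nm(2, n2):.1f} nm")
# ~656.5 nm (H-alpha), ~486.3 nm (H-beta), ~434.2 nm (H-gamma)
```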
Originally all spectral lines were classified into series: the principal series, sharp series, and diffuse series. These series exist across atoms of all elements, and the patterns for all atoms are well-predicted by the Rydberg-Ritz formula. These series were later associated with suborbitals. Line broadening and shift There are a number of effects which control spectral line shape. A spectral line extends over a tiny spectral band with a nonzero range of frequencies, not a single frequency (i.e., a nonzero spectral width). In addition, its center may be shifted from its nominal central wavelength. There are several reasons for this broadening and shift. These reasons may be divided into two general categories – broadening due to local conditions and broadening due to extended conditions. Broadening due to local conditions is due to effects which hold in a small region around the emitting element, usually small enough to assure local thermodynamic equilibrium. Broadening due to extended conditions may result from changes to the spectral distribution of the radiation as it traverses its path to the observer. It also may result from the combining of radiation from a number of regions which are far from each other. Broadening due to local effects Natural broadening The lifetime of excited states results in natural broadening, also known as lifetime broadening. The uncertainty principle relates the lifetime of an excited state (due to spontaneous radiative decay or the Auger process) with the uncertainty of its energy. Some authors use the term "radiative broadening" to refer specifically to the part of natural broadening caused by the spontaneous radiative decay. A short lifetime will have a large energy uncertainty and a broad emission. This broadening effect results in an unshifted Lorentzian profile. The natural broadening can be experimentally altered only to the extent that decay rates can be artificially suppressed or enhanced. Thermal Doppler broadening The atoms in a gas which are emitting radiation will have a distribution of velocities. Each photon emitted will be "red"- or "blue"-shifted by the Doppler effect depending on the velocity of the atom relative to the observer. The higher the temperature of the gas, the wider the distribution of velocities in the gas. Since the spectral line is a combination of all of the emitted radiation, the higher the temperature of the gas, the broader the spectral line emitted from that gas. This broadening effect is described by a Gaussian profile and there is no associated shift. Pressure broadening The presence of nearby particles will affect the radiation emitted by an individual particle. There are two limiting cases by which this occurs: Impact pressure broadening or collisional broadening: The collision of other particles with the light emitting particle interrupts the emission process, and by shortening the characteristic time for the process, increases the uncertainty in the energy emitted (as occurs in natural broadening). The duration of the collision is much shorter than the lifetime of the emission process. This effect depends on both the density and the temperature of the gas. The broadening effect is described by a Lorentzian profile and there may be an associated shift. Quasistatic pressure broadening: The presence of other particles shifts the energy levels in the emitting particle (see spectral band), thereby altering the frequency of the emitted radiation. 
The duration of the influence is much longer than the lifetime of the emission process. This effect depends on the density of the gas, but is rather insensitive to temperature. The form of the line profile is determined by the functional form of the perturbing force with respect to distance from the perturbing particle. There may also be a shift in the line center. The general expression for the lineshape resulting from quasistatic pressure broadening is a 4-parameter generalization of the Gaussian distribution known as a stable distribution. Pressure broadening may also be classified by the nature of the perturbing force as follows: Linear Stark broadening occurs via the linear Stark effect, which results from the interaction of an emitter with the electric field of a charged particle at a distance, causing a shift in energy that is linear in the field strength. Resonance broadening occurs when the perturbing particle is of the same type as the emitting particle, which introduces the possibility of an energy exchange process. Quadratic Stark broadening occurs via the quadratic Stark effect, which results from the interaction of an emitter with an electric field, causing a shift in energy that is quadratic in the field strength. Van der Waals broadening occurs when the emitting particle is being perturbed by Van der Waals forces. For the quasistatic case, a Van der Waals profile is often useful for describing the line shape. The energy shift as a function of distance between the interacting particles is given in the wings by e.g. the Lennard-Jones potential. Inhomogeneous broadening Inhomogeneous broadening is a general term for broadening because some emitting particles are in a different local environment from others, and therefore emit at a different frequency. This term is used especially for solids, where surfaces, grain boundaries, and stoichiometry variations can create a variety of local environments for a given atom to occupy. In liquids, the effects of inhomogeneous broadening are sometimes reduced by a process called motional narrowing. Broadening due to non-local effects Certain types of broadening are the result of conditions over a large region of space rather than simply of conditions that are local to the emitting particle. Opacity broadening Opacity broadening is an example of a non-local broadening mechanism. Electromagnetic radiation emitted at a particular point in space can be reabsorbed as it travels through space. This absorption depends on wavelength. The line is broadened because the photons at the line center have a greater reabsorption probability than the photons at the line wings. Indeed, the reabsorption near the line center may be so great as to cause a self-reversal in which the intensity at the center of the line is less than in the wings. This process is also sometimes called self-absorption. Macroscopic Doppler broadening Radiation emitted by a moving source is subject to Doppler shift due to a finite line-of-sight velocity projection. If different parts of the emitting body have different velocities (along the line of sight), the resulting line will be broadened, with the line width proportional to the width of the velocity distribution. For example, radiation emitted from a distant rotating body, such as a star, will be broadened due to the line-of-sight variations in velocity on opposite sides of the star (this effect is usually referred to as rotational broadening). The greater the rate of rotation, the broader the line.
Another example is an imploding plasma shell in a Z-pinch. Combined effects Each of these mechanisms can act in isolation or in combination with others. Assuming each effect is independent, the observed line profile is a convolution of the line profiles of each mechanism. For example, a combination of the thermal Doppler broadening and the impact pressure broadening yields a Voigt profile. However, the different line broadening mechanisms are not always independent. For example, the collisional effects and the motional Doppler shifts can act in a coherent manner, resulting under some conditions even in a collisional narrowing, known as the Dicke effect. Spectral lines of chemical elements Bands The phrase "spectral lines", when not qualified, usually refers to lines having wavelengths in the visible band of the full electromagnetic spectrum. Many spectral lines occur at wavelengths outside this range. At shorter wavelengths, which correspond to higher energies, ultraviolet spectral lines include the Lyman series of hydrogen. At the much shorter wavelengths of X-rays, the lines are known as characteristic X-rays because they remain largely unchanged for a given chemical element, independent of their chemical environment. Longer wavelengths correspond to lower energies, where the infrared spectral lines include the Paschen series of hydrogen. At even longer wavelengths, the radio spectrum includes the 21-cm line used to detect neutral hydrogen throughout the cosmos. See also Absorption spectrum Atomic spectral line Bohr model Electron configuration Emission spectrum Fourier transform Fraunhofer line Table of emission spectra of gas discharge lamps Hydrogen line (21-cm line) Hydrogen spectral series Spectral band Spectroscopy Splatalogue Notes References Further reading Spectroscopy Spectrum (physical sciences)
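The broadening mechanisms described in this article combine numerically in a simple way: thermal motion sets a Gaussian width, pressure effects set a Lorentzian width, and their convolution is the Voigt profile mentioned under "Combined effects". A minimal sketch follows (the formulas are standard; the gas temperature and the Lorentzian half-width below are illustrative assumptions, not values from this article):

```python
# Sketch: thermal Doppler FWHM of a line, and the Voigt profile obtained by
# convolving Gaussian (Doppler) and Lorentzian (pressure) broadening.
import numpy as np
from scipy.special import voigt_profile  # available in SciPy >= 1.4

K_B = 1.380649e-23   # Boltzmann constant, J/K
M_H = 1.6735e-27     # hydrogen atom mass, kg
C = 2.99792458e8     # speed of light, m/s

def doppler_fwhm_nm(lambda0_nm: float, temperature: float, mass: float) -> float:
    """Thermal Doppler FWHM (nm) of a line at lambda0_nm."""
    return lambda0_nm * np.sqrt(8.0 * K_B * temperature * np.log(2.0) / (mass * C**2))

# H-alpha (656.3 nm) in hydrogen gas at an assumed 10,000 K:
fwhm = doppler_fwhm_nm(656.3, 1.0e4, M_H)          # ~0.047 nm

# Voigt profile over wavelength offsets from line center:
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # Gaussian standard deviation, nm
gamma = 0.01                                       # assumed Lorentzian HWHM, nm
offsets = np.linspace(-0.5, 0.5, 1001)             # nm from line center
profile = voigt_profile(offsets, sigma, gamma)
```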
Spectral line
[ "Physics", "Chemistry" ]
2,429
[ "Physical phenomena", "Molecular physics", "Spectrum (physical sciences)", "Instrumental analysis", "Waves", "Spectroscopy" ]
177,356
https://en.wikipedia.org/wiki/Equational%20prover
EQP (Equational prover) is an automated theorem proving program for equational logic, developed by the Mathematics and Computer Science Division of the Argonne National Laboratory. It was one of the provers used for solving a longstanding problem posed by Herbert Robbins, namely, whether all Robbins algebras are Boolean algebras. References External links EQP project. Robbins Algebras Are Boolean. Argonne National Laboratory, Mathematics and Computer Science Division. Theorem proving software systems
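For orientation, the Robbins problem that EQP settled can be stated compactly. A Robbins algebra is a set with a binary operation ∨ and a unary operation ¬ satisfying three equations; this standard formulation is given here as background, not taken from the article's references:

```latex
\begin{align*}
x \lor y &= y \lor x && \text{(commutativity)}\\
(x \lor y) \lor z &= x \lor (y \lor z) && \text{(associativity)}\\
\neg(\neg(x \lor y) \lor \neg(x \lor \neg y)) &= x && \text{(Robbins equation)}
\end{align*}
```

In 1996, EQP's search, based on associative-commutative unification and paramodulation, found an equational proof that every algebra satisfying these equations is Boolean, settling a question that had been open for roughly six decades.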
Equational prover
[ "Mathematics" ]
101
[ "Theorem proving software systems", "Automated theorem proving", "Mathematical software" ]
177,434
https://en.wikipedia.org/wiki/Virgo%20Supercluster
The Local Supercluster (LSC or LS), or Virgo Supercluster, is a formerly defined supercluster containing the Virgo Cluster and Local Group, which itself contains the Milky Way and Andromeda galaxies, as well as others. At least 100 galaxy groups and clusters are located within its diameter of 33 megaparsecs (110 million light-years). The Virgo Supercluster is one of about 10 million superclusters in the observable universe and is in the Pisces–Cetus Supercluster Complex, a galaxy filament. A 2014 study indicates that the Local Supercluster is only a part of an even greater supercluster, Laniakea, a larger group centered on the Great Attractor, thus subsuming the former Virgo Supercluster under Laniakea. Background Beginning with the first large sample of nebulae published by William and John Herschel in 1863, it was known that there is a marked excess of nebular fields in the constellation Virgo, near the north galactic pole. In the 1950s, French–American astronomer Gérard de Vaucouleurs was the first to argue that this excess represented a large-scale galaxy-like structure, coining the term "Local Supergalaxy" in 1953, which he changed to "Local Supercluster" (LSC) in 1958. Harlow Shapley, in his 1959 book Of Stars and Men, suggested the term Metagalaxy. Debate went on during the 1960s and 1970s as to whether the Local Supercluster (LS) was actually a structure or a chance alignment of galaxies. The issue was resolved with the large redshift surveys of the late 1970s and early 1980s, which convincingly showed the flattened concentration of galaxies along the supergalactic plane. Structure In 1982, R. Brent Tully presented the conclusions of his research concerning the basic structure of the LS. It consists of two components: an appreciably flattened disk containing two thirds of the supercluster's luminous galaxies, and a roughly spherical halo containing the remaining third. The disk itself is a thin (~1 Mpc) ellipsoid with a long axis / short axis ratio of at least 6 to 1, and possibly as high as 9 to 1. Data released in June 2003 from the 5-year Two-degree-Field Galaxy Redshift Survey (2dF) have allowed astronomers to compare the LS to other superclusters. The LS represents a typical poor (that is, lacking a high density core) supercluster of rather small size. It has one rich galaxy cluster in the center, surrounded by filaments of galaxies and poor groups. The Local Group is located on the outskirts of the LS in a small filament extending from the Fornax Cluster to the Virgo Cluster. The Virgo Supercluster's volume is roughly 7,000 times that of the Local Group, or 100 billion times that of the Milky Way. Galaxy distribution The number density of galaxies in the LS falls off with the square of the distance from its center near the Virgo Cluster, suggesting that this cluster is not randomly located. Overall, the vast majority of the luminous galaxies (brighter than absolute magnitude −13) are concentrated in a small number of clouds (groups of galaxy clusters). Ninety-eight percent can be found in the following 11 clouds, given in decreasing order of number of luminous galaxies: Canes Venatici, Virgo Cluster, Virgo II (southern extension), Leo II, Virgo III, Crater (NGC 3672), Leo I, Leo Minor (NGC 2841), Draco (NGC 5907), Antlia (NGC 2997), and NGC 5643. Of the luminous galaxies located in the disk, one third are in the Virgo Cluster. The remainder are found in the Canes Venatici Cloud and Virgo II Cloud, plus the somewhat insignificant NGC 5643 Group.
The luminous galaxies in the halo are concentrated in a small number of clouds (94% in 7 clouds). This distribution indicates that "most of the volume of the supergalactic plane is a great void." A helpful analogy that matches the observed distribution is that of soap bubbles. Flattish clusters and superclusters are found at the intersection of bubbles, which are large, roughly spherical (on the order of 20–60 Mpc in diameter) voids in space. Long filamentary structures seem to predominate. An example of this is the Hydra–Centaurus Supercluster, the nearest supercluster to the Virgo Supercluster, which starts at a distance of roughly 30 Mpc and extends to 60 Mpc. Cosmology Large-scale dynamics Since the late 1980s it has been apparent that not only the Local Group, but all matter out to a distance of at least 50 Mpc is experiencing a bulk flow on the order of 600 km/s in the direction of the Norma Cluster (Abell 3627). Lynden-Bell et al. (1988) dubbed the cause of this the "Great Attractor". The Great Attractor is now understood to be the center of mass of an even larger structure of galaxy clusters, dubbed "Laniakea", which includes the Virgo Supercluster (including the Local Group) as well as the Hydra-Centaurus Supercluster, the Pavo-Indus Supercluster, and the Fornax Group. The Great Attractor, together with the entire supercluster, is itself moving toward the Shapley Supercluster, whose center of mass is known as the Shapley Attractor. Dark matter The LS has a total mass M ≈ 10^15 solar masses (M☉) and a total optical luminosity L ≈ 3 × 10^12 solar luminosities (L☉). This yields a mass-to-light ratio of about 300 times that of the solar ratio (M☉/L☉ = 1), a figure that is consistent with results obtained for other superclusters. By comparison, the mass-to-light ratio for the Milky Way is 63.8 assuming a solar absolute magnitude of 4.83, a Milky Way absolute magnitude of −20.9, and a Milky Way mass of about 1.25 × 10^12 M☉. These ratios are one of the main arguments in favor of the presence of large amounts of dark matter in the universe; if dark matter did not exist, much smaller mass-to-light ratios would be expected. Maps Diagrams See also Abell catalogue Large-scale structure of the universe List of Abell clusters Supercluster KBC Void References Further reading External links The Atlas of the Universe, a website created by astrophysicist Richard Powell that shows maps of our local universe on a number of different scales (similar to above maps). Galaxy superclusters Laniakea Supercluster
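The Milky Way figure quoted above can be checked with the standard magnitude-luminosity relation, L/L☉ = 10^(0.4(M☉ − M)). A minimal sketch of the arithmetic, using only the numbers stated in the text:

```python
# Check of the Milky Way mass-to-light ratio quoted above, using the
# standard relation L / L_sun = 10 ** (0.4 * (M_abs_sun - M_abs)).
M_ABS_SUN = 4.83        # solar absolute magnitude (value assumed above)
M_ABS_MW = -20.9        # Milky Way absolute magnitude (value assumed above)
MW_MASS_MSUN = 1.25e12  # Milky Way mass in solar masses (value assumed above)

mw_luminosity_lsun = 10 ** (0.4 * (M_ABS_SUN - M_ABS_MW))  # ~1.96e10 L_sun
print(MW_MASS_MSUN / mw_luminosity_lsun)                   # ~63.8, matching the text
```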
Virgo Supercluster
[ "Astronomy" ]
1,419
[ "Astronomical objects", "Galaxy superclusters" ]
177,512
https://en.wikipedia.org/wiki/Economic%20geography
Economic geography is the subfield of human geography that studies economic activity and factors affecting it. It can also be considered a subfield or method in economics. Economic geography takes a variety of approaches to many different topics, including the location of industries, economies of agglomeration (also known as "linkages"), transportation, international trade, development, real estate, gentrification, ethnic economies, gendered economies, core-periphery theory, the economics of urban form, the relationship between the environment and the economy (tying into a long history of geographers studying culture-environment interaction), and globalization. Theoretical background and influences There are diverse methodological approaches in the field of location theory. Neoclassical location theorists, following in the tradition of Alfred Weber, often concentrate on industrial location and employ quantitative methods. However, since the 1970s, two major reactions against neoclassical approaches have reshaped the discipline. One is Marxist political economy, stemming from the contributions of scholars like David Harvey, which offers a critical perspective on spatial economics. The other is the new economic geography, which considers social, cultural, and institutional factors alongside economic aspects in understanding spatial phenomena. Economists like Paul Krugman and Jeffrey Sachs have contributed extensively to the analysis of economic geography. Krugman, in particular, referred to his application of spatial thinking to international trade theory as the "new economic geography," which presents a competing perspective to a similarly named approach within the discipline of geography. This overlap in terminology can lead to confusion. As an alternative, some scholars have proposed using the term "geographical economics" to differentiate between the two approaches. History Early approaches to economic geography are found in the seven Chinese maps of the State of Qin, which date to the 4th century BC, and in the Greek geographer Strabo's Geographika, compiled almost 2000 years ago. As cartography developed, geographers illuminated many aspects used today in the field; maps created by different European powers described the resources likely to be found in American, African, and Asian territories. The earliest travel journals included descriptions of the native people, the climate, the landscape, and the productivity of various locations. These early accounts encouraged the development of transcontinental trade patterns and ushered in the era of mercantilism. Lindley M. Keasbey wrote in 1901 that no discipline of economic geography existed, with scholars either doing geography or economics. Keasbey argued for a discipline of economic geography, writing: "On the one hand, the economic activities of man are determined from the first by the phenomena of nature; and, on the other hand, the phenomena of nature are subsequently modified by the economic activities of man. Since this is the case, to start the deductions of economics, the inductions of geography are necessary; and to continue the inductions of geography, the deductions of economics are required. Logically, therefore, economics is impossible without geography, and geography is incomplete without economics." World War II contributed to the popularization of geographical knowledge generally, and post-war economic recovery and development contributed to the growth of economic geography as a discipline.
During environmental determinism's time of popularity, Ellsworth Huntington and his theory of climatic determinism, while later greatly criticized, notably influenced the field. Valuable contributions also came from location theorists such as Johann Heinrich von Thünen or Alfred Weber. Other influential theories include Walter Christaller's Central place theory and the theory of core and periphery. Fred K. Schaefer's article "Exceptionalism in geography: A Methodological Examination", published in the American journal Annals of the Association of American Geographers, and his critique of regionalism, made a large impact on the field: the article became a rallying point for the younger generation of economic geographers who were intent on reinventing the discipline as a science, and quantitative methods began to prevail in research. Well-known economic geographers of this period include William Garrison, Brian Berry, Waldo Tobler, Peter Haggett and William Bunge. Contemporary economic geographers tend to specialize in areas such as location theory and spatial analysis (with the help of geographic information systems), market research, geography of transportation, real estate price evaluation, regional and global development, planning, Internet geography, innovation, and social networks. Approaches to study As economic geography is a very broad discipline, with economic geographers using many different methodologies in the study of economic phenomena in the world, some distinct approaches to study have evolved over time: Theoretical economic geography focuses on building theories about spatial arrangement and distribution of economic activities. Regional economic geography examines the economic conditions of particular regions or countries of the world. It deals with economic regionalization as well as local economic development. Historical economic geography examines the history and development of spatial economic structure. Using historical data, it examines how centers of population and economic activity shift, what patterns of regional specialization and localization evolve over time and what factors explain these changes. Evolutionary economic geography adopts an evolutionary approach to economic geography. More specifically, Evolutionary Economic Geography uses concepts and ideas from evolutionary economics to understand the evolution of cities, regions, and other economic systems. Critical economic geography is an approach taken from the point of view of contemporary critical geography and its philosophy. Behavioral economic geography examines the cognitive processes underlying spatial reasoning, locational decision making, and behavior of firms and individuals. Economic geography is sometimes approached as a branch of anthropogeography that focuses on regional systems of human economic activity. An alternative description of different approaches to the study of human economic activity can be organized around spatiotemporal analysis, analysis of production/consumption of economic items, and analysis of economic flow. Spatiotemporal systems of analysis include economic activities of region, mixed social spaces, and development. Alternatively, analysis may focus on production, exchange, distribution, and consumption of items of economic activity. Allowing parameters of space-time and item to vary, a geographer may also examine material flow, commodity flow, population flow and information flow from different parts of the economic activity system.
Through analysis of flow and production, industrial areas, rural and urban residential areas, transportation sites, commercial service facilities and finance and other economic centers are linked together in an economic activity system. Branches Thematically, economic geography can be divided into these subdisciplines: Geography of agriculture It is traditionally considered the branch of economic geography that investigates those parts of the Earth's surface that are transformed by humans through primary sector activities. It thus focuses on structures of agricultural landscapes and asks what processes lead to these spatial patterns. While most research in this area concentrates on production rather than on consumption,[1] a distinction can be made between nomothetic (e.g. distribution of spatial agricultural patterns and processes) and idiographic research (e.g. human-environment interaction and the shaping of agricultural landscapes). The latter approach of agricultural geography is often applied within regional geography. Geography of industry Geography of international trade Geography of resources Geography of transport and communication Geography of finance These areas of study may overlap with other geographical sciences. Economists and economic geographers Generally, spatially interested economists study the effects of space on the economy. Geographers, on the other hand, are interested in the economic processes' impact on spatial structures. Moreover, economists and economic geographers differ in their methods in approaching spatial-economic problems in several ways. An economic geographer will often take a more holistic approach to the analysis of economic phenomena, which is to conceptualize a problem in terms of space, place, and scale as well as the overt economic problem that is being examined. The economist approach, according to some economic geographers, has the main drawback of homogenizing the economic world in ways economic geographers try to avoid. The New Economic Geography With the rise of the New Economy, economic inequalities are increasing spatially. The New Economy, generally characterized by globalization, increasing use of information and communications technology, the growth of knowledge goods, and feminization, has enabled economic geographers to study social and spatial divisions caused by the rising New Economy, including the emerging digital divide. The new economic geographies consist of primarily service-based sectors of the economy that use innovative technology, such as industries where people rely on computers and the internet. Within these is a switch from manufacturing-based economies to the digital economy. In these sectors, competition makes technological changes robust. These high technology sectors rely heavily on interpersonal relationships and trust, as developing things like software is very different from other kinds of industrial manufacturing: it requires intense levels of cooperation between many different people, as well as the use of tacit knowledge. As a result of cooperation becoming a necessity, there is a clustering in the high-tech new economy of many firms. Diane Perrons argues that in Anglo-American literature, the New Economy Geography consists of two distinct types. New Economic Geography 1 (NEG1) is characterized by sophisticated spatial modelling. It seeks to explain uneven development and the emergence of industrial clusters.
It does so through the exploration of linkages between centripetal and centrifugal forces, especially those of economies of scale. New Economic Geography 2 (NEG2) also seeks to explain the apparently paradoxical emergence of industrial clusters in a contemporary context; however, it emphasizes relational, social, and contextual aspects of economic behaviour, particularly the importance of tacit knowledge. The main difference between these two types is NEG2's emphasis on aspects of economic behaviour that NEG1 considers intangible. Both New Economic Geographies acknowledge transport costs, the importance of knowledge in a new economy, possible effects of externalities, and endogenous processes that generate increases in productivity. The two also share a focus on the firm as the most important unit and on growth rather than development of regions. As a result, the actual impact of clusters on a region is given far less attention, relative to the focus on clustering of related activities in a region. However, the focus on the firm as the main entity of significance hinders the discussion of New Economic Geography: it confines the discussion to a smaller scale and limits its treatment of national and global contexts. It also places limits on the nature of the firm's activities and their position within the global value chain. Further work done by Bjorn Asheim (2001) and Gernot Grabher (2002) challenges the idea of the firm through action-research approaches and mapping organizational forms and their linkages. In short, the focus on the firm in new economic geographies is undertheorized in NEG1 and undercontextualized in NEG2, which limits the discussion of its impact on spatial economic development. Spatial divisions within these arising New Economic geographies are apparent in the form of the digital divide, as a result of regions attracting talented workers instead of developing skills at a local level (see Creative Class for further reading). Despite increasing inter-connectivity through developing information communication technologies, the contemporary world is still defined through its widening social and spatial divisions, most of which are increasingly gendered. Danny Quah explains these spatial divisions through the characteristics of knowledge goods in the New Economy: goods defined by their infinite expansibility, weightlessness, and nonrivalry. Social divisions are expressed through new spatial segregation that illustrates spatial sorting by income, ethnicity, abilities, needs, and lifestyle preferences. Employment segregation is evidenced by the overrepresentation of women and ethnic minorities in lower-paid service sector jobs. These divisions in the new economy are much more difficult to overcome as a result of few clear pathways of progression to higher-skilled work. Influence of Geography on Economic History Geography has shaped economic history through its influence on settlement, the location of resources, and trade routes. Interactions between geographic characteristics and economic activity can be convoluted, because those characteristics have been a primary driver of the emergence and decline of civilizations. Transportation and Trade Rivers and waterways have long been critical transport channels. Egypt, one of the first great civilizations, benefited from the Nile for both the transport of goods and farming.
The Yangtze River similarly fostered economic unification across China, and the same remains true today of rivers such as the Mississippi, which still carry freight efficiently. Geographical barriers such as deserts and mountains, by contrast, make trade challenging. Trans-Saharan trade routes depended strictly on oases, while the Himalayas isolated regions such as Tibet. Well-developed mountain passes, however, such as the Khyber Pass, have played an essential commercial role. Agriculture and the Climate Climate also plays an important role in determining the pace of economic development: agricultural productivity has generally been higher in regions with moderate weather. The Mediterranean climate, for instance, supports employment in southern Europe through the production and sale of olive oil and wines. Arid regions, by contrast, have had to innovate in the use of water as a resource. Historical Background Historically, geography has influenced whether parts of the world were capable of supporting civilization at a given point in time. The earliest farm-based communities developed in the Fertile Crescent, and colonial powers during the period of exploration took advantage of geographical opportunities, with sea channels connecting continents for the acquisition of resources, as in the Atlantic slave trade. Contemporary Consequences Geographical barriers continue to shape economic outcomes in the present. Maritime trade benefits countries bordering the ocean, while transport costs are comparatively higher in landlocked countries. Even as technology reduces the constraints geography imposes, an understanding of geography's far-reaching implications remains useful for weighing future economic plans. Citations: [1] https://study.com/academy/lesson/how-geographical-features-impact-economic-activity.html [2] https://www.bb.org.bd/pub/research/workingpaper/wp1615.pdf [3] https://www.oxfordbibliographies.com/display/document/obo-9780199874002/obo-9780199874002-0146.xml [4] https://journals.sagepub.com/doi/10.1177/016001799761012334 [5] https://www.hks.harvard.edu/centers/cid/publications/faculty-working-papers/geography-and-economic-development [6] https://shs.cairn.info/revue-recherches-economiques-de-louvain-2011-2-page-141?lang=fr [7] https://www.researchgate.net/publication/233996238_Geography_and_Economic_Development [8] https://www.jstor.org/stable/857 See also Business cluster Creative class Cultural geography Cultural economics Cultural psychology Environmental determinism Development geography Gravity model of trade Geography and wealth Location theory Land (economics) New Economy Regional science Retail geography Rural economics Spatial analysis Tobler's first law of geography Tobler's second law of geography Urban economics Weber problem Economic Geography (journal) - founded and published quarterly at Clark University since 1925 Journal of Economic Geography - published by Oxford University Press since 2001 References Further reading Barnes, T. J., Peck, J., Sheppard, E., and Adam Tickell (eds). (2003). Reading Economic Geography. Oxford: Blackwell. Combes, P.
P., Mayer, T., Thisse, J.-F. (2008). Economic Geography: The Integration of Regions and Nations. Princeton: Princeton University Press. Description. Scroll down to chapter-preview links. Dicken, P. (2003). Global Shift: Reshaping the Global Economic Map in the 21st Century. New York: Guilford. Lee, R. and Wills, J. (1997). Geographies of Economies. London: Arnold. Massey, D. (1984). Spatial Divisions of Labour, Social Structures and the Structure of Production, MacMillan, London. Peck, J. (1996). Work-place: The Social Regulation of Labor Markets. New York: Guilford. Peck, J. (2001). Workfare States. New York: Guilford. Tóth, G., Kincses, Á., Nagy, Z. (2014). European Spatial Structure. LAP LAMBERT Academic Publishing. External links Scientific journals Tijdschrift voor economische en sociale geografie (TESG) - Published by The Royal Dutch Geographical Society (KNAG) since 1948. Zeitschrift für Wirtschaftsgeographie - The German Journal of Economic Geography, published since 1956. Other EconGeo Network Social and Spatial Inequalities Regional economics Cultural economics Cultural geography Cross-cultural psychology Human geography
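One canonical formal tool from the spatial-modelling tradition discussed above is the gravity model of trade (listed under "See also"): predicted flows between two regions scale with the product of their economic masses and fall off with distance. A minimal sketch, with purely hypothetical numbers:

```python
def gravity_flow(gdp_i: float, gdp_j: float, distance: float,
                 g: float = 1.0, beta: float = 1.0) -> float:
    """Gravity model of trade: T_ij = g * GDP_i * GDP_j / distance**beta.

    In practice g and beta are fitted constants; the defaults here are
    illustrative placeholders, not empirical estimates.
    """
    return g * gdp_i * gdp_j / distance ** beta

# Hypothetical region pairs: predicted trade rises with economic size
# and falls with distance between the regions.
near = gravity_flow(3.0e12, 1.5e12, distance=500.0)
far = gravity_flow(3.0e12, 1.5e12, distance=5000.0)
print(near / far)  # 10x the distance -> 1/10th the predicted flow (beta = 1)
```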
Economic geography
[ "Environmental_science" ]
3,547
[ "Environmental social science", "Human geography" ]
177,515
https://en.wikipedia.org/wiki/Molecular%20engineering
Molecular engineering is an emerging field of study concerned with the design and testing of molecular properties, behavior and interactions in order to assemble better materials, systems, and processes for specific functions. This approach, in which observable properties of a macroscopic system are influenced by direct alteration of a molecular structure, falls into the broader category of “bottom-up” design. Molecular engineering is highly interdisciplinary by nature, encompassing aspects of chemical engineering, materials science, bioengineering, electrical engineering, physics, mechanical engineering, and chemistry. There is also considerable overlap with nanotechnology, in that both are concerned with the behavior of materials on the scale of nanometers or smaller. Given the highly fundamental nature of molecular interactions, there are a plethora of potential application areas, limited perhaps only by one's imagination and the laws of physics. However, some of the early successes of molecular engineering have come in the fields of immunotherapy, synthetic biology, and printable electronics (see molecular engineering applications). Molecular engineering is a dynamic and evolving field with complex target problems; breakthroughs require sophisticated and creative engineers who are conversant across disciplines. A rational engineering methodology that is based on molecular principles is in contrast to the widespread trial-and-error approaches common throughout engineering disciplines. Rather than relying on well-described but poorly-understood empirical correlations between the makeup of a system and its properties, a molecular design approach seeks to manipulate system properties directly using an understanding of their chemical and physical origins. This often gives rise to fundamentally new materials and systems, which are required to address outstanding needs in numerous fields, from energy to healthcare to electronics. Additionally, with the increased sophistication of technology, trial-and-error approaches are often costly and difficult, as it may be difficult to account for all relevant dependencies among variables in a complex system. Molecular engineering efforts may include computational tools, experimental methods, or a combination of both. History Molecular engineering was first mentioned in the research literature in 1956 by Arthur R. von Hippel, who defined it as "… a new mode of thinking about engineering problems. Instead of taking prefabricated materials and trying to devise engineering applications consistent with their macroscopic properties, one builds materials from their atoms and molecules for the purpose at hand." This concept was echoed in Richard Feynman's seminal 1959 lecture There's Plenty of Room at the Bottom, which is widely regarded as giving birth to some of the fundamental ideas of the field of nanotechnology. In spite of the early introduction of these concepts, it was not until the mid-1980s with the publication of Engines of Creation: The Coming Era of Nanotechnology by Drexler that the modern concepts of nano and molecular-scale science began to grow in the public consciousness. The discovery of electrically conductive properties in polyacetylene by Alan J. Heeger in 1977 effectively opened the field of organic electronics, which has proved foundational for many molecular engineering efforts. Design and optimization of these materials has led to a number of innovations including organic light-emitting diodes and flexible solar cells. 
Applications Molecular design has been an important element of many disciplines in academia, including bioengineering, chemical engineering, electrical engineering, materials science, mechanical engineering and chemistry. However, one of the ongoing challenges is in bringing together the critical mass of manpower amongst disciplines to span the realm from design theory to materials production, and from device design to product development. Thus, while the concept of rational engineering of technology from the bottom-up is not new, it is still far from being widely translated into R&D efforts. Molecular engineering is used in many industries. Some applications of technologies where molecular engineering plays a critical role: Consumer Products Antibiotic surfaces (e.g. incorporation of silver nanoparticles or antibacterial peptides into coatings to prevent microbial infection) Cosmetics (e.g. rheological modification with small molecules and surfactants in shampoo) Cleaning products (e.g. nanosilver in laundry detergent) Consumer electronics (e.g. organic light-emitting diode displays (OLED)) Electrochromic windows (e.g. windows in the Boeing 787 Dreamliner) Zero emission vehicles (e.g. advanced fuel cells/batteries) Self-cleaning surfaces (e.g. super hydrophobic surface coatings) Energy Harvesting and Storage Flow batteries - Synthesizing molecules for high-energy density electrolytes and highly-selective membranes in grid-scale energy storage systems. Lithium-ion batteries - Creating new molecules for use as electrode binders, electrolytes, electrolyte additives, or even for energy storage directly in order to improve energy density (using materials such as graphene, silicon nanorods, and lithium metal), power density, cycle life, and safety. Solar cells - Developing new materials for more efficient and cost-effective solar cells including organic, quantum dot or perovskite-based photovoltaics. Photocatalytic water splitting - Enhancing the production of hydrogen fuel using solar energy and advanced catalytic materials such as semiconductor nanoparticles Environmental Engineering Water desalination (e.g. new membranes for highly-efficient low-cost ion removal) Soil remediation (e.g. catalytic nanoparticles that accelerate the degradation of long-lived soil contaminants such as chlorinated organic compounds) Carbon sequestration (e.g. new materials for CO2 adsorption) Immunotherapy Peptide-based vaccines (e.g. amphiphilic peptide macromolecular assemblies induce a robust immune response) Peptide-containing biopharmaceuticals (e.g. nanoparticles, liposomes, polyelectrolyte micelles as delivery vehicles) Synthetic Biology CRISPR - Faster and more efficient gene editing technique Gene delivery/gene therapy - Designing molecules to deliver modified or new genes into cells of live organisms to cure genetic disorders Metabolic engineering - Modifying metabolism of organisms to optimize production of chemicals (e.g. synthetic genomics) Protein engineering - Altering structure of existing proteins to enable specific new functions, or the creation of fully artificial proteins DNA-functionalized materials - 3D assemblies of DNA-conjugated nanoparticle lattices Techniques and instruments used Molecular engineers utilize sophisticated tools and instruments to make and analyze the interactions of molecules and the surfaces of materials at the molecular and nano-scale. 
The complexity of molecules being introduced at the surface is increasing, and the techniques used to analyze surface characteristics at the molecular level are ever-changing and improving. Meanwhile, advancements in high performance computing have greatly expanded the use of computer simulation in the study of molecular scale systems. Computational and Theoretical Approaches Computational chemistry High performance computing Molecular dynamics Molecular modeling Statistical mechanics Theoretical chemistry Topology Microscopy Atomic Force Microscopy (AFM) Scanning Electron Microscopy (SEM) Transmission Electron Microscopy (TEM) Molecular Characterization Dynamic light scattering (DLS) Matrix-assisted laser desorption/ionization (MALDI) spectroscopy Nuclear magnetic resonance (NMR) spectroscopy Size exclusion chromatography (SEC) Spectroscopy Ellipsometry 2D X-Ray Diffraction (XRD) Raman Spectroscopy/Microscopy Surface Science Glow Discharge Optical Emission Spectrometry Time of Flight-Secondary Ion Mass Spectrometry (ToF-SIMS) X-Ray Photoelectron Spectroscopy (XPS) Synthetic Methods DNA synthesis Nanoparticle synthesis Organic synthesis Peptide synthesis Polymer synthesis Other Tools Focused Ion Beam (FIB) Profilometer UV Photoelectron Spectroscopy (UPS) Vibrational Sum Frequency Generation Research / Education At least three universities offer graduate degrees dedicated to molecular engineering: the University of Chicago, the University of Washington, and Kyoto University. These programs are interdisciplinary institutes with faculty from several research areas. The academic journal Molecular Systems Design & Engineering publishes research from a wide variety of subject areas that demonstrates "a molecular design or optimisation strategy targeting specific systems functionality and performance." See also General topics Biological engineering Biomolecular engineering Chemical engineering Chemistry Electrical engineering Materials science and engineering Mechanical engineering Molecular design software Molecular electronics Molecular modeling Molecular nanotechnology Nanotechnology References Nanotechnology Engineering disciplines
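Molecular dynamics, listed among the computational approaches above, is easy to illustrate at toy scale. The sketch below integrates Lennard-Jones particles with the velocity-Verlet scheme in reduced units; it is a didactic sketch of the method, not production simulation code (real studies use packages such as LAMMPS or GROMACS):

```python
import numpy as np

# Minimal molecular dynamics sketch: velocity-Verlet integration of particles
# interacting via a Lennard-Jones potential, in reduced units (epsilon = sigma
# = mass = 1).

def lj_forces(pos):
    """Pairwise Lennard-Jones forces for an (N, 3) array of positions."""
    n = len(pos)
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r_vec = pos[i] - pos[j]
            r2 = np.dot(r_vec, r_vec)
            inv_r6 = 1.0 / r2**3
            # Force magnitude divided by r, from U(r) = 4*(r**-12 - r**-6)
            f_over_r = 24.0 * inv_r6 * (2.0 * inv_r6 - 1.0) / r2
            forces[i] += f_over_r * r_vec
            forces[j] -= f_over_r * r_vec
    return forces

def velocity_verlet(pos, vel, dt, steps):
    """Advance positions and velocities by `steps` velocity-Verlet steps."""
    f = lj_forces(pos)
    for _ in range(steps):
        vel += 0.5 * dt * f   # first half kick
        pos += dt * vel       # drift
        f = lj_forces(pos)
        vel += 0.5 * dt * f   # second half kick
    return pos, vel

# Two particles slightly displaced from the LJ minimum (r = 2**(1/6) ~ 1.12):
positions = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])
velocities = np.zeros_like(positions)
positions, velocities = velocity_verlet(positions, velocities, dt=0.001, steps=1000)
```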
Molecular engineering
[ "Materials_science", "Engineering" ]
1,683
[ "Nanotechnology", "Materials science", "nan" ]
177,517
https://en.wikipedia.org/wiki/EAA%20AirVenture%20Oshkosh
EAA AirVenture Oshkosh (formerly the EAA Annual Convention and Fly-In), or just Oshkosh, is an annual air show and gathering of aviation enthusiasts held each summer at the Wittman Regional Airport and adjacent Pioneer Airport in Oshkosh, Wisconsin, United States. The southern part of the show grounds, as well as "Camp Scholler", are located in the town of Nekimi and a base for seaplanes is located on Lake Winnebago in the town of Black Wolf. The airshow is arranged by the Experimental Aircraft Association (EAA), an international general aviation organization based in Oshkosh, and is the largest of its kind in the world. The show lasts a week, usually beginning on the Monday of the last full week in July. During the gathering, the airport's control tower, frequency 118.5, is the busiest in the world. History EAA was founded in Hales Corners, Wisconsin in 1953 by aircraft designer and military aviation veteran Paul Poberezny, who originally started the organization in the basement of his home for builders and restorers of recreational aircraft. Although homebuilding is still a large part of the organization's activities, it has grown to include almost every aspect of recreational, commercial and military aviation, as well as aeronautics and astronautics. The first EAA fly-in was held in September 1953 at what is now Timmerman Field as a small part of the Milwaukee Air Pageant; fewer than 150 people registered as visitors the first year, and only a handful of airplanes attended the event. In 1959, the EAA fly-in grew too large for the Air Pageant and moved to Rockford, Illinois. In 1970, when it outgrew its facilities at the Rockford airport (now Chicago Rockford International Airport), it moved to Oshkosh, Wisconsin. Much of the convention's growth and prominence on the world stage is credited to founder Paul Poberezny's son, aerobatic world champion and longtime EAA president Tom Poberezny, who became chairman of the event in 1977. For many years, its official name was The EAA Annual Convention and Fly-In. In 1998, the name was changed to AirVenture Oshkosh, but many regular attendees still refer to it as the Oshkosh Airshow or just Oshkosh. For many years, access to the flight line was restricted to EAA members. In 1997, the fee structure for the show was changed, allowing all visitors access to the entire grounds. EAA AirVenture holds nearly 1,000 forums and workshops and hosts many vendors offering a variety of aircraft supplies and general merchandise, along with name-brand sponsors such as Piper, Cessna, Cirrus, and many others. The 2020 AirVenture convention and air show was canceled due to the COVID-19 pandemic. Historical aircraft debuts AirVenture has hosted the debut of numerous revolutionary designs. Richard VanGrunsven debuted his Van's RV-3 at the 1972 convention, a homebuilt that set new standards for aircraft performance. VanGrunsven would eventually go on to build more homebuilts than anyone else in the world, exceeding the annual production of all commercial general aviation companies combined. In 1975, aircraft designer Burt Rutan introduced his VariEze canard aircraft at Oshkosh, pioneering the use of moldless glass-reinforced plastic construction in homebuilts, a technique that many aircraft went on to adopt in the ensuing years, including composite commercial airliners.
At the 1987 AirVenture, Cirrus Aircraft's founders, the Klapmeier brothers, unveiled the Cirrus VK-30 kit aircraft, which later led to the creation of the successful SR20 and SR22, the first designs to incorporate all-composite fiberglass construction, glass-panel cockpits and airframe ballistic parachutes for use in manufactured light aircraft. Other past notable designs introduced at AirVenture include Frank Christensen's Christen Eagle II aerobatic kit biplane in 1978, Tom Hamilton's Glasair 1 in 1980, and Lance Neibauer's Lancair 200 in 1985. At the 2018 AirVenture, Jack Bally introduced his Bally Bomber B-17, a one-third scale, single seat homebuilt intended as a replica of the Boeing B-17 Flying Fortress, and Mike Patey introduced DRACO, a highly modified PZL-104 Wilga made for STOL and bush flying. Attendance The EAA estimated the attendance in 2021 at 608,000 people. In 2018, 2,714 international visitors registered from 87 nations. There were approximately 10,000 aircraft, 2,979 show planes, and 976 media representatives on-site from six continents, along with 867 commercial exhibitors. In the past, attendance at the event was tabulated on a daily basis rather than on an individual basis. It is unclear whether this practice still exists at EAA AirVenture. For example, in 2006 the Oshkosh Northwestern reported that attendance was estimated at 625,000 by the EAA. The estimate was for total attendees each day, so one person attending 7 days would count as 7 attendees. The paper estimated that between 200,000 and 300,000 individuals attended the event. The large number of aircraft arrivals and departures during the fly-in week makes the Wittman Field FAA control tower the busiest in the world for that week in number of movements. To accommodate the huge flow of aircraft around the airport and the nearby airspace, a special NOTAM is published each year, choreographing the normal and emergency procedures to follow. More than 4,000 volunteers contribute approximately 250,000 hours before, during and after the event. Economic impact The EAA AirVenture fly-in has a large economic impact on the Oshkosh area as well as the state of Wisconsin. In 1982, the Milwaukee Journal Sentinel reported that the fly-in had a large economic benefit to the Fox Valley region. At the time, EAA estimated the benefit to be around $30 million and the Oshkosh Convention and Tourism bureau estimated it to be lower, at $21 million. In 1989, the Oshkosh Chamber of Commerce said that Winnebago County had a $47 million benefit from the fly-in. Additionally, a 1987 UW-Oshkosh study reported a $65 million benefit to the entire state of Wisconsin. In 2008, a UW-Oshkosh Center for Community Partnerships study showed a $110 million economic impact for the Oshkosh area. Of that, $84 million was direct impact with a $26 million multiplier from secondary spending. Additionally, the fly-in provided 1,700 jobs and $39 million in labor income for Winnebago, Outagamie, and Fond du Lac counties. In 2017, it was estimated that the event had an over $170 million economic impact on the surrounding area. Air-traffic operation In 1961, the Rockford EAA airshow had 10,000 aircraft movements. In 1971, the EAA airshow at Oshkosh brought in 600 planes and 31,653 movements. Today AirVenture brings in more than 10,000 airplanes. Special air traffic procedures are used to ensure safe, coordinated operations. For example, in 2014 the special flight procedures NOTAM was 32 pages long.
FAA air traffic staff, including controllers, supervisors, and managers, compete throughout the FAA's 17-state Central Terminal Service Area to work the event. In 2008, 172 air traffic professionals representing 56 facilities volunteered to staff the facilities at Oshkosh (OSH), Fond du Lac (FLD), and Fisk. They wear bright pink shirts to stand out in the crowd. Due to the budget sequestration in 2013, the Federal Aviation Administration announced that it was not able to send resources to support AirVenture. Rather than cancelling the event, the Experimental Aircraft Association was forced to sign a $447,000 contract to repay the government for FAA resources during AirVenture. EAA filed a petition in federal court arguing that the FAA could not withhold services without specific Congressional action. However, in March 2014 EAA signed a settlement in which it agreed to pay the 2013 costs, along with a further agreement that guaranteed FAA participation for another nine years. The agreement required the association to reimburse the government for AirVenture-specific costs that had been provided at government expense in the years prior to 2013. Host airports Wittman Regional Airport (OSH): Airplanes Pioneer Airport (WS17): Helicopters and airships Ultralight Fun Fly Zone: Ultralights, powered parachutes, weight shift trikes, gyroplanes, homebuilt rotorcraft, and hot air balloons Vette/Blust Seaplane Base (96WI): Seaplanes Fond du Lac County Airport (FLD): Diversion airport, additional parking Appleton International Airport (ATW): Diversion airport, additional parking, U.S. Customs and Border Protection processing Austin Straubel International Airport (GRB): Diversion airport, additional parking Planeacres Airport (2WN7): Emergency diversion airport (Fisk approach) Technical operations Several days prior to the event, members of the FAA's Technical Operation team from around the Central Service Area arrive in Oshkosh to set up the temporary communication facilities (mobile communication platforms, Fisk VFR approach control and Fond du Lac (FLD) tower). These technicians maintain the facilities during the event and tear down and store the equipment after AirVenture ends. Volunteering EAA AirVenture relies heavily on volunteers who arrive in the weeks leading up to the air show. The tasks they perform range from parking cars and airplanes, to painting buildings, to helping set up and tear down concerts and shows presented by the EAA. Long-time volunteers receive free meals, tee-shirts, embroidered patches, and free admission into the EAA AirVenture event. National Blue Beret National Blue Beret (NBB) is a National Cadet Special Activity in the Civil Air Patrol. The event is two weeks long and is set up so that the second week will overlap with the AirVenture airshow. Participants are Civil Air Patrol cadet and senior members who must go through a competitive selection process in order to attend the event. Participants help conduct event operations, including flight marshaling, crowd control, and emergency services.
See also EAA Aviation Museum EAA Biplane Jack Pelton Project Schoolflight Sport Aviation magazine Steve Wittman Sun 'n Fun Tannkosh Young Eagles References Further reading External links EAA AirVenture Oshkosh Experimental Aircraft Association "#OSH18 - EAA AirVenture at Oshkosh 2018" (YouTube video) - overview shows the scale of the event and a variety of aircraft & places Air shows in the United States Wisconsin culture Oshkosh, Wisconsin Tourist attractions in Winnebago County, Wisconsin Recurring events established in 1953 Experimental Aircraft Association Aviation in Wisconsin Events in Wisconsin Summer events in the United States 1953 establishments in Wisconsin
EAA AirVenture Oshkosh
[ "Engineering" ]
2,314
[ "Experimental Aircraft Association", "Aerospace engineering organizations" ]
177,533
https://en.wikipedia.org/wiki/STS-107
STS-107 was the 113th flight of the Space Shuttle program, and the 28th and final flight of Space Shuttle Columbia. The mission ended on February 1, 2003, with the Space Shuttle Columbia disaster, which killed all seven crew members and destroyed the orbiter. It was the 88th mission flown after the Challenger disaster. The flight launched from Kennedy Space Center in Florida on January 16, 2003. It spent 15 days, 22 hours, 20 minutes, and 32 seconds in orbit. The crew conducted a multitude of international scientific experiments. The disaster occurred during reentry while the orbiter was over Texas. Immediately after the disaster, NASA convened the Columbia Accident Investigation Board to determine the cause of the disintegration. The failure was determined to have been caused by a piece of foam that broke off during launch and damaged the thermal protection system (reinforced carbon-carbon panels and thermal protection tiles) on the leading edge of the orbiter's left wing. During re-entry the damaged wing slowly overheated and came apart, eventually leading to loss of control and disintegration of the vehicle. The cockpit window frame is now exhibited in a memorial inside the Space Shuttle Atlantis Pavilion at the Kennedy Space Center. The damage to the thermal protection system on the wing was similar to that sustained by Atlantis in 1988 during STS-27, the second mission after the Space Shuttle Challenger disaster. However, the damage on STS-27 occurred at a spot that had more robust metal (a thin steel plate near the landing gear), and that mission survived the re-entry. Crew Crew seat assignments Mission highlights STS-107 carried the SPACEHAB Research Double Module (RDM) on its inaugural flight, the Freestar experiment (mounted on a Hitchhiker Program rack), and the Extended Duration Orbiter pallet. SPACEHAB was first flown on STS-57. During the mission, video taken to study atmospheric dust may have captured a new atmospheric phenomenon, dubbed a "TIGER" (Transient Ionospheric Glow Emission in Red). On board Columbia was a copy of a drawing by Petr Ginz, the editor-in-chief of the magazine Vedem, who depicted what he imagined the Earth looked like from the Moon when he was a 14-year-old prisoner in the Terezín concentration camp. The copy was in the possession of Ilan Ramon and was lost in the disintegration. Ramon also traveled with a dollar bill received from the Lubavitcher Rebbe. An Australian experiment, created by students from Glen Waverley Secondary College, was designed to test the effect of zero gravity on the web formation of the Australian garden orb weaver spider. Major experiments The following are examples of experiments and investigations flown on the mission. In SPACEHAB RDM: 9 commercial payloads with 21 investigations; 4 payloads for the European Space Agency with 14 investigations; 1 payload for ISS Risk Mitigation; 18 payloads for NASA's Office of Biological and Physical Research (OBPR) with 23 investigations. In the payload bay attached to RDM: Combined Two-Phase Loop Experiment (COM2PLEX); Miniature Satellite Threat Reporting System (MSTRS); Star Navigation (STARNAV).
FREESTAR Critical Viscosity of Xenon-2 (CVX-2); Space Experiment Module (SEM-14); Mediterranean Israeli Dust Experiment (MEIDEX); Low Power Transceiver (LPT); Solar Constant Experiment-3 (SOLCON-3); Shuttle Ozone Limb Sounding Experiment (SOLSE-2); Additional payloads Shuttle Ionospheric Modification with Pulsed Local Exhaust Experiment (SIMPLEX); Ram Burn Observation (RAMBO). Because much of the data was transmitted during the mission, there was still a large return on the mission objectives even though Columbia was lost on re-entry. NASA estimated that 30% of the total science data was saved and collected through telemetry back to ground stations. A further 5–10% of the data was recovered from samples and hard drives found intact on the ground after the Space Shuttle Columbia disaster, raising the total share of saved experiment data from 30% to 35–40%. About five or six Columbia payloads encompassing many experiments were successfully recovered in the debris field. Scientists and engineers were able to recover 99% of the data for one of the six FREESTAR experiments, Critical Viscosity of Xenon-2 (CVX-2), which flew unpressurized in the payload bay, after the viscometer and its hard drive were found damaged but largely intact in the debris field in Texas. NASA recovered a commercial payload, Commercial Instrumentation Technology Associates (ITA) Biomedical Experiments-2 (CIBX-2), and ITA was able to raise the total data saved for this payload from 0% to 50%. The payload studied treatments for cancer, and its micro-encapsulation experiment was recovered with the samples fully intact, raising that experiment's data return from 0% to 90%. In this same payload were numerous crystal-forming experiments by hundreds of elementary and middle school students from all across the United States. Remarkably, most of their experiments were found intact in CIBX-2, raising their data return from 0% to 100%. The BRIC-14 (moss growth experiment) and BRIC-60 (Caenorhabditis elegans roundworm experiment) samples were found intact in the debris field in east Texas. 80–87% of these live organisms survived the catastrophe. The moss and roundworm experiments could not fulfill their original primary mission, since the samples were not available in their original state immediately after landing (they were discovered many months after the accident), but they proved greatly valuable to the field of astrobiology and helped form new theories about microorganisms surviving long trips through outer space while traveling on meteorites or asteroids. Re-entry Columbia began re-entry as planned, but the heat shield was compromised by damage sustained during the ascent. The heat of re-entry was free to spread into the damaged portion of the orbiter, ultimately causing its disintegration and the deaths of all seven astronauts. The accident triggered a 7-month investigation and a search for debris, and over 85,000 pieces were collected throughout the initial investigation. This amounted to roughly 38 percent of the orbiter vehicle. Insignia The mission insignia is the only patch of the shuttle program shaped entirely in the orbiter's outline. The central element of the patch is the microgravity symbol, μg, flowing into the rays of the astronaut symbol. The mission inclination is portrayed by the 39-degree angle of the astronaut symbol to the Earth's horizon.
The sunrise is representative of the numerous experiments that are the dawn of a new era for continued microgravity research on the International Space Station and beyond. The breadth of science and the exploration of space are illustrated by the Earth and stars. The constellation Columba (the dove) was chosen to symbolize peace on Earth and the Space Shuttle Columbia. The seven stars also represent the mission crew members and honor the original astronauts who paved the way to make research in space possible. Six stars have five points; the seventh has six points, like the Star of David, symbolizing the Israeli Space Agency's contributions to the mission. An Israeli flag is adjacent to the name of Payload Specialist Ramon, who was the first Israeli in space. The crew insignia or "patch" design was initiated by crew members Dr. Laurel Clark and Dr. Kalpana Chawla. First-time crew member Clark provided most of the design concepts, since Chawla had led the design of the insignia for STS-87, her own first flight. Clark also pointed out that the dove of the constellation Columba is mythologically connected to the Argonauts, the explorers said to have released it. Wake-up calls Throughout the shuttle program, sleeping astronauts were often awakened each morning by songs and short pieces of music chosen by their families, friends, and Mission Control, a tradition dating back to the Gemini and Apollo programs. Although the crew of STS-107 worked around the clock in "red" and "blue" shifts, each shift was still awakened with a wake-up call; the only other two-shift shuttle mission to do so was STS-99. Gallery See also List of Space Shuttle missions Outline of space science Notes References Literature External links NASA's Space Shuttle Columbia and Her Crew NASA STS-107 Crew Memorial web page NASA's STS-107 Space Research Web Site Spaceflight Now: STS-107 Mission Report Press Kit Article describing experiments which survived the disaster Article: Astronaut Laurel Clark from Racine, WI Status reports Detailed NASA status reports for each day of the mission.
https://en.wikipedia.org/wiki/Space%20Shuttle%20Columbia%20disaster
On Saturday, February 1, 2003, Space Shuttle Columbia disintegrated as it re-entered the atmosphere over Texas and Louisiana, killing all seven astronauts on board. It was the second Space Shuttle mission to end in disaster, after the loss of Challenger and crew in 1986. The mission, designated STS-107, was the twenty-eighth flight for the orbiter, the 113th flight of the Space Shuttle fleet and the 88th after the Challenger disaster. It was dedicated to research in various fields, mainly on board the SpaceHab module inside the shuttle's payload bay. During launch, a piece of the insulating foam broke off from the Space Shuttle external tank and struck the thermal protection system tiles on the orbiter's left wing. Similar foam shedding had occurred during previous Space Shuttle launches, causing damage that ranged from minor to near-catastrophic, but some engineers suspected that the damage to Columbia was more serious. Before reentry, NASA managers limited the investigation, reasoning that the crew could not have fixed the problem if it had been confirmed. When Columbia reentered the atmosphere of Earth, the damage allowed hot atmospheric gases to penetrate the heat shield and destroy the internal wing structure, which caused the orbiter to become unstable and break apart. After the disaster, Space Shuttle flight operations were suspended for more than two years, as they had been after the Challenger disaster. Construction of the International Space Station (ISS) was paused until flights resumed in July 2005 with STS-114. NASA made several technical and organizational changes to subsequent missions, including adding an on-orbit inspection to determine how well the orbiter's thermal protection system (TPS) had endured the ascent, and keeping designated rescue missions ready in case irreparable damage was found. Except for one mission to repair the Hubble Space Telescope, subsequent Space Shuttle missions were flown only to the ISS to allow the crew to use it as a haven if damage to the orbiter prevented safe reentry. The remaining four orbiters were retired after the building of the ISS was completed. Background Space Shuttle The Space Shuttle was a partially reusable spacecraft operated by the U.S. National Aeronautics and Space Administration (NASA). It flew in space for the first time in April 1981, and was used to conduct in-orbit research and deploy commercial, military, and scientific payloads. At launch, it consisted of the orbiter, which contained the crew and payload, the external tank (ET), and the two solid rocket boosters (SRBs). The orbiter was a reusable, winged vehicle that launched vertically and landed as a glider. Five operational orbiters were built during the Space Shuttle program; Columbia was the first space-rated orbiter constructed, following the atmospheric test vehicle Enterprise. The orbiter contained the crew compartment, where the crew predominantly lived and worked throughout a mission. Three Space Shuttle main engines (SSMEs) were mounted at the aft end of the orbiter and provided thrust during launch. Once in space, the crew maneuvered using the two smaller, aft-mounted Orbital Maneuvering System (OMS) engines. The orbiter was protected from heat during reentry by the thermal protection system (TPS), a thermal-soak protective layer around the orbiter. In contrast with previous US spacecraft, which had used ablative heat shields, the reusability of the orbiter required a multi-use heat shield.
During reentry, the TPS experienced temperatures of up to about 1,650 °C (3,000 °F), but had to keep the orbiter vehicle's aluminum skin temperature below roughly 180 °C (350 °F). The TPS primarily consisted of four sub-systems. The nose cone and leading edges of the wings experienced temperatures above 1,260 °C (2,300 °F), and were protected by the composite material reinforced carbon–carbon (RCC). Thicker RCC was developed and installed in 1998 to prevent damage from micrometeoroid and orbital debris. The entire underside of the orbiter vehicle, as well as the other hottest surfaces, was protected with black high-temperature reusable surface insulation. Areas on the upper parts of the orbiter vehicle were covered with white low-temperature reusable surface insulation, which provided protection at temperatures below about 650 °C (1,200 °F). The payload bay doors and parts of the upper wing surfaces were covered with reusable felt surface insulation, as the temperature there remained below about 370 °C (700 °F). Two solid rocket boosters (SRBs) were connected to the ET, and burned for the first two minutes of flight. The SRBs separated from the ET once they had expended their fuel and fell into the Atlantic Ocean under parachutes. NASA retrieval teams recovered the SRBs and returned them to the Kennedy Space Center (KSC), where they were disassembled and their components were reused on future flights. When the Space Shuttle launched, the orbiter and SRBs were connected to the ET, which held the fuel for the SSMEs. The ET consisted of a tank for liquid hydrogen (LH2), stored at −253 °C (−423 °F), and a smaller tank for liquid oxygen (LOX), stored at −183 °C (−297 °F). It was covered in insulating foam to keep the liquids cold and prevent ice forming on the tank's exterior. The orbiter connected to the ET via two umbilicals near its bottom and a bipod near its top section. After its fuel had been expended, the ET separated from the orbiter and reentered the atmosphere, where it would break apart and its pieces would land in the Indian or Pacific Ocean. Debris strike concerns During the design process of the Space Shuttle, a requirement of the ET was that it would not release any debris that could potentially damage the orbiter and its TPS. The integrity of the TPS components was necessary for the survival of the crew during reentry, and the tiles and panels were only built to withstand relatively minor impacts. On STS-1, the first flight of the Space Shuttle, the orbiter Columbia was damaged by a foam strike during its launch. Foam strikes occurred regularly during Space Shuttle launches: of the 79 missions with available launch imagery, 65 showed foam strikes. The bipod connected the ET near its top to the front underside of the orbiter via two struts, with a ramp at the tank end of each strut; the ramps were covered in foam to prevent ice from forming that could damage the orbiter. The foam on each bipod ramp was carved by hand from the original foam application. Bipod ramp foam from the left strut had been observed falling off the ET on six flights prior to STS-107, and had created some of the largest foam strikes that the orbiter experienced. The first bipod ramp foam strike occurred during STS-7; the orbiter's TPS was repaired after the mission, but no changes were made to address the cause of the bipod foam loss. After bipod foam loss on STS-32, NASA engineers, under the assumption that the foam loss was due to pressure buildup within the insulation, added vent holes to the foam to allow gas to escape.
After a bipod foam strike damaged the TPS on STS-50, internal NASA investigations concluded that it was an "accepted flight risk" and that it should not be treated as a flight safety issue. Bipod foam loss occurred on STS-52 and STS-62, but neither event was noticed until the investigation following Columbia's destruction. During STS-112, which flew in October 2002, a chunk of foam broke away from the ET bipod ramp and hit the SRB-ET attachment ring near the bottom of the left SRB, creating a sizable dent. Following the mission, the Program Requirements Control Board declined to categorize the bipod ramp foam loss as an in-flight anomaly. The foam loss was briefed at the STS-113 Flight Readiness Review, but the Program Requirements Control Board decided that the ET was safe to fly. A debris strike from the ablative material on the right SRB caused significant damage to Atlantis during the STS-27 launch on December 2, 1988. On the second day of the flight, the crew inspected the damage using a camera on the remote manipulator system. The debris strike had removed a tile; the exposed orbiter skin was over a reinforced section, and a burn-through might have occurred had the damage been in a different location. After the mission, the NASA Program Requirements Control Board designated the issue as an in-flight anomaly that was corrected with the planned improvement of the SRB ablator. Flight Space Shuttle mission For STS-107, Columbia carried the SpaceHab Research Double Module, the Orbital Acceleration Research Experiment, and an Extended Duration Orbiter pallet. The mission passed its pre-launch certifications and reviews, and began with the launch. The mission was originally scheduled to launch on January 11, 2001, but it was delayed thirteen times, until its launch on January 16, 2003. The seven-member crew of STS-107 was selected in July 2000. The mission was commanded by Rick Husband, a colonel in the U.S. Air Force and a test pilot who had previously flown on STS-96. The mission's pilot was William McCool, a U.S. Navy commander on his first spaceflight. The payload commander was Michael Anderson, a U.S. Air Force lieutenant colonel who had previously flown on STS-89. Kalpana Chawla served as the flight engineer; she had previously flown on STS-87. David Brown and Laurel Clark, both Navy captains, flew as the mission specialists on their first spaceflights. Ilan Ramon, a colonel in the Israeli Air Force and the first Israeli astronaut, flew as a payload specialist on his first spaceflight. Launch and debris strike Columbia was launched from Kennedy Space Center Launch Complex 39A (LC-39A) at 10:39:00 am. At T+81.7 seconds, a large piece of foam broke off from the left bipod ramp on the ET. At T+81.9 seconds, the foam struck the reinforced carbon–carbon (RCC) panels on Columbia's left wing at high relative velocity. The foam's low ballistic coefficient caused it to lose speed immediately after separating from the ET, so that the orbiter effectively ran into the slower-moving foam. Neither the mission nor the ground crew noticed the debris strike at the time. The SRBs separated from the ET at T+2 minutes 7 seconds, followed by the ET's separation from the orbiter at T+8 minutes 30 seconds. The ET separation was photographed by Anderson and recorded on video by Brown, but their imagery did not capture the bipod with the missing foam. At T+43 minutes, Columbia completed its orbital insertion as planned.
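The debris strike illustrates a counterintuitive point: the foam separated while moving at the same speed as the vehicle, yet hit the wing hard, because drag decelerated the lightweight block almost instantly while the orbiter kept flying into it. A minimal kinematics sketch of this effect follows; every physical parameter in it (air density, drag coefficient, frontal area, mass, vehicle speed) is an illustrative assumption rather than a CAIB figure, so the result is an order-of-magnitude estimate only.

# Order-of-magnitude sketch of the foam block's closing speed at impact.
# All parameters below are assumptions for illustration, not CAIB values.
rho = 0.07         # air density near the strike altitude, kg/m^3 (assumed)
cd = 0.8           # drag coefficient of a tumbling foam block (assumed)
area = 0.06        # frontal area of the block, m^2 (assumed)
mass = 0.77        # mass of the block, kg (assumed)
v_vehicle = 750.0  # vehicle airspeed at T+81.7 s, m/s (assumed)

v_foam = v_vehicle            # the foam initially travels with the vehicle
dt = 1e-4
for _ in range(int(0.2 / dt)):  # separation to impact took ~0.2 s (T+81.7 to T+81.9)
    drag_decel = 0.5 * rho * cd * area * v_foam ** 2 / mass
    v_foam -= drag_decel * dt

closing = v_vehicle - v_foam
print(f"closing speed at impact: ~{closing:.0f} m/s (~{closing * 3.6:.0f} km/h)")

Even with these rough numbers, a block shed for only 0.2 seconds is overtaken at roughly 185 m/s, several hundred kilometres per hour, which is why nominally lightweight foam could seriously damage an RCC panel.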
Flight risk management After Columbia entered orbit, the NASA Intercenter Photo Working Group conducted a routine review of videos of the launch. The group's analysts did not notice the debris strike until the second day of the mission. None of the cameras that recorded the launch had a clear view of the debris striking the wing, leaving the group unable to determine the level of damage sustained by the orbiter. The group's chair contacted Wayne Hale, the Shuttle Program Manager for Launch Integration, to request on-orbit pictures of Columbia's wing to assess its damage. After receiving notification of the debris strike, engineers at NASA, United Space Alliance, and Boeing created the Debris Assessment Team and began working to determine the damage to the orbiter. The Intercenter Photo Working Group believed that the orbiter's RCC panels had possibly been damaged; NASA program managers were less concerned about the danger posed by the debris strike. Boeing analysts attempted to model the damage caused to the orbiter's TPS by the foam strike. The software models predicted damage deeper than the thickness of the TPS tiles, indicating that the orbiter's aluminum skin would be unprotected in that area. The Debris Assessment Team dismissed this conclusion as inaccurate, because the software had previously predicted damage more severe than what actually occurred. Further modeling specific to the RCC panels used software calibrated to predict damage caused by falling ice. In only one of 15 scenarios did the software predict that ice would cause damage, and because foam is far less dense than ice, the Debris Assessment Team concluded that the damage would be minimal. To assess the possible damage to Columbia's wing, members of the Debris Assessment Team made multiple requests for imagery of the orbiter from the Department of Defense (DoD). Imagery requests were channeled through both the DoD Manned Space Flight Support Office and the Johnson Space Center Engineering Directorate. Hale coordinated the request through a DoD representative at KSC. The request was relayed to U.S. Strategic Command (USSTRATCOM), which began identifying imaging assets that could observe the orbiter. The imagery request was soon rescinded by NASA Mission Management Team Chair Linda Ham after she investigated its origin. She had consulted with Flight Director Phil Engelauf and members of the Mission Management Team, who stated that they had no requirement for imagery of Columbia. Ham did not consult with the Debris Assessment Team, and cancelled the imagery request on the basis that it had not been made through official channels. Maneuvering the orbiter to allow its left wing to be imaged would have interrupted ongoing science operations, and Ham dismissed the DoD imaging capabilities as insufficient to assess damage to the orbiter. Following the rejection of their imagery request, the Debris Assessment Team made no further requests for the orbiter to be imaged. Throughout the flight, members of the Mission Management Team were less concerned than the Debris Assessment Team about the potential risk of the debris strike. The loss of bipod foam on STS-107 was compared to previous foam strike events, none of which had caused the loss of an orbiter or crew. Ham, who was scheduled to work as an integration manager for STS-114, was concerned about the potential schedule delays from a foam loss event. Mission management also downplayed the risk of the debris strike in communications with the crew.
On January 23, flight director Steve Stich sent an e-mail to Husband and McCool to tell them about the foam strike and inform them that there was no cause for concern about damage to the TPS, as foam strikes had occurred on previous flights. The crew were also sent a fifteen-second video of the debris strike in preparation for a press conference, and were reassured that there were no safety concerns. On January 26, the Debris Assessment Team concluded that there were no safety concerns arising from the debris strike. The team's report was critical of the Mission Management Team for asserting that there were no safety concerns before the Debris Assessment Team's investigation had been completed. On January 29, William Readdy, the Associate Administrator for Space Flight, agreed to DoD imaging of the orbiter, but on the condition that it would not interfere with flight operations; ultimately, the orbiter was not imaged by the DoD during the flight. At a Mission Management Team meeting on January 31, the day before Columbia reentered the atmosphere, the Launch Integration Office voiced Ham's intention to review on-board footage to view the missing foam, but concerns about crew safety were not discussed. Reentry Columbia was scheduled to reenter the atmosphere and land on February 1, 2003. At 3:30 am EST, the Entry Flight Control Team started its shift at the Mission Control Center. On board the orbiter, the crew stowed loose items and prepared their equipment for reentry. Forty-five minutes before the deorbit burn, Husband and McCool began working through the entry checklist. At 8:10 am, the Capsule Communicator (CAPCOM), Charlie Hobaugh, informed the crew that they were approved to conduct the deorbit burn. At 8:15:30, the crew successfully executed the deorbit burn, which lasted 2 minutes and 38 seconds. At 8:44:09, Columbia reentered the atmosphere at an altitude of about 120 km (400,000 ft), a point named entry interface. The damage to the TPS on the orbiter's left wing allowed hot gas to enter the wing and begin melting its aluminum structure. Four and a half minutes after entry interface, a sensor began recording greater-than-normal amounts of strain on the left wing; the sensor's data was recorded to internal storage and not transmitted to the crew or ground controllers. The orbiter began to turn (yaw) to the left as a result of the increased drag on the left wing, but this was not noticed by the crew or mission control because of corrections from the orbiter's flight control system. This was followed by sensors in the left wheel well reporting a rise in temperature. At 8:53:46 am, Columbia crossed over the California coast traveling at Mach 23, with the leading edges of its wings estimated to have reached extreme temperatures. Soon after it entered California airspace, the orbiter shed several pieces of debris, events observed on the ground as sudden increases in the brightness of the air around the orbiter. At 8:54:24 am, the MMACS officer reported that the hydraulic sensors in the left wing had readings below the sensors' minimum detection thresholds. Columbia continued its reentry and traveled over Utah, Arizona, New Mexico, and Texas, where observers reported seeing signs of debris being shed. At 8:58:03, the orbiter's aileron trim departed from the predicted values because of the increasing drag caused by the damage to the left wing. At 8:58:21, the orbiter shed a TPS tile that would later land in Littlefield, Texas; it became the westernmost piece of recovered debris.
The crew first received an indication of a problem at 8:58:39, when the Backup Flight Software monitor began displaying fault messages for a loss of pressure in the tires of the left landing gear. The pilot and commander then received indications that the status of the left landing gear was unknown, as different sensors reported the gear was down and locked or in the stowed position. The drag of the left wing continued to yaw the orbiter to the left until it could no longer be corrected using aileron trim. The orbiter's Reaction Control System (RCS) thrusters began firing continuously to correct its orientation. The loss of signal (LOS) from Columbia occurred at 8:59:32. Mission control stopped receiving information from the orbiter at this time, and Husband's last radio call of "Roger, uh..." was cut off mid-transmission. One of the channels in the flight control system was bypassed as the result of a failed wire, and a Master Alarm began sounding on the flight deck. Loss of control of the orbiter is estimated to have begun several seconds later with a loss of hydraulic pressure and an uncontrolled pitch-up maneuver. The orbiter began flying along a ballistic trajectory, which was significantly steeper and had more drag than the previous gliding trajectory. The orbiter, while still traveling faster than Mach 15, entered into a flat spin of 30° to 40° per second. The acceleration that the crew was experiencing increased from approximately 0.8 g to 3g, which would have likely caused dizziness and disorientation, but not incapacitation. The autopilot was switched to manual control and reset to automatic mode at 9:00:03; this would have required the input of either Husband or McCool, indicating that they were still conscious and able to perform functions at the time. All hydraulic pressure was lost, and McCool's final switch configurations indicate that he had tried to restore the hydraulic systems at some time after 9:00:05. At 9:00:18, the orbiter began a catastrophic breakup, and all on-board data recording soon ceased. Ground observers noted a sudden increase in debris being shed, and all on-board systems lost power. By 9:00:25, the orbiter's fore and aft sections had separated from one another. The sudden jerk caused the crew compartment to collide with the interior wall of the fuselage, resulting in the start of depressurization of the crew compartment by 9:00:35. The pieces of the orbiter continued to break apart into smaller pieces, and within a minute after breakup were too small to be detected by ground-based videos. A NASA report estimates that by 9:35, all crew remains and a majority of debris had hit the ground. The loss of signal occurred at a time when the Flight Control Team expected brief communication outages as the orbiter stopped communication via the west tracking and data relay satellite (TDRS). Personnel in Mission Control were unaware of the in-flight break-up, and continued to try to reestablish contact with the orbiter. At approximately 9:06, when Columbia would have been conducting its final maneuvers to land, a Mission Control member received a phone call concerning news coverage of the orbiter breaking up. This information was passed on to the Entry Flight Director, LeRoy Cain, who initiated contingency procedures. At KSC, where Columbia had been expected to land at 9:16, NASA Associate Administrator and former astronaut William Readdy also began contingency procedures after the orbiter did not land as scheduled. 
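The pace of these events is easier to grasp when the quoted timestamps are laid out as elapsed time from entry interface. The short script below uses only times stated in this account (it is a presentation aid, not additional data): from entry interface to the start of the breakup was just over sixteen minutes.

from datetime import datetime

# Elapsed time from entry interface for events quoted in this section (all EST).
events = [
    ("entry interface",                  "08:44:09"),
    ("crossing the California coast",    "08:53:46"),
    ("left-wing hydraulic sensors lost", "08:54:24"),
    ("loss of signal",                   "08:59:32"),
    ("catastrophic breakup begins",      "09:00:18"),
]
t0 = datetime.strptime(events[0][1], "%H:%M:%S")
for name, stamp in events:
    elapsed = (datetime.strptime(stamp, "%H:%M:%S") - t0).seconds
    print(f"{name:32s} T+{elapsed // 60:2d} min {elapsed % 60:02d} s")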
Crew survivability During reentry, all seven of the STS-107 crew members were killed, but the exact time of their deaths could not be determined. The level of acceleration that they experienced during crew module breakup was not lethal. The first lethal event the crew experienced was the depressurization of the crew module. The rate and exact time of complete depressurization could not be determined, but it occurred no later than 9:00:59 and was likely much earlier. The remains of the crew members indicated that they all experienced depressurization. The astronauts' helmets have a visor that, when closed, can temporarily protect the crew member from depressurization. None of the crew members had closed their visors, and one was not wearing a helmet; this indicates that the depressurization occurred too quickly for them to take protective measures. They were rendered unconscious or deceased within seconds, and tissue damage was extensive enough that they could not have regained consciousness even if the cabin had regained pressurization. During and after the breakup of the crew module, the crew, either unconscious or dead, experienced rotation on all three axes. The astronauts' shoulder harnesses were unable to prevent trauma to their upper bodies, as the inertia reel system failed to retract sufficiently to secure them, leaving them restrained only by their lap belts. The helmets were not conformal to the crew members' heads, allowing head injuries to occur inside the helmet. The neck ring of the helmet may also have acted as a fulcrum that caused spine and neck injuries. The physical trauma to the astronauts, who could not brace to prevent such injuries, could also have resulted in their deaths. The astronauts likely suffered significant thermal trauma as well. Hot gas entered the disintegrating crew module, burning the crew members, whose bodies were still somewhat protected by their ACES suits. Once the crew module fell apart, the astronauts were violently exposed to windblast and a possible shock wave, which stripped their suits from their bodies. The crew's remains were exposed to hot gas and molten metal as they fell away from the orbiter. After separation from the crew module, the bodies of the crew members entered an environment with almost no oxygen, very low atmospheric pressure, high temperatures caused by deceleration, and extremely low ambient temperatures. Their bodies hit the ground with lethal force. Presidential response At 14:04 EST (19:04 UTC), President George W. Bush said in a televised address to the nation, "My fellow Americans, this day has brought terrible news, and great sadness to our country. At 9:00 a.m. this morning, Mission Control in Houston lost contact with our Space Shuttle Columbia. A short time later, debris was seen falling from the skies above Texas. The Columbia is lost; there are no survivors." Recovery of debris After the orbiter broke up, reports came in to eastern Texas law enforcement agencies of an explosion and falling debris. Astronauts Mark Kelly and Gregory Johnson traveled on a US Coast Guard helicopter from Houston to Nacogdoches, and Jim Wetherbee drove a team of astronauts to Lufkin to assist with recovery efforts. Debris was reported from east Texas through southern Louisiana. Recovery crews and local volunteers worked to locate and identify debris. On the first day of the disaster, searchers began finding remains of the astronauts. Within three days of the crash, some remains from every crew member had been recovered.
These recoveries occurred along a line south of Hemphill, Texas, and west of the Toledo Bend Reservoir. The remains of the final crew member were recovered on February 11. The crew remains were transported to the Armed Forces Institute of Pathology at Dover Air Force Base. Immediately after the disaster, the Texas Army National Guard deployed 300 members to assist with security and recovery, and the Coast Guard Gulf Strike Team was assigned to help recover hazardous debris. Over the following days, the search grew to include hundreds of individuals from the Environmental Protection Agency, US Forest Service, Civil Air Patrol, and Texas and Louisiana public safety organizations, as well as local volunteers. In the months after the disaster, the largest-ever organized ground search took place. NASA officials warned of the dangers of handling debris, as it could have been contaminated by propellants. Soon after the accident, some individuals attempted to sell Columbia debris on the internet, including on the online auction website eBay. Officials at NASA were critical of these efforts, as the debris was NASA property and was needed for the investigation. A three-day amnesty period was offered for recovered orbiter debris. During this time, about 20 individuals contacted NASA to return debris, which included debris from the Challenger disaster. After the end of the amnesty period, several individuals were arrested for illegal looting and possession of debris. Columbia's flight data recorder was found near Hemphill, Texas, southeast of Nacogdoches, on March 19, 2003. As the first orbiter, Columbia carried a unique OEX (Orbiter Experiments) flight data recorder to record vehicle performance data during the test flights. The recorder had been left in Columbia after the initial Shuttle test flights were completed, and began recording information 15 minutes prior to reentry. The tape it recorded to was broken at the time of the crash, but information recorded from the orbiter's sensors before the breakup remained potentially recoverable. Several days later, the tape was sent to the Imation Corporation to be inspected and cleaned. On March 25, the OEX's tape was sent to KSC, where it was copied and analyzed. On March 27, a Bell 407 helicopter that was being used in the debris search crashed due to mechanical failure in the Angelina National Forest. The crash killed the pilot, Jules F. Mier Jr., and a Texas Forest Service aviation specialist, Charles Krenek, and injured three other crew members. A group of Caenorhabditis elegans worms, enclosed in aluminum canisters, survived reentry and impact with the ground and were recovered weeks after the disaster. The culture, which was part of an experiment to research their growth while consuming synthetic nutrients, was found to be alive on April 28, 2003. NASA management selected the Reusable Launch Vehicle hangar at KSC as the site to reconstruct recovered Columbia debris. NASA Launch Director Michael Leinbach led the reconstruction team, which was staffed by Columbia engineers and technicians. Debris was laid out on the floor of the hangar in the shape of the orbiter to allow investigators to look for patterns in the damage that would indicate the cause of the disaster. Astronaut Pamela Melroy was assigned to oversee the six-person team reconstructing the crew compartment, which included fellow astronaut Marsha Ivins. Recovered debris was shipped from the field to KSC, where it was unloaded and checked for contamination by toxic hypergolic propellants.
Each piece of debris had an identifying number and a tag indicating the coordinates where it was found. Staff attached photographs and catalogued each piece of debris. Recovered debris from inside the orbiter was placed in a separate area, as it was not considered a contributor to the accident. NASA conducted a fault tree analysis to determine the probable causes of the accident, and focused its investigations on the parts of the orbiter most likely to have been responsible for the in-flight breakup. Engineers in the hangar analyzed the debris to determine how the orbiter came apart. Even though the crew compartment was not considered a likely cause of the accident, Melroy successfully argued for its analysis to learn more about how its safety systems helped, or failed to help, the crew survive. The tiles on the left wing were studied to determine the nature of the burning and melting that had occurred. The damage to the debris indicated that the breach began at the wing's leading edge, allowing hot gas to get past the orbiter's thermal protection system. The search for Columbia debris ended in May. Approximately 83,900 pieces of debris were recovered, amounting to about 38 percent of the orbiter's overall weight. When the CAIB report was released, about 40,000 recovered pieces of debris had not been identified. All recovered non-human Columbia debris was stored in unused office space at the Vehicle Assembly Building, except for parts of the crew compartment, which were kept separate. By the end of the reconstruction effort, only 720 items remained classified as unknown. In July 2011, lower water levels caused by a drought revealed a piece of debris in Lake Nacogdoches. NASA identified the piece as a power reactant storage and distribution tank. Columbia Accident Investigation Board About ninety minutes after the disaster, NASA Administrator Sean O'Keefe ordered the convening of the Columbia Accident Investigation Board (CAIB) to determine the cause. It was chaired by retired U.S. Navy Admiral Harold W. Gehman, Jr. and included military and civilian analysts. It initially consisted of eight members, including Gehman, but expanded to 13 members by March. The CAIB members were notified by noon on the day of the accident, and participated in a teleconference that evening. The following day, they traveled to Barksdale AFB to begin the investigation. The CAIB members first toured the debris fields, and then established their operations at JSC. The CAIB established four teams to investigate NASA management and program safety, NASA training and crew operations, the technical aspects of the disaster, and how NASA culture affected the Space Shuttle program. These groups collaborated, and hired additional support staff for the investigation. The CAIB worked alongside the reconstruction effort to determine the cause of the accident, and interviewed members of the Space Shuttle program, including those who had been involved with STS-107. The CAIB conducted public hearings from March until June, and released its final report in August 2003. Cause of the accident After examining sensor data, the CAIB considered damage to the left wing the likely culprit in Columbia's destruction. It examined the recovered debris and noted the difference in heat damage between the two wings. RCC panels from the left wing were found in the western portion of the debris field, indicating that the wing was shed first, before the rest of the orbiter disintegrated.
X-ray and chemical analysis was conducted on the RCC panels, revealing the highest levels of slag deposits to be on the left-wing panels. Impact testing was conducted at the Southwest Research Institute, using a nitrogen-powered gun to fire a projectile made of the same material as the ET bipod foam. Panels taken from Enterprise, Discovery, and Atlantis were used to determine the projectile's effect on RCC panels. A test on RCC panel 8, taken from Atlantis, was the most consistent with the damage observed on Columbia, indicating that panel 8 was the damaged panel that led to the in-flight breakup. Organizational culture The CAIB was critical of NASA's organizational culture, and compared its state to that of NASA leading up to the Challenger disaster. It concluded that NASA was experiencing budget constraints while still being expected to maintain a high rate of launches and operations. Program operating costs were lowered by 21% from 1991 to 1994, despite a planned increase in the yearly flight rate for assembly of the International Space Station. Despite a history of foam strike events, NASA management did not consider the potential risk to the astronauts a safety-of-flight issue. The CAIB found that the lack of an effective safety program led to the lack of concern over foam strikes. The board determined that NASA lacked the communication and integration channels needed to allow problems to be discussed, effectively routed, and addressed. This risk was further compounded by pressure to adhere to the launch schedule for construction of the ISS. Possible emergency procedures In its report, the CAIB discussed potential options that could have saved Columbia's crew. It determined that the mission could have been extended to at most 30 days (February 15), after which the lithium hydroxide canisters used to remove carbon dioxide would have run out. On STS-107, Columbia was carrying the Extended Duration Orbiter pallet, which increased its supply of oxygen and hydrogen. To maximize the mission duration, non-essential systems would have been powered down, and the animals in the Spacehab module would have been euthanized. When STS-107 launched, Atlantis was undergoing preparation for the STS-114 launch on March 1, 2003. Had NASA management decided to mount a rescue mission, an expedited process could have begun to launch Atlantis as a rescue vehicle. Some pre-launch tests would have been eliminated to allow it to launch in time. Atlantis would have launched with additional equipment for EVAs and a minimum required crew. It would have rendezvoused with Columbia, and the STS-107 crew would have conducted EVAs to transfer to Atlantis. Columbia would then have been remotely deorbited; as Mission Control would have been unable to land it remotely, it would have been disposed of in the Pacific Ocean. The CAIB also investigated the possibility of an on-orbit repair of the left wing. Although there were no materials or adhesives onboard Columbia that could have survived reentry, the board researched the effectiveness of stuffing materials from the orbiter and crew cabin, or water, into the RCC hole. It determined that the best option would have been to harvest tiles from other places on the orbiter, shape them, and stuff them into the RCC hole. Given the difficulty of an on-orbit repair and the risk of further damaging the RCC panels, the CAIB determined that the likelihood of a successful on-orbit repair was low. NASA response Space Shuttle updates The Space Shuttle program was suspended after the loss of Columbia.
The further construction of the International Space Station (ISS) was delayed, as the Space Shuttle had been scheduled for seven missions to the ISS in 2003 and 2004 to complete its construction. To prevent future foam strikes, the ET was redesigned to remove foam from the bipod. Instead, electric heaters were installed to prevent ice from building up in the bipod area due to the cold liquid oxygen in its feedlines. Additional heaters were also installed along the liquid oxygen line, which ran from the base of the tank to its interstage section. NASA also improved its ground imaging capabilities at Kennedy Space Center to better observe and monitor potential issues occurring during launch. The existing cameras at LC-39A, LC-39B, and along the coast were upgraded, and nine new camera sites were added. Cameras were added to the bellies of Discovery, Atlantis, and Endeavour (previously only Columbia and Challenger had them) to allow digital images of the ET to be viewed on the ground soon after launch; the prior system on Columbia used film, which could only be retrieved after the orbiter returned to Earth. The Orbiter Boom Sensor System, a camera package on an extension for the Canadarm, was added to allow the crew to inspect the orbiter for tile damage once they reached orbit. Each of the orbiter's wings was equipped with 22 temperature sensors to detect any breaches during reentry and with 66 accelerometers to detect an impact. Post-landing inspection procedures were updated to include technicians examining the RCC panels using flash thermography. Alongside the updates to the orbiter, NASA prepared contingency plans in the event that a mission would be unable to land safely. The plan called for the stranded mission to dock with the ISS, where the crew would inspect and attempt to repair the damaged orbiter. If they were unsuccessful, they would remain aboard the ISS and wait for a rescue. The rescue mission, designated STS-3xx, would be activated, and would use the next-in-line hardware for the orbiter, ET, and SRBs. The expected time to launch would be 35 days, as that was the time required to prepare the launch facilities. Before the arrival of the rescue mission, the stranded crew would power up the damaged orbiter, which would then be remotely controlled as it was undocked and deorbited, its debris landing in the Pacific Ocean. The minimal rescue crew would launch, dock with the ISS, and spend a day transferring astronauts and equipment before undocking and landing. First Return to Flight mission (STS-114) The first Return to Flight mission, STS-114, began with the launch of Discovery on July 26, 2005, at 10:39 am (EDT). Sixteen pieces of foam large enough to be considered significant by NASA investigators were dislodged from the ET during the launch, including one large piece. Post-launch investigations did not find any indications of damage from the foam loss, but ET video did reveal that a small piece of TPS tile from the nose landing gear had fallen off during launch. Upon reaching orbit, the crew inspected Discovery with the Orbiter Boom Sensor System. On July 29, Discovery rendezvoused with the ISS and, before docking, performed the first rendezvous pitch maneuver to allow the crew aboard the ISS to observe and photograph the orbiter's belly. The next day, astronauts Soichi Noguchi and Stephen Robinson performed the first of three spacewalks. They tested a tile repair tool, the Emittance Wash Applicator, on intentionally damaged TPS tiles that had been brought along in the payload bay.
On August 3, the same astronauts performed the third EVA of the mission, during which Robinson stood on the ISS's Canadarm2 and was maneuvered to Discovery's belly to remove two gap fillers between tiles that had begun to protrude. After a delay due to bad weather at KSC, the decision was made to land at Edwards AFB, and Discovery successfully landed at 8:11 am (EDT) on August 9. Had Discovery been unable to land safely, the crew would have remained on the ISS until Atlantis was flown to rescue them. As a result of the foam loss, NASA grounded the Space Shuttle fleet again. Second Return to Flight mission (STS-121) To address the problem of foam loss for the second Return to Flight mission (STS-121), NASA engineers removed the protuberance air load (PAL) foam ramp from the ET, which had been the source of the largest piece of debris on STS-114. The launch was postponed from its scheduled date of July 1, 2006, and again on July 2, due to inclement weather at KSC. On July 3, a piece of foam broke off from the ET, but the mission still launched as scheduled at 2:38 pm (EDT) on July 4. After reaching orbit, Discovery performed post-launch inspections of its TPS and docked with the ISS on July 6. The orbiter carried a remote-control orbiter in-flight maintenance cable that could connect the flight deck systems to the avionics system in the mid-deck; it would allow the spacecraft to be landed remotely, including lowering the landing gear and deploying the drag parachute. On July 12, astronauts Piers Sellers and Michael Fossum performed an EVA to test the NonOxide Adhesive eXperiment (NOAX), which applied protective sealant to samples of damaged TPS tiles. Discovery undocked from the ISS on July 14 and landed safely at 9:14 am on July 17 at KSC. Had the crew been stranded in orbit, NASA planned to launch Atlantis to rescue them from the ISS. Program cancellation In January 2004, President Bush announced the Vision for Space Exploration, calling for the Space Shuttle fleet to complete the ISS and be retired by 2010, to be replaced by a newly developed Crew Exploration Vehicle for travel to the Moon and Mars. In 2004, NASA Administrator Sean O'Keefe canceled a planned servicing mission to the Hubble Space Telescope and decided that all future missions would rendezvous with the ISS to ensure the safety of the crew. In 2006, his successor, Michael Griffin, decided to fly one more servicing mission to the telescope, STS-125, which flew in May 2009. The retirement of the Space Shuttle was delayed until 2011, after which no further crewed spacecraft were launched from the United States until 2020, when SpaceX's Crew Dragon Demo-2 mission successfully carried NASA astronauts Doug Hurley and Robert Behnken to the ISS. Legacy On February 4, 2003, President Bush and his wife Laura led a memorial service for the astronauts' families at the Johnson Space Center. Two days later, Vice President Dick Cheney and his wife Lynne led a similar service at Washington National Cathedral, where Patti LaBelle sang "Way Up There". A memorial service was held at KSC on February 7; Robert Crippen, the first pilot of Columbia, gave a eulogy for the crew and a tribute to Columbia herself, acknowledging her achievements as the first orbiter and NASA's flagship, her role in trying desperately to save the crew of STS-107, and her many missions dedicated to scientific research.
On October 28, 2003, the names of the crew were added to the Space Mirror Memorial at the KSC Visitor Complex in Merritt Island, Florida, alongside the names of 17 other astronauts and cosmonauts who died in the line of duty. On February 2, 2004, NASA Administrator O'Keefe unveiled a memorial for the STS-107 crew at Arlington National Cemetery, located near the Challenger memorial. A tree for each astronaut was planted in NASA's Astronaut Memorial Grove at the Johnson Space Center, alongside trees for each astronaut of the Apollo 1 and Challenger disasters. The exhibit Forever Remembered at the KSC Visitor Complex features the cockpit window frames from Columbia. In 2004, Bush conferred posthumous Congressional Space Medals of Honor on all 14 crew members killed in the Challenger and Columbia accidents. NASA named several places in honor of Columbia and the crew. Seven asteroids discovered in July 2001 were named after the astronauts: 51823 Rickhusband, 51824 Mikeanderson, 51825 Davidbrown, 51826 Kalpanachawla, 51827 Laurelclark, 51828 Ilanramon, and 51829 Williemccool. On Mars, the landing site of the rover Spirit was named Columbia Memorial Station, and included a memorial plaque to the Columbia crew mounted on the back of the high-gain antenna. A complex of seven hills east of the Spirit landing site was dubbed the Columbia Hills; each of the seven hills was individually named for a member of the crew, and the rover explored the summit of Husband Hill in 2005. In 2006, the IAU approved naming seven lunar craters after the astronauts. In February 2006, NASA's National Scientific Balloon Facility was renamed the Columbia Scientific Balloon Facility. A supercomputer built in 2004 at the NASA Advanced Supercomputing Division was named "Columbia"; the first part of the system, named "Kalpana", was dedicated to Chawla, who had worked at the Ames Research Center before joining the Space Shuttle program. The first dedicated meteorological satellite launched by the Indian Space Research Organisation (ISRO), Metsat-1, was renamed Kalpana-1 on February 5, 2003, after Chawla. In 2003, the airport in Amarillo, Texas, Husband's hometown, was renamed Rick Husband Amarillo International Airport. A mountain peak in the Sangre de Cristo Range in the Colorado Rockies was renamed Columbia Point in 2003. In October 2004, both houses of Congress passed a resolution to rename Downey, California's Space Science Learning Center the Columbia Memorial Space Center, located at the former manufacturing site of the Space Shuttle orbiters. On April 1, 2003, the Opening Day of the baseball season, the Houston Astros honored the Columbia crew with seven simultaneous first pitches thrown by family and friends of the crew. During the singing of the national anthem, 107 NASA personnel carried a U.S. flag onto the field, and the Astros wore the mission patch on their sleeves for the entire season. On February 1, 2004, the first anniversary of the Columbia disaster, Super Bowl XXXVIII, held in Houston's Reliant Stadium, began with a pregame tribute to the crew of Columbia by singer Josh Groban performing "You Raise Me Up", with the crew of STS-114 in attendance. In 2004, two space journalists, Michael Cabbage and William Harwood, released their book Comm Check: The Final Flight of Shuttle Columbia, which discusses the history of the Space Shuttle program and documents the post-disaster recovery and investigation efforts.
Michael Leinbach, a retired launch director at KSC who was working on the day of the disaster, released Bringing Columbia Home: The Untold Story of a Lost Space Shuttle and Her Crew in 2018. It documents his personal experience during the disaster and the debris and remains recovery efforts. In 2004, the documentary Columbia: The Tragic Loss was released; it told of the life of Ilan Ramon and focused on the issues in NASA management that led to the disaster. PBS released a Nova documentary, Space Shuttle Disaster, in 2008. It featured commentary from NASA officials and space experts, and discussed historical issues with the spacecraft and NASA. The Scottish Celtic rock band Runrig included a song titled "Somewhere" on their album The Story, which ends with a recording of a radio communication from Laurel Clark. Clark, who had become a fan of the band while living in Scotland, had the Runrig song "Running to the Light" played as her wake-up music on January 27; her CD of The Stamping Ground was recovered in the debris and presented to the band by Clark's husband and son. See also Criticism of the Space Shuttle program Engineering disasters Expedition 6 List of spaceflight-related accidents and incidents Notes References External links NASA's Space Shuttle Columbia and her crew Doppler radar animation of the debris after break-up President Bush's remarks at memorial service (February 4, 2003) The CBS News Space Reporter's Handbook STS-51L/107 Supplement The 13-minute crew cabin video (subtitled), which ends 4 minutes before the Shuttle began to disintegrate Photos of recovered debris stored on the 16th floor of the Vehicle Assembly Building at KSC
https://en.wikipedia.org/wiki/Outer%20space
Outer space (or simply space) is the expanse that exists beyond Earth's atmosphere and between celestial bodies. It contains extremely low densities of particles, constituting a near-perfect vacuum of predominantly hydrogen and helium plasma, permeated by electromagnetic radiation, cosmic rays, neutrinos, magnetic fields and dust. The baseline temperature of outer space, as set by the background radiation from the Big Bang, is 2.7 kelvins (−270 °C; −455 °F). The plasma between galaxies is thought to account for about half of the baryonic (ordinary) matter in the universe, having a number density of less than one hydrogen atom per cubic metre and a kinetic temperature of millions of kelvins. Local concentrations of matter have condensed into stars and galaxies. Intergalactic space takes up most of the volume of the universe, but even galaxies and star systems consist almost entirely of empty space. Most of the remaining mass-energy in the observable universe is made up of an unknown form, dubbed dark matter and dark energy. Outer space does not begin at a definite altitude above Earth's surface. The Kármán line, at an altitude of 100 km (62 mi) above sea level, is conventionally used as the start of outer space in space treaties and for aerospace record keeping. Certain portions of the upper stratosphere and the mesosphere are sometimes referred to as "near space". The framework for international space law was established by the Outer Space Treaty, which entered into force on 10 October 1967. This treaty precludes any claims of national sovereignty and permits all states to freely explore outer space. Despite the drafting of UN resolutions for the peaceful uses of outer space, anti-satellite weapons have been tested in Earth orbit. The concept that the space between the Earth and the Moon must be a vacuum was first proposed in the 17th century, after scientists discovered that air pressure decreases with altitude. The immense scale of outer space was grasped in the 20th century, when the distance to the Andromeda Galaxy was first measured. Humans began the physical exploration of space later in the same century with the advent of high-altitude balloon flights. This was followed by crewed rocket flights and then crewed Earth orbit, first achieved by Yuri Gagarin of the Soviet Union in 1961. The economic cost of putting objects, including humans, into space is very high, limiting human spaceflight to low Earth orbit and the Moon. On the other hand, uncrewed spacecraft have reached all of the known planets in the Solar System. Outer space represents a challenging environment for human exploration because of the hazards of vacuum and radiation. Microgravity has negative effects on human physiology, causing both muscle atrophy and bone loss. Terminology The use of the short version space, meaning 'the region beyond Earth's sky', predates the use of the full term "outer space"; its earliest recorded use in this sense is in the epic poem Paradise Lost by John Milton, published in 1667. The term outward space appeared in 1842 in the poem "The Maiden of Moscow" by the English poet Lady Emmeline Stuart-Wortley, but in astronomy the term outer space was first used in 1845 by Alexander von Humboldt. The term was eventually popularized through the writings of H. G. Wells after 1901.
Theodore von Kármán used the term free space for the band of altitudes above Earth where spacecraft encounter conditions sufficiently free from atmospheric drag, differentiating it from airspace and identifying a legal space above national territories that is free from the sovereign jurisdiction of countries. "Spaceborne" denotes existing in outer space, especially if carried by a spacecraft; similarly, "space-based" means based in outer space or on a planet or moon. Formation and state The size of the whole universe is unknown, and it might be infinite in extent. According to the Big Bang theory, the very early universe was an extremely hot and dense state about 13.8 billion years ago that rapidly expanded. About 380,000 years later, the universe had cooled sufficiently to allow protons and electrons to combine and form hydrogen—the so-called recombination epoch. When this happened, matter and energy became decoupled, allowing photons to travel freely through the continually expanding space. Matter that remained following the initial expansion has since undergone gravitational collapse to create stars, galaxies and other astronomical objects, leaving behind a deep vacuum that forms what is now called outer space. As light has a finite velocity, this theory constrains the size of the directly observable universe. The present-day shape of the universe has been determined from measurements of the cosmic microwave background using satellites like the Wilkinson Microwave Anisotropy Probe. These observations indicate that the spatial geometry of the observable universe is "flat", meaning that photons on parallel paths at one point remain parallel as they travel through space to the limit of the observable universe, except for local gravity. The flat universe, combined with the measured mass density of the universe and the accelerating expansion of the universe, indicates that space has a non-zero vacuum energy, which is called dark energy. Estimates put the average energy density of the present-day universe at the equivalent of 5.9 protons per cubic meter, including dark energy, dark matter, and baryonic matter (ordinary matter composed of atoms). The atoms account for only 4.6% of the total energy density, or a density of one proton per four cubic meters. The density of the universe is clearly not uniform; it ranges from relatively high density in galaxies—including very high density in structures within galaxies, such as planets, stars, and black holes—to conditions in vast voids that have much lower density, at least in terms of visible matter. Unlike matter and dark matter, dark energy seems not to be concentrated in galaxies: although dark energy may account for a majority of the mass-energy in the universe, dark energy's influence is 5 orders of magnitude smaller than the influence of gravity from matter and dark matter within the Milky Way. Environment Outer space is the closest known approximation to a perfect vacuum. It has effectively no friction, allowing stars, planets, and moons to move freely along their ideal orbits, following the initial formation stage. The deep vacuum of intergalactic space is not devoid of matter, as it contains a few hydrogen atoms per cubic meter. By comparison, the air humans breathe contains about 10²⁵ molecules per cubic meter. The low density of matter in outer space means that electromagnetic radiation can travel great distances without being scattered: the mean free path of a photon in intergalactic space is about 10²³ km, or 10 billion light years.
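Two of the figures above can be cross-checked with a few lines of arithmetic: first, that atoms at 4.6% of a total energy density equivalent to 5.9 protons per cubic meter work out to about one proton per four cubic meters; second, via Wien's displacement law (a standard physics result, not a figure taken from this article), that radiation at the roughly 2.7 K background temperature peaks in the microwave band.

# Two quick cross-checks of figures quoted above.

# 1) Baryonic matter: 4.6% of an average energy density equivalent to
#    5.9 protons per cubic meter.
atoms_per_m3 = 5.9 * 0.046
print(f"baryonic density ~ {atoms_per_m3:.2f} protons/m^3 "
      f"(one proton per {1 / atoms_per_m3:.1f} m^3)")

# 2) Wien's displacement law: where a ~2.7 K black body's emission peaks.
WIEN_B = 2.897771955e-3   # Wien's displacement constant, m*K
peak = WIEN_B / 2.725     # CMB black-body temperature, kelvins
print(f"CMB spectrum peaks near {peak * 1e3:.2f} mm")  # ~1 mm: microwaves

The first check reproduces the "one proton per four cubic meters" figure (about one per 3.7 m³); the second shows why the Big Bang's relic radiation is observed specifically as a microwave background.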
In spite of this, extinction, which is the absorption and scattering of photons by dust and gas, is an important factor in galactic and intergalactic astronomy.

Stars, planets, and moons retain their atmospheres by gravitational attraction. Atmospheres have no clearly delineated upper boundary: the density of atmospheric gas gradually decreases with distance from the object until it becomes indistinguishable from outer space. The Earth's atmospheric pressure drops to about 0.032 Pa at 100 km of altitude, compared to 100,000 Pa for the International Union of Pure and Applied Chemistry (IUPAC) definition of standard pressure. Above this altitude, isotropic gas pressure rapidly becomes insignificant when compared to radiation pressure from the Sun and the dynamic pressure of the solar wind. The thermosphere in this range has large gradients of pressure, temperature and composition, and varies greatly due to space weather.

The temperature of outer space is measured in terms of the kinetic activity of the gas, as it is on Earth. The radiation of outer space has a different temperature than the kinetic temperature of the gas, meaning that the gas and radiation are not in thermodynamic equilibrium. All of the observable universe is filled with photons that were created during the Big Bang, which is known as the cosmic microwave background radiation (CMB). (There is quite likely a correspondingly large number of neutrinos called the cosmic neutrino background.) The current black body temperature of the background radiation is about 2.7 K. The gas temperatures in outer space can vary widely. For example, the temperature in the Boomerang Nebula is 1 K, while the solar corona reaches temperatures of over a million kelvins.

Magnetic fields have been detected in the space around just about every class of celestial object. Star formation in spiral galaxies can generate small-scale dynamos, creating turbulent magnetic field strengths of around 5–10 μG. The Davis–Greenstein effect causes elongated dust grains to align themselves with a galaxy's magnetic field, resulting in weak optical polarization. This has been used to show ordered magnetic fields that exist in several nearby galaxies. Magneto-hydrodynamic processes in active elliptical galaxies produce their characteristic jets and radio lobes. Non-thermal radio sources have been detected even among the most distant high-z sources, indicating the presence of magnetic fields.

Outside a protective atmosphere and magnetic field, there are few obstacles to the passage through space of energetic subatomic particles known as cosmic rays. These particles have energies ranging from about 10⁶ eV up to an extreme 10²⁰ eV of ultra-high-energy cosmic rays. The peak flux of cosmic rays occurs at energies of about 10⁹ eV, with approximately 87% protons, 12% helium nuclei and 1% heavier nuclei. In the high energy range, the flux of electrons is only about 1% of that of protons. Cosmic rays can damage electronic components and pose a health threat to space travelers. Astronauts returning from extravehicular activity in low Earth orbit report a burned/metallic odor, similar to the scent of arc welding fumes, resulting from oxygen in low Earth orbit around the ISS that clings to suits and equipment. Other regions of space could have very different smells, like that of different alcohols in molecular clouds.

Human access
Effect on biology and human bodies
Despite the harsh environment, several life forms have been found that can withstand extreme space conditions for extended periods.
Species of lichen carried on the ESA BIOPAN facility survived exposure for ten days in 2007. Seeds of Arabidopsis thaliana and Nicotiana tabacum germinated after being exposed to space for 1.5 years. A strain of Bacillus subtilis has survived 559 days when exposed to low Earth orbit or a simulated Martian environment. The lithopanspermia hypothesis suggests that rocks ejected into outer space from life-harboring planets may successfully transport life forms to another habitable world. A conjecture is that just such a scenario occurred early in the history of the Solar System, with potentially microorganism-bearing rocks being exchanged between Venus, Earth, and Mars.

Vacuum
The lack of pressure in space is the most immediately dangerous characteristic of space to humans. Pressure decreases above Earth, reaching a level at an altitude of around 19 km that matches the vapor pressure of water at the temperature of the human body. This pressure level is called the Armstrong line, named after American physician Harry G. Armstrong. At or above the Armstrong line, fluids in the throat and lungs boil away. More specifically, exposed bodily liquids such as saliva, tears, and liquids in the lungs boil away. Hence, at this altitude, human survival requires a pressure suit or a pressurized capsule.

Out in space, sudden exposure of an unprotected human to very low pressure, such as during a rapid decompression, can cause pulmonary barotrauma—a rupture of the lungs, due to the large pressure differential between inside and outside the chest. Even if the subject's airway is fully open, the flow of air through the windpipe may be too slow to prevent the rupture. Rapid decompression can rupture eardrums and sinuses, soft tissues can bruise and seep blood, and shock can cause an increase in oxygen consumption that leads to hypoxia. As a consequence of rapid decompression, oxygen dissolved in the blood empties into the lungs to try to equalize the partial pressure gradient. Once the deoxygenated blood arrives at the brain, humans lose consciousness after a few seconds and die of hypoxia within minutes. Blood and other body fluids boil when the pressure drops below about 6.3 kPa, and this condition is called ebullism. The steam may bloat the body to twice its normal size and slow circulation, but tissues are elastic and porous enough to prevent rupture. Ebullism is slowed by the pressure containment of blood vessels, so some blood remains liquid.

Swelling and ebullism can be reduced by containment in a pressure suit. The Crew Altitude Protection Suit (CAPS), a fitted elastic garment designed in the 1960s for astronauts, prevents ebullism even at extremely low ambient pressures. Supplemental oxygen is needed at altitudes above about 8 km to provide enough oxygen for breathing and to prevent water loss, while above about 20 km pressure suits are essential to prevent ebullism. Most space suits use pure oxygen at roughly a third of sea-level atmospheric pressure, about the same as the partial pressure of oxygen at the Earth's surface. This pressure is high enough to prevent ebullism, but evaporation of nitrogen dissolved in the blood could still cause decompression sickness and gas embolisms if not managed.

Weightlessness and radiation
Humans evolved for life in Earth gravity, and exposure to weightlessness has been shown to have deleterious effects on human health. Initially, more than 50% of astronauts experience space motion sickness. This can cause nausea and vomiting, vertigo, headaches, lethargy, and overall malaise.
The duration of space sickness varies, but it typically lasts for 1–3 days, after which the body adjusts to the new environment. Longer-term exposure to weightlessness results in muscle atrophy and deterioration of the skeleton, or spaceflight osteopenia. These effects can be minimized through a regimen of exercise. Other effects include fluid redistribution, slowing of the cardiovascular system, decreased production of red blood cells, balance disorders, and a weakening of the immune system. Lesser symptoms include loss of body mass, nasal congestion, sleep disturbance, and puffiness of the face.

During long-duration space travel, radiation can pose an acute health hazard. Exposure to high-energy, ionizing cosmic rays can result in fatigue, nausea, and vomiting, as well as damage to the immune system and changes to the white blood cell count. Over longer durations, symptoms include an increased risk of cancer, plus damage to the eyes, nervous system, lungs and the gastrointestinal tract. On a round-trip Mars mission lasting three years, a large fraction of the cells in an astronaut's body would be traversed and potentially damaged by high energy nuclei. The energy of such particles is significantly diminished by the shielding provided by the walls of a spacecraft and can be further diminished by water containers and other barriers. The impact of the cosmic rays upon the shielding produces additional radiation that can affect the crew. Further research is needed to assess the radiation hazards and determine suitable countermeasures.

Boundary
The transition between Earth's atmosphere and outer space lacks a well-defined physical boundary, with the air pressure steadily decreasing with altitude until it mixes with the solar wind. Various definitions for a practical boundary have been proposed, ranging from the upper stratosphere out to well beyond the Moon's orbit. High-altitude balloons have reached altitudes above Earth of up to 50 km. Up until 2021, the United States designated people who travel above an altitude of 50 miles (80 km) as astronauts. Astronaut wings are now only awarded to spacecraft crew members that "demonstrated activities during flight that were essential to public safety, or contributed to human space flight safety."

In 2009, measurements of the direction and speed of ions in the atmosphere were made from a sounding rocket. An altitude of 118 km above Earth was found to be the midpoint for charged particles transitioning from the gentle winds of the Earth's atmosphere to the more extreme flows of outer space, which can reach far greater velocities. Spacecraft have entered into a highly elliptical orbit with a perigee as low as 80 to 90 km, surviving for multiple orbits. At an altitude of about 120 km, descending spacecraft such as NASA's Space Shuttle begin atmospheric entry (termed the Entry Interface), when atmospheric drag becomes noticeable, thus beginning the process of switching from steering with thrusters to maneuvering with aerodynamic control surfaces.

The Kármán line, established by the Fédération Aéronautique Internationale and used internationally by the United Nations, is set at an altitude of 100 km as a working definition for the boundary between aeronautics and astronautics. This line is named after Theodore von Kármán, who argued for an altitude above which a vehicle would have to travel faster than orbital velocity to derive sufficient aerodynamic lift from the atmosphere to support itself, which he calculated to be about 84 km.
This distinguishes altitudes below the line as the region of aerodynamics and airspace, and those above as the space of astronautics and free space. There is no internationally recognized legal altitude limit on national airspace, although the Kármán line is the most frequently used for this purpose. Objections have been made to setting this limit too high, as it could inhibit space activities due to concerns about airspace violations. It has been argued that no single altitude should be specified in international law, and that different limits should instead apply depending on the case, in particular based on the craft and its purpose. Spacecraft have flown over foreign countries at altitudes well below the Kármán line, as in the example of the Space Shuttle during re-entry.

Legal status
The Outer Space Treaty provides the basic framework for international space law. It covers the legal use of outer space by nation states, and includes in its definition of outer space the Moon and other celestial bodies. The treaty states that outer space is free for all nation states to explore and is not subject to claims of national sovereignty, calling outer space the "province of all mankind". This status as a common heritage of mankind has been used, though not without opposition, to enforce the right to access and shared use of outer space for all nations equally, particularly non-spacefaring nations. It prohibits the deployment of nuclear weapons in outer space. The treaty was passed by the United Nations General Assembly in 1963 and signed in 1967 by the Union of Soviet Socialist Republics (USSR), the United States of America (USA), and the United Kingdom (UK). As of 2017, 105 state parties have either ratified or acceded to the treaty. An additional 25 states signed the treaty without ratifying it.

Since 1958, outer space has been the subject of multiple United Nations resolutions. Of these, more than 50 have concerned international co-operation in the peaceful uses of outer space and preventing an arms race in space. Four additional space law treaties have been negotiated and drafted by the UN's Committee on the Peaceful Uses of Outer Space. Still, there remains no legal prohibition against deploying conventional weapons in space, and anti-satellite weapons have been successfully tested by the USA, USSR, China, and in 2019, India. The 1979 Moon Treaty turned the jurisdiction of all heavenly bodies (including the orbits around such bodies) over to the international community. The treaty has not been ratified by any nation that currently practices human spaceflight. In 1976, eight equatorial states (Ecuador, Colombia, Brazil, the Republic of the Congo, Zaire, Uganda, Kenya, and Indonesia) met in Bogotá, Colombia: with their "Declaration of the First Meeting of Equatorial Countries", or the Bogotá Declaration, they claimed control of the segment of the geosynchronous orbital path corresponding to each country. These claims are not internationally accepted. An increasing issue of international space law and regulation has been the danger posed by the growing amount of space debris.

Earth orbit
A spacecraft enters orbit when its centripetal acceleration due to gravity is less than or equal to the centrifugal acceleration due to the horizontal component of its velocity. For a low Earth orbit, this velocity is about 7.8 km/s; by contrast, the fastest piloted airplane speed ever achieved (excluding speeds achieved by deorbiting spacecraft) was about 7,270 km/h, by the North American X-15 in 1967.
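As a quick plausibility check on the orbital figures quoted here and in the next paragraph, the short Python sketch below computes the circular orbital velocity and the specific energy needed to reach orbit. It is only an illustration: the 600 km reference altitude and the rounded physical constants are assumptions chosen for the example, not values taken from a cited source.

```python
import math

MU = 3.986e14        # Earth's gravitational parameter GM, m^3/s^2 (assumed)
R_EARTH = 6.371e6    # mean Earth radius, m (assumed)
ALTITUDE = 600e3     # assumed reference altitude, m

r = R_EARTH + ALTITUDE

# Circular orbital velocity: v = sqrt(GM / r)
v_orbit = math.sqrt(MU / r)

# Specific kinetic energy of the orbit, plus the potential energy needed
# just to climb from the surface to that altitude.
e_kinetic = 0.5 * v_orbit**2              # J/kg
e_climb = MU * (1 / R_EARTH - 1 / r)      # J/kg
e_total = e_kinetic + e_climb

print(f"orbital velocity: {v_orbit / 1000:.1f} km/s")     # ~7.6 km/s
print(f"energy to orbit:  {e_total / 1e6:.0f} MJ/kg")     # ~34 MJ/kg
print(f"orbit vs. climb:  {e_total / e_climb:.1f}x")      # ~6.3x
```

The outputs land close to the roughly 7.8 km/s quoted above for low Earth orbit and to the "about 36 MJ/kg" and "six times" figures quoted in the next paragraph, which is as much agreement as a back-of-the-envelope model can promise.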
To achieve an orbit, a spacecraft must travel faster than a sub-orbital spaceflight along an arcing trajectory. The energy required to reach Earth orbital velocity at an altitude of 600 km is about 36 MJ/kg, which is six times the energy needed merely to climb to the corresponding altitude. The escape velocity required to pull free of Earth's gravitational field altogether and move into interplanetary space is about 11.2 km/s.

Orbiting spacecraft with a perigee below about 2,000 km are subject to drag from the Earth's atmosphere, which decreases the orbital altitude. The rate of orbital decay depends on the satellite's cross-sectional area and mass, as well as variations in the air density of the upper atmosphere. At altitudes above about 800 km, orbital lifetime is measured in centuries. Below about 300 km, decay becomes more rapid, with lifetimes measured in days. Once a satellite descends to about 180 km, it has only hours before it vaporizes in the atmosphere. Low Earth orbit overlaps the inner Van Allen radiation belt, a toroidal region composed primarily of high energy protons. This region represents a space weather threat to space systems and is difficult to shield against. Operational problems for satellites are particularly prevalent over the South Atlantic Anomaly, where charged particles approach closest to Earth.

Regions
Regions near the Earth
Space in proximity to the Earth is physically similar to the remainder of interplanetary space, but is home to a multitude of Earth-orbiting satellites and has been subject to extensive studies. For identification purposes, this volume is divided into overlapping regions of space. Near-Earth space is the region extending from low Earth orbits out to geostationary orbits. This region includes the major orbits for artificial satellites and is the site of most of humanity's space activity. The region has seen high levels of space debris, sometimes dubbed space pollution, threatening any space activity in this region. Some of this debris re-enters Earth's atmosphere periodically. Although it meets the definition of outer space, the atmospheric density inside low-Earth orbital space, the first few hundred kilometers above the Kármán line, is still sufficient to produce significant drag on satellites.

Geospace is a region of space that includes Earth's upper atmosphere and magnetosphere. The Van Allen radiation belts lie within geospace. The outer boundary of geospace is the magnetopause, which forms an interface between the Earth's magnetosphere and the solar wind. The inner boundary is the ionosphere. The variable space-weather conditions of geospace are affected by the behavior of the Sun and the solar wind; the subject of geospace is interlinked with heliophysics—the study of the Sun and its impact on the planets of the Solar System. The day-side magnetopause is compressed by solar-wind pressure—the subsolar distance from the center of the Earth is typically 10 Earth radii. On the night side, the solar wind stretches the magnetosphere to form a magnetotail that sometimes extends out to more than 100–200 Earth radii. For roughly four days of each month, the lunar surface is shielded from the solar wind as the Moon passes through the magnetotail.

Geospace is populated by electrically charged particles at very low densities, the motions of which are controlled by the Earth's magnetic field. These plasmas form a medium from which storm-like disturbances powered by the solar wind can drive electrical currents into the Earth's upper atmosphere.
Geomagnetic storms can disturb two regions of geospace, the radiation belts and the ionosphere. These storms increase fluxes of energetic electrons that can permanently damage satellite electronics, interfering with shortwave radio communication and GPS location and timing. Magnetic storms can be a hazard to astronauts, even in low Earth orbit. They create aurorae seen at high latitudes in an oval surrounding the geomagnetic poles.

xGeo space is a concept used by the US to refer to the space of high Earth orbits, ranging from beyond geosynchronous orbit (GEO), at approximately 35,786 km, out to the Earth–Moon L2 Lagrange point, at about 448,900 km. This is located beyond the orbit of the Moon and therefore includes cislunar space. Translunar space is the region of lunar transfer orbits, between the Moon and Earth. Cislunar space is a region outside of Earth that includes lunar orbits, the Moon's orbital space around Earth and the Lagrange points.

The region where a body's gravitational potential remains dominant against the gravitational potentials of other bodies is the body's sphere of influence or gravity well, mostly described with the Hill sphere model. In the case of Earth this includes all space from the Earth to a distance of roughly 1% of the mean distance from Earth to the Sun, or about 1.5 million km. Beyond Earth's Hill sphere extends along Earth's orbital path its orbital and co-orbital space. This space is co-populated by groups of co-orbital Near-Earth Objects (NEOs), such as horseshoe librators and Earth trojans, with some NEOs at times becoming temporary satellites and quasi-moons to Earth.

Deep space is defined by the United States government as all of outer space which lies further from Earth than a typical low Earth orbit, thus assigning the Moon to deep space. Other definitions vary the starting point of deep space from "that which lies beyond the orbit of the Moon" to "that which lies beyond the farthest reaches of the Solar System itself". The International Telecommunication Union, which is responsible for radio communication, including with satellites, defines deep space as "distances from the Earth equal to, or greater than, 2 million km", which is about five times the Moon's orbital distance but still far less than the distance between Earth and any adjacent planet.

Interplanetary space
Interplanetary space within the Solar System is the space between the eight planets, the space between the planets and the Sun, as well as the space beyond the orbit of the outermost planet, Neptune, where the solar wind remains active. The solar wind is a continuous stream of charged particles emanating from the Sun which creates a very tenuous atmosphere (the heliosphere) for billions of kilometers into space. This wind has a particle density of 5–10 protons/cm³ and moves at a velocity of roughly 350–400 km/s. Interplanetary space extends out to the heliopause, where the influence of the galactic environment starts to dominate over the magnetic field and particle flux from the Sun. The distance and strength of the heliopause vary depending on the activity level of the solar wind. The heliopause in turn deflects away low-energy galactic cosmic rays, with this modulation effect peaking during solar maximum.

The volume of interplanetary space is a nearly total vacuum, with a mean free path of about one astronomical unit at the orbital distance of the Earth. This space is not completely empty, and is sparsely filled with cosmic rays, which include ionized atomic nuclei and various subatomic particles.
There is gas, plasma and dust, small meteors, and several dozen types of organic molecules discovered to date by microwave spectroscopy. A cloud of interplanetary dust is visible at night as a faint band called the zodiacal light.

Interplanetary space contains the magnetic field generated by the Sun. Planets with their own magnetic fields, such as Jupiter, Saturn, Mercury and the Earth, generate magnetospheres. These are shaped by the influence of the solar wind into the approximation of a teardrop shape, with the long tail extending outward behind the planet. These magnetic fields can trap particles from the solar wind and other sources, creating belts of charged particles such as the Van Allen radiation belts. Planets without magnetic fields, such as Mars, have their atmospheres gradually eroded by the solar wind.

Interstellar space
Interstellar space is the physical space outside of the bubbles of plasma known as astrospheres, formed by stellar winds originating from individual stars, or formed by solar wind emanating from the Sun. It is the space between the stars or stellar systems within a nebula or galaxy. Interstellar space contains an interstellar medium of sparse matter and radiation. The boundary between an astrosphere and interstellar space is known as an astropause. For the Sun, the astrosphere and astropause are called the heliosphere and heliopause.

Approximately 70% of the mass of the interstellar medium consists of lone hydrogen atoms; most of the remainder consists of helium atoms. This is enriched with trace amounts of heavier atoms formed through stellar nucleosynthesis. These atoms are ejected into the interstellar medium by stellar winds or when evolved stars begin to shed their outer envelopes, such as during the formation of a planetary nebula. The cataclysmic explosion of a supernova propagates shock waves of stellar ejecta outward, distributing it throughout the interstellar medium, including the heavy elements previously formed within the star's core. The density of matter in the interstellar medium can vary considerably: the average is around 10⁶ particles per m³, but cold molecular clouds can hold 10⁸–10¹² per m³.

A number of molecules exist in interstellar space, which can form dust particles as tiny as 0.1 μm. The tally of molecules discovered through radio astronomy is steadily increasing at the rate of about four new species per year. Large regions of higher density matter known as molecular clouds allow chemical reactions to occur, including the formation of organic polyatomic species. Much of this chemistry is driven by collisions. Energetic cosmic rays penetrate the cold, dense clouds and ionize hydrogen and helium, resulting, for example, in the trihydrogen cation. An ionized helium atom can then split relatively abundant carbon monoxide to produce ionized carbon, which in turn can lead to organic chemical reactions.

The local interstellar medium is a region of space within 100 pc of the Sun, which is of interest both for its proximity and for its interaction with the Solar System. This volume nearly coincides with a region of space known as the Local Bubble, which is characterized by a lack of dense, cold clouds. It forms a cavity in the Orion Arm of the Milky Way Galaxy, with dense molecular clouds lying along the borders, such as those in the constellations of Ophiuchus and Taurus. The actual distance to the border of this cavity varies from 60 to 250 pc or more.
This volume contains about 10⁴–10⁵ stars, and the local interstellar gas counterbalances the astrospheres that surround these stars, with the volume of each sphere varying depending on the local density of the interstellar medium. The Local Bubble contains dozens of warm interstellar clouds with temperatures of up to 7,000 K and radii of 0.5–5 pc.

When stars are moving at sufficiently high peculiar velocities, their astrospheres can generate bow shocks as they collide with the interstellar medium. For decades it was assumed that the Sun had a bow shock. In 2012, data from the Interstellar Boundary Explorer (IBEX) and NASA's Voyager probes showed that the Sun's bow shock does not exist. Instead, these authors argue that a subsonic bow wave defines the transition from the solar wind flow to the interstellar medium. A bow shock is a third boundary characteristic of an astrosphere, lying outside the termination shock and the astropause.

Intergalactic space
Intergalactic space is the physical space between galaxies. Studies of the large-scale distribution of galaxies show that the universe has a foam-like structure, with groups and clusters of galaxies lying along filaments that occupy about a tenth of the total space. The remainder forms cosmic voids that are mostly empty of galaxies. Typically, a void spans a distance of 7–30 megaparsecs.

Surrounding and stretching between galaxies is the intergalactic medium (IGM). This rarefied plasma is organized into a galactic filamentary structure. The diffuse photoionized gas contains filaments of higher density, about one atom per cubic meter, which is 5–200 times the average density of the universe. The IGM is inferred to be mostly primordial in composition, with 76% hydrogen by mass, and enriched with higher mass elements from high-velocity galactic outflows. As gas falls into the intergalactic medium from the voids, it heats up to temperatures of 10⁵ K to 10⁷ K. At these temperatures, it is called the warm–hot intergalactic medium (WHIM). Although the plasma is very hot by terrestrial standards, 10⁵ K is often called "warm" in astrophysics. Computer simulations and observations indicate that up to half of the atomic matter in the universe might exist in this warm–hot, rarefied state. When gas falls from the filamentary structures of the WHIM into the galaxy clusters at the intersections of the cosmic filaments, it can heat up even more, reaching temperatures of 10⁸ K and above in the so-called intracluster medium (ICM).

History of discovery
In 350 BCE, Greek philosopher Aristotle suggested that nature abhors a vacuum, a principle that became known as the horror vacui. This concept built upon a 5th-century BCE ontological argument by the Greek philosopher Parmenides, who denied the possible existence of a void in space. Based on this idea that a vacuum could not exist, in the West it was widely held for many centuries that space could not be empty. As late as the 17th century, the French philosopher René Descartes argued that the entirety of space must be filled.

In ancient China, the 2nd-century astronomer Zhang Heng became convinced that space must be infinite, extending well beyond the mechanism that supported the Sun and the stars. The surviving books of the Hsüan Yeh school said that the heavens were boundless, "empty and void of substance". Likewise, the "sun, moon, and the company of stars float in the empty space, moving or standing still".

The Italian scientist Galileo Galilei knew that air had mass and so was subject to gravity.
In 1640, he demonstrated that an established force resisted the formation of a vacuum. It would remain for his pupil Evangelista Torricelli to create an apparatus that would produce a partial vacuum in 1643. This experiment resulted in the first mercury barometer and created a scientific sensation in Europe. Torricelli suggested that since air has weight, air pressure should decrease with altitude. The French mathematician Blaise Pascal proposed an experiment to test this hypothesis. In 1648, his brother-in-law, Florin Périer, repeated the experiment on the Puy de Dôme mountain in central France and found that the column was shorter by three inches. This decrease in pressure was further demonstrated by carrying a half-full balloon up a mountain and watching it gradually expand, then contract upon descent.

In 1650, German scientist Otto von Guericke constructed the first vacuum pump: a device that would further refute the principle of horror vacui. He correctly noted that the atmosphere of the Earth surrounds the planet like a shell, with the density gradually declining with altitude. He concluded that there must be a vacuum between the Earth and the Moon.

In the 15th century, German theologian Nicolaus Cusanus speculated that the universe lacked a center and a circumference. He believed that the universe, while not infinite, could not be held as finite, as it lacked any bounds within which it could be contained. These ideas led to speculations about the infinite dimension of space by the Italian philosopher Giordano Bruno in the 16th century. He extended the Copernican heliocentric cosmology to the concept of an infinite universe filled with a substance he called aether, which did not resist the motion of heavenly bodies. English philosopher William Gilbert arrived at a similar conclusion, arguing that the stars are visible to us only because they are surrounded by a thin aether or a void. This concept of an aether originated with ancient Greek philosophers, including Aristotle, who conceived of it as the medium through which the heavenly bodies move.

The concept of a universe filled with a luminiferous aether retained support among some scientists until the early 20th century. This form of aether was viewed as the medium through which light could propagate. In 1887, the Michelson–Morley experiment tried to detect the Earth's motion through this medium by looking for changes in the speed of light depending on the direction of the planet's motion. The null result indicated something was wrong with the concept. The idea of the luminiferous aether was then abandoned. It was replaced by Albert Einstein's theory of special relativity, which holds that the speed of light in a vacuum is a fixed constant, independent of the observer's motion or frame of reference.

The first professional astronomer to support the concept of an infinite universe was the Englishman Thomas Digges in 1576. But the scale of the universe remained unknown until the first successful measurement of the distance to a nearby star in 1838 by the German astronomer Friedrich Bessel. He showed that the star system 61 Cygni had a parallax of just 0.31 arcseconds (compared to the modern value of 0.287″). This corresponds to a distance of over 10 light years. In 1917, Heber Curtis noted that novae in spiral nebulae were, on average, 10 magnitudes fainter than galactic novae, suggesting that the former are 100 times further away.
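Both of these distance estimates follow from two standard conversions, which the short Python sketch below reproduces. It is purely illustrative; the variable names and rounding are choices made for the example, not taken from any cited source.

```python
# Two standard conversions behind the figures above.

# 1) Parallax to distance: d [parsecs] = 1 / p [arcseconds],
#    and one parsec is about 3.26 light years.
parallax_arcsec = 0.31                     # Bessel's value for 61 Cygni
distance_ly = (1.0 / parallax_arcsec) * 3.26
print(f"61 Cygni: about {distance_ly:.1f} light years")    # ~10.5 ly

# 2) Magnitude difference to distance ratio: a difference of dm magnitudes
#    is a flux ratio of 100**(dm/5); flux falls off as 1/distance**2,
#    so the distance ratio is the square root of that, i.e. 10**(dm/5).
delta_m = 10                               # Curtis's spiral-nebula novae deficit
print(f"distance ratio: {10 ** (delta_m / 5):.0f}x")        # 100x
```

Both outputs match the figures quoted above: just over 10 light years for 61 Cygni, and a factor of 100 for the spiral-nebula novae.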
The distance to the Andromeda Galaxy was determined in 1923 by American astronomer Edwin Hubble by measuring the brightness of cepheid variables in that galaxy, a technique discovered by Henrietta Leavitt. This established that the Andromeda Galaxy, and by extension all galaxies, lay well outside the Milky Way. With this, Hubble formulated the Hubble constant, which for the first time allowed a calculation of the age of the Universe and the size of the observable Universe, initially estimated at 2 billion years and 280 million light-years. These figures became increasingly precise with better measurements, until 2006, when data from the Hubble Space Telescope allowed a very accurate calculation of the age of the Universe and the size of the observable Universe.

The modern concept of outer space is based on the "Big Bang" cosmology, first proposed in 1931 by the Belgian physicist Georges Lemaître. This theory holds that the universe originated from a state of extreme energy density that has since undergone continuous expansion.

The earliest known estimate of the temperature of outer space was by the Swiss physicist Charles É. Guillaume in 1896. Using the estimated radiation of the background stars, he concluded that space must be heated to a temperature of 5–6 K. British physicist Arthur Eddington made a similar calculation to derive a temperature of 3.18 K in 1926. German physicist Erich Regener used the total measured energy of cosmic rays to estimate an intergalactic temperature of 2.8 K in 1933. American physicists Ralph Alpher and Robert Herman predicted 5 K for the temperature of space in 1948, based on the gradual decrease in background energy following the then-new Big Bang theory.

Exploration
For most of human history, space was explored by observations made from the Earth's surface—initially with the unaided eye and then with the telescope. Before reliable rocket technology, the closest that humans had come to reaching outer space was through balloon flights. In 1935, the American Explorer II crewed balloon flight reached an altitude of 22 km. This was greatly exceeded in 1942, when the third launch of the German A-4 rocket climbed to an altitude of about 80 km. In 1957, the uncrewed satellite Sputnik 1 was launched by a Soviet R-7 rocket, achieving Earth orbit at an altitude of 215–939 km. This was followed by the first human spaceflight in 1961, when Yuri Gagarin was sent into orbit on Vostok 1. The first humans to escape low Earth orbit were Frank Borman, Jim Lovell and William Anders in 1968 on board the American Apollo 8, which achieved lunar orbit and reached a maximum distance of about 377,000 km from the Earth.

The first spacecraft to reach escape velocity was the Soviet Luna 1, which performed a fly-by of the Moon in 1959. In 1961, Venera 1 became the first planetary probe. It revealed the presence of the solar wind and performed the first fly-by of Venus, although contact was lost before reaching Venus. The first successful planetary mission was the 1962 fly-by of Venus by Mariner 2. The first fly-by of Mars was by Mariner 4 in 1964. Since that time, uncrewed spacecraft have successfully examined each of the Solar System's planets, as well as their moons and many minor planets and comets. They remain a fundamental tool for the exploration of outer space, as well as for observation of the Earth. In August 2012, Voyager 1 became the first man-made object to leave the Solar System and enter interstellar space.

Application
Outer space has become an important element of global society.
It provides multiple applications that are beneficial to the economy and scientific research.

The placing of artificial satellites in Earth orbit has produced numerous benefits and has become the dominant sector of the space economy. Satellites allow the relay of long-range communications like television, provide a means of precise navigation, and permit direct monitoring of weather conditions and remote sensing of the Earth. The latter role serves a variety of purposes, including tracking soil moisture for agriculture, prediction of water outflow from seasonal snow packs, detection of diseases in plants and trees, and surveillance of military activities. Satellites also facilitate the discovery and monitoring of climate change influences. Satellites make use of the significantly reduced drag in space to stay in stable orbits, allowing them to efficiently span the whole globe, compared to, for example, stratospheric balloons or high-altitude platform stations, which have other benefits.

The absence of air makes outer space an ideal location for astronomy at all wavelengths of the electromagnetic spectrum. This is evidenced by the spectacular pictures sent back by the Hubble Space Telescope, allowing light from more than 13 billion years ago—almost to the time of the Big Bang—to be observed. Not every location in space is ideal for a telescope. The interplanetary zodiacal dust emits a diffuse near-infrared radiation that can mask the emission of faint sources such as extrasolar planets. Moving an infrared telescope out past the dust increases its effectiveness. Likewise, a site like the Daedalus crater on the far side of the Moon could shield a radio telescope from the radio frequency interference that hampers Earth-based observations.

The deep vacuum of space could make it an attractive environment for certain industrial processes, such as those requiring ultraclean surfaces. Like asteroid mining, space manufacturing would require a large financial investment with little prospect of immediate return. An important factor in the total expense is the high cost of placing mass into Earth orbit, which a 2006 estimate put at several thousand to several tens of thousands of US dollars per kilogram (allowing for inflation since then). The cost of access to space has declined since 2013. Partially reusable rockets such as the Falcon 9 have lowered the cost of access to space to below US$3,500 per kilogram. Even with these new rockets, the cost of sending materials into space remains prohibitively high for many industries. Proposed concepts for addressing this issue include fully reusable launch systems, non-rocket spacelaunch, momentum exchange tethers, and space elevators.

Interstellar travel for a human crew remains at present only a theoretical possibility. The distances to the nearest stars mean it would require new technological developments and the ability to safely sustain crews for journeys lasting several decades. For example, the Daedalus Project study, which proposed a spacecraft powered by the fusion of deuterium and helium-3, would require 36 years to reach the "nearby" Alpha Centauri system. Other proposed interstellar propulsion systems include light sails, ramjets, and beam-powered propulsion. More advanced propulsion systems could use antimatter as a fuel, potentially reaching relativistic velocities.

From the Earth's surface, the ultracold temperature of outer space can be exploited as a renewable cooling resource for various applications on Earth through passive daytime radiative cooling.
This enhances longwave infrared (LWIR) thermal radiation heat transfer through the atmosphere's infrared window into outer space, lowering ambient temperatures. Photonic metamaterials can be used to suppress solar heating. See also Absolute space and time Artemis Accords List of government space agencies List of topics in space Olbers' paradox Outline of space science Panspermia Space art Space and survival Space race Space station Space technology Timeline of knowledge about the interstellar and intergalactic medium Timeline of Solar System exploration Timeline of spaceflight References Citations Sources Note: this source gives a value of molecules per cubic meter. Note: a light year is about 10¹³ km. External links Space plasmas Environments Vacuum
Outer space
[ "Physics", "Astronomy" ]
9,114
[ "Space plasmas", "Outer space", "Vacuum", "Astrophysics", "Matter" ]
177,603
https://en.wikipedia.org/wiki/J.%20R.%20D.%20Tata
Jehangir Ratanji Dadabhoy Tata (29 July 1904 – 29 November 1993) was an Indian industrialist, philanthropist, aviator and former chairman of the Tata Group. Born into the Tata family of India, he was the son of noted businessman Ratanji Dadabhoy Tata and his wife Suzanne Brière. He is best known for founding several industries under the Tata Group, including Tata Consultancy Services, Tata Motors, Titan Industries, Tata Salt, Voltas and Air India. In 1983, he was awarded the French Legion of Honour, and in 1955 and 1992 he received two of India's highest civilian awards: the Padma Vibhushan and the Bharat Ratna. These honours were bestowed on him for his contributions to Indian industry.

Early life
J. R. D. Tata was born on 29 July 1904 to an Indian Parsi family in Paris, France. He was the second child of businessman Ratanji Dadabhoy Tata and his French wife, Suzanne "Sooni" Brière. His father was the first cousin of Jamsetji Tata, a pioneer industrialist in India. He had one elder sister, Sylla; a younger sister, Rodabeh; and two younger brothers, Darab and Jamshed (called Jimmy) Tata. His sister Sylla was married to Dinshaw Maneckji Petit, the third baronet of the Petits. His sister's sister-in-law, Rattanbai Petit, was the wife of Muhammad Ali Jinnah, who later became the founder of Pakistan in August 1947.

As his mother was French, he spent much of his childhood in France, and as a result French was his first language. He attended the Janson de Sailly School in Paris; one of the teachers at that school used to call him L'Egyptien. He also attended the Cathedral and John Connon School, Bombay. Tata was educated in London, Japan, France and India. When his father joined the Tata company, he moved the whole family to London. During this time, J. R. D.'s mother died at the age of 43, while his father was in India and his family was in France. After his mother's death, Ratanji Dadabhoy Tata decided to move his family to India and sent J. R. D. to England for higher studies in October 1923. He was enrolled in a grammar school and was interested in studying engineering at Cambridge University. However, as a citizen of France, J. R. D. had to enlist in the French army for at least a year. In between grammar school and his time in the army, he spent a brief spell at home in Bombay. After joining the French Army he was posted to a regiment of spahis. Upon discovering that Tata could not only read and write French and English but could type as well, a colonel had him assigned as a secretary in his office. After his time in the French Army, his father decided to bring him back to India, and he joined the Tata Company. In 1929, Tata renounced his French citizenship and became an Indian citizen.

In 1930, Tata married Thelma Vicaji, the niece of Jack Vicaji, a colourful lawyer whom he hired to defend him on a charge of driving his Bugatti too fast along Bombay's main promenade, Marine Drive. Previously he had been engaged to Dinbai Mehta, the future mother of The Economist editor Shapur Kharegat.

Although he was born to a Parsi father and his French mother converted to Zoroastrianism, J. R. D. was agnostic. He found some Parsi religious customs, like their funeral rites and their exclusiveness, irksome. He adhered to the three basic tenets of Zoroastrianism, which were good thoughts, good words, and good deeds, but he did not profess belief or disbelief in God.
Career
Inspired by his friend's father, aviation pioneer Louis Blériot, the first man to fly across the English Channel, Tata took to flying. On 10 February 1929, Tata obtained the first pilot's licence issued in India. He later came to be known as the "Father of Indian civil aviation". He founded India's first commercial airline, Tata Airlines, in 1932, which became Air India in 1946, now India's national airline. He and Nevill Vintcent worked together in building Tata Airlines; they were also good friends. In 1929, J. R. D. became one of the first Indians to be granted a commercial pilot's licence. In 1932, Tata Aviation Service, the forerunner of Tata Airlines and Air India, took to the skies. That same year, he flew the first commercial mail flight to Juhu in a de Havilland Puss Moth. This first flight in the history of Indian aviation lifted off from Drigh Road in Karachi for Madras, with J. R. D. at the controls of the Puss Moth, on 15 October 1932. J. R. D. nourished and nurtured his airline baby through to 1953, when the government of Jawaharlal Nehru nationalised Air India along with several other private airlines and appointed J. R. D. as its first chairman. He continued as chairman for 25 years, before being removed by Morarji Desai in 1978.

He joined Tata Sons as an unpaid apprentice in 1925. In 1938, at the age of 34, Tata was elected chairman of Tata Sons, making him the head of the largest industrial group in India. He took over as chairman of Tata Sons from his second cousin Nowroji Saklatwala. For decades, he directed the huge Tata Group of companies, with major interests in steel, engineering, power, chemicals and hospitality. He was famous for succeeding in business while maintaining high ethical standards – refusing to bribe politicians or use the black market. Under his chairmanship, the assets of the Tata Group grew from US$100 million to over US$5 billion. He started with 14 enterprises under his leadership; half a century later, on 26 July 1988, when he left, Tata Sons was a conglomerate of 95 enterprises which they either started or in which they had a controlling interest.

He was a trustee of the Sir Dorabji Tata Trust from its inception in 1932 for over half a century. Under his guidance, this Trust established Asia's first cancer facility, the Tata Memorial Centre for Cancer Research and Treatment, Bombay, in 1941. He also founded the Tata Institute of Social Sciences (TISS, 1936), the Tata Institute of Fundamental Research (TIFR, 1945), and the National Centre for the Performing Arts. In 1945, he founded Tata Motors. In 1948, Tata launched Air India International as India's first international airline. In 1953, the Indian Government appointed Tata as Chairman of Air India and a director on the Board of Indian Airlines – a position he retained for 25 years. For his crowning achievements in aviation, he was bestowed with the title of Honorary Air Commodore of India.

Tata cared greatly for his workers. In 1956, he initiated a programme of closer "employee association with management" to give workers a stronger voice in the affairs of the company. He firmly believed in employee welfare and espoused the principles of an eight-hour working day, free medical aid, a workers' provident scheme, and workmen's accident compensation schemes, which were later adopted as statutory requirements in India.
He was also a founding member of the first Governing Body of NCAER, the National Council of Applied Economic Research in New Delhi, India's first independent economic policy institute, established in 1956. In 1968, he founded Tata Consultancy Services as Tata Computer Centre. In 1979, Tata Steel instituted a new practice: a worker being deemed to be "at work" from the moment he leaves home for work until he returns home from work. This made the company financially liable to the worker for any mishap on the way to and from work. In 1987, he founded Titan Industries. Jamshedpur was also selected as a UN Global Compact City because of the quality of life, conditions of sanitation, roads and welfare that were offered by Tata Steel.

Support of emergency powers in 1975
Tata was also supportive of the declaration of emergency powers by Prime Minister Indira Gandhi in 1975. He is quoted as having told a reporter of the Times: "things had gone too far. You can't imagine what we've been through here—strikes, boycotts, demonstrations. Why, there were days I couldn't walk out of my house into the streets. The parliamentary system is not suited to our needs."

Awards and honours
Tata received a number of awards. He was conferred the honorary rank of group captain by the Indian Air Force in 1948, was promoted to the Air Commodore rank (equivalent to Brigadier in the army) on 4 October 1966, and was further promoted on 1 April 1974 to the Air Vice Marshal rank. Several international awards for aviation were given to him – the Tony Jannus Award in March 1979, the Gold Air Medal of the Fédération Aéronautique Internationale in 1985, the Edward Warner Award of the International Civil Aviation Organisation, Canada, in 1986 and the Daniel Guggenheim Medal in 1988. He received the Padma Vibhushan in 1955. The French Legion of Honour was bestowed on him in 1983. In 1992, because of his selfless humanitarian endeavours, Tata was awarded India's highest civilian honour, the Bharat Ratna. In his memory, the Government of Maharashtra named its first double-decker bridge the Bharatratna JRD Tata Overbridge at Nasik Phata, Pimpri Chinchwad.

Following Prime Minister Indira Gandhi's 1975–1977 Emergency, in which she controversially pursued forced sterilizations as a form of population control, Tata built on these efforts by ordering Tata Steel to open nine family planning centers in 1984. Employees and their non-employee partners were compensated for undergoing sterilization, and factory plant departments were awarded for achieving the lowest fertility rate. While such incentives arguably violated the medical ethics principle of personal bodily autonomy, Tata was awarded the 1992 United Nations Population Award for his efforts.

Death
Tata died in Geneva, Switzerland, of a kidney infection on 29 November 1993, at the age of 89. A few days before his death, he said: "Comme c'est doux de mourir" ("How gentle it is to die"). Upon his death, the Indian Parliament was adjourned in his memory, an honour not usually given to persons who are not members of parliament. He was buried at the Père Lachaise Cemetery in Paris. In 2012, Tata was ranked sixth in Outlook magazine's poll of "The Greatest Indian", conducted in conjunction with CNN-IBN and History TV18 channels with the BBC. See also The Greatest Indian R. M.
Lala Jamsetji Tata Ratanji Dadabhoy Tata Rattanbai Petit Dorabji Tata References Bibliography External links Tata Family Tree Brief Lifestory of JRD Tata Biography at newindiadigest.com Biography at tata.com 1904 births 1993 deaths Aviation history of India Aviation pioneers Indian people of French descent Indian aviators Indian chief executives Parsi people from Mumbai Businesspeople from Mumbai Indian agnostics Recipients of the Bharat Ratna Recipients of the Padma Vibhushan in trade & industry Soldiers of the French Foreign Legion Jehangir Ratanji Dadabhoy Businesspeople in steel Businesspeople in information technology Businesspeople in coffee Chief executives in the automobile industry Indian founders of automobile manufacturers Bessemer Gold Medal Tata Group people Burials at Père Lachaise Cemetery Businesspeople from Paris Cathedral and John Connon School alumni Indian Freemasons Naturalised citizens of India
J. R. D. Tata
[ "Chemistry" ]
2,375
[ "Bessemer Gold Medal", "Chemical engineering awards" ]
177,633
https://en.wikipedia.org/wiki/Hypothetical%20syllogism
In classical logic, a hypothetical syllogism is a valid argument form, a deductive syllogism with a conditional statement for one or both of its premises. Ancient references point to the works of Theophrastus and Eudemus for the first investigation of this kind of syllogism.

Types
Hypothetical syllogisms come in two types: mixed and pure. A mixed hypothetical syllogism has two premises: one conditional statement and one statement that either affirms or denies the antecedent or consequent of that conditional statement. For example,

If P, then Q.
P.
∴ Q.

In this example, the first premise is a conditional statement in which "P" is the antecedent and "Q" is the consequent. The second premise "affirms" the antecedent. The conclusion, that the consequent must be true, is deductively valid.

A mixed hypothetical syllogism has four possible forms, two of which are valid, while the other two are invalid. A valid mixed hypothetical syllogism either affirms the antecedent (modus ponens) or denies the consequent (modus tollens). An invalid hypothetical syllogism either affirms the consequent (fallacy of the converse) or denies the antecedent (fallacy of the inverse).

A pure hypothetical syllogism is a syllogism in which both premises and the conclusion are all conditional statements. The antecedent of one premise must match the consequent of the other for the argument to be valid; the conclusion then takes the remaining antecedent as its antecedent and the remaining consequent as its consequent:

If P, then Q.
If Q, then R.
∴ If P, then R.

An example in English:

If I do not wake up, then I cannot go to work.
If I cannot go to work, then I will not get paid.
Therefore, if I do not wake up, then I will not get paid.

Propositional logic
In propositional logic, hypothetical syllogism is the name of a valid rule of inference (often abbreviated HS and sometimes also called the chain argument, chain rule, or the principle of transitivity of implication). The rule may be stated:

P → Q, Q → R ⊢ P → R

In other words, whenever instances of "P → Q" and "Q → R" appear on lines of a proof, "P → R" can be placed on a subsequent line.

Applicability
The rule of hypothetical syllogism holds in classical logic, intuitionistic logic, most systems of relevance logic, and many other systems of logic. However, it does not hold in all logics, including, for example, non-monotonic logic, probabilistic logic and default logic. The reason for this is that these logics describe defeasible reasoning, and conditionals that appear in real-world contexts typically allow for exceptions, default assumptions, ceteris paribus conditions, or just simple uncertainty. An example, derived from Ernest W. Adams:

(1) If Jones wins the election, Smith will retire after the election.
(2) If Smith dies before the election, Jones will win the election.
(3) If Smith dies before the election, Smith will retire after the election.

Clearly, (3) does not follow from (1) and (2). (1) is true by default, but fails to hold in the exceptional circumstances of Smith dying. In practice, real-world conditionals always tend to involve default assumptions or contexts, and it may be infeasible or even impossible to specify all the exceptional circumstances in which they might fail to be true. For similar reasons, the rule of hypothetical syllogism does not hold for counterfactual conditionals.
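Before turning to the formal notation below, note that the validity of the classical propositional form can be checked mechanically by enumerating its truth table. The following Python sketch is an illustration of that check (the function names are choices made for the example, not drawn from any cited source):

```python
from itertools import product

def implies(a: bool, b: bool) -> bool:
    # Material conditional: a -> b is false only when a is true and b is false.
    return (not a) or b

def hypothetical_syllogism(p: bool, q: bool, r: bool) -> bool:
    # ((P -> Q) and (Q -> R)) -> (P -> R)
    return implies(implies(p, q) and implies(q, r), implies(p, r))

# A formula is a tautology iff it is true on every row of its truth table.
assert all(hypothetical_syllogism(p, q, r)
           for p, q, r in product([False, True], repeat=3))

# By contrast, affirming the consequent, ((P -> Q) and Q) -> P, is not a
# tautology: it fails on the row P = False, Q = True.
assert not implies(implies(False, True) and True, False)

print("hypothetical syllogism: tautology; affirming the consequent: invalid")
```

The first assertion passes on all eight rows, confirming validity in classical logic; the second shows why the fallacy of the converse, mentioned above, fails.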
Formal notation
The hypothetical syllogism inference rule may be written in sequent notation, which amounts to a specialization of the cut rule:

P → Q, Q → R ⊢ P → R

where ⊢ is a metalogical symbol meaning that P → R is a syntactic consequence of P → Q and Q → R in some logical system; and expressed as a truth-functional tautology or theorem of propositional logic:

((P → Q) ∧ (Q → R)) → (P → R)

where P, Q, and R are propositions expressed in some formal system.

Proof
The tautology can be derived by conditional proof: assume P → Q and Q → R as premises and P as an additional assumption; Q follows from P and the first premise by modus ponens, R follows from Q and the second premise by modus ponens, and discharging the assumption yields P → R.

Alternative forms
An alternative form of hypothetical syllogism, more useful for classical propositional calculus systems with implication and negation (i.e. without the conjunction symbol), is the following:

(HS1) (Q → R) → ((P → Q) → (P → R))

Yet another form is:

(HS2) (P → Q) → ((Q → R) → (P → R))

Proof
An example of the proofs of these theorems in such systems is given below. We use two of the three axioms used in one of the popular systems described by Jan Łukasiewicz. The proofs rely on two of the three axioms of this system:

(A1) p → (q → p)
(A2) (p → (q → r)) → ((p → q) → (p → r))

The proof of (HS1) is as follows:

(1) ((p → (q → r)) → ((p → q) → (p → r))) → ((q → r) → ((p → (q → r)) → ((p → q) → (p → r)))) (instance of (A1))
(2) (p → (q → r)) → ((p → q) → (p → r)) (instance of (A2))
(3) (q → r) → ((p → (q → r)) → ((p → q) → (p → r))) (from (1) and (2) by modus ponens)
(4) ((q → r) → ((p → (q → r)) → ((p → q) → (p → r)))) → (((q → r) → (p → (q → r))) → ((q → r) → ((p → q) → (p → r)))) (instance of (A2))
(5) ((q → r) → (p → (q → r))) → ((q → r) → ((p → q) → (p → r))) (from (3) and (4) by modus ponens)
(6) (q → r) → (p → (q → r)) (instance of (A1))
(7) (q → r) → ((p → q) → (p → r)) (from (5) and (6) by modus ponens)

The proof of (HS2) is given here.

As a metatheorem
Whenever we have two theorems of the form (T1) q → r and (T2) p → q, we can prove p → r by the following steps:

(1) (q → r) → ((p → q) → (p → r)) (instance of the theorem proved above)
(2) q → r (instance of (T1))
(3) (p → q) → (p → r) (from (1) and (2) by modus ponens)
(4) p → q (instance of (T2))
(5) p → r (from (3) and (4) by modus ponens)

See also Plausible reasoning Transitive relation Type of syllogism (disjunctive, hypothetical, legal, poly-, prosleptic, quasi-, statistical) References External links Philosophy Index: Hypothetical Syllogism Rules of inference Theorems in propositional logic Classical logic Syllogism Ancient Greek logic
Hypothetical syllogism
[ "Mathematics" ]
1,251
[ "Rules of inference", "Theorems in propositional logic", "Theorems in the foundations of mathematics", "Proof theory" ]
177,648
https://en.wikipedia.org/wiki/Personality
Personality is a person's collection of interrelated behavioral, cognitive, and emotional patterns that comprise their unique adjustment to life. These interrelated patterns are relatively stable, but can change over long time periods, driven by experiences and maturational processes, especially the adoption of social roles such as worker or parent. Personality differences are among the strongest predictors of virtually all key life outcomes, from academic, work, and relationship success and satisfaction to mental and somatic health, well-being, and longevity.

Although there is no consensus definition of personality, most theories focus on motivation and psychological interactions with one's environment. Trait-based personality theories, such as those defined by Raymond Cattell, define personality as traits that predict an individual's behavior. On the other hand, more behaviorally-based approaches define personality through learning and habits. Nevertheless, most theories view personality as relatively stable.

The study of the psychology of personality, called personality psychology, attempts to explain the tendencies that underlie differences in behavior. Psychologists have taken many different approaches to the study of personality, which can be organized across dispositional, biological, intrapsychic (psychodynamic), cognitive-experiential, social and cultural, and adjustment domains. The various approaches used to study personality today reflect the influence of the first theorists in the field, a group that includes Sigmund Freud, Alfred Adler, Gordon Allport, Hans Eysenck, Abraham Maslow, and Carl Rogers.

Measuring
Personality can be assessed through a variety of tests. Because personality is a complex idea, the dimensions of personality and the scales of such tests vary and are often poorly defined. Two main tools to measure personality are objective tests and projective measures. Examples of such tests are the Big Five Inventory (BFI), the Minnesota Multiphasic Personality Inventory (MMPI-2), the Rorschach inkblot test, the Neurotic Personality Questionnaire KON-2006, and Eysenck's Personality Questionnaire (EPQ-R). All of these tests are beneficial because they have both reliability and validity, two factors that make a test accurate. "Each item should be influenced to a degree by the underlying trait construct, giving rise to a pattern of positive intercorrelations so long as all items are oriented (worded) in the same direction." A more recent, though less well-known, measuring tool that psychologists use is the 16PF. It measures personality based on Cattell's 16-factor theory of personality. Psychologists also use it as a clinical measuring tool to diagnose psychiatric disorders and to help with prognosis and therapy planning.

Personality is frequently broken into factors or dimensions, statistically extracted from large questionnaires through factor analysis. When reduced to two dimensions, the dimensions of introversion–extraversion and neuroticism (emotionally unstable vs. stable) are often used, as first proposed by Eysenck in the 1960s.

Five-factor inventory
The approach with the most support in the field is called the Big Five, comprising openness to experience, conscientiousness, extraversion, agreeableness, and neuroticism (or emotional stability), often memorized as "OCEAN". These components are generally stable over time, and about half of the variance appears to be attributable to a person's genetics rather than the effects of one's environment.
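The quoted point about positive item intercorrelations in the Measuring section above is exactly what internal-consistency statistics such as Cronbach's alpha quantify. The Python sketch below is purely illustrative: the simulated data, the item count, and the noise level are assumptions chosen for the example, not values from any published inventory.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency reliability for an (n_respondents, k_items) array.

    alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
    """
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Simulated responses: one latent trait plus item-specific noise, so all
# items intercorrelate positively, as the quoted passage describes.
rng = np.random.default_rng(0)
trait = rng.normal(size=(500, 1))                      # latent trait score
items = trait + rng.normal(scale=1.0, size=(500, 8))   # 8 noisy indicators
print(f"alpha = {cronbach_alpha(items):.2f}")          # around 0.89 here
```

With eight items loading equally on a single latent trait, as simulated here, alpha comes out around 0.9; items that did not intercorrelate positively would drive it toward zero.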
These five factors are made up of two aspects each as well as many facets (e.g., openness splits into experiencing and intellect, which each further split into facets like fantasy and ideas). These five factors also show correlations with each other that suggest higher-order meta-traits (e.g., factor beta, which combines openness and extraversion to form a meta-trait associated with mental and physical exploration). There are several personality frameworks that recognize the Big Five factors, and there are thousands of measures of personality that can be used to measure specific facets as well as general traits. Some research has investigated whether the relationship between happiness and extraversion seen in adults can also be seen in children. The implications of these findings can help identify children who are more likely to experience episodes of depression and develop the types of treatment that such children are likely to respond to. In both children and adults, research shows that genetics, as opposed to environmental factors, exerts a greater influence on happiness levels. Personality is not stable over the course of a lifetime, but it changes much more quickly during childhood, so personality constructs in children are referred to as temperament. Temperament is regarded as the precursor to personality. Another interesting finding has been the link between acting extraverted and positive affect. Extraverted behaviors include acting talkative, assertive, adventurous, and outgoing. For the purposes of this study, positive affect is defined as the experience of happy and enjoyable emotions. This study investigated the effects of acting in a way that is counter to a person's dispositional nature. In other words, the study focused on the benefits and drawbacks of introverts (people who are shy, socially inhibited, and non-aggressive) acting extraverted, and of extraverts acting introverted. After acting extraverted, introverts' experience of positive affect increased, whereas extraverts seemed to experience lower levels of positive affect and suffered from the phenomenon of ego depletion. Ego depletion, or cognitive fatigue, is the use of one's energy to overtly act in a way that is contrary to one's inner disposition. When people act in a contrary fashion, they divert most, if not all, (cognitive) energy toward regulating this foreign style of behavior and attitudes. Because all available energy is being used to maintain this contrary behavior, the result is an inability to use any energy to make important or difficult decisions, plan for the future, control or regulate emotions, or perform effectively on other cognitive tasks. One question that has been posed is why extraverts tend to be happier than introverts. The two types of explanations that attempt to account for this difference are instrumental theories and temperamental theories. The instrumental theory suggests that extraverts end up making choices that place them in more positive situations, and they also react more strongly than introverts to positive situations. The temperamental theory suggests that extraverts have a disposition that generally leads them to experience a higher degree of positive affect. In their study of extraversion, Lucas and Baird found no statistically significant support for the instrumental theory, but did find that extraverts generally experience a higher level of positive affect.
Research has been done to uncover some of the mediators that are responsible for the correlation between extraversion and happiness. Self-esteem and self-efficacy are two such mediators. Self-efficacy is one's belief about one's ability to perform up to personal standards, to produce desired results, and to make important life decisions. Self-efficacy has been found to be related to the personality traits of extraversion and subjective well-being. Self-efficacy, however, only partially mediates the relationship between extraversion (and neuroticism) and subjective happiness. This implies that there are most likely other factors that mediate the relationship between subjective happiness and personality traits. Self-esteem may be another similar factor. Individuals with a greater degree of confidence about themselves and their abilities seem to have both higher degrees of subjective well-being and higher levels of extraversion. Other research has examined the phenomenon of mood maintenance as another possible mediator. Mood maintenance is the ability to maintain one's average level of happiness in the face of an ambiguous situation – meaning a situation that has the potential to engender either positive or negative emotions in different individuals. It has been found to be a stronger force in extraverts. This means that the happiness levels of extraverted individuals are less susceptible to the influence of external events. This finding implies that extraverts' positive moods last longer than those of introverts. Developmental biological model Modern conceptions of personality, such as the Temperament and Character Inventory, have suggested four basic temperaments that are thought to reflect basic and automatic responses to danger and reward that rely on associative learning. The four temperaments – harm avoidance, reward dependence, novelty seeking, and persistence – are somewhat analogous to the ancient conceptions of the melancholic, sanguine, choleric, and phlegmatic personality types, although the temperaments reflect dimensions rather than distinct categories. The harm avoidance trait has been associated with increased reactivity in insular and amygdala salience networks, as well as reduced 5-HT2 receptor binding peripherally and reduced GABA concentrations. Novelty seeking has been associated with reduced activity in insular salience networks and increased striatal connectivity. Novelty seeking correlates with dopamine synthesis capacity in the striatum and reduced autoreceptor availability in the midbrain. Reward dependence has been linked with the oxytocin system, with increased concentrations of plasma oxytocin being observed, as well as increased volume in oxytocin-related regions of the hypothalamus. Persistence has been associated with increased striatal-mPFC connectivity, increased activation of ventral striatal-orbitofrontal-anterior cingulate circuits, as well as increased salivary amylase levels indicative of increased noradrenergic tone. Environmental influences It has been shown that personality traits are more malleable by environmental influences than researchers originally believed. Personality differences predict the occurrence of life experiences. One study has shown how the home environment, specifically the types of parents a person has, can affect and shape their personality. Mary Ainsworth's strange situation experiment showcased how babies reacted to having their mother leave them alone in a room with a stranger.
The different styles of attachment, labeled by Ainsworth, were secure, ambivalent, avoidant, and disorganized. Children who were securely attached tend to be more trusting, sociable, and confident in their day-to-day lives. Children who were disorganized were reported to have higher levels of anxiety, anger, and risk-taking behavior. Judith Rich Harris's group socialization theory postulates that an individual's peer groups, rather than parental figures, are the primary influence on personality and behavior in adulthood. Intra- and intergroup processes, not dyadic relationships such as parent-child relationships, are responsible for the transmission of culture and for environmental modification of children's personality characteristics. Thus, this theory points to the peer group as the environmental influence on a child's personality, rather than the parental style or home environment. Tetsuya Kawamoto's Personality Change from Life Experiences: Moderation Effect of Attachment Security described some significant laboratory tests. The study mainly focused on the effects of life experiences on change in personality. The assessments suggested that "the accumulation of small daily experiences may work for the personality development of university students and that environmental influences may vary by individual susceptibility to experiences, like attachment security". Some studies suggest that a shared family environment between siblings has less influence on personality than the individual experiences of each child. Identical twins have similar personalities largely because they share the same genetic makeup rather than their shared environment. Cross-cultural studies Personality can be distinguished from more dispositional temperaments as reflecting adjustment to the culture in which one lives and grows, such as what to be ashamed or proud about, and cultural values. Many personality characteristics are human universals, but other elements have proven to be unique to specific cultures, and "the Big Five" have shown clear cross-cultural applicability. Cross-cultural assessment depends on the universality of personality traits, which is whether there are common traits among humans regardless of culture or other factors. If there is a common foundation of personality, then it can be studied on the basis of human traits rather than within certain cultures. This can be measured by comparing whether assessment tools are measuring similar constructs across countries or cultures. Two approaches to researching personality are looking at emic and etic traits. Emic traits are constructs unique to each culture, which are determined by local customs, thoughts, beliefs, and characteristics. Etic traits are considered universal constructs, which establish traits that are evident across cultures and represent a biological basis of human personality. If personality traits are unique to individual cultures, then different traits should be apparent in different cultures. However, the idea that personality traits are universal across cultures is supported by the establishment of the Five-Factor Model of personality across multiple translations of the NEO-PI-R, which is one of the most widely used personality measures. When the NEO-PI-R was administered to 7,134 people across six languages, the results showed a similar pattern of the same five underlying constructs that are found in the American factor structure.
Similar results were found using the Big Five Inventory (BFI), as it was administered in 56 nations across 28 languages. The five factors continued to be supported both conceptually and statistically across major regions of the world, suggesting that these underlying factors are common across cultures. There are some differences across cultures, but they may be a consequence of using a lexical approach to study personality structures, as language has limitations in translation and different cultures have unique words to describe emotions or situations. Differences across cultures could be due to real cultural differences, but they could also be consequences of poor translations, biased sampling, or differences in response styles across cultures. Examining personality questionnaires developed within a culture can also be useful evidence for the universality of traits across cultures, as the same underlying factors can still be found. Results from several European and Asian studies have found overlapping dimensions with the Five-Factor Model as well as additional culture-unique dimensions. Finding similar factors across cultures provides support for the universality of personality trait structure, but more research is necessary to gain stronger support. Culture is an important factor in shaping the personality of individuals. Psychologists have found that cultural norms, beliefs, and practices shape the way people interact and behave with others, which can impact personality development (Cheung et al., 2011). Studies have identified cultural differences in personality traits such as extraversion, agreeableness, and conscientiousness, indicating that culture influences personality development (Allik & McCrae, 2004). For example, Western cultures value individualism, independence, and assertiveness, which are reflected in personality traits such as extraversion. In contrast, Eastern cultures value collectivism, cooperation, and social harmony, which are reflected in personality traits such as agreeableness (Cheung et al., 2011). Historical development of concept The modern sense of individual personality is a result of the shifts in culture originating in the Renaissance, an essential element in modernity. In contrast, the medieval European's sense of self was linked to a network of social roles: "the household, the kinship network, the guild, the corporation – these were the building blocks of personhood". Stephen Greenblatt observes, in recounting the recovery (1417) and career of Lucretius' poem De rerum natura: "at the core of the poem lay key principles of a modern understanding of the world." "Dependent on the family, the individual alone was nothing," Jacques Gélis observes. "The characteristic mark of the modern man has two parts: one internal, the other external; one dealing with his environment, the other with his attitudes, values, and feelings." Rather than being linked to a network of social roles, the modern man is largely influenced by environmental factors such as "urbanization, education, mass communication, industrialization, and politicization." Temperament and philosophy William James (1842–1910) argued that temperament explains a great deal of the controversies in the history of philosophy, since it is a very influential premise in the arguments of philosophers. Despite seeking only impersonal reasons for their conclusions, James argued, the temperament of philosophers influenced their philosophy. Temperament thus conceived is tantamount to a bias.
Such bias, James explained, was a consequence of the trust philosophers place in their own temperament. James thought the significance of his observation lay in the premise that in philosophy an objective measure of success is whether a philosophy is peculiar to its philosopher or not, and whether a philosopher is dissatisfied with any other way of seeing things or not. Mental make-up James argued that temperament may be the basis of several divisions in academia, but focused on philosophy in his 1907 lectures on Pragmatism. In fact, James' 1907 lectures fashioned a sort of trait theory of the empiricist and rationalist camps of philosophy. As in most modern trait theories, the traits of each camp are described by James as distinct and opposite, and may be possessed in different proportions on a continuum, thus characterizing the personality of philosophers of each camp. The "mental make-up" (i.e. personality) of rationalist philosophers is described as "tender-minded" and "going by principles", and that of empiricist philosophers is described as "tough-minded" and "going by facts". James distinguishes each not only in terms of the philosophical claims they made in 1907, but by arguing that such claims are made primarily on the basis of temperament. Furthermore, such categorization was only incidental to James' purpose of explaining his pragmatist philosophy and is not exhaustive. Empiricists and rationalists According to James, the temperament of rationalist philosophers differed fundamentally from the temperament of empiricist philosophers of his day. The tendency of rationalist philosophers toward refinement and superficiality never satisfied an empiricist temper of mind. Rationalism leads to the creation of closed systems, and such optimism is considered shallow by the fact-loving mind, for whom perfection is far off. Rationalism is regarded as pretension, and a temperament most inclined to abstraction. Empiricists, on the other hand, stick with the external senses rather than logic. British empiricist John Locke's (1632–1704) explanation of personal identity provides an example of what James referred to. Locke explains the identity of a person, i.e. personality, on the basis of a precise definition of identity, by which the meaning of identity differs according to what it is being applied to. The identity of a person is quite distinct from the identity of a man, woman, or substance, according to Locke. Locke concludes that consciousness is personality because it "always accompanies thinking, it is that which makes everyone to be what he calls self," and remains constant in different places at different times. Rationalists conceived of the identity of persons differently from empiricists such as Locke, who distinguished the identity of substance, person, and life. According to Locke, René Descartes (1596–1650) agreed only insofar as he did not argue that one immaterial spirit is the basis of the person "for fear of making brutes thinking things too." According to James, Locke tolerated arguments that a soul was behind the consciousness of any person. However, Locke's successor David Hume (1711–1776), and empirical psychologists after him, denied the soul except as a term to describe the cohesion of inner lives. However, some research suggests Hume excluded personal identity from his opus An Enquiry Concerning Human Understanding because he thought his argument was sufficient but not compelling.
Descartes himself distinguished active and passive faculties of mind, each contributing to thinking and consciousness in different ways. The passive faculty, Descartes argued, simply receives, whereas the active faculty produces and forms ideas, but does not presuppose thought, and thus cannot be within the thinking thing. The active faculty must not be within the self, because ideas are produced without any awareness of them, and are sometimes produced against one's will. Rationalist philosopher Benedictus Spinoza (1632–1677) argued that ideas are the first element constituting the human mind, but existed only for actually existing things. In other words, ideas of non-existent things are without meaning for Spinoza, because an idea of a non-existent thing cannot exist. Further, Spinoza's rationalism argued that the mind does not know itself, except insofar as it perceives the "ideas of the modifications of body", in describing its external perceptions, or perceptions from without. On the contrary, from within, Spinoza argued, perceptions connect various ideas clearly and distinctly. The mind is not the free cause of its actions for Spinoza. Spinoza equates the will with the understanding, and explains the common distinction between them as an error which results from the individual's misunderstanding of the nature of thinking. Biology The biological basis of personality is the theory that anatomical structures such as genes, hormones, or brain areas underlie individual differences in personality. This stems from neuropsychology, which studies how the structure of the brain relates to various psychological processes and behaviors. For instance, in human beings, the frontal lobes are responsible for foresight and anticipation, and the occipital lobes are responsible for processing visual information. In addition, certain physiological functions such as hormone secretion also affect personality. For example, the hormone testosterone is important for sociability, affectivity, aggressiveness, and sexuality. Additionally, studies show that the expression of a personality trait depends on the volume of the brain cortex it is associated with. Personology Personology confers a multidimensional, complex, and comprehensive approach to personality. According to Henry A. Murray, personology is the branch of psychology which principally concerns itself with the study of human lives and the factors that influence their course. From a holistic perspective, personology studies personality as a whole, as a system, but at the same time through all its components, levels, and spheres. See also Personality in animals Association for Research in Personality, an academic organization Differential psychology Human variability Offender profiling Personality and Individual Differences, a scientific journal published bi-monthly by Elsevier Personality computing Personality crisis Personality disorder Personality rights, consisting of the right to individual publicity and privacy Trait theory Two-factor models of personality References Further reading Conceptions of self Human development Metaphysics of mind
Personality
[ "Biology" ]
4,537
[ "Behavior", "Human development", "Behavioural sciences", "Personality", "Human behavior" ]
177,652
https://en.wikipedia.org/wiki/List%20of%20psychological%20research%20methods
A wide range of research methods are used in psychology. These methods vary by the sources from which information is obtained, how that information is sampled, and the types of instruments that are used in data collection. Methods also vary by whether they collect qualitative data, quantitative data, or both. Qualitative psychological research findings are not arrived at by statistical or other quantitative procedures. Quantitative psychological research findings result from mathematical modeling and statistical estimation or statistical inference. The two types of research differ in the methods employed, rather than the topics they focus on. There are three main types of psychological research: Correlational research Descriptive research Experimental research Common methods Common research designs and data collection methods include: Archival research Case study, which uses different research methods (e.g. interview, observation, self-report questionnaire) with a single case or a small number of cases. Computer simulation (modeling) Ethnography Event sampling methodology, also referred to as experience sampling methodology, diary study, or ecological momentary assessment Experiment, often with separate treatment and control groups (see scientific control and design of experiments). See Experimental psychology for many details. Field experiment Focus group Interview, which can be structured or unstructured. Meta-analysis Neuroimaging and other psychophysiological methods Observational study, which can be naturalistic (see natural experiment), participant, or controlled. Program evaluation Quasi-experiment Self-report inventory Survey, often with a random sample (see survey sampling) Twin study Research designs vary according to the period(s) of time over which data are collected: Retrospective cohort study: Participants are chosen, then data are collected about their past experiences. Prospective cohort study: Participants are recruited prior to the proposed independent effects being administered or occurring. Cross-sectional study: A population is sampled on all proposed measures at one point in time. Longitudinal study: Participants are studied at multiple time points. May address the cohort effect and help to indicate causal directions of effects. Cross-sequential study: Groups of different ages are studied at multiple time points; combines cross-sectional and longitudinal designs. Research in psychology has been conducted with both animals and human subjects: Animal study Human subject research References Stangor, C. (2007). Research Methods for the Behavioral Sciences. 3rd ed. Boston, MA: Houghton Mifflin Company. Weathington, B.L., Cunningham, C.J.L., & Pittenger, D.P. (2010). Research Methods for the Behavioral and Social Sciences. Hoboken, NJ: John Wiley & Sons, Inc. Research methods Cognitive science lists Research-related lists Quantitative psychology
List of psychological research methods
[ "Mathematics" ]
528
[ "Applied mathematics", "Quantitative psychology" ]
177,668
https://en.wikipedia.org/wiki/Voronoi%20diagram
In mathematics, a Voronoi diagram is a partition of a plane into regions close to each of a given set of objects. It can also be classified as a tessellation. In the simplest case, these objects are just finitely many points in the plane (called seeds, sites, or generators). For each seed there is a corresponding region, called a Voronoi cell, consisting of all points of the plane closer to that seed than to any other. The Voronoi diagram of a set of points is dual to that set's Delaunay triangulation. The Voronoi diagram is named after mathematician Georgy Voronoy, and is also called a Voronoi tessellation, a Voronoi decomposition, a Voronoi partition, or a Dirichlet tessellation (after Peter Gustav Lejeune Dirichlet). Voronoi cells are also known as Thiessen polygons, after Alfred H. Thiessen. Voronoi diagrams have practical and theoretical applications in many fields, mainly in science and technology, but also in visual art. Simplest case In the simplest case, shown in the first picture, we are given a finite set of points {p1, ..., pn} in the Euclidean plane. In this case, each point pk has a corresponding cell Rk consisting of the points in the Euclidean plane for which pk is the nearest site: the distance to pk is less than or equal to the minimum distance to any other site pj. For one other site pj, the points that are closer to pk than to pj, or equally distant, form a closed half-space, whose boundary is the perpendicular bisector of the line segment pk pj. The cell Rk is the intersection of all n − 1 of these half-spaces, and hence it is a convex polygon. When two cells in the Voronoi diagram share a boundary, it is a line segment, ray, or line, consisting of all the points in the plane that are equidistant to their two nearest sites. The vertices of the diagram, where three or more of these boundaries meet, are the points that have three or more equally distant nearest sites. Formal definition Let X be a metric space with distance function d. Let K be a set of indices and let (Pk), k ∈ K, be a tuple (indexed collection) of nonempty subsets (the sites) in the space X. The Voronoi cell, or Voronoi region, Rk, associated with the site Pk is the set of all points in X whose distance to Pk is not greater than their distance to the other sites Pj, where j is any index different from k. In other words, if d(x, A) = inf{d(x, a) : a ∈ A} denotes the distance between the point x and the subset A, then Rk = {x ∈ X : d(x, Pk) ≤ d(x, Pj) for all j ≠ k}. The Voronoi diagram is simply the tuple of cells (Rk), k ∈ K. In principle, some of the sites can intersect and even coincide (an application is described below for sites representing shops), but usually they are assumed to be disjoint. In addition, infinitely many sites are allowed in the definition (this setting has applications in geometry of numbers and crystallography), but again, in many cases only finitely many sites are considered. In the particular case where the space is a finite-dimensional Euclidean space, each site is a point, and there are finitely many points, all of them different, the Voronoi cells are convex polytopes and they can be represented in a combinatorial way using their vertices, sides, two-dimensional faces, etc. Sometimes the induced combinatorial structure is referred to as the Voronoi diagram. In general, however, the Voronoi cells may not be convex or even connected. In the usual Euclidean space, we can rewrite the formal definition in usual terms. Each Voronoi polygon Rk is associated with a generator point Pk. Let X be the set of all points in the Euclidean space. Let P1 be a point that generates its Voronoi region R1, P2 one that generates R2, and P3 one that generates R3, and so on. Then, as expressed by Tran et al., "all locations in the Voronoi polygon are closer to the generator point of that polygon than any other generator point in the Voronoi diagram in Euclidean plane".
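In practice, such cells are rarely computed from the definition directly; computational-geometry libraries construct the whole diagram at once. The following is a minimal sketch using SciPy's Qhull-based Voronoi class; the five sites are arbitrary illustrative data, not taken from the article.

```python
# Minimal sketch: computing a planar Voronoi diagram with SciPy
# (scipy.spatial.Voronoi wraps the Qhull library).
import numpy as np
from scipy.spatial import Voronoi

points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0],
                   [1.0, 1.0], [0.5, 0.5]])  # the seeds/sites
vor = Voronoi(points)

print(vor.vertices)  # coordinates of the Voronoi vertices
for i, p in enumerate(points):
    region = vor.regions[vor.point_region[i]]
    # index -1 marks a vertex at infinity, i.e. an unbounded cell
    print(p, "->", "unbounded" if -1 in region else region)
```

Unbounded cells arise exactly for sites on the boundary of the convex hull, which is why the region lists must be checked for the sentinel index −1.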
Illustration As a simple illustration, consider a group of shops in a city. Suppose we want to estimate the number of customers of a given shop. With all else being equal (price, products, quality of service, etc.), it is reasonable to assume that customers choose their preferred shop simply by distance considerations: they will go to the shop located nearest to them. In this case the Voronoi cell of a given shop can be used for giving a rough estimate of the number of potential customers going to this shop (which is modeled by a point in our city). For most cities, the distance between points can be measured using the familiar Euclidean distance, d = √((x1 − x2)^2 + (y1 − y2)^2), or the Manhattan distance, d = |x1 − x2| + |y1 − y2|. The corresponding Voronoi diagrams look different for different distance metrics. Properties The dual graph for a Voronoi diagram (in the case of a Euclidean space with point sites) corresponds to the Delaunay triangulation for the same set of points. The closest pair of points corresponds to two adjacent cells in the Voronoi diagram. If the setting is the Euclidean plane and a discrete set of points is given, then two points of the set are adjacent on the convex hull if and only if their Voronoi cells share an infinitely long side. If the space is a normed space and the distance to each site is attained (e.g., when a site is a compact set or a closed ball), then each Voronoi cell can be represented as a union of line segments emanating from the sites. As shown there, this property does not necessarily hold when the distance is not attained. Under relatively general conditions (the space is a possibly infinite-dimensional uniformly convex space, there can be infinitely many sites of a general form, etc.) Voronoi cells enjoy a certain stability property: a small change in the shapes of the sites, e.g., a change caused by some translation or distortion, yields a small change in the shape of the Voronoi cells. This is the geometric stability of Voronoi diagrams. As shown there, this property does not hold in general, even if the space is two-dimensional (but non-uniformly convex, and, in particular, non-Euclidean) and the sites are points. History and research Informal use of Voronoi diagrams can be traced back to Descartes in 1644. Peter Gustav Lejeune Dirichlet used two-dimensional and three-dimensional Voronoi diagrams in his study of quadratic forms in 1850. British physician John Snow used a Voronoi-like diagram in 1854 to illustrate how the majority of people who died in the Broad Street cholera outbreak lived closer to the infected Broad Street pump than to any other water pump. Voronoi diagrams are named after Georgy Feodosievych Voronoy, who defined and studied the general n-dimensional case in 1908. Voronoi diagrams that are used in geophysics and meteorology to analyse spatially distributed data are called Thiessen polygons, after the American meteorologist Alfred H. Thiessen, who used them to estimate rainfall from scattered measurements in 1911. Other equivalent names for this concept (or particular important cases of it): Voronoi polyhedra, Voronoi polygons, domain(s) of influence, Voronoi decomposition, Voronoi tessellation(s), Dirichlet tessellation(s). Examples Voronoi tessellations of regular lattices of points in two or three dimensions give rise to many familiar tessellations.
A 2D lattice gives an irregular honeycomb tessellation, with equal hexagons with point symmetry; in the case of a regular triangular lattice it is regular; in the case of a rectangular lattice the hexagons reduce to rectangles in rows and columns; a square lattice gives the regular tessellation of squares; note that the rectangles and the squares can also be generated by other lattices (for example the lattice defined by the vectors (1,0) and (1/2,1/2) gives squares). A simple cubic lattice gives the cubic honeycomb. A hexagonal close-packed lattice gives a tessellation of space with trapezo-rhombic dodecahedra. A face-centred cubic lattice gives a tessellation of space with rhombic dodecahedra. A body-centred cubic lattice gives a tessellation of space with truncated octahedra. Parallel planes with regular triangular lattices aligned with each other's centers give the hexagonal prismatic honeycomb. Certain body-centered tetragonal lattices give a tessellation of space with rhombo-hexagonal dodecahedra. For the set of points (x, y) with x in a discrete set X and y in a discrete set Y, we get rectangular tiles with the points not necessarily at their centers. Higher-order Voronoi diagrams Although a normal Voronoi cell is defined as the set of points closest to a single point in S, an nth-order Voronoi cell is defined as the set of points having a particular set of n points in S as its n nearest neighbors. Higher-order Voronoi diagrams also subdivide space. Higher-order Voronoi diagrams can be generated recursively. To generate the nth-order Voronoi diagram from set S, start with the (n − 1)th-order diagram and replace each cell generated by X = {x1, x2, ..., xn−1} with a Voronoi diagram generated on the set S − X. Farthest-point Voronoi diagram For a set of n points, the (n − 1)th-order Voronoi diagram is called a farthest-point Voronoi diagram. For a given set of points S = {p1, p2, ..., pn}, the farthest-point Voronoi diagram divides the plane into cells in which the same point of S is the farthest point. A point of S has a cell in the farthest-point Voronoi diagram if and only if it is a vertex of the convex hull of S. Let H = {h1, h2, ..., hk} be the set of vertices of the convex hull of S; then the farthest-point Voronoi diagram is a subdivision of the plane into k cells, one for each point in H, with the property that a point q lies in the cell corresponding to a site hi if and only if d(q, hi) > d(q, pj) for each pj ∈ S with hi ≠ pj, where d(p, q) is the Euclidean distance between two points p and q. The boundaries of the cells in the farthest-point Voronoi diagram have the structure of a topological tree, with infinite rays as its leaves. Every finite tree is isomorphic to the tree formed in this way from a farthest-point Voronoi diagram.
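The farthest-point rule is simple enough to check numerically. A minimal sketch with arbitrary illustrative sites (not from the article):

```python
# Minimal sketch: the owner of a query point q in the farthest-point
# Voronoi diagram is the site at maximum distance from q; per the
# property above, only vertices of the convex hull can ever win.
import numpy as np

sites = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0],
                  [5.0, 4.0], [2.0, 1.5]])  # [2, 1.5] is interior
q = np.array([1.0, 1.0])

owner = np.argmax(np.linalg.norm(sites - q, axis=1))
print(sites[owner])  # [5. 4.], a hull vertex; the interior site never wins
```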
Generalizations and variations As implied by the definition, Voronoi cells can be defined for metrics other than Euclidean, such as the Mahalanobis distance or Manhattan distance. However, in these cases the boundaries of the Voronoi cells may be more complicated than in the Euclidean case, since the equidistant locus for two points may fail to be a subspace of codimension 1, even in the two-dimensional case. A weighted Voronoi diagram is one in which the function of a pair of points used to define a Voronoi cell is a distance function modified by multiplicative or additive weights assigned to generator points. In contrast to the case of Voronoi cells defined using a distance which is a metric, in this case some of the Voronoi cells may be empty. A power diagram is a type of Voronoi diagram defined from a set of circles using the power distance; it can also be thought of as a weighted Voronoi diagram in which a weight defined from the radius of each circle is added to the squared Euclidean distance from the circle's center. The Voronoi diagram of n points in d-dimensional space can have O(n^⌈d/2⌉) vertices, requiring the same bound for the amount of memory needed to store an explicit description of it. Therefore, Voronoi diagrams are often not feasible for moderate or high dimensions. A more space-efficient alternative is to use approximate Voronoi diagrams. Voronoi diagrams are also related to other geometric structures such as the medial axis (which has found applications in image segmentation, optical character recognition, and other computational applications), straight skeleton, and zone diagrams. Applications Meteorology/Hydrology It is used in meteorology and engineering hydrology to find the weights for precipitation data of stations over an area (watershed). The points generating the polygons are the various stations that record precipitation data. Perpendicular bisectors are drawn to the line joining any two stations. This results in the formation of polygons around the stations. The area touching a station point is known as the influence area of that station. The average precipitation is calculated by the formula Pavg = Σ(Ai × Pi) / Σ Ai, where Ai is the influence area of station i and Pi is the precipitation recorded at that station. Humanities and social sciences In classical archaeology, specifically art history, the symmetry of statue heads is analyzed to determine the type of statue a severed head may have belonged to. An example that used Voronoi cells was the identification of the Sabouroff head, based on a high-resolution polygon mesh. In dialectometry, Voronoi cells are used to indicate a supposed linguistic continuity between survey points. In political science, Voronoi diagrams have been used to study multi-dimensional, multi-party competition. Natural sciences In biology, Voronoi diagrams are used to model a number of different biological structures, including cells and bone microarchitecture. Indeed, Voronoi tessellations work as a geometrical tool to understand the physical constraints that drive the organization of biological tissues. In hydrology, Voronoi diagrams are used to calculate the rainfall of an area, based on a series of point measurements. In this usage, they are generally referred to as Thiessen polygons. In ecology, Voronoi diagrams are used to study the growth patterns of forests and forest canopies, and may also be helpful in developing predictive models for forest fires. In ethology, Voronoi diagrams are used to model domains of danger in the selfish herd theory. In computational chemistry, ligand-binding sites are transformed into Voronoi diagrams for machine learning applications (e.g., to classify binding pockets in proteins). In other applications, Voronoi cells defined by the positions of the nuclei in a molecule are used to compute atomic charges. This is done using the Voronoi deformation density method. In astrophysics, Voronoi diagrams are used to generate adaptive smoothing zones on images, adding signal fluxes on each one. The main objective of these procedures is to maintain a relatively constant signal-to-noise ratio on all the images.
In computational fluid dynamics, the Voronoi tessellation of a set of points can be used to define the computational domains used in finite volume methods, e.g. as in the moving-mesh cosmology code AREPO. In computational physics, Voronoi diagrams are used to calculate profiles of an object with shadowgraphy and proton radiography in high energy density physics. Health In medical diagnosis, models of muscle tissue, based on Voronoi diagrams, can be used to detect neuromuscular diseases. In epidemiology, Voronoi diagrams can be used to correlate sources of infections in epidemics. One of the early applications of Voronoi diagrams was implemented by John Snow to study the 1854 Broad Street cholera outbreak in Soho, England. He showed the correlation between residential areas on the map of Central London whose residents had been using a specific water pump, and the areas with the most deaths due to the outbreak. Engineering In polymer physics, Voronoi diagrams can be used to represent free volumes of polymers. In materials science, polycrystalline microstructures in metallic alloys are commonly represented using Voronoi tessellations. In island growth, the Voronoi diagram is used to estimate the growth rate of individual islands. In solid-state physics, the Wigner-Seitz cell is the Voronoi tessellation of a solid, and the Brillouin zone is the Voronoi tessellation of reciprocal (wavenumber) space of crystals which have the symmetry of a space group. In aviation, Voronoi diagrams are superimposed on oceanic plotting charts to identify the nearest airfield for in-flight diversion (see ETOPS), as an aircraft progresses through its flight plan. In architecture, Voronoi patterns were the basis for the winning entry for the redevelopment of The Arts Centre Gold Coast. In urban planning, Voronoi diagrams can be used to evaluate the Freight Loading Zone system. In mining, Voronoi polygons are used to estimate the reserves of valuable materials, minerals, or other resources. Exploratory drillholes are used as the set of points in the Voronoi polygons. In surface metrology, Voronoi tessellation can be used for surface roughness modeling. In robotics, some of the control strategies and path planning algorithms of multi-robot systems are based on the Voronoi partitioning of the environment. Mathematics A point location data structure can be built on top of the Voronoi diagram in order to answer nearest neighbor queries, where one wants to find the object that is closest to a given query point. Nearest neighbor queries have numerous applications. For example, one might want to find the nearest hospital or the most similar object in a database. A large application is vector quantization, commonly used in data compression. In geometry, Voronoi diagrams can be used to find the largest empty circle amid a set of points, and in an enclosing polygon; e.g. to build a new supermarket as far as possible from all the existing ones, lying in a certain city. Voronoi diagrams together with farthest-point Voronoi diagrams are used for efficient algorithms to compute the roundness of a set of points. The Voronoi approach is also put to use in the evaluation of circularity/roundness while assessing the dataset from a coordinate-measuring machine. Zeroes of iterated derivatives of a rational function on the complex plane accumulate on the edges of the Voronoi diagram of the set of the poles (Pólya's shires theorem). Informatics In networking, Voronoi diagrams can be used in derivations of the capacity of a wireless network. In computer graphics, Voronoi diagrams are used to calculate 3D shattering / fracturing geometry patterns. They are also used to procedurally generate organic or lava-looking textures. In autonomous robot navigation, Voronoi diagrams are used to find clear routes. If the points are obstacles, then the edges of the graph will be the routes furthest from obstacles (and theoretically any collisions). In machine learning, Voronoi diagrams are used to do 1-NN classifications. In global scene reconstruction, including with random sensor sites and unsteady wake flow, geophysical data, and 3D turbulence data, Voronoi tessellations are used with deep learning. In user interface development, Voronoi patterns can be used to compute the best hover state for a given point. Algorithms Several efficient algorithms are known for constructing Voronoi diagrams, either directly (as the diagram itself) or indirectly by starting with a Delaunay triangulation and then obtaining its dual. Direct algorithms include Fortune's algorithm, an O(n log(n)) algorithm for generating a Voronoi diagram from a set of points in a plane. The Bowyer–Watson algorithm, an O(n log(n)) to O(n^2) algorithm for generating a Delaunay triangulation in any number of dimensions, can be used in an indirect algorithm for the Voronoi diagram. The Jump Flooding Algorithm can generate approximate Voronoi diagrams in constant time and is suited for use on commodity graphics hardware. Lloyd's algorithm and its generalization via the Linde–Buzo–Gray algorithm (aka k-means clustering) use the construction of Voronoi diagrams as a subroutine. These methods alternate between steps in which one constructs the Voronoi diagram for a set of seed points, and steps in which the seed points are moved to new locations that are more central within their cells. These methods can be used in spaces of arbitrary dimension to iteratively converge towards a specialized form of the Voronoi diagram, called a centroidal Voronoi tessellation, where the sites have been moved to points that are also the geometric centers of their cells.
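Lloyd's algorithm, described just above, is short enough to sketch. The version below approximates each centroid step by Monte Carlo sampling over the unit square rather than integrating exact cell polygons; the sample count and iteration budget are arbitrary illustrative choices.

```python
# Minimal sketch of Lloyd's algorithm: repeatedly move each seed to the
# centroid of its Voronoi cell, with cells approximated by assigning a
# dense sample of the unit square to its nearest seed (this sidesteps
# the handling of unbounded cells).
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
seeds = rng.random((10, 2))          # initial sites in the unit square
samples = rng.random((100_000, 2))   # dense sample of the domain

for _ in range(50):
    # Assign each sample to its nearest seed, i.e. its Voronoi cell.
    _, owner = cKDTree(seeds).query(samples)
    # Move each seed to the centroid of the samples in its cell.
    for k in range(len(seeds)):
        cell = samples[owner == k]
        if len(cell):
            seeds[k] = cell.mean(axis=0)

print(seeds)  # approximately a centroidal Voronoi tessellation
```

After enough iterations each site sits roughly at the centroid of its own cell, which is the fixed point defining a centroidal Voronoi tessellation.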
Voronoi in 3D Voronoi meshes can also be generated in 3D. See also Delaunay triangulation Map segmentation Natural element method Natural neighbor interpolation Nearest-neighbor interpolation Power diagram Voronoi pole Notes References Includes a description of Fortune's algorithm. External links Voronoi Diagrams in CGAL, the Computational Geometry Algorithms Library Demo program for SFTessellation algorithm, which creates a Voronoi diagram using a Steppe Fire Model Discrete geometry Computational geometry Eponymous diagrams Eponymous geometric shapes Ukrainian inventions Russian inventions
Voronoi diagram
[ "Mathematics", "Technology" ]
4,437
[ "Discrete mathematics", "Discrete geometry", "Computational mathematics", "Information systems", "Computational geometry", "Geographic information systems" ]
177,680
https://en.wikipedia.org/wiki/History%20of%20aviation
The history of aviation spans over two millennia, from the earliest innovations like kites and attempts at tower jumping to supersonic and hypersonic flight in powered, heavier-than-air jet aircraft. Kite flying in China, dating back several hundred years BC, is considered the earliest example of man-made flight. In the 15th century, Leonardo da Vinci created several flying machine designs incorporating aeronautical concepts, but they were unworkable due to the limitations of contemporary knowledge. In the late 18th century, the Montgolfier brothers invented the hot-air balloon, which soon led to manned flights. At almost the same time, the discovery of hydrogen gas led to the invention of the hydrogen balloon. Various theories in mechanics by physicists during the same period, such as fluid dynamics and Newton's laws of motion, led to the development of modern aerodynamics, most notably by Sir George Cayley. Balloons, both free-flying and tethered, began to be used for military purposes from the end of the 18th century, with France establishing balloon companies during the French Revolution. In the 19th century, especially the second half, experiments with gliders provided the basis for learning the dynamics of winged aircraft, most notably by Cayley, Otto Lilienthal, and Octave Chanute. By the early 20th century, advances in engine technology and aerodynamics made controlled, powered, manned heavier-than-air flight possible for the first time. In 1903, following their pioneering research and experiments with wing design and aircraft control, the Wright brothers successfully incorporated all of the required elements to create and fly the first aeroplane. The basic configuration with its characteristic cruciform tail was established by 1909, followed by rapid design and performance improvements aided by the development of more powerful engines. The first vessels of the air were the rigid steerable balloons pioneered by Ferdinand von Zeppelin that became synonymous with airships and dominated long-distance flight until the 1930s, when large flying boats became popular for trans-oceanic routes. After World War II, the flying boats were in turn replaced by airplanes operating from land, made far more capable first by improved propeller engines, then by jet engines, which revolutionized both civilian air travel and military aviation. In the latter half of the 20th century, the development of digital electronics led to major advances in flight instrumentation and "fly-by-wire" systems. The 21st century has seen the widespread use of pilotless drones for military, commercial, and recreational purposes. With computerized controls, inherently unstable aircraft designs, such as flying wings, have also become practical. Etymology The term aviation is a noun of action from the stem of Latin avis "bird" with the suffix -ation meaning action or progress. It was coined in 1863 by French pioneer Guillaume Joseph Gabriel de La Landelle (1812–1886) in Aviation ou Navigation aérienne sans ballons. Primitive beginnings Tower jumping Since ancient times, there have been stories of men strapping birdlike wings, stiffened cloaks, or other devices to themselves and attempting to fly, typically by jumping off a tower. The Greek legends of Daedalus and Icarus are some of the earliest known. Others originated in ancient Asia and the European Middle Ages. During this early period, the concepts of lift, stability, and control were not well understood, and most attempts resulted in serious injuries or death.
The Andalusian scientist Abbas ibn Firnas (810–887 AD) attempted to fly in Córdoba, Spain, by covering his body with vulture feathers and attaching two wings to his arms. The 17th-century Algerian historian Ahmed Mohammed al-Maqqari, quoting a poem by Muhammad I of Córdoba's 9th-century court poet Mu'min ibn Said, recounts that Firnas flew some distance before landing with some injuries, attributed to his lacking a tail (as birds use them to land). In the 12th century, William of Malmesbury wrote that Eilmer of Malmesbury, an 11th-century Benedictine monk, attached wings to his hands and feet and flew a short distance, but broke both legs while landing, also having neglected to make himself a tail. Many others made well-documented jumps in the following centuries. As late as 1811, Albrecht Berblinger constructed an ornithopter and jumped into the Danube at Ulm. Kites The kite may have been the first form of man-made heavier-than-air craft. It was invented in China, possibly as far back as the 5th century BC, by Mozi (Mo Di) and Lu Ban (Gongshu Ban). Evidence for this lies in the materials commonly found in China that are ideal for kite building: "silk fabric for sail material, fine, high-tensile-strength silk for flying line, and resilient bamboo for…framework". These materials were well suited to kite building largely because of their structure. Bamboo, being strong and hollow, resembles the hollow bones of birds, which reduce weight and make flight easier. Some kites were fitted with strings and whistles to make musical sounds while flying. Ancient and mediaeval Chinese sources describe kites being used to measure distances, test the wind, lift men, signal, and communicate and send messages. Later designs often depicted images of flying insects, birds, and other beasts, both real and mythical. Kites spread from China around the world. After being introduced into the rest of Asia, the kite further evolved into the fighter kite, which has an abrasive line used to cut down other kites. The most notable fighter kite designs originated in India and Japan. Man-lifting kites Man-lifting kites are believed to have been used extensively in ancient China for civil and military purposes, and were sometimes enforced as a punishment. An early recorded flight was that of the prisoner Yuan Huangtou, a Chinese prince, in the 6th century AD. Stories of man-lifting kites can be found in Japan, following the introduction of the kite from China around the seventh century AD. For a period, there was a Japanese law against man-carrying kites. Rotor wings The use of a rotor for vertical flight has existed since 400 BC in the form of the bamboo-copter, an ancient Chinese toy. The similar "moulinet à noix" (rotor on a nut) appeared in Europe in the 14th century AD. Hot air balloons Since ancient times, the Chinese understood that hot air rises and applied the principle to a type of small hot air balloon called a sky lantern. A sky lantern consists of a paper balloon under or just inside which a small lamp is placed. Sky lanterns are traditionally launched for recreation and during festivals. According to Joseph Needham, such lanterns were found in China as early as the 3rd century BC. Their military use is attributed to the general Zhuge Liang (180–234 AD), who is said to have used them to scare the enemy troops.
There is evidence that the Chinese also "solved the problem of aerial navigation" using balloons, hundreds of years before the 18th century. Renaissance Eventually, after Ibn Firnas's construction, some investigators began to discover and define some of the basics of rational aircraft design. Most notable of these was Leonardo da Vinci, although his work remained unknown until 1797, and so had no influence on developments over the next three hundred years. While his designs are rational, they are not scientific. He particularly underestimated the amount of power that would be needed to propel a flying object, basing his designs on the flapping wings of a bird rather than an engine-powered propeller. Leonardo studied bird and bat flight, claiming the superiority of the latter owing to its unperforated wing. He analyzed these and anticipated many principles of aerodynamics. He understood that "An object offers as much resistance to the air as the air does to the object." Isaac Newton would later formulate this as the third law of motion in 1687. From the last years of the 15th century until 1505, Leonardo wrote about and sketched many designs for flying machines and mechanisms, including ornithopters, fixed-wing gliders, rotorcraft (perhaps inspired by whirligig toys), parachutes (in the form of a wooden-framed pyramidal tent) and a wind speed gauge. His early designs were man-powered and included ornithopters and rotorcraft; however, he came to realise the impracticality of this and later turned to controlled gliding flight, also sketching some designs powered by a spring. In an essay titled Sul volo (On flight), Leonardo describes a flying machine called "the bird", which he built from starched linen, leather joints, and raw silk thongs. In the Codex Atlanticus, he wrote, "Tomorrow morning, on the second day of January 1496, I will make the thong and the attempt." According to one commonly repeated, albeit presumably fictional story, in 1505 Leonardo or one of his pupils attempted to fly from the summit of Monte Ceceri. Lighter than air Beginnings of modern theories Francesco Lana de Terzi proposed in Prodromo dell'Arte Maestra (1670) that large vessels could float in the atmosphere by applying the principles of a vacuum. Lana designed an airship with four huge copper-foil spheres connected to support a rider's basket, a tail, and a steering rudder. Critics argued that the thin copper spheres could not withstand ambient air pressure, and further experiments proved that his idea was impossible. An aircraft that uses a vacuum to create lift is called a vacuum airship; it remains impossible to build with the materials available even today. In 1709, Bartolomeu de Gusmão approached King John V of Portugal and claimed to have discovered a means of airborne flight. Due to the King's illness, Gusmão's experiment was rescheduled from its initial date of 24 June 1709 to 8 August. The experiment was carried out in front of the king and other nobles in the Casa da India yard, but the paper ship or device burned before it could take flight. Balloons In France, five aviation firsts were accomplished between 4 June and 1 December 1783: On 4 June, a crowd gathered in Annonay, France, to witness the unmanned hot-air balloon display by the Montgolfier brothers. Their 500-pound balloon ascended to nearly 3,000 feet and traveled over a mile and a half. It stayed in the air for ten minutes before tipping over and catching fire.
On 27 August, Jacques Charles and the Robert brothers launched the first unmanned hydrogen balloon from the Champ de Mars in Paris. It landed almost an hour later in Gonesse, where terrified farmers mistook it for a monster and destroyed it. On 19 October, in front of 2,000 spectators, Jean-François Pilâtre de Rozier and the Marquis d'Arlandes became the first people to board the Montgolfier aircraft. Later that day, Giroud de Villette, another pilot, ascended even higher. On 21 November, the Montgolfiers launched the first free flight with human passengers. King Louis XVI had originally decreed that condemned criminals would be the first pilots, but Jean-François Pilâtre de Rozier, along with the Marquis François d'Arlandes, successfully petitioned for the honour. They drifted in a balloon powered by a wood fire. On 1 December, Jacques Charles and Nicolas-Louis Robert launched their manned hydrogen balloon from the Jardin des Tuileries in Paris, as a crowd of 400,000 witnessed. They ascended to a height of about 550 m and landed at sunset in Nesles-la-Vallée after a flight of 2 hours and 5 minutes, covering 36 km. After Robert alighted, Charles decided to ascend alone. This time he ascended rapidly to an altitude of about 3,000 m, where he saw the sun again, suffered extreme pain in his ears, and never flew again. Ballooning became a major interest in Europe in the late 18th century, providing the first detailed understanding of the relationship between altitude and the atmosphere. Non-steerable balloons were employed during the American Civil War by the Union Army Balloon Corps. The young Ferdinand von Zeppelin first flew as a balloon passenger with the Union Army of the Potomac in 1863. In the early 1900s, ballooning was a popular sport in Britain. These privately owned balloons usually used coal gas as the lifting gas. This has half the lifting power of hydrogen, so the balloons had to be larger; however, coal gas was far more readily available, and the local gas works sometimes provided a special lightweight formula for ballooning events. Airships Airships were originally called "dirigible balloons" and are still sometimes called dirigibles today. Work on developing a steerable (or dirigible) balloon continued sporadically throughout the 19th century. The first powered, controlled, sustained lighter-than-air flight is believed to have taken place in 1852, when Henri Giffard flew 27 km in France with a steam-engine-driven craft. Another advance came in 1884, when the first fully controllable free flight was made in a French Army electric-powered airship, La France, by Charles Renard and Arthur Krebs. The airship covered 8 km in 23 minutes with the aid of an 8½ horsepower electric motor. However, these aircraft were generally short-lived and extremely frail. Routine, controlled flights did not occur until the advent of the internal combustion engine. The first aircraft to make routine controlled flights were non-rigid airships (sometimes called "blimps"). The most successful early pioneering pilot of this type of aircraft was the Brazilian Alberto Santos-Dumont, who effectively combined a balloon with an internal combustion engine. On 19 October 1901, he flew his airship Number 6 over Paris from the Parc de Saint Cloud around the Eiffel Tower and back in under 30 minutes to win the Deutsch de la Meurthe prize. Santos-Dumont went on to design and build several aircraft.
The subsequent controversy surrounding his and others' competing claims with regard to aircraft overshadowed his great contribution to the development of airships. At the same time that non-rigid airships were starting to have some success, the first successful rigid airships were also being developed. For decades these would be far more capable than fixed-wing aircraft in terms of pure cargo-carrying capacity. Rigid airship design and advancement was pioneered by the German count Ferdinand von Zeppelin. Construction of the first Zeppelin airship began in 1899 in a floating assembly hall on Lake Constance in the Bay of Manzell, Friedrichshafen. This was intended to ease the starting procedure, as the hall could easily be aligned with the wind. The prototype airship LZ 1 (LZ for "Luftschiff Zeppelin") had a length of 128 m, was driven by two Daimler engines and was balanced by moving a weight between its two nacelles. Its first flight, on 2 July 1900, lasted for only 18 minutes, as LZ 1 was forced to land on the lake after the winding mechanism for the balancing weight had broken. Upon repair, the technology proved its potential in subsequent flights, bettering the 6 m/s speed attained by the French airship La France by 3 m/s, but could not yet convince possible investors. It was several years before the Count was able to raise enough funds for another try. The German airship passenger service known as DELAG (Deutsche-Luftschiffahrts AG) was established in 1910. Although airships were used in both World War I and II, and continue on a limited basis to this day, their development has been largely overshadowed by heavier-than-air craft. Heavier than air 17th and 18th centuries Traveller Evliya Çelebi reported that in 1633, Ottoman scientist and engineer Lagari Hasan Çelebi blasted off from Sarayburnu in a 7-winged rocket propelled by 50 okka (140 lbs) of gunpowder. The flight was said to have been undertaken at the time of the birth of Sultan Murad IV's daughter. As Evliya Çelebi wrote, Lagari proclaimed before launching his craft "O my sultan! Be blessed, I am going to talk to Jesus!"; after ascending in the rocket, he landed in the sea, swimming ashore and joking "O my sultan! Jesus sends his regards to you!"; he was rewarded by the Sultan with silver and the rank of sipahi in the Ottoman army. Evliya Çelebi also wrote of Lagari's brother, Hezârfen Ahmed Çelebi, making a flight by glider a year earlier. Italian inventor Tito Livio Burattini, invited by the Polish King Władysław IV to his court in Warsaw, built a model aircraft with four fixed glider wings in 1647. Described as "four pairs of wings attached to an elaborate 'dragon'", it was said to have successfully lifted a cat in 1648 but not Burattini himself. He promised that "only the most minor injuries" would result from landing the craft. His "Dragon Volant" is considered "the most elaborate and sophisticated aeroplane to be built before the 19th Century". The first published paper on aviation was "Sketch of a Machine for Flying in the Air" by Emanuel Swedenborg, published in 1716. This flying machine consisted of a light frame covered with strong canvas and provided with two large oars or wings moving on a horizontal axis, arranged so that the upstroke met with no resistance while the downstroke provided lifting power. Swedenborg knew that the machine would not fly, but suggested it as a start and was confident that the problem would be solved. 
Swedenborg proved prescient in his observation that a method of powering an aircraft was one of the critical problems to be overcome. On 16 May 1793, the Spanish inventor Diego Marín Aguilera managed to cross the river Arandilla in Coruña del Conde, Castile, flying 300 to 400 m with a flying machine. 19th century Balloon jumping replaced tower jumping, also demonstrating with typically fatal results that man-power and flapping wings were useless in achieving flight. At the same time, scientific study of heavier-than-air flight began in earnest. In 1801, the French officer André Guillaume Resnier de Goué managed a 300-metre glide by starting from the top of the city walls of Angoulême, breaking a leg on arrival. In 1837, French mathematician and brigadier general Isidore Didion stated, "Aviation will be successful only if one finds an engine whose ratio with the weight of the device to be supported will be larger than current steam machines or the strength developed by humans or most of the animals". Sir George Cayley and the first modern aircraft Sir George Cayley was first called the "father of the aeroplane" in 1846. During the last years of the 18th century, he had begun the first rigorous study of the physics of flight and would later design the first modern heavier-than-air craft. Among his many achievements, his most important contributions to aeronautics include: Clarifying our ideas and laying down the principles of heavier-than-air flight. Reaching a scientific understanding of the principles of bird flight. Conducting scientific aerodynamic experiments demonstrating drag and streamlining, movement of the center of pressure, and the increase in lift from curving the wing surface. Defining the modern aeroplane configuration comprising a fixed wing, fuselage and tail assembly. Demonstrating manned, gliding flight. Identifying the crucial insight that a lightweight, powerful engine would be necessary for sustained heavier-than-air flight, a relationship now known as the power-to-weight ratio. Establishing the theoretical foundation for engine use in airplanes and modern aircraft design by identifying and explaining the four fundamental forces of flight: lift, thrust, drag, and weight. Cayley's research on the aeroplane aimed to address the four fundamental areas that are essential to aeronautics: propulsion, structural design, aerodynamics, and stability and control. His work laid the groundwork for a comprehensive understanding of these critical components, which continue to be vital in the field today. Cayley's first innovation was to study the basic science of lift by adopting the whirling arm test rig for use in aircraft research and using simple aerodynamic models on the arm, rather than attempting to fly a model of a complete design. In 1799, he set down the concept of the modern aeroplane as a fixed-wing flying machine with separate systems for lift, propulsion, and control. In 1804, Cayley constructed a model glider, which was the first modern heavier-than-air flying machine. It had the layout of a conventional modern aircraft, with an inclined wing towards the front and an adjustable tail at the back with both tailplane and fin. A movable weight allowed adjustment of the model's centre of gravity. In 1809, goaded by the farcical antics of his contemporaries, he began the publication of a landmark three-part treatise titled "On Aerial Navigation" (1809–1810). 
In it he wrote the first scientific statement of the problem, "The whole problem is confined within these limits, viz. to make a surface support a given weight by the application of power to the resistance of air". He identified the four vector forces that influence an aircraft: thrust, lift, drag and weight, and distinguished between stability and control in his designs. He also identified and described the importance of the cambered aerofoil, dihedral, diagonal bracing and drag reduction, and contributed to the understanding and design of ornithopters and parachutes. In 1848, he had progressed far enough to construct a glider in the form of a triplane large and safe enough to carry a child. A local boy was chosen; his name is unknown. He went on to publish in 1852 the design for a full-size manned glider or "governable parachute" to be launched from a balloon. He then constructed a version capable of launching from the top of a hill, which carried the first adult aviator across Brompton Dale in 1853. Age of steam Drawing directly from Cayley's work, Henson's 1842 design for an aerial steam carriage broke new ground. Although only a design, it was the first in history for a propeller-driven fixed-wing aircraft. 1866 saw the founding of the Aeronautical Society of Great Britain, and two years later the world's first aeronautical exhibition was held at the Crystal Palace, London, where John Stringfellow was awarded a £100 prize for the steam engine with the best power-to-weight ratio. In 1848, Stringfellow had achieved the first powered flight, using an unmanned steam-powered monoplane built in a disused lace factory in Chard, Somerset. Employing two contra-rotating propellers on the first attempt, made indoors, the machine flew ten feet before becoming destabilised, damaging the craft. The second attempt was more successful: the machine left a guidewire to fly freely, achieving thirty yards of straight and level powered flight. Francis Herbert Wenham presented the first paper to the newly formed Aeronautical Society (later the Royal Aeronautical Society), On Aerial Locomotion. He advanced Cayley's work on cambered wings, making important findings. To test his ideas, from 1858 he had constructed several gliders, both manned and unmanned, and with up to five stacked wings. He realised that long, thin wings are better than bat-like ones because they have more leading edge for their area. Today this relationship is known as the aspect ratio of a wing (defined below). The latter part of the 19th century became a period of intense study, characterized by the "gentleman scientists" who represented most research efforts until the 20th century. Among them was the British scientist-philosopher and inventor Matthew Piers Watt Boulton, who studied lateral flight control and was the first to patent an aileron control system in 1868. In 1871, Wenham made the first wind tunnel, using a fan driven by a steam engine to propel air down a tube to the model. Meanwhile, the British advances had galvanised French researchers. In 1857, Félix du Temple proposed a monoplane with a tailplane and retractable undercarriage. Developing his ideas with a model powered first by clockwork and later by steam, he eventually achieved a short hop with a full-size manned craft in 1874. It achieved lift-off under its own power after launching from a ramp, glided for a short time and returned safely to the ground, making it the first successful powered glide in history. 
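In modern terms, Wenham's finding can be stated compactly: for a wing of span $b$ and planform area $S$, the aspect ratio is defined as $$AR = \frac{b^{2}}{S},$$ so that for a given area a long, thin wing has a high aspect ratio, and a high-aspect-ratio wing generally produces less induced drag for the same lift. 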
In 1865, Louis Pierre Mouillard published an influential book, The Empire of the Air (L'Empire de l'Air). In 1856, Frenchman Jean-Marie Le Bris made the first flight higher than his point of departure, by having his glider "L'Albatros artificiel" pulled by a horse on a beach. He reportedly achieved a height of 100 metres, over a distance of 200 metres. Alphonse Pénaud, a Frenchman, advanced the theory of wing contours and aerodynamics. He also constructed successful models of aeroplanes, helicopters and ornithopters. In 1871 he flew the first aerodynamically stable fixed-wing aeroplane, a model monoplane he called the "Planophore", a distance of 40 m (131 ft). Pénaud's model incorporated several of Cayley's discoveries, including the use of a tail, wing dihedral for inherent stability, and rubber power. The Planophore also had longitudinal stability, being trimmed such that the tailplane was set at a smaller angle of incidence than the wings, an original and important contribution to the theory of aeronautics. Pénaud's later project for an amphibian aeroplane, although never built, incorporated other modern features. A tailless monoplane with a single vertical fin and twin tractor propellers, it also featured hinged rear elevator and rudder surfaces, retractable undercarriage and a fully enclosed, instrumented cockpit. Another theorist was Frenchman Victor Tatin. In 1879, he flew a model which, like Pénaud's project, was a monoplane with twin tractor propellers but also had a separate horizontal tail. It was powered by compressed air. Flown tethered to a pole, this was the first model to take off under its own power. In 1884, Alexandre Goupil published his work La Locomotion Aérienne (Aerial Locomotion), although the flying machine he later constructed failed to fly. In 1890, the French engineer Clément Ader completed the first of three steam-driven flying machines, the Éole. On 9 October 1890, Ader made an uncontrolled hop of around 50 m (160 ft); this was the first manned aeroplane to take off under its own power. His Avion III of 1897, notable only for having twin steam engines, failed to fly: Ader later claimed success and was not debunked until 1910, when the French Army published its report on his attempt. Sir Hiram Maxim was an American engineer who had moved to England. He built his own whirling arm rig and wind tunnel and constructed a large machine with fore and aft horizontal surfaces, a crew of three, and twin propellers powered by two lightweight compound steam engines. It was intended as a test rig to investigate aerodynamic lift; because it lacked flight controls it ran on rails, with a second set of rails above the wheels to restrain it. Completed in 1894, on its third run it broke from the rail, became airborne for about 200 yards at two to three feet of altitude and was badly damaged upon falling back to the ground. It was subsequently repaired, but Maxim abandoned his experiments shortly afterwards. Manned gliders and Otto Lilienthal Around the last decade of the 19th century, a number of key figures were refining and defining the modern aeroplane. Lacking a suitable engine, aircraft work focused on stability and control in gliding flight. In 1879, Biot constructed a bird-like glider with the help of Massia and flew in it briefly. It is preserved in the Musée de l'Air, France, and is claimed to be the earliest man-carrying flying machine still in existence. 
The Englishman Horatio Phillips made key contributions to aerodynamics. He conducted extensive wind tunnel research on aerofoil sections, proving the principles of aerodynamic lift foreseen by Cayley and Wenham. His findings underpin all modern aerofoil design. Between 1883 and 1886, the American John Joseph Montgomery developed a series of three manned gliders, before conducting his own independent investigations into aerodynamics and the circulation of lift. Otto Lilienthal became known as the "Glider King" or "Flying Man" of Germany. He duplicated Wenham's work and greatly expanded on it in 1884, publishing his research in 1889 as Birdflight as the Basis of Aviation (Der Vogelflug als Grundlage der Fliegekunst), which is seen as one of the most important works in aviation history. He also produced a series of hang gliders, including bat-wing, monoplane, and biplane forms, such as the Derwitzer Glider and the Normal soaring apparatus, which is considered to be the first airplane in series production, making the "Maschinenfabrik Otto Lilienthal" the first airplane production company in the world. Starting in 1891, he became the first person to make controlled untethered glides routinely, and the first to be photographed flying a heavier-than-air machine, stimulating interest around the world. Lilienthal's work led him to develop the concept of the modern wing. His flights in the year 1891 are seen as the beginning of human flight, and because of that he is often referred to as either the "father of aviation" or the "father of flight". He rigorously documented his work, including photographs, and for this reason is one of the best known of the early pioneers. Lilienthal made over 2,000 glider flights until his death in 1896 from injuries sustained in a glider crash. Picking up where Lilienthal left off, Octave Chanute took up aircraft design after an early retirement and funded the development of several gliders. In the summer of 1896, his team flew several of their designs, eventually deciding that the best was a biplane design. Like Lilienthal, he documented and photographed his work. In Britain, Percy Pilcher, who had worked for Maxim, built and successfully flew several gliders during the mid to late 1890s. The invention of the box kite during this period by the Australian Lawrence Hargrave led to the development of the practical biplane. In 1894, Hargrave linked four of his kites together, added a sling seat, and was the first to obtain lift with a heavier-than-air aircraft when he flew up 16 feet (4.9 m). Later pioneers of manned kite flying included Samuel Franklin Cody in England and Captain Génie Saconney in France. William Frost from Pembrokeshire, Wales, started his project in 1880; over the following sixteen years he designed a flying machine, winning a patent in 1894 for a "Frost Aircraft Glider". Reports say witnesses claimed the craft flew at Saundersfoot in 1896, travelling 500 yards before colliding with a tree and falling in a field. Langley After a distinguished career in astronomy and shortly before becoming Secretary of the Smithsonian Institution, Samuel Pierpont Langley started a serious investigation into aerodynamics at what is today the University of Pittsburgh. In 1891, he published Experiments in Aerodynamics detailing his research, and then turned to building his designs. He hoped to achieve automatic aerodynamic stability, so he gave little consideration to in-flight control. On 6 May 1896, Langley's Aerodrome No. 
5 made the first successful sustained flight of an unpiloted, engine-driven heavier-than-air craft of substantial size. It was launched from a spring-actuated catapult mounted on top of a houseboat on the Potomac River near Quantico, Virginia. Two flights were made that afternoon. On both occasions, the Aerodrome No. 5 landed in the water as planned because, in order to save weight, it was not equipped with landing gear. On 28 November 1896, another successful flight was made with the Aerodrome No. 6. This flight was witnessed and photographed by Alexander Graham Bell. The Aerodrome No. 6 was actually Aerodrome No. 4 greatly modified. So little remained of the original aircraft that it was given a new designation. With the successes of the Aerodrome No. 5 and No. 6, Langley started looking for funding to build a full-scale man-carrying version of his designs. Spurred by the Spanish–American War, the U.S. government granted him $50,000 to develop a man-carrying flying machine for aerial reconnaissance. Langley planned on building a scaled-up version known as the Aerodrome A, and started with the smaller Quarter-scale Aerodrome, which flew twice on 18 June 1901, and then again with a newer and more powerful engine in 1903. With the basic design apparently successfully tested, he then turned to the problem of a suitable engine. He contracted Stephen Balzer to build one, but was disappointed when the engine delivered far less power than he had expected. Langley's assistant, Charles M. Manly, then reworked the design into a five-cylinder water-cooled radial that delivered 52 horsepower at 950 rpm, a feat that took years to duplicate. Now with both power and a design, Langley put the two together with great hopes. To his dismay, the resulting aircraft proved to be too fragile. Simply scaling up the original small models resulted in a design that was too weak to hold itself together. Two launches in late 1903 both ended with the Aerodrome immediately crashing into the water. The pilot, Manly, was rescued each time. Also, the aircraft's control system was inadequate to allow quick pilot responses, and it had no method of lateral control; the Aerodrome's aerial stability was marginal. Langley's attempts to gain further funding failed, and his efforts ended. Nine days after his second abortive launch on 8 December, the Wright brothers successfully flew their Flyer. Glenn Curtiss made 93 modifications to the Aerodrome and flew this very different aircraft in 1914. Without acknowledging the modifications, the Smithsonian Institution asserted that Langley's Aerodrome was the first machine "capable of flight". Whitehead Gustave Weißkopf was a German who emigrated to the U.S., where he soon changed his name to Whitehead. From 1897 to 1915, he designed and built early flying machines and engines. On 14 August 1901, two and a half years before the Wright Brothers' flight, he claimed to have carried out a controlled, powered flight in his Number 21 monoplane at Fairfield, Connecticut. The flight was reported in the Bridgeport Sunday Herald local newspaper. About 30 years later, several people questioned by a researcher claimed to have seen that or other Whitehead flights. In March 2013, Jane's All the World's Aircraft, an authoritative source for contemporary aviation, published an editorial which accepted Whitehead's flight as the first manned, powered, controlled flight of a heavier-than-air craft. 
The Smithsonian Institution (custodians of the original Wright Flyer) and many aviation historians continue to maintain that Whitehead did not fly as suggested. The historians of the Royal Aeronautical Society noted that: "All available evidence fails to support the claim that Gustave Whitehead made sustained, powered, controlled flights predating those of the Wright brothers." The editors of Scientific American agree: "The data show that not only was Whitehead not first in flight, but that he may never have made a controlled, powered flight at any time." Pearse Richard Pearse was a New Zealand farmer and inventor who performed pioneering aviation experiments. Witnesses interviewed many years afterward claimed that Pearse flew and landed a powered heavier-than-air machine on 31 March 1903, nine months before the Wright brothers flew. Documentary evidence for these claims remains open to interpretation and dispute, and Pearse himself never made such claims. In a newspaper interview in 1909, he said he did not "attempt anything practical ... until 1904". If he did fly in 1903, the flight appears to have been poorly controlled in comparison to the Wrights'. Wright brothers Using a methodical approach and concentrating on the controllability of the aircraft, the brothers built and tested a series of kite and glider designs from 1898 to 1902 before attempting to build a powered design. The gliders worked, but not as well as the Wrights had expected based on the experiments and writings of their predecessors. Their first full-size glider, launched in 1900, had only about half the lift they anticipated. Their second glider, built the following year, performed even more poorly. Rather than giving up, the Wrights constructed their own wind tunnel and created a number of sophisticated devices to measure lift and drag on the 200 wing designs they tested. As a result, the Wrights corrected earlier mistakes in calculations regarding drag and lift. Their testing and calculating produced a third glider with a higher aspect ratio and true three-axis control. They flew it successfully hundreds of times in 1902, and it performed far better than the previous models. By using a rigorous system of experimentation, involving wind-tunnel testing of airfoils and flight testing of full-size prototypes, the Wrights not only built a working aircraft the following year, the Wright Flyer, but also helped advance the science of aeronautical engineering. The Wrights appear to be the first to have made serious, studied attempts to solve the power and control problems simultaneously. Both problems proved difficult, but they never lost interest. They solved the control problem by inventing wing warping for roll control, combined with yaw control from a steerable rear rudder. Almost as an afterthought, they designed and built a low-powered internal combustion engine. They also designed and carved wooden propellers that were more efficient than any before, enabling them to gain adequate performance from their low engine power. Although wing-warping as a means of lateral control was used only briefly during the early history of aviation, the principle of coordinating lateral control with a rudder was a key advance in aircraft control. While many aviation pioneers appeared to leave safety largely to chance, the Wrights' design was greatly influenced by the need to teach themselves to fly without unreasonable risk to life and limb, which meant being able to survive crashes. 
This emphasis, as well as low engine power, was the reason for the low flying speed and for taking off in a headwind. Performance, rather than safety, was the reason for the rear-heavy design, because the canard could not be highly loaded; anhedral wings were less affected by crosswinds and were consistent with the low yaw stability. According to the Smithsonian Institution and Fédération Aéronautique Internationale (FAI), the Wrights made the first sustained, controlled, powered heavier-than-air manned flight at Kill Devil Hills, North Carolina, four miles (8 km) south of Kitty Hawk, North Carolina on 17 December 1903. The first flight, by Orville Wright, of 120 feet (37 m) in 12 seconds, was recorded in a famous photograph. In the fourth flight of the same day, Wilbur Wright flew 852 feet (260 m) in 59 seconds. The flights were witnessed by three coastal lifesaving crewmen, a local businessman, and a boy from the village, making these the first public flights and the first well-documented ones. Orville described the final flight of the day: "The first few hundred feet were up and down, as before, but by the time three hundred feet had been covered, the machine was under much better control. The course for the next four or five hundred feet had but little undulation. However, when out about eight hundred feet the machine began pitching again, and, in one of its darts downward, struck the ground. The distance over the ground was measured to be 852 feet; the time of the flight was 59 seconds. The frame supporting the front rudder was badly broken, but the main part of the machine was not injured at all. We estimated that the machine could be put in condition for flight again in about a day or two". They flew only about ten feet above the ground as a safety precaution, so they had little room to manoeuvre, and all four flights in the gusty winds ended in a bumpy and unintended "landing". Modern analysis by Professor Fred E. C. Culick and Henry R. Rex (1985) has demonstrated that the 1903 Wright Flyer was so unstable as to be almost unmanageable by anyone but the Wrights, who had trained themselves in the 1902 glider. The Wrights continued flying at Huffman Prairie near Dayton, Ohio in 1904–05. In May 1904 they introduced the Flyer II, a heavier and improved version of the original Flyer. On 23 June 1905, they first flew a third machine, the Flyer III. After a severe crash on 14 July 1905, they rebuilt the Flyer III and made important design changes. They almost doubled the size of the elevator and rudder and moved them about twice the distance from the wings. They added two fixed vertical vanes (called "blinkers") between the elevators and gave the wings a very slight dihedral. They disconnected the rudder from the wing-warping control, and as in all future aircraft, placed it on a separate control handle. When flights resumed, the results were immediate. The serious pitch instability that hampered Flyers I and II was significantly reduced, so repeated minor crashes were eliminated. Flights with the redesigned Flyer III started lasting over 10 minutes, then 20, then 30. Flyer III became the first practical aircraft (though without wheels and needing a launching device), flying consistently under full control and bringing its pilot back to the starting point safely and landing without damage. On 5 October 1905, Wilbur flew 24 miles (39 km) in 39 minutes 23 seconds. According to the April 1907 issue of the Scientific American magazine, the Wright brothers seemed to have the most advanced knowledge of heavier-than-air navigation at the time. 
However, the same magazine issue also claimed that no public flight had been made in the United States before its April 1907 issue. Hence, the magazine devised the Scientific American Aeronautic Trophy in order to encourage the development of a heavier-than-air flying machine. Glenn H. Curtiss won the trophy in 1908 with the first pre-announced and officially recorded flight of the June Bug. Pioneer Era (1903–1914) This period saw the development of practical aeroplanes and airships and their early application, alongside balloons and kites, for private, sport and military use. Pioneers in Europe Although the full details of the Wright Brothers' system of flight control had been published in l'Aerophile in January 1906, the importance of this advance was not recognised, and European experimenters generally concentrated on attempting to produce inherently stable machines. Short powered flights were performed in France by Romanian engineer Traian Vuia on 18 March and 19 August 1906, when he flew 12 and 24 metres, respectively, in a self-designed, fully self-propelled, fixed-wing aircraft that possessed a fully wheeled undercarriage. He was followed by Jacob Ellehammer, who built a monoplane which he tested with a tether in Denmark on 12 September 1906, flying 42 metres. On 13 September 1906, the Brazilian Alberto Santos-Dumont made a public flight in Paris with the 14-bis, also known as Oiseau de proie (French for "bird of prey"). This was of canard configuration with pronounced wing dihedral, and it flew on the grounds of the Chateau de Bagatelle in Paris' Bois de Boulogne before a large crowd of witnesses. This well-documented event was the first flight verified by the Aéro-Club de France of a powered heavier-than-air machine in Europe and won the Deutsch-Archdeacon Prize for the first officially observed flight greater than 25 metres. On 12 November 1906, Santos-Dumont set the first world record recognized by the Fédération Aéronautique Internationale by flying 220 metres in 21.5 seconds. Only one more brief flight was made by the 14-bis in March 1907, after which it was abandoned. In March 1907, Gabriel Voisin flew the first example of his Voisin biplane. On 13 January 1908, a second example was flown by Henri Farman to win the Deutsch-Archdeacon Grand Prix d'Aviation prize for a flight in which the aircraft flew a distance of more than a kilometre and landed at the point where it had taken off. The flight lasted 1 minute and 28 seconds. Flight as an established technology Santos-Dumont later added ailerons between the wings in an effort to gain more lateral stability. His final design, first flown in 1907, was the series of Demoiselle monoplanes (Nos. 19 to 22). The Demoiselle No. 19 could be constructed in only 15 days and became the world's first series-production aircraft. The Demoiselle achieved 120 km/h. The fuselage consisted of three specially reinforced bamboo booms. The pilot sat in a seat between the main wheels of a conventional landing gear, whose pair of wire-spoked mainwheels were located at the lower front of the airframe, with a tailskid half-way back beneath the rear fuselage structure. The Demoiselle was controlled in flight by a cruciform tail unit hinged on a form of universal joint at the aft end of the fuselage structure to function as elevator and rudder, with roll control provided through wing warping (No. 20), with the wings only warping "down". In 1908, Wilbur Wright travelled to Europe, and starting in August gave a series of flight demonstrations at Le Mans in France. 
The first demonstration, made on 8 August, attracted an audience including most of the major French aviation experimenters, who were astonished by the clear superiority of the Wright Brothers' aircraft, particularly its ability to make tight controlled turns. The importance of using roll control in making turns was recognised by almost all the European experimenters: Henri Farman fitted ailerons to his Voisin biplane and shortly afterwards set up his own aircraft construction business, whose first product was the influential Farman III biplane. The following year saw the widespread recognition of powered flight as something other than the preserve of dreamers and eccentrics. On 25 July 1909, Louis Blériot won worldwide fame by winning a £1,000 prize offered by the British Daily Mail newspaper for a flight across the English Channel, and in August around half a million people, including the President of France Armand Fallières and the Prime Minister of the United Kingdom David Lloyd George, attended one of the first aviation meetings, the Grande Semaine d'Aviation at Reims. In 1914, pioneering aviator Tony Jannus captained the inaugural flight of the St. Petersburg-Tampa Airboat Line, the world's first commercial passenger airline. Historians disagree about whether the Wright brothers' patent war impeded development of the aviation industry in the United States compared to Europe. The patent war ended during World War I, when the government pressured the industry into forming a patent pool, and the major litigants had left the industry. Rotorcraft In 1877, the Italian engineer, inventor and aeronautical pioneer Enrico Forlanini developed an unmanned helicopter powered by a steam engine. It rose to a height of 13 metres, where it remained for 20 seconds, after a vertical take-off from a park in Milan. Milan has dedicated its city airport, Linate Airport, to Enrico Forlanini, along with the nearby park, the Parco Forlanini, and an avenue, Viale Enrico Forlanini. The first time a manned helicopter is known to have risen off the ground was on a tethered flight in 1907 by the Breguet-Richet Gyroplane. Later the same year the Cornu helicopter, also French, made the first rotary-winged free flight at Lisieux, France. However, these were not practical designs. Military use Almost as soon as they were invented, aeroplanes were used for military purposes. The first country to use them for military purposes was Italy, whose aircraft made reconnaissance, bombing and artillery correction flights in Libya during the Italian-Turkish war (September 1911 – October 1912). This war also saw Ottoman soldiers shoot down a warplane for the first time in history. The first warplane reconnaissance mission was flown on 23 October 1911 by the Italian air force's Captain Carlo Piazza, and the first bombing mission was flown on 1 November 1911 by Italy's Second Lieutenant Giulio Gavotti. Bulgaria later followed this example. Its planes attacked and reconnoitred Ottoman positions during the First Balkan War (1912–13). The first war to see major use of aeroplanes in offensive, defensive and reconnaissance capabilities was World War I. The Allies and Central Powers both used aeroplanes and airships extensively. While the concept of using the aeroplane as an offensive weapon was generally discounted before World War I, the idea of using it for photography was one that was not lost on any of the major forces. 
All of the major forces in Europe had light aircraft, typically derived from pre-war sporting designs, attached to their reconnaissance departments. Radiotelephones were also being explored on aeroplanes, notably the SCR-68, as communication between pilots and ground commanders grew more and more important. World War I (1914–1918) Combat schemes It was not long before aircraft were shooting at each other, but the lack of any sort of steady point for the gun was a problem. The French solved this problem when, in late 1914, Roland Garros attached a fixed machine gun to the front of his plane. Adolphe Pégoud became known as the first "ace", getting credit for five victories before also becoming the first ace to die in action, but it was German Luftstreitkräfte Leutnant Kurt Wintgens who, on 1 July 1915, scored the very first aerial victory by a purpose-built fighter plane with a synchronized machine gun. Aviators were styled as modern-day knights, engaging in individual combat with their enemies. Several pilots became famous for their air-to-air combat; the most well known is Manfred von Richthofen, better known as the "Red Baron", who shot down 80 aircraft in air-to-air combat flying several different fighters, the most celebrated of which was the Fokker Dr.I. On the Allied side, René Paul Fonck is credited with the most all-time victories, at 75, even when later wars are considered. France, Britain, Germany, and Italy were the leading manufacturers of fighter planes that saw action during the war, with German aviation technologist Hugo Junkers showing the way to the future through his pioneering use of all-metal aircraft from late 1915. Between the World Wars (1918–1939) The years between World War I and World War II saw great advancements in aircraft technology. Airplanes evolved from low-powered biplanes made from wood and fabric to sleek, high-powered monoplanes made of aluminum, based primarily on the founding work of Hugo Junkers during the World War I period and its adoption by American designer William Bushnell Stout and Soviet designer Andrei Tupolev. After World War I, experienced fighter pilots were eager to show off their skills. Many American pilots became barnstormers, flying into small towns across the country and showing off their flying abilities, as well as taking paying passengers for rides. Eventually, the barnstormers grouped into more organized displays. Air shows sprang up around the country, with air races, acrobatic stunts, and feats of air superiority. The air races drove engine and airframe development; the Schneider Trophy, for example, led to a series of ever faster and sleeker monoplane designs culminating in the Supermarine S.6B. With pilots competing for cash prizes, there was an incentive to go faster. Amelia Earhart was perhaps the most famous of those on the barnstorming/air show circuit. She was also the first female pilot to set records such as crossing the Atlantic and Pacific Oceans. Prizes for distance and speed records also drove development forwards. On 14 June 1919, Captain John Alcock and Lieutenant Arthur Brown co-piloted a Vickers Vimy non-stop from St. John's, Newfoundland to Clifden, Ireland, winning the £13,000 ($65,000) Northcliffe prize. The first flight across the South Atlantic, and the first aerial crossing using astronomical navigation, was made by the naval aviators Gago Coutinho and Sacadura Cabral in 1922, from Lisbon, Portugal, to Rio de Janeiro, Brazil, using an aircraft fitted with an artificial horizon for aeronautical use. 
In 1924, Major General Mason Patrick led a group of U.S. Army Air Service members to complete the first aerial circumnavigation of the world. The flight covered 26,343 miles over the span of 175 days and posed many logistical challenges. It improved foreign relations by promoting commercial collaboration, and it generated greater public interest in aviation, prompting governments to put more resources into developing their aviation forces. On 21 May 1927, Charles Lindbergh received the Orteig Prize of $25,000 for the first solo non-stop crossing of the Atlantic. This caused what was known in aviation at the time as the "Lindbergh boom", which increased public interest in aviation. Australian Sir Charles Kingsford Smith was the first to fly across the larger Pacific Ocean, in the Southern Cross. His crew left Oakland, California, to make the first trans-Pacific flight to Australia, making three stops to complete the journey: the first in Hawaii, and the second in Suva, Fiji. During the last segment of their journey, from Fiji to Brisbane, Australia, they encountered severe thunderstorms and were thrown nearly 140 miles off their course. The flight concluded on 9 June 1928 when, after flying 7,230 miles, Kingsford Smith and his crew landed in Brisbane, receiving $25,000 from the Australian government for their achievement. Accompanying him were Australian aviator Charles Ulm as the relief pilot, and the Americans James Warner and Captain Harry Lyon (who were the radio operator, navigator and engineer). A week after they landed, Kingsford Smith and Ulm recorded a disc for Columbia talking about their trip. With Ulm, Kingsford Smith later continued his journey, in 1929 becoming the first to circumnavigate the world crossing the equator twice. The first lighter-than-air crossings of the Atlantic were made by airship in July 1919 by His Majesty's Airship R34 and crew, when they flew from East Lothian, Scotland to Long Island, New York and then back to Pulham, England. By 1929, airship technology had advanced to the point that the first round-the-world flight was completed by the Graf Zeppelin in September, and in October the same aircraft inaugurated the first commercial transatlantic service. However, the age of the rigid airship ended following the destruction by fire of the zeppelin LZ 129 Hindenburg just before landing at Lakehurst, New Jersey on 6 May 1937, killing 35 of the 97 people aboard. Previous spectacular airship accidents, from the Wingfoot Express disaster (1919) to the losses of the R101 (1930), the Akron (1933) and the Macon (1935), had already cast doubt on airship safety. The disasters of the U.S. Navy's rigids showed the importance of using only helium as the lifting medium. Following the destruction of the Hindenburg, the remaining airship making international flights, the Graf Zeppelin, was retired (June 1937). Its replacement, the rigid airship Graf Zeppelin II, made a number of flights, primarily over Germany, from 1938 to 1939, but was grounded when Germany began World War II. Both remaining German zeppelins were scrapped in 1940 to supply metal for the German Luftwaffe air force. Meanwhile, Germany, which was restricted by the Treaty of Versailles in its development of powered aircraft, developed gliding as a sport, especially at the Wasserkuppe, during the 1920s. In its various forms, sailplane aviation now has over 400,000 participants in the 21st century. 
In 1929, Jimmy Doolittle pioneered instrument flight, making the first takeoff, flight and landing relying entirely on flight instruments. 1929 also saw the first flight of by far the largest plane ever built until then: the Dornier Do X, with a wingspan of 48 m. On its 70th test flight, on 21 October 1929, there were 169 people on board, a record that was not broken for 20 years. In 1923, the first successful rotorcraft appeared in the form of the autogyro, invented by Spanish engineer Juan de la Cierva. In this design, the rotor is not powered but spins freely as it moves through the air, while a separate engine powers the aircraft to move forward. This was the basis of further development and prototypes that led to the creation of the helicopter. In 1930, Corradino D'Ascanio, an Italian engineer, developed a coaxial helicopter with the important inclusion of three small propellers, which controlled the pitch, roll, and yaw of the aircraft. Later helicopters saw several adjustments to their rotors, but the first modern helicopter was not constructed until 1947, by Igor Sikorsky. Only five years after the German Dornier Do X had flown, Tupolev designed the largest aircraft of the 1930s era, the Maksim Gorky, flown in the Soviet Union in 1934; it was the largest aircraft ever built using the Junkers methods of metal aircraft construction. In the 1930s, development of the jet engine began in Germany and in Britain, and testing began in 1939, just before World War II. The jet engine saw considerable development during the war, and a few jet-powered aircraft were used in it. After enrolling in the Military Aviation Academy in Eskisehir in 1936 and undertaking training at the First Aircraft Regiment, Sabiha Gökçen flew fighter and bomber planes, becoming the first Turkish female aviator and the world's first female combat pilot. During her flying career, she achieved some 8,000 flying hours, 32 of them on combat missions. World War II (1939–1945) World War II saw a great increase in the pace of development and production, not only of aircraft but also of the associated flight-based weapon delivery systems. Air combat tactics and doctrines started being rapidly developed. Large-scale strategic bombing campaigns were launched, fighter escorts introduced and the more flexible aircraft and weapons allowed precise attacks on small targets with dive bombers, fighter-bombers, and ground-attack aircraft. New technologies like radar also allowed more coordinated and controlled deployment of air defence. The first jet aircraft to fly was the Heinkel He 178 (Germany), flown by Erich Warsitz in 1939, followed by the world's first operational jet aircraft, the Messerschmitt Me 262, in July 1942 and the world's first jet-powered bomber, the Arado Ar 234, in June 1943. British developments, like the Gloster Meteor, followed afterwards, but saw only brief use in World War II. The first cruise missile (V-1), the first ballistic missile (V-2), the first (and to date only) operational rocket-powered combat aircraft, the Me 163 (which attained velocities of up to 1,130 km/h in test flights), and the first vertical take-off manned point-defence interceptor, the Bachem Ba 349 Natter, were also developed by Germany. However, jet and rocket aircraft had only limited impact due to their late introduction, fuel shortages, the lack of experienced pilots and the declining war industry of Germany. 
Not only aeroplanes, but also helicopters saw rapid development in the Second World War, with the introduction of the Focke Achgelis Fa 223 and the Flettner Fl 282 synchropter in 1941 in Germany and the Sikorsky R-4 in 1942 in the USA. Postwar era (1945–1979) Following World War II, commercial aviation expanded quickly, primarily relying on former military aircraft to carry passengers and cargo. There was an excess of large bombers, such as the B-29 and Lancaster, which were easily converted for commercial use. The DC-3 specifically played a key role, enabling longer and more efficient flights. The British de Havilland Comet became the first commercial jet airliner and was introduced into scheduled service by 1952. The aircraft was a technical breakthrough but suffered a series of highly public failures: the square design of the windows caused stress cracks from metal fatigue, brought on by repeated cycles of cabin pressurization and depressurization, and this eventually led to severe structural failures of the fuselage. By the time the issues were resolved, competing jet airliners were already flying. On 15 September 1956, the USSR's airline Aeroflot became the first to offer continuous, regular jet services, using the Tupolev Tu-104. Soon after, the Boeing 707 and DC-8 set new standards in comfort, safety, and passenger experience. This was the beginning of the Jet Age, the era of large-scale commercial air travel. In October 1947, Chuck Yeager became the first to fly faster than the speed of sound when he piloted the rocket-powered Bell X-1 past the sound barrier. The air speed record for an aircraft was set by the X-15 at 4,534 mph (7,297 km/h), or Mach 6.1, in 1967; excluding spacecraft, this record stood until it was broken by the X-43 in 2004. Military aircraft gained a strategic advantage during the Cold War with the invention of nuclear bombs in 1945. Even a small fleet of bombers could inflict catastrophic damage, which drove the development of effective defenses. One early development was supersonic interceptor aircraft. By 1955, the focus shifted toward guided surface-to-air missiles. This eventually led to the emergence of nuclear-capable intercontinental ballistic missiles (ICBMs). ICBM technology made its debut in 1957, when the Soviet Union launched Sputnik 1, beginning the Space Race. In 1961, Yuri Gagarin became the first human in space when he completed a single orbit around Earth in 108 minutes aboard Vostok I. Following this, the United States sent Alan Shepard on a suborbital flight using a Mercury program capsule. In 1962, Canada became the third nation to reach space with the launch of its satellite, Alouette I. The space race culminated in the Moon landing in 1969. The Harrier jump jet, capable of vertical takeoff and landing, first flew in 1969. This was also the year of the introduction of the Boeing 747. Additionally, the Aérospatiale-BAC Concorde supersonic passenger airliner had its maiden flight. The Boeing 747 was the largest commercial passenger aircraft ever to fly at the time; it has since been surpassed by the Airbus A380, capable of transporting 853 passengers. In 1975, Aeroflot started flying the Tu-144, the first supersonic passenger plane. The next year, British Airways and Air France began supersonic flights over the Atlantic. In 1979, the Gossamer Albatross became the first human-powered aircraft to fly over the English Channel, realizing a dream held for centuries. 
Digital age (1980–present) The last quarter of the 20th century saw a change of emphasis. No longer was revolutionary progress made in flight speeds, distances and materials technology. This part of the century instead saw the spreading of the digital revolution, both in flight avionics and in aircraft design and manufacturing techniques. In 1986, Dick Rutan and Jeana Yeager flew an aircraft, the Rutan Voyager, around the world unrefuelled and without landing. In 1999, Bertrand Piccard, with Brian Jones, became the first to circle the earth in a balloon. Digital fly-by-wire systems allow an aircraft to be designed with relaxed static stability. These systems were initially used to increase the manoeuvrability of military aircraft such as the General Dynamics F-16 Fighting Falcon; however, they are now also being used to reduce drag on commercial airliners. The U.S. Centennial of Flight Commission was established in 1999 to encourage the broadest national and international participation in the celebration of 100 years of powered flight. It publicized and encouraged a number of programmes, projects and events intended to educate people about the history of aviation. 21st century 21st-century aviation has seen increasing interest in fuel savings and fuel diversification, as well as low-cost airlines and facilities. Additionally, much of the developing world that did not have good access to air transport has been steadily adding aircraft and facilities, though severe congestion remains a problem in many up-and-coming nations. Around 20,000 city pairs are served by commercial aviation, up from fewer than 10,000 as recently as 1996. There is also renewed interest in returning to the supersonic era, which ended when waning demand around the turn of the 21st century made flights unprofitable and the Concorde was finally withdrawn from commercial service owing to reduced demand following a fatal accident and rising costs. At the beginning of the 21st century, digital technology allowed subsonic military aviation to begin eliminating the pilot in favour of remotely operated or completely autonomous unmanned aerial vehicles (UAVs). In April 2001, the unmanned aircraft Global Hawk flew from Edwards AFB in the US to Australia non-stop and unrefuelled. This is the longest point-to-point flight ever undertaken by an unmanned aircraft, and it took 23 hours and 23 minutes. In October 2003, a computer-controlled model aircraft made the first totally autonomous flight across the Atlantic. UAVs are now an established feature of modern warfare, carrying out pinpoint attacks under the control of a remote operator. Major disruptions to air travel in the 21st century included the closing of U.S. airspace due to the September 11 attacks and the closing of most of European airspace after the 2010 eruption of Eyjafjallajökull. In 2015, André Borschberg and Bertrand Piccard flew a record distance for solar-powered aviation, from Nagoya, Japan to Honolulu, Hawaii, in the solar-powered plane Solar Impulse 2. The flight took nearly five days; during the nights the aircraft used its batteries and the potential energy gained during the day. On 14 July 2019, Frenchman Franky Zapata attracted worldwide attention when he participated in the Bastille Day military parade riding his invention, a jet-powered Flyboard Air. He subsequently succeeded in crossing the English Channel on his device on 4 August 2019, covering the 35-kilometre (22 mi) journey from Sangatte in northern France to St Margaret's Bay in Kent, UK, in 22 minutes, including a midpoint refuelling stop. 
24 July 2019 was the busiest day in aviation on record: Flightradar24 logged a total of over 225,000 flights that day, including helicopters, private jets, gliders, sight-seeing flights and personal aircraft. On 10 June 2020, the Pipistrel Velis Electro became the first electric aeroplane to secure a type certificate from EASA. In the early 21st century, the first fifth-generation military fighters were produced, starting with the F-22 Raptor. As of 2019, Russia, the United States and China have fifth-generation aircraft. The COVID-19 pandemic had a significant impact on the aviation industry due to the resulting travel restrictions as well as a slump in demand among travellers, and it may also affect the future of air travel. For example, the mandatory use of face masks on planes was common when flying in 2020 and 2021. Mars On 19 April 2021, NASA successfully flew its diminutive unmanned helicopter Ingenuity on Mars, humanity's first controlled powered aircraft flight on another planet. The helicopter rose to a height of three metres and hovered in a stable holding position for 30 seconds. A video of the flight was made by its accompanying rover, Perseverance. Ingenuity, which was initially designed for five demonstration flights, flew 72 times, traveling 11 miles over nearly three years. As a homage to all of its aerial predecessors, it carries a postage-stamp-sized piece of wing fabric from the 1903 Wright Flyer. Ingenuity's last flight was on 18 January 2024, nearly three years after its first takeoff. Rotor blades broken and damaged during its final landing forced the helicopter's retirement. See also Aviation archaeology Claims to the first powered flight List of firsts in aviation Timeline of aviation
History of aviation
[ "Technology" ]
14,169
[ "Science and technology studies", "History of science and technology", "History of technology" ]
177,681
https://en.wikipedia.org/wiki/Schadenfreude
Schadenfreude ("harm-joy") is the experience of pleasure, joy, or self-satisfaction that comes from learning of or witnessing the troubles, failures, pain, suffering, or humiliation of another. It is a loanword from German. Schadenfreude has been detected in children as young as 24 months and may be an important social emotion establishing "inequity aversion". Etymology Schadenfreude is a term borrowed from German. It is a compound of Schaden ("damage/harm") and Freude ("joy"). The German word was first mentioned in English texts in 1852 and 1867, and first used in English running text in 1895. In German, it was first attested in the 1740s; the earliest attestation seems to be Christoph Starke's Synopsis bibliothecae exegeticae in Vetus Testamentum (Leipzig, 1750). Although common nouns normally are not capitalized in English, schadenfreude sometimes is, following the German convention. Psychological causes Researchers have found that there are three driving forces behind schadenfreude – aggression, rivalry, and justice. Self-esteem has a negative relationship with the frequency and intensity of schadenfreude experienced by an individual; individuals with lower self-esteem tend to experience schadenfreude more frequently and intensely. It is hypothesized that this inverse relationship is mediated through the human psychological inclination to define and protect their self- and in-group identity or self-conception. Specifically, for someone with high self-esteem, seeing another person fail may still bring a small (but effectively negligible) surge of confidence, because the observer's high self-esteem significantly lowers the threat they believe the visibly failing person poses to their status or identity. Since this confident individual perceives that, regardless of circumstances, the successes and failures of the other person will have little impact on their own status or well-being, they have very little emotional investment in how the other person fares, be it positive or negative. Conversely, for someone with low self-esteem, someone who is more successful poses a threat to their sense of self, and seeing this person fall can be a source of comfort because they perceive a relative improvement in their internal or in-group standing. Aggression-based schadenfreude primarily involves group identity. The joy of observing the suffering of others comes from the observer's feeling that the other's failure represents an improvement or validation of their own group's (in-group) status in relation to external groups (out-groups) (see In-group and out-group). This is, essentially, schadenfreude based on group versus group status. Rivalry-based schadenfreude is individualistic and related to interpersonal competition. It arises from a desire to stand out from and out-perform one's peers. This is schadenfreude based on another person's misfortune eliciting pleasure because the observer now feels better about their personal identity and self-worth, instead of their group identity. Justice-based schadenfreude comes from seeing that behavior seen as immoral or "bad" is punished. It is the pleasure associated with seeing a "bad" person being harmed or receiving retribution. Schadenfreude is experienced here because it makes people feel that fairness has been restored for a previously un-punished wrong, and is a type of moral emotion. Synonyms Schadenfreude has equivalents in many other languages (such as Dutch leedvermaak, Swedish skadeglädje, Danish skadefryd, and Slovak škodoradosť) but no commonly-used precise English single-word equivalent. 
There are other ways to express the concept in English. Epicaricacy is a seldom-used direct equivalent, borrowed from Greek epichairekakia (ἐπιχαιρεκακία, first attested in Aristotle) from ἐπί epi 'upon', χαρά chara 'joy', and κακόν kakon 'evil'. Tall poppy syndrome is a cultural phenomenon in which people of high status are resented, attacked, cut down, or criticized because they have been classified as better than their peers. This is similar to "begrudgery", the resentment or envy of the success of a peer. If someone were to feel joy at the victim's fall from grace, they would be experiencing schadenfreude. Roman holiday is a metaphor from Byron's poem Childe Harold's Pilgrimage, where a gladiator in ancient Rome expects to be "butchered to make a Roman holiday" while the audience takes pleasure in watching his suffering. The term suggests debauchery and disorder in addition to sadistic enjoyment. Morose delectation (Latin: delectatio morosa), meaning "the habit of dwelling with enjoyment on evil thoughts", was considered by the medieval church to be a sin. French writer Pierre Klossowski maintained that the appeal of sadism is morose delectation. "Gloating" is an English word of similar meaning, where "gloat" means "to observe or think about something with triumphant and often malicious satisfaction, gratification, or delight" (e.g., to gloat over an enemy's misfortune). Gloating differs from schadenfreude in that it does not necessarily require malice (one may gloat to a friend without ill intent about having defeated him in a game), and in that it describes an action rather than a state of mind (one typically gloats to the subject of the misfortune or to a third party). Also, unlike schadenfreude, where the focus is on another's misfortune, gloating often brings to mind inappropriately celebrating or bragging about one's own good fortune without any particular focus on the misfortune of others. Related emotions or concepts Permutations of the concept of pleasure at another's unhappiness are: pleasure at another's happiness, displeasure at another's happiness, and displeasure at another's unhappiness. Words for these concepts are sometimes cited as antonyms to schadenfreude, as each is the opposite in some way. There is no common English term for pleasure at another's happiness (i.e., vicarious joy), though terms like 'celebrate', 'cheer', 'congratulate', 'applaud', 'rejoice' or 'kudos' often describe a shared or reciprocal form of pleasure. The pseudo-German coinage freudenfreude is occasionally used in English. The Hebrew slang term firgun refers to happiness at another's accomplishment. Displeasure at another's happiness is involved in envy, and perhaps in jealousy. The pseudo-German coinage "freudenschade" similarly means sorrow at another's success. The correct form would be Freudenschaden, since the pseudo-German coinage incorrectly assumes the n in Schadenfreude to be an interfix and the adjective schade ("unfortunate") a noun. Displeasure at another's good fortune is Gluckschmerz, a pseudo-German word coined in 1985 as a joke by the pseudonymous Wanda Tinasky; the correct German form would be Glücksschmerz. It has since been used in academic contexts. Displeasure at another's unhappiness is sympathy, pity, or compassion. Sadism gives pleasure through the infliction of pain, whereas schadenfreude is pleasure on observing misfortune, and in particular the sense that the other somehow deserved the misfortune.
Neologisms and variants The word schadenfreude has been blended with other words to form neologisms since at least 1993, when Lincoln Caplan, in his book Skadden: Power, Money, and the Rise of a Legal Empire, used the word Skaddenfreude to describe the delight that competitors of Skadden Arps took in its troubles of the early 1990s. Others include spitzenfreude, coined by The Economist to refer to the fall of Eliot Spitzer, and Schadenford, coined by Toronto Life in regard to Canadian politician Rob Ford. Literary usage and philosophical analysis The Biblical Book of Proverbs mentions an emotion similar to schadenfreude: "Rejoice not when thine enemy falleth, and let not thine heart be glad when he stumbleth: Lest the LORD see it, and it displease him, and he turn away his wrath from him." (Proverbs 24:17–18, King James Version). In East Asia, the emotion of feeling joy from seeing the hardship of others was described as early as the late 4th century BCE. The phrase 幸災樂禍 (xìng zāi lè huò) first appeared separately as 幸災, meaning the feeling of joy from seeing the hardship of others, and 樂禍, meaning the happiness derived from the unfortunate situation of others, in the ancient Chinese text the Zuo Zhuan (左傳). The chengyu 幸災樂禍 is still used among Chinese speakers. In Japanese, the saying 他人の不幸は蜜の味 (tanin no fukō wa mitsu no aji, "the misfortune of others tastes like honey") exemplifies schadenfreude. In the Nicomachean Ethics, Aristotle used epikhairekakia (ἐπιχαιρεκακία in Greek) as part of a triad of terms, in which epikhairekakia stands as the opposite of phthonos (φθόνος), and nemesis (νέμεσις) occupies the mean. Nemesis is "a painful response to another's undeserved good fortune", while phthonos is a painful response to any good fortune of another, deserved or not. The epikhairekakos (ἐπιχαιρέκακος) person takes pleasure in another's ill fortune. Lucretius characterises the emotion in an extended simile in De rerum natura: Suave, mari magno turbantibus aequora ventis, e terra magnum alterius spectare laborem, "It is pleasant to watch from the land the great struggle of someone else in a sea rendered great by turbulent winds." The abbreviated Latin tag suave mare magno recalled the passage to generations familiar with the Latin classics. Caesarius of Heisterbach regards "delight in the adversity of a neighbour" as one of the "daughters of envy... which follows anger" in his Dialogue on Miracles. During the seventeenth century, Robert Burton described the emotion in his The Anatomy of Melancholy. The philosopher Arthur Schopenhauer described schadenfreude as the most evil sin of human feeling, famously saying "To feel envy is human, to savor schadenfreude is diabolic." The song "Schadenfreude" in the musical Avenue Q is a comedic exploration of the general public's relationship with the emotion. Rabbi Harold S. Kushner in his book When Bad Things Happen to Good People describes schadenfreude as a universal, even wholesome reaction that cannot be helped. "There is a German psychological term, Schadenfreude, which refers to the embarrassing reaction of relief we feel when something bad happens to someone else instead of to us." He gives examples and writes, "[People] don't wish their friends ill, but they can't help feeling an embarrassing spasm of gratitude that [the bad thing] happened to someone else and not to them." Susan Sontag's book Regarding the Pain of Others, published in 2003, is a study of how the pain and misfortune of some people affects others, namely whether war photography and war paintings may be helpful as anti-war tools or whether they only serve some sense of schadenfreude in some viewers.
Philosopher and sociologist Theodor Adorno defined schadenfreude as "... largely unanticipated delight in the suffering of another, which is cognized as trivial and/or appropriate." According to Google, schadenfreude is steadily becoming a more popular word. Scientific studies A New York Times article in 2002 cited a number of scientific studies of schadenfreude, which it defined as "delighting in others' misfortune". Many such studies are based on social comparison theory, the idea that when people around us have bad luck, we look better to ourselves. Other researchers have found that people with low self-esteem are more likely to feel schadenfreude than are those who have high self-esteem. A 2003 study examined intergroup schadenfreude within the context of sports, specifically an international football (soccer) competition. The study focused on the German and Dutch football teams and their fans. The results of this study indicated that the emotion of schadenfreude is very sensitive to circumstances that make it more or less legitimate to feel such malicious pleasure toward a sports rival. A 2011 study by Cikara and colleagues using functional magnetic resonance imaging (fMRI) examined schadenfreude among Boston Red Sox and New York Yankees fans, and found that fans showed increased activation in brain areas correlated with self-reported pleasure (ventral striatum) when observing the rival team experience a negative outcome (e.g., a strikeout). By contrast, fans exhibited increased activation in the anterior cingulate and insula when viewing their own team experience a negative outcome. A 2006 experiment about "justice served" suggests that men, but not women, enjoy seeing "bad people" suffer. The study measured empathy by using fMRI to observe which brain centers were stimulated as subjects watched someone experiencing physical pain. Researchers expected that the brain's empathy center of subjects would show more stimulation when those seen as "good" got an electric shock than when the shock was given to someone the subject had reason to consider "bad". This was indeed the case, but for male subjects the brain's pleasure centers also lit up when someone got a shock that the male thought was "well-deserved". Brain-scanning studies show that schadenfreude is correlated with envy in subjects. Strong feelings of envy activated physical pain nodes in the brain's dorsal anterior cingulate cortex; the brain's reward centers, such as the ventral striatum, were activated by news that other people who were envied had suffered misfortune. The magnitude of the brain's schadenfreude response could even be predicted from the strength of the previous envy response. A study conducted in 2009 provides evidence for people's capacity to feel schadenfreude in response to negative events in politics. The study was designed to determine whether objectively unfortunate political events could produce schadenfreude. It reported that the likelihood of experiencing schadenfreude depends upon whether an individual's own party or the opposing party is suffering harm. This study suggests that the domain of politics is prime territory for feelings of schadenfreude, especially for those who identify strongly with their political party. In 2014, an online survey analyzed the relationship between schadenfreude and 'Dark Triad' traits (i.e., narcissism, Machiavellianism, and psychopathy).
The findings showed that respondents with higher levels of Dark Triad traits also had higher levels of schadenfreude, engaged more in anti-social activities, and showed greater interest in sensationalism. See also Crab mentality Katagelasticism, a psychological condition in which a person excessively enjoys laughing at others Masochism Revenge Vicarious embarrassment List of German expressions in English References Further reading External links Concepts in ethics Concepts in social philosophy Empathy German words and phrases Interpersonal relationships Moral psychology Pain Psychological concepts Social emotions Words and phrases with no direct English translation
Schadenfreude
[ "Biology" ]
3,326
[ "Behavior", "Interpersonal relationships", "Human behavior" ]
177,694
https://en.wikipedia.org/wiki/Ecological%20economics
Ecological economics, bioeconomics, ecolonomy, eco-economics, or ecol-econ is both a transdisciplinary and an interdisciplinary field of academic research addressing the interdependence and coevolution of human economies and natural ecosystems, both intertemporally and spatially. By treating the economy as a subsystem of Earth's larger ecosystem, and by emphasizing the preservation of natural capital, the field of ecological economics is differentiated from environmental economics, which is the mainstream economic analysis of the environment. One survey of German economists found that ecological and environmental economics are different schools of economic thought, with ecological economists emphasizing strong sustainability and rejecting the proposition that physical (human-made) capital can substitute for natural capital (see the section on weak versus strong sustainability below). Ecological economics was founded in the 1980s as a modern discipline on the works of and interactions between various European and American academics (see the section on History and development below). The related field of green economics is in general a more politically applied form of the subject. According to ecological economist Malte Faber, ecological economics is defined by its focus on nature, justice, and time. Issues of intergenerational equity, irreversibility of environmental change, uncertainty of long-term outcomes, and sustainable development guide ecological economic analysis and valuation. Ecological economists have questioned fundamental mainstream economic approaches such as cost-benefit analysis and the separability of economic values from scientific research, contending that economics is unavoidably normative, i.e. prescriptive, rather than positive or descriptive. Positional analysis, which attempts to incorporate time and justice issues, is proposed as an alternative. Ecological economics shares several of its perspectives with feminist economics, including the focus on sustainability, nature, justice and care values. Karl Marx also commented on the relationship between capital and ecology, in what is now known as ecosocialism. History and development The antecedents of ecological economics can be traced back to the Romantics of the 19th century as well as some Enlightenment political economists of that era. Concerns over population were expressed by Thomas Malthus, while John Stuart Mill predicted the desirability of the stationary state of an economy. Mill thereby anticipated later insights of modern ecological economists, but without having had their experience of the social and ecological costs of the Post–World War II economic expansion. In 1880, Marxian economist Sergei Podolinsky attempted to theorize a labor theory of value based on embodied energy; his work was read and critiqued by Marx and Engels. Otto Neurath developed an ecological approach based on a natural economy whilst employed by the Bavarian Soviet Republic in 1919. He argued that a market system failed to take into account the needs of future generations, and that a socialist economy required calculation in kind, the tracking of all the different materials, rather than synthesising them into money as a general equivalent. In this he was criticised by neo-liberal economists such as Ludwig von Mises and Friedrich Hayek in what became known as the socialist calculation debate. The debate on energy in economic systems can also be traced back to Nobel prize-winning radiochemist Frederick Soddy (1877–1956).
In his book Wealth, Virtual Wealth and Debt (1926), Soddy criticized the prevailing belief of the economy as a perpetual motion machine capable of generating infinite wealth, a criticism expanded upon by later ecological economists such as Nicholas Georgescu-Roegen and Herman Daly. European predecessors of ecological economics include K. William Kapp (1950), Karl Polanyi (1944), and Romanian economist Nicholas Georgescu-Roegen (1971). Georgescu-Roegen, who would later mentor Herman Daly at Vanderbilt University, provided ecological economics with a modern conceptual framework based on the material and energy flows of economic production and consumption. His magnum opus, The Entropy Law and the Economic Process (1971), is credited by Daly as a fundamental text of the field, alongside Soddy's Wealth, Virtual Wealth and Debt. Some key concepts of what is now ecological economics are evident in the writings of Kenneth Boulding and E. F. Schumacher, whose book Small Is Beautiful – A Study of Economics as if People Mattered (1973) was published just a few years before the first edition of Herman Daly's comprehensive and persuasive Steady-State Economics (1977). The first organized meetings of ecological economists occurred in the 1980s. These began in 1982, at the instigation of Lois Banner, with a meeting held in Sweden (including Robert Costanza, Herman Daly, Charles Hall, Bruce Hannon, H.T. Odum, and David Pimentel). Most were ecosystem ecologists or mainstream environmental economists, with the exception of Daly. In 1987, Daly and Costanza edited an issue of Ecological Modelling to test the waters. A book entitled Ecological Economics, by Joan Martinez Alier, was published later that year. Alier renewed interest in the approach developed by Otto Neurath during the interwar period. The year 1989 saw the foundation of the International Society for Ecological Economics and publication of its journal, Ecological Economics, by Elsevier. Robert Costanza was the first president of the society and first editor of the journal, which is currently edited by Richard Howarth. Other figures include ecologists C.S. Holling and H.T. Odum, biologist Gretchen Daily, and physicist Robert Ayres. In the Marxian tradition, sociologist John Bellamy Foster and CUNY geography professor David Harvey explicitly center ecological concerns in political economy. Articles by Inge Ropke (2004, 2005) and Clive Spash (1999) cover the development and modern history of ecological economics and explain its differentiation from resource and environmental economics, as well as some of the controversy between American and European schools of thought. An article by Robert Costanza, David Stern, Lining He, and Chunbo Ma responded to a call by Mick Common to determine the foundational literature of ecological economics by using citation analysis to examine which books and articles have had the most influence on the development of the field. However, citation analysis has itself proven controversial, and similar work has been criticized by Clive Spash for attempting to pre-determine what is regarded as influential in ecological economics through study design and data manipulation. In addition, the journal Ecological Economics has itself been criticized for swamping the field with mainstream economics. Schools of thought Various competing schools of thought exist in the field. Some are close to resource and environmental economics while others are far more heterodox in outlook.
An example of the latter is the European Society for Ecological Economics. An example of the former is the Swedish Beijer International Institute of Ecological Economics. Clive Spash has argued for the classification of the ecological economics movement, and more generally work by different economic schools on the environment, into three main categories. These are the mainstream new resource economists, the new environmental pragmatists, and the more radical social ecological economists. International survey work comparing the relevance of the categories for mainstream and heterodox economists shows some clear divisions between environmental and ecological economists. A growing field of radical social-ecological theory is degrowth economics. Degrowth addresses both biophysical limits and global inequality while rejecting neoliberal economics. Degrowth prioritizes grassroots initiatives in pursuit of progressive socio-ecological goals, adhering to ecological limits by shrinking the human ecological footprint (see Differences from mainstream economics below). It involves an equitable downscaling of both production and consumption of resources in order to adhere to biophysical limits. Degrowth draws from Marxian economics, framing the growth of efficient systems as the alienation of nature and man. Economic movements like degrowth reject the idea of growth itself; some degrowth theorists call for an "exit of the economy". Critics of the degrowth movement include new resource economists, who point to the gaining momentum of sustainable development. These economists highlight the positive aspects of a green economy, which include equitable access to renewable energy and a commitment to eradicate global inequality through sustainable development (see Green economics). Examples of heterodox ecological economic experiments include the Catalan Integral Cooperative and the Solidarity Economy Networks in Italy. Both of these grassroots movements use communitarian-based economies and consciously reduce their ecological footprint by limiting material growth and adopting regenerative agriculture. Non-traditional approaches to ecological economics Cultural and heterodox applications of economic interaction around the world have begun to be included as ecological economic practices. E.F. Schumacher introduced examples of non-western economic ideas to mainstream thought in his book Small is Beautiful, where he addresses neoliberal economics through the lens of natural harmony in Buddhist economics. This emphasis on natural harmony is witnessed in diverse cultures across the globe. Buen Vivir is a traditional socio-economic movement in South America that rejects the western development model of economics. Meaning "good living", Buen Vivir emphasizes harmony with nature, cultural plurality, coexistence, and the inseparability of nature and material life. Value is not attributed to material accumulation; instead, Buen Vivir takes a more spiritual and communitarian approach to economic activity. Ecological Swaraj originated in India and is an evolving world view of human interactions within the ecosystem. This train of thought respects biophysical limits and non-human species, pursuing equity and social justice through direct democracy and grassroots leadership. Social well-being is paired with spiritual, physical, and material well-being. These movements are unique to their regions, but their values can be seen across the globe in indigenous traditions, such as the Ubuntu philosophy in South Africa.
Differences from mainstream economics Ecological economics differs from mainstream economics in that it heavily reflects on the ecological footprint of human interactions in the economy. This footprint is measured by the impact of human activities on natural resources and the waste generated in the process. Ecological economists aim to minimize the ecological footprint, taking into account the scarcity of global and regional resources and their accessibility to an economy. Some ecological economists prioritise adding natural capital to the typical capital asset analysis of land, labor, and financial capital. These ecological economists use tools from mathematical economics, as in mainstream economics, but may apply them more closely to the natural world. Whereas mainstream economists tend to be technological optimists, ecological economists are inclined to be technological sceptics. They reason that the natural world has a limited carrying capacity and that its resources may run out. Since destruction of important environmental resources could be practically irreversible and catastrophic, ecological economists are inclined to justify cautionary measures based on the precautionary principle. As ecological economists try to minimize these potential disasters, calculating the fallout of environmental destruction becomes a humanitarian issue as well. Already, the Global South has seen trends of mass migration due to environmental changes. Climate refugees from the Global South are adversely affected by changes in the environment, and some scholars point to global wealth inequality within the current neoliberal economic system as a source of this issue. The most cogent example of how the different theories treat similar assets is tropical rainforest ecosystems, most notably the Yasuni region of Ecuador. While this area has substantial deposits of bitumen, it is also one of the most diverse ecosystems on Earth, and some estimates suggest it holds over 200 undiscovered medicinal substances in its genomes, most of which would be destroyed by logging the forest or mining the bitumen. Effectively, the instructional capital of the genomes is undervalued by analyses that view the rainforest primarily as a source of wood, oil/tar and perhaps food. Increasingly, the carbon credit for leaving the extremely carbon-intensive ("dirty") bitumen in the ground is also valued; the government of Ecuador set a price of US$350M for an oil lease with the intent of selling it to someone committed to never exercising it at all and instead preserving the rainforest. While this natural capital and ecosystem services approach has proven popular amongst many, it has also been contested as failing to address the underlying problems with mainstream economics, growth, market capitalism and monetary valuation of the environment. Critiques concern the need to create a more meaningful relationship with Nature and the non-human world than is evident in the instrumentalism of shallow ecology and the environmental economists' commodification of everything external to the market system. Nature and ecology A simple circular flow of income diagram is replaced in ecological economics by a more complex flow diagram reflecting the input of solar energy, which sustains natural inputs and environmental services which are then used as units of production. Once consumed, natural inputs pass out of the economy as pollution and waste.
The potential of an environment to provide services and materials is referred to as an "environment's source function", and this function is depleted as resources are consumed or pollution contaminates the resources. The "sink function" describes an environment's ability to absorb and render harmless waste and pollution: when waste output exceeds the limit of the sink function, long-term damage occurs. Some persistent pollutants, such as some organic pollutants and nuclear waste, are absorbed very slowly or not at all; ecological economists emphasize minimizing "cumulative pollutants". Pollutants affect human health and the health of the ecosystem. The economic value of natural capital and ecosystem services is accepted by mainstream environmental economics, but is emphasized as especially important in ecological economics. Ecological economists may begin by estimating how to maintain a stable environment before assessing the cost in dollar terms. Ecological economist Robert Costanza led an attempted valuation of the global ecosystem in 1997. Initially published in Nature, the article arrived at a value of $33 trillion, with a range from $16 trillion to $54 trillion (in 1997, total global GDP was $27 trillion). Half of the value went to nutrient cycling. The open oceans, continental shelves, and estuaries had the highest total value, and the highest per-hectare values went to estuaries, swamps/floodplains, and seagrass/algae beds. The work was criticized by articles in Ecological Economics Volume 25, Issue 1, but the critics acknowledged the positive potential for economic valuation of the global ecosystem. The Earth's carrying capacity is a central issue in ecological economics. Early economists such as Thomas Malthus pointed out the finite carrying capacity of the earth, which was also central to the MIT study Limits to Growth. Diminishing returns suggest that productivity increases will slow if major technological progress is not made. Food production may become a problem, as erosion, an impending water crisis, and soil salinity (from irrigation) reduce the productivity of agriculture. Ecological economists argue that industrial agriculture, which exacerbates these problems, is not sustainable agriculture, and are generally inclined favorably to organic farming, which also reduces the output of carbon. Global wild fisheries are believed to have peaked and begun a decline, with valuable habitat such as estuaries in critical condition. The aquaculture or farming of piscivorous fish, like salmon, does not help solve the problem because they need to be fed products from other fish. Studies have shown that salmon farming has major negative impacts on wild salmon, as well as on the forage fish that need to be caught to feed them. Since animals are higher on the trophic level, they are less efficient sources of food energy. Reduced consumption of meat would reduce the demand for food, but as nations develop, they tend to adopt high-meat diets similar to that of the United States. Genetically modified food (GMF), a conventional solution to the problem, presents numerous problems: Bt corn produces its own Bacillus thuringiensis toxin/protein, but the evolution of pest resistance is believed to be only a matter of time. Global warming is now widely acknowledged as a major issue, with all national scientific academies expressing agreement on the importance of the issue. As population growth intensifies and energy demand increases, the world faces an energy crisis.
Some economists and scientists forecast a global ecological crisis if energy use is not contained, the Stern report being an example. The disagreement has sparked a vigorous debate on the issues of discounting and intergenerational equity. Ethics Mainstream economics has attempted to become a value-free "hard science", but ecological economists argue that value-free economics is generally not realistic. Ecological economics is more willing to entertain alternative conceptions of utility, efficiency, and cost-benefit, such as positional analysis or multi-criteria analysis. Ecological economics is typically viewed as economics for sustainable development, and may have goals similar to green politics. Green economics In international, regional, and national policy circles, the concept of the green economy grew in popularity, at first as a response to the financial predicament and then as a vehicle for growth and development. The United Nations Environment Programme (UNEP) defines a "green economy" as one that focuses on the human aspects and natural influences and an economic order that can generate high-salary jobs. In 2011, the definition was developed further, with the word "green" referring to an economy that is not only resourceful and well-organized but also impartial, guaranteeing an objective shift to an economy that is low-carbon, resource-efficient, and socially inclusive. The ideas and studies regarding the green economy denote a fundamental shift toward more effective, resourceful, environment-friendly and resource-saving technologies that could lessen emissions and alleviate the adverse consequences of climate change, while at the same time confronting issues of resource exhaustion and grave environmental degradation. Adherents of the green economy robustly promote good governance as an indispensable requirement and vital precondition for realizing sustainable development. To boost local investment and foreign ventures, it is crucial to have a stable and predictable macroeconomic environment; likewise, such an environment needs to be transparent and accountable. In the absence of a substantial and solid governance structure, the prospect of shifting towards a sustainable development route would be slim. In achieving a green economy, competent institutions and governance systems are vital in guaranteeing the efficient execution of strategies, guidelines, campaigns, and programmes. Shifting to a green economy demands a fresh mindset and an innovative outlook on doing business. It likewise necessitates new capacities and skill sets from labor and professionals who can function competently across sectors and work effectively within multidisciplinary teams. To achieve this goal, vocational training packages must be developed with a focus on greening the sectors. Simultaneously, the educational system needs to be assessed as well in order to incorporate the environmental and social considerations of various disciplines. Topics Among the topics addressed by ecological economics are methodology, allocation of resources, weak versus strong sustainability, energy economics, energy accounting and balance, environmental services, cost shifting, modeling, and monetary policy. Methodology A primary objective of ecological economics (EE) is to ground economic thinking and practice in physical reality, especially in the laws of physics (particularly the laws of thermodynamics) and in knowledge of biological systems.
It accepts as a goal the improvement of human well-being through development, and seeks to ensure achievement of this through planning for the sustainable development of ecosystems and societies. The terms development and sustainable development are themselves far from uncontroversial: in his book Development Betrayed, Richard B. Norgaard argues that traditional economics has hijacked the terminology of development. Well-being in ecological economics is also differentiated from welfare as found in mainstream economics and the "new welfare economics" of the 1930s which informs resource and environmental economics. This entails a limited, preference-utilitarian conception of value, i.e., nature is valuable to our economies because people will pay for its services such as clean air, clean water, encounters with wilderness, etc. Ecological economics is distinguishable from neoclassical economics primarily by its assertion that the economy is embedded within an environmental system. Ecology deals with the energy and matter transactions of life and the Earth, and the human economy is by definition contained within this system. Ecological economists argue that neoclassical economics has ignored the environment, at best considering it to be a subset of the human economy. The neoclassical view ignores much of what the natural sciences have taught us about the contributions of nature to the creation of wealth, e.g., the planetary endowment of scarce matter and energy, along with the complex and biologically diverse ecosystems that provide goods and ecosystem services directly to human communities: micro- and macro-climate regulation, water recycling, water purification, storm water regulation, waste absorption, food and medicine production, pollination, protection from solar and cosmic radiation, the view of a starry night sky, etc. There has then been a move to regard such things as natural capital and ecosystem functions as goods and services. However, this is far from uncontroversial within ecology or ecological economics due to the potential for narrowing down values to those found in mainstream economics and the danger of merely regarding Nature as a commodity. This has been referred to as ecologists "selling out on Nature". There is then a concern that ecological economics has failed to learn from the extensive literature in environmental ethics about how to structure a plural value system. Allocation of resources Resource and neoclassical economics focus primarily on the efficient allocation of resources and less on the two other problems of importance to ecological economics: distribution (equity) and the scale of the economy relative to the ecosystems upon which it relies. Ecological economics makes a clear distinction between growth (quantitative increase in economic output) and development (qualitative improvement of the quality of life), while arguing that neoclassical economics confuses the two. Ecological economists point out that beyond modest levels, increased per-capita consumption (the typical economic measure of "standard of living") may not always lead to improvement in human well-being, but may have harmful effects on the environment and broader societal well-being. This situation is sometimes referred to as uneconomic growth.
Weak versus strong sustainability Ecological economics challenges the conventional approach towards natural resources, claiming that it undervalues natural capital by considering it as interchangeable with human-made capital (labor and technology). The impending depletion of natural resources and the increase of climate-changing greenhouse gases should motivate us to examine how political, economic and social policies can benefit from alternative energy. Shifting dependence away from fossil fuels, even with specific interest in just one of the above-mentioned factors, easily benefits at least one other. For instance, photovoltaic (solar) panels have an efficiency of about 15% when absorbing the sun's energy, but demand for their construction has increased 120% within both commercial and residential properties. Additionally, this construction has led to a roughly 30% increase in labor demand (Chen). The potential for the substitution of man-made capital for natural capital is an important debate in ecological economics and the economics of sustainability. There is a continuum of views among economists between the strongly neoclassical positions of Robert Solow and Martin Weitzman at one extreme and the "entropy pessimists", notably Nicholas Georgescu-Roegen and Herman Daly, at the other. Neoclassical economists tend to maintain that man-made capital can, in principle, replace all types of natural capital. This is known as the weak sustainability view: essentially, that every technology can be improved upon or replaced by innovation, and that there is a substitute for any and all scarce materials. At the other extreme, the strong sustainability view argues that the stock of natural resources and ecological functions are irreplaceable. From the premises of strong sustainability, it follows that economic policy has a fiduciary responsibility to the greater ecological world, and that sustainable development must therefore take a different approach to valuing natural resources and ecological functions. Recently, Stanislav Shmelev developed a new methodology for assessing progress at the macro scale based on multi-criteria methods, which allows different perspectives to be considered, including strong and weak sustainability or conservationists versus industrialists, and aims to find a "middle way" by providing a strong neo-Keynesian economic push without putting excessive pressure on natural resources, including water, or producing emissions, both directly and indirectly. Energy economics A key concept of energy economics is net energy gain, which recognizes that all energy sources require an initial energy investment in order to produce energy. To be useful, the energy return on energy invested (EROEI) has to be greater than one. The net energy gain from the production of coal, oil and gas has declined over time as the easiest-to-produce sources have been most heavily depleted. In traditional energy economics, surplus energy is often seen as something to be capitalized on, either by storing it for future use or by converting it into economic growth. Ecological economics generally rejects the view of energy economics that growth in the energy supply is related directly to well-being, focusing instead on biodiversity and creativity, or natural capital and individual capital, in the terminology sometimes adopted to describe these economically. In practice, ecological economics focuses primarily on the key issues of uneconomic growth and quality of life.
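As a minimal illustration of the EROEI arithmetic described in the energy economics discussion above (a sketch only; the function names and figures below are hypothetical, not drawn from the article), the ratio and the net gain can be computed directly:

    # Minimal EROEI sketch; all figures are hypothetical, not from the article.
    # EROEI = energy delivered / energy invested; a source yields a net energy
    # gain only when the ratio exceeds one.
    def eroei(energy_delivered_mj: float, energy_invested_mj: float) -> float:
        return energy_delivered_mj / energy_invested_mj

    def net_energy_gain(energy_delivered_mj: float, energy_invested_mj: float) -> float:
        return energy_delivered_mj - energy_invested_mj

    # A hypothetical well that delivers 500 MJ for every 100 MJ invested:
    print(eroei(500.0, 100.0))            # 5.0 -> useful source (EROEI > 1)
    print(net_energy_gain(500.0, 100.0))  # 400.0 MJ of surplus energy

On this accounting, a declining EROEI for coal, oil and gas means a shrinking energy surplus even when gross extraction holds steady.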
Ecological economists are inclined to acknowledge that much of what is important in human well-being is not analyzable from a strictly economic standpoint, and suggest an interdisciplinary approach combining social and natural sciences as a means to address this. When considering surplus energy, ecological economists state that it could be used for activities that do not directly contribute to economic productivity but instead enhance societal and environmental well-being. This concept of dépense, as developed by Georges Bataille, offers a novel perspective on the management of surplus energy within economies. This concept encourages a shift from growth-centric models to approaches that prioritise sustainable and meaningful expenditures of excess resources. Thermoeconomics is based on the proposition that the role of energy in biological evolution should be defined and understood through the second law of thermodynamics, but also in terms of such economic criteria as productivity, efficiency, and especially the costs and benefits (or profitability) of the various mechanisms for capturing and utilizing available energy to build biomass and do work. As a result, thermoeconomics is often discussed in the field of ecological economics, which itself is related to the fields of sustainability and sustainable development. Exergy analysis is performed in the field of industrial ecology to use energy more efficiently. The term exergy was coined by Zoran Rant in 1956, but the concept was developed by J. Willard Gibbs. In recent decades, utilization of exergy has spread outside of physics and engineering to the fields of industrial ecology, ecological economics, systems ecology, and energetics. Energy accounting and balance An energy balance can be used to track energy through a system. It is a very useful tool for determining resource use and environmental impacts, using the first and second laws of thermodynamics to determine how much energy is needed at each point in a system, in what form, and at what environmental cost. The energy accounting system keeps track of energy in, energy out, and non-useful energy versus work done and transformations within the system. Scientists have written and speculated on different aspects of energy accounting (Stabile, Donald R., "Veblen and the Political Economy of the Engineer: the radical thinker and engineering leaders came to technocratic ideas at the same time," American Journal of Economics and Sociology 45:1, 1986, 43–44). Ecosystem services and their valuation Ecological economists agree that ecosystems produce enormous flows of goods and services to human beings, playing a key role in producing well-being. At the same time, there is intense debate about how and when to place values on these benefits. A study was carried out by Costanza and colleagues to determine the "value" of the services provided by the environment. This was determined by averaging values obtained from a range of studies conducted in very specific contexts and then transferring these without regard to that context. Dollar figures were averaged to a per-hectare number for different types of ecosystem, e.g., wetlands and oceans. A total was then produced which came out at 33 trillion US dollars (1997 values), more than twice the total GDP of the world at the time of the study.
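A minimal sketch of this aggregation step, assuming invented per-hectare values and areas rather than the study's actual data, shows how benefit transfer scales local estimates to a global total:

    # Sketch of a Costanza-style aggregation; every number here is a
    # hypothetical placeholder, not the study's data.
    # usd_per_ha: averaged annual value per hectare; hectares: global extent.
    ecosystems = {
        "wetlands":   {"usd_per_ha": 15000.0, "hectares": 3.3e8},
        "open_ocean": {"usd_per_ha": 250.0,   "hectares": 3.3e10},
        "estuaries":  {"usd_per_ha": 22000.0, "hectares": 1.8e8},
    }

    total = sum(e["usd_per_ha"] * e["hectares"] for e in ecosystems.values())
    print(f"Total ecosystem service value: ${total:,.0f} per year")

This simple multiplication is precisely the step critics target, since it transfers values obtained in specific contexts to every hectare of an ecosystem type.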
This study was criticized by pre-ecological and even some environmental economists for being inconsistent with assumptions of financial capital valuation, and by ecological economists for being inconsistent with an ecological economics focus on biological and physical indicators. The whole idea of treating ecosystems as goods and services to be valued in monetary terms remains controversial. A common objection is that life is precious or priceless, but within cost-benefit analysis and other standard economic methods this demonstrably degrades to life being treated as worthless. Reducing human bodies to financial values is a necessary part of mainstream economics, and not always in the direct terms of insurance or wages. One example of this in practice is the value of a statistical life, a dollar value assigned to one life and used to evaluate the costs of small changes in risk to life, such as exposure to a pollutant. Economics, in principle, assumes that conflict is reduced by agreeing on voluntary contractual relations and prices instead of simply fighting or coercing or tricking others into providing goods or services. In doing so, a provider agrees to surrender time and take bodily risks and other (reputation, financial) risks. Ecosystems are no different from other bodies economically except insofar as they are far less replaceable than typical labour or commodities. Despite these issues, many ecologists and conservation biologists are pursuing ecosystem valuation. Biodiversity measures in particular appear to be the most promising way to reconcile financial and ecological values, and there are many active efforts in this regard. The growing field of biodiversity finance began to emerge in 2008 in response to many specific proposals, such as the Ecuadoran Yasuni proposal (Multinational Monitor, 9/2007; accessed December 23, 2012) or similar ones in the Congo. US news outlets treated the stories as a "threat" to "drill a park", reflecting a previously dominant view that NGOs and governments had the primary responsibility to protect ecosystems. However, Peter Barnes and other commentators have recently argued that a guardianship/trustee/commons model is far more effective and takes the decisions out of the political realm. Commodification of other ecological relations, as in carbon credits and direct payments to farmers to preserve ecosystem services, are likewise examples that enable private parties to play more direct roles in protecting biodiversity, but these are also controversial in ecological economics. The United Nations Food and Agriculture Organization achieved near-universal agreement in 2008 that such payments directly valuing ecosystem preservation and encouraging permaculture were the only practical way out of a food crisis. The holdouts were all English-speaking countries that export GMOs and promote "free trade" agreements that facilitate their own control of the world transport network: the US, UK, Canada and Australia. Not "externalities", but cost shifting Ecological economics is founded upon the view that the neoclassical economics (NCE) assumption that environmental and community costs and benefits are mutually canceling "externalities" is not warranted. Joan Martinez Alier, for instance, shows that the bulk of consumers are automatically excluded from having an impact upon the prices of commodities, as these consumers are future generations who have not been born yet.
The assumptions behind future discounting, which hold that future goods will be cheaper than present goods, have been criticized by David Pearce and by the recent Stern Report (although the Stern report itself does employ discounting and has been criticized for this and other reasons by ecological economists such as Clive Spash). Concerning these externalities, some, like the eco-businessman Paul Hawken, argue the orthodox economic line that the only reason goods produced unsustainably are usually cheaper than goods produced sustainably is a hidden subsidy, paid by the non-monetized human environment, community or future generations. These arguments are developed further by Hawken, Amory Lovins and Hunter Lovins to promote their vision of an environmental capitalist utopia in Natural Capitalism: Creating the Next Industrial Revolution. In contrast, ecological economists like Joan Martinez-Alier appeal to a different line of reasoning. Rather than assuming some (new) form of capitalism is the best way forward, an older ecological economic critique questions the very idea of internalizing externalities as providing some corrective to the current system. The work by Karl William Kapp explains why the concept of "externality" is a misnomer. In fact, the modern business enterprise operates on the basis of shifting costs onto others as normal practice to make profits. Charles Eisenstein has argued that this method of privatising profits while socialising the costs through externalities, passing the costs to the community, to the natural environment or to future generations, is inherently destructive. As social ecological economist Clive Spash has noted, externality theory fallaciously assumes environmental and social problems are minor aberrations in an otherwise perfectly functioning efficient economic system. Internalizing the odd externality does nothing to address the structural systemic problem and fails to recognize the all-pervasive nature of these supposed "externalities". Ecological-economic modeling Mathematical modeling is a powerful tool that is used in ecological economic analysis. Various approaches and techniques include evolutionary, input-output, and neo-Austrian modeling; entropy and thermodynamic models; multi-criteria and agent-based modeling; the environmental Kuznets curve; and stock-flow consistent model frameworks (Faucheux, S., Pearce, D., and Proops, J. (eds.) (1995), Models of Sustainable Development, Edward Elgar). System dynamics and GIS are techniques applied, among others, to spatial dynamic landscape simulation modeling. The matrix accounting methods of Christian Felber provide a more sophisticated method for identifying "the common good". Monetary theory and policy Ecological economics draws upon its work on resource allocation and strong sustainability to address monetary policy. Drawing upon a transdisciplinary literature, ecological economics roots its policy work in monetary theory and its goals of sustainable scale, just distribution, and efficient allocation. Ecological economics' work on monetary theory and policy can be traced to Frederick Soddy's work on money. The field considers questions such as the growth imperative of interest-bearing debt, the nature of money, and alternative policy proposals such as alternative currencies and public banking.
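For context on the future-discounting assumptions criticized above, the standard exponential discounting relation (a textbook formula, not one quoted from this article) is

    PV = \frac{FV}{(1+r)^{t}}

so a damage of $100 occurring 100 years from now has, at r = 5%, a present value of roughly 100 / 1.05^100 ≈ $0.76, which illustrates why ecological economists question the use of conventional discount rates for intergenerational environmental problems.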
Criticism Assigning monetary value to natural resources such as biodiversity, and to the emergent ecosystem services, is often viewed as a key process in influencing economic practices, policy, and decision-making (Dasgupta, P., "Nature's role in sustaining economic development," Philosophical Transactions of the Royal Society B, 365(1537), 2010, 5–11). While this idea is becoming more and more accepted among ecologists and conservationists, some argue that it is inherently false. McCauley argues that ecological economics and the resulting ecosystem-service-based conservation can be harmful. He describes four main problems with this approach: Firstly, it seems to be assumed that all ecosystem services are financially beneficial. This is undermined by a basic characteristic of ecosystems: they do not act specifically in favour of any single species. While certain services might be very useful to us, such as coastal protection from hurricanes by mangroves, others might cause financial or personal harm, such as wolves hunting cattle. The complexity of ecosystems makes it challenging to weigh up the value of a given species. Wolves play a critical role in regulating prey populations; the absence of such an apex predator in the Scottish Highlands has caused the overpopulation of deer, preventing afforestation, which increases the risk of flooding and damage to property. Secondly, allocating monetary value to nature would make its conservation reliant on markets that fluctuate. This can lead to devaluation of services that were previously considered financially beneficial. Such is the case of the bees in a forest near former coffee plantations in Finca Santa Fe, Costa Rica. The pollination services were valued at over US$60,000 a year, but soon after the study, coffee prices dropped and the fields were replanted with pineapple. Pineapple does not require bees to be pollinated, so the value of their service dropped to zero. Thirdly, conservation programmes for the sake of financial benefit underestimate human ingenuity in inventing and replacing ecosystem services by artificial means. McCauley argues that such proposals are deemed to have a short lifespan, as the history of technology is about how humanity developed artificial alternatives to nature's services, and with time passing the cost of such services tends to decrease. This would also lead to the devaluation of ecosystem services. Lastly, it should not be assumed that conserving ecosystems is always financially beneficial as opposed to alteration. In the case of the introduction of the Nile perch to Lake Victoria, the ecological consequence was the decimation of native fauna. However, this same event is praised by the local communities, as they gain significant financial benefits from trading the fish. McCauley argues that, for these reasons, trying to convince decision-makers to conserve nature for monetary reasons is not the path to be followed, and that appealing to morality is instead the ultimate way to campaign for the protection of nature.
See also Agroecology Circular economy Critique of political economy Deep ecology Earth Economics (policy think tank) Eco-economic decoupling Eco-socialism Ecofeminism Ecological economists (category) Ecological model of competition Ecological values of mangrove Energy quality Harrington paradox Green accounting Gund Institute for Ecological Economics Index of Sustainable Economic Welfare International Society for Ecological Economics Natural capital accounting Natural resource economics Outline of green politics Social metabolism Spaceship Earth Steady-state economy References Further reading Common, M. and Stagl, S. (2005). Ecological Economics: An Introduction. New York: Cambridge University Press. Costanza, R., Cumberland, J. H., Daly, H., Goodland, R., Norgaard, R. B. (1997). An Introduction to Ecological Economics. St. Lucie Press and International Society for Ecological Economics (e-book at the Encyclopedia of Earth). Daly, H. (1980). Economics, Ecology, Ethics: Essays Toward a Steady-State Economy. W.H. Freeman and Company. Daly, H. and Townsend, K. (eds.) (1993). Valuing The Earth: Economics, Ecology, Ethics. Cambridge, Mass.; London, England: MIT Press. Daly, H. (1994). "Steady-state Economics". In: Ecology – Key Concepts in Critical Theory, edited by C. Merchant. Humanities Press. Daly, H., and J. B. Cobb (1994). For the Common Good: Redirecting the Economy Toward Community, the Environment, and a Sustainable Future. Beacon Press. Daly, H. (1997). Beyond Growth: The Economics of Sustainable Development. Beacon Press. Daly, H. (2015). "Economics for a Full World." Great Transition Initiative, https://www.greattransition.org/publication/economics-for-a-full-world. Daly, H., and J. Farley (2010). Ecological Economics: Principles and Applications. Island Press. Fragio, A. (2022). Historical Epistemology of Ecological Economics. Springer. Georgescu-Roegen, N. (1999). The Entropy Law and the Economic Process. iUniverse Press. Greer, J. M. (2011). The Wealth of Nature: Economics as if Survival Mattered. New Society Publishers. Hesmyr, Atle Kultorp (2020). Civilization: Its Economic Basis, Historical Lessons and Future Prospects. Nisus Publications. Huesemann, Michael H., and Joyce A. Huesemann (2011). Technofix: Why Technology Won't Save Us or the Environment. New Society Publishers, Gabriola Island, British Columbia, Canada, 464 pp. Jackson, Tim (2009). Prosperity without Growth: Economics for a Finite Planet. London: Routledge/Earthscan. Kevlar, M. (2014). Eco-Economics on the Horizon: Economics and Human Nature from a Behavioural Perspective. Krishnan R., Harris J. M., and N. R. Goodwin (1995). A Survey of Ecological Economics. Island Press. Martinez-Alier, J. (1990). Ecological Economics: Energy, Environment and Society. Oxford, England: Basil Blackwell. Martinez-Alier, J., Ropke, I. (eds.) (2008). Recent Developments in Ecological Economics, 2 vols. E. Elgar, Cheltenham, UK. Soddy, F. A. (1926). Wealth, Virtual Wealth and Debt. London, England: George Allen & Unwin. Stern, D. I. (1997). "Limits to substitution and irreversibility in production and consumption: A neoclassical interpretation of ecological economics". Ecological Economics 21(3): 197–215. Tacconi, L. (2000). Biodiversity and Ecological Economics: Participation, Values, and Resource Management. London, UK: Earthscan Publications. Vatn, A. (2005). Institutions and the Environment. Cheltenham: Edward Elgar. Vianna Franco, M. P., and A. Missemer (2022). A History of Ecological Economic Thought. 
London & New York: Routledge. Vinje, Victor Condorcet (2015). Economics as if Soil & Health Matters. Nisus Publications. Walker, J. (2020). More Heat than Life: The Tangled Roots of Ecology, Energy, and Economics. Springer. Industrial ecology Natural resources Environmental social science Environmental economics Schools of economic thought Political ecology
Ecological economics
[ "Chemistry", "Engineering", "Environmental_science" ]
8,456
[ "Political ecology", "Industrial engineering", "Environmental economics", "Environmental engineering", "Industrial ecology", "Environmental social science" ]
177,696
https://en.wikipedia.org/wiki/Energy%20economics
Energy economics is a broad scientific subject area which includes topics related to the supply and use of energy in societies. Considering the cost of energy services and their associated value gives economic meaning to the efficiency at which energy can be produced. Energy services can be defined as functions that generate and provide energy to the "desired end services or states". The efficiency of energy services is dependent on the engineered technology used to produce and supply energy. The goal is to minimise the energy input required (e.g. kWh, MJ; see Units of Energy) to produce the energy service, such as lighting (lumens), heating (temperature) and fuel (natural gas). The main sectors considered in energy economics are transportation and building, although it is relevant to a broad scale of human activities, including households and businesses at a microeconomic level and resource management and environmental impacts at a macroeconomic level. History Energy-related issues have been actively present in economic literature since the 1973 oil crisis, but have their roots much further back in history. As early as 1865, W.S. Jevons expressed his concern about the eventual depletion of coal resources in his book The Coal Question. One of the best known early attempts to work on the economics of exhaustible resources (incl. fossil fuel) was made by H. Hotelling, who derived a price path for non-renewable resources, known as Hotelling's rule. Development of energy economics theory over the last two centuries can be attributed to three main economic subjects: the rebound effect, the energy efficiency gap and, more recently, "green nudges". The Rebound Effect (1860s to 1930s) While energy efficiency is improved with new technology, expected energy savings are less than proportional to the efficiency gains due to behavioural responses. There are three behavioural sub-theories to be considered: the direct rebound effect, which anticipates increased use of the energy service that was improved; the indirect rebound effect, which considers an increased income effect created by savings, allowing for increased energy consumption; and the economy-wide effect, which results from the change in energy prices brought about by the newly developed technology improvements. The Energy Efficiency Gap (1980s to 1990s) Suboptimal investment in the improvement of energy efficiency resulting from market failures/barriers prevents the optimal use of energy. From an economic standpoint, a rational decision-maker with perfect information will optimally choose between the trade-off of initial investment and energy costs. However, due to uncertainties such as environmental externalities, the optimal potential energy efficiency is not always able to be achieved, thus creating an energy efficiency gap. Green Nudges (1990s to Current) While the energy efficiency gap considers economical investments, it does not consider behavioural anomalies in energy consumers. Growing concerns surrounding climate change and other environmental impacts have led to what economists would describe as irrational behaviours being exhibited by energy consumers. A contribution to this has been government interventions, coined "green nudges" by Thaler and Sunstein (2008), such as feedback on energy bills. Now that it is realised that people do not behave rationally, research into energy economics is more focused on behaviours and on influencing decision-making to close the energy efficiency gap.
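As a compact statement of Hotelling's rule mentioned in the history above (the standard textbook form for the simplest case of costless extraction, not a formula quoted from this article), the price of a non-renewable resource rises at the rate of interest:

    \frac{\dot{p}(t)}{p(t)} = r \quad \Longrightarrow \quad p(t) = p_0 e^{rt}

Under this condition, resource owners are indifferent between extracting a unit today and leaving it in the ground to appreciate; with positive extraction costs, the rule applies to the net price (price minus marginal extraction cost).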
Economic factors Due to the diversity of issues and methods applied and shared with a number of academic disciplines, energy economics does not present itself as a self-contained academic discipline, but it is an applied subdiscipline of economics. From the list of main topics of economics, some relate strongly to energy economics: Computable general equilibrium Econometrics Environmental economics Finance Industrial organization Input–output model Microeconomics Macroeconomics Operations research Resource economics Energy economics also draws heavily on results of energy engineering, geology, political sciences, ecology etc. Recent focus of energy economics includes the following issues: Climate change and climate policy Demand response Elasticity of supply and demand in the energy market Energy and economic growth Energy derivatives Energy elasticity Energy forecasting Energy markets and electricity markets – liberalisation, (de- or re-)regulation Energy infrastructure Environmental policy Sustainability Some institutions of higher education (universities) recognise energy economics as a viable career path and offer it as a course of study. The University of Cambridge, Massachusetts Institute of Technology and the Vrije Universiteit Amsterdam are the top three research universities, and Resources for the Future the top research institute. There are numerous other research departments, companies, and professionals offering energy economics studies and consultations. International Association for Energy Economics International Association for Energy Economics (IAEE) is an international non-profit society of professionals interested in energy economics. IAEE was founded in 1977, during the period of the energy crisis. IAEE is incorporated under United States laws and has headquarters in Cleveland. The IAEE operates through a 17-member Council of elected and appointed members. Council and officer members serve in a voluntary capacity. IAEE has over 4,500 members worldwide (in over 100 countries). There are more than 25 national chapters, in countries where membership exceeds 25 individual members. Some of the regularly active national chapters of the IAEE are: USAEE - United States; GEE - Germany; BIEE - Great Britain; AEE - France; AIEE - Italy. Publications The International Association for Energy Economics publishes three publications throughout the year: The Energy Journal, a quarterly academic publication; the Economics of Energy & Environmental Policy, a semi-annual publication; and the Energy Forum. Conferences The IAEE conferences address critical issues of vital concern and importance to governments and industries and provide a forum where policy issues are presented, considered and discussed at both formal sessions and informal social functions. IAEE typically holds five conferences each year. The main annual conference for IAEE is the IAEE International Conference, which is organized at diverse locations around the world. From the year 1996 on, these conferences have taken place (or will take place) in the following cities: 2021 - Online Conference 2020 - No Conference 2019 - Montreal, Canada 2018 - Groningen, The Netherlands. 2017 - Singapore. 2016 - Bergen, Norway. 2015 - Antalya, Turkey. 2014 - New York City, United States. 2013 - Daegu, South Korea. 2012 - Perth, Australia (35th). 2011 - Stockholm, Sweden. 2010 - Rio, Brazil. 2009 - San Francisco, United States. 2008 - Istanbul, Turkey. 2007 - Wellington, New Zealand. 2006 - Potsdam, Germany. 2005 - Taipei, Taiwan. 
2003 - Prague, Czech Republic. 2002 - Aberdeen, Scotland. 2001 - Houston, Texas. 2000 - Sydney, Australia. 1999 - Rome, Italy. 1998 - Quebec, Canada. 1997 - New Delhi, India. 1996 - Budapest, Hungary. Other annual IAEE conferences are the North American Conference and the European Conference. IAEE Awards The Association's Immediate Past President annually chairs the Awards committee that selects the award recipients. Outstanding Contributions to the Profession Outstanding Contributions to the IAEE The Energy Journal Campbell Watkins Best Paper Award Economics of Energy & Environmental Policy Best Paper Award Journalism Award Sources, links and portals Leading journals of energy economics include: Energy Economics The Energy Journal Resource and Energy Economics There are several other journals that regularly publish papers in energy economics: Energy – The International Journal Energy Policy International Journal of Global Energy Issues Journal of Energy Markets Utilities Policy Much progress in energy economics has been made through the conferences of the International Association for Energy Economics, the model comparison exercises of the (Stanford) Energy Modeling Forum and the meetings of the International Energy Workshop. IDEAS/RePEc has a collection of recent working papers. Leading energy economists The leading energy economists as of December 2016 include: Martin L. Weitzman Lutz Kilian Robert S. Pindyck David M. Newbery Kenneth J. Arrow Richard S.J. Tol Severin Borenstein Richard G. Newell Frederick (Rick) van der Ploeg Michael Greenstone Richard Schmalensee James Hamilton Robert Norman Stavins Ilhan Ozturk Paul Joskow Ramazan Sari Jeffrey A. Frankel David Ian Stern Kenneth S. Rogoff Rafal Weron Michael Gerald Pollitt Ugur Soytas See also Energy economists (category) Cost of electricity by source Ecological economics Embodied energy Energy accounting Energy & Environment Energy balance Energy policy Energy subsidy EROEI Industrial ecology International Energy Agency List of energy storage projects List of energy topics Social metabolism Sustainable energy Thermoeconomics References Further reading How to Measure the True Cost of Fossil Fuels March 30, 2013 Scientific American Bhattacharyya, S. (2011). Energy Economics: Concepts, Issues, Markets, and Governance. London: Springer-Verlag Limited. Herberg, Mikkal (2014). Energy Security and the Asia-Pacific: Course Reader. United States: The National Bureau of Asian Research. Zweifel, P., Praktiknjo, A., Erdmann, G. (2017). Energy Economics - Theory and Applications. Berlin, Heidelberg: Springer-Verlag. External links United States Association for Energy Economics UIA - International Association for Energy Economics (IAEE) The Distinguished Lecturer Series IAEE Newsletter Environmental social science Economics Resource economics
Energy economics
[ "Physics", "Environmental_science" ]
1,853
[ "Physical quantities", "Energy economics", "Energy (physics)", "Energy", "Environmental social science" ]
177,698
https://en.wikipedia.org/wiki/Behavioral%20economics
Behavioral economics is the study of the psychological (e.g. cognitive, behavioral, affective, social) factors involved in the decisions of individuals or institutions, and how these decisions deviate from those implied by traditional economic theory. Behavioral economics is primarily concerned with the bounds of rationality of economic agents. Behavioral models typically integrate insights from psychology, neuroscience and microeconomic theory. Behavioral economics began as a distinct field of study in the 1970s and 1980s, but can be traced back to 18th-century economists, such as Adam Smith, who considered how the economic behavior of individuals could be influenced by their desires. The status of behavioral economics as a subfield of economics is a fairly recent development; the breakthroughs that laid the foundation for it were published through the last three decades of the 20th century. Behavioral economics is still growing as a field, being used increasingly in research and in teaching. History Early classical economists included psychological reasoning in much of their writing, though psychology at the time was not a recognized field of study. In The Theory of Moral Sentiments, Adam Smith wrote on concepts later popularized by modern behavioral economic theory, such as loss aversion. Jeremy Bentham, a utilitarian philosopher of the 1700s, conceptualized utility as a product of psychology. Other economists who incorporated psychological explanations in their works included Francis Edgeworth, Vilfredo Pareto and Irving Fisher. A rejection and elimination of psychology from economics in the early 1900s brought on a period defined by a reliance on empiricism. There was a lack of confidence in hedonic theories, which saw the pursuit of maximum benefit as an essential aspect of understanding human economic behavior. Hedonic analysis had shown little success in predicting human behavior, leading many to question its viability as a reliable source for prediction. There was also a fear among economists that the involvement of psychology in shaping economic models was inordinate and a departure from accepted principles. They feared that an increased emphasis on psychology would undermine the mathematical components of the field. To boost the ability of economics to predict accurately, economists started looking to tangible phenomena rather than theories based on human psychology. Psychology was seen as unreliable by many of these economists, as it was a new field not regarded as sufficiently scientific. Though a number of scholars expressed concern about the positivism within economics, models of study dependent on psychological insights became rare. Economists instead conceptualized humans as purely rational and self-interested decision makers, illustrated in the concept of homo economicus. The resurgence of psychology within economics, which facilitated the expansion of behavioral economics, has been linked to the cognitive revolution. In the 1960s, cognitive psychology began to shed more light on the brain as an information processing device (in contrast to behaviorist models). Psychologists in this field, such as Ward Edwards, Amos Tversky and Daniel Kahneman, began to compare their cognitive models of decision-making under risk and uncertainty to economic models of rational behavior. These developments spurred economists to reconsider how psychology could be applied to economic models and theories. 
Concurrently, the expected utility hypothesis and discounted utility models began to gain acceptance. In challenging the accuracy of generic utility, these concepts established a practice foundational in behavioral economics: building on standard models by applying psychological knowledge. Mathematical psychology reflects a long-standing interest in preference transitivity and the measurement of utility. Development of Behavioral Economics In 2017, Niels Geiger, a lecturer in economics at the University of Hohenheim, conducted an investigation into the proliferation of behavioral economics. Geiger's research looked at studies that had quantified the frequency of references to terms specific to behavioral economics, and how often influential papers in behavioral economics were cited in journals on economics. The quantitative study found that there was a significant spread of behavioral economics after Kahneman and Tversky's work in the 1990s and into the 2000s. Bounded rationality Bounded rationality is the idea that when individuals make decisions, their rationality is limited by the tractability of the decision problem, their cognitive limitations and the time available. Herbert A. Simon proposed bounded rationality as an alternative basis for the mathematical modeling of decision-making. It complements "rationality as optimization", which views decision-making as a fully rational process of finding an optimal choice given the information available. Simon used the analogy of a pair of scissors, where one blade represents human cognitive limitations and the other the "structures of the environment", illustrating how minds compensate for limited resources by exploiting known structural regularity in the environment. Bounded rationality reflects the idea that humans take shortcuts that may lead to suboptimal decision-making. Behavioral economists engage in mapping the decision shortcuts that agents use in order to help increase the effectiveness of human decision-making. Bounded rationality finds that actors do not assess all available options appropriately, in order to save on search and deliberation costs. As such, decisions are not always made in the sense of greatest self-reward, because only limited information is available; instead, agents settle for an acceptable solution. One approach, adopted by Richard M. Cyert and James G. March in their 1963 book A Behavioral Theory of the Firm, was to view firms as coalitions of groups whose targets were based on satisficing rather than optimizing behaviour. Another treatment of this idea comes from Cass Sunstein and Richard Thaler's Nudge. Sunstein and Thaler recommend that choice architectures be modified in light of human agents' bounded rationality. A widely cited proposal from Sunstein and Thaler urges that healthier food be placed at eye level in order to increase the likelihood that a person will opt for that choice instead of a less healthy option. Some critics of Nudge have argued that modifying choice architectures will lead to people becoming worse decision-makers. Prospect theory In 1979, Kahneman and Tversky published Prospect Theory: An Analysis of Decision Under Risk, which used cognitive psychology to explain various divergences of economic decision making from neo-classical theory. 
Using prospect theory, Kahneman and Tversky identified three generalisations: gains are treated differently from losses; outcomes received with certainty are overweighted relative to uncertain outcomes; and the structure of the problem may affect choices. These arguments were supported in part by altering a survey question so that it was no longer a case of achieving gains but of averting losses, and the majority of respondents altered their answers accordingly, in essence proving that emotions such as fear of loss or greed can alter decisions and indicating the presence of an irrational decision-making process. Prospect theory has two stages: an editing stage and an evaluation stage. In the editing stage, risky situations are simplified using various heuristics. In the evaluation phase, risky alternatives are evaluated using various psychological principles that include: Reference dependence: When evaluating outcomes, the decision maker considers a "reference level". Outcomes are then compared to the reference point and classified as "gains" if greater than the reference point and "losses" if less than the reference point. Loss aversion: Losses are avoided more than equivalent gains are sought. In their 1992 paper, Kahneman and Tversky found the median coefficient of loss aversion to be about 2.25, i.e., losses hurt about 2.25 times more than equivalent gains reward. Non-linear probability weighting: Decision makers overweight small probabilities and underweight large probabilities—this gives rise to the inverse-S shaped "probability weighting function". Diminishing sensitivity to gains and losses: As the size of the gains and losses relative to the reference point increase in absolute value, the marginal effect on the decision maker's utility or satisfaction falls. In 1992, in the Journal of Risk and Uncertainty, Kahneman and Tversky gave a revised account of prospect theory that they called cumulative prospect theory. The new theory eliminated the editing phase in prospect theory and focused just on the evaluation phase. Its main feature was that it allowed for non-linear probability weighting in a cumulative manner, which was originally suggested in John Quiggin's rank-dependent utility theory. Psychological traits such as overconfidence, projection bias and the effects of limited attention are now part of the theory. Other developments include a conference at the University of Chicago, a special behavioral economics edition of the Quarterly Journal of Economics ("In Memory of Amos Tversky"), and Kahneman's 2002 Nobel Prize for having "integrated insights from psychological research into economic science, especially concerning human judgment and decision-making under uncertainty." A further argument of behavioural economics concerns the impact of individuals' cognitive limitations as a factor limiting the rationality of their decisions. Herbert Simon first argued this in his paper 'Bounded Rationality', where he stated that our cognitive limitations are partly the consequence of our limited ability to foresee the future, hampering the rationality of decisions. Daniel Kahneman further expanded upon the effect that cognitive ability and processes have on decision making in his book Thinking, Fast and Slow. Kahneman delved into two forms of thought: fast thinking, which he considered "operates automatically and quickly, with little or no effort and no sense of voluntary control", and slow thinking, the allocation of cognitive ability, choice and concentration. 
Fast thinking utilises heuristics, decision-making shortcuts and rules of thumb that provide an immediate but often imperfect and irrational solution. Kahneman proposed that these shortcuts give rise to a number of biases, such as hindsight bias, confirmation bias and outcome bias, among others. A key example of fast thinking and the resulting irrational decisions is the 2008 financial crisis. Nudge theory Nudge is a concept in behavioral science, political theory and economics which proposes designs or changes in decision environments as ways to influence the behavior and decision making of groups or individuals—in other words, it is "a way to manipulate people's choices to lead them to make specific decisions". The first formulation of the term and associated principles was developed in cybernetics by James Wilk before 1995 and described by Brunel University academic D. J. Stewart as "the art of the nudge" (sometimes referred to as micronudges). It also drew on methodological influences from clinical psychotherapy tracing back to Gregory Bateson, including contributions from Milton Erickson, Watzlawick, Weakland and Fisch, and Bill O'Hanlon. In this variant, the nudge is a microtargeted design geared towards a specific group of people, irrespective of the scale of intended intervention. In 2008, Richard Thaler and Cass Sunstein's book Nudge: Improving Decisions About Health, Wealth, and Happiness brought nudge theory to prominence. It also gained a following among US and UK politicians, in the private sector and in public health. The authors refer to influencing behavior without coercion as libertarian paternalism and the influencers as choice architects. Thaler and Sunstein defined a nudge as any aspect of the choice architecture that alters people's behavior in a predictable way without forbidding any options or significantly changing their economic incentives. Nudging techniques aim to capitalise on the judgemental heuristics of people. In other words, a nudge alters the environment so that when heuristic, or System 1, decision-making is used, the resulting choice will be the most positive or desired outcome. An example of such a nudge is switching the placement of junk food in a store, so that fruit and other healthy options are located next to the cash register, while junk food is relocated to another part of the store. In 2009, the United States appointed Sunstein, who helped develop the theory, as administrator of the Office of Information and Regulatory Affairs. Notable applications of nudge theory include the formation of the British Behavioural Insights Team in 2010. Often called the "Nudge Unit", it sits at the British Cabinet Office and is headed by David Halpern. In addition, the Penn Medicine Nudge Unit is the world's first behavioral design team embedded within a health system. Nudge theory has also been applied to business management and corporate culture, such as in relation to health, safety and environment (HSE) and human resources. Regarding its application to HSE, one of the primary goals of nudge is to achieve a "zero accident culture". Criticisms Cass Sunstein has responded to critiques at length in his The Ethics of Influence, making the case in favor of nudging against charges that nudges diminish autonomy, threaten dignity, violate liberties, or reduce welfare. Ethicists have debated this rigorously. These charges have been made by various participants in the debate, from Bovens to Goodwin. Wilkinson, for example, charges nudges with being manipulative, while others such as Yeung question their scientific credibility. 
Some, such as Hausman & Welch, have inquired whether nudging should be permissible on grounds of (distributive) justice; Lepenies & Malecka have questioned whether nudges are compatible with the rule of law. Similarly, legal scholars have discussed the role of nudges and the law. Behavioral economists such as Bob Sugden have pointed out that the underlying normative benchmark of nudging is still homo economicus, despite the proponents' claim to the contrary. It has been remarked that nudging is also a euphemism for psychological manipulation as practiced in social engineering. There exists an anticipation and, simultaneously, implicit criticism of the nudge theory in the works of Hungarian social psychologists who emphasize the active participation in the nudge of its target (Ferenc Merei and Laszlo Garai). Concepts Behavioral economics aims to improve or overhaul traditional economic theory by studying failures in its assumptions that people are rational and selfish. Specifically, it studies the biases, tendencies and heuristics of people's economic decisions. It aids in determining whether people make good choices and whether they could be helped to make better choices. It can be applied both before and after a decision is made. Search heuristics Behavioral economics proposes search heuristics as an aid for evaluating options. It is motivated by the fact that it is costly to gain information about options, and it aims to maximise the utility of searching for information. While no single heuristic alone offers a complete explanation of the search process, a combination of these heuristics may be used in the decision-making process. There are three primary search heuristics. Satisficing Satisficing is the idea that there is some minimum requirement from the search, and that once it has been met, the search stops. After satisficing, a person may not have the most optimal option (i.e. the one with the highest utility), but would have a "good enough" one. This heuristic may be problematic if the aspiration level is set at such a level that no products exist that could meet the requirements. Directed cognition Directed cognition is a search heuristic in which a person treats each opportunity to research information as their last. Rather than a contingent plan that indicates what will be done based on the results of each search, directed cognition considers only whether one more search should be conducted and what alternative should be researched. Elimination by aspects Whereas satisficing and directed cognition compare choices, elimination by aspects compares certain qualities. A person using the elimination-by-aspects heuristic first chooses the quality that they value most in what they are searching for and sets an aspiration level. This may be repeated to refine the search: i.e., identify the second most valued quality and set an aspiration level for it. Using this heuristic, options are eliminated as they fail to meet the minimum requirements of the chosen qualities; a short code sketch of satisficing and elimination by aspects appears below. Heuristics and cognitive effects Besides searching, behavioral economists and psychologists have identified other heuristics and other cognitive effects that affect people's decision making. These include: Mental accounting Mental accounting refers to the propensity to allocate resources for specific purposes. Mental accounting is a behavioral bias that causes one to separate money into different categories known as mental accounts, based either on the source or on the intention of the money. 
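The following minimal Python sketch illustrates two of the search heuristics above, satisficing and elimination by aspects. The options, attributes and aspiration levels are invented purely for illustration, not drawn from the literature.

# Illustrative sketch of two search heuristics; data is invented.

options = [
    {"name": "A", "price": 900, "battery_h": 6, "rating": 3.9},
    {"name": "B", "price": 750, "battery_h": 9, "rating": 4.4},
    {"name": "C", "price": 600, "battery_h": 11, "rating": 4.1},
]

def satisfice(options, is_good_enough):
    """Examine options in order and stop at the first 'good enough' one."""
    for opt in options:
        if is_good_enough(opt):
            return opt
    return None  # aspiration level may be unattainable

def eliminate_by_aspects(options, aspects):
    """Apply aspiration levels aspect by aspect, most valued first."""
    remaining = options
    for key, minimum in aspects:
        remaining = [o for o in remaining if o[key] >= minimum]
    return remaining

picked = satisfice(options, lambda o: o["rating"] >= 4.0 and o["price"] <= 800)
survivors = eliminate_by_aspects(options, [("battery_h", 8), ("rating", 4.0)])
print(picked["name"])                  # "B": first option meeting the aspiration level
print([o["name"] for o in survivors])  # ["B", "C"]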
Anchoring Anchoring describes when people have a mental reference point against which they compare results. For example, a person who anticipates that the weather on a particular day will be rainy, but finds that the day actually brings clear blue skies, will gain more utility from the pleasant weather because they anticipated that it would be bad. Herd behavior This is a relatively simple bias that reflects the tendency of people to mimic what everyone else is doing and follow the general consensus. Framing effects People tend to choose differently depending on how the options are presented to them. People tend to have little control over their susceptibility to the framing effect, as often their choice-making process is based on intuition. Biases and fallacies While heuristics are tactics or mental shortcuts to aid in the decision-making process, people are also affected by a number of biases and fallacies. Behavioral economics identifies a number of these biases that negatively affect decision making, such as: Present bias Present bias reflects the human tendency to want rewards sooner. It describes people who are more likely to forego a greater payoff in the future in favour of receiving a smaller benefit sooner. An example of this is a smoker who is trying to quit. Although they know that in the future they will suffer health consequences, the immediate gain from the nicotine hit is more favourable to a person affected by present bias. Present bias is commonly split into people who are aware of their present bias (sophisticated) and those who are not (naive). Gambler's fallacy The gambler's fallacy stems from the law of small numbers. It is the belief that an event that has occurred often in the past is less likely to occur in the future, despite the probability remaining constant. For example, if a coin had been flipped three times and turned up heads every single time, a person influenced by the gambler's fallacy would predict that the next flip ought to be tails because of the abnormal number of heads flipped in the past, even though the probability of heads occurring is still 50%. Hot hand fallacy The hot hand fallacy is the opposite of the gambler's fallacy. It is the belief that an event that has occurred often in the past is more likely to occur again in the future, such that the streak will continue. This fallacy is particularly common within sports. For example, if a football team has consistently won the last few games they have participated in, it is often said that they are 'on form' and thus expected to maintain their winning streak. Narrative fallacy Narrative fallacy refers to when people use narratives to connect the dots between random events to make sense of arbitrary information. The term stems from Nassim Taleb's book The Black Swan: The Impact of the Highly Improbable. The narrative fallacy can be problematic as it can lead individuals to construct false cause-and-effect relationships between events. For example, a startup may get funding because investors are swayed by a narrative that sounds plausible, rather than by a more reasoned analysis of available evidence. Loss aversion Loss aversion refers to the tendency to place greater weight on losses compared to equivalent gains. In other words, when an individual incurs a loss, their utility declines by more than it would rise from a same-sized gain. 
This means that they are far more likely to assign a higher priority to avoiding losses than to making investment gains. As a result, some investors might want a higher payout to compensate for losses. If the high payout is not likely, they might try to avoid losses altogether even if the investment's risk is acceptable from a rational standpoint. Recency bias Recency bias is the belief that a particular outcome is more probable simply because it has just occurred. For example, if the previous one or two coin flips were heads, a person affected by recency bias would continue to predict that heads would be flipped. Confirmation bias Confirmation bias is the tendency to prefer information consistent with one's beliefs and discount evidence inconsistent with them. Familiarity bias Familiarity bias simply describes the tendency of people to return to what they know and are comfortable with. Familiarity bias discourages affected people from exploring new options and may limit their ability to find an optimal solution. Status quo bias Status quo bias describes the tendency of people to keep things as they are. It is a particular aversion to change in favor of remaining comfortable with what is known. Connected to this concept is the endowment effect, a theory that people value things more if they own them - they require more to give up an object than they would be willing to pay to acquire it. Behavioral finance Behavioral finance is the study of the influence of psychology on the behavior of investors or financial analysts. It assumes that investors are not always rational, have limits to their self-control and are influenced by their own biases. For example, behavioral law and economics scholars studying the growth of financial firms' technological capabilities have used decision science to account for irrational consumer decisions. It also includes the subsequent effects on the markets. Behavioral finance attempts to explain the reasoning patterns of investors and measures the influential power of these patterns on the investor's decision making. The central issue in behavioral finance is explaining why market participants make irrational systematic errors, contrary to the assumption of rational market participants. Such errors affect prices and returns, creating market inefficiencies. Traditional finance The accepted theories of finance are referred to as traditional finance. The foundation of traditional finance is associated with the modern portfolio theory (MPT) and the efficient-market hypothesis (EMH). Modern portfolio theory is based on a stock or portfolio's expected return, standard deviation, and its correlation with the other assets held within the portfolio. With these three concepts, an efficient portfolio can be created for any group of assets. An efficient portfolio is a group of assets that has the maximum expected return given the amount of risk. The efficient-market hypothesis states that all public information is already reflected in a security's price. The proponents of the traditional theories believe that "investors should just own the entire market rather than attempting to outperform the market". Behavioral finance has emerged as an alternative to these theories of traditional finance, with the behavioral aspects of psychology and sociology serving as integral catalysts within this field of study. Evolution The foundations of behavioral finance can be traced back over 150 years. 
Several original books written in the 1800s and early 1900s marked the beginning of the behavioral finance school. Originally published in 1841, MacKay's Extraordinary Popular Delusions and the Madness of Crowds presents a chronological timeline of the various panics and schemes throughout history. This work shows how group behavior applies to the financial markets of today. Le Bon's important work, The Crowd: A Study of the Popular Mind, discusses the role of "crowds" (also known as crowd psychology) and group behavior as they apply to the fields of behavioral finance, social psychology, sociology and history. Selden's 1912 book Psychology of The Stock Market applies the field of psychology directly to the stock market, discussing the emotional and psychological forces at work on investors and traders in the financial markets. These three works, along with several others, form the foundation of applying psychology and sociology to the field of finance. The foundation of behavioral finance is an area based on an interdisciplinary approach including scholars from the social sciences and business schools. From the liberal arts perspective, this includes the fields of psychology, sociology, anthropology, economics and behavioral economics. On the business administration side, this covers areas such as management, marketing, finance, technology and accounting. Critics contend that behavioral finance is more a collection of anomalies than a true branch of finance and that these anomalies are either quickly priced out of the market or explained by appealing to market microstructure arguments. However, individual cognitive biases are distinct from social biases; the former can be averaged out by the market, while the latter can create positive feedback loops that drive the market further and further from a "fair price" equilibrium. It has also been observed that the problem with the general area of behavioral finance is that it only serves as a complement to general economics. Similarly, for an anomaly to violate market efficiency, an investor must be able to trade against it and earn abnormal profits; this is not the case for many anomalies. A specific example of this criticism appears in some explanations of the equity premium puzzle. It is argued that the cause is entry barriers (both practical and psychological) and that the equity premium should shrink as electronic resources open up the stock market to more traders. In response, others contend that most personal investment funds are managed through superannuation funds, minimizing the effect of these putative entry barriers. In addition, professional investors and fund managers seem to hold more bonds than one would expect given return differentials. Quantitative behavioral finance Quantitative behavioral finance uses mathematical and statistical methodology to understand behavioral biases. Some financial models used in money management and asset valuation, as well as more theoretical models, likewise incorporate behavioral finance parameters. Examples: Thaler's model of price reactions to information, with three phases (underreaction, adjustment, and overreaction), creating a price trend. (One characteristic of overreaction is that average returns following announcements of good news are lower than those following bad news. In other words, overreaction occurs if the market reacts too strongly or for too long to news, thus requiring an adjustment in the opposite direction. 
As a result, assets that outperform in one period are likely to underperform in the following period. This also applies to customers' irrational purchasing habits.) The stock image coefficient Artificial financial market Market microstructure Applied issues Behavioral game theory Behavioral game theory, invented by Colin Camerer, analyzes interactive strategic decisions and behavior using the methods of game theory, experimental economics, and experimental psychology. Experiments include testing deviations from typical simplifications of economic theory such as the independence axiom and neglect of altruism, fairness, and framing effects. On the positive side, the method has been applied to interactive learning and social preferences. As a research program, the subject is a development of the last three decades. Artificial intelligence More and more decisions are made either by human beings with the assistance of artificially intelligent machines or wholly by these machines. Tshilidzi Marwala and Evan Hurwitz, in their book, studied the utility of behavioral economics in such situations and concluded that these intelligent machines reduce the impact of bounded rational decision making. In particular, they observed that these intelligent machines reduce the degree of information asymmetry in the market and improve decision making, thus making markets more rational. The use of AI machines in the market, in applications such as online trading and decision making, has changed major economic theories. Other theories on which AI has had an impact include rational choice, rational expectations, game theory, the Lewis turning point, portfolio optimization and counterfactual thinking. Other areas of research Other branches of behavioral economics enrich the model of the utility function without implying inconsistency in preferences. Ernst Fehr, Armin Falk, and Rabin studied fairness, inequity aversion and reciprocal altruism, weakening the neoclassical assumption of perfect selfishness. This work is particularly applicable to wage setting. The work on "intrinsic motivation" by Uri Gneezy and Aldo Rustichini and on "identity" by George Akerlof and Rachel Kranton assumes that agents derive utility from adopting personal and social norms in addition to conditional expected utility. According to Aggarwal, in addition to behavioral deviations from rational equilibrium, markets are also likely to suffer from lagged responses, search costs, externalities of the commons, and other frictions, making it difficult to disentangle behavioral effects in market behavior. "Conditional expected utility" is a form of reasoning where the individual has an illusion of control, and calculates the probabilities of external events and hence their utility as a function of their own action, even when they have no causal ability to affect those external events. Behavioral economics caught on among the general public with the success of books such as Dan Ariely's Predictably Irrational. Practitioners of the discipline have studied quasi-public policy topics such as broadband mapping. Applications for behavioral economics include the modeling of the consumer decision-making process for applications in artificial intelligence and machine learning. 
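As a concrete illustration of the sort of decision model such applications build on, the following Python sketch implements the prospect-theory value and probability-weighting functions described earlier, using the median parameter estimates Kahneman and Tversky reported in 1992 (alpha = beta = 0.88, lambda = 2.25, and gamma = 0.61 for gains). It is a simplified sketch that applies the gains-side weighting throughout, not a full cumulative-prospect-theory implementation.

# Sketch of the prospect-theory value and weighting functions, with
# Kahneman and Tversky's 1992 median estimates; gains-side weighting only.

ALPHA, BETA, LAMBDA, GAMMA = 0.88, 0.88, 2.25, 0.61

def value(x: float) -> float:
    """Value function: concave for gains, convex and steeper for losses."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** BETA)

def weight(p: float) -> float:
    """Inverse-S probability weighting: overweights small probabilities."""
    return p ** GAMMA / ((p ** GAMMA + (1 - p) ** GAMMA) ** (1 / GAMMA))

# A 50-50 gamble between gaining and losing $100 has negative prospect
# value relative to the $0 reference point, so it is rejected:
prospect = weight(0.5) * value(100) + weight(0.5) * value(-100)
print(round(prospect, 2))  # negative, because losses loom larger than gains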
The Silicon Valley–based start-up Singularities is using the AGM postulates proposed by Alchourrón, Gärdenfors, and Makinson—the formalization of the concepts of beliefs and change for rational entities—in a symbolic logic to create a "machine learning and deduction engine that uses the latest data science and big data algorithms in order to generate the content and conditional rules (counterfactuals) that capture customer's behaviors and beliefs." The University of Pennsylvania's Center for Health Incentives & Behavioral Economics (CHIBE) looks at how behavioral economics can improve health outcomes. CHIBE researchers have found evidence that many behavioral economics principles (incentives, patient and clinician nudges, gamification, loss aversion, and more) can be helpful in encouraging vaccine uptake, smoking cessation, medication adherence, and physical activity, for example. Applications of behavioral economics also exist in other disciplines, for example in the area of supply chain management. Honors and awards Nobel Prize 1978 – Herbert Simon In 1978 Herbert Simon was awarded the Nobel Memorial Prize in Economic Sciences "for his pioneering research into the decision-making process within economic organizations". Simon earned his Bachelor of Arts and his Ph.D. in Political Science from the University of Chicago before going on to teach at Carnegie Tech. Simon was praised for his work on bounded rationality, a challenge to the assumption that humans are rational actors. 2002 – Daniel Kahneman and Vernon L. Smith In 2002, psychologist Daniel Kahneman and economist Vernon L. Smith were awarded the Nobel Memorial Prize in Economic Sciences. Kahneman was awarded the prize "for having integrated insights from psychological research into economic science, especially concerning human judgment and decision-making under uncertainty", while Smith was awarded the prize "for having established laboratory experiments as a tool in empirical economic analysis, especially in the study of alternative market mechanisms." 2017 – Richard Thaler In 2017, economist Richard Thaler was awarded the Nobel Memorial Prize in Economic Sciences for "his contributions to behavioral economics and his pioneering work in establishing that people are predictably irrational in ways that defy economic theory." Thaler was especially recognized for presenting inconsistencies in standard economic theory and for his formulation of mental accounting and libertarian paternalism. Other Awards 1999 – Andrei Shleifer The work of Andrei Shleifer focused on behavioral finance and made observations on the limits of the efficient-market hypothesis. Shleifer received the 1999 John Bates Clark Medal from the American Economic Association for his work. 2001 – Matthew Rabin Matthew Rabin received the "genius" award from the MacArthur Foundation in 2000. The American Economic Association chose Rabin as the recipient of the 2001 John Bates Clark Medal. Rabin's awards were given primarily on the basis of his work on fairness and reciprocity, and on present bias. 2003 – Sendhil Mullainathan Sendhil Mullainathan was the youngest of the chosen MacArthur Fellows in 2002, receiving a fellowship grant of $500,000 in 2003. Mullainathan was praised by the MacArthur Foundation for working on economics and psychology as an aggregate. Mullainathan's research focused on the salaries of executives on Wall Street; he has also looked at the implications of racial discrimination in markets in the United States. 
Criticism Two landmark papers in economic theory, both published in the Journal of Political Economy before the field of behavioral economics emerged, together provide a justification for standard neoclassical economic analysis: Armen Alchian's "Uncertainty, Evolution, and Economic Theory" (1950) and Gary Becker's "Irrational Behavior and Economic Theory" (1962). Alchian's 1950 paper uses the logic of natural selection, the evolutionary landscape model, stochastic processes, probability theory, and several other lines of reasoning to justify many of the results derived from a standard supply analysis that assumes firms maximize their profits, are certain about the future, and have accurate foresight, without having to assume any of those things. Becker's 1962 paper shows that downward-sloping market demand curves (the most important implication of the law of demand) do not actually require the assumption that the consumers in that market are rational, as behavioral economists claim; downward-sloping demand also follows from a wide variety of irrational behavior. The lines of reasoning and argumentation used in these two papers are re-expressed and expanded upon in (at least) one other professional economic publication for each of them. Alchian's thesis of evolutionary economics via natural selection by way of environmental adoption is summarized, followed by an explicit exploration of its theoretical implications for behavioral economic theory, and then illustrated via examples in several different industries including banking, hospitality, and transportation, in the 2014 paper "Uncertainty, Evolution, and Behavioral Economic Theory" by Manne and Zywicki. And the argument made in Becker's 1962 paper, that a 'pure' increase in the (relative) price (or terms of trade) of good X must reduce the amount of X demanded in the market for good X, is explained in greater detail in Lectures 4 ("The Opportunity Set") and 5 ("Substitution Effects") of Gary Becker's graduate-level textbook Economic Theory, originally published in 1971 and more or less a transcription of the Price Theory course he had taught to first-year PhD students several years earlier. Besides the aforementioned critical articles, critics of behavioral economics typically stress the rationality of economic agents. A fundamental critique is provided by Maialeh (2019), who argues that no behavioral research can establish an economic theory. Examples provided on this account include pillars of behavioral economics such as satisficing behavior or prospect theory, which are confronted from the neoclassical perspective of utility maximization and expected utility theory respectively. The author shows that behavioral findings are hardly generalizable and that they do not disprove typical mainstream axioms related to rational behavior. Others, such as the essayist and former trader Nassim Taleb, note that cognitive theories, such as prospect theory, are models of decision-making, not generalized economic behavior, and are only applicable to the sort of once-off decision problems presented to experiment participants or survey respondents. 
It is noteworthy that in the episode of EconTalk in which Taleb said this, he and the host, Russ Roberts, discuss the significance of Gary Becker's 1962 paper, cited in the first paragraph of this section, as an argument against drawing implications for market-level outcomes outside of laboratory settings, i.e. in the real world, from one-shot psychological experiments. Others argue that decision-making models, such as the endowment effect theory, that have been widely accepted by behavioral economists may be erroneously established as a consequence of poor experimental design practices that do not adequately control for subject misconceptions. Despite a great deal of rhetoric, no unified behavioral theory has yet been espoused: behavioral economists have proposed no unified alternative theory of their own to replace neoclassical economics. David Gal has argued that many of these issues stem from behavioral economics being too concerned with understanding how behavior deviates from standard economic models rather than with understanding why people behave the way they do. Understanding why behavior occurs is necessary for the creation of generalizable knowledge, the goal of science. He has referred to behavioral economics as a "triumph of marketing" and has particularly cited the example of loss aversion. Traditional economists are skeptical of the experimental and survey-based techniques that behavioral economics uses extensively. Economists typically stress revealed preferences over stated preferences (from surveys) in the determination of economic value. Experiments and surveys are at risk of systemic biases, strategic behavior and lack of incentive compatibility. Some researchers point out that participants in experiments conducted by behavioral economists are not representative enough and that drawing broad conclusions on the basis of such experiments is not possible. The acronym WEIRD has been coined to describe such study participants: those who come from Western, Educated, Industrialized, Rich, and Democratic societies. Responses Matthew Rabin dismisses these criticisms, countering that consistent results are typically obtained in multiple situations and geographies and can produce good theoretical insight. Behavioral economists, however, have responded to these criticisms by focusing on field studies rather than lab experiments. Some economists see a fundamental schism between experimental economics and behavioral economics, but prominent behavioral and experimental economists tend to share techniques and approaches in answering common questions. For example, behavioral economists are investigating neuroeconomics, which is entirely experimental and has not been verified in the field. The epistemological, ontological, and methodological components of behavioral economics are increasingly debated, in particular by historians of economics and economic methodologists. According to some researchers, when studying the mechanisms that form the basis of decision-making, especially financial decision-making, it is necessary to recognize that most decisions are made under stress, because "Stress is the nonspecific body response to any demands presented to it." Related fields Experimental economics Experimental economics is the application of experimental methods, including statistical, econometric, and computational, to study economic questions. Data collected in experiments are used to estimate effect size, test the validity of economic theories, and illuminate market mechanisms. 
Economic experiments usually use cash to motivate subjects, in order to mimic real-world incentives. Experiments are used to help understand how and why markets and other exchange systems function as they do. Experimental economics has also expanded to understand institutions and the law (experimental law and economics). A fundamental aspect of the subject is design of experiments. Experiments may be conducted in the field or in laboratory settings, whether of individual or group behavior. Variants of the subject outside such formal confines include natural and quasi-natural experiments. Neuroeconomics Neuroeconomics is an interdisciplinary field that seeks to explain human decision making, the ability to process multiple alternatives and to follow a course of action. It studies how economic behavior can shape our understanding of the brain, and how neuroscientific discoveries can constrain and guide models of economics. It combines research methods from neuroscience, experimental and behavioral economics, and cognitive and social psychology. As research into decision-making behavior becomes increasingly computational, it has also incorporated new approaches from theoretical biology, computer science, and mathematics. Neuroeconomics studies decision making by using a combination of tools from these fields so as to avoid the shortcomings that arise from a single-perspective approach. In mainstream economics, expected utility (EU) and the concept of rational agents are still being used. Many economic behaviors are not fully explained by these models, such as heuristics and framing. Behavioral economics emerged to account for these anomalies by integrating social, cognitive, and emotional factors in understanding economic decisions. Neuroeconomics adds another layer by using neuroscientific methods in understanding the interplay between economic behavior and neural mechanisms. Some scholars claim that, by using tools from various fields, neuroeconomics offers a more integrative way of understanding decision making. Evolutionary psychology An evolutionary psychology perspective states that many of the perceived limitations in rational choice can be explained as being rational in the context of maximizing biological fitness in the ancestral environment, but not necessarily in the current one. Thus, when living at subsistence level where a reduction of resources may result in death, it may have been rational to place a greater value on preventing losses than on obtaining gains. It may also explain behavioral differences between groups, such as males being less risk-averse than females since males have more variable reproductive success than females. While unsuccessful risk-seeking may limit reproductive success for both sexes, males may potentially increase their reproductive success from successful risk-seeking much more than females can. Notable people Economics George Akerlof Werner De Bondt Paul De Grauwe Linda C. Babcock Douglas Bernheim Colin Camerer Armin Falk Urs Fischbacher Tshilidzi Marwala Susan E. Mayer Ernst Fehr Simon Gächter Uri Gneezy David Laibson Louis Lévy-Garboua John A. List George Loewenstein Sendhil Mullainathan John Quiggin Matthew Rabin Reinhard Selten Herbert A. Simon Vernon L. Smith Robert Sugden Larry Summers Richard Thaler Abhijit Banerjee Esther Duflo Kevin Volpp Katy Milkman Finance Malcolm Baker Nicholas Barberis Gunduz Caginalp David Hirshleifer Andrew Lo Michael Mauboussin Terrance Odean Richard L. 
Peterson Charles Plott Robert Prechter Hersh Shefrin Robert Shiller Andrei Shleifer Robert Vishny Psychology George Ainslie Dan Ariely Ed Diener Ward Edwards Laszlo Garai Djuradj Caranovic Gerd Gigerenzer Daniel Kahneman Ariel Kalil George Katona Walter Mischel Drazen Prelec Eldar Shafir Paul Slovic John Staddon Amos Tversky Moran Cerf See also Adaptive market hypothesis Animal Spirits (Keynes) Behavioralism Behavioral operations research Behavioral Strategy Big Five personality traits Confirmation bias Cultural economics Culture change Economic sociology Emotional bias Fuzzy-trace theory Hindsight bias Homo reciprocans Important publications in behavioral economics List of cognitive biases Methodological individualism Nudge theory Observational techniques Praxeology Priority heuristic Regret theory Repugnancy costs Socioeconomics Socionomics References Citations Sources External links The Behavioral Economics Guide Overview of Behavioral Finance The Institute of Behavioral Finance Stirling Behavioural Science Blog, of the Stirling Behavioural Science Centre at University of Stirling Society for the Advancement of Behavioural Economics Behavioral Economics: Past, Present, Future – Colin F. Camerer and George Loewenstein A History of Behavioural Finance / Economics in Published Research: 1944–1988 MSc Behavioural Economics, MSc in Behavioural Economics at the University of Essex Behavioral Economics of Shipping Business Behavioral finance Financial economics Market trends Microeconomics Prospect theory
Behavioral economics
[ "Biology" ]
8,645
[ "Behavior", "Behavioral economics", "Human behavior", "Behaviorism", "Behavioral finance" ]
177,700
https://en.wikipedia.org/wiki/Risk%20aversion
In economics and finance, risk aversion is the tendency of people to prefer outcomes with low uncertainty to those outcomes with high uncertainty, even if the average outcome of the latter is equal to or higher in monetary value than the more certain outcome. Risk aversion explains the inclination to agree to a situation with a lower average payoff that is more predictable rather than another situation with a less predictable payoff that is higher on average. For example, a risk-averse investor might choose to put their money into a bank account with a low but guaranteed interest rate, rather than into a stock that may have high expected returns, but also involves a chance of losing value. Example A person is given the choice between two scenarios: one with a guaranteed payoff, and one with a risky payoff with the same average value. In the former scenario, the person receives $50. In the uncertain scenario, a coin is flipped to decide whether the person receives $100 or nothing. The expected payoff for both scenarios is $50, meaning that an individual who was insensitive to risk would not care whether they took the guaranteed payment or the gamble. However, individuals may have different risk attitudes. A person is said to be: risk averse (or risk avoiding) - if they would accept a certain payment (certainty equivalent) of less than $50 (for example, $40), rather than taking the gamble and possibly receiving nothing. risk neutral – if they are indifferent between the bet and a certain $50 payment. risk loving (or risk seeking) – if they would accept the bet even when the guaranteed payment is more than $50 (for example, $60). The average payoff of the gamble, known as its expected value, is $50. The smallest guaranteed dollar amount that an individual would be indifferent to compared to an uncertain gain of a specific average predicted value is called the certainty equivalent, which is also used as a measure of risk aversion. An individual that is risk averse has a certainty equivalent that is smaller than the prediction of uncertain gains. The risk premium is the difference between the expected value and the certainty equivalent. For risk-averse individuals, the risk premium is positive, for risk-neutral persons it is zero, and for risk-loving individuals their risk premium is negative. Utility of money In expected utility theory, an agent has a utility function u(c) where c represents the value that he might receive in money or goods (in the above example c could be $0 or $40 or $100). The utility function u(c) is defined only up to positive affine transformation – in other words, a constant could be added to the value of u(c) for all c, and/or u(c) could be multiplied by a positive constant factor, without affecting the conclusions. An agent is risk-averse if and only if the utility function is concave. For instance u(0) could be 0, u(100) might be 10, u(40) might be 5, and for comparison u(50) might be 6. The expected utility of the above bet (with a 50% chance of receiving 100 and a 50% chance of receiving 0) is E[u] = (u(0) + u(100))/2, and if the person has the utility function with u(0) = 0, u(40) = 5, and u(100) = 10, then the expected utility of the bet equals 5, which is the same as the known utility of the amount 40. Hence the certainty equivalent is 40. The risk premium is ($50 minus $40) = $10, or in proportional terms ($50 − $40)/$40 = 25% (where $50 is the expected value of the risky bet: (1/2)·0 + (1/2)·100 = 50). 
This risk premium means that the person would be willing to sacrifice as much as $10 in expected value in order to achieve perfect certainty about how much money will be received. In other words, the person would be indifferent between the bet and a guarantee of $40, and would prefer anything over $40 to the bet. In the case of a wealthier individual, the risk of losing $100 would be less significant, and for such small amounts his utility function would be likely to be almost linear. For instance, if u(0) = 0 and u(100) = 10, then u(40) might be 4.02 and u(50) might be 5.01. The utility function for perceived gains has two key properties: an upward slope, and concavity. (i) The upward slope implies that the person feels that more is better: a larger amount received yields greater utility, and for risky bets the person would prefer a bet which is first-order stochastically dominant over an alternative bet (that is, if the probability mass of the second bet is pushed to the right to form the first bet, then the first bet is preferred). (ii) The concavity of the utility function implies that the person is risk averse: a sure amount would always be preferred over a risky bet having the same expected value; moreover, for risky bets the person would prefer a bet which is a mean-preserving contraction of an alternative bet (that is, if some of the probability mass of the first bet is spread out without altering the mean to form the second bet, then the first bet is preferred). Measures of risk aversion under expected utility theory There are various measures of the risk aversion expressed by a given utility function. Several functional forms often used for utility functions are represented by these measures. Absolute risk aversion The higher the curvature of u(c), the higher the risk aversion. However, since expected utility functions are not uniquely defined (they are defined only up to affine transformations), a measure that stays constant with respect to these transformations is needed, rather than just the second derivative of u(c). One such measure is the Arrow–Pratt measure of absolute risk aversion (ARA), after the economists Kenneth Arrow and John W. Pratt, also known as the coefficient of absolute risk aversion, defined as A(c) = −u''(c)/u'(c), where u'(c) and u''(c) denote the first and second derivatives with respect to c of u(c). For example, if u(c) = 1 − e^(−αc), so that u'(c) = αe^(−αc) and u''(c) = −α²e^(−αc), then A(c) = α. Note how A(c) does not depend on c, and so affine transformations of u(c) do not change it. The following expressions relate to this term: Exponential utility of the form u(c) = 1 − e^(−αc) is unique in exhibiting constant absolute risk aversion (CARA): A(c) = α is constant with respect to c. Hyperbolic absolute risk aversion (HARA) is the most general class of utility functions that are usually used in practice (specifically, CRRA (constant relative risk aversion, see below), CARA (constant absolute risk aversion), and quadratic utility all exhibit HARA and are often used because of their mathematical tractability). A utility function exhibits HARA if its absolute risk aversion is a hyperbola, namely A(c) = 1/(ac + b). The solution to this differential equation (omitting additive and multiplicative constant terms, which do not affect the behavior implied by the utility function) is u(c) = (c − c_s)^(1−R)/(1 − R), where R = 1/a and c_s = −b/a. Note that when a = 0 this is CARA, as A(c) = 1/b is constant, and when b = 0 this is CRRA (see below), as c·A(c) = 1/a is constant. Decreasing/increasing absolute risk aversion (DARA/IARA) is present if A(c) is decreasing/increasing. Using the above definition of ARA, the following inequality holds for DARA: ∂A(c)/∂c = −[u'''(c)u'(c) − (u''(c))²]/(u'(c))² < 0, and this can hold only if u'''(c) > 0. 
Therefore, DARA implies that the utility function is positively skewed; that is, u'''(c) > 0. Analogously, IARA can be derived with the opposite directions of inequalities, which permits but does not require a negatively skewed utility function (u'''(c) < 0). An example of a DARA utility function is u(c) = ln(c), with A(c) = 1/c, while u(c) = c - αc^2, with α > 0 and A(c) = 2α/(1 - 2αc), would represent a quadratic utility function exhibiting IARA.

Experimental and empirical evidence is mostly consistent with decreasing absolute risk aversion. Contrary to what several empirical studies have assumed, wealth is not a good proxy for risk aversion when studying risk sharing in a principal-agent setting. Although A(c) is monotonic in wealth under either DARA or IARA and constant in wealth under CARA, tests of contractual risk sharing relying on wealth as a proxy for absolute risk aversion are usually not identified.

Relative risk aversion

The Arrow–Pratt measure of relative risk aversion (RRA) or coefficient of relative risk aversion is defined as

R(c) = c·A(c) = -c·u''(c)/u'(c).

Unlike ARA, whose units are 1/$, RRA is a dimensionless quantity, which allows it to be applied universally. Like for absolute risk aversion, the corresponding terms constant relative risk aversion (CRRA) and decreasing/increasing relative risk aversion (DRRA/IRRA) are used. This measure has the advantage that it is still a valid measure of risk aversion, even if the utility function changes from risk averse to risk loving as c varies, i.e. utility is not strictly convex/concave over all c. A constant RRA implies a decreasing ARA, but the reverse is not always true. As a specific example of constant relative risk aversion, the utility function u(c) = ln(c) implies R(c) = 1.

In intertemporal choice problems, the elasticity of intertemporal substitution often cannot be disentangled from the coefficient of relative risk aversion. The isoelastic utility function

u(c) = (c^(1-ρ) - 1) / (1 - ρ)

exhibits constant relative risk aversion with R(c) = ρ and an elasticity of intertemporal substitution of ε = 1/ρ. When ρ = 1, using l'Hôpital's rule shows that this simplifies to the case of log utility, u(c) = ln(c), and the income effect and substitution effect on saving exactly offset. A time-varying relative risk aversion can be considered.

Implications of increasing/decreasing absolute and relative risk aversion

The most straightforward implications of increasing or decreasing absolute or relative risk aversion, and the ones that motivate a focus on these concepts, occur in the context of forming a portfolio with one risky asset and one risk-free asset. If the person experiences an increase in wealth, he/she will choose to increase (or keep unchanged, or decrease) the number of dollars of the risky asset held in the portfolio if absolute risk aversion is decreasing (or constant, or increasing). Thus economists avoid using utility functions such as the quadratic, which exhibit increasing absolute risk aversion, because they have an unrealistic behavioral implication. Similarly, if the person experiences an increase in wealth, he/she will choose to increase (or keep unchanged, or decrease) the fraction of the portfolio held in the risky asset if relative risk aversion is decreasing (or constant, or increasing).

In one model in monetary economics, an increase in relative risk aversion increases the impact of households' money holdings on the overall economy. In other words, the more the relative risk aversion increases, the more money demand shocks will impact the economy. The ARA and RRA measures of the standard utility functions named above are checked numerically in the sketch below.
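As a numeric cross-check of the measures defined in this section, the following sketch approximates A(c) = -u''(c)/u'(c) by central finite differences and evaluates it, together with R(c) = c·A(c), for the log, exponential (CARA) and isoelastic (CRRA) utilities named above. The parameter choices (α = 0.5, ρ = 2) and the class name are arbitrary assumptions made only for this demonstration.

```java
import java.util.function.DoubleUnaryOperator;

public class RiskAversionMeasures {
    // Central-difference approximation of A(c) = -u''(c)/u'(c).
    static double ara(DoubleUnaryOperator u, double c) {
        double h = 1e-3;
        double d1 = (u.applyAsDouble(c + h) - u.applyAsDouble(c - h)) / (2 * h);
        double d2 = (u.applyAsDouble(c + h) - 2 * u.applyAsDouble(c)
                + u.applyAsDouble(c - h)) / (h * h);
        return -d2 / d1;
    }

    public static void main(String[] args) {
        double alpha = 0.5, rho = 2.0;  // illustrative parameters, not from the source
        DoubleUnaryOperator logU = Math::log;                       // A(c)=1/c (DARA), R(c)=1 (CRRA)
        DoubleUnaryOperator caraU = c -> 1 - Math.exp(-alpha * c);  // A(c)=alpha (CARA)
        DoubleUnaryOperator crraU = c -> (Math.pow(c, 1 - rho) - 1) / (1 - rho); // R(c)=rho

        for (double c : new double[] {1, 2, 5, 10}) {
            System.out.printf("c=%5.1f  log: A=%.3f R=%.3f | exp: A=%.3f | iso: R=%.3f%n",
                    c, ara(logU, c), c * ara(logU, c), ara(caraU, c), c * ara(crraU, c));
        }
    }
}
```

The printed columns reproduce the closed forms stated above: A(c) = 1/c falls with wealth (DARA) while c·A(c) stays at 1; the exponential utility keeps A(c) = 0.5 at every c; and the isoelastic utility keeps R(c) = 2 at every c.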
Portfolio theory

In modern portfolio theory, risk aversion is measured as the additional expected reward an investor requires to accept additional risk. A risk-averse investor will still invest in multiple uncertain assets, but only when the predicted return on the uncertain portfolio is greater than the predicted return on a certain one will the investor prefer the former. The risk-return spectrum is relevant here, as it results largely from this type of risk aversion. Risk is measured as the standard deviation of the return on investment, i.e. the square root of its variance. In advanced portfolio theory, different kinds of risk are taken into consideration; they are measured as the n-th root of the n-th central moment. The symbol used for risk aversion is A or A_n.

Von Neumann-Morgenstern utility theorem

The von Neumann-Morgenstern utility theorem is another model used to denote how risk aversion influences an actor's utility function. An extension of the expected utility function, the von Neumann-Morgenstern model includes risk aversion axiomatically rather than as an additional variable. John von Neumann and Oskar Morgenstern first developed the model in their book Theory of Games and Economic Behaviour. Essentially, von Neumann and Morgenstern hypothesised that individuals seek to maximise their expected utility rather than the expected monetary value of assets. In defining expected utility in this sense, the pair developed a function based on preference relations. As such, if an individual's preferences satisfy four key axioms, then a utility function based on how they weigh different outcomes can be deduced.

In applying this model to risk aversion, the function can be used to show how an individual's preferences over wins and losses will influence their expected utility function. For example, if a risk-averse individual with $20,000 in savings is given the option to gamble it for $100,000 with a 30% chance of winning, they may still decline the gamble for fear of losing their savings. This cannot be explained by expected monetary value alone, however, since the expected value of the gamble, 0.3 × $100,000 + 0.7 × $0 = $30,000, exceeds the $20,000 of savings. The von Neumann-Morgenstern model can explain this scenario. Based on preference relations, a specific utility u can be assigned to both outcomes, and the comparison becomes one of expected utility:

E[u] = 0.3 × u($100,000) + 0.7 × u($0).

For a risk-averse person, u would take values such that the individual would rather keep their $20,000 in savings than gamble it all to potentially increase their wealth to $100,000. Hence a risk-averse individual's utility function would show that

0.3 × u($100,000) + 0.7 × u($0) < u($20,000).

This comparison is checked numerically in the sketch below.
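The inequality can be checked with any concave utility. The sketch below assumes u(c) = sqrt(c), a functional form chosen here only because it is concave (risk-averse); the source does not specify a particular utility, and the class name is invented for the example.

```java
// Checks the von Neumann-Morgenstern comparison above with an assumed
// concave utility u(c) = sqrt(c). With a 30% chance of $100,000 and a 70%
// chance of $0, expected *value* favors gambling ($30,000 > $20,000), but
// expected *utility* favors keeping the $20,000 in savings.
public class VnmGambleCheck {
    static double u(double c) { return Math.sqrt(c); }

    public static void main(String[] args) {
        double expectedValue = 0.3 * 100_000 + 0.7 * 0;          // $30,000
        double expectedUtility = 0.3 * u(100_000) + 0.7 * u(0);  // ~94.9
        double utilityOfSavings = u(20_000);                     // ~141.4
        System.out.printf("EV of gamble    : $%.0f%n", expectedValue);
        System.out.printf("E[u] of gamble  : %.1f%n", expectedUtility);
        System.out.printf("u(keep savings) : %.1f -> keep savings%n", utilityOfSavings);
    }
}
```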
Limitations of expected utility treatment of risk aversion

Using expected utility theory's approach to risk aversion to analyze small-stakes decisions has come under criticism. Matthew Rabin has shown that a risk-averse, expected-utility-maximizing individual who, "from any initial wealth level [...] turns down gambles where she loses $100 or gains $110, each with 50% probability [...] will turn down 50–50 bets of losing $1,000 or gaining any sum of money." Rabin criticizes this implication of expected utility theory on grounds of implausibility: individuals who are risk averse for small gambles due to diminishing marginal utility would exhibit extreme forms of risk aversion in risky decisions under larger stakes. One solution to the problem observed by Rabin is that proposed by prospect theory and cumulative prospect theory, where outcomes are considered relative to a reference point (usually the status quo), rather than considering only the final wealth.

Another limitation is the reflection effect, which demonstrates the reversing of risk aversion. This effect was first presented by Kahneman and Tversky as a part of prospect theory, in the behavioral economics domain. The reflection effect is an identified pattern of opposite preferences between negative and positive prospects: people tend to avoid risk when the gamble is between gains, and to seek risk when the gamble is between losses. For example, most people prefer a certain gain of 3,000 to an 80% chance of a gain of 4,000. When posed the same problem, but for losses, most people prefer an 80% chance of a loss of 4,000 to a certain loss of 3,000.

The reflection effect (as well as the certainty effect) is inconsistent with the expected utility hypothesis. It is assumed that the psychological principle which stands behind this kind of behavior is the overweighting of certainty: options which are perceived as certain are over-weighted relative to uncertain options. This pattern is an indication of risk-seeking behavior in negative prospects, and it eliminates other explanations for the certainty effect, such as aversion to uncertainty or variability. The initial findings regarding the reflection effect faced criticism regarding their validity, as it was claimed that there was insufficient evidence to support the effect at the individual level. Subsequently, an extensive investigation revealed its possible limitations, suggesting that the effect is most prevalent when either small or large amounts and extreme probabilities are involved.

Bargaining and risk aversion

Numerous studies have shown that in riskless bargaining scenarios, being risk-averse is disadvantageous. Moreover, opponents will always prefer to play against the most risk-averse person. Based on both the von Neumann-Morgenstern and Nash game-theoretic models, a risk-averse person will happily receive a smaller commodity share of the bargain. This is because their utility function is concave, hence their utility increases at a decreasing rate, while their non-risk-averse opponents' utility may increase at a constant or increasing rate. Intuitively, a risk-averse person will hence settle for a smaller share of the bargain than a risk-neutral or risk-seeking individual would.

In the brain

Attitudes towards risk have attracted the interest of the fields of neuroeconomics and behavioral economics. A 2009 study by Christopoulos et al. suggested that the activity of a specific brain area (the right inferior frontal gyrus) correlates with risk aversion, with more risk-averse participants (i.e. those having higher risk premia) also having higher responses to safer options. This result coincides with other studies showing that neuromodulation of the same area results in participants making more or less risk-averse choices, depending on whether the modulation increases or decreases the activity of the target area.

Public understanding and risk in social activities

In the real world, many government agencies, e.g. the Health and Safety Executive, are fundamentally risk-averse in their mandate. This often means that they demand (with the power of legal enforcement) that risks be minimized, even at the cost of losing the utility of the risky activity. It is important to consider the opportunity cost when mitigating a risk: the cost of not taking the risky action. Writing laws focused on the risk without the balance of the utility may misrepresent society's goals. The public understanding of risk, which influences political decisions, is an area which has recently been recognised as deserving focus.
In 2007 Cambridge University initiated the Winton Professorship of the Public Understanding of Risk, a role described as outreach rather than traditional academic research by its holder, David Spiegelhalter.

Children

Children's services such as schools and playgrounds have become the focus of much risk-averse planning, meaning that children are often prevented from benefiting from activities that they would otherwise have enjoyed. Many playgrounds have been fitted with impact-absorbing matting surfaces. However, these are only designed to save children from death in the case of direct falls on their heads, and they do not achieve their main goals. They are expensive, meaning that fewer resources are available to benefit users in other ways (such as building a playground closer to the child's home, reducing the risk of a road traffic accident on the way to it), and, some argue, children may attempt more dangerous acts, with confidence in the artificial surface. Shiela Sage, an early years school advisor, observes "Children who are only ever kept in very safe places, are not the ones who are able to solve problems for themselves. Children need to have a certain amount of risk taking ... so they'll know how to get out of situations."

Game shows and investments

One experimental study with student subjects playing the game of the TV show Deal or No Deal finds that people are more risk averse in the limelight than in the anonymity of a typical behavioral laboratory. In the laboratory treatments, subjects made decisions in a standard, computerized laboratory setting as typically employed in behavioral experiments. In the limelight treatments, subjects made their choices in a simulated game show environment, which included a live audience, a game show host, and video cameras. In line with this, studies on investor behavior find that investors trade more, and more speculatively, after switching from phone-based to online trading, and that investors tend to keep their core investments with traditional brokers and use a small fraction of their wealth to speculate online.

The behavioural approach to employment status

The basis of the theory on the connection between employment status and risk aversion is the varying income level of individuals. On average, higher income earners are less risk averse than lower income earners. In terms of employment, the greater the wealth of an individual, the less risk averse they can afford to be, and the more inclined they are to make the move from a secure job to an entrepreneurial venture. The literature assumes that a small increase in income or wealth initiates the transition from employment to entrepreneurship, based on decreasing absolute risk aversion (DARA), constant absolute risk aversion (CARA), and increasing absolute risk aversion (IARA) preferences as properties of the utility function. The apportioning-risk perspective can also be used as a factor in the transition of employment status, but only if the strength of downside risk aversion exceeds the strength of risk aversion. If the behavioural approach is used to model an individual's decision on their employment status, there must be more variables than risk aversion and absolute risk aversion preferences. Incentive effects are a factor in the behavioural approach an individual takes in deciding to move from a secure job to entrepreneurship.
Non-financial incentives provided by an employer can change the decision to transition into entrepreneurship, as the intangible benefits help to strengthen how risk averse an individual is relative to the strength of downside risk aversion. Utility functions do not account for such effects and can often skew the estimated behavioural path that an individual takes towards their employment status. The design of experiments to determine at what increase of wealth or income an individual would change their employment status from a position of security to riskier ventures must include flexible utility specifications with salient incentives integrated with risk preferences. The application of relevant experiments can avoid the generalisation of varying individual preferences through the use of this model and its specified utility functions.

See also

Ambiguity aversion
Equity premium puzzle
Investor profile
Loss aversion
Marginal utility
Neuroeconomics
Optimism bias
Problem gambling, a contrary behavior
Prudence in economics and finance
Risk premium
St. Petersburg paradox
Statistical risk
Uncertainty avoidance, which is different, as uncertainty is not the same as risk
Utility

References

U. Sankar (1971), A Utility Function for Wealth for a Risk Averter, Journal of Economic Theory.

External links

Arrow-Pratt Measure on About.com:Economics
The benefit of utilities: a plausible explanation for small risky parts in the portfolio

Actuarial science
Behavioral finance
Financial risk modeling
Utility
Risk aversion
[ "Mathematics", "Biology" ]
4,544
[ "Behavior", "Human behavior", "Applied mathematics", "Actuarial science", "Behavioral finance" ]
177,766
https://en.wikipedia.org/wiki/IAU%20designated%20constellations%20by%20area
The International Astronomical Union (IAU) designates 88 constellations of stars. In the table below, they are ranked by the solid angle that they subtend in the sky, measured in square degrees and millisteradians. These solid angles depend on arbitrary boundaries between the constellations: the list below is based on constellation boundaries drawn up by Eugène Delporte in 1930 on behalf of the IAU and published in Délimitation scientifique des constellations (Cambridge University Press). Before Delporte's work, there was no standard list of the boundaries of each constellation.

Delporte drew the boundaries along vertical and horizontal lines of right ascension and declination; however, he did so for the epoch B1875.0, which means that due to precession of the equinoxes, the borders on a modern star map (e.g., for epoch J2000) are already somewhat skewed and no longer perfectly vertical or horizontal. This skew will increase over the centuries to come. However, this does not change the solid angle of any constellation.

Notes

See also

IAU designated constellations by geographical visibility
Lists of astronomical objects
Lists of stars by constellation

Sources

Constellations – Ian Ridpath
Constellations – RASC Calgary Centre

Area
IAU constellations
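Since the ranking quotes solid angles in two units, it may help to see the conversion spelled out: one degree is pi/180 radians, so one square degree is (pi/180)^2 steradians. The short sketch below (class name invented for illustration) performs the conversion and recovers the well-known total of about 41,253 square degrees for the whole sky.

```java
// Converts between the two solid-angle units used in the table.
public class SolidAngleUnits {
    // One square degree expressed in steradians: (pi/180)^2.
    static final double SQ_DEG_IN_SR = Math.pow(Math.PI / 180.0, 2);

    public static void main(String[] args) {
        System.out.printf("1 square degree = %.5f millisteradian%n",
                SQ_DEG_IN_SR * 1000.0);                       // ~0.30462 msr
        double wholeSkySqDeg = 4.0 * Math.PI / SQ_DEG_IN_SR;  // sphere = 4*pi steradians
        System.out.printf("whole sky       = %.2f square degrees%n",
                wholeSkySqDeg);                               // ~41252.96
    }
}
```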
IAU designated constellations by area
[ "Astronomy" ]
267
[ "Lists of constellations", "Sky regions", "IAU constellations", "Constellations" ]
177,787
https://en.wikipedia.org/wiki/Jakarta%20XML%20Registries
Jakarta XML Registries (JAXR; formerly Java API for XML Registries) defines a standard API for Jakarta EE applications to access and programmatically interact with various kinds of metadata registries. JAXR is one of the Java XML programming APIs. The JAXR API was developed under the Java Community Process as JSR 93.

JAXR provides a uniform and standard Java API for accessing different kinds of XML-based metadata registries. Current implementations of JAXR support ebXML Registry version 2.0 and UDDI version 2.0; support for further registry types could be added in the future. JAXR provides an API for clients to interact with XML registries and a service provider interface (SPI) for registry providers so they can plug in their registry implementations. The JAXR API insulates application code from the underlying registry mechanism: when writing a JAXR-based client to browse or populate a registry, the code does not have to change if the registry changes, for instance from UDDI to ebXML.

Jakarta XML Registries (JAXR) was removed from Jakarta EE 9.

References

External links

Apache Scout is an open source implementation of the JSR 93
JAXR home page
freebXML Registry provides a royalty-free open source JAXR implementation

XML-based standards
Java API for XML
Java specification requests
Java enterprise platform
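As an illustration of the registry-neutral client API described above, here is a minimal JAXR query sketch using the javax.xml.registry package. The endpoint URL and the organization name pattern are placeholders invented for this example, not a live registry, and error handling is reduced to the bare minimum; it is a sketch of the usual usage pattern, not a definitive implementation.

```java
import java.util.Collection;
import java.util.Collections;
import java.util.Properties;
import javax.xml.registry.BulkResponse;
import javax.xml.registry.BusinessQueryManager;
import javax.xml.registry.Connection;
import javax.xml.registry.ConnectionFactory;
import javax.xml.registry.JAXRException;
import javax.xml.registry.RegistryService;
import javax.xml.registry.infomodel.Organization;

public class JaxrQueryExample {
    public static void main(String[] args) throws JAXRException {
        // The query URL below is a placeholder for a real UDDI or ebXML
        // registry endpoint; swapping registries means changing this
        // property, not the client code.
        Properties props = new Properties();
        props.setProperty("javax.xml.registry.queryManagerURL",
                "http://example.com/registry/query");

        ConnectionFactory factory = ConnectionFactory.newInstance();
        factory.setProperties(props);
        Connection connection = factory.createConnection();
        try {
            RegistryService service = connection.getRegistryService();
            BusinessQueryManager bqm = service.getBusinessQueryManager();

            // Find organizations whose names match the pattern "%Acme%"
            // (the pattern is invented for this example).
            Collection<String> namePatterns = Collections.singletonList("%Acme%");
            BulkResponse response =
                    bqm.findOrganizations(null, namePatterns, null, null, null, null);
            for (Object o : response.getCollection()) {
                Organization org = (Organization) o;
                System.out.println(org.getName().getValue());
            }
        } finally {
            connection.close();
        }
    }
}
```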
Jakarta XML Registries
[ "Technology" ]
277
[ "Computer standards", "XML-based standards" ]
177,793
https://en.wikipedia.org/wiki/Great%20chain%20of%20being
The great chain of being is a hierarchical structure of all matter and life, thought by medieval Christianity to have been decreed by God. The chain begins with God and descends through angels, humans, animals and plants to minerals.

The great chain of being (scala naturae) is a concept derived from Plato, Aristotle (in his Historia Animalium), Plotinus and Proclus. Further developed during the Middle Ages, it reached full expression in early modern Neoplatonism.

Divisions

The chain of being hierarchy has God at the top, above angels, which like him are entirely spirit, without material bodies, and hence unchangeable. Beneath them are humans, consisting both of spirit and matter; they change and die, and are thus essentially impermanent. Lower still are animals and plants. At the bottom are the mineral materials of the earth itself, which consist only of matter. Thus, the higher a being stands in the chain, the more attributes it has, including all the attributes of the beings below it. The minerals are, in the medieval mind, a possible exception to the immutability of the material beings in the chain, as alchemy promised to turn lower elements like lead into those higher up the chain, like silver or gold.

The Great Chain

The Great Chain of Being links God, angels, humans, animals, plants, and minerals. The links of the chain are:

God

God is the creator of all things. Many religions, such as Judaism, Christianity, and Islam, believe he created the entire universe and everything in it. He has the spiritual attributes found in angels and humans, and the unique attributes of omnipotence, omnipresence, and omniscience. He is the model of perfection in all of creation.

Angelic beings

In the New Testament, the Epistle to the Colossians sets out a partial list: "everything visible and everything invisible, Thrones, Dominations, Sovereignties, Powers – all things were created through him and for him." The Epistle to the Ephesians also lists several entities: "Far above all principality, and power, and might, and dominion, and every name that is named, not only in this world, but also in that which is to come". In the 5th and 6th centuries, Pseudo-Dionysius the Areopagite set out a more elaborate hierarchy, consisting of three lists, each of three types:

Angels of presence, praising God
Seraphim – spirits of love
Cherubim – spirits of harmony
Thrones – record keepers of universal laws

Angels of government, spreading light
Dominions – spirits of wisdom and knowledge
Virtues – angels of movement and free will
Powers – angels of form and space

Angels of revelation, able to communicate with humans
Principalities – angels of time and personality
Archangels – powerful angels superior to ordinary angels
Angels – governors of spirits of nature

Humanity

Humans uniquely share spiritual attributes, such as love and language, with God and the angels above them, and physical attributes with the animals below them, like having material bodies that experience emotions and sensations such as lust and pain, and physical needs such as hunger and thirst.

Animals

Animals have senses, are able to move, and have physical appetites. An apex predator like the lion could move vigorously and had powerful senses, like keen eyesight and the ability to smell prey from a distance, while a lower order of animals might wiggle or crawl, or, like oysters, were sessile, attached to the sea-bed. All, however, share the senses of touch and taste.

Plants

Plants lacked sense organs and the ability to move, but they could grow and reproduce.
The highest plants have important healing attributes within their leaves, buds, and flowers. Lower plants included fungi and mosses.

Minerals

At the bottom of the chain, minerals were unable to move, sense, grow, or reproduce. Their attributes were being solid and strong, while the gemstones possessed magic. The king of gems was the diamond.

Natural science

From Aristotle to Linnaeus

The basic idea of a ranking of the world's organisms goes back to Aristotle's biology. In his History of Animals, he ranked animals over plants based on their ability to move and sense, and graded the animals by their reproductive mode, live birth being "higher" than laying cold eggs, and by possession of blood, warm-blooded mammals and birds again being "higher" than "bloodless" invertebrates.

Aristotle's non-religious concept of higher and lower organisms was taken up by natural philosophers during the Scholastic period to form the basis of the Scala Naturae. The scala allowed for an ordering of beings, thus forming a basis for classification in which each kind of mineral, plant and animal could be slotted into place. In medieval times, the great chain was seen as a God-given and unchangeable ordering. In the Northern Renaissance, the scientific focus shifted to biology; the threefold division of the chain below humans formed the basis for Carl Linnaeus's Systema Naturæ of 1737, in which he divided the physical components of the world into the three familiar kingdoms of minerals, plants and animals.

In alchemy

Alchemy used the great chain as the basis for its cosmology. Since all beings were linked into a chain, there was a fundamental unity of all matter, and the transformation from one place in the chain to the next might, according to alchemical reasoning, be possible. In turn, the unity of matter enabled alchemy to make another key assumption, the philosopher's stone, which somehow gathered and concentrated the universal spirit found in all matter along the chain, and which ex hypothesi might enable the alchemical transformation of one substance to another, such as the base metal lead to the noble metal gold.

Scala Naturae in evolution

The set nature of species, and thus the absoluteness of creatures' places in the great chain, came into question during the 18th century. The dual nature of the chain, divided yet united, had always allowed for seeing creation as essentially one continuous whole, with the potential for overlap between the links. Radical thinkers like Jean-Baptiste Lamarck saw a progression of life forms from the simplest creatures striving towards complexity and perfection, a schema accepted by zoologists like Henri de Blainville. The very idea of an ordering of organisms, even if supposedly fixed, laid the basis for the idea of transmutation of species, whether progressive goal-directed orthogenesis or Charles Darwin's undirected theory of evolution.

The chain of being continued to be part of metaphysics in 19th-century education, and the concept was well known. The geologist Charles Lyell used it as a metaphor in his 1851 Elements of Geology description of the geological column, where he used the term "missing links" about missing parts of the continuum. The term "missing link" later came to signify transitional fossils, particularly those bridging the gulf between man and beasts.
The idea of the great chain, as well as the derived "missing link", was abandoned in early 20th-century science, as the notion of modern animals representing ancestors of other modern animals was dropped from biology. The idea of a certain sequence from "lower" to "higher" however lingers on, as does the idea of progress in biology.

Political implications

Allenby and Garreau propose that the Catholic Church's narrative of the great chain of being kept the peace in Europe for centuries. The very concept of rebellion simply lay outside the reality within which most people lived, for to defy the King was to defy God. King James I himself wrote, "The state of monarchy is the most supreme thing upon earth: for kings are not only God's Lieutenants upon earth, and sit upon God's throne, but even by God himself they are called Gods."

Adaptations and similar concepts

The American philosopher Ken Wilber described a "Great Nest of Being" which he claims belongs to a culture-independent "perennial philosophy" traceable across 3,000 years of mystical and esoteric writings. Wilber's system corresponds with other concepts of transpersonal psychology. In his 1977 book A Guide for the Perplexed, the economist E. F. Schumacher described a hierarchy of beings, with humans at the top, able mindfully to perceive the "eternal now".

See also

References

Sources

Further reading

External links

Dictionary of the History of Ideas – Chain of Being
The Great Chain of Being reflected in the work of Descartes, Spinoza & Leibniz. Peter Suber, Earlham College, Indiana

Alchemical concepts
Aristotelianism
Christian philosophy
Esoteric Christianity
Hierarchy
Jewish mysticism
Metaphysics of religion
Natural philosophy
Neoplatonism
Obsolete biology theories
Philosophical theories
Great chain of being
[ "Biology" ]
1,773
[ "Biology theories", "Obsolete biology theories" ]
177,934
https://en.wikipedia.org/wiki/Lime%20kiln
A lime kiln is a kiln used for the calcination of limestone (calcium carbonate) to produce the form of lime called quicklime (calcium oxide). The chemical equation for this reaction is

CaCO3 + heat → CaO + CO2

The reaction is generally considered to occur at about 900 °C (at which temperature the equilibrium partial pressure of CO2 is 1 atmosphere), but a temperature around 1000 °C (at which the partial pressure of CO2 is 3.8 atmospheres) is usually used to make the reaction proceed quickly. Excessive temperature is avoided because it produces unreactive, "dead-burned" lime. Slaked lime (calcium hydroxide) can be formed by mixing quicklime with water.

History

Pre-pottery Neolithic

In plaster, proto-pottery, and mortar

Because it is so readily made by heating limestone, lime must have been known from the earliest times, and all the early civilizations used it in building mortars and as a stabilizer in mud renders and floors. According to finds at 'Ain Ghazal in Jordan, Yiftahel in Israel, and Abu Hureyra in Syria dating to 7500–6000 BCE, the earliest use of lime was mostly as a binder on floors and in plaster for coating walls. This use of plaster may in turn have led to the development of proto-pottery, made from lime and ash. In mortar, the oldest binder was mud. According to finds at Catal Hüyük in Turkey, mud was soon followed by clay, and then by lime in the 6th millennium BCE.

Lime use in agriculture and coal

Knowledge of its value in agriculture is also ancient, but agricultural use only became widely possible when the use of coal made lime cheap in the coalfields in the late 13th century; an account of agricultural use was given in 1523. The earliest descriptions of lime kilns differ little from those used for small-scale manufacture a century ago. Because land transportation of minerals like limestone and coal was difficult in the pre-industrial era, they were distributed by sea, and lime was most often manufactured at small coastal ports. Many preserved kilns are still to be seen on quaysides around the coasts of Britain.

Types of kilns

Permanent lime kilns fall into two broad categories: "flare kilns", also known as "intermittent" or "periodic" kilns; and "draw kilns", also known as "perpetual" or "running" kilns. In a flare kiln, a bottom layer of coal was built up and the kiln above filled solely with chalk. The fire was alight for several days, and then the entire kiln was emptied of the lime. In a draw kiln, usually a stone structure, the chalk or limestone was layered with wood, coal or coke and lit. As it burnt through, lime was extracted from the bottom of the kiln, through the draw hole. Further layers of stone and fuel were added to the top.

Early kilns

The common feature of early kilns was an egg-cup shaped burning chamber, constructed of brick, with an air inlet at the base (the "eye"). Limestone was crushed (often by hand) to fairly uniform lumps; fine stone was rejected. Successive dome-shaped layers of limestone and wood or coal were built up in the kiln on grate bars across the eye. When loading was complete, the kiln was kindled at the bottom, and the fire gradually spread upwards through the charge. When burnt through, the lime was cooled and raked out through the base. Fine ash dropped out and was rejected with the "riddlings". Only lump stone could be used, because the charge needed to "breathe" during firing. This also limited the size of kilns and explains why kilns were all much the same size.
Above a certain diameter, the half-burned charge would be likely to collapse under its own weight, extinguishing the fire, so kilns always made 25–30 tonnes of lime in a batch. Typically the kiln took a day to load, three days to fire, two days to cool and a day to unload, so a one-week turnaround was normal. The degree of burning was controlled by trial and error from batch to batch by varying the amount of fuel used. Because there were large temperature differences between the center of the charge and the material close to the wall, a mixture of underburned (i.e. high loss on ignition), well-burned and dead-burned lime was normally produced. Typical fuel efficiency was low, with 0.5 tonnes or more of coal being used per tonne of finished lime (15 MJ/kg).

Industrial scale production

Lime production was sometimes carried out on an industrial scale. One example at Annery in North Devon, England, near Great Torrington, was made up of three kilns grouped together in an 'L' shape and was situated beside the Torrington canal and the River Torridge to bring in the limestone and coal, and to transport away the calcined lime, in the days before properly metalled roads existed. Sets of seven kilns were common. A loading gang and an unloading gang would work the kilns in rotation through the week. A rarely used kiln was known as a "lazy kiln".

Australia

In the late 19th and early 20th centuries the town of Waratah in Gippsland, Victoria, Australia produced a majority of the quicklime used in the city of Melbourne, as well as around other parts of Gippsland. The town, now called Walkerville, was set on an isolated part of the Victorian coastline and exported the lime by ship. When this became unprofitable in 1926 the kilns were shut down. The present-day area, though having no town amenities as such, markets itself as a tourist destination. The ruins of the lime kilns can still be seen today. A lime kiln also existed in Wool Bay, South Australia.

Ukraine

United Kingdom

The large kiln at Crindledykes near Haydon Bridge, Northumbria, was one of more than 300 in the county. It was unique to the area in having four draw arches to a single pot. As production was cut back, the two side arches were blocked up, but they were restored in 1989 by English Heritage. The development of the national rail network made the local small-scale kilns increasingly unprofitable, and they gradually died out through the 19th century. They were replaced by larger industrial plants. At the same time, new uses for lime in the chemical, steel and sugar industries led to large-scale plants, which also saw the development of more efficient kilns.

A lime kiln erected at Dudley, West Midlands (formerly Worcestershire) in 1842 survives as part of the Black Country Living Museum, which opened in 1976, although the kilns were last used during the 1920s. It is now among the last in a region which was dominated by coalmining and limestone mining for generations until the 1960s.

Other countries

Modern kilns

Shaft kilns

The theoretical heat (the standard enthalpy) of reaction required to make high-calcium lime is around 3.15 MJ per kg of lime, so the batch kilns were only around 20% efficient. The key to development in efficiency was the invention of continuous kilns, avoiding the wasteful heat-up and cool-down cycles of the batch kilns. The first were simple shaft kilns, similar in construction to blast furnaces. These are counter-current shaft kilns. Modern variants include regenerative and annular kilns.
Output is usually in the range 100–500 tonnes per day.

Counter-current shaft kilns

The fuel is injected part-way up the shaft, producing the maximum temperature at this point. The fresh feed fed in at the top is first dried, then heated to 800 °C, where de-carbonation begins, proceeding progressively faster as the temperature rises. Below the burner, the hot lime transfers heat to, and is cooled by, the combustion air. A mechanical grate withdraws the lime at the bottom. A fan draws the gases through the kiln, and the level in the kiln is kept constant by adding feed through an airlock. As with batch kilns, only large, graded stone can be used, in order to ensure uniform gas-flows through the charge. The degree of burning can be adjusted by changing the rate of withdrawal of lime. Heat consumption as low as 4 MJ/kg is possible, but 4.5 to 5 MJ/kg is more typical. Due to the temperature peak of up to 1200 °C at the burners, conditions in a shaft kiln are ideal for producing medium and hard burned lime.

Regenerative kilns

These typically consist of a pair of shafts, operated alternately. When shaft A is the "primary" and B the "secondary" shaft, the combustion air is added from the top of shaft A, while fuel is added somewhat below it via burner lances, so the flame burns downward. The hot gases pass downward, cross to shaft B via the so-called "channel", and pass upward to the exhaust of shaft B. At the same time, cooling air is added from the bottom of both shafts to cool the lime and, by maintaining a positive pressure, to prevent gases escaping from the bottom of the kiln. The combustion air and cooling air leave the kiln jointly via the exhaust on top of shaft B, preheating the stone. The direction of flow is reversed periodically (typically 5–10 times per hour), shafts A and B exchanging the roles of "primary" and "secondary" shaft. The kiln has three zones: a preheating zone at the top, a burning zone in the middle, and a cooling zone close to the bottom. The cycling produces a long burning zone of constant, relatively low temperature (around 950 °C) that is ideal for the production of high quality, soft burned, reactive lime.

With exhaust gas temperatures as low as 120 °C and a lime temperature at the kiln outlet in the 80 °C range, the heat loss of the regenerative kiln is minimal, and fuel consumption is as low as 3.6 MJ/kg. Due to these features, regenerative kilns are today the mainstream technology wherever fuel costs are substantial. Regenerative kilns are built with 150 to 800 t/day output, 300 to 450 being typical.

Annular kilns

These contain a concentric internal cylinder. This gathers pre-heated air from the cooling zone, which is then used to pressurize the middle annular zone of the kiln. Air spreading outward from the pressurized zone causes counter-current flow upwards, and co-current flow downwards. This again produces a long, relatively cool calcining zone. Fuel consumption is in the 4 to 4.5 MJ/kg range, and the lime is typically medium burned.

Rotary kilns

Rotary kilns started to be used for lime manufacture at the start of the 20th century and now account for a large proportion of new installations where energy costs are less important. The early simple rotary kilns had the advantage that a much wider range of limestone sizes could be used, from fines upwards, and undesirable elements such as sulfur can be removed. On the other hand, fuel consumption was relatively high because of poor heat exchange compared with shaft kilns, leading to excessive heat loss in exhaust gases.
Old fashioned "long" rotary kilns operate at 7 to 10 MJ/kg. Modern installations partially overcome this disadvantage by adding a preheater, which has the same good solids/gas contact as a shaft kiln, but fuel consumption is still somewhat higher, typically in the range of 4.5 to 6 MJ/kg. In a typical preheater design, a circle of shafts (typically 8–15) is arranged around the kiln riser duct. Hot limestone is discharged from the shafts in sequence, by the action of a hydraulic "pusher plate". Kilns of 1000 tonnes per day output are typical. The rotary kiln is the most flexible of any lime kiln, able to produce soft, medium, or hard burned as well as dead-burned lime or dolime.

Gas cleaning

All the above kiln designs produce exhaust gas that carries an appreciable amount of dust. Lime dust is particularly corrosive. Equipment is installed to trap this dust, typically in the form of electrostatic precipitators or bag filters. The dust usually contains a high concentration of elements such as alkali metals, halogens and sulfur.

Carbon dioxide emissions

The lime industry is a significant carbon dioxide emitter. The manufacture of one tonne of calcium oxide involves decomposing calcium carbonate, with the formation of 785 kg of CO2. In some applications, such as when used as mortar, this CO2 is later re-absorbed as the mortar goes off.

If the heat supplied to form the lime (3.75 MJ/kg in an efficient kiln) is obtained by burning fossil fuel, it will release further CO2: in the case of coal fuel, 295 kg/t; in the case of natural gas fuel, 206 kg/t. The electric power consumption of an efficient plant is around 20 kWh per tonne of lime, an additional input equivalent to around 20 kg CO2 per tonne if the electricity is coal-generated. Thus, total emissions may be around 1 tonne of CO2 for every tonne of lime even in efficient industrial plants, but are typically 1.3 t/t. However, if the source of heat energy used in its manufacture is a renewable or low-carbon power source, such as solar, wind, hydro or nuclear, there may be no fuel-derived CO2 emission from the calcination process. Less energy is required in lime production per unit weight than in portland cement manufacture, primarily because a lower temperature is required.

Other emissions

See also

List of lime kilns
List of lime kilns in the United States
Limepit

References

External links

An authoritative discussion of lime and its uses (US context)
Lime Kilns at Newport Pembrokeshire West Wales
Muspratt's mid-19th century technical description of lime-burning and cement
The Lime Physical-Chemical Process
Lime Kiln Digital Collection at Sonoma State University Library
Wainmans Double Arched Lime Kiln – Made Grade II Listed Building – 1 February 2005 Details & Image: https://web.archive.org/web/20140522012536/http://cowlingweb.co.uk/local_history/history/wainmanslimekiln.asp

History of chemistry
Kilns
Firing techniques
Kiln
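A small sketch tying together the figures quoted in the "Carbon dioxide emissions" section above: the process CO2 follows from the stoichiometry of CaCO3 → CaO + CO2 and standard molar masses, while the fuel and electricity figures are the section's own numbers. The ~1 kg CO2/kWh factor for coal-generated electricity is an assumption implied by the text's "20 kg CO2 per tonne", and the class name is invented for the example.

```java
// Ties together the emission and efficiency figures quoted above
// for the production of one tonne of quicklime (CaO).
public class LimeKilnCO2 {
    public static void main(String[] args) {
        // Process CO2 from the stoichiometry of CaCO3 -> CaO + CO2.
        double molarMassCaO = 56.08;   // g/mol
        double molarMassCO2 = 44.01;   // g/mol
        double processCO2 = 1000.0 * molarMassCO2 / molarMassCaO; // kg CO2 per tonne CaO
        System.out.printf("process CO2       : %.0f kg/t%n", processCO2); // ~785, as quoted

        // Batch-kiln efficiency: theoretical 3.15 MJ/kg vs the ~15 MJ/kg actually used.
        System.out.printf("batch efficiency  : %.0f%%%n", 100 * 3.15 / 15.0); // ~21%, "around 20%"

        // Fuel and power figures quoted in the section (coal-fired case).
        double coalFuelCO2 = 295.0;          // kg CO2 per tonne of lime
        double electricityCO2 = 20.0 * 1.0;  // 20 kWh/t at an assumed ~1 kg CO2/kWh
        System.out.printf("total, coal-fired : %.0f kg/t%n",
                processCO2 + coalFuelCO2 + electricityCO2); // ~1100 kg, i.e. "around 1 tonne"
    }
}
```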
Lime kiln
[ "Chemistry", "Engineering" ]
3,031
[ "Chemical equipment", "Lime kilns", "Kilns" ]
177,948
https://en.wikipedia.org/wiki/Snowy%20Mountains%20Scheme
The Snowy Mountains Scheme, also known as the Snowy Hydro or the Snowy scheme, is a hydroelectricity and irrigation complex in south-east Australia. Near the border of New South Wales and Victoria, the scheme consists of sixteen major dams; nine power stations; two pumping stations; and a network of tunnels, pipelines and aqueducts that were constructed between 1949 and 1974. The Scheme was completed under the supervision of Chief Engineer Sir William Hudson. It is the largest engineering project undertaken in Australia.

The water of the Snowy River and some of its tributaries, much of which formerly flowed southeast onto the river flats of East Gippsland and into Bass Strait of the Tasman Sea, is captured at high elevations and diverted inland to the Murray and Murrumbidgee Rivers irrigation areas. The Scheme includes two major tunnel systems constructed through the continental divide of the Snowy Mountains, known in Australia as the Great Dividing Range. The water falls and travels through large hydro-electric power stations which generate peak-load power for the Australian Capital Territory, New South Wales and Victoria.

The Scheme also provides some security of water flows to the Murray-Darling basin, supplying a substantial volume of water each year to the basin for use in Australia's irrigated agriculture industry. In 2016, the Snowy Mountains Scheme was added to the Australian National Heritage List.

History

Background

Since the 1800s, both the Murray and Murrumbidgee rivers have been subject to development and control to meet water supply and irrigation needs. By contrast, the Snowy River, which rises in the Australian Alps and flows through mountainous and practically uninhabited country until debouching onto the river flats of East Gippsland, had never been controlled in any way, neither for the production of power nor for irrigation. A great proportion of its waters flowed eastwards into the South Pacific Ocean (the Tasman Sea). The Snowy River has the highest headwater source of any river in Australia and draws away a large proportion of the waters from the south-eastern New South Wales snowfields. The construction of the Snowy Mountains Scheme was seen as a means of supplementing the flow of the great inland rivers, a means of developing hydro-electric power, and a way to increase agricultural production in the Murray and Murrumbidgee valleys.

Following World War II, the Government of New South Wales proposed that the flow of the Snowy River be diverted into the Murrumbidgee River for irrigation and agricultural purposes. There was little emphasis placed on the generation of power. A counter proposal by the Government of Victoria involved a greater generation of power, and involved diversion of the Snowy River to the Murray River. Additionally, the Government of South Australia was concerned that downstream flows on the Murray River would be severely jeopardised. The Commonwealth Government, looking at the national implications of the two proposals, initiated a meeting to discuss the use of the waters of the Snowy River, and a Committee was set up in 1946 to examine the question on the broadest possible basis. This Committee, in a report submitted in November 1948, suggested consideration of a far greater scheme than any previously put forward. It involved not only the simple question of use of the waters of the Snowy River, but consideration of the possible diversion of a number of rivers in the area that are tributaries not only of the Snowy, but of the Murray and Murrumbidgee.
The recommendations of the Committee were generally agreed to by a conference of Ministers representing the Commonwealth, New South Wales, and Victoria, and it was also agreed that the Committee should continue its investigations. However, limitations in the Australian Constitution meant that the Commonwealth Government was restricted in the powers it could exercise without the agreement of the States. Subsequently, the Commonwealth Government introduced legislation into the Federal Parliament under its defence power and enacted the Snowy Mountains Hydro-electric Power Act 1949, which enabled the formation of the Snowy Mountains Hydroelectric Authority. Ten years later, the relevant States and Territories introduced their own corresponding legislation, and in January 1959 the Snowy Mountains Agreement was reached between the Commonwealth and the States. The legislation created the Snowy Mountains Hydroelectric Authority, which was given responsibility for the final evaluation, design and construction of the Snowy Mountains Scheme.

The final agreed plan was to divert the waters of the Snowy Mountains region to provide increased electricity generating capacity and to provide irrigation water for the dry west. It was "greeted with enthusiasm by the people of Australia" and was seen to be "a milestone towards full national development". The chief engineer, New Zealand-born William Hudson (knighted 1955), was chosen to head the scheme as Chairman of the Snowy Mountains Hydroelectric Authority, and was instructed to seek workers from overseas. Hudson's employment of workers from 32 (mostly European) countries, many of whom had been at war with each other only a few years earlier, had a significant effect on the cultural mix of Australia.

Construction

Construction of the Snowy Scheme was managed by the Snowy Mountains Hydroelectric Authority. It officially began on 17 October 1949 and took 25 years, being officially completed in 1974. An agreement between the United States Bureau of Reclamation and Snowy Mountains Hydro to provide technical assistance and training of engineers was agreed between the United States and Australia in Washington, D.C., on 16 November 1951. A loan of $100 million was obtained from the World Bank in 1962. Tunneling records were set in the construction of the Scheme, and it was completed on time and on budget in 1974, at a cost of A$820 million, a value equivalent in 1999 and 2004 to A$6 billion. Around two thirds of the workforce employed in the construction of the scheme were immigrant workers, originating from over thirty countries. The official death toll of workers on the Scheme stands at 121 people.

Roads and tracks were constructed, and seven townships and over 100 camps were built, to enable construction of the 16 major dams, seven hydroelectric power stations, two pumping stations, and the network of tunnels, pipelines and aqueducts. Just 2% of the construction work is visible from above ground. Two of the towns constructed for the scheme are now permanent: Cabramurra, the highest town in Australia, and Khancoban. Cooma flourished during construction of the Scheme and remains the headquarters of the operating company of the Scheme. Townships at Adaminaby, Jindabyne and Talbingo were inundated by the rising waters of Lake Eucumbene, Lake Jindabyne and Jounama Reservoir. Improved vehicular access to the high country enabled ski-resort villages to be constructed at Thredbo and Guthega in the 1950s by former Snowy Scheme workers who realised the potential for expansion of the Australian ski industry.
The Scheme lies almost entirely within the Kosciuszko National Park. Its design was modelled on that of the Tennessee Valley Authority. Over 100,000 people from over 30 countries were employed during its construction, providing employment for many recently arrived immigrants, and it was important in Australia's post-war economic and social development; seventy percent of all the workers were migrants. During construction of the tunnels, a number of railways were employed to convey spoil from worksites and to deliver personnel, concrete and equipment throughout. The project used Australia's first transistorised computer, one of the first in the world. Called 'Snowcom', the computer was used from 1960 to 1967.

At the completion of the project, the Australian Government retained much of the diverse workforce and established the Snowy Mountains Engineering Corporation (SMEC), which is now an international engineering consultancy company.

The Scheme is the largest renewable energy generator in mainland Australia and plays an important role in the operation of the National Electricity Market, generating approximately 67% of all renewable energy in the mainland National Electricity Market. The Snowy Scheme's primary function is as a water manager; however, under the corporatised model it must deliver dollar dividends to its three shareholder governments - the NSW, Commonwealth and Victorian Governments. The Scheme also has a significant role in providing security of water flows to the Murray-Darling Basin, providing additional water for an irrigated agriculture industry worth about A$3 bn per annum, representing more than 40% of the gross value of the nation's agricultural production.

The Snowy Mountains Hydro-electric Scheme is one of the most complex integrated water and hydro-electric power schemes in the world and is listed as a "world-class civil engineering project" by the American Society of Civil Engineers. The scheme interlocks seven power stations and 16 major dams through a network of trans-mountain tunnels and aqueducts. The history of the Snowy Scheme reveals its important role in building post-World War II Australia.

Sir William Hudson was appointed the first commissioner of the Snowy Mountains Hydroelectric Authority, serving between 1949 and 1967. The Commissioner's role was the overall management of the Scheme. He represented the Scheme at the highest levels of government, welcomed international scientists and engineers, and encouraged scientific and engineering research, as well as attending many social and civic activities. Sir William's management style 'stressed cooperation between management and labour and scientific knowledge (facts) over opinion'. The Scheme was completed with the official opening of the Tumut 3 Power Station project by the Governor-General of Australia, Sir Paul Hasluck, on 21 October 1972.

Safety

The Scheme used a number of innovative safety practices during its construction. For example, all vehicles driven on any part of the scheme were required to be fitted with seatbelts for the driver and front seat passenger, and these seatbelts were required to be used. On 16 April 1958, an elevator at a dam near Cabramurra fell about 400 feet when the cable broke, killing 4 Italian employees of a French construction firm.

Personal stories and memoirs of work on the Snowy Scheme

Various stories and memoirs have been written about work on the Snowy Mountains Scheme.
Siobhan McHugh's social history, The Snowy: The People Behind the Power, is the most prominent, having been awarded the NSW Premier's Literary Award for Non-Fiction and being the source of an ABC radio documentary series (1987) and a Film Australia documentary, Snowy, A Dream of Growing Up (1989). Her book is based on about 90 oral histories with former Snowy workers and residents, with the original recordings archived as a research collection at the State Library of New South Wales. An updated 70th anniversary edition of her book was published by New South in 2019, and its content was showcased by Richard Fidler in an interview with McHugh for his popular ABC podcast, Conversations.

Most recently, Snowy Hydro, Woden Community Service, Gen S Stories and PhotoAccess partnered for a Digital Storytelling project to present a diverse collection of stories told from the point of view of seven ex-workers, two lifelong employees and a child of a Snowy worker. As part of the project, participants created short films about their experience on the Snowy Scheme, each story offering a unique perspective into what life was like building the Scheme between 1949 and 1974. The project's artistic director Jenni Savigny assisted participants to make the short films, enabling them to put together the scripts, record voice-overs and edit the films. In an interview with Andrew Brown (The Canberra Times), Savigny said it was important to create a history of the Snowy Hydro using the participants' own words: "You just get a personal sense of what it was like to be there, and what it meant to people's lives." The films premiered on 7 June 2018 at the Palace Electric Cinema in New Acton in Canberra and can be viewed on the Woden Community Service YouTube Channel.

Current operations

The Scheme is operated by Snowy Hydro Limited, an unlisted public company owned by the Australian Federal government. Further work is ongoing to expand the Scheme under Snowy 2.0, announced in 2017. Despite government support, the expansion has attracted many criticisms and concerns over the logistical and financial feasibility of the operation.

Environmental concerns

The original plan was for 99% of the water of the Snowy River's natural flow to be diverted by the Scheme below Lake Jindabyne. Releases from the Scheme were based on the needs of riparian users only and took no account of the ecosystem's needs. It soon became clear that there were major environmental problems in the lower reaches of the Snowy river. An extensive public campaign led to the Snowy Water Inquiry being established in January 1998. The Inquiry reported to the New South Wales and Victorian Governments in October of that year, recommending an increase to 15% of natural flows. The two governments were equivocal about this target. Aside from economic considerations, there was a view that the health of the Murray was more important than that of the Snowy, and that any extra environmental flows were better used there instead.

In the 1999 Victorian state election, the seat of Gippsland East was won by Craig Ingram, an independent and member of the Snowy River Alliance, based in large part on his campaign to improve Snowy flows. In 2000, Victoria and NSW agreed to a long-term target of 28%, requiring A$156 million of investment to offset losses to inland irrigators. In August 2002 flows were increased to 6%, with a target of 21% within 10 years.
However, by October 2008 it was evident that the return of environmental flows to the Snowy River in 2009 would be no more than 4% of natural flow, with governments arguing the Snowy River needed to "pay back" the "Mowamba Borrowings". At the 2010 state election, Ingram lost the seat of Gippsland East to the Nationals. In 2017, it was announced that the 21% target would be reached for the first time. Some concerned water managers, conservationists, politicians and farmers continue to advocate for the return of environmental flows to the Snowy River.

The Snowy River Alliance, formed in 1996 to address the lack of environmental flow, commemorates Snowy River Day annually towards the end of August, to mark the 2002 anniversary of when the governments of Victoria, NSW and the Commonwealth first released water into the Snowy River over the Mowamba Weir. The Dalgety District and Community Association started in response to dirty drinking water for the town of Dalgety, the loss of fishing and the looming closure of the caravan park; a weir was constructed at Dalgety and the caravan park stayed as a result of their efforts. In accordance with the Snowy Water Licence, Snowy Hydro Limited has 're-commissioned' the Mowamba Aqueduct. Seasonal variable flows are essential to river ecology, including flushing flows to support vital ecosystems for the Australian platypus and the native Australian bass, the species over which Ingram initially fought for flows into the Snowy River. A major spillway upgrade now facilitates these flows.

Components

Construction of the Scheme began in 1949 and was completed in 1974. Guthega power station commenced power production on 21 February 1955.

Power stations

The total installed capacity is approximately 4.1 GW.

Major dams and reservoirs

The Scheme's largest dam is Talbingo Dam, with an embankment volume of 14,488,000 m3 and a wall height of 161.5 metres. Khancoban Dam has the longest crest of any dam in the scheme. A variety of dam and spillway types were used in the construction. Lake Eucumbene is the largest reservoir in the Scheme, while Deep Creek Reservoir is the smallest.

Tunnels

Pumping Stations

The Snowy Mountains Scheme has two pumping stations. The 600 MW pump storage facility at Tumut 3 Power Station returns water to Talbingo Reservoir. The 1 MW Jindabyne Pumping Station pumps water from Lake Jindabyne through to the Snowy-Geehi Tunnel at Island Bend.

Expansion plans

In March 2017, the Australian government, then headed by Malcolm Turnbull, suggested a $2 billion project expanding the 4.1 GW Snowy Mountains Scheme by 2 GW of pumped storage able to generate for a week, building new tunnels and power stations but no new dams. The 80% round-trip efficiency of such storage can be sufficient for leveling differences between supply and demand. Dubbed Snowy 2.0, the expansion has been under construction since 2019 and was expected to be complete by 2026. Delays have meant the project will not be fully operational until 2029. A rough energy-balance check on these figures is sketched below.
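A back-of-envelope check on the storage figures just quoted, treating "2 GW for a week" as one continuous discharge and applying the stated 80% round-trip efficiency; the numbers and the class name are illustrative only.

```java
// Treats "2 GW of pumped storage for a week" as one full discharge cycle
// and applies the 80% round-trip efficiency stated above.
public class Snowy2StorageSketch {
    public static void main(String[] args) {
        double powerGW = 2.0;
        double hoursPerWeek = 7 * 24.0;
        double deliverableGWh = powerGW * hoursPerWeek;  // energy generated per cycle
        double pumpingInputGWh = deliverableGWh / 0.80;  // energy needed to refill
        System.out.printf("generation per cycle : %.0f GWh%n", deliverableGWh);  // 336 GWh
        System.out.printf("pumping input        : %.0f GWh%n", pumpingInputGWh); // 420 GWh
    }
}
```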
Tourism The Snowy Scheme is a major tourist destination. Sightseeing driving tours to the key locations of the Scheme are popular out of regional centres like Cooma, Adaminaby and Jindabyne, along roads built for the Scheme such as the Snowy Mountains Highway and the Alpine Way, and towards sights like Cabramurra (Australia's highest town), spectacular dam walls, and scenic lakes. Trout fishing is popular in the lakes of the Scheme, notably Lake Jindabyne and Lake Eucumbene. The Snowy Scheme Museum opened at Adaminaby in 2011 to profile the history of the Scheme. Though skiing in Australia began in the northern Snowy Mountains in the 1860s, it was the construction of the vast Snowy Scheme from 1949, with its improvements to infrastructure and its influx of experienced European skiers among the Scheme's workers, that really opened up the mountains for the large-scale development of a ski industry and led to the establishment of Thredbo and Perisher as leading Australian resorts. The construction of Guthega Dam brought skiers to the isolated Guthega district, and a rope tow was installed there in 1957. Charles Anton, a Snowy worker, identified the potential of the Thredbo Valley. Engineering heritage award The scheme is listed as a National Engineering Landmark by Engineers Australia as part of its Engineering Heritage Recognition Program. See also Snowy, a 1990s TV series starring Rebecca Gibney, set in the Scheme Bradfield Scheme, a similar proposal for Queensland Economic history of Australia National Electricity Market Kiewa Hydroelectric Scheme References External links Snowy Hydro Limited The Snowy River Alliance, a community group for the protection of the Snowy River Further reading Adaminaby Australian National Heritage List Economic history of New South Wales Engineering projects Historic Civil Engineering Landmarks History of Australia (1945–present) Hydroelectric power stations in New South Wales Interbasin transfer Murray River River regulation in Australia Scheme Snowy River Water management in New South Wales 1940s in New South Wales 1950s in New South Wales 1960s in New South Wales 1970s in New South Wales 1949 establishments in Australia Infrastructure completed in 1972 1972 establishments in Australia 20th century in New South Wales Recipients of Engineers Australia engineering heritage markers
Snowy Mountains Scheme
[ "Engineering", "Environmental_science" ]
3,712
[ "Hydrology", "Interbasin transfer", "Historic Civil Engineering Landmarks", "Civil engineering", "nan" ]
178,212
https://en.wikipedia.org/wiki/Anticoagulant
An anticoagulant, commonly known as a blood thinner, is a chemical substance that prevents or reduces the coagulation of blood, prolonging the clotting time. Some occur naturally in blood-eating animals, such as leeches and mosquitoes, where they help keep the bite area unclotted long enough for the animal to obtain blood. As a class of medications, anticoagulants are used in therapy for thrombotic disorders. Oral anticoagulants (OACs) are taken by many people in pill or tablet form, and various intravenous anticoagulant dosage forms are used in hospitals. Some anticoagulants are used in medical equipment, such as sample tubes, blood transfusion bags, heart–lung machines, and dialysis equipment. One of the first anticoagulants, warfarin, was initially approved as a rodenticide. Anticoagulants are closely related to antiplatelet drugs and thrombolytic drugs in that all three manipulate the various pathways of blood coagulation. Specifically, antiplatelet drugs inhibit platelet aggregation (clumping together), whereas anticoagulants inhibit specific pathways of the coagulation cascade, which happens after the initial platelet aggregation but before the formation of fibrin and stable aggregated platelet products. Common anticoagulants include warfarin and heparin. Medical uses The use of anticoagulants is a decision based on the risks and benefits of anticoagulation. The biggest risk of anticoagulation therapy is the increased risk of bleeding. In otherwise healthy people, the increased risk of bleeding is minimal, but those who have had recent surgery, or who have cerebral aneurysms or other such conditions, may have too great a risk of bleeding. Generally, the benefit of anticoagulation is preventing or reducing the progression of a thromboembolic disease. Indications for anticoagulant therapy that are known to benefit from treatment include:
Atrial fibrillation (commonly forms an atrial appendage clot)
Coronary artery disease
Deep vein thrombosis (can lead to pulmonary embolism)
Ischemic stroke
Hypercoagulable states, e.g. Factor V Leiden (can lead to deep vein thrombosis)
Mechanical heart valves
Myocardial infarction
Pulmonary embolism
Restenosis from stents
Cardiopulmonary bypass (or any other surgery requiring temporary aortic occlusion)
Heart failure
In these cases, anticoagulation therapy prevents the formation or growth of dangerous clots. The decision to begin therapeutic anticoagulation often involves the use of multiple bleeding-risk prediction tools as a non-invasive pre-test stratification, because of the potential for bleeding while on blood-thinning agents. Among these tools are HAS-BLED, ATRIA, HEMORR2HAGES, and CHA2DS2-VASc. The bleeding risk estimated with such tools must then be weighed against thrombotic risk to determine the patient's overall benefit in starting anticoagulation therapy (a minimal scoring sketch closes this section). There is no evidence that adding anticoagulant therapy to standard treatment benefits people with cerebral small vessel disease but not dementia, and there is an increased risk of a person with this disease experiencing a bleed with this approach.
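As flagged above, the stratification tools are simple additive scores. Below is a minimal sketch of the CHA2DS2-VASc score (thrombotic risk in atrial fibrillation, often weighed against a bleeding score such as HAS-BLED), assuming the commonly published weights; it is illustrative only and not clinical guidance:

```python
def cha2ds2_vasc(congestive_heart_failure: bool, hypertension: bool,
                 age: int, diabetes: bool, prior_stroke_tia: bool,
                 vascular_disease: bool, female: bool) -> int:
    """Additive CHA2DS2-VASc score, using the commonly published weights."""
    score = 0
    score += 1 if congestive_heart_failure else 0
    score += 1 if hypertension else 0
    score += 2 if age >= 75 else (1 if age >= 65 else 0)  # 65-74 -> 1, >=75 -> 2
    score += 1 if diabetes else 0
    score += 2 if prior_stroke_tia else 0  # prior stroke/TIA/thromboembolism
    score += 1 if vascular_disease else 0
    score += 1 if female else 0            # sex category
    return score

# Example: a 70-year-old woman with hypertension scores 1 + 1 + 1 = 3.
print(cha2ds2_vasc(False, True, 70, False, False, False, True))  # -> 3
```

The higher the total, the stronger the case for anticoagulation; the clinician then weighs it against a bleeding-risk score as described above.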
Adverse effects The most serious and common adverse side effect associated with anticoagulants is an increased risk of bleeding, both nonmajor and major bleeding events. The bleeding risk depends on the class of anticoagulant agent used, the patient's age, and pre-existing health conditions. Warfarin has an estimated incidence of bleeding of 15–20% per year and a life-threatening bleeding rate of 1–3% per year. Newer non-vitamin K antagonist oral anticoagulants appear to have fewer life-threatening bleeding events than warfarin. Additionally, patients aged 80 years or more may be especially susceptible to bleeding complications, with a rate of 13 bleeds per 100 person-years. Bleeding risk is especially important to consider in patients with renal impairment on NOAC therapy, because all NOACs are, to some extent, excreted by the kidneys; patients with renal impairment may therefore be at higher risk of increased bleeding. In people with cancer, a systematic review found that warfarin had no effect on death rate or the risk of blood clots; however, it increased the risk of major bleeding by 107 more people per 1,000 and of minor bleeding by 167 more people per 1,000. Apixaban had no effect on mortality, recurrence of blood clots in blood vessels, or major or minor bleeding; however, this finding comes from only one study. Nonhemorrhagic adverse events are less common than hemorrhagic adverse events but should still be monitored closely. Nonhemorrhagic adverse events of warfarin include skin necrosis, limb gangrene, and purple toe syndrome. Skin necrosis and limb gangrene are most commonly observed on the third to eighth day of therapy. Their exact pathogenesis is not completely understood, but it is believed to be associated with warfarin's inhibition of the production of protein C and protein S. Purple toe syndrome typically develops three to eight weeks after initiation of warfarin therapy. Other adverse effects of warfarin are associated with depletion of vitamin K, which can lead to inhibition of Gla proteins and growth arrest-specific gene 6 and, in turn, to an increased risk of arterial and heart-valve calcification, especially if too much vitamin D is present. Warfarin's interference with Gla proteins has also been linked to abnormalities in fetal bone development in mothers treated with warfarin during pregnancy. Long-term warfarin and heparin use have also been linked to osteoporosis. Another potentially severe complication associated with heparin use is heparin-induced thrombocytopenia (HIT). There are two distinct types of HIT: 1) immune-mediated and 2) non-immune-mediated. Immune-mediated HIT most commonly arises five to ten days after exposure to heparin. Its pathogenesis is believed to involve heparin-dependent immunoglobulin antibodies binding to platelet factor 4/heparin complexes on platelets, leading to widespread platelet activation. Interactions Foods and food supplements with blood-thinning effects include nattokinase, lumbrokinase, beer, bilberry, celery, cranberries, fish oil, garlic, ginger, ginkgo, ginseng, green tea, horse chestnut, licorice, niacin, onion, papaya, pomegranate, red clover, soybean, St. John's wort, turmeric, wheatgrass, and willow bark. Many herbal supplements, such as danshen and feverfew, also have blood-thinning properties. Multivitamins that do not interact with clotting are available for patients on anticoagulants. However, some foods and supplements encourage clotting; these include alfalfa, avocado, cat's claw, coenzyme Q10, and dark leafy greens such as spinach.
Excessive intake of the foods mentioned above should be avoided while taking anticoagulants; alternatively, if coagulability is being monitored, their intake should be kept approximately constant so that the anticoagulant dosage can be maintained at a level high enough to counteract the effect without fluctuations in coagulability. Grapefruit interferes with some anticoagulant drugs, increasing the time it takes for them to be metabolized out of the body, and should be eaten with caution when on anticoagulant drugs. Anticoagulants are often used to treat acute deep vein thrombosis. People using anticoagulants to treat this condition should avoid bed rest as a complementary treatment, because there are clinical benefits to continuing to walk and remaining mobile while using anticoagulants in this way. Bed rest while using anticoagulants can harm patients in circumstances in which it is not medically necessary. Types Several anticoagulants are available. Warfarin, other coumarins, and heparins have long been used. Since the 2000s, several agents have been introduced that are collectively referred to as direct oral anticoagulants (DOACs), previously called novel oral anticoagulants (NOACs) or non-vitamin K antagonist oral anticoagulants. These agents include the direct thrombin inhibitor dabigatran and the factor Xa inhibitors rivaroxaban, apixaban, betrixaban and edoxaban, and they have been shown to be as good as, or possibly better than, the coumarins, with less serious side effects. The newer anticoagulants (NOACs/DOACs) are more expensive than the traditional ones and should be used with care in patients with kidney problems. Coumarins (vitamin K antagonists) These oral anticoagulants are derived from coumarin, which is found in many plants. A prominent member of this class, warfarin (Coumadin), was found to be the anticoagulant most prescribed in a large multispecialty practice. The anticoagulant effect takes at least 48 to 72 hours to develop. Where an immediate effect is required, heparin is given concomitantly. These anticoagulants are used to treat patients with deep vein thrombosis (DVT) and pulmonary embolism (PE) and to prevent emboli in patients with atrial fibrillation (AF) and mechanical prosthetic heart valves. Other examples are acenocoumarol, phenprocoumon, atromentin, and phenindione. The coumarins brodifacoum and difenacoum are used as mammalicides (particularly as rodenticides) but are not used medically. Heparin and derivative substances Heparin is the most widely used intravenous clinical anticoagulant worldwide. Heparin is a naturally occurring glycosaminoglycan. There are three major categories of heparin: unfractionated heparin (UFH), low molecular weight heparin (LMWH), and ultra-low-molecular-weight heparin (ULMWH). Unfractionated heparin is usually derived from pig intestines and bovine lungs. UFH binds to the enzyme inhibitor antithrombin III (AT), causing a conformational change that results in its activation. The activated AT then inactivates factor Xa, thrombin, and other coagulation factors. Heparin can be used in vivo (by injection), and also in vitro to prevent blood or plasma clotting in or on medical devices. In venipuncture, Vacutainer brand blood-collecting tubes containing heparin usually have a green cap. Low molecular weight heparin (LMWH) Low molecular weight heparin (LMWH) is produced through controlled depolymerization of unfractionated heparin.
LMWH exhibits a higher anti-Xa/anti-IIa activity ratio and is useful because it does not require monitoring of the APTT coagulation parameter and has fewer side effects. Synthetic pentasaccharide inhibitors of factor Xa Fondaparinux is a synthetic sugar composed of the five sugars (a pentasaccharide) found in heparin that bind to antithrombin. It is a smaller molecule than low molecular weight heparin. Other members of this group are idraparinux and idrabiotaparinux. Direct oral The direct oral anticoagulants (DOACs) were introduced in and after 2008. There are five DOACs currently on the market: dabigatran, rivaroxaban, apixaban, edoxaban and betrixaban. They were also previously referred to as "new/novel" and "non-vitamin K antagonist" oral anticoagulants (NOACs). Compared to warfarin, DOACs have a rapid onset of action and relatively short half-lives; they therefore act rapidly, and their anticoagulant effect also recedes quickly once the drug is stopped. Routine monitoring and dose adjustment of DOACs are less important than for warfarin, because DOACs have more predictable anticoagulation activity. DOAC monitoring, including laboratory monitoring and a complete medication review, should generally be conducted before initiation of a DOAC, 1–3 months after initiation, and then every 6–12 months afterwards. Both DOACs and warfarin are equivalently effective, but compared to warfarin, DOACs have fewer drug interactions, no known dietary interactions, a wider therapeutic index, and conventional dosing that does not require dose adjustment with constant monitoring. However, there is no countermeasure for most DOACs, unlike for warfarin; nonetheless, the short half-lives of DOACs allow their effects to recede swiftly. A reversal agent for dabigatran, idarucizumab, is currently available and approved for use by the FDA. Rates of adherence to DOACs are only modestly higher than adherence to warfarin among patients prescribed these drugs, so adherence to anticoagulation is often poor despite hopes that DOACs would lead to higher adherence rates. DOACs are significantly more expensive than warfarin, although patients on DOACs may see reduced laboratory costs, as they do not need INR monitoring. Direct factor Xa inhibitors Drugs such as rivaroxaban, apixaban and edoxaban work by inhibiting factor Xa directly (unlike heparins and fondaparinux, which work via antithrombin activation). Also included in this category are betrixaban from Portola Pharmaceuticals, the discontinued darexaban (YM150) from Astellas, and, more recently, the discontinued letaxaban (TAK-442) from Takeda and eribaxaban (PD0348292) from Pfizer. Betrixaban is significant as it was, in 2018, the only oral factor Xa inhibitor approved by the FDA for use in acutely medically ill patients. Darexaban development was discontinued in September 2011: in a trial for the prevention of recurrences of myocardial infarction on top of dual antiplatelet therapy (DAPT), the drug did not demonstrate effectiveness, and the risk of bleeding was increased by approximately 300%. The development of letaxaban for acute coronary syndrome was discontinued in May 2011 following negative results from a Phase II study. Direct thrombin inhibitors Another type of anticoagulant is the direct thrombin inhibitor. Current members of this class include the bivalent drugs hirudin, lepirudin, and bivalirudin and the monovalent drugs argatroban and dabigatran.
An oral direct thrombin inhibitor, ximelagatran (Exanta), was denied approval by the Food and Drug Administration (FDA) in September 2004 and was pulled from the market entirely in February 2006 after reports of severe liver damage and heart attacks. In November 2010, dabigatran etexilate was approved by the FDA to prevent thrombosis in atrial fibrillation. Relevance to dental treatments As in any invasive procedure, patients on anticoagulation therapy have an increased risk of bleeding, and caution should be used, along with local hemostatic methods, to minimize bleeding risk during the operation as well as postoperatively. However, with regard to DOACs and invasive dental treatments, there is not yet enough clinical evidence and experience to establish any reliable adverse effects, relevance or interactions between the two. Further prospective clinical studies on DOACs are required to investigate the bleeding risk and hemostasis associated with surgical and dental procedures. Recommendations on modifying the usage or dosage of DOACs before dental treatments are made by balancing the bleeding risk of each procedure against the individual's own bleeding risk and renal function. For dental procedures with a low risk of bleeding, it is recommended that patients continue their DOAC to avoid any increase in the risk of a thromboembolic event. For dental procedures with a higher risk of bleeding complications (i.e. complex extractions, adjacent extractions leading to a large wound, or more than three extractions), the recommended practice is for the patient to miss or delay a dose of their DOAC before such procedures to minimize the effect on bleeding risk. Antithrombin protein therapeutics The antithrombin protein is used as a protein therapeutic that can be purified from human plasma or produced recombinantly (for example, Atryn, which is produced in the milk of genetically modified goats). The FDA has approved antithrombin as an anticoagulant for the prevention of clots before, during, or after surgery or childbirth in patients with hereditary antithrombin deficiency. Other Many other anticoagulants exist, in research and development, in diagnostics, or as drug candidates. Batroxobin, a toxin from snake venom, clots platelet-rich plasma without affecting platelet functions (it cleaves fibrinogen). Hementin is an anticoagulant protease from the salivary glands of the giant Amazon leech, Haementeria ghilianii. Vitamin E and alcoholic beverages also have blood-thinning effects. Reversal agents With the growing number of patients taking oral anticoagulation therapy, studies into reversal agents are gaining increasing interest, owing to major bleeding events and the need for urgent anticoagulant reversal therapy. Reversal agents for warfarin are more widely studied, and established guidelines for reversal exist because of warfarin's longer history of use and the ability to measure its anticoagulation effect accurately via the INR (International Normalized Ratio). In general, vitamin K is most commonly used to reverse the effect of warfarin in non-urgent settings. However, in urgent settings, or in settings with an extremely high INR (INR > 20), hemostatic reversal agents such as fresh frozen plasma (FFP), recombinant factor VIIa, and prothrombin complex concentrate (PCC) have been utilized with proven efficacy. Specifically for warfarin, four-factor PCC (4F-PCC) has been shown to have superior safety and mortality benefits compared to FFP in lowering INR levels.
Although specific antidotes and reversal agents for DOACs are not as widely studied, idarucizumab (for dabigatran) and andexanet alfa (for factor Xa inhibitors) have been used in clinical settings with varying efficacy. Idarucizumab is a monoclonal antibody, approved by the US FDA in 2015, that reverses the effect of dabigatran by binding both free and thrombin-bound dabigatran. Andexanet alfa is a recombinant, catalytically inactive, modified human factor Xa decoy that reverses the effect of factor Xa inhibitors by binding them at its active site. Andexanet alfa was approved by the US FDA in 2018. Another drug, ciraparantag, a potential reversal agent for direct factor Xa inhibitors, is still under investigation. Additionally, hemostatic reversal agents have also been used with varying efficacy to reverse the effects of DOACs. Coagulation inhibitor measurement A Bethesda unit (BU) is a measure of blood coagulation inhibitor activity. It is the amount of inhibitor that will inactivate half of a coagulant during the incubation period. It is the standard measure used in the United States, so named because it was adopted as a standard at a conference in Bethesda, Maryland. Laboratory use If blood is allowed to clot, laboratory instruments, blood transfusion bags, and medical and surgical equipment become clogged and non-operational. Test tubes used for laboratory blood tests therefore have chemicals added to stop the blood clotting. Apart from heparin, most of these chemicals work by binding calcium ions, preventing the coagulation proteins from using them. Ethylenediaminetetraacetic acid (EDTA) strongly and irreversibly chelates (binds) calcium ions, preventing blood from clotting. Citrate is in liquid form in the tube and is used for coagulation tests, as well as in blood transfusion bags. It binds calcium, but not as strongly as EDTA. The correct proportion of this anticoagulant to blood is crucial because of the dilution; its effect can be reversed with the addition of calcium. Formulations include plain sodium citrate, acid-citrate-dextrose, and others. Oxalate has a mechanism similar to that of citrate. It is the anticoagulant used in fluoride/oxalate tubes for determining glucose and lactate levels; the fluoride inhibits glycolysis in the sample, which would otherwise throw off the blood sugar measurement. Citrate/fluoride/EDTA tubes work better in this regard. Dental considerations for long-term users Dental practitioners play an important role in the early detection of anticoagulant overdose through oral manifestations, as the patient otherwise shows no symptoms. Dental treatment of patients taking anticoagulant or antiplatelet medication raises safety concerns regarding the potential risk of bleeding complications following invasive dental procedures, so certain guidelines for the dental care of patients taking these drugs are needed. Detecting overdose An overdose of anticoagulants usually occurs in people who have heart problems and need to take anticoagulants long-term to reduce the risk of stroke from their high blood pressure. An International Normalised Ratio (INR) test would be recommended to confirm the overdose so that the dosage can be adjusted to an acceptable standard. The INR test measures the time it takes for a clot to form in a blood sample, relative to a standard.
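Concretely, the INR standardizes the measured prothrombin time (PT) against a laboratory's mean normal value, using the international sensitivity index (ISI) of the thromboplastin reagent. The relationship below is the conventional definition; the subscript names are ours:

```latex
\mathrm{INR} = \left( \frac{\mathrm{PT}_{\text{patient}}}{\mathrm{PT}_{\text{normal}}} \right)^{\mathrm{ISI}}
```

Because a patient clotting time equal to the normal time gives a ratio of 1 regardless of the ISI, an average untreated patient comes out at an INR of about 1, which is the baseline described next.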
An INR value of 1 indicates a level of coagulation equivalent to that of an average patient not taking warfarin, and values greater than 1 indicate a longer clotting time and, thus, a longer bleeding time. Assessing bleeding risk There are two main parts to the assessment of bleeding risk:
Assessment of the likely risk of bleeding associated with the required dental procedure
Assessment of the patient's individual bleeding risk
Managing bleeding risk A patient who is on anticoagulant or antiplatelet medication may undergo dental treatments that are unlikely to cause bleeding, such as local anesthetic injection, basic gum charting, removal of plaque, calculus and stain above the gum level, direct or indirect fillings above the gingiva, root canal treatment, taking impressions for dentures or crowns, and fitting or adjustment of orthodontic appliances. For all these procedures, it is recommended that the dentist treat the patient following the normal standard procedure, taking care to avoid any bleeding. For a patient who needs to undergo dental treatments that are more likely to cause bleeding, such as simple tooth extractions (one to three teeth with a small wound size), drainage of an intraoral swelling, periodontal charting, root planing, direct or indirect fillings that extend below the gingiva, complex fillings, flap-raising procedures, gingival recontouring and biopsies, the dentist needs to take extra precautions in addition to the standard procedure. The recommendations are as follows:
if the patient has another medical condition or is taking other medication that may increase bleeding risk, consult the patient's general medical practitioner or specialist
if the patient is on a short course of anticoagulant or antiplatelet therapy, delay non-urgent, invasive procedures until the medication has been discontinued
plan treatment for early in the day and week, where possible, to allow time for the management of prolonged bleeding or re-bleeding if it occurs
perform the procedure as atraumatically as possible, use appropriate local measures, and only discharge the patient once hemostasis has been confirmed
if travel time to emergency care is a concern, place particular emphasis at the time of the initial treatment on measures to avoid complications
advise the patient to take paracetamol for pain relief, unless contraindicated, rather than NSAIDs such as aspirin, ibuprofen, diclofenac or naproxen
provide the patient with written post-treatment advice and emergency contact details
follow the specific recommendations and advice given for the management of patients taking different anticoagulant or antiplatelet drugs
There is general agreement that, in most cases, treatment regimens with older anticoagulants (e.g., warfarin) and antiplatelet agents (e.g., clopidogrel, ticlopidine, prasugrel, ticagrelor, and/or aspirin) should not be altered before dental procedures. The risks of stopping or reducing these medication regimens (i.e., thromboembolism, stroke, myocardial infarction) far outweigh the consequences of prolonged bleeding, which can be controlled with local measures. For patients with other existing medical conditions that can increase the risk of prolonged bleeding after dental treatment, or who are receiving other therapy that can increase bleeding risk, dental practitioners may wish to consult the patient's physician to determine whether care can safely be delivered in a primary care office.
Any suggested modification to the medication regimen before dental surgery should be made in consultation with, and on the advice of, the patient's physician. Based on limited evidence, the consensus appears to be that in most patients receiving the newer direct-acting oral anticoagulants (i.e., dabigatran, rivaroxaban, apixaban, or edoxaban) and undergoing dental treatment (in conjunction with the usual local measures to control bleeding), no change to the anticoagulant regimen is required. In patients deemed to be at higher risk of bleeding (e.g., patients with other medical conditions or undergoing more extensive procedures associated with higher bleeding risk), consideration may be given, in consultation with and on the advice of the patient's physician, to postponing the daily dose of the anticoagulant until after the procedure, timing the dental intervention as late as possible after the last dose of anticoagulant, or temporarily interrupting drug therapy for 24 to 48 hours. Research A substantial number of compounds are being investigated for use as anticoagulants. The most promising ones act on the contact activation system (factor XIIa and factor XIa); it is anticipated that this may provide agents that prevent thrombosis without conferring a risk of bleeding. The direct factor XIa inhibitor milvexian is in Phase II clinical trials for the prevention of embolism after surgery. See also CHADS2 score Direct factor Xa inhibitors Hypercoagulability in pregnancy References External links Staying Active and Healthy with Blood Thinners by the Agency for Healthcare Research and Quality Blood tests Medical signs
Anticoagulant
[ "Chemistry" ]
5,749
[ "Blood tests", "Chemical pathology" ]
178,220
https://en.wikipedia.org/wiki/ATP%20synthase
ATP synthase is an enzyme that catalyzes the formation of the energy storage molecule adenosine triphosphate (ATP) from adenosine diphosphate (ADP) and inorganic phosphate (Pi). ATP synthase is a molecular machine. The overall reaction catalyzed by ATP synthase is: ADP + Pi + 2H+(out) ⇌ ATP + H2O + 2H+(in). ATP synthase lies across a cellular membrane and forms an aperture through which protons can cross from areas of high concentration to areas of low concentration, imparting energy for the synthesis of ATP. This electrochemical gradient is generated by the electron transport chain and allows cells to store energy in ATP for later use. In prokaryotic cells ATP synthase lies across the plasma membrane, while in eukaryotic cells it lies across the inner mitochondrial membrane. Organisms capable of photosynthesis also have ATP synthase across the thylakoid membrane, which in plants is located in the chloroplast and in cyanobacteria is located in the cytoplasm. Eukaryotic ATP synthases are F-ATPases, which usually work as ATP synthases rather than as ATPases in cellular environments; run "in reverse", an F-ATPase acts as an ATPase, catalyzing the decomposition of ATP into ADP and a free phosphate ion. This article deals mainly with this type. An F-ATPase consists of two main subunits, FO and F1, which together form a rotational motor mechanism allowing for ATP production. Nomenclature The F1 fraction derives its name from the term "Fraction 1", and FO (written as a subscript letter "o", not "zero") derives its name from being the binding fraction for oligomycin, a type of naturally derived antibiotic that is able to inhibit the FO unit of ATP synthase. These functional regions consist of different protein subunits. This enzyme is used in the synthesis of ATP through aerobic respiration. Structure and function Located within the thylakoid membrane and the inner mitochondrial membrane, ATP synthase consists of two regions, FO and F1. FO causes the rotation of F1 and is made of a c-ring and subunits a, two b, and F6. F1 is made of α, β, γ, and δ subunits. F1 has a water-soluble part that can hydrolyze ATP; FO, on the other hand, has mainly hydrophobic regions. Together, FO and F1 create a pathway for proton movement across the membrane. F1 region The F1 portion of ATP synthase is hydrophilic and responsible for hydrolyzing ATP. The F1 unit protrudes into the mitochondrial matrix space. Subunits α and β make a hexamer with six binding sites. Three of the sites are catalytically inactive and bind ADP; the other three catalyze ATP synthesis. The remaining F1 subunits γ, δ, and ε are part of a rotational motor mechanism (rotor/axle). The γ subunit allows β to go through conformational changes (closed, half-open, and open states) that allow ATP to be bound and released once synthesized. The F1 particle is large and can be seen in the transmission electron microscope by negative staining; these are particles 9 nm in diameter that pepper the inner mitochondrial membrane. FO region FO is a water-insoluble protein with eight subunits and a transmembrane ring. The ring has a tetrameric shape, with a helix-loop-helix protein that goes through conformational changes when protonated and deprotonated, pushing neighbouring subunits to rotate and causing the spinning of FO, which in turn affects the conformation of F1, resulting in the switching of states of the α and β subunits. The FO region of ATP synthase is a proton pore embedded in the mitochondrial membrane. It consists of three main subunits, a, b, and c.
Six c subunits make up the rotor ring, and subunit b makes up a stalk connecting to the F1 OSCP that prevents the αβ hexamer from rotating. Subunit a connects b to the c-ring. Humans have six additional subunits: d, e, f, g, F6, and 8 (or A6L). This part of the enzyme is located in the mitochondrial inner membrane and couples proton translocation to the rotation that causes ATP synthesis in the F1 region. In eukaryotes, mitochondrial FO forms membrane-bending dimers. These dimers self-arrange into long rows at the ends of the cristae, possibly the first step of cristae formation. An atomic model of the dimeric yeast FO region was determined by cryo-EM at an overall resolution of 3.6 Å. Binding model In the 1960s through the 1970s, Paul Boyer, a UCLA professor, developed the binding change, or flip-flop, mechanism theory, which postulated that ATP synthesis is dependent on a conformational change in ATP synthase generated by rotation of the gamma subunit. The research group of John E. Walker, then at the MRC Laboratory of Molecular Biology in Cambridge, crystallized the F1 catalytic domain of ATP synthase. The structure, at the time the largest asymmetric protein structure known, indicated that Boyer's rotary-catalysis model was, in essence, correct. For elucidating this, Boyer and Walker shared half of the 1997 Nobel Prize in Chemistry. The crystal structure of F1 showed alternating alpha and beta subunits (three of each), arranged like the segments of an orange around a rotating asymmetrical gamma subunit. According to the current model of ATP synthesis (known as the alternating catalytic model), the transmembrane potential created by protons (H+) supplied by the electron transport chain drives the protons from the intermembrane space through the membrane via the FO region of ATP synthase. A portion of FO (the ring of c-subunits) rotates as the protons pass through the membrane. The c-ring is tightly attached to the asymmetric central stalk (consisting primarily of the gamma subunit), causing it to rotate within the α3β3 hexamer of F1 and taking the three catalytic nucleotide-binding sites through a series of conformational changes that lead to ATP synthesis. The major F1 subunits are prevented from rotating in sympathy with the central stalk rotor by a peripheral stalk that joins the α3β3 hexamer to the non-rotating portion of FO. The structure of the intact ATP synthase is currently known at low resolution from electron cryo-microscopy (cryo-EM) studies of the complex. The cryo-EM model of ATP synthase suggests that the peripheral stalk is a flexible structure that wraps around the complex as it joins F1 to FO. Under the right conditions, the enzyme reaction can also be carried out in reverse, with ATP hydrolysis driving proton pumping across the membrane. The binding change mechanism involves the active site of a β subunit cycling between three states. In the "loose" state, ADP and phosphate enter the active site. The enzyme then undergoes a change in shape and forces these molecules together, with the active site in the resulting "tight" state binding the newly produced ATP molecule with very high affinity. Finally, the active site cycles back to the "open" state, releasing ATP and binding more ADP and phosphate, ready for the next cycle of ATP production. Physiological role Like other enzymes, the activity of F1FO ATP synthase is reversible.
Large enough quantities of ATP cause the enzyme to create a transmembrane proton gradient; this is used by fermenting bacteria, which do not have an electron transport chain and instead hydrolyze ATP to make a proton gradient, which they use to drive flagella and the transport of nutrients into the cell. In respiring bacteria under physiological conditions, ATP synthase generally runs in the opposite direction, creating ATP while using the proton motive force created by the electron transport chain as a source of energy. The overall process of creating energy in this fashion is termed oxidative phosphorylation. The same process takes place in the mitochondria, where ATP synthase is located in the inner mitochondrial membrane and the F1 part projects into the mitochondrial matrix. As protons flow through it into the matrix, the ATP synthase converts ADP into ATP. Evolution The evolution of ATP synthase is thought to have been modular, whereby two functionally independent subunits became associated and gained new functionality. This association appears to have occurred early in evolutionary history, because essentially the same structure and activity of ATP synthase enzymes are present in all kingdoms of life. The F-ATP synthase displays high functional and mechanistic similarity to the V-ATPase. However, whereas the F-ATP synthase generates ATP by utilising a proton gradient, the V-ATPase generates a proton gradient at the expense of ATP, generating pH values as low as 1. The F1 region also shows significant similarity to hexameric DNA helicases (especially the Rho factor), and the entire enzyme region shows some similarity to ion-powered T3SS or flagellar motor complexes. The α3β3 hexamer of the F1 region shows significant structural similarity to hexameric DNA helicases; both form a ring with three-fold rotational symmetry and a central pore. Both have roles dependent on the relative rotation of a macromolecule within the pore: the DNA helicases use the helical shape of DNA to drive their motion along the DNA molecule and to detect supercoiling, whereas the α3β3 hexamer uses the conformational changes driven by the rotation of the γ subunit to drive an enzymatic reaction. The motor of the FO particle shows great functional similarity to the motors that drive flagella. Both feature a ring of many small alpha-helical proteins that rotate relative to nearby stationary proteins, using a potential gradient as an energy source. This link is tenuous, however, as the overall structure of flagellar motors is far more complex than that of the FO particle, and the ring of about 30 rotating proteins is far larger than the 10, 11, or 14 helical proteins in the FO complex. The modular evolution theory for the origin of ATP synthase suggests that two subunits with independent function, a DNA helicase with ATPase activity and an ion-powered motor, were able to bind, with the rotation of the motor driving the ATPase activity of the helicase in reverse. This complex then evolved greater efficiency and eventually developed into today's intricate ATP synthases. Alternatively, the DNA helicase/motor complex may have had pump activity, with the ATPase activity of the helicase driving the motor in reverse. This may have evolved to carry out the reverse reaction and act as an ATP synthase.
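The c-ring sizes just mentioned (10, 11, or 14 subunits, depending on the organism) have a direct energetic consequence. Under the commonly cited model in which one proton crosses per c subunit per turn and one full rotation yields three ATP (one per catalytic β site), the proton cost per ATP follows from simple arithmetic. The sketch below assumes that model and is illustrative only:

```python
# H+/ATP cost under the one-proton-per-c-subunit, three-ATP-per-rotation model.
ATP_PER_ROTATION = 3  # one ATP per catalytic beta site per full turn

for c_subunits in (10, 11, 14):  # c-ring sizes cited in the text above
    protons_per_atp = c_subunits / ATP_PER_ROTATION
    print(f"c-ring of {c_subunits}: ~{protons_per_atp:.2f} H+ per ATP")
# Larger rings consume more protons per ATP, allowing synthesis to proceed
# at a lower proton motive force.
```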
Inhibitors A variety of natural and synthetic inhibitors of ATP synthase have been discovered. These have been used to probe the structure and mechanism of ATP synthase, and some may be of therapeutic use. There are several classes of ATP synthase inhibitors, including peptide inhibitors, polyphenolic phytochemicals, polyketides, organotin compounds, polyenic α-pyrone derivatives, cationic inhibitors, substrate analogs, amino acid modifiers, and other miscellaneous chemicals. Some of the most commonly used ATP synthase inhibitors are oligomycin and DCCD. In different organisms Bacteria The Escherichia coli ATP synthase is the simplest known form of ATP synthase, with eight different subunit types. Bacterial F-ATPases can occasionally operate in reverse, turning them into ATPases. Some bacteria have no F-ATPase, using an A/V-type ATPase bidirectionally. Yeast Yeast ATP synthase is one of the best-studied eukaryotic ATP synthases; five F1 subunits, eight FO subunits, and seven associated proteins have been identified. Most of these proteins have homologues in other eukaryotes. Plant In plants, ATP synthase is also present in chloroplasts (CF1FO-ATP synthase). The enzyme is integrated into the thylakoid membrane, with the CF1 part sticking into the stroma, where the dark reactions of photosynthesis (also called the light-independent reactions or the Calvin cycle) and ATP synthesis take place. The overall structure and catalytic mechanism of the chloroplast ATP synthase are almost the same as those of the bacterial enzyme. However, in chloroplasts the proton motive force is generated not by a respiratory electron transport chain but by primary photosynthetic proteins. The synthase has a 40-amino-acid insert in the gamma subunit to inhibit wasteful activity in the dark. Mammal The ATP synthase isolated from bovine (Bos taurus) heart mitochondria is, in terms of biochemistry and structure, the best-characterized ATP synthase. Beef heart is used as a source for the enzyme because of the high concentration of mitochondria in cardiac muscle. Its genes have close homology to those of human ATP synthases. Human genes that encode components of ATP synthases include ATP5A1, ATP5B, ATP5C1, ATP5D, ATP5E, ATP5F1, ATP5MC1, ATP5G2, ATP5G3, ATP5H, ATP5I, ATP5J, ATP5J2, ATP5L, ATP5O, MT-ATP6 and MT-ATP8. Other eukaryotes Eukaryotes belonging to some divergent lineages have very special organizations of the ATP synthase. The euglenozoan ATP synthase forms a dimer with a boomerang-shaped F1 head like other mitochondrial ATP synthases, but the FO subcomplex has many unique subunits. It uses cardiolipin. The inhibitory IF1 also binds differently, in a way shared with trypanosomatids. Archaea Archaea do not generally have an F-ATPase. Instead, they synthesize ATP using the A-ATPase/synthase, a rotary machine structurally similar to the V-ATPase but mainly functioning as an ATP synthase. Like the bacterial F-ATPase, it is believed to also function as an ATPase. LUCA and earlier F-ATPase gene linkage and gene order are widely conserved across ancient prokaryote lineages, implying that this system already existed before the last universal common ancestor (LUCA). See also ATP10, a protein required for the assembly of the FO sector of the mitochondrial ATPase complex Chloroplast Electron transfer chain Flavoprotein Mitochondrion Oxidative phosphorylation P-ATPase Proton pump Rotating locomotion in living systems Transmembrane ATPase V-ATPase References Further reading Nick Lane: The Vital Question: Energy, Evolution, and the Origins of Complex Life, W. W. Norton, 2015 (the link points to Figure 10, showing a model of ATP synthase). External links Boris A.
Feniouk: "ATP synthase — a splendid molecular machine" Well illustrated ATP synthase lecture by Antony Crofts of the University of Illinois at Urbana–Champaign. Proton and Sodium translocating F-type, V-type and A-type ATPases in OPM database The Nobel Prize in Chemistry 1997 to Paul D. Boyer and John E. Walker for the enzymatic mechanism of synthesis of ATP; and to Jens C. Skou, for discovery of an ion-transporting enzyme, , -ATPase. Harvard Multimedia Production Site — Videos – ATP synthesis animation David Goodsell: "ATP Synthase- Molecule of the Month" Enzymes Cellular respiration Photosynthesis EC 3.6.3 Integral membrane proteins Protein complexes
ATP synthase
[ "Chemistry", "Biology" ]
3,332
[ "Biochemistry", "Cellular respiration", "Metabolism", "Photosynthesis" ]
178,238
https://en.wikipedia.org/wiki/First%20light%20%28astronomy%29
In astronomy, first light is the first use of a telescope (or, in general, a new instrument) to take an astronomical image after it has been constructed. This is often not the first viewing using the telescope; optical tests will probably have been performed to adjust the components. Characteristics The first light image is normally of little scientific interest and is of poor quality, since the various telescope elements are yet to be adjusted for optimum efficiency. Despite this, a first light is always a moment of great excitement, both for the people who design and build the telescope and for the astronomical community, who may have anticipated the moment for many years while the telescope was under construction. A well-known and spectacular astronomical object is usually chosen as a subject. Historical examples The famous Hale Telescope of Palomar Observatory saw first light on 26 January 1949, targeting NGC 2261 under the direction of American astronomer Edwin Powell Hubble. The image was published in many magazines and is available in the Caltech Archives. The Isaac Newton Telescope had two first lights: one in England in 1965 with its original mirror, and another in 1984 on the island of La Palma. The second first light was recorded with a video camera that showed the Crab Pulsar flashing. Elation at the first light images from the Hubble Space Telescope in 1990 soon gave way to disappointment when a flaw in the optics prevented proper operation; the expected first-light image quality was finally achieved after a 1993 servicing mission by Space Shuttle Endeavour. The Large Binocular Telescope had its first light with a single primary mirror on 12 October 2005, with a view of NGC 891. The second primary mirror was installed in January 2006 and became fully operational in January 2008. The Gran Telescopio Canarias took a first light image of Tycho 1205081 on 14 July 2007. The IRIS solar space observatory achieved first light on 17 July 2013; its principal investigator noted: "The quality of images and spectra we are receiving from IRIS is amazing. This is just what we were hoping for ..." On 4 February 2022, the first light viewed by the James Webb Space Telescope (JWST) was from the star HD 84406, for the purpose of testing and aligning the telescope's 18 mirror segments. On 11 February 2022, The New York Times reported that "first light" images from the James Webb Space Telescope had been released, alongside a related three-minute NASA alignment video. On 6 July 2022, NASA released a test image from the JWST's Fine Guidance Sensor. NASA released the first official JWST image on 11 July 2022, and the first collection of five JWST science images followed in an official ceremony broadcast live on NASA TV on Tuesday, 12 July 2022. References Telescopes Astronomical imaging
First light (astronomy)
[ "Astronomy" ]
588
[ "Telescopes", "Astronomical instruments" ]
178,247
https://en.wikipedia.org/wiki/UN/CEFACT
UN/CEFACT is the United Nations Centre for Trade Facilitation and Electronic Business. It was established as an intergovernmental body of the United Nations Economic Commission for Europe (UNECE) in 1996 and evolved from UNECE's long tradition of work in trade facilitation, which began in 1957. UN/CEFACT's goal is "Simple, Transparent and Effective Processes for Global Commerce." It aims to help business, trade and administrative organizations from developed, developing and transition economies to exchange products and services effectively. To this end, it focuses on simplifying national and international transactions by harmonizing processes, procedures and information flows related to these transactions, rendering them more efficient and streamlined, with the ultimate goal of contributing to the growth of global commerce. Trade facilitation and electronic business UN/CEFACT focuses on two main areas of activity to make international trade processes more efficient and streamlined, namely trade facilitation and electronic business. Trade facilitation involves the simplification of trade procedures (or the elimination of unnecessary procedures). This includes work to standardize and harmonize the core information used in trade documents, to ease the flow of information between parties by relying on appropriate information and communication technology, and to promote simplified payment systems to foster transparency, accountability and cost-effectiveness. Electronic business, in the UN/CEFACT context, focuses on harmonizing, standardizing and automating the exchange of information that controls the flow of goods along the international supply chain. UN/CEFACT's work on electronic business is driven by the understanding that goods cannot move faster than the processes and information that accompany them. UN/CEFACT takes a total trade transaction approach to trade facilitation and associated electronic business, covering all the processes from initial placement of orders through to final payments. This is best illustrated by the UN/CEFACT Buy-Ship-Pay model of the international supply chain, which forms the basis of its work. Deliverables UN/CEFACT has produced over 30 trade facilitation recommendations and a range of electronic business standards (collectively referred to as "instruments") which are used throughout the world by both governments and the private sector. They reflect best practices in trade procedures and data and documentary requirements. The International Organization for Standardization (ISO) has adopted many of them as international standards. UN/CEFACT Recommendations and Standards are implemented throughout the world. Some of the more well-known instruments are described below. Recommendation 1: United Nations Layout Key (UNLK) for Trade Documents. This provides an international basis for the standardized layout of documents used in international trade. The Layout Key facilitates the exchange of information between the various parties involved in a commercial transaction and is used as the basis for many key trade documents such as the European Union's Single Administrative Document (SAD); FIATA's Freight Forwarding Instruction; UNECE's Dangerous Goods Declaration; and the World Customs Organization (WCO) Goods Declaration for Export. Use of the UN Layout Key is specifically recommended in the WCO's Revised Kyoto Convention. The Layout Key is so widely used in international trade that many expect normal trade documents to conform to the Recommendation.
Recommendation 16: UN/LOCODE, the Code for Trade and Transport Locations, provides an alphabetic code for seaports, airports, inland freight terminals and other customs clearance sites. Recommended by the International Maritime Organization (IMO), it is used by most major shipping companies and also by the Universal Postal Union (UPU). UN/LOCODE's website is regularly visited, accounting for 6% of all "hits" to the UNECE website. The first issue of UN/LOCODE in 1981 provided codes to represent the names of some 8,000 locations in the world. Today UN/LOCODE, which is updated twice annually, contains almost 100,000 entries, with strong demand for further updates and extension. Every recognised national and international airport or maritime port has a UN/LOCODE (a small parsing sketch follows at the end of this section). Recommendation 25 and the UN/EDIFACT Standard represent a set of internationally agreed standards, directories, and guidelines for the electronic interchange of structured data between independent computerized information systems. UN/EDIFACT is the international standard for electronic data interchange (EDI) and is used throughout the commercial and administrative world, accounting for over 90% of all EDI messages exchanged globally. UN/EDIFACT messages are used by almost all national customs administrations, all major seaports and a large range of companies (including over 100,000 in the retail sector), and throughout international supply chains. For example, more than 7 million EDIFACT messages are exchanged each year in the French agricultural supply chain. Recommendation 33 on Single Windows proposes that governments establish a Single Window facility that allows parties involved in trade and transport to lodge standardized information and documents with a single entry point to fulfil all import, export and transit-related regulatory requirements. A suite of complementary Single Window Recommendations has also been developed, namely Recommendation 34 on Data Simplification and Standardization and Recommendation 35 on Establishing a Legal Framework for Single Windows. A family of Supply Chain "Cross-Industry" messages is exchanged globally between trading partners, covering the majority of business-to-business (B2B) electronic exchanges from order to payment. One of the key documents within this family is the Cross Industry Invoice (CII), which functions primarily as a request for payment and is used as a key document for Value Added Tax (VAT) declaration and reclamation, for statistics declarations, and to support export and import declarations in international trade. The eDAPLOS message describes the crop data sheet exchanged between farmers and their partners. This message has allowed users to harmonize the definitions of technical data, develop consensual data dictionaries which can be used as a basis for all the steps of traceability, and create a standardized Crop Data Sheet message. DAPLOS, which is based on the UN/CEFACT Core Component Library, has been adopted by 25,000 farmers and regional agriculture chambers in France and Belgium. CITES (the Convention on International Trade in Endangered Species of Wild Fauna and Flora) has developed a version of its declaration using the Core Component Library of UN/CEFACT and has generated an XML message according to UN/CEFACT specifications. CITES is an international agreement between governments whose aim is to ensure that international trade in specimens of wild animals and plants does not threaten their survival. CITES declarations are used in customs clearance procedures in countries around the globe.
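As a concrete illustration of the UN/LOCODE format described above: each code concatenates a two-letter ISO 3166-1 country code with a three-character place code. The sketch below validates this shape and splits a code into its parts. The example codes are well-known published entries (e.g. NLRTM for Rotterdam), but the helper names are ours, and the check is deliberately minimal; treating digits 2–9 as permitted in the place code is an assumption based on the standard's practice of avoiding 0 and 1, not a full implementation of the rules:

```python
import re

# Minimal shape check for a UN/LOCODE: two-letter ISO 3166-1 country code
# followed by a three-character place code (letters, plus digits 2-9 as an
# assumption -- 0 and 1 are avoided because they resemble O and I).
LOCODE_RE = re.compile(r"^([A-Z]{2})([A-Z2-9]{3})$")

def split_locode(code: str) -> tuple[str, str]:
    """Return (country, place) for a syntactically valid UN/LOCODE."""
    m = LOCODE_RE.match(code.replace(" ", "").upper())
    if not m:
        raise ValueError(f"not a syntactically valid UN/LOCODE: {code!r}")
    return m.group(1), m.group(2)

# Well-known examples: Rotterdam, Hamburg, Singapore, New York.
for code in ("NL RTM", "DEHAM", "SGSIN", "USNYC"):
    print(code, "->", split_locode(code))
```

Note that a shape check of this kind only says a string looks like a UN/LOCODE; whether it denotes a real location still requires the published code list.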
Benefits Governments, traders and consumers all benefit from having simpler and more cost-effective trade procedures. UN/CEFACT's work is particularly relevant for developing countries, where the elimination of trade-related inefficiencies is many times more beneficial than the reduction or removal of tariff barriers to trade. In addition, UN/CEFACT's work is of particular benefit to landlocked countries and countries distant from major markets. These countries experience challenges and constraints resulting from complex and inefficient procedures, including many additional costs in bringing goods to market. The gains from this work are considerable for small and medium-sized enterprises, for which the costs of compliance with various trade-related procedures are proportionally higher. This is particularly true for low-value shipments, where the cost of administrative procedures represents a large proportion of total cost. Structure In view of the global character of its work on trade facilitation and electronic business standards, UN/CEFACT is open to representatives of all UN member states and all organizations recognized by the United Nations Economic and Social Council (ECOSOC). The participation of developing countries and countries in transition in standards development processes is strongly encouraged. The highest authority regarding all aspects of UN/CEFACT's work is the intergovernmental Plenary, which meets once a year in Geneva. Delegations to the Plenary include UN member states, intergovernmental organizations and non-governmental organizations (NGOs) recognized by ECOSOC. Only UN member states have the right to vote at Plenary meetings; all other members are observers. The Plenary elects the UN/CEFACT Bureau, which consists of a chair and at least four vice-chairs. The Bureau acts in the name of the Plenary between sessions and meets physically an average of four times per year, and more frequently through virtual conference calls. Each UN member state may nominate a permanent Head of Delegation (HOD) to UN/CEFACT. HODs participate in the Plenary, are consulted by the Bureau on key strategic issues, and are responsible for nominating experts to participate in UN/CEFACT activities. The technical work to develop UN/CEFACT standards and recommendations is done by over 200 volunteer experts from around the world, nominated by HODs. The experts come from both the public and the private sectors, forming a public-private partnership in support of trade facilitation and electronic business. They participate as independent volunteer experts in their own right, without representing any special interests of their countries or institutions. The experts are representatives from:
Governments
Private companies
Intergovernmental organizations
Industry associations (representing large collective groups of private-sector companies)
Academia
UN/CEFACT experts meet physically in UN/CEFACT Forums, held twice a year (one in Geneva and one hosted by a UN member state) to coordinate their work. The vast majority of the work is done virtually between Forums. UNECE provides secretarial support to UN/CEFACT so that the Centre may implement its programme of work.
Annually, the UN/CEFACT Bureau (acting on behalf of the Plenary) and the UNECE Secretariat plan for the implementation of UN/CEFACT's programme of work, taking into account the resources available from both the United Nations and external sources. The Secretariat takes responsibility for supporting the work of UN/CEFACT, including supporting the maintenance of Recommendations and standards, and handles the administrative functions related to UN/CEFACT, including UN oversight, organizing meetings and preparing official documents and papers. The technical work of standards development is undertaken by the UN/CEFACT experts. Some important users of UN/CEFACT norms, standards and recommendations:
Governments of Australia, Canada, France and New Zealand: for agricultural trade certificates
Government of Canada: for its air cargo security initiative
Government of France: for the transfer of digital records and public procurement (all under current development)
Government of India: for its ports management system
Government of the Republic of Korea: for public procurement services
Government of the United States of America: for defence contract compliance reporting and for material safety data sheets (the latter being used in transport of dangerous goods)
Governments of Central Asian and South East European (SEE) countries: for Single Window capacity-building and implementation
Asia Pacific Economic Cooperation, the Association of Southeast Asian Nations and the Government of India: for development of electronic trade documents
Southeast European countries: for work on customs corridors
World Customs Organization and numerous national customs administrations, S.W.I.F.T. and ISO 20022 (finance): for data formats and definitions
World Trade Organization negotiators on trade facilitation: as examples of relevant standards and recommendations, particularly for aligned documents and Single Windows
Global Standards 1 (GS1): for all trade messages (with approximately 110,000 companies having implemented these)
International insurance industry: for coverage and claims
European gas and electricity industries: for information exchange
Japanese construction industry: for e-tendering
Japanese tourism industry: for rental of small-scale lodging houses
European transport companies: for the exchange of short sea-transport information
Numerous governments in both developed and developing countries: for aligned documents
Most shipping companies, container operators, terminal operators and others in the maritime shipping industry: for almost all maritime shipping messages, through EDIFACT
The UN/CEFACT electronic data forms The following 94 forms had been developed by UN/CEFACT as of May 2014 (version D13B).
In September 2017, this was increased to 123 forms (XML Schemas) AAA Accounting Entry Message AAA Accounting Journal List Message AAA Accounting Message AAA Bundle Collection Message AAA Chart Of Accounts Message AAA Ledger Message AAA Reporting Message AAA Trial Balance Message Acknowledgement Agronomical Observation Report Animal Inspection Message Contract Summary Data Cost Data Cost Schedule Crop Data Sheet Message Cross Border Livestock Cross Industry Catalogue Cross Industry Demand Forecast Response Cross Industry Demand Forecast Cross Industry Despatch Advice Cross Industry Inventory Forecast Cross Industry Invoice Cross Industry Order Change Cross Industry Order Response Cross Industry Order Cross Industry Quotation Proposal Response Cross Industry Quotation Proposal Cross Industry Remittance Advice Cross Industry Request For Quotation Response Cross Industry Request For Quotation Cross Industry Supply Instruction Cross Industry Supply Notification Data Specification Profile Data Specification Query Data Specification Request Data Specification Electronic Animal Passport Message Electronic Data Exchange Proxy Examination Result Notification FLUX ACDR Message FLUX Response Message FLUX Vessel Position Message Funding Data Invitation To Tender Laboratory Observation Report Letter Of Invitation To Tender Lodging House Information Request Lodging House Information Response Lodging House Reservation Request Lodging House Reservation Response Lodging House Travel Product Information Material Safety Data Sheet MSI Request MSI Network Schedule Prequalification Application Project Artefact Profile Project Artefact Query Project Artefact Request Project Artefact Qualification Application Qualification Result Notice RASFF Notification Message Reception Of Prequalification Application Reception Of Qualification Application Reception Of Registration Application Reception Of Request For Tender Information Reception Of Response Of Tender Guarantee Reception Of Tender Guarantee Reception Of Tender Registration Application Reporting Calendar Data Report Structure Request For Tender Information Resourcing Data Response Of Tender Guarantee Schedule Calendar Data SPSAcknowledgement SPSCertificate Tender Guarantee Tender Information Tender Result Notice Tender Thresholds TMW Cancellation Message TMW Certificate Of Waste Receipt Message TMW Certificate Of Waste Recovery Disposal Message TMW Confirmation Of Message Receipt Message TMW Movement Announcement Message TMW Notification Acknowledgement Message TMW Notification Decision Message TMW Notification Submission Message TMW Request For Further Notification Information Message TMW Transport Statement Message See also UN/CEFACT's Modeling Methodology Core Component Technical Specification ebXML OASIS (organization) XBRL GL (General Ledger) SIE (file format) UNeDocs References External links UN/CEFACT UN/CEFACT files, documentation and code Recent specifications such as CCTS "From EDI to UN/CEFACT - An Evolutionary Path towards a Next Generation e-Business Stack" Information about the potential of UN/CEFACT SAP Developer Network: How to solve the business standards dilemma EDIFICAS SIE XBRL OASIS Alphabet AB program demo of the 77 electronic documents, editing, saving and reading files International trade organizations United Nations Economic Commission for Europe E-commerce Computer file formats
UN/CEFACT
[ "Technology" ]
3,022
[ "Information technology", "E-commerce" ]
178,258
https://en.wikipedia.org/wiki/UN%20number
A UN number (United Nations number) is a four-digit number that identifies hazardous materials and articles (such as explosives, flammable liquids, oxidizers, toxic liquids, etc.) in the framework of international trade and transport. Some hazardous substances have their own UN numbers (e.g. acrylamide has UN 2074), while sometimes groups of chemicals or products with similar properties receive a common UN number (e.g. flammable liquids, not otherwise specified, have UN 1993). A chemical in its solid state may receive a different UN number than the liquid phase if its hazardous properties differ significantly; substances with different levels of purity (or concentration in solution) may also receive different UN numbers. Hazard identifiers Associated with each UN number is a hazard identifier, which encodes the general hazard class and subdivision (and, in the case of explosives, their compatibility group). If a substance poses several dangers, then subsidiary risk identifiers may be specified. It is not possible to deduce the hazard class(es) of a substance from its UN number: they have to be looked up in a table. Range UN numbers range from UN 0004 to about UN 3550 (UN 0001 – UN 0003 are no longer in use) and are assigned by the United Nations Committee of Experts on the Transport of Dangerous Goods. They are published as part of their Recommendations on the Transport of Dangerous Goods, also known as the Orange Book. These recommendations are adopted by the regulatory organization responsible for the different modes of transport. There is no UN number allocated to non-hazardous substances. Non-UN identifiers An NA number (North America number) is issued by the United States Department of Transportation and is identical to UN numbers, except that some substances without a UN number may have an NA number. These additional NA numbers use the range NA 9000 - NA 9279. There are some exceptions, for example, NA 2212 is all asbestos with UN 2212 limited to asbestos, amphibole amosite, tremolite, actinolite, anthophyllite, or crocidolite. Another exception, NA 3334, is self-defense spray, non-pressurized, while UN 3334 is aviation-regulated liquid, not otherwise specified. For the complete list, see NA/UN exceptions. An ID number is a third type of identification number used for hazardous substances being offered for air transport. Substances with an ID number are associated with proper shipping names recognized by the ICAO Technical Instructions. ID 8000, Consumer commodity, does not have a UN or NA number, and is classed as a Class 9 hazardous material. See also Globally Harmonized System of Classification and Labelling of Chemicals Lists of UN numbers Dangerous goods CAS registry number List of NA numbers References External links United Nations Committee of Experts on the Transport of Dangerous Goods UN Recommendations on the Transport of Dangerous Goods. Part 2 defines the hazard classes and their divisions and Part 3 contains a complete list of all UN numbers and their hazard identifiers. The Emergency Response Guidebook from the U.S. Department of Transportation contains a list of all assigned NA numbers along with recommended emergency procedures. Chemical numbering schemes
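Because the hazard class cannot be derived from the number itself, software handling dangerous-goods data typically resolves UN numbers against a table. The following is a minimal Python sketch of such a lookup; the table entries are illustrative samples drawn from the examples in the text, and a real application would load the complete table from the Orange Book data rather than hard-coding it:

```python
# Minimal sketch of a UN-number lookup table. The entries below are
# illustrative only; a real application would load the full table from the
# UN "Orange Book" or the relevant national regulation.
UN_TABLE = {
    "UN 2074": {"name": "Acrylamide, solid", "hazard_class": "6.1"},
    "UN 1993": {"name": "Flammable liquid, n.o.s.", "hazard_class": "3"},
    "UN 3334": {"name": "Aviation regulated liquid, n.o.s.", "hazard_class": "9"},
}

def lookup(un_number: str) -> str:
    """Return a description for a UN number, or note that it is unknown."""
    entry = UN_TABLE.get(un_number)
    if entry is None:
        # The hazard class cannot be deduced from the number alone.
        return f"{un_number}: not in table"
    return f"{un_number}: {entry['name']} (class {entry['hazard_class']})"

print(lookup("UN 1993"))
print(lookup("UN 0002"))  # no longer in use, so not in the table
```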
UN number
[ "Chemistry", "Mathematics", "Technology" ]
652
[ "Mathematical objects", "Lists of UN numbers", "Chemical numbering schemes", "Numbers" ]
178,282
https://en.wikipedia.org/wiki/Geostationary%20transfer%20orbit
In space mission design, a geostationary transfer orbit (GTO) or geosynchronous transfer orbit is a highly elliptical type of geocentric orbit, usually with a perigee as low as low Earth orbit (LEO) and an apogee as high as geostationary orbit (GEO). Satellites that are destined for geosynchronous orbit (GSO) or GEO are often put into a GTO as an intermediate step for reaching their final orbit. Manufacturers of launch vehicles often advertise the amount of payload the vehicle can put into GTO. Background Geostationary and geosynchronous orbits are very desirable for many communication and Earth observation satellites. However, the delta-v cost, and therefore the financial cost, of sending a spacecraft to such orbits is very high due to their high orbital radius. A GTO is an intermediary orbit used to make this process more efficient. Satellite operators often use a high-thrust, low-efficiency launch vehicle to put their satellite into GTO, and then, after detaching the launch vehicle, use low-thrust, high-efficiency thrusters onboard the satellite itself to circularize its orbit (to GEO) over a longer period of time. This process is called spiral-out. This mission architecture is useful because it minimizes the mass that the spacecraft must push to GEO, allows for maximally efficient circularization burns taking advantage of the Oberth effect, and allows the spent launch vehicle to deorbit primarily through aerobraking due to its low perigee, minimizing its orbital lifetime. Technical description GTO is a highly elliptical Earth orbit with an apogee (the point in the orbit at which the satellite is furthest from the Earth) of 42,164 km measured from the Earth's centre, or a height of 35,786 km above sea level, which corresponds to the geostationary altitude. The period of a standard geosynchronous transfer orbit is about 10.5 hours. The argument of perigee is such that apogee occurs on or near the equator. Perigee can be anywhere above the atmosphere, but is usually restricted to a few hundred kilometers above the Earth's surface to reduce launcher delta-v (Δv) requirements and to limit the orbital lifetime of the spent booster so as to curtail space junk. If using low-thrust engines such as electrical propulsion to get from the transfer orbit to geostationary orbit, the transfer orbit can be supersynchronous (having an apogee above the final geosynchronous orbit). However, this method takes much longer because of the low thrust levels involved. A typical launch vehicle injects the satellite into a supersynchronous orbit having an apogee above 42,164 km. The satellite's low-thrust engines then fire continuously around the geostationary transfer orbit. The thrust direction and magnitude are usually chosen to optimize the transfer time and propellant consumption while satisfying the mission constraints. The out-of-plane component of thrust is used to reduce the initial inclination set by the initial transfer orbit, while the in-plane component simultaneously raises the perigee and lowers the apogee of the intermediate geostationary transfer orbit. When using a Hohmann transfer orbit, only a few days are required to reach the geosynchronous orbit; when using low-thrust engines or electrical propulsion, months are required until the satellite reaches its final orbit. The orbital inclination of a GTO is the angle between the orbit plane and the Earth's equatorial plane. It is determined by the latitude of the launch site and the launch azimuth (direction). 
The inclination and eccentricity must both be reduced to zero to obtain a geostationary orbit. If only the eccentricity of the orbit is reduced to zero, the result may be a geosynchronous orbit but will not be geostationary. Because the Δv required for a plane change is proportional to the instantaneous velocity, the inclination and eccentricity are usually changed together in a single maneuver at apogee, where velocity is lowest. The Δv required for an inclination change at either the ascending or descending node of the orbit is calculated as follows: Δv = 2v sin(Δi/2), where v is the orbital velocity at the node and Δi is the required change in inclination. For a typical GTO with a semi-major axis of 24,582 km, perigee velocity is 9.88 km/s and apogee velocity is 1.64 km/s, clearly making the inclination change far less costly at apogee. In practice, the inclination change is combined with the orbital circularization (or "apogee kick") burn to reduce the total Δv for the two maneuvers. The combined Δv is the vector sum of the inclination-change Δv and the circularization Δv, and as the sum of the lengths of two sides of a triangle will always exceed the remaining side's length, the total Δv in a combined maneuver will always be less than in two separate maneuvers. The combined Δv can be calculated as follows: Δv = √(v_t,a² + v_GEO² − 2 v_t,a v_GEO cos Δi), where v_t,a is the velocity magnitude at the apogee of the transfer orbit and v_GEO is the velocity in GEO. Other considerations Even at apogee, the fuel needed to reduce inclination to zero can be significant, giving equatorial launch sites a substantial advantage over those at higher latitudes. Russia's Baikonur Cosmodrome in Kazakhstan is at 46° north latitude. Kennedy Space Center in the United States is at 28.5° north. China's Wenchang is at 19.5° north. India's SDSC is at 13.7° north. Guiana Space Centre, the European Ariane and European-operated Russian Soyuz launch facility, is at 5° north. The "indefinitely suspended" Sea Launch launched from a floating platform directly on the equator in the Pacific Ocean. Expendable launchers generally reach GTO directly, but a spacecraft already in a low Earth orbit (LEO) can enter GTO by firing a rocket along its orbital direction to increase its velocity. This was done when geostationary spacecraft were launched from the Space Shuttle; a "perigee kick motor" attached to the spacecraft ignited after the shuttle had released it and withdrawn to a safe distance. Although some launchers can take their payloads all the way to geostationary orbit, most end their missions by releasing their payloads into GTO. The spacecraft and its operator are then responsible for the maneuver into the final geostationary orbit. The 5-hour coast to first apogee can be longer than the battery lifetime of the launcher or spacecraft, and the maneuver is sometimes performed at a later apogee or split among multiple apogees. The solar power available on the spacecraft supports the mission after launcher separation. Also, many launchers now carry several satellites in each launch to reduce overall costs, and this practice simplifies the mission when the payloads may be destined for different orbital positions. Because of this practice, launcher capacity is usually quoted as spacecraft mass to GTO, and this number will be higher than the payload that could be delivered directly into GEO. For example, the capacity (adapter and spacecraft mass) of the Delta IV Heavy is 14,200 kg to GTO, or 6,750 kg directly to geostationary orbit. If the maneuver from GTO to GEO is to be performed with a single impulse, as with a single solid-rocket motor, apogee must occur at an equatorial crossing and at synchronous orbit altitude. 
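The combined-maneuver formula above is easy to check numerically. The following sketch (Python, standard library only) reproduces the apogee and GEO velocities quoted in the text and evaluates the combined Δv; the 28.5° plane change is an assumed example corresponding to a due-east launch from the Kennedy Space Center latitude mentioned above:

```python
import math

MU_EARTH = 398_600.4418  # Earth's gravitational parameter, km^3/s^2

def vis_viva(r_km: float, a_km: float) -> float:
    """Orbital speed (km/s) at radius r for an orbit with semi-major axis a."""
    return math.sqrt(MU_EARTH * (2.0 / r_km - 1.0 / a_km))

# Typical GTO quoted in the text: semi-major axis 24,582 km,
# apogee at the geostationary radius of 42,164 km.
r_apo = 42_164.0
a_gto = 24_582.0

v_apo = vis_viva(r_apo, a_gto)  # ~1.64 km/s, matching the text
v_geo = vis_viva(r_apo, r_apo)  # circular GEO speed, ~3.07 km/s

# Combined circularization and plane change at apogee (law of cosines).
di = math.radians(28.5)  # assumed inclination for a Kennedy Space Center launch
dv = math.sqrt(v_apo**2 + v_geo**2 - 2.0 * v_apo * v_geo * math.cos(di))

print(f"apogee speed: {v_apo:.2f} km/s, GEO speed: {v_geo:.2f} km/s")
print(f"combined delta-v: {dv:.2f} km/s")  # ~1.8 km/s
```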
Such a single-impulse transfer implies an argument of perigee of either 0° or 180°. Because the argument of perigee is slowly perturbed by the oblateness of the Earth, it is usually biased at launch so that it reaches the desired value at the appropriate time (for example, this is usually the sixth apogee on Ariane 5 launches). If the GTO inclination is zero, as with Sea Launch, then this does not apply. (It also would not apply to an impractical GTO inclined at 63.4°; see Molniya orbit.) The preceding discussion has primarily focused on the case where the transfer between LEO and GEO is done with a single intermediate transfer orbit. More complicated trajectories are sometimes used. For example, the Proton-M uses a set of three intermediate orbits, requiring five upper-stage rocket firings, to place a satellite into GEO from the high-inclination site of Baikonur Cosmodrome, in Kazakhstan. Because of Baikonur's high latitude and range safety considerations that block launches directly east, it requires less delta-v to transfer satellites to GEO by using a supersynchronous transfer orbit where the apogee (and the maneuver to reduce the transfer orbit inclination) are at a higher altitude than 35,786 km, the geosynchronous altitude. Proton even offers to perform a supersynchronous apogee maneuver up to 15 hours after launch. A satellite in the resulting geostationary orbit circles the Earth at the same rate as the Earth's rotation and so appears to remain stationary relative to a fixed point on the Earth's surface, at an altitude of approximately 35,786 kilometers (22,236 miles) above the equator. See also Astrodynamics Low Earth orbit List of orbits Aeronautics References Astrodynamics Earth orbits
Geostationary transfer orbit
[ "Engineering" ]
1,909
[ "Astrodynamics", "Aerospace engineering" ]
178,311
https://en.wikipedia.org/wiki/Religious%20ecstasy
Religious ecstasy is a type of altered state of consciousness characterized by greatly reduced external awareness and reportedly expanded interior mental and spiritual awareness, frequently accompanied by visions and emotional (and sometimes physical) euphoria. Although the experience is usually brief in time, there are records of such experiences lasting several days or even more, and of recurring experiences of ecstasy during a person's lifetime. In Sufism, the term is referred to as wajd. Context The adjective "religious" means that the experience occurs in connection with religious activities or is interpreted in the context of a religion. Journalist Marghanita Laski writes in her study "Ecstasy in Religious and Secular Experiences", first published in 1961: Epithets are very often applied to mystical experiences including ecstasies without, apparently, any clear idea about the distinctions that are being made. Thus we find experiences given such names as nature, religious, aesthetic, neo-platonic, etc. experiences, where in some cases the name seems to derive from a trigger, sometimes from the over belief. History Ancient Yoga provides techniques to attain a state of ecstasy called samādhi. According to practitioners, there are various stages of ecstasy, the highest being Nirvikalpa samādhi. Bhakti Yoga in particular places emphasis on ecstasy as being one of the fruits of its practice. In Buddhism, especially in the Pali Canon, there are eight states of trance, also called absorption. The first four states are Rupa, or materially oriented. The next four are Arupa, or non-material. These eight states are preliminary trances which lead up to final saturation. In the Visuddhimagga, great effort and years of sustained meditation are practiced to reach the first absorption, and not all individuals can accomplish it at all. In the Dionysian Mysteries of ancient Greece, initiates used intoxicants, ecstatic dance and music to remove inhibitions and social constraints. Modern Modern meditator experiences in the Thai Forest Tradition, as well as other Theravadan traditions, demonstrate that this effort and rarity is necessary only to become completely immersed in the absorptions and experience no other sensations. It is possible to experience the absorptions in a less intense state with much less practice. In the monotheistic tradition, ecstasy is usually associated with communion and oneness with God. However, such experiences can also be personal mystical experiences with no significance to anyone but the person experiencing them. Some charismatic Christians practice ecstatic states (such as "being slain in the Spirit") and interpret these as given by the Holy Spirit. The firewalkers of Greece dance themselves into a state of ecstasy at the annual Anastenaria, when they believe themselves under the influence of Constantine the Great. Historically, large groups of individuals have experienced religious ecstasies during periods of Christian revivals, to the point of causing controversy as to the origin and nature of these experiences. In response to claims that all emotional expressions of religious ecstasy were attacks on order and theological soundness from the Devil, Jonathan Edwards published his influential Religious Affections. Here, he argues, religious ecstasy could come from oneself, the Devil, or God, and it was only by observing the fruit, or changes in inner thought and behaviour, that one could determine if the religious ecstasy had come from God. 
In modern Pentecostal, Charismatic and Spirit-filled Christianity, numerous examples of religious ecstasy have transpired, similar to historic revivals. These occurrences have changed significantly since the time of the Toronto Blessing and several other North American so-called revivals and outpourings from the mid-1990s. From that time, religious ecstasy in these movements has been characterized by increasingly unusual behaviors that are understood by adherents to be the anointing of the Holy Spirit and evidence of God's "doing a new work". One of the most controversial and strange examples is that of spiritual birthing, a practice during which women, and at times even men, claim to be having actual contractions of the womb while they moan and retch as though experiencing childbirth. It is said to be a prophetic action bringing spiritual blessings from God into the world. Many believe spiritual birthing to be highly demonic and more occult-like than Christian. Religious ecstasy in these Christian movements has also been witnessed in the form of squealing, shrieking, an inability to stand or sit, uttering apocalyptic prophecies, holy laughter, crying and barking. Some people have made dramatic claims of sighting "gold dust", "angel feathers", "holy clouds", or the spontaneous appearance of precious gem stones during ecstatic worship events. In hagiographies (writings about Christian saints), many instances are recorded in which saints are granted ecstasies. According to the Catholic Encyclopedia, religious ecstasy (called "supernatural ecstasy") includes two elements: one, interior and invisible, in which the mind rivets its attention on a religious subject, and another, corporeal and visible, in which the activity of the senses is suspended, reducing the effect of external sensations upon the subject and rendering him or her resistant to awakening. The witnesses of a Marian apparition often describe experiencing these elements of ecstasy. Modern witchcraft traditions may define themselves as "ecstatic traditions", and focus on reaching ecstatic states in their rituals. The Reclaiming Tradition and the Feri Tradition are two modern ecstatic Witchcraft examples. According to the Indian spiritual teacher Meher Baba, God-intoxicated souls known in Sufism as masts experience a unique type of spiritual ecstasy: "[M]asts are desperately in love with God, or consumed by their love for God. Masts do not suffer from what may be called a disease. They are in a state of mental disorder because their minds are overcome by such intense spiritual energies that are far too much for them, forcing them to lose contact with the world, shed normal human habits and customs, and civilized society and live in a state of spiritual splendor but physical squalor. They are overcome by an agonizing love for God and are drowned in their ecstasy. Only the divine love embodied in a Perfect Master can reach them." See also Notable individuals or movements , a prophetic sect, founded by Montanus and two female colleagues, Prisca (or Priscilla) and Maximilla, who attained ecstatic visions through fasting and prayer. , who intended his music to induce religious ecstasy. experienced an ecstasy during a church-service towards the end of his life that caused him to stop writing. , religious ecstasy and ritual madness. , Mystic, first entered states of ecstasy while studying religious texts when taken ill in a Carmelite cloister. , an important figure in bridal theology , abbess and Christian mystic. 
, motivated by ecstatic visions to partake in war. , founder of Gaudiya Vaishnavism, immersed into deeper and deeper stages of ecstasy towards Krishna during the last 24 years of his life. , mystic poet , mystic poet , mystic poet mystic poet , burned at the stake for her writings. References Charismatic and Pentecostal Christianity Religious practices Spirituality
Religious ecstasy
[ "Biology" ]
1,451
[ "Behavior", "Religious practices", "Human behavior", "Spirituality" ]
178,480
https://en.wikipedia.org/wiki/Anaphase
Anaphase is the stage of mitosis after the process of metaphase, when replicated chromosomes are split and the newly-copied chromosomes (daughter chromatids) are moved to opposite poles of the cell. Chromosomes also reach their overall maximum condensation in late anaphase, to help chromosome segregation and the re-formation of the nucleus. Anaphase starts when the anaphase promoting complex marks an inhibitory chaperone called securin for destruction by ubiquitylating it. Securin is a protein which inhibits a protease known as separase. The destruction of securin unleashes separase which then breaks down cohesin, a protein responsible for holding sister chromatids together. At this point, three subclasses of microtubule unique to mitosis are involved in creating the forces necessary to separate the chromatids: kinetochore microtubules, interpolar microtubules, and astral microtubules. The centromeres are split, and the sister chromatids are pulled toward the poles by kinetochore microtubules. They take on a V-shape or Y-shape as they are pulled to either pole. While the chromosomes are drawn to each side of the cell, interpolar microtubules and astral microtubules generate forces that stretch the cell into an oval. Once anaphase is complete, the cell moves into telophase. Phases Anaphase is characterized by two distinct motions. The first of these, anaphase A, moves chromosomes to either pole of a dividing cell (marked by centrosomes, from which mitotic microtubules are generated and organised). The movement for this is primarily generated by the action of kinetochores, and a subclass of microtubule called kinetochore microtubules. The second motion, anaphase B, involves the separation of these poles from each other. The movement for this is primarily generated by the action of interpolar microtubules and astral microtubules. Anaphase A A combination of different forces have been observed acting on chromatids in anaphase A, but the primary force is exerted centrally. Microtubules attach to the midpoint of chromosomes (the centromere) via protein complexes (kinetochores). The attached microtubules depolymerise and shorten, which together with motor proteins creates movement that pulls chromosomes towards centrosomes located at each pole of the cell. Anaphase B The second part of anaphase is driven by its own distinct mechanisms. Force is generated by several actions. Interpolar microtubules begin at each centrosome and join at the equator of the dividing cell. They push against one another, causing each centrosome to move further apart. Meanwhile, astral microtubules begin at each centrosome and join with the cell membrane. This allows them to pull each centrosome closer to the cell membrane. Movement created by these microtubules is generated by a combination of microtubule growth or shrinking, and by motor proteins such as dyneins or kinesins. Relation to the cell cycle Anaphase accounts for approximately 1% of the cell cycle's duration. It begins with the regulated triggering of the metaphase-to-anaphase transition. Metaphase ends with the destruction of B cyclin, which is required for the function of metaphase cyclin-dependent kinases (M-Cdks); B cyclin is marked with ubiquitin, which flags it for destruction by proteasomes. In essence, activation of the Anaphase-promoting complex (APC) causes the APC to mark the M-phase cyclin and the inhibitory protein securin for degradation, which activates the separase protease to cleave the cohesin subunits holding the chromatids together. 
See also Interphase Prophase Prometaphase Metaphase Telophase Cytoskeleton Anaphase I Anaphase II Cdc20 References External links Mitosis
Anaphase
[ "Biology" ]
872
[ "Cellular processes", "Mitosis" ]
9,575,341
https://en.wikipedia.org/wiki/The%20Art%20of%20the%20Metaobject%20Protocol
The Art of the Metaobject Protocol (AMOP) is a 1991 book by Gregor Kiczales, Jim des Rivieres, and Daniel G. Bobrow (all three working for Xerox PARC) on the subject of metaobject protocol. Overview The book contains an explanation of what a metaobject protocol is, why it is desirable, and the de facto standard for the metaobject protocol supported by many Common Lisp implementations as an extension of the Common Lisp Object System, or CLOS. A more complete and portable implementation of CLOS and the metaobject protocol, as defined in this book, was provided by Xerox PARC as Portable Common Loops. The book presents a simplified CLOS implementation for Common Lisp called "Closette", which for the sake of pedagogical brevity does not include some of the more complex or exotic CLOS features such as forward-referencing of superclasses, full class and method redefinitions, advanced user-defined method combinations, and complete integration of CLOS classes with Common Lisp's type system. It also lacks support for compilation and most error checking, since the purpose of Closette is not actual use, but simply to demonstrate the fundamental power and expressive flexibility of metaobject protocols as an application of the principles of the meta-circular evaluator. In his 1997 talk at OOPSLA, Alan Kay called it "the best book anybody's written in ten years", and contended that it contained "some of the most profound insights, and the most practical insights about OOP", but was dismayed that it was written in a highly Lisp-centric and CLOS-specific fashion, calling it "a hard book for most people to read; if you don't know the Lisp culture, it's very hard to read". References Computer books Lisp (programming language) Object (computer science)
The Art of the Metaobject Protocol
[ "Technology" ]
398
[ "Works about computing", "Computer books" ]
9,575,474
https://en.wikipedia.org/wiki/Object-Oriented%20Programming%20in%20Common%20Lisp
Object-Oriented Programming in Common Lisp: A Programmer's Guide to CLOS (1988, Addison-Wesley) is a book by Sonya Keene on the Common Lisp Object System. First published in 1988, the book starts out with the elements of CLOS and develops through the concepts of data abstraction with classes and methods, inheritance, and genericity towards creating an advanced CLOS program using streams I/O. External links Sonya E. Keene | InformIT 1988 non-fiction books Addison-Wesley books Common Lisp publications
Object-Oriented Programming in Common Lisp
[ "Technology" ]
111
[ "Computing stubs", "Computer book stubs" ]
9,576,297
https://en.wikipedia.org/wiki/Frobenius%20matrix
A Frobenius matrix is a special kind of square matrix from numerical analysis. A matrix is a Frobenius matrix if it has the following three properties: all entries on the main diagonal are ones; the entries below the main diagonal of at most one column are arbitrary; every other entry is zero. The following matrix is an example (with arbitrary entries a and b below the diagonal of the first column):

 1 0 0
 a 1 0
 b 0 1

Frobenius matrices are invertible. The inverse of a Frobenius matrix is again a Frobenius matrix, equal to the original matrix with changed signs outside the main diagonal. The inverse of the example above is therefore:

  1 0 0
 -a 1 0
 -b 0 1

Frobenius matrices are named after Ferdinand Georg Frobenius. The term Frobenius matrix may also be used for an alternative matrix form that differs from an identity matrix only in the elements of a single row preceding the diagonal entry of that row (as opposed to the above definition, which has the matrix differing from the identity matrix in a single column below the diagonal). The following matrix is an example of this alternative form, showing a 4-by-4 matrix with its 3rd row differing from the identity matrix:

 1 0 0 0
 0 1 0 0
 a b 1 0
 0 0 0 1

An alternative name for this latter form of Frobenius matrices is Gauss transformation matrix, after Carl Friedrich Gauss. They are used in the process of Gaussian elimination to represent the Gaussian transformations. If a matrix is multiplied from the left (left multiplied) with a Gauss transformation matrix, a linear combination of the preceding rows is added to the given row of the matrix (in the example shown above, a linear combination of rows 1 and 2 will be added to row 3). Multiplication with the inverse matrix subtracts the corresponding linear combination from the given row. This corresponds to one of the elementary operations of Gaussian elimination (besides the operation of transposing the rows and multiplying a row with a scalar multiple). See also Elementary matrix, a special case of a Frobenius matrix with only one off-diagonal nonzero Notes References Gene H. Golub and Charles F. Van Loan (1996). Matrix Computations, third edition, Johns Hopkins University Press. Matrices
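The sign-flip inverse property described above can be checked numerically in a few lines; a minimal NumPy sketch, where the subdiagonal entries 2 and -3 are arbitrary illustrative choices:

```python
import numpy as np

# A 3x3 Frobenius matrix: ones on the diagonal, arbitrary entries below the
# diagonal of a single column (here the first), zeros elsewhere.
F = np.array([[ 1.0, 0.0, 0.0],
              [ 2.0, 1.0, 0.0],
              [-3.0, 0.0, 1.0]])

# The stated inverse: the same matrix with the off-diagonal signs flipped.
F_inv_expected = np.array([[ 1.0, 0.0, 0.0],
                           [-2.0, 1.0, 0.0],
                           [ 3.0, 0.0, 1.0]])

assert np.allclose(np.linalg.inv(F), F_inv_expected)
assert np.allclose(F @ F_inv_expected, np.eye(3))
print("sign-flip inverse property verified")
```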
Frobenius matrix
[ "Mathematics" ]
432
[ "Matrices (mathematics)", "Mathematical objects" ]
9,577,146
https://en.wikipedia.org/wiki/Cement%20mill
A cement mill (or finish mill in North American usage) is the equipment used to grind the hard, nodular clinker from the cement kiln into the fine grey powder that is cement. Most cement is currently ground in ball mills, and also in vertical roller mills, which are more effective than ball mills. History Early hydraulic cements, such as those of James Parker, James Frost and Joseph Aspdin, were relatively soft and readily ground by the primitive technology of the day, using flat millstones. The emergence of Portland cement in the 1840s made grinding considerably more difficult, because the clinker produced by the kiln is often as hard as the millstone material. Because of this, cement continued to be ground very coarsely (typically 20% over 100 μm particle diameter) until better grinding technology became available. Besides producing un-reactive cement with slow strength growth, this exacerbated the problem of unsoundness. This late, disruptive expansion is caused by hydration of large particles of calcium oxide. Fine grinding lessens this effect, and early cements had to be stored for several months to give the calcium oxide time to hydrate before it was fit for sale. From 1885 onward, the development of specialized steel led to the development of new forms of grinding equipment, and from this point onward, the typical fineness of cement began a steady rise. The progressive reduction in the proportion of larger, un-reactive cement particles has been partially responsible for the fourfold increase in the strength of Portland cement during the twentieth century. The recent history of the technology has been mainly concerned with reducing the energy consumption of the grinding process. Materials ground Portland clinker is the main constituent of most cements. In Portland cement, a little calcium sulfate (typically 3-10%) is added in order to retard the hydration of tricalcium aluminate. The calcium sulfate may consist of natural gypsum, anhydrite, or synthetic wastes such as flue-gas desulfurization gypsum. In addition, up to 5% calcium carbonate and up to 1% of other minerals may be added. It is normal to add a certain amount of water, and small quantities of organic grinding aids and performance enhancers. "Blended cements" and Masonry cements may include large additions (up to 40%) of natural pozzolans, fly ash, limestone, silica fume or metakaolin. Blastfurnace slag cement may include up to 70% ground granulated blast furnace slag. See cement. Gypsum and calcium carbonate are relatively soft minerals, and rapidly grind to ultra-fine particles. Grinding aids are typically chemicals added at a rate of 0.01-0.03% that coat the newly formed surfaces of broken mineral particles and prevent re-agglomeration. They include 1,2-propanediol, acetic acid, triethanolamine and lignosulfonates. Temperature control Heat generated in the grinding process causes gypsum (CaSO4·2H2O) to lose water, forming bassanite (CaSO4·(0.2–0.7)H2O) or γ-anhydrite (CaSO4·~0.05H2O). The latter minerals are rapidly soluble, and about 2% of these in cement is needed to control tricalcium aluminate hydration. If more than this amount forms, crystallization of gypsum on their re-hydration causes "false set" - a sudden thickening of the cement mix a few minutes after mixing, which thins out on re-mixing. High milling temperature causes this. On the other hand, if milling temperature is too low, insufficient rapidly soluble sulfate is available and this causes "flash set" - an irreversible stiffening of the mix. 
Obtaining the optimum amount of rapidly soluble sulfate requires milling with a mill exit temperature within a few degrees of 115 °C. Where the milling system is too hot, some manufacturers use 2.5% gypsum and the remaining calcium sulfate as natural α-anhydrite (CaSO4). Complete dehydration of this mixture yields the optimum 2% γ-anhydrite. In the case of some efficient modern mills, insufficient heat is generated. This is corrected by recirculating part of the hot exhaust air to the mill inlet. Ball Mills A Ball mill is a horizontal cylinder partly filled with steel balls (or occasionally other shapes) that rotates on its axis, imparting a tumbling and cascading action to the balls. Material fed through the mill is crushed by impact and ground by attrition between the balls. The grinding media are usually made of high-chromium steel. The smaller grades are occasionally cylindrical ("pebs") rather than spherical. There exists a speed of rotation (the "critical speed") at which the contents of the mill would simply ride over the roof of the mill due to centrifugal action. The critical speed (rpm) is given by: nC = 42.29/√d, where d is the internal diameter in metres. Ball mills are normally operated at around 75% of critical speed, so a mill with diameter 5 metres will turn at around 14 rpm. The mill is usually divided into at least two chambers (although this depends upon feed input size - mills including a roller press are mostly single-chambered), allowing the use of different sizes of grinding media. Large balls are used at the inlet, to crush clinker nodules (which can be over 25 mm in diameter). Ball diameter here is in the range 60–80 mm. In a two-chamber mill, the media in the second chamber are typically in the range 15–40 mm, although media down to 5 mm are sometimes encountered. As a general rule, the size of media has to match the size of material being ground: large media can't produce the ultra-fine particles required in the finished cement, but small media can't break large clinker particles. Mills with as many as four chambers, allowing a tight segregation of media sizes, were once used, but this is now becoming rare. Alternatives to multi-chamber mills are: pairs of mills, run in tandem, charged with different-sized media. use of alternative technology (see Roll-presses below) to crush the clinker prior to fine-grinding in a ball mill. A current of air is passed through the mill. This helps keep the mill cool, and sweeps out evaporated moisture which would otherwise cause hydration and disrupt material flow. The dusty exhaust air is cleaned, usually with bag filters. Closed-circuit systems The efficiency of the early stages of grinding in a ball mill is much greater than that for formation of ultra-fine particles, so ball mills operate most efficiently by making a coarse product, the fine fractions of this then being separated, and the coarse part being returned to the mill inlet. The proportion of the mill-exit material returned to the inlet may vary from 10 to 30% when ordinary cement is being ground, to 85-95% for extremely fine cement products. It is important for system efficiency that the minimum amount of material of finished-product fineness is returned to the inlet. Modern separators are capable of making a very precise size "cut" and contribute significantly to the reduction of energy consumption, and have the additional advantage that they cool both the product and the returned material, thus minimizing overheating. 
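The critical-speed rule given above in the Ball Mills discussion lends itself to a quick calculation; a minimal sketch (plain Python) reproducing the text's 5-metre example:

```python
import math

def critical_speed_rpm(diameter_m: float) -> float:
    """Ball-mill critical speed from the rule of thumb nC = 42.29 / sqrt(d)."""
    return 42.29 / math.sqrt(diameter_m)

d = 5.0                          # internal mill diameter in metres (text's example)
n_crit = critical_speed_rpm(d)   # ~18.9 rpm
n_oper = 0.75 * n_crit           # mills normally run at ~75% of critical speed

print(f"critical speed:  {n_crit:.1f} rpm")
print(f"operating speed: {n_oper:.1f} rpm")  # ~14 rpm, as stated in the text
```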
Efficient closed-circuit systems, because of their tight particle size control, lead to cements with relatively narrow particle size distributions (i.e. for a given mean particle size, they have fewer large and small particles). This is of advantage in that it maximizes the strength-production potential of the clinker, because large particles are inert. As a rule of thumb, only the outer 7 μm "skin" of each particle hydrates in concrete, so any particle over 14 μm diameter always leaves an un-reacted core. However, the lack of ultra-fine particles can be a disadvantage. These particles normally pack the spaces between the larger particles in a cement paste, and if absent the deficit is made up with extra water, leading to lower strength. This can be remedied by including 5% calcium carbonate in the cement: this soft mineral produces adequate ultra-fines on the first pass through the mill. Energy consumption and output Clinker hardness The hardness of clinker is important for the energy cost of the grinding process. It depends both on the clinker's mineral composition and its thermal history. The easiest-ground clinker mineral is alite, so high-alite clinkers reduce grinding costs, although they are more expensive to make in the kiln. The toughest mineral is belite, because it is harder, and is somewhat plastic, so that crystals tend to flatten rather than shatter when impacted in the mill. The mode of burning of the clinker is also important. Clinker rapidly burned at the minimum temperature for combination, then rapidly cooled, contains small, defective crystals that grind easily. These crystals are usually also optimal for reactivity. On the other hand, long burning at excess temperature, and slow cooling, lead to large, well-formed crystals that are hard to grind and un-reactive. The effect of such a clinker can be to double milling costs. Roller mills These have been used for many years for the less exacting raw-milling process, but recently roller mills, in combination with high-efficiency separators, have been used for cement grinding. The grinding action employs much greater stress on the material than in a ball mill, and is therefore more efficient. Energy consumption is typically half that of a ball mill. However, the narrowness of the particle size distribution of the cement is problematic, and the process has yet to receive wide acceptance. High-pressure roll presses These consist of a pair of rollers set 8–30 mm apart and counter-rotating with surface speed around 0.9 - 1.8 m·s−1. The bearings of the rollers are designed to deliver a pressure of 50 MPa or more. The bed of material drawn between the rollers emerges as a slab-like agglomeration of highly fractured particles. The energy efficiency of this process is comparatively high. Systems have been designed, including a de-agglomerator and separator, that will deliver material of cement fineness. However, particle size distribution is again a problem, and roll presses are now increasingly popular as a "pre-grind" process, with the cement finished in a single chamber ball mill. This gives good cement performance, and reduces energy consumption by 20-40% compared with a standard ball mill system. Capacity of cement mills The cement mills on a cement plant are usually sized for a clinker consumption considerably greater than the output of the plant's kilns. This is for two reasons: The mills are sized to cope with peaks in market demand for cement. 
In temperate countries, the summer demand for cement is usually much higher than that in winter. Excess clinker produced in winter goes into storage in readiness for summer demand peaks. For this reason, plants with highly seasonal demand usually have very large clinker stores. Cement milling is the largest user of electric power on a cement plant, and because mills can easily be started and stopped, it often pays to operate cement mills only during "off-peak" periods when cheaper power is available. This is also favourable for electricity producers, who can negotiate power prices with major users in order to balance their generating capacity over 24 hours. More sophisticated arrangements such as "power shedding" are often employed. This consists of the cement manufacturer shutting down the plant at short notice when the power supplier expects a critical demand peak, in return for favourable prices. Clearly, plenty of excess cement milling capacity is needed in order to "catch up" after such interruptions. Control of product quality In addition to control of temperature (mentioned above), the main requirement is to obtain a consistent fineness of the product. From the earliest times, fineness was measured by sieving the cement. As cements have become finer, the use of sieves is less applicable, but the amount retained on a 45 μm sieve is still measured, usually by air-jet sieving or wet-sieving. The amount passing this sieve (typically 95% in modern general-purpose cements) is related to the overall strength-development potential of the cement, because the larger particles are essentially unreactive. The main measure of fineness today is specific surface. Because cement particles react with water at their surface, the specific surface area is directly related to the cement's initial reactivity. By adjusting the fineness of grind, the manufacturer can produce a range of products from a single clinker. Tight control of fineness is necessary in order to obtain cement with the desired consistent day-to-day performance, so round-the-clock measurements are made on the cement as it is produced, and mill feed-rates and separator settings are adjusted to maintain constant specific surface. A more comprehensive picture of fineness is given by particle size analysis, yielding a measure of the amount of each size range present, from sub-micrometer upwards. This used to be mainly a research tool, but with the advent of cheap, industrialized laser-diffraction analyzers, its use for routine control is becoming more frequent. This may take the form of a desk-top analyzer fed with automatically gathered samples in a robotized laboratory, or, increasingly commonly, instruments attached directly to the output ducts of the mill. In either case, the results can be fed directly into the mill control system, allowing complete automation of fineness control. In addition to fineness, added materials in the cement must be controlled. In the case of gypsum addition, the material used is frequently of variable quality, and it is normal practice to measure the sulfate content of the cement regularly, typically by x-ray fluorescence, using the results to adjust the gypsum feed rate. Again, this process is often completely automated. Similar measurement and control protocols are applied to other materials added, such as limestone, slag and fly-ash. Notes and references Further reading Cement Chemical equipment Industrial machinery
Cement mill
[ "Chemistry", "Engineering" ]
2,961
[ "Chemical equipment", "Industrial machinery", "nan" ]
9,577,147
https://en.wikipedia.org/wiki/1858%20Bradford%20sweets%20poisoning
In 1858 a batch of sweets in Bradford, England, was accidentally adulterated with poisonous arsenic trioxide. Sweets from the batch were sold to the public, leading to around 20 deaths and over 200 people suffering the effects of arsenic poisoning. The adulteration of food had been practised in Britain since before the Middle Ages, but from 1800, with increasing urbanisation and the rise in shop-purchased food, adulterants became a growing problem. With the cost of sugar high, replacing it with substitutes was common. For the sweets produced in Bradford, the confectioner was supposed to purchase powdered gypsum, but a mistake at the wholesale chemist meant arsenic was purchased instead. Three men were arrested—the chemist who sold the arsenic, his assistant and the sweet maker—but all three were acquitted after the judge decided it was all accidental and there was no case for any of them to answer. The deaths led to the Adulteration of Food or Drink Act 1860, although the legislation was criticised for being too ambiguous and the penalties for breaching it too low to act as a deterrent. The deaths were also a factor in the passage of the Pharmacy Act 1868. Background The adulteration of food had been practised in Britain since before the Middle Ages, but from 1800, with increasing urbanisation and the rise in shop-purchased food, adulterants became a growing problem. Cost was the reason adulterants were used; sugar, for example, cost 6½ d per pound; the adulterant cost ½ d per pound. The adulteration fell into three categories: first were harmless additions, such as chicory in coffee, adding flour to mustard and watering down milk. More serious was the addition of indigestible ingredients, including introducing alum, gypsum or chalk into white bread or tree or shrub leaves into tea leaves. Finally, dangerous additions to foods included salts of copper and red lead glaze used as colourants, mercury salts added to cheese, and the use of arsenic, sulphuric acid and nitric acid. The chemist Arthur Hill Hassall was prominent in the field of food analysis as an analytical microscopist who established levels of adulteration. Between 1850 and 1856 he examined 3,000 samples of food and found 65 per cent of them contained adulterants. Those involved in adulterating foodstuff used nicknames to hide the practice. These included "daff", "duck", "duff", "derby", "multum", "flash" and "stuff". One local name used in Bradford—an industrial town in the West Riding of Yorkshire—was "daft". In the Victorian era, arsenic was an ingredient in several household products, including medicines (for external and internal use), candles, wallpaper, soft furnishings and colourants for foods. It was also used as a poison for murder. So many people died of arsenic poisoning—both deliberate and accidental—that legislation in the form of the Arsenic Act 1851 was introduced; it was the first piece of UK legislation to attempt to control the sale of a poisonous substance. It stated that arsenic should only be sold to adults, the sale recorded in a book—signed by both seller and buyer—and that there should be a witness to the sale if the buyer was unknown to the seller. Any arsenic not to be used for medicinal purposes was to be coloured with soot or indigo to differentiate it from other white powders on sale. There was no restriction on who could sell arsenic; no licence was required and any trader was able to sell it as long as they kept a record of the sale. 
Although the Act controlled the method of sale, it did not bring an end to arsenic-related problems or the widespread use of arsenic in household goods. In 1860 an article in The Lancet asked: How to convict of arsenical poisoning when ladies use arsenical cosmetics; when confectioners sell arsenical sweetmeats; when paperhangers clothe our walls with arsenical hangings, and impregnate all the air with fine arsenical dust; above all, when chemists sell arsenic for a toothpowder and label it mercury? Arsenic trioxide—also known as white arsenic—is an industrially produced inorganic compound with the formula As2O3. It is white or colourless and is commonly found in either powdered or crystal form—the latter resembling sugar or sand. It is used as a wood preservative and a pesticide. As a solid it is tasteless, but when in solution, it is described as "very faint, at first sweetish, afterward very slightly metallic". William Hardaker, known to locals as "Humbug Billy", sold sweets from a stall in the Greenmarket in central Bradford. Hardaker purchased his confectionery from Joseph Neal, who made the sweets in Stone Street a few hundred yards to the north. The sweets were peppermint lozenges, made by mixing peppermint oil into a base of sugar, water and gum, which was made into a paste, then dried on boards before being cut into lozenge shapes. As the sugar was relatively expensive, Neal would substitute some of the sugar with powdered gypsum; he later justified his actions by saying "where I use a pound of it, there are people who will use a ton of it". Outbreak In the week before the regular Saturday market, Neal was due to make a batch of lozenges for Hardaker but had run out of "daft". Neal asked his lodger, James Archer, to visit a druggist in the town of Shipley to purchase the gypsum. The druggist, Charles Hodgson, was ill in bed; his assistant, William Goddard, was on duty and had been told the daft was in a barrel in the attic. Goddard had only been working for Hodgson for a few weeks, hoping for an apprenticeship. When Archer asked for the daft, Goddard went to the attic and took powder from a cask. What Goddard did not realise was that there were two identical barrels and he had taken powder from a barrel of arsenic trioxide; the barrel was only labelled as such on the base. Neal employed James Appleton—an experienced sweetmaker—to manufacture the lozenges. Both men thought the new batch of sweets was different from normal: Appleton thought the mixture was more free-flowing and smoother than normal, and both considered the end product a darker shade than they were used to. Appleton, working closely with the arsenic, fell ill and suffered from fits of vomiting; Neal, who occasionally tasted the lozenges to check the quality, also began vomiting. Neither man connected their sickness to the lozenges. When Hardaker collected the sweets from Neal on 30 October 1858 he also noted the different colour from the normal batches. He negotiated a lower price of 7½ d per pound, a discount of ½ d per pound on his usual price. Hardaker sold the sweets that Saturday for 1½ d. He ate some of the sweets and by 5:30 pm he became ill; he went home and left his assistant to look after the stall until it closed at 11:30 pm, by which point about a thousand sweets had been sold. 
Investigation, arrests and court case On the Sunday morning the local police were informed of the deaths of a man named Elijah Wright and two boys, aged 9 and 14, the sons of John Scott; it was thought that the cause of the deaths was cholera. The misdiagnosis of cholera instead of arsenic poisoning was a common mistake, as the symptoms for both—vomiting, abdominal pain and diarrhoea—were similar and England was affected by an ongoing cholera pandemic. When more deaths were reported during the course of Sunday, Police Constable Campbell was sent to investigate; he made the connection between the sweets and the deaths, and visited Hardaker, seizing the remaining stock. Campbell then went to Neal's premises and seized fragments of the sweets. A detective, William Burniston, went to Shipley with Neal to visit Hodgson, the druggist. When Neal indicated the cask from which the powder had been dispensed, Hodgson immediately identified it as containing arsenic. Burniston arrested Goddard, the assistant. The chief constable, William Leveratt, had been kept abreast of developments, and deployed men to local inns and alehouses to spread warnings about the lozenges. He sent out bellmen to announce the warnings across the town and, at 11 pm, he had warning notices printed up and posted over night for people to see in the morning. A local doctor, John Henry Bell, hypothesised the poisonings of his patients were caused by arsenic; he gave the lozenges from one patient to an analytical chemist, who confirmed his theory. Bell treated sixty patients for the poisoning. By midday on the Monday—1 November—the number of dead had risen to twelve, with seventy-eight people seriously ill. Police visited Neal for a second time and searched his premises more thoroughly, finding hundreds of fragments still on the drying boards used when preparing the lozenges. Neal's wife also admitted that she had found other fragments and thrown them on to the fire; she had also been to their own sweetshop and brought back some bags of mixed sweets that contained lozenges or lozenge fragments. While the police were there, Neal ran off and was pursued; he was found sitting in his kitchen at home. Goddard was brought in front of the magistrates on the Monday; he was remanded in custody. An inquest was opened the following day. The magistrates also met and decided that Hodgson should also be arrested and remanded. On Wednesday the magistrates decided that Neal should also be arrested, although he was released on bail. They heard from an analytical chemist, Felix Rimmington, who estimated the quantity of arsenic in each sweet; he added that four and a half grains were sufficient to kill an adult male. Rimmington later made a more detailed analysis of the lozenges. By the end of Wednesday, fifteen people had been reported dead, eleven of whom were children; thirteen more were listed as "dangerously ill", and another hundred and fifty were ill but not in danger. The magistrates met again on the Friday and determined that the three men in custody should be sent to the next York assizes for trial on a charge of manslaughter by negligence. Bail for Hodgson and Neal was set at £200; that for Goddard was set at £100. The number of dead had risen to seventeen, with 196 people ill. Eventually up to twenty-one people died and over two hundred were sick; figures on the number of deaths vary, with either twenty or twenty-one people dead. 
The assizes opened on 9 December 1858; the judge was Mr Baron Warren. In his opening remarks he said that there was no case for Neal to answer as he was not present when the arsenic was purchased or when the lozenges were made, and dismissed the case against him. Goddard, Warren said, was only following the instructions of his master and so also had no case to answer; his case was also dismissed. Warren said the case against Hodgson should stand and that he would be tried later in the month. Hodgson's trial opened on 21 December 1858 for the manslaughter of Elizabeth Mary Midgley, a seven-year-old girl. If he was found guilty of the charge, others would follow. After hearing some of the evidence, the judge stopped the proceedings and said there was no case for Hodgson to answer; he instructed the jury to record a verdict of not guilty, which they did. Legacy The deaths led to calls for legislation to stop similar events occurring, and the Adulteration of Food or Drink Act 1860 was passed into law, "on a wave of public revulsion" against the event at Bradford, according to the microbiologist John Postgate. According to the legal scholar Jillian London, "Although the Bradford incident was far from solely responsible for the passage of the 1860 Act, it was instrumental in heightening public awareness of and outrage towards adulteration". The Act stated that penalties would be applied to: Every person who shall sell any article of food or drink with which, to the knowledge of such person, any ingredient or material injurious to the health of persons eating or drinking such article has been mixed, and every person who shall sell as pure or unadulterated any article of food or drink which is adulterated or not pure ... The political economist Sébastien Rioux observes that the law only applied when adulterated food was being passed off as pure, so the protection was only on misrepresenting what was being sold, rather than ensuring the purity of all food. The medical historian James C. Whorton considers the Act "was next to useless" as its provisions were too ambiguous and the penalties for breaching it—at £5—were too low to act as a deterrent. The events at Bradford were still felt ten years later when the Pharmacy Act 1868 was passed. Legislation on pharmacies was considered from 1859 onwards, although the Royal Pharmaceutical Society opposed a Poison Bill in 1859. See also 1900 English beer poisoning Notes and references Notes References Sources Books Journals and magazines News Websites 1850s disasters in the United Kingdom 1858 disasters in Europe Bradford 1858 health disasters Adulteration Arsenic Food safety scandals Disasters in Yorkshire History of Bradford Mass poisoning Poisoning by drugs, medicaments and biological substances Health disasters in the United Kingdom Food safety in the United Kingdom History of forensic science 19th century in Yorkshire October 1858 Poisoned candy 1858 Bradford sweets poisoning
1858 Bradford sweets poisoning
[ "Chemistry", "Environmental_science" ]
2,796
[ "Toxicology", "Adulteration", "Drug safety", " medicaments and biological substances", "Poisoning by drugs" ]
9,577,500
https://en.wikipedia.org/wiki/Exosome%20complex
The exosome complex (or PM/Scl complex, often just called the exosome) is a multi-protein intracellular complex capable of degrading various types of RNA (ribonucleic acid) molecules. Exosome complexes are found in both eukaryotic cells and archaea, while in bacteria a simpler complex called the degradosome carries out similar functions. The core of the exosome contains a six-membered ring structure to which other proteins are attached. In eukaryotic cells, the exosome complex is present in the cytoplasm, nucleus, and especially the nucleolus, although different proteins interact with the exosome complex in these compartments, regulating its RNA degradation activity towards substrates specific to each compartment. Substrates of the exosome include messenger RNA, ribosomal RNA, and many species of small RNAs. The exosome has an exoribonucleolytic function, meaning it degrades RNA starting at one end (the 3′ end in this case), and in eukaryotes also an endoribonucleolytic function, meaning it cleaves RNA at sites within the molecule. Several proteins in the exosome are the target of autoantibodies in patients with specific autoimmune diseases (especially the PM/Scl overlap syndrome), and some antimetabolic chemotherapies for cancer function by blocking the activity of the exosome. In addition, mutations in exosome component 3 cause pontocerebellar hypoplasia and spinal motor neuron disease. Discovery The exosome was first discovered as an RNase in 1997 in the budding yeast Saccharomyces cerevisiae, an often-used model organism. Not long after, in 1999, it was realized that the exosome was in fact the yeast equivalent of an already described complex in human cells called the PM/Scl complex, which had been identified as an autoantigen in patients with certain autoimmune diseases years earlier (see below). Purification of this "PM/Scl complex" allowed the identification of more human exosome proteins and eventually the characterization of all components in the complex. In 2001, the increasing amount of genome data that had become available allowed the prediction of exosome proteins in archaea, although it would take another two years before the first exosome complex from an archaeal organism was purified. Structure Core proteins The core of the complex has a ring structure consisting of six proteins that all belong to the same class of RNases, the RNase PH-like proteins. In archaea there are two different PH-like proteins (called Rrp41 and Rrp42), each present three times in an alternating order. Eukaryotic exosome complexes have six different proteins that form the ring structure. Of these six eukaryotic proteins, three resemble the archaeal Rrp41 protein and the other three are more similar to the archaeal Rrp42 protein. Located on top of this ring are three proteins that have an S1 RNA-binding domain (RBD), two of which additionally have a K-homology (KH) domain. In eukaryotes, three different "S1" proteins are bound to the ring, whereas in archaea either one or two different "S1" proteins can be part of the exosome (although there are always three S1 subunits attached to the complex). This ring structure is very similar to that of the proteins RNase PH and PNPase. In bacteria, the protein RNase PH, which is involved in tRNA processing, forms a hexameric ring consisting of six identical RNase PH proteins. 
In the case of PNPase, which is a phosphorolytic RNA-degrading protein found in bacteria and the chloroplasts and mitochondria of some eukaryotic organisms, two RNase PH domains and both an S1 and a KH RNA-binding domain are part of a single protein, which forms a trimeric complex that adopts a structure almost identical to that of the exosome. Because of this high similarity in both protein domains and structure, these complexes are thought to be evolutionarily related and to have a common ancestor. The RNase PH-like exosome proteins, PNPase and RNase PH all belong to the RNase PH family of RNases and are phosphorolytic exoribonucleases, meaning that they use inorganic phosphate to remove nucleotides from the 3' end of RNA molecules. Associated proteins Besides these nine core exosome proteins, two other proteins often associate with the complex in eukaryotic organisms. One of these is Rrp44, a hydrolytic RNase, which belongs to the RNase R family of hydrolytic exoribonucleases (nucleases that use water to cleave the nucleotide bonds). In addition to being an exoribonucleolytic enzyme, Rrp44 also has endoribonucleolytic activity, which resides in a separate domain of the protein. In yeast, Rrp44 is associated with all exosome complexes and has a crucial role in the activity of the yeast exosome complex. While a human homologue of the protein exists, for a long time no evidence was found that it was associated with the human exosome complex. In 2010, however, it was discovered that humans have three Rrp44 homologues, two of which can be associated with the exosome complex. These two proteins most likely degrade different RNA substrates due to their different cellular localization, with one being localized in the cytoplasm (DIS3L1) and the other in the nucleus (DIS3). The second common associated protein is called Rrp6 (in yeast) or PM/Scl-100 (in humans). Like Rrp44, this protein is a hydrolytic exoribonuclease, but in this case one of the RNase D protein family. The protein PM/Scl-100 is most commonly part of exosome complexes in the nucleus of cells, but can form part of the cytoplasmic exosome complex as well. Regulatory proteins Apart from these two tightly bound protein subunits, many proteins interact with the exosome complex in both the cytoplasm and nucleus of cells. These loosely associated proteins may regulate the activity and specificity of the exosome complex. In the cytoplasm, the exosome interacts with AU-rich element (ARE) binding proteins (e.g. KSRP and TTP), which can promote or prevent the degradation of mRNAs. The nuclear exosome associates with RNA-binding proteins (e.g. MPP6/Mpp6 and C1D/Rrp47 in humans/yeast) that are required for processing certain substrates. In addition to single proteins, other protein complexes interact with the exosome. One of those is the cytoplasmic Ski complex, which includes an RNA helicase (Ski2) and is involved in mRNA degradation. In the nucleus, the processing of rRNA and snoRNA by the exosome is mediated by the TRAMP complex, which contains both RNA helicase (Mtr4) and polyadenylation (Trf4) activity. Function Enzymatic function As stated above, the exosome complex contains many proteins with ribonuclease domains. The exact nature of these ribonuclease domains has changed across evolution from bacterial to archaeal to eukaryotic complexes as various activities have been gained and lost. The exosome is primarily a 3'-5' exoribonuclease, meaning that it degrades RNA molecules from their 3' end. 
Exoribonucleases contained in exosome complexes are either phosphorolytic (the RNase PH-like proteins) or, in eukaryotes, hydrolytic (the RNase R and RNase D domain proteins). The phosphorolytic enzymes use inorganic phosphate to cleave the phosphodiester bonds – releasing nucleotide diphosphates. The hydrolytic enzymes use water to hydrolyse these bonds – releasing nucleotide monophosphates. In archaea, the Rrp41 subunit of the complex is a phosphorolytic exoribonuclease. Three copies of this protein are present in the ring and are responsible for the activity of the complex. In eukaryotes, none of the RNase PH subunits have retained this catalytic activity, meaning the core ring structure of the human exosome has no enzymatically active protein. Despite this loss of catalytic activity, the structure of the core exosome is highly conserved from archaea to humans, suggesting that the complex performs a vital cellular function. In eukaryotes, the absence of the phosphorolytic activity is compensated for by the presence of the hydrolytic enzymes, which are responsible for the ribonuclease activity of the exosome in such organisms. As stated above, the hydrolytic proteins Rrp6 and Rrp44 are associated with the exosome in yeast; in humans, besides Rrp6, two different proteins (Dis3 and Dis3L1) can be associated at the position of the yeast Rrp44 protein. Although originally the S1 domain proteins were thought to have 3'-5' hydrolytic exoribonuclease activity as well, the existence of this activity has recently been questioned, and these proteins might merely play a role in binding substrates prior to their degradation by the complex. Substrates The exosome is involved in the degradation and processing of a wide variety of RNA species. In the cytoplasm of cells, it is involved in the turnover of messenger RNA (mRNA) molecules. The complex can degrade mRNA molecules that have been tagged for degradation because they contain errors, through interactions with proteins from the nonsense-mediated decay or non-stop decay pathways. Alternatively, mRNAs are degraded as part of their normal turnover. Several proteins that stabilize or destabilize mRNA molecules through binding to AU-rich elements in the 3' untranslated region of mRNAs interact with the exosome complex. In the nucleus, the exosome is required for the correct processing of several small nuclear RNA molecules. Finally, the nucleolus is the compartment where the majority of the exosome complexes are found. There it plays a role in the processing of the 5.8S ribosomal RNA (the first identified function of the exosome) and of several small nucleolar RNAs. Although most cells have other enzymes that can degrade RNA, either from the 3' or from the 5' end of the RNA, the exosome complex is essential for cell survival. When the expression of exosome proteins is artificially reduced or stopped, for example by RNA interference, growth stops and the cells eventually die. Both the core proteins of the exosome complex and the two main associated proteins are essential. Bacteria do not have an exosome complex; however, similar functions are performed by a simpler complex that includes the protein PNPase, called the degradosome. The exosome is a key complex in cellular RNA quality control. Unlike prokaryotes, eukaryotes possess highly active RNA surveillance systems that recognise unprocessed and mis-processed RNA-protein complexes (such as ribosomes) prior to their exit from the nucleus. 
It is presumed that this system prevents aberrant complexes from interfering with important cellular processes such as protein synthesis. In addition to RNA processing, turnover and surveillance activities, the exosome is important for the degradation of so-called cryptic unstable transcripts (CUTs) that are produced from thousands of loci within the yeast genome. The importance of these unstable RNAs and of their degradation is still unclear, but similar RNA species have also been detected in human cells. Disease Autoimmunity The exosome complex is the target of autoantibodies in patients with various autoimmune diseases. These autoantibodies are mainly found in people with the PM/Scl overlap syndrome, an autoimmune disease in which patients have symptoms from both scleroderma and either polymyositis or dermatomyositis. Autoantibodies can be detected in the serum of patients by a variety of assays. In the past, the most commonly used methods were double immunodiffusion using calf thymus extracts, immunofluorescence on HEp-2 cells or immunoprecipitation from human cell extracts. In immunoprecipitation assays using sera from anti-exosome positive patients, a distinctive set of proteins is precipitated. Already years before the exosome complex was identified, this pattern was termed the PM/Scl complex. Immunofluorescence using sera from these patients usually shows a typical staining of the nucleolus of cells, which sparked the suggestion that the antigen recognized by autoantibodies might be important in ribosome synthesis. More recently, recombinant exosome proteins have become available and these have been used to develop line immunoassays (LIAs) and enzyme-linked immunosorbent assays (ELISAs) for detecting these antibodies. In these diseases, antibodies are mainly directed against two of the proteins of the complex, called PM/Scl-100 (the RNase D-like protein) and PM/Scl-75 (one of the RNase PH-like proteins from the ring), and antibodies recognizing these proteins are found in approximately 30% of patients with the PM/Scl overlap syndrome. Although these two proteins are the main target of the autoantibodies, other exosome subunits and associated proteins (like C1D) can be targeted in these patients. At the current time, the most sensitive way to detect these antibodies is by using a peptide, derived from the PM/Scl-100 protein, as the antigen in an ELISA, instead of complete proteins. By this method, autoantibodies are found in up to 55% of patients with the PM/Scl overlap syndrome, but they can also be detected in patients with either scleroderma, polymyositis, or dermatomyositis alone. As the autoantibodies are found mainly in patients who have characteristics of several different autoimmune diseases, the clinical symptoms of these patients can vary widely. The symptoms that are seen most often are the typical symptoms of the individual autoimmune diseases and include Raynaud's phenomenon, arthritis, myositis and scleroderma. Treatment of these patients is symptomatic and is similar to treatment for the individual autoimmune disease, often involving either immunosuppressive or immunomodulating drugs. Cancer treatment The exosome has been shown to be inhibited by the antimetabolite fluorouracil, a drug used in the chemotherapy of cancer. It is one of the most successful drugs for treating solid tumors. 
In yeast cells treated with fluorouracil, defects were found in the processing of ribosomal RNA identical to those seen when the activity of the exosome was blocked by molecular biological strategies. Lack of correct ribosomal RNA processing is lethal to cells, explaining the antimetabolic effect of the drug. Neurological disorders Mutations in exosome component 3 cause infantile spinal motor neuron disease, cerebellar atrophy, progressive microcephaly and profound global developmental delay, consistent with pontocerebellar hypoplasia type 1B (PCH1B; MIM 614678). List of subunits In archaea, several exosome proteins are present in multiple copies to form the full core of the exosome complex. In humans, two different proteins can be associated at the Rrp44 position: in the cytoplasm of cells, Dis3L1 is associated with the exosome, whereas in the nucleus, Dis3 can bind to the core complex; both contribute to the ribonucleolytic activity of the complex. See also The proteasome, the main protein-degrading machinery of cells The spliceosome, a complex involved in RNA splicing, that also contains an RNA-binding ring structure References Further reading External links Structure of the human exosome at the RCSB Protein Data Bank Structure of an archaeal exosome at the RCSB Protein Data Bank Structure of an archaeal exosome bound to RNA at the RCSB Protein Data Bank Structure of the yeast exosome protein Rrp6 at the RCSB Protein Data Bank 3D macromolecular structures of exosomes at the EM Data Bank (EMDB) Nucleic acids RNA Ribonucleases Protein complexes
Exosome complex
[ "Chemistry" ]
3,557
[ "Biomolecules by chemical classification", "Nucleic acids" ]
9,577,836
https://en.wikipedia.org/wiki/Energy%20and%20Utility%20Skills
Energy & Utility Skills is an employer-led membership organization that helps to ensure the gas, power, waste management and water industries have the skills they need - now and in the future. Function Through a range of products and services, the organization help employers attract new talent, develop their existing workforces, and assure a high level of competence across their businesses. As the UK authority on professional development and employment in the energy and utilities industries, Energy & Utility Skills help their members embrace new talent and technology to meet the challenges of a competitive global market. Market intelligence is central to their approach. From projecting skills gaps to benchmarking standards, they provide the most accurate, up-to-date information on skills and employment in the energy and utilities sector. Their partnerships with employers, Government bodies and educational institutions help them to support the UK's agenda, shape the future of the sector's workforce and ensure their stakeholders get the most from their investments. Industries Gas Power Water Waste Management Structure The Chief Executive of Energy & Utility Skills is Phil Beach. The headquarters of Energy & Utility Skills is in Solihull, West Midlands. The Energy & Utilities Independent Assessment Service (EUIAS) and the Energy & Utility Skills Register (EUSR) are both part of the Energy & Utility Skills Group. The National Skills Academy for Power in Solihull is also part of the Energy & Utility Skills Group. See also National Inspection Council for Electrical Installation Contracting, based in Bedfordshire, also known as the NICEIC References External links Energy and Utility Skills Site Education in Solihull Electric power in the United Kingdom Electrical safety Electrical trades organizations Energy education Energy organizations Engineering education in the United Kingdom Natural gas industry in the United Kingdom Natural gas safety Sector Skills Councils
Energy and Utility Skills
[ "Chemistry", "Engineering" ]
348
[ "Natural gas safety", "Natural gas technology", "Energy organizations" ]
9,578,059
https://en.wikipedia.org/wiki/Green%20chemistry%20metrics
Green chemistry metrics describe aspects of a chemical process relating to the principles of green chemistry. The metrics serve to quantify the efficiency or environmental performance of chemical processes, and allow changes in performance to be measured. The motivation for using metrics is the expectation that quantifying technical and environmental improvements can make the benefits of new technologies more tangible, perceptible, or understandable. This, in turn, is likely to aid the communication of research and potentially facilitate the wider adoption of green chemistry technologies in industry. For a non-chemist, an understandable method of describing the improvement might be a decrease of X unit cost per kilogram of compound Y. This, however, might be an over-simplification. For example, it would not allow a chemist to visualize the improvement made or to understand changes in material toxicity and process hazards. For yield improvements and selectivity increases, simple percentages are suitable, but this simplistic approach may not always be appropriate. For example, when a highly pyrophoric reagent is replaced by a benign one, a numerical value is difficult to assign but the improvement is obvious, if all other factors are similar. Numerous metrics have been formulated over time. A general problem is that the more accurate and universally applicable the metric devised, the more complex and unwieldy it becomes. A good metric must be clearly defined, simple, measurable, objective rather than subjective and must ultimately drive the desired behavior. Mass-based versus impact-based metrics The fundamental purpose of metrics is to allow comparisons. If there are several economically viable ways to make a product, which one causes the least environmental harm (i.e. which is the greenest)? The metrics that have been developed to achieve that purpose fall into two groups: mass-based metrics and impact-based metrics. The simplest metrics are based upon the mass of materials rather than their impact. Atom economy, E-factor, yield, reaction mass efficiency and effective mass efficiency are all metrics that compare the mass of desired product to the mass of waste. They do not differentiate between more harmful and less harmful wastes. A process that produces less waste may appear to be greener than the alternatives according to mass-based metrics but may in fact be less green if the waste produced is particularly harmful to the environment. This serious limitation means that mass-based metrics cannot be used to determine which synthetic method is greener. However, mass-based metrics have the great advantage of simplicity: they can be calculated from readily available data with few assumptions. For companies that produce thousands of products, mass-based metrics may be the only viable choice for monitoring company-wide reductions in environmental harm. In contrast, impact-based metrics such as those used in life-cycle assessment evaluate environmental impact as well as mass, making them much more suitable for selecting the greenest of several options or synthetic pathways. Some of them, such as those for acidification, ozone depletion, and resource depletion, are just as easy to calculate as mass-based metrics but require emissions data that may not be readily available. Others, such as those for inhalation toxicity, ingestion toxicity, and various forms of aquatic ecotoxicity, are more complex to calculate in addition to requiring emissions data. 
Atom economy Atom economy was designed by Barry Trost as a framework by which organic chemists would pursue "greener" chemistry. The atom economy number is how much of the reactants remains in the final product. For a generic multi-stage reaction used for producing R: A + B → P + X P + C → Q + Y Q + D → R + Z The atom economy is calculated by dividing the molecular mass of the desired product R by the combined molecular masses of all reactants (A, B, C and D) and multiplying by 100%. The conservation of mass principle dictates that the total mass of the reactants is the same as the total mass of the products. In the above example, the sum of molecular masses of A, B, C and D should be equal to that of R, X, Y and Z. As only R is the useful product, the atoms of X, Y and Z are said to be wasted as by-products. The economic and environmental costs of disposing of this waste make a reaction with low atom economy "less green". A further simplified version of this is the carbon economy: how much carbon ends up in the useful product compared to how much carbon was used to create the product. This metric is a good simplification for use in the pharmaceutical industry, as it takes into account the stoichiometry of reactants and products. Furthermore, this metric is of interest to the pharmaceutical industry, where development of carbon skeletons is key to its work. The atom economy calculation is a simple representation of the "greenness" of a reaction, as it can be carried out without the need for experimental results. Nevertheless, it can be useful in early-stage process synthesis and design. The drawback of this type of analysis is that assumptions have to be made. In an ideal chemical process, the amount of starting materials or reactants equals the amount of all products generated and no atom is lost. However, in most processes, some of the consumed reactant atoms do not become part of the products, but remain as unreacted reactants or are lost in side reactions. Besides, solvents and energy used for the reaction are ignored in this calculation, but they may have non-negligible environmental impacts. Percentage yield Percentage yield is calculated by dividing the amount of desired product obtained by the theoretical yield. In a chemical process, the reaction is usually reversible, thus reactants are not completely converted into products; some reactants are also lost in undesired side reactions. To evaluate these losses of chemicals, the actual yield has to be measured experimentally. As percentage yield is affected by chemical equilibrium, allowing one or more reactants to be in great excess can increase the yield. However, this may not be considered a "greener" method, as it implies that a greater amount of the excess reactant remains unreacted and is therefore wasted. To evaluate the use of excess reactants, the excess reactant factor (the ratio of the quantity of reactants actually used to the stoichiometric quantity required) can be calculated. If this value is far greater than 1, then the excess reactants may be a large waste of chemicals and costs. This can be a concern when raw materials have high economic costs or environmental costs in extraction. In addition, increasing the temperature can also increase the yield of some endothermic reactions, but at the expense of consuming more energy. Hence this may not be an attractive method either. Reaction mass efficiency The reaction mass efficiency is the percentage ratio of the actual mass of desired product to the total mass of all reactants used. It takes into account both atom economy and chemical yield. Reaction mass efficiency, together with all metrics mentioned above, shows the "greenness" of a reaction but not of a process. 
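To make the arithmetic concrete, the sketch below computes atom economy, percentage yield and reaction mass efficiency for a simple esterification. The molecular masses are standard values, but the function names, the batch quantities and the 85% yield figure are illustrative assumptions rather than data from the text.

```python
# Minimal sketch of the mass-based metrics described above.
# All function names and example quantities are hypothetical.

def atom_economy(product_mw: float, reactant_mws: list[float]) -> float:
    """Atom economy (%): molecular mass of the desired product divided by
    the combined molecular masses of all reactants, times 100."""
    return 100.0 * product_mw / sum(reactant_mws)

def percentage_yield(actual_mass: float, theoretical_mass: float) -> float:
    """Percentage yield (%): product actually obtained relative to the
    stoichiometric maximum."""
    return 100.0 * actual_mass / theoretical_mass

def reaction_mass_efficiency(product_mass: float, total_reactant_mass: float) -> float:
    """Reaction mass efficiency (%): actual product mass over the total
    mass of reactants used; combines atom economy and yield."""
    return 100.0 * product_mass / total_reactant_mass

# Esterification: acetic acid (60 g/mol) + ethanol (46 g/mol)
# -> ethyl acetate (88 g/mol) + water (18 g/mol, the by-product).
print(atom_economy(88.0, [60.0, 46.0]))     # ~83%: the water is "wasted"
print(percentage_yield(7.5, 8.8))           # 85% of the theoretical 8.8 g
print(reaction_mass_efficiency(7.5, 10.6))  # ~71% of the 10.6 g of reactants
```

On this example, an 83% atom economy combined with an 85% yield gives a reaction mass efficiency of roughly 71%, illustrating how the last metric folds the first two together.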
Neither metric takes into account all waste produced. For example, these metrics could present a rearrangement as "very green" but fail to address any solvent, work-up, and energy issues that make the process less attractive. Effective mass efficiency A metric similar to reaction mass efficiency is the effective mass efficiency, as suggested by Hudlicky et al. It is defined as the percentage of the mass of the desired product relative to the mass of all non-benign reagents used in its synthesis. The reagents here may include any used reactant, solvent or catalyst. Note that when most reagents are benign, the effective mass efficiency can be greater than 100%. This metric requires a further definition of a benign substance. Hudlicky defines it as "those by-products, reagents or solvents that have no environmental risk associated with them, for example, water, low-concentration saline, dilute ethanol, autoclaved cell mass, etc.". This definition leaves the metric open to criticism, as nothing is absolutely benign (which is a subjective term), and even the substances listed in the definition have some environmental impact associated with them. The formula also fails to address the level of toxicity associated with a process. Until all toxicology data is available for all chemicals and a term dealing with these levels of "benign" reagents is written into the formula, the effective mass efficiency is not the best metric for chemistry. Environmental factor The first general metric for green chemistry remains one of the most flexible and popular ones. Roger A. Sheldon's environmental factor (E-factor) can be made as complex and thorough or as simple as desired and useful. The E-factor of a process is the ratio of the mass of waste to the mass of product: E-factor = total mass of waste ÷ mass of desired product. As examples, Sheldon calculated typical E-factors for various industries: roughly 0.1 for oil refining, less than 1 to 5 for bulk chemicals, 5 to more than 50 for fine chemicals, and 25 to more than 100 for pharmaceuticals. It highlights the waste produced in the process as opposed to the reaction, thus helping those who try to fulfil one of the twelve principles of green chemistry to avoid waste production. E-factors can be combined to assess multi-step reactions step by step or in one calculation. E-factors ignore recyclable factors such as recycled solvents and re-used catalysts, which obviously increases the accuracy but ignores the energy involved in the recovery (these are often included theoretically by assuming 90% solvent recovery). The main difficulty with E-factors is the need to define system boundaries, for example, which stages of the production or product life-cycle to consider before calculations can be made. This metric is simple to apply industrially, as a production facility can measure how much material enters the site and how much leaves as product and waste, thereby directly giving an accurate global E-factor for the site. Sheldon's analyses (see the figures above) demonstrate that oil companies produce less waste than pharmaceuticals as a percentage of material processed. This reflects the fact that the profit margins in the oil industry require them to minimise waste and find uses for products which would normally be discarded as waste. By contrast, the pharmaceutical sector is more focused on molecule manufacture and quality. The (currently) high profit margins within the sector mean that there is less concern about the comparatively large amounts of waste that are produced (especially considering the volumes used), although it has to be noted that, despite the percentage waste and E-factor being high, the pharmaceutical sector produces a much lower tonnage of waste than any other sector. 
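A minimal sketch of how the E-factor and Hudlicky's effective mass efficiency might be computed is given below. The batch quantities and the benign/non-benign classification are illustrative assumptions; in practice the system boundary (what counts as waste, and whether recovered solvent is excluded) must be settled first.

```python
# Minimal sketch of the E-factor and effective mass efficiency.
# All quantities and classifications below are hypothetical.

def e_factor(total_mass_in: float, product_mass: float) -> float:
    """E-factor: kg of waste per kg of product. Everything entering the
    process that does not leave as product is counted as waste."""
    return (total_mass_in - product_mass) / product_mass

def effective_mass_efficiency(product_mass: float,
                              reagents: dict[str, tuple[float, bool]]) -> float:
    """EME (%): product mass over the mass of non-benign reagents only.
    `reagents` maps name -> (mass in kg, is_benign). The value can exceed
    100% when most inputs are classified as benign."""
    non_benign_mass = sum(mass for mass, benign in reagents.values() if not benign)
    return 100.0 * product_mass / non_benign_mass

# Hypothetical batch: 120 kg of total inputs yield 20 kg of product.
print(e_factor(120.0, 20.0))  # 5.0 kg waste/kg product (fine-chemicals range)

reagents = {
    "substrate":      (30.0, False),
    "reagent":        (10.0, False),
    "water":          (60.0, True),   # classified benign, so excluded
    "dilute ethanol": (20.0, True),   # likewise excluded
}
print(effective_mass_efficiency(20.0, reagents))  # 50% against non-benign inputs
```

The same 20 kg batch scores an E-factor of 5 but an effective mass efficiency of 50%, showing how strongly the second result depends on which inputs are deemed benign.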
Sheldon's comparison encouraged a number of large pharmaceutical companies to commence "green" chemistry programs. The EcoScale The EcoScale metric was proposed in an article in the Beilstein Journal of Organic Chemistry in 2006 for evaluating the effectiveness of a synthetic reaction. It is characterized by simplicity and general applicability. Like the yield-based scale, the EcoScale gives a score from 0 to 100, but also takes into account cost, safety, technical set-up, energy and purification aspects. It is obtained by assigning a value of 100 to an ideal reaction, defined as "Compound A (substrate) undergoes a reaction with (or in the presence of) inexpensive compound(s) B to give the desired compound C in 100% yield at room temperature with a minimal risk for the operator and a minimal impact on the environment", and then subtracting penalty points for non-ideal conditions. These penalty points take into account both the advantages and disadvantages of specific reagents, set-ups and technologies.
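The tally itself is simple, as the sketch below shows. The yield penalty of (100 − yield)/2 follows the convention commonly cited for the EcoScale, but the individual penalty values used here are illustrative placeholders rather than the full published penalty tables.

```python
# Minimal sketch of an EcoScale-style tally. The penalty values are
# illustrative placeholders, not the complete published penalty tables.

def ecoscale(yield_percent: float, penalties: list[tuple[str, int]]) -> float:
    """EcoScale score: start at 100, subtract a yield penalty of
    (100 - yield) / 2 plus penalty points for non-ideal conditions."""
    score = 100.0 - (100.0 - yield_percent) / 2.0
    score -= sum(points for _, points in penalties)
    return max(score, 0.0)

# Hypothetical reaction: 90% yield, one toxic reagent, heating for over an hour.
penalties = [("toxic reagent", 5), ("heating > 1 h", 3)]
print(ecoscale(90.0, penalties))  # 100 - 5 - 5 - 3 = 87.0
```

References General references Green chemistry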
Green chemistry metrics
[ "Chemistry", "Engineering", "Environmental_science" ]
2,315
[ "Green chemistry", "Chemical engineering", "Environmental chemistry", "nan" ]