https://en.wikipedia.org/wiki/Pascal%27s%20triangle
In mathematics, Pascal's triangle is an infinite triangular array of the binomial coefficients, which play a crucial role in probability theory, combinatorics, and algebra. In much of the Western world, it is named after the French mathematician Blaise Pascal, although other mathematicians studied it centuries before him in Persia, India, China, Germany, and Italy. The rows of Pascal's triangle are conventionally enumerated starting with row n = 0 at the top (the 0th row). The entries in each row are numbered from the left beginning with k = 0 and are usually staggered relative to the numbers in the adjacent rows. The triangle may be constructed in the following manner: In row 0 (the topmost row), there is a unique nonzero entry 1. Each entry of each subsequent row is constructed by adding the number above and to the left with the number above and to the right, treating blank entries as 0. For example, the initial number of row 1 (or any other row) is 1 (the sum of 0 and 1), whereas the numbers 1 and 3 in row 3 are added to produce the number 4 in row 4. Formula In the nth row of Pascal's triangle, the kth entry is denoted $\binom{n}{k}$, pronounced "n choose k". For example, the topmost entry is $\binom{0}{0} = 1$. With this notation, the construction of the previous paragraph may be written as $\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k}$ for any positive integer n and any integer 0 ≤ k ≤ n. This recurrence for the binomial coefficients is known as Pascal's rule. History The pattern of numbers that forms Pascal's triangle was known well before Pascal's time. The Persian mathematician Al-Karaji (953–1029) wrote a now-lost book which contained the first description of Pascal's triangle. In India, the Chandaḥśāstra by the Indian lyricist Piṅgala (3rd or 2nd century BC) somewhat cryptically describes a method of arranging two types of syllables to form metres of various lengths and counting them; as interpreted and elaborated by Piṅgala's 10th-century commentator Halāyudha, his "method of pyramidal expansion" (meru-prastāra) for counting metres is equivalent to Pascal's triangle. It was later repeated by Omar Khayyám (1048–1131), another Persian mathematician; thus the triangle is also referred to as Khayyam's triangle in Iran. Several theorems related to the triangle were known, including the binomial theorem. Khayyam used a method of finding nth roots based on the binomial expansion, and therefore on the binomial coefficients. Pascal's triangle was known in China during the 11th century through the work of the Chinese mathematician Jia Xian (1010–1070). During the 13th century, Yang Hui (1238–1298) defined the triangle, and it is known as Yang Hui's triangle in China. In Europe, Pascal's triangle appeared for the first time in the Arithmetic of Jordanus de Nemore (13th century). The binomial coefficients were calculated by Gersonides during the early 14th century, using the multiplicative formula for them. Petrus Apianus (1495–1552) published the full triangle on the frontispiece of his book on business calculations in 1527. Michael Stifel published a portion of the triangle (from the second to the middle column in each row) in 1544, describing it as a table of figurate numbers. In Italy, Pascal's triangle is referred to as Tartaglia's triangle, named for the Italian algebraist Tartaglia (1500–1577), who published six rows of the triangle in 1556. Gerolamo Cardano also published the triangle as well as the additive and multiplicative rules for constructing it in 1570. Pascal's Traité du triangle arithmétique (Treatise on the Arithmetical Triangle) was published posthumously in 1665.
In this, Pascal collected several results then known about the triangle, and employed them to solve problems in probability theory. The triangle was later named for Pascal by Pierre Raymond de Montmort (1708), who called it (in French) Mr. Pascal's table for combinations, and Abraham de Moivre (1730), who called it (in Latin) Pascal's Arithmetic Triangle; the latter became the basis of the modern Western name. Binomial expansions Pascal's triangle determines the coefficients which arise in binomial expansions. For example, in the expansion $(x + y)^2 = x^2 + 2xy + y^2$, the coefficients are the entries in the second row of Pascal's triangle: $\binom{2}{0} = 1$, $\binom{2}{1} = 2$, $\binom{2}{2} = 1$. In general, the binomial theorem states that when a binomial like $x + y$ is raised to a positive integer power n, the expression expands as $(x + y)^n = \sum_{k=0}^{n} a_k x^{n-k} y^k$, where the coefficients $a_k$ are precisely the numbers in row n of Pascal's triangle: $a_k = \binom{n}{k}$. The entire left diagonal of Pascal's triangle corresponds to the coefficient of $x^n$ in these binomial expansions, while the next left diagonal corresponds to the coefficient of $x^{n-1}y$, and so on. To see how the binomial theorem relates to the simple construction of Pascal's triangle, consider the problem of calculating the coefficients of the expansion of $(x + 1)^{n+1}$ in terms of the corresponding coefficients of $(x + 1)^n$, where we set $y = 1$ for simplicity. Suppose then that $(x + 1)^n = \sum_{i=0}^{n} a_i x^i$. Now $(x + 1)^{n+1} = (x + 1)(x + 1)^n = x(x + 1)^n + (x + 1)^n = \sum_{i=0}^{n} a_i x^{i+1} + \sum_{i=0}^{n} a_i x^i$. The two summations can be reindexed with $k = i + 1$ and combined to yield $a_n x^{n+1} + \sum_{k=1}^{n} (a_{k-1} + a_k) x^k + a_0$. Thus the extreme left and right coefficients remain as 1, and for any given $0 < k \le n$, the coefficient of the $x^k$ term in the polynomial $(x + 1)^{n+1}$ is equal to $a_{k-1} + a_k$, the sum of the $x^{k-1}$ and $x^k$ coefficients in the previous power $(x + 1)^n$. This is indeed the downward-addition rule for constructing Pascal's triangle. It is not difficult to turn this argument into a proof (by mathematical induction) of the binomial theorem. Since $(x + y)^n = y^n (x/y + 1)^n$, the coefficients are identical in the expansion of the general case. An interesting consequence of the binomial theorem is obtained by setting both variables $x = y = 1$, so that $\sum_{k=0}^{n} \binom{n}{k} = (1 + 1)^n = 2^n$. In other words, the sum of the entries in the nth row of Pascal's triangle is the nth power of 2. This is equivalent to the statement that the number of subsets of an n-element set is $2^n$, as can be seen by observing that each of the n elements may be independently included or excluded from a given subset. Combinations A second useful application of Pascal's triangle is in the calculation of combinations. The number of combinations of n items taken k at a time, i.e. the number of k-element subsets from among n elements, can be found by the equation $\binom{n}{k} = \frac{n!}{k!\,(n-k)!}$. This is equal to entry k in row n of Pascal's triangle. Rather than performing the multiplicative calculation, one can simply look up the appropriate entry in the triangle (constructed by additions). For example, suppose 3 workers need to be hired from among 7 candidates; then the number of possible hiring choices is 7 choose 3, the entry 3 in row 7 of the above table (taking into consideration that the first row is the 0th row), which is $\binom{7}{3} = 35$.
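Both the additive construction described earlier and the combination lookup just illustrated are easy to check computationally. A minimal Python sketch (the function name is my own, not from the article):

```python
from math import comb

def pascal_rows(n_rows):
    """Build rows 0 .. n_rows-1 of Pascal's triangle by the downward-addition rule."""
    rows = [[1]]
    for _ in range(n_rows - 1):
        prev = rows[-1]
        # Each interior entry is the sum of the two entries above it; blanks count as 0.
        rows.append([1] + [prev[i] + prev[i + 1] for i in range(len(prev) - 1)] + [1])
    return rows

rows = pascal_rows(8)
print(rows[4])                                            # [1, 4, 6, 4, 1]
assert rows[7][3] == comb(7, 3) == 35                     # the "7 choose 3" hiring example
assert all(sum(r) == 2 ** n for n, r in enumerate(rows))  # row sums are powers of 2
```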
Relation to binomial distribution and convolutions When divided by $2^n$, the nth row of Pascal's triangle becomes the binomial distribution in the symmetric case where $p = \tfrac{1}{2}$. By the central limit theorem, this distribution approaches the normal distribution as n increases. This can also be seen by applying Stirling's formula to the factorials involved in the formula for combinations. This is related to the operation of discrete convolution in two ways. First, polynomial multiplication corresponds exactly to discrete convolution, so that repeatedly convolving the sequence (..., 0, 0, 1, 1, 0, 0, ...) with itself corresponds to taking powers of $x + 1$, and hence to generating the rows of the triangle. Second, repeatedly convolving the distribution function for a random variable with itself corresponds to calculating the distribution function for a sum of n independent copies of that variable; this is exactly the situation to which the central limit theorem applies, and hence results in the normal distribution in the limit. (The operation of repeatedly taking a convolution of something with itself is called the convolution power.) Patterns and properties Pascal's triangle has many properties and contains many patterns of numbers. Rows The sum of the elements of a single row is twice the sum of the row preceding it. For example, row 0 (the topmost row) has a value of 1, row 1 has a value of 2, row 2 has a value of 4, and so forth. This is because every item in a row produces two items in the next row: one left and one right. The sum of the elements of row n equals $2^n$. Taking the product of the elements in each row, the sequence of products is related to the base of the natural logarithm, e. Specifically, define the sequence $s_n = \prod_{k=0}^{n} \binom{n}{k}$ for all $n \ge 0$. Then the ratio of successive row products is $\frac{s_{n+1}}{s_n} = \frac{(n+1)^n}{n!}$, and the ratio of these ratios is $\frac{s_{n+1}\, s_{n-1}}{s_n^2} = \left(\frac{n+1}{n}\right)^n$. The right-hand side of the above equation takes the form of the limit definition of e, namely $e = \lim_{n \to \infty} \left(1 + \frac{1}{n}\right)^n$. π can be found in Pascal's triangle by use of the Nilakantha infinite series. Some of the numbers in Pascal's triangle correlate to numbers in Lozanić's triangle. The sum of the squares of the elements of row n equals the middle element of row 2n. For example, $1^2 + 4^2 + 6^2 + 4^2 + 1^2 = 70 = \binom{8}{4}$. In general form, $\sum_{k=0}^{n} \binom{n}{k}^2 = \binom{2n}{n}$. In any even row 2n, the middle term minus the term two spots to the left equals a Catalan number, specifically the (n + 1)th Catalan number. For example, in row 4, which is 1, 4, 6, 4, 1, we get the 3rd Catalan number $6 - 1 = 5$. In a row p where p is a prime number, all the terms in that row except the 1s are divisible by p. This can be proven easily from the multiplicative formula $\binom{p}{k} = \frac{p!}{k!\,(p-k)!}$: since the denominator $k!\,(p-k)!$ can have no prime factors equal to p (for 0 < k < p), the factor p remains in the numerator after integer division, making the entire entry a multiple of p. Parity: To count odd terms in row n, convert n to binary. Let x be the number of 1s in the binary representation. Then the number of odd terms will be $2^x$. These numbers are the values in Gould's sequence. Every entry in row $2^n - 1$, n ≥ 0, is odd. Polarity: When the elements of a row of Pascal's triangle are alternately added and subtracted together, the result is 0. For example, row 6 is 1, 6, 15, 20, 15, 6, 1, so the formula is 1 − 6 + 15 − 20 + 15 − 6 + 1 = 0.
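Several of the row properties listed above can be verified directly; a small illustrative Python check (the helper name is mine):

```python
from math import comb

def row(n):
    return [comb(n, k) for k in range(n + 1)]

for n in range(1, 12):
    r = row(n)
    assert sum(r) == 2 ** n                                             # row sum is 2^n
    assert sum(v * v for v in r) == comb(2 * n, n)                      # sum of squares = middle of row 2n
    assert sum(v if k % 2 == 0 else -v for k, v in enumerate(r)) == 0   # alternating sum is 0
    assert sum(v % 2 for v in r) == 2 ** bin(n).count("1")              # odd entries follow Gould's sequence

# Rows indexed by a prime p: every interior entry is divisible by p.
assert all(comb(7, k) % 7 == 0 for k in range(1, 7))
```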
Diagonals The diagonals of Pascal's triangle contain the figurate numbers of simplices: The diagonals going along the left and right edges contain only 1's. The diagonals next to the edge diagonals contain the natural numbers in order. The 1-dimensional simplex numbers increment by 1 as the line segments extend to the next whole number along the number line. Moving inwards, the next pair of diagonals contain the triangular numbers in order. The next pair of diagonals contain the tetrahedral numbers in order, and the next pair give pentatope numbers. The symmetry of the triangle implies that the nth d-dimensional number is equal to the dth n-dimensional number. An alternative formula that does not involve recursion is $P_d(n) = \frac{n^{(d)}}{d!} = \binom{n+d-1}{d}$, where $n^{(d)} = n(n+1)\cdots(n+d-1)$ is the rising factorial. The geometric meaning of a function Pd is: Pd(1) = 1 for all d. Construct a d-dimensional triangle (a 3-dimensional triangle is a tetrahedron) by placing additional dots below an initial dot, corresponding to Pd(1) = 1. Place these dots in a manner analogous to the placement of numbers in Pascal's triangle. To find Pd(x), have a total of x dots composing the target shape. Pd(x) then equals the total number of dots in the shape. A 0-dimensional triangle is a point and a 1-dimensional triangle is simply a line, and therefore P0(x) = 1 and P1(x) = x, which is the sequence of natural numbers. The number of dots in each layer corresponds to Pd − 1(x). Calculating a row or diagonal by itself There are simple algorithms to compute all the elements in a row or diagonal without computing other elements or factorials. To compute row n with the elements $\binom{n}{0}, \binom{n}{1}, \ldots, \binom{n}{n}$, begin with $\binom{n}{0} = 1$. For each subsequent element, the value is determined by multiplying the previous value by a fraction with slowly changing numerator and denominator: $\binom{n}{k} = \binom{n}{k-1} \times \frac{n - k + 1}{k}$. For example, to calculate row 5, the fractions are $\tfrac{5}{1}$, $\tfrac{4}{2}$, $\tfrac{3}{3}$, $\tfrac{2}{4}$ and $\tfrac{1}{5}$, and hence the elements are $\binom{5}{0} = 1$, $\binom{5}{1} = 1 \times \tfrac{5}{1} = 5$, $\binom{5}{2} = 5 \times \tfrac{4}{2} = 10$, etc. (The remaining elements are most easily obtained by symmetry.) To compute the diagonal containing the elements $\binom{n}{0}, \binom{n+1}{1}, \binom{n+2}{2}, \ldots$, begin again with $\binom{n}{0} = 1$ and obtain subsequent elements by multiplication by certain fractions: $\binom{n+k}{k} = \binom{n+k-1}{k-1} \times \frac{n+k}{k}$. For example, to calculate the diagonal beginning at $\binom{5}{0}$, the fractions are $\tfrac{6}{1}, \tfrac{7}{2}, \tfrac{8}{3}, \ldots$, and the elements are $\binom{5}{0} = 1$, $\binom{6}{1} = 1 \times \tfrac{6}{1} = 6$, $\binom{7}{2} = 6 \times \tfrac{7}{2} = 21$, etc. By symmetry, these elements are equal to $\binom{5}{5}, \binom{6}{5}, \binom{7}{5}$, etc.
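The row algorithm just described translates directly into a few lines of Python (an illustrative sketch; the function name is arbitrary):

```python
def pascal_row(n):
    """Row n of Pascal's triangle via successive multiplication by (n - k + 1) / k,
    without factorials and without computing any other row."""
    entries = [1]
    for k in range(1, n + 1):
        # The running product is always exactly divisible by k at this point.
        entries.append(entries[-1] * (n - k + 1) // k)
    return entries

print(pascal_row(5))  # [1, 5, 10, 10, 5, 1]
```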
Overall patterns and properties The pattern obtained by coloring only the odd numbers in Pascal's triangle closely resembles the fractal known as the Sierpinski triangle. This resemblance becomes increasingly accurate as more rows are considered; in the limit, as the number of rows approaches infinity, the resulting pattern is the Sierpinski triangle, assuming a fixed perimeter. More generally, numbers could be colored differently according to whether or not they are multiples of 3, 4, etc.; this results in other similar patterns. As the proportion of black numbers tends to zero with increasing n, a corollary is that the proportion of odd binomial coefficients tends to zero as n tends to infinity. Pascal's triangle overlaid on a grid gives the number of distinct paths to each square, assuming only rightward and downward steps to an adjacent square are considered. In a triangular portion of a grid, the number of shortest grid paths from a given node to the top node of the triangle is the corresponding entry in Pascal's triangle. On a Plinko game board shaped like a triangle, this distribution should give the probabilities of winning the various prizes. If the rows of Pascal's triangle are left-justified, the diagonal bands (running up and to the right) sum to the Fibonacci numbers 1, 1, 2, 3, 5, 8, 13, 21, ...:

1
1 1
1 2 1
1 3 3 1
1 4 6 4 1
1 5 10 10 5 1
1 6 15 20 15 6 1
1 7 21 35 35 21 7 1

Construction as matrix exponential Due to its simple construction by factorials, a very basic representation of Pascal's triangle in terms of the matrix exponential can be given: Pascal's triangle is the exponential of the matrix which has the sequence 1, 2, 3, 4, ... on its sub-diagonal and zero everywhere else.
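Because that matrix is nilpotent, its exponential series terminates after finitely many terms, so the claim can be checked exactly with rational arithmetic. A minimal illustrative Python sketch (all names are mine):

```python
from fractions import Fraction
from math import factorial

N = 5  # check the leading 5x5 block

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N)) for j in range(N)] for i in range(N)]

# S has the sequence 1, 2, 3, 4 on its sub-diagonal and zeros everywhere else.
S = [[i if j == i - 1 else 0 for j in range(N)] for i in range(N)]

# exp(S) = sum over k of S^k / k!; the series terminates because S is nilpotent (S^N = 0).
exp_S = [[Fraction(0)] * N for _ in range(N)]
power = [[Fraction(int(i == j)) for j in range(N)] for i in range(N)]  # S^0 = identity
for k in range(N):
    exp_S = [[exp_S[i][j] + power[i][j] / factorial(k) for j in range(N)] for i in range(N)]
    power = matmul(power, S)

for row in exp_S:
    print([int(x) for x in row])
# [1, 0, 0, 0, 0], [1, 1, 0, 0, 0], [1, 2, 1, 0, 0], [1, 3, 3, 1, 0], [1, 4, 6, 4, 1]
```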
Construction of Clifford algebra using simplices Labelling the elements of each n-simplex matches the basis elements of Clifford algebra used as forms in Geometric Algebra rather than matrices. Recognising the geometric operations, such as rotations, allows the algebra operations to be discovered. Just as each row n, starting at 0, of Pascal's triangle corresponds to an (n − 1)-simplex, as described below, it also defines the number of named basis forms in n-dimensional Geometric algebra. The binomial theorem can be used to prove the geometric relationship provided by Pascal's triangle. This same proof could be applied to simplices except that the first column of all 1's must be ignored, whereas in the algebra these correspond to the real numbers, $\mathbb{R}$, with basis 1. Relation to geometry of polytopes Pascal's triangle can be used as a lookup table for the number of elements (such as edges and corners) within a polytope (such as a triangle, a tetrahedron, a square, or a cube). Number of elements of simplices Let's begin by considering the 3rd line of Pascal's triangle, with values 1, 3, 3, 1. A 2-dimensional triangle has one 2-dimensional element (itself), three 1-dimensional elements (lines, or edges), and three 0-dimensional elements (vertices, or corners). The meaning of the final number (1) is more difficult to explain (but see below). Continuing with our example, a tetrahedron has one 3-dimensional element (itself), four 2-dimensional elements (faces), six 1-dimensional elements (edges), and four 0-dimensional elements (vertices). Adding the final 1 again, these values correspond to the 4th row of the triangle (1, 4, 6, 4, 1). Line 1 corresponds to a point, and Line 2 corresponds to a line segment (dyad). This pattern continues to arbitrarily high-dimensioned hyper-tetrahedrons (known as simplices). To understand why this pattern exists, one must first understand that the process of building an n-simplex from an (n − 1)-simplex consists of simply adding a new vertex to the latter, positioned such that this new vertex lies outside of the space of the original simplex, and connecting it to all original vertices. As an example, consider the case of building a tetrahedron from a triangle, the latter of whose elements are enumerated by row 3 of Pascal's triangle: 1 face, 3 edges, and 3 vertices. To build a tetrahedron from a triangle, position a new vertex above the plane of the triangle and connect this vertex to all three vertices of the original triangle. The number of elements of a given dimension in the tetrahedron is now the sum of two numbers: first the number of such elements found in the original triangle, plus the number of new elements, each of which is built upon elements of one fewer dimension from the original triangle. Thus, in the tetrahedron, the number of cells (polyhedral elements) is 0 + 1 = 1; the number of faces is 1 + 3 = 4; the number of edges is 3 + 3 = 6; the number of new vertices is 3 + 1 = 4. This process of summing the number of elements of a given dimension to those of one fewer dimension to arrive at the number of the former found in the next higher simplex is equivalent to the process of summing two adjacent numbers in a row of Pascal's triangle to yield the number below. Thus, the meaning of the final number (1) in a row of Pascal's triangle becomes understood as representing the new vertex that is to be added to the simplex represented by that row to yield the next higher simplex represented by the next row. This new vertex is joined to every element in the original simplex to yield a new element of one higher dimension in the new simplex, and this is the origin of the pattern found to be identical to that seen in Pascal's triangle. Number of elements of hypercubes A similar pattern is observed relating to squares, as opposed to triangles. To find the pattern, one must construct an analog to Pascal's triangle, whose entries are the coefficients of (x + 2) raised to the row number, instead of (x + 1) raised to the row number. There are a couple of ways to do this. The simpler is to begin with row 0 = 1 and row 1 = 1, 2. Proceed to construct the analog triangle according to the following rule: choose a pair of numbers according to the rules of Pascal's triangle, but double the one on the left before adding. This results in the rows 1; 1, 2; 1, 4, 4; 1, 6, 12, 8; 1, 8, 24, 32, 16; and so on. The other way of producing this triangle is to start with Pascal's triangle and multiply each entry by $2^k$, where k is the position in the row of the given number. For example, the 2nd value in row 4 of Pascal's triangle is 6 (the slope of 1s corresponds to the zeroth entry in each row). To get the value that resides in the corresponding position in the analog triangle, multiply 6 by $2^2 = 4$ to obtain 24. Now that the analog triangle has been constructed, the number of elements of any dimension that compose an arbitrarily dimensioned cube (called a hypercube) can be read from the table in a way analogous to Pascal's triangle. For example, the number of 2-dimensional elements in a 2-dimensional cube (a square) is one, the number of 1-dimensional elements (sides, or lines) is 4, and the number of 0-dimensional elements (points, or vertices) is 4. This matches the 2nd row of the table (1, 4, 4). A cube has 1 cube, 6 faces, 12 edges, and 8 vertices, which corresponds to the next line of the analog triangle (1, 6, 12, 8). This pattern continues indefinitely.
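The "double the left-hand number, then add" rule described above is easy to implement; a short illustrative Python sketch (the function name is mine):

```python
def hypercube_triangle(n_rows):
    """Rows of the (x + 2)^n analog of Pascal's triangle; entry k of row n
    counts the (n - k)-dimensional elements of an n-cube."""
    rows = [[1]]
    for _ in range(n_rows - 1):
        prev = rows[-1]
        # Double the left-hand number of each pair before adding (blanks count as 0).
        middle = [2 * prev[i] + prev[i + 1] for i in range(len(prev) - 1)]
        rows.append([1] + middle + [2 * prev[-1]])
    return rows

rows = hypercube_triangle(5)
print(rows[3])  # [1, 6, 12, 8]: a cube has 1 cell, 6 faces, 12 edges and 8 vertices
assert all(sum(r) == 3 ** n for n, r in enumerate(rows))  # row sums are powers of 3
```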
To understand why this pattern exists, first recognize that the construction of an n-cube from an (n − 1)-cube is done by simply duplicating the original figure and displacing it some distance (for a regular n-cube, the edge length) orthogonal to the space of the original figure, then connecting each vertex of the new figure to its corresponding vertex of the original. This initial duplication process is the reason why, to enumerate the dimensional elements of an n-cube, one must double the first of a pair of numbers in a row of this analog of Pascal's triangle before summing to yield the number below. The initial doubling thus yields the number of "original" elements to be found in the next higher n-cube and, as before, new elements are built upon those of one fewer dimension (edges upon vertices, faces upon edges, etc.). Again, the last number of a row represents the number of new vertices to be added to generate the next higher n-cube. In this triangle, the sum of the elements of row m is equal to $3^m$. Again, to use the elements of row 4 as an example: 1 + 8 + 24 + 32 + 16 = 81, which is equal to $3^4 = 81$. Counting vertices in a cube by distance Each row of Pascal's triangle gives the number of vertices at each distance from a fixed vertex in an n-dimensional cube. For example, in three dimensions, the third row (1 3 3 1) corresponds to the usual three-dimensional cube: fixing a vertex V, there is one vertex at distance 0 from V (that is, V itself), three vertices at distance 1, three vertices at distance √2 and one vertex at distance √3 (the vertex opposite V). The second row corresponds to a square, while larger-numbered rows correspond to hypercubes in each dimension. Fourier transform of sin(x)^(n+1)/x As stated previously, the coefficients of (x + 1)^n are the nth row of the triangle. Now the coefficients of (x − 1)^n are the same, except that the sign alternates from +1 to −1 and back again. After suitable normalization, the same pattern of numbers occurs in the Fourier transform of sin(x)^(n+1)/x. More precisely: if n is even, take the real part of the transform, and if n is odd, take the imaginary part. Then the result is a step function, whose values (suitably normalized) are given by the nth row of the triangle with alternating signs. For example, the values of the step function that results from the Fourier transform of sin(x)^5/x compose the 4th row of the triangle, with alternating signs. This is a generalization of the following basic result (often used in electrical engineering): the Fourier transform of sin(x)/x is the boxcar function. The corresponding row of the triangle is row 0, which consists of just the number 1. If n is congruent to 2 or to 3 mod 4, then the signs start with −1. In fact, the sequence of the (normalized) first terms corresponds to the powers of i, which cycle around the intersection of the axes with the unit circle in the complex plane. Extensions Pascal's triangle may be extended upwards, above the 1 at the apex, preserving the additive property, but there is more than one way to do so. To higher dimensions Pascal's triangle has higher dimensional generalizations. The three-dimensional version is known as Pascal's pyramid or Pascal's tetrahedron, while the general versions are known as Pascal's simplices. To complex numbers When the factorial function is defined as $z! = \Gamma(z + 1)$, Pascal's triangle can be extended beyond the integers to complex z, since $\Gamma(z + 1)$ is meromorphic on the entire complex plane. To arbitrary bases Isaac Newton once observed that the first five rows of Pascal's triangle, when read as the digits of an integer, are the corresponding powers of eleven.
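Newton's observation is easy to check for the first five rows, whose entries are all single digits (an illustrative Python sketch; the helper name is mine):

```python
from math import comb

def row_as_integer(n):
    """Concatenate the entries of row n of Pascal's triangle into a decimal integer."""
    return int("".join(str(comb(n, k)) for k in range(n + 1)))

# Rows 0 through 4 read as integers give 1, 11, 121, 1331, 14641, i.e. 11**0 .. 11**4.
for n in range(5):
    assert row_as_integer(n) == 11 ** n
```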
He claimed without proof that subsequent rows also generate powers of eleven. In 1964, Robert L. Morton presented the more generalized argument that each row can be read as a numeral in an arbitrary radix, with the rows forming partial products of a hypothetical terminal row, or limit, of the triangle. He proved that the entries of row n, when interpreted directly as a place-value numeral in radix a, correspond to the binomial expansion of (a + 1)^n. More rigorous proofs have since been developed. To better understand the principle behind this interpretation, here are some things to recall about binomials: A radix-a numeral in positional notation is a univariate polynomial in the variable a, where the degree of the variable of the kth term (starting with k = 0) is k. A row corresponds to the binomial expansion of (a + b)^n. The variable b can be eliminated from the expansion by setting b = 1. The expansion then typifies the expanded form of a radix-a numeral, as demonstrated above. Thus, when the entries of the row are concatenated and read in radix a, they form the numerical equivalent of (a + 1)^n. The theorem also holds for negative radices, with odd values of n yielding negative row products. By setting the row's radix (the variable a) equal to one and ten, row n becomes the product 2^n and 11^n, respectively. To illustrate, consider radix twelve, which yields the row product 13^n. The numeric representation of the product is formed by concatenating the entries of the row. The twelfth row denotes the product 13^12, with compound digits (delimited by ":") in radix twelve. The digits from the second through the twelfth are compound because these row entries compute to values greater than or equal to twelve. To normalize the numeral, simply carry the first compound entry's prefix, that is, remove the prefix of the coefficient from its leftmost digit up to, but excluding, its rightmost digit, and use radix-twelve arithmetic to sum the removed prefix with the entry on its immediate left, then repeat this process, proceeding leftward, until the leftmost entry is reached. In this particular example, the normalized string ends with for all . The leftmost digit is for , which is obtained by carrying the of at entry . It follows that the length of the normalized value is equal to the row length. The integral part contains exactly one digit because the exponent (the number of places to the left the decimal has moved) is one less than the row length. Compound digits remain in the value because they are radix residues represented in radix ten. See also Bean machine, Francis Galton's "quincunx" Bell triangle Bernoulli's triangle Binomial expansion Cellular automata Euler triangle Floyd's triangle Gaussian binomial coefficient Hockey-stick identity Leibniz harmonic triangle Multiplicities of entries in Pascal's triangle (Singmaster's conjecture) Pascal matrix Pascal's pyramid Pascal's simplex Proton NMR, one application of Pascal's triangle Star of David theorem Trinomial expansion Trinomial triangle Polynomials calculating sums of powers of arithmetic progressions External links The Old Method Chart of the Seven Multiplying Squares (from the Ssu Yuan Yü Chien of Chu Shi-Chieh, 1303, depicting the first nine rows of Pascal's triangle) Pascal's Treatise on the Arithmetic Triangle (page images of Pascal's treatise, 1654; summary)
https://en.wikipedia.org/wiki/Inductively%20coupled%20plasma%20mass%20spectrometry
Inductively coupled plasma mass spectrometry (ICP-MS) is a type of mass spectrometry that uses an inductively coupled plasma to ionize the sample. It atomizes the sample and creates atomic and small polyatomic ions, which are then detected. It is known and used for its ability to detect metals and several non-metals in liquid samples at very low concentrations. It can detect different isotopes of the same element, which makes it a versatile tool in isotopic labeling. Compared to atomic absorption spectroscopy, ICP-MS has greater speed, precision, and sensitivity. However, compared with other types of mass spectrometry, such as thermal ionization mass spectrometry (TIMS) and glow discharge mass spectrometry (GD-MS), ICP-MS introduces many interfering species: argon from the plasma, component gases of air that leak through the cone orifices, and contamination from glassware and the cones. Components Inductively coupled plasma An inductively coupled plasma is a plasma that is energized (ionized) by inductively heating the gas with an electromagnetic coil, and contains a sufficient concentration of ions and electrons to make the gas electrically conductive. Not all of the gas needs to be ionized for the gas to have the characteristics of a plasma; as little as 1% ionization creates a plasma. The plasmas used in spectrochemical analysis are essentially electrically neutral, with each positive charge on an ion balanced by a free electron. In these plasmas the positive ions are almost all singly charged and there are few negative ions, so there are nearly equal numbers of ions and electrons in each unit volume of plasma. ICPs have two operation modes, called capacitive (E) mode with low plasma density and inductive (H) mode with high plasma density; the transition from E to H heating mode occurs with external inputs. Inductively coupled plasma mass spectrometry is operated in the H mode. What makes ICP-MS unique among forms of inorganic mass spectrometry is its ability to sample the analyte continuously, without interruption. This is in contrast to other forms of inorganic mass spectrometry, such as glow discharge mass spectrometry (GDMS) and thermal ionization mass spectrometry (TIMS), which require a two-stage process: insert the sample(s) into a vacuum chamber, seal the vacuum chamber, pump down the vacuum, and energize the sample, thereby sending ions into the mass analyzer. With ICP-MS the sample to be analyzed sits at atmospheric pressure. Through the effective use of differential pumping (multiple vacuum stages separated by differential apertures, i.e. holes), the ions created in the argon plasma are, with the aid of various electrostatic focusing techniques, transmitted through the mass analyzer to the detector(s) and counted. Not only does this enable the analyst to radically increase sample throughput (the number of samples over time), but it has also made it possible to do what is called "time resolved acquisition". Hyphenated techniques such as liquid chromatography ICP-MS (LC-ICP-MS), laser ablation ICP-MS (LA-ICP-MS), and flow injection ICP-MS (FIA-ICP-MS) have benefited from this relatively new technology. It has stimulated the development of new tools for research in fields including geochemistry, forensic chemistry, biochemistry, and oceanography. Additionally, increases in sample throughput from dozens of samples a day to hundreds of samples a day have revolutionized environmental analysis, reducing costs.
Fundamentally, this is all due to the fact that while the sample resides at environmental pressure, the analyzer and detector are at 1/10,000,000 of that same pressure during normal operation. An inductively coupled plasma (ICP) for spectrometry is sustained in a torch that consists of three concentric tubes, usually made of quartz, although the inner tube (injector) can be sapphire if hydrofluoric acid is being used. The end of this torch is placed inside an induction coil supplied with a radio-frequency electric current. A flow of argon gas (usually 13 to 18 liters per minute) is introduced between the two outermost tubes of the torch and an electric spark is applied for a short time to introduce free electrons into the gas stream. These electrons interact with the radio-frequency magnetic field of the induction coil and are accelerated first in one direction, then the other, as the field changes at high frequency (usually 27.12 million cycles per second). The accelerated electrons collide with argon atoms, and sometimes a collision causes an argon atom to part with one of its electrons. The released electron is in turn accelerated by the rapidly changing magnetic field. The process continues until the rate of release of new electrons in collisions is balanced by the rate of recombination of electrons with argon ions (atoms that have lost an electron). This produces a ‘fireball’ that consists mostly of argon atoms with a rather small fraction of free electrons and argon ions. The temperature of the plasma is very high, of the order of 10,000 K. The plasma also produces ultraviolet light, so for safety should not be viewed directly. The ICP can be retained in the quartz torch because the flow of gas between the two outermost tubes keeps the plasma away from the walls of the torch. A second flow of argon (around 1 liter per minute) is usually introduced between the central tube and the intermediate tube to keep the plasma away from the end of the central tube. A third flow (again usually around 1 liter per minute) of gas is introduced into the central tube of the torch. This gas flow passes through the centre of the plasma, where it forms a channel that is cooler than the surrounding plasma but still much hotter than a chemical flame. Samples to be analyzed are introduced into this central channel, usually as a mist of liquid formed by passing the liquid sample into a nebulizer. To maximise plasma temperature (and hence ionisation efficiency) and stability, the sample should be introduced through the central tube with as little liquid (solvent load) as possible, and with consistent droplet sizes. A nebuliser can be used for liquid samples, followed by a spray chamber to remove larger droplets, or a desolvating nebuliser can be used to evaporate most of the solvent before it reaches the torch. Solid samples can also be introduced using laser ablation. The sample enters the central channel of the ICP, evaporates, molecules break apart, and then the constituent atoms ionise. At the temperatures prevailing in the plasma a significant proportion of the atoms of many chemical elements are ionized, each atom losing its most loosely bound electron to form a singly charged ion. The plasma temperature is selected to maximise ionisation efficiency for elements with a high first ionisation energy, while minimising second ionisation (double charging) for elements that have a low second ionisation energy. 
Mass spectrometry For coupling to mass spectrometry, the ions from the plasma are extracted through a series of cones into a mass spectrometer, usually a quadrupole. The ions are separated on the basis of their mass-to-charge ratio and a detector receives an ion signal proportional to the concentration. The concentration of a sample can be determined through calibration with certified reference material such as single or multi-element reference standards. ICP-MS also lends itself to quantitative determinations through isotope dilution, a single point method based on an isotopically enriched standard. In order to increase reproducibility and compensate for errors caused by sensitivity variation, an internal standard can be added. Other mass analyzers coupled to ICP systems include double focusing magnetic-electrostatic sector systems with both single and multiple collectors, as well as time of flight systems (both axial and orthogonal accelerators have been used). Applications One of the largest volume uses for ICP-MS is in the medical and forensic field, specifically, toxicology. A physician may order a metal assay for a number of reasons, such as suspicion of heavy metal poisoning, metabolic concerns, and even hepatological issues. Depending on the specific parameters unique to each patient's diagnostic plan, samples collected for analysis can range from whole blood, urine, plasma, and serum to packed red blood cells. Another primary use for this instrument lies in the environmental field. Such applications include water testing for municipalities or private individuals, all the way to soil, water and other material analysis for industrial purposes. In recent years, industrial and biological monitoring has presented another major need for metal analysis via ICP-MS. Individuals working in factories where exposure to metals is likely and unavoidable, such as a battery factory, are required by their employer to have their blood or urine analyzed for metal toxicity on a regular basis. This monitoring has become a mandatory practice implemented by the U.S. Occupational Safety and Health Administration, in an effort to protect workers from their work environment and ensure proper rotation of work duties (i.e. rotating employees from a high exposure position to a low exposure position). ICP-MS is also used widely in the geochemistry field for radiometric dating, in which it is used to analyze the relative abundance of different isotopes, in particular uranium and lead. ICP-MS is more suitable for this application than the previously used thermal ionization mass spectrometry, as species with high ionization energy such as osmium and tungsten can be easily ionized. For high precision ratio work, multiple collector instruments are normally used to reduce the effect of noise on the calculated ratios. In the field of flow cytometry, a new technique uses ICP-MS to replace the traditional fluorochromes. Briefly, instead of labelling antibodies (or other biological probes) with fluorochromes, each antibody is labelled with a distinct combination of lanthanides. When the sample of interest is analysed by ICP-MS in a specialised flow cytometer, each antibody can be identified and quantitated by virtue of a distinct ICP "footprint". In theory, hundreds of different biological probes can thus be analysed in an individual cell, at a rate of ca. 1,000 cells per second. Because elements are easily distinguished in ICP-MS, the problem of compensation in multiplex flow cytometry is effectively eliminated.
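As a rough illustration of the quantitation approach described earlier (external calibration with internal-standard correction), the sketch below fits a straight calibration line to internal-standard-normalized intensities. All numbers and names are invented for the example and are not taken from the article.

```python
# Hypothetical calibration standards:
# (analyte counts per second, internal-standard counts per second, known concentration in ug/L)
standards = [
    (1200.0, 50000.0, 1.0),
    (6100.0, 51000.0, 5.0),
    (12300.0, 49500.0, 10.0),
]

def fit_line(points):
    """Ordinary least-squares fit of y = m * x + b."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return m, (sy - m * sx) / n

# Normalize each analyte signal to the internal standard to correct for drift,
# then regress the known concentrations against the normalized ratios.
m, b = fit_line([(analyte / istd, conc) for analyte, istd, conc in standards])

sample_ratio = 7900.0 / 50200.0          # unknown sample, normalized the same way
print(round(m * sample_ratio + b, 2))    # estimated concentration, about 6.4 ug/L
```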
Laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) is a powerful technique for the elemental analysis of a wide variety of materials encountered in forensic casework. LA-ICP-MS has already been applied successfully to forensic work on metals, glasses, soils, car paints, bones and teeth, printing inks, trace elemental analysis, fingerprints, and paper. Among these, forensic glass analysis stands out as an application for which this technique has great utility. Incidents such as car hit-and-runs, burglaries, assaults, drive-by shootings and bombings may produce glass fragments that can be used as evidence of association under glass transfer conditions. LA-ICP-MS is considered one of the best techniques for the analysis of glass due to the short sample preparation time and the small sample size of less than 250 nanograms. In addition, there is no need for the complex procedures and handling of dangerous materials that are used for digestion of the samples. This allows major, minor and trace elements to be detected with a high level of precision and accuracy. A set of properties is used to characterize a glass sample, including physical and optical properties such as color, thickness, density and refractive index (RI); if necessary, elemental analysis can also be conducted in order to enhance the value of an association. Pharmaceutical industry In the pharmaceutical industry, ICP-MS is used for detecting inorganic impurities in pharmaceuticals and their ingredients. New and reduced maximum permitted exposure levels of heavy metals from dietary supplements, introduced in the USP (United States Pharmacopeia) chapters 〈232〉 Elemental Impurities—Limits and 〈233〉 Elemental Impurities—Procedures, will increase the need for ICP-MS technology where, previously, other analytical methods have been sufficient. Cosmetics, such as lipstick, recovered from a crime scene may provide valuable forensic information. Lipstick smears left on cigarette butts, glassware, clothing, bedding, napkins, paper, etc. may be valuable evidence. Lipstick recovered from clothing or skin may also indicate physical contact between individuals. Forensic analysis of recovered lipstick smear evidence can provide valuable information on the recent activities of a victim or suspect. Trace elemental analysis of lipstick smears could be used to complement existing visual comparative procedures to determine the lipstick brand and color. Single particle inductively coupled plasma mass spectrometry (SP ICP-MS) was designed for particle suspensions in 2000 by Claude Degueldre. He first tested this new methodology at the Forel Institute of the University of Geneva and presented this new analytical approach at the 'Colloid 2002' symposium during the spring 2002 meeting of the EMRS, and in the proceedings in 2003. This study presents the theory of SP ICP-MS and the results of tests carried out on clay particles (montmorillonite) as well as other suspensions of colloids. The method was then tested on thorium dioxide nanoparticles by Degueldre & Favarger (2004), on zirconium dioxide by Degueldre et al. (2004), and on gold nanoparticles, which are used as a substrate in nanopharmacy, by Degueldre et al. (2006). Subsequently, the study of uranium dioxide nano- and micro-particles gave rise to a detailed publication (Degueldre et al., 2006). Since 2010, interest in SP ICP-MS has grown rapidly.
Previous forensic techniques employed for the organic analysis of lipsticks by compositional comparison include thin layer chromatography (TLC), gas chromatography (GC), and high-performance liquid chromatography (HPLC). These methods provide useful information regarding the identification of lipsticks. However, they all require long sample preparation times and destroy the sample. Nondestructive techniques for the forensic analysis of lipstick smears include UV fluorescence observation combined with purge-and-trap gas chromatography, microspectrophotometry and scanning electron microscopy-energy dispersive spectroscopy (SEM-EDS), and Raman spectroscopy. Metal speciation A growing trend in the world of elemental analysis has revolved around the speciation, or determination of oxidation state of certain metals such as chromium and arsenic. The toxicity of those elements varies with the oxidation state, so new regulations from food authorities requires speciation of some elements. One of the primary techniques to achieve this is to separate the chemical species with high-performance liquid chromatography (HPLC) or field flow fractionation (FFF) and then measure the concentrations with ICP-MS. Quantification of proteins and biomolecules There is an increasing trend of using ICP-MS as a tool in speciation analysis, which normally involves a front end chromatograph separation and an elemental selective detector, such as AAS and ICP-MS. For example, ICP-MS may be combined with size exclusion chromatography and preparative native PAGE for identifying and quantifying metalloproteins in biofluids. Also the phosphorylation status of proteins can be analyzed. In 2007, a new type of protein tagging reagents called metal-coded affinity tags (MeCAT) were introduced to label proteins quantitatively with metals, especially lanthanides. The MeCAT labelling allows relative and absolute quantification of all kind of proteins or other biomolecules like peptides. MeCAT comprises a site-specific biomolecule tagging group with at least a strong chelate group which binds metals. The MeCAT labelled proteins can be accurately quantified by ICP-MS down to low attomol amount of analyte which is at least 2–3 orders of magnitude more sensitive than other mass spectrometry based quantification methods. By introducing several MeCAT labels to a biomolecule and further optimization of LC-ICP-MS detection limits in the zeptomol range are within the realm of possibility. By using different lanthanides MeCAT multiplexing can be used for pharmacokinetics of proteins and peptides or the analysis of the differential expression of proteins (proteomics) e.g. in biological fluids. Breakable PAGE SDS-PAGE (DPAGE, dissolvable PAGE), two-dimensional gel electrophoresis or chromatography is used for separation of MeCAT labelled proteins. Flow-injection ICP-MS analysis of protein bands or spots from DPAGE SDS-PAGE gels can be easily performed by dissolving the DPAGE gel after electrophoresis and staining of the gel. MeCAT labelled proteins are identified and relatively quantified on peptide level by MALDI-MS or ESI-MS. Elemental analysis The ICP-MS allows determination of elements with atomic mass ranges 7 to 250 (Li to U), and sometimes higher. Some masses are prohibited, such as 40 Da, due to the abundance of argon in the sample. Other interference regions may include mass 80 (due to the argon dimer) and mass 56 (due to ArO), the latter of which greatly hinders Fe detection unless the instrument is fitted with a reaction chamber. 
Such interferences can be reduced by using a high resolution ICP-MS (HR-ICP-MS) which uses two or more slits to constrict the beam and distinguish between nearby peaks. This comes at the cost of sensitivity. For example, distinguishing iron from argon requires a resolving power of about 10,000, which may reduce the iron sensitivity by around 99%. Interfering species can alternatively be distinguished through the use of a collision chamber, which can filter gasses by either chemical reaction or physical collision. A single collector ICP-MS may use a multiplier in pulse counting mode to amplify very low signals, an attenuation grid or a multiplier in analogue mode to detect medium signals, and a Faraday cup/bucket to detect larger signals. A multi-collector ICP-MS may have more than one of any of these, typically Faraday buckets which are more cost-effective than other collectors. With this combination, a dynamic range of 12 orders of magnitude, from 1 part per quadrillion (ppq) to 100 parts per million (ppm) is possible. ICP-MS is a common method for the determination of cadmium in biological samples. Unlike atomic absorption spectroscopy, which can only measure a single element at a time, ICP-MS has the capability to scan for all elements simultaneously. This allows rapid sample processing. A simultaneous ICP-MS that can record the entire analytical spectrum from lithium to uranium in every analysis won the Silver Award at the 2010 Pittcon Editors' Awards. An ICP-MS may use multiple scan modes, each one striking a different balance between speed and precision. Using the magnet alone to scan is slow due to hysteresis but is precise. Electrostatic plates can be used in addition to the magnet to increase the speed, and with multiple collectors can allow a scan of every element from Lithium 6 to Uranium Oxide 256 in less than a quarter of a second. For low detection limits, interfering species and high precision, the counting time can increase substantially. The rapid scanning, large dynamic range and large mass range of ICP-MS is ideally suited to measuring multiple unknown concentrations and isotope ratios in samples that have had minimal preparation (an advantage over TIMS). The analysis of seawater, urine, and digested whole rock samples are examples of industry applications. These properties also lend well to laser-ablated rock samples, where the scanning rate is fast enough to enable a real-time plot of any number of isotopes. This also allows easy spatial mapping of mineral grains. Hardware In terms of input and output, ICP-MS instrument consumes prepared sample material and translates it into mass-spectral data. Actual analytical procedure takes some time; after that time the instrument can be switched to work on the next sample. Series of such sample measurements requires the instrument to have plasma ignited, meanwhile a number of technical parameters has to be stable in order for the results obtained to have feasibly accurate and precise interpretation. Maintaining the plasma requires a constant supply of carrier gas (usually, pure argon) and increased power consumption of the instrument. When these additional running costs are not considered justified, plasma and most of auxiliary systems can be turned off. In such standby mode only pumps are working to keep proper vacuum in mass-spectrometer. The constituents of ICP-MS instrument are designed to allow for reproducible and/or stable operation. Sample introduction The first step in analysis is the introduction of the sample. 
This has been achieved in ICP-MS through a variety of means. The most common method is the use of analytical nebulizers. A nebulizer converts liquids into an aerosol, and that aerosol can then be swept into the plasma to create the ions. Nebulizers work best with simple liquid samples (i.e. solutions). However, there have been instances of their use with more complex materials like a slurry. Many varieties of nebulizers have been coupled to ICP-MS, including pneumatic, cross-flow, Babington, ultrasonic, and desolvating types. The aerosol generated is often treated to limit it to only smallest droplets, commonly by means of a Peltier cooled double pass or cyclonic spray chamber. Use of autosamplers makes this easier and faster, especially for routine work and large numbers of samples. A Desolvating Nebuliser (DSN) may also be used; this uses a long heated capillary, coated with a fluoropolymer membrane, to remove most of the solvent and reduce the load on the plasma. Matrix removal introduction systems are sometimes used for samples, such as seawater, where the species of interest are at trace levels, and are surrounded by much more abundant contaminants. Laser ablation is another method. Though less common in the past, it has become popular as a means of sample introduction, thanks to increased ICP-MS scanning speeds. In this method, a pulsed UV laser is focused on the sample and creates a plume of ablated material, which can be swept into the plasma. This allows geochemists to spatially map the isotope composition in cross-sections of rock samples, a tool which is lost if the rock is digested and introduced as a liquid sample. Lasers for this task are built to have highly controllable power outputs and uniform radial power distributions, to produce craters which are flat bottomed and of a chosen diameter and depth. For both Laser Ablation and Desolvating Nebulisers, a small flow of nitrogen may also be introduced into the argon flow. Nitrogen exists as a dimer, so has more vibrational modes and is more efficient at receiving energy from the RF coil around the torch. Other methods of sample introduction are also utilized. Electrothermal vaporization (ETV) and in torch vaporization (ITV) use hot surfaces (graphite or metal, generally) to vaporize samples for introduction. These can use very small amounts of liquids, solids, or slurries. Other methods like vapor generation are also known. Plasma torch The plasma used in an ICP-MS is made by partially ionizing argon gas (Ar → Ar+ + e−). The energy required for this reaction is obtained by pulsing an alternating electric current in load coil that surrounds the plasma torch with a flow of argon gas. After the sample is injected, the plasma's extreme temperature causes the sample to separate into individual atoms (atomization). Next, the plasma ionizes these atoms (M → M+ + e−) so that they can be detected by the mass spectrometer. An inductively coupled plasma (ICP) for spectrometry is sustained in a torch that consists of three concentric tubes, usually made of quartz. The two major designs are the Fassel and Greenfield torches. The end of this torch is placed inside an induction coil supplied with a radio-frequency electric current. A flow of argon gas (usually 14 to 18 liters per minute) is introduced between the two outermost tubes of the torch and an electrical spark is applied for a short time to introduce free electrons into the gas stream. 
These electrons interact with the radio-frequency magnetic field of the induction coil and are accelerated first in one direction, then the other, as the field changes at high frequency (usually 27.12 MHz or 40 MHz). The accelerated electrons collide with argon atoms, and sometimes a collision causes an argon atom to part with one of its electrons. The released electron is in turn accelerated by the rapidly changing magnetic field. The process continues until the rate of release of new electrons in collisions is balanced by the rate of recombination of electrons with argon ions (atoms that have lost an electron). This produces a ‘fireball’ that consists mostly of argon atoms with a rather small fraction of free electrons and argon ions. Advantage of argon Making the plasma from argon, instead of other gases, has several advantages. First, argon is abundant (in the atmosphere, as a result of the radioactive decay of potassium) and therefore cheaper than other noble gases. Argon also has a higher first ionization potential than all other elements except He, F, and Ne. Because of this high ionization energy, the reaction (Ar+ + e− → Ar) is more energetically favorable than the reaction (M+ + e− → M). This ensures that the sample remains ionized (as M+) so that the mass spectrometer can detect it. Argon can be purchased for use with the ICP-MS in either a refrigerated liquid or a gas form. However it is important to note that whichever form of argon purchased, it should have a guaranteed purity of 99.9% Argon at a minimum. It is important to determine which type of argon will be best suited for the specific situation. Liquid argon is typically cheaper and can be stored in a greater quantity as opposed to the gas form, which is more expensive and takes up more tank space. If the instrument is in an environment where it gets infrequent use, then buying argon in the gas state will be most appropriate as it will be more than enough to suit smaller run times and gas in the cylinder will remain stable for longer periods of time, whereas liquid argon will suffer loss to the environment due to venting of the tank when stored over extended time frames. However, if the ICP-MS is to be used routinely and is on and running for eight or more hours each day for several days a week, then going with liquid argon will be the most suitable. If there are to be multiple ICP-MS instruments running for long periods of time, then it will most likely be beneficial for the laboratory to install a bulk or micro bulk argon tank which will be maintained by a gas supply company, thus eliminating the need to change out tanks frequently as well as minimizing loss of argon that is left over in each used tank as well as down time for tank changeover. Helium can be used either in place of, or mixed with, argon for plasma generation. Helium's higher first ionisation energy allows greater ionisation and therefore higher sensitivity for hard-to-ionise elements. The use of pure helium also avoids argon-based interferences such as ArO. However, many of the interferences can be mitigated by use of a collision cell, and the greater cost of helium has prevented its use in commercial ICP-MS. Transfer of ions into vacuum The carrier gas is sent through the central channel and into the very hot plasma. The sample is then exposed to radio frequency which converts the gas into a plasma. The high temperature of the plasma is sufficient to cause a very large portion of the sample to form ions. 
This fraction of ionization can approach 100% for some elements (e.g. sodium), but it is dependent on the ionization potential. A fraction of the formed ions passes through a ~1 mm hole (sampler cone) and then a ~0.4 mm hole (skimmer cone), the purpose of which is to allow a vacuum that is required by the mass spectrometer. The vacuum is created and maintained by a series of pumps. The first stage is usually based on a roughing pump, most commonly a standard rotary vane pump. This removes most of the gas and typically reaches a pressure of around 133 Pa. Later stages have their vacuum generated by more powerful vacuum systems, most often turbomolecular pumps. Older instruments may have used oil diffusion pumps for high vacuum regions. Ion optics Before mass separation, a beam of positive ions has to be extracted from the plasma and focused into the mass analyzer. It is important to separate the ions from UV photons, energetic neutrals and from any solid particles that may have been carried into the instrument from the ICP. Traditionally, ICP-MS instruments have used transmitting ion lens arrangements for this purpose. Examples include the Einzel lens, the Barrel lens, Agilent's Omega Lens and Perkin-Elmer's Shadow Stop. Another approach is to use ion guides (quadrupoles, hexapoles, or octopoles) to guide the ions into the mass analyzer along a path away from the trajectory of photons or neutral particles. Yet another approach is the Varian-patented 90-degree reflecting parabolic "Ion Mirror" optics used in Analytik Jena ICP-MS instruments, which are claimed to provide more efficient ion transport into the mass analyzer, resulting in better sensitivity and reduced background; the Analytik Jena PQMS is claimed to be among the most sensitive instruments on the market. A sector ICP-MS will commonly have four sections: an extraction acceleration region, steering lenses, an electrostatic sector and a magnetic sector. The first region takes ions from the plasma and accelerates them using a high voltage. The second may use a combination of parallel plates, rings, quadrupoles, hexapoles and octopoles to steer, shape and focus the beam so that the resulting peaks are symmetrical, flat topped and have high transmission. The electrostatic sector may be before or after the magnetic sector depending on the particular instrument, and reduces the spread in kinetic energy caused by the plasma. This spread is particularly large for ICP-MS, being larger than for glow discharge and much larger than for TIMS. The geometry of the instrument is chosen so that the combined focal point of the electrostatic and magnetic sectors is at the collector, an arrangement known as double focusing (or double focussing). If the mass of interest has a low sensitivity and is just below a much larger peak, the low mass tail from this larger peak can intrude onto the mass of interest. A retardation filter might be used to reduce this tail. This sits near the collector, and applies a voltage equal but opposite to the accelerating voltage; any ions that have lost energy while flying around the instrument will be decelerated to rest by the filter. Collision reaction cell and iCRC The collision/reaction cell is used to remove interfering ions through ion/neutral reactions. Collision/reaction cells are known under several names. The dynamic reaction cell is located before the quadrupole in the ICP-MS device.
The chamber has a quadrupole and can be filled with reaction (or collision) gases (ammonia, methane, oxygen or hydrogen), with one gas type at a time or a mixture of two of them, which reacts with the introduced sample, eliminating some of the interference. The integrated Collisional Reaction Cell (iCRC) used by Analytik Jena ICP-MS is a mini collision cell installed in front of the parabolic ion mirror optics that removes interfering ions by injecting a collisional gas (He), or a reactive gas (H2), or a mixture of the two, directly into the plasma as it flows through the skimmer cone and/or the sampler cone. The iCRC removes interfering ions using a collisional kinetic energy discrimination (KED) phenomenon and chemical reactions with interfering ions, in a manner similar to the larger collision cells traditionally used. Routine maintenance As with any piece of instrumentation or equipment, there are many aspects of maintenance that need to be covered by daily, weekly and annual procedures. The frequency of maintenance is typically determined by the sample volume and cumulative run time that the instrument is subjected to. One of the first things that should be carried out before the calibration of the ICP-MS is a sensitivity check and optimization. This ensures that the operator is aware of any possible issues with the instrument and, if there are any, can address them before beginning a calibration. Typical indicators of sensitivity are rhodium levels, cerium/oxide ratios and deionized-water blanks. One common standard practice is to measure a standard tuning solution provided by the ICP manufacturer every time the plasma torch is started. The instrument is then auto-calibrated for optimum sensitivity and the operator obtains a report providing parameters such as sensitivity, mass resolution and the estimated amounts of oxidized species and doubly charged species. One of the most frequent forms of routine maintenance is replacing the sample and waste tubing on the peristaltic pump, as these tubes can wear fairly quickly, resulting in holes and clogs in the sample line and therefore skewed results. Other parts that will need regular cleaning and/or replacement are sample tips, nebulizer tips, sample cones, skimmer cones, injector tubes, torches and lenses. It may also be necessary to change the oil in the interface roughing pump as well as the vacuum backing pump, depending on the workload put on the instrument. Sample preparation For most clinical methods using ICP-MS, there is a relatively simple and quick sample preparation process. The main component of the prepared sample is an internal standard, which also serves as the diluent. This internal standard consists primarily of deionized water, with nitric or hydrochloric acid, and indium and/or gallium. The addition of volatile acids allows the sample to decompose into its gaseous components in the plasma, which minimizes the tendency of concentrated salts and solvent loads to clog the cones and contaminate the instrument. Depending on the sample type, usually 5 mL of the internal standard is added to a test tube along with 10–500 microliters of sample. This mixture is then vortexed for several seconds, or until mixed well, and then loaded onto the autosampler tray. For other applications that may involve very viscous samples or samples containing particulate matter, a process known as sample digestion may have to be carried out before the sample can be pipetted and analyzed. This adds an extra first step to the above process and therefore makes the sample preparation lengthier.
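As a worked illustration of the dilution just described (the 100 µL figure is simply one point within the stated 10–500 µL range, not a prescribed protocol), the dilution factor is the ratio of total volume to sample volume:

$$DF = \frac{V_{\text{sample}} + V_{\text{diluent}}}{V_{\text{sample}}} = \frac{0.100\ \text{mL} + 5.0\ \text{mL}}{0.100\ \text{mL}} = 51,$$

so a concentration measured in the diluted solution must be multiplied by about 51 to recover the concentration in the original sample; over the stated range the factor runs from roughly 11 (500 µL of sample) to 501 (10 µL of sample).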
References External links Scientific techniques Mass spectrometry Laboratory equipment Analytical chemistry
Inductively coupled plasma mass spectrometry
[ "Physics", "Chemistry" ]
7,365
[ "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Mass spectrometry", "nan", "Matter" ]
49,535
https://en.wikipedia.org/wiki/Thought%20experiment
A thought experiment is a hypothetical situation in which a hypothesis, theory, or principle is laid out for the purpose of thinking through its consequences. The concept is also referred to using the German-language term Gedankenexperiment, notably within the work of the physicist Ernst Mach, and includes thoughts about what may have occurred if a different course of action were taken. The importance of this ability is that it allows the experimenter to imagine what may occur in the future, as well as the implications of alternate courses of action. History The ancient Greek form of thought experiment "was the most ancient pattern of mathematical proof", and existed before Euclidean mathematics, where the emphasis was on the conceptual, rather than on the experimental, part of a thought experiment. Johann Witt-Hansen established that Hans Christian Ørsted was the first to use the equivalent German term Gedankenexperiment. Ørsted was also the first to use the equivalent term Gedankenversuch, in 1820. By 1883, Ernst Mach used the term in a different sense, to denote exclusively the imaginary conduct of an experiment that would subsequently be performed as a real physical experiment by his students. Physical and mental experimentation could then be contrasted: Mach asked his students to provide him with explanations whenever the results from their subsequent, real, physical experiment differed from those of their prior, imaginary experiment. The English term thought experiment was coined as a calque of Gedankenexperiment, and it first appeared in the 1897 English translation of one of Mach's papers. Prior to its emergence, the activity of posing hypothetical questions that employed subjunctive reasoning had existed for a very long time for both scientists and philosophers. The irrealis moods offer ways to categorize such reasoning or to speak about it. This helps explain the extremely wide and diverse range of application of the term thought experiment once it had been introduced into English. Galileo's demonstration that falling objects must fall at the same rate regardless of their masses was a significant step forward in the history of modern science. It is widely thought to have been a straightforward physical demonstration, involving climbing up the Leaning Tower of Pisa and dropping two heavy weights off it, whereas in fact it was a logical demonstration, using the thought-experiment technique. The experiment is described by Galileo in the Discorsi e dimostrazioni matematiche (1638; from Italian: 'Mathematical Discourses and Demonstrations'). Uses The common goal of a thought experiment is to explore the potential consequences of the principle in question: "A thought experiment is a device with which one performs an intentional, structured process of intellectual deliberation in order to speculate, within a specifiable problem domain, about potential consequents (or antecedents) for a designated antecedent (or consequent)." Yeates, 2004, p. 150. Given the structure of the experiment, it may not be possible to perform it; and, even if it could be performed, there need not be an intention to perform it. Examples of thought experiments include Schrödinger's cat, illustrating quantum indeterminacy through the manipulation of a perfectly sealed environment and a tiny bit of radioactive substance, and Maxwell's demon, which attempts to demonstrate the ability of a hypothetical finite being to violate the second law of thermodynamics. The thought experiment is also a common element of science-fiction stories. Thought experiments, which are well-structured, well-defined hypothetical questions that employ subjunctive reasoning (irrealis moods) – "What might happen (or, what might have happened) if . . . 
" – have been used to pose questions in philosophy at least since Greek antiquity, some pre-dating Socrates. In physics and other sciences many thought experiments date from the 19th and especially the 20th Century, but examples can be found at least as early as Galileo. In thought experiments, we gain new information by rearranging or reorganizing already known empirical data in a new way and drawing new (a priori) inferences from them, or by looking at these data from a different and unusual perspective. In Galileo's thought experiment, for example, the rearrangement of empirical experience consists of the original idea of combining bodies of different weights. Thought experiments have been used in philosophy (especially ethics), physics, and other fields (such as cognitive psychology, history, political science, economics, social psychology, law, organizational studies, marketing, and epidemiology). In law, the synonym "hypothetical" is frequently used for such experiments. Regardless of their intended goal, all thought experiments display a patterned way of thinking that is designed to allow us to explain, predict, and control events in a better and more productive way. Theoretical consequences In terms of their theoretical consequences, thought experiments generally: challenge (or even refute) a prevailing theory, often involving the device known as reductio ad absurdum, (as in Galileo's original argument, a proof by contradiction), confirm a prevailing theory, establish a new theory, or simultaneously refute a prevailing theory and establish a new theory through a process of mutual exclusion Practical applications Thought experiments can produce some very important and different outlooks on previously unknown or unaccepted theories. However, they may make those theories themselves irrelevant, and could possibly create new problems that are just as difficult, or possibly more difficult to resolve. In terms of their practical application, thought experiments are generally created to: challenge the prevailing status quo (which includes activities such as correcting misinformation (or misapprehension), identify flaws in the argument(s) presented, to preserve (for the long-term) objectively established fact, and to refute specific assertions that some particular thing is permissible, forbidden, known, believed, possible, or necessary); extrapolate beyond (or interpolate within) the boundaries of already established fact; predict and forecast the (otherwise) indefinite and unknowable future; explain the past; the retrodiction, postdiction and hindcasting of the (otherwise) indefinite and unknowable past; facilitate decision making, choice, and strategy selection; solve problems, and generate ideas; move current (often insoluble) problems into another, more helpful, and more productive problem space (e.g.: functional fixedness); attribute causation, preventability, blame, and responsibility for specific outcomes; assess culpability and compensatory damages in social and legal contexts; ensure the repeat of past success; or examine the extent to which past events might have occurred differently. ensure the (future) avoidance of past failures Types Generally speaking, there are seven types of thought experiments in which one reasons from causes to effects, or effects to causes: Prefactual Prefactual (before the fact) thought experiments – the term prefactual was coined by Lawrence J. 
Sanna in 1998 – speculate on possible future outcomes, given the present, and ask "What will be the outcome if event E occurs?". Counterfactual Counterfactual (contrary to established fact) thought experiments – the term counterfactual was coined by Nelson Goodman in 1947, extending Roderick Chisholm's (1946) notion of a "contrary-to-fact conditional" – speculate on the possible outcomes of a different past, and ask "What might have happened if A had happened instead of B?" (e.g., "If Isaac Newton and Gottfried Leibniz had cooperated with each other, what would mathematics look like today?"). The study of counterfactual speculation has increasingly engaged the interest of scholars in a wide range of domains such as philosophy, psychology, cognitive psychology, history, political science, economics, social psychology, law, organizational theory, marketing, and epidemiology. Semifactual Semifactual thought experiments – the term semifactual was coined by Nelson Goodman in 1947 – speculate on the extent to which things might have remained the same, despite there being a different past, and ask the question "Even though X happened instead of E, would Y have still occurred?" (e.g., "Even if the goalie had moved left, rather than right, could he have intercepted a ball that was traveling at such a speed?"). Semifactual speculations are an important part of clinical medicine. Predictive The activity of prediction attempts to project the circumstances of the present into the future. According to David Sarewitz and Roger Pielke (1999, p. 123), scientific prediction takes two forms: "The elucidation of invariant – and therefore predictive – principles of nature"; and "[Using] suites of observational data and sophisticated numerical models in an effort to foretell the behavior or evolution of complex phenomena". Although they perform different social and scientific functions, the only difference between the qualitatively identical activities of predicting, forecasting, and nowcasting is the distance of the speculated future from the present moment occupied by the user. Whilst the activity of nowcasting, defined as "a detailed description of the current weather along with forecasts obtained by extrapolation up to 2 hours ahead", is essentially concerned with describing the current state of affairs, it is common practice to extend the term "to cover very-short-range forecasting up to 12 hours ahead" (Browning, 1982, p. ix). Hindcasting The activity of hindcasting involves running a forecast model after an event has happened in order to test whether the model's simulation is valid. Retrodiction The activity of retrodiction (or postdiction) involves moving backward in time, step-by-step, in as many stages as are considered necessary, from the present into the speculated past to establish the ultimate cause of a specific event (e.g., reverse engineering and forensics). Given that retrodiction is a process in which "past observations, events and data are used as evidence to infer the process(es) that produced them" and that diagnosis "involve[s] going from visible effects such as symptoms, signs and the like to their prior causes", the essential balance between prediction and retrodiction could be characterized as: retrodiction is to diagnosis as prediction is to prognosis, regardless of whether the prognosis is of the course of the disease in the absence of treatment, or of the application of a specific treatment regimen to a specific disorder in a particular patient. 
Backcasting The activity of backcasting – the term backcasting was coined by John Robinson in 1982 – involves establishing the description of a very definite and very specific future situation. It then involves an imaginary moving backward in time, step-by-step, in as many stages as are considered necessary, from the future to the present to reveal the mechanism through which that particular specified future could be attained from the present. According to Jansen (1994, p. 503), backcasting is not concerned with predicting the future. Fields Thought experiments have been used in a variety of fields, including philosophy, law, physics, and mathematics. In philosophy they have been used at least since classical antiquity, some pre-dating Socrates. In law, they were well known to Roman lawyers quoted in the Digest. In physics and other sciences, notable thought experiments date from the 19th and, especially, the 20th century; but examples can be found at least as early as Galileo. Philosophy In philosophy, a thought experiment typically presents an imagined scenario with the intention of eliciting an intuitive or reasoned response about the way things are in the thought experiment. (Philosophers might also supplement their thought experiments with theoretical reasoning designed to support the desired intuitive response.) The scenario will typically be designed to target a particular philosophical notion, such as morality, or the nature of the mind or linguistic reference. The response to the imagined scenario is supposed to tell us about the nature of that notion in any scenario, real or imagined. For example, a thought experiment might present a situation in which an agent intentionally kills an innocent for the benefit of others. Here, the relevant question is not whether the action is moral or not, but more broadly whether a moral theory is correct that says morality is determined solely by an action's consequences (see consequentialism). John Searle imagines a man in a locked room who receives written sentences in Chinese, and returns written sentences in Chinese, according to a sophisticated instruction manual. Here, the relevant question is not whether or not the man understands Chinese, but more broadly, whether a functionalist theory of mind is correct. It is generally hoped that there is universal agreement about the intuitions that a thought experiment elicits. (Hence, in assessing their own thought experiments, philosophers may appeal to "what we should say," or some such locution.) A successful thought experiment will be one in which intuitions about it are widely shared. But often, philosophers differ in their intuitions about the scenario. Other philosophical uses of imagined scenarios arguably are thought experiments also. In one use of scenarios, philosophers might imagine persons in a particular situation (maybe ourselves), and ask what they would do. For example, in the veil of ignorance, John Rawls asks us to imagine a group of persons in a situation where they know nothing about themselves, and are charged with devising a social or political organization. The use of the state of nature to imagine the origins of government, as by Thomas Hobbes and John Locke, may also be considered a thought experiment. Søren Kierkegaard explored the possible ethical and religious implications of Abraham's binding of Isaac in Fear and Trembling. 
Similarly, Friedrich Nietzsche, in On the Genealogy of Morals, speculated about the historical development of Judeo-Christian morality, with the intent of questioning its legitimacy. An early written thought experiment was Plato's allegory of the cave. Another historic thought experiment was Avicenna's "Floating Man" thought experiment in the 11th century. He asked his readers to imagine themselves suspended in the air isolated from all sensations in order to demonstrate human self-awareness and self-consciousness, and the substantiality of the soul. Science Scientists tend to use thought experiments as imaginary, "proxy" experiments prior to a real, "physical" experiment (Ernst Mach always argued that these gedankenexperiments were "a necessary precondition for physical experiment"). In these cases, the result of the "proxy" experiment will often be so clear that there will be no need to conduct a physical experiment at all. Scientists also use thought experiments when particular physical experiments are impossible to conduct (Carl Gustav Hempel labeled these sorts of experiment "theoretical experiments-in-imagination"), such as Einstein's thought experiment of chasing a light beam, leading to special relativity. This is a unique use of a scientific thought experiment, in that it was never carried out, but led to a successful theory, proven by other empirical means. Properties Further categorization of thought experiments can be attributed to specific properties. Possibility In many thought experiments, the scenario would be nomologically possible, or possible according to the laws of nature. John Searle's Chinese room is nomologically possible. Some thought experiments present scenarios that are not nomologically possible. In his Twin Earth thought experiment, Hilary Putnam asks us to imagine a scenario in which there is a substance with all of the observable properties of water (e.g., taste, color, boiling point), but is chemically different from water. It has been argued that this thought experiment is not nomologically possible, although it may be possible in some other sense, such as metaphysical possibility. It is debatable whether the nomological impossibility of a thought experiment renders intuitions about it moot. In some cases, the hypothetical scenario might be considered metaphysically impossible, or impossible in any sense at all. David Chalmers says that we can imagine that there are zombies, or persons who are physically identical to us in every way but who lack consciousness. This is supposed to show that physicalism is false. However, some argue that zombies are inconceivable: we can no more imagine a zombie than we can imagine that 1+1=3. Others have claimed that the conceivability of a scenario may not entail its possibility. Causal reasoning The first characteristic pattern that thought experiments display is their orientation in time. They are either: Antefactual speculations: experiments that speculate about what might have happened prior to a specific, designated event, or Postfactual speculations: experiments that speculate about what may happen subsequent to (or consequent upon) a specific, designated event. The second characteristic pattern is their movement in time in relation to "the present moment standpoint" of the individual performing the experiment; namely, in terms of: Their temporal direction: are they past-oriented or future-oriented? 
Their temporal sense: (a) in the case of past-oriented thought experiments, are they examining the consequences of temporal "movement" from the present to the past, or from the past to the present? or, (b) in the case of future-oriented thought experiments, are they examining the consequences of temporal "movement" from the present to the future, or from the future to the present? Relation to real experiments The relation to real experiments can be quite complex, as can be seen again from an example going back to Albert Einstein. In 1935, with two coworkers, he published a paper on a newly created subject called later the EPR effect (EPR paradox). In this paper, starting from certain philosophical assumptions, on the basis of a rigorous analysis of a certain, complicated, but in the meantime assertedly realizable model, he came to the conclusion that quantum mechanics should be described as "incomplete". Niels Bohr asserted a refutation of Einstein's analysis immediately, and his view prevailed. After some decades, it was asserted that feasible experiments could prove the error of the EPR paper. These experiments tested the Bell inequalities published in 1964 in a purely theoretical paper. The above-mentioned EPR philosophical starting assumptions were considered to be falsified by the empirical fact (e.g. by the optical real experiments of Alain Aspect). Thus thought experiments belong to a theoretical discipline, usually to theoretical physics, but often to theoretical philosophy. In any case, it must be distinguished from a real experiment, which belongs naturally to the experimental discipline and has "the final decision on true or not true", at least in physics. Interactivity Thought experiments can also be interactive where the author invites people into his thought process through providing alternative paths with alternative outcomes within the narrative, or through interaction with a programmed machine, like a computer program. Thanks to the advent of the Internet, the digital space has lent itself as a new medium for a new kind of thought experiments. The philosophical work of Stefano Gualeni, for example, focuses on the use of virtual worlds to materialize thought experiments and to playfully negotiate philosophical ideas. His arguments were originally presented in his 2015 book Virtual Worlds as Philosophical Tools. Gualeni's argument is that the history of philosophy has, until recently, merely been the history of written thought, and digital media can complement and enrich the limited and almost exclusively linguistic approach to philosophical thought. He considers virtual worlds (like those interactively encountered in videogames) to be philosophically viable and advantageous. This is especially the case in thought experiments, when the recipients of a certain philosophical notion or perspective are expected to objectively test and evaluate different possible courses of action, or in cases where they are confronted with interrogatives concerning non-actual or non-human phenomenologies. Examples Humanities Doomsday argument (anthropic principle) The Lady, or the Tiger? (human nature) The beer question (U.S. 
politics) Physics Bell's spaceship paradox (special relativity) Brownian ratchet (Richard Feynman's "perpetual motion" machine that does not violate the second law and does no work at thermal equilibrium) Bucket argument – argues that space is absolute, not relational Dyson sphere Einstein's box Elitzur–Vaidman bomb-tester (quantum mechanics) EPR paradox (quantum mechanics) (forms of this have been performed) Everett phone (quantum mechanics) Feynman sprinkler (classical mechanics) Galileo's Leaning Tower of Pisa experiment (rebuttal of Aristotelian Gravity) Galileo's ship (classical relativity principle) 1632 GHZ experiment (quantum mechanics) Heisenberg's microscope (quantum mechanics) Kepler's Dream (change of point of view as support for the Copernican hypothesis) Ladder paradox (special relativity) Laplace's demon Maxwell's demon (thermodynamics) 1871 Mermin's device (quantum mechanics) Moving magnet and conductor problem Newton's cannonball (Newton's laws of motion) Popper's experiment (quantum mechanics) Quantum pseudo telepathy (quantum mechanics) Quantum suicide and immortality (quantum mechanics) Renninger negative-result experiment (quantum mechanics) Schrödinger's cat (quantum mechanics) Sticky bead argument (general relativity) The Monkey and the Hunter (gravitation) Twin paradox (special relativity) Wheeler's delayed choice experiment (quantum mechanics) Wigner's friend (quantum mechanics) Philosophy Artificial brain Avicenna's Floating Man Beetle in a box Bellum omnium contra omnes Big Book (ethics) Brain-in-a-vat (epistemology, philosophy of mind) Brainstorm machine Buridan's ass Changing places (reflexive monism, philosophy of mind) Chesterton's fence China brain (physicalism, philosophy of mind) Chinese room (philosophy of mind, artificial intelligence, cognitive science) Coherence (philosophical gambling strategy) Condillac's Statue (epistemology) Experience machine (ethics) Gettier problem (epistemology) Ḥayy ibn Yaqẓān (epistemology) Hilary Putnam's Twin Earth thought experiment in the philosophy of language and philosophy of mind If a tree falls in a forest Inverted spectrum Kavka's toxin puzzle Mary's room (philosophy of mind) Molyneux's Problem (admittedly, this oscillated between empirical and a-priori assessment) Newcomb's paradox Original position (politics) Philosophical zombie (philosophy of mind, artificial intelligence, cognitive science) Plank of Carneades Roko's basilisk Ship of Theseus, The (concept of identity) Shoemaker's "Time Without Change" (metaphysics) Simulated reality (philosophy, computer science, cognitive science) Social contract theories Survival lottery (ethics) Swamp man (personal identity, philosophy of mind) Teleportation (metaphysics) The transparent eyeball The violinist (ethics) Ticking time bomb scenario (ethics) Trolley problem (ethics) Utility monster (ethics) Zeno's paradoxes (classical Greek problems of the infinite) Mathematics Balls and vase problem (infinity and cardinality) Gabriel's Horn (infinity) Hilbert's paradox of the Grand Hotel (infinity) Infinite monkey theorem (probability) Lottery paradox (probability) Sleeping beauty paradox (probability) Biology Levinthal paradox Rotating locomotion in living systems Computer science Braitenberg vehicles (robotics, neural control and sensing systems) (some have been built) Dining Philosophers (computer science) Halting problem (limits of computability) Turing machine (limits of computability) Two Generals' Problem Economics Broken window fallacy (law of unintended consequences, 
opportunity cost) Laffer Curve See also Alternate history Brainstorm machine Ding an sich Einstein's thought experiments Futures studies Futures techniques Heuristic Mathematical proof N-universes Possible world Scenario planning Scenario test Theoretical physics Notes References Further reading Brendal, Elke, "Intuition Pumps and the Proper Use of Thought Experiments", Dialectica, Vol.58, No.1, (March 2004, pp. 89–108. Ćorić, Dragana (2020), "The Importance of Thought Experiments", Journal of Eastern-European Criminal Law, Vol.2020, No.1, (2020), pp. 127–135. Cucic, D.A. & Nikolic, A.S., "A short insight about thought experiment in modern physics", 6th International Conference of the Balkan Physical Union BPU6, Istanbul – Turkey, 2006. Dennett, D.C., "Intuition Pumps", pp. 180–197 in Brockman, J., The Third Culture: Beyond the Scientific Revolution, Simon & Schuster, (New York), 1995. Galton, F., "Statistics of Mental Imagery", Mind, Vol.5, No.19, (July 1880), pp. 301–318. Hempel, C.G., "Typological Methods in the Natural and Social Sciences", pp. 155–171 in Hempel, C.G. (ed.), Aspects of Scientific Explanation and Other Essays in the Philosophy of Science, The Free Press, (New York), 1965. Jacques, V., Wu, E., Grosshans, F., Treussart, F., Grangier, P. Aspect, A., & Roch, J. (2007). Experimental Realization of Wheeler's Delayed-Choice Gedanken Experiment, Science, 315, p. 966–968. Kuhn, T., "A Function for Thought Experiments", in The Essential Tension (Chicago: University of Chicago Press, 1979), pp. 240–265. Mach, E., "On Thought Experiments", pp. 134–147 in Mach, E., Knowledge and Error: Sketches on the Psychology of Enquiry, D. Reidel Publishing Co., (Dordrecht), 1976. [Translation of Erkenntnis und Irrtum (5th edition, 1926.]. Popper, K., "On the Use and Misuse of Imaginary Experiments, Especially in Quantum Theory", pp. 442–456, in Popper, K., The Logic of Scientific Discovery, Harper Torchbooks, (New York), 1968. Stuart, M. T., Fehige, Y. and Brown, J. R. (2018). The Routledge Companion to Thought Experiments. London: Routledge. Witt-Hansen, J., "H.C. Ørsted, Immanuel Kant and the Thought Experiment", Danish Yearbook of Philosophy, Vol.13, (1976), pp. 48–65. Bibliography Adams, Scott, God's Debris: A Thought Experiment, Andrews McMeel Publishing, (USA), 2001 Browning, K.A. (ed.), Nowcasting, Academic Press, (London), 1982. Buzzoni, M., Thought Experiment in the Natural Sciences, Koenigshausen+Neumann, Wuerzburg 2008 Cohen, Martin, "Wittgenstein's Beetle and Other Classic Thought Experiments", Blackwell (Oxford) 2005 Cohnitz, D., Gedankenexperimente in der Philosophie, Mentis Publ., (Paderborn, Germany), 2006. Craik, K.J.W., The Nature of Explanation, Cambridge University Press, (Cambridge), 1943. Cushing, J.T., Philosophical Concepts in Physics: The Historical Relation Between Philosophy and Scientific Theories, Cambridge University Press, (Cambridge), 1998. DePaul, M. & Ramsey, W. (eds.), Rethinking Intuition: The Psychology of Intuition and Its Role in Philosophical Inquiry, Rowman & Littlefield Publishers, (Lanham), 1998. Gendler, T.S. & Hawthorne, J., Conceivability and Possibility, Oxford University Press, (Oxford), 2002. Gendler, T.S., Thought Experiment: On the Powers and Limits of Imaginary Cases, Garland, (New York), 2000. Häggqvist, S., Thought Experiments in Philosophy, Almqvist & Wiksell International, (Stockholm), 1996. Hanson, N.R., Patterns of Discovery: An Inquiry into the Conceptual Foundations of Science, Cambridge University Press, (Cambridge), 1962. 
Harper, W.L., Stalnaker, R. & Pearce, G. (eds.), Ifs: Conditionals, Belief, Decision, Chance, and Time, D. Reidel Publishing Co., (Dordrecht), 1981. Hesse, M.B., Models and Analogies in Science, Sheed and Ward, (London), 1963. Holyoak, K.J. & Thagard, P., Mental Leaps: Analogy in Creative Thought, A Bradford Book, The MIT Press, (Cambridge), 1995. Horowitz, T. & Massey, G.J. (eds.), Thought Experiments in Science and Philosophy, Rowman & Littlefield, (Savage), 1991. Kahn, H., Thinking About the Unthinkable, Discus Books, (New York), 1971. Kuhne, U., Die Methode des Gedankenexperiments, Suhrkamp Publ., (Frankfurt/M, Germany), 2005. Leatherdale, W.H., The Role of Analogy, Model and Metaphor in Science, North-Holland Publishing Company, (Amsterdam), 1974. . Translated to English by Karen Jelved, Andrew D. Jackson, and Ole Knudsen, (translators 1997). Roese, N.J. & Olson, J.M. (eds.), What Might Have Been: The Social Psychology of Counterfactual Thinking, Lawrence Erlbaum Associates, (Mahwah), 1995. Shanks, N. (ed.), Idealization IX: Idealization in Contemporary Physics (Poznan Studies in the Philosophy of the Sciences and the Humanities, Volume 63), Rodopi, (Amsterdam), 1998. Shick, T. & Vaugn, L., Doing Philosophy: An Introduction through Thought Experiments (Second Edition), McGraw Hill, (New York), 2003. Sorensen, R.A., Thought Experiments, Oxford University Press, (Oxford), 1992. Tetlock, P.E. & Belkin, A. (eds.), Counterfactual Thought Experiments in World Politics, Princeton University Press, (Princeton), 1996. Thomson, J.J. {Parent, W. (ed.)}, Rights, Restitution, and Risks: Essays in Moral Theory, Harvard University Press, (Cambridge), 1986. Vosniadou, S. & Ortony. A. (eds.), Similarity and Analogical Reasoning, Cambridge University Press, (Cambridge), 1989. Wilkes, K.V., Real People: Personal Identity without Thought Experiments, Oxford University Press, (Oxford), 1988. Yeates, L.B., Thought Experimentation: A Cognitive Approach, Graduate Diploma in Arts (By Research) Dissertation, University of New South Wales, 2004. External links Stevinus, Galileo, and Thought Experiments Short essay by S. Abbas Raza of 3 Quarks Daily Thought experiment generator, a visual aid to running your own thought experiment Calques Conceptual modelling Critical thinking History of science Imagination Philosophical arguments Philosophical methodology Sources of knowledge
Thought experiment
[ "Technology" ]
6,450
[ "History of science", "History of science and technology" ]
49,557
https://en.wikipedia.org/wiki/Castle
A castle is a type of fortified structure built during the Middle Ages predominantly by the nobility or royalty and by military orders. Scholars usually consider a castle to be the private fortified residence of a lord or noble. This is distinct from a mansion, palace, or villa, whose main purpose was pleasance rather than defence and which, although they may be fortified, are not primarily fortresses. Use of the term has varied over time and, sometimes, has also been applied to structures such as hill forts and 19th- and 20th-century homes built to resemble castles. Over the Middle Ages, when genuine castles were built, they took on a great many forms with many different features, although some, such as curtain walls, arrowslits, and portcullises, were commonplace. European-style castles originated in the 9th and 10th centuries after the fall of the Carolingian Empire, which resulted in its territory being divided among individual lords and princes. These nobles built castles to control the area immediately surrounding them and they were both offensive and defensive structures: they provided a base from which raids could be launched as well as offering protection from enemies. Although their military origins are often emphasised in castle studies, the structures also served as centres of administration and symbols of power. Urban castles were used to control the local populace and important travel routes, and rural castles were often situated near features that were integral to life in the community, such as mills, fertile land, or a water source. Many northern European castles were originally built from earth and timber but had their defences replaced later by stone. Early castles often exploited natural defences, lacking features such as towers and arrowslits and relying on a central keep. In the late 12th and early 13th centuries, a scientific approach to castle defence emerged. This led to the proliferation of towers, with an emphasis on flanking fire. Many new castles were polygonal or relied on concentric defence – several stages of defence within each other that could all function at the same time to maximise the castle's firepower. These changes in defence have been attributed to a mixture of castle technology from the Crusades, such as concentric fortification, and inspiration from earlier defences, such as Roman forts. Not all the elements of castle architecture were military in nature, so that devices such as moats evolved from their original purpose of defence into symbols of power. Some grand castles had long winding approaches intended to impress and dominate their landscape. Although gunpowder was introduced to Europe in the 14th century, it did not significantly affect castle building until the 15th century, when artillery became powerful enough to break through stone walls. While castles continued to be built well into the 16th century, new techniques to deal with improved cannon fire made them uncomfortable and undesirable places to live. As a result, true castles went into a decline and were replaced by artillery star forts with no role in civil administration, and château or country houses that were indefensible. From the 18th century onwards, there was a renewed interest in castles with the construction of mock castles, part of a Romantic revival of Gothic architecture, but they had no military purpose. Definition Etymology The word castle is derived from the Latin word castellum, which is a diminutive of the word castrum, meaning "fortified place". 
The Old English castel, Occitan castel or chastel, French château, Spanish castillo, Portuguese castelo, Italian castello, and a number of words in other languages also derive from castellum. The word castle was introduced into English shortly before the Norman Conquest of 1066 to denote this type of building, which was then new to England. Defining characteristics In its simplest terms, the definition of a castle accepted amongst academics is "a private fortified residence". This contrasts with earlier fortifications, such as Anglo-Saxon burhs and walled cities such as Constantinople and Antioch in the Middle East; castles were not communal defences but were built and owned by the local feudal lords, either for themselves or for their monarch. Feudalism was the link between a lord and his vassal where, in return for military service and the expectation of loyalty, the lord would grant the vassal land. In the late 20th century, there was a trend to refine the definition of a castle by including the criterion of feudal ownership, thus tying castles to the medieval period; however, this does not necessarily reflect the terminology used in the medieval period. During the First Crusade (1096–1099), the Frankish armies encountered walled settlements and forts that they indiscriminately referred to as castles, but which would not be considered as such under the modern definition. Castles served a range of purposes, the most important of which were military, administrative, and domestic. As well as defensive structures, castles were also offensive tools which could be used as a base of operations in enemy territory. Castles were established by Norman invaders of England for both defensive purposes and to pacify the country's inhabitants. As William the Conqueror advanced through England, he fortified key positions to secure the land he had taken. Between 1066 and 1087, he established 36 castles such as Warwick Castle, which he used to guard against rebellion in the English Midlands. Towards the end of the Middle Ages, castles tended to lose their military significance due to the advent of powerful cannons and permanent artillery fortifications; as a result, castles became more important as residences and statements of power. A castle could act as a stronghold and prison but was also a place where a knight or lord could entertain his peers. Over time the aesthetics of the design became more important, as the castle's appearance and size began to reflect the prestige and power of its occupant. Comfortable homes were often fashioned within their fortified walls. Although castles still provided protection from low levels of violence in later periods, eventually they were succeeded by country houses as high-status residences. Terminology Castle is sometimes used as a catch-all term for all kinds of fortifications, and as a result has been misapplied in the technical sense. An example of this is Maiden Castle which, despite the name, is an Iron Age hill fort which had a very different origin and purpose. Although castle has not become a generic term for a manor house (like château in French and Schloss in German), many manor houses contain castle in their name while having few if any of the architectural characteristics, usually as their owners liked to maintain a link to the past and felt the term castle was a masculine expression of their power. 
In scholarship the castle, as defined above, is generally accepted as a coherent concept, originating in Europe and later spreading to parts of the Middle East, where they were introduced by European Crusaders. This coherent group shared a common origin, dealt with a particular mode of warfare, and exchanged influences. In different areas of the world, analogous structures shared features of fortification and other defining characteristics associated with the concept of a castle, though they originated in different periods and circumstances and experienced differing evolutions and influences. For example, shiro in Japan, described as castles by historian Stephen Turnbull, underwent "a completely different developmental history, were built in a completely different way and were designed to withstand attacks of a completely different nature". While European castles built from the late 12th and early 13th century onwards were generally stone, shiro were predominantly timber buildings into the 16th century. By the 16th century, when Japanese and European cultures met, fortification in Europe had moved beyond castles and relied on innovations such as the Italian trace italienne and star forts. Common features Motte A motte was an earthen mound with a flat top. It was often artificial, although sometimes it incorporated a pre-existing feature of the landscape. The excavation of earth to make the mound left a ditch around the motte, called a moat (which could be either wet or dry). Although the motte is commonly associated with the bailey to form a motte-and-bailey castle, this was not always the case and there are instances where a motte existed on its own. "Motte" refers to the mound alone, but it was often surmounted by a fortified structure, such as a keep, and the flat top would be surrounded by a palisade. It was common for the motte to be reached over a flying bridge (a bridge over the ditch from the counterscarp of the ditch to the edge of the top of the mound), as shown in the Bayeux Tapestry's depiction of Château de Dinan. Sometimes a motte covered an older castle or hall, whose rooms became underground storage areas and prisons beneath a new keep. Bailey and enceinte A bailey, also called a ward, was a fortified enclosure. It was a common feature of castles, and most had at least one. The keep on top of the motte was the domicile of the lord in charge of the castle and a bastion of last defence, while the bailey was the home of the rest of the lord's household and gave them protection. The barracks for the garrison, stables, workshops, and storage facilities were often found in the bailey. Water was supplied by a well or cistern. Over time the focus of high status accommodation shifted from the keep to the bailey; this resulted in the creation of another bailey that separated the high status buildings – such as the lord's chambers and the chapel – from the everyday structures such as the workshops and barracks. From the late 12th century there was a trend for knights to move out of the small houses they had previously occupied within the bailey to live in fortified houses in the countryside. Although often associated with the motte-and-bailey type of castle, baileys could also be found as independent defensive structures. These simple fortifications were called ringworks. The enceinte was the castle's main defensive enclosure, and the terms "bailey" and "enceinte" are linked. A castle could have several baileys but only one enceinte. 
Castles with no keep, which relied on their outer defences for protection, are sometimes called enceinte castles; these were the earliest form of castles, before the keep was introduced in the 10th century. Keep A keep was a great tower or other building that served as the main living quarters of the castle and usually the most strongly defended point of a castle before the introduction of concentric defence. "Keep" was not a term used in the medieval period – the term was applied from the 16th century onwards – instead "donjon" was used to refer to great towers, or turris in Latin. In motte-and-bailey castles, the keep was on top of the motte. "Dungeon" is a corrupted form of "donjon" and means a dark, unwelcoming prison. Although often the strongest part of a castle and a last place of refuge if the outer defences fell, the keep was not left empty in case of attack but was used as a residence by the lord who owned the castle, or his guests or representatives. At first, this was usual only in England, when after the Norman Conquest of 1066 the "conquerors lived for a long time in a constant state of alert"; elsewhere the lord's wife presided over a separate residence (domus, aula or mansio in Latin) close to the keep, and the donjon was a barracks and headquarters. Gradually, the two functions merged into the same building, and the highest residential storeys had large windows; as a result for many structures, it is difficult to find an appropriate term. The massive internal spaces seen in many surviving donjons can be misleading; they would have been divided into several rooms by light partitions, as in a modern office building. Even in some large castles the great hall was separated only by a partition from the lord's chamber, his bedroom and to some extent his office. Curtain wall Curtain walls were defensive walls enclosing a bailey. They had to be high enough to make scaling the walls with ladders difficult and thick enough to withstand bombardment from siege engines which, from the 15th century onwards, included gunpowder artillery. A typical wall could be thick and tall, although sizes varied greatly between castles. To protect them from undermining, curtain walls were sometimes given a stone skirt around their bases. Walkways along the tops of the curtain walls allowed defenders to rain missiles on enemies below, and battlements gave them further protection. Curtain walls were studded with towers to allow enfilading fire along the wall. Arrowslits in the walls did not become common in Europe until the 13th century, for fear that they might compromise the wall's strength. Gatehouse The entrance was often the weakest part in a circuit of defences. To overcome this, the gatehouse was developed, allowing those inside the castle to control the flow of traffic. In earth and timber castles, the gateway was usually the first feature to be rebuilt in stone. The front of the gateway was a blind spot and to overcome this, projecting towers were added on each side of the gate in a style similar to that developed by the Romans. The gatehouse contained a series of defences to make a direct assault more difficult than battering down a simple gate. Typically, there were one or more portcullises – a wooden grille reinforced with metal to block a passage – and arrowslits to allow defenders to harry the enemy. The passage through the gatehouse was lengthened to increase the amount of time an assailant had to spend under fire in a confined space and unable to retaliate. 
It is a popular myth that murder holes – openings in the ceiling of the gateway passage – were used to pour boiling oil or molten lead on attackers; the price of oil and lead and the distance of the gatehouse from fires meant that this was impractical. This method was, however, a common practice in Middle Eastern and Mediterranean castles and fortifications, where such resources were abundant. They were most likely used to drop objects on attackers, or to allow water to be poured on fires to extinguish them. Provision was made in the upper storey of the gatehouse for accommodation so the gate was never left undefended, although this arrangement later evolved to become more comfortable at the expense of defence. During the 13th and 14th centuries the barbican was developed. This consisted of a rampart, ditch, and possibly a tower, in front of the gatehouse, which could be used to further protect the entrance. The purpose of a barbican was not just to provide another line of defence but also to dictate the only approach to the gate. Moat A moat is a ditch surrounding a castle – or dividing one part of a castle from another – and could be either dry or filled with water. It often had a defensive purpose, preventing siege towers from reaching the walls and making undermining more difficult, but it could also be ornamental. Water moats were found in low-lying areas and were usually crossed by a drawbridge, although these were often replaced by stone bridges. The site of the 13th-century Caerphilly Castle in Wales is extensive, and its water defences, created by flooding the valley to the south of the castle, are some of the largest in Western Europe. Battlements Battlements were most often found surmounting curtain walls and the tops of gatehouses, and comprised several elements: crenellations, hoardings, machicolations, and loopholes. Crenellation is the collective name for alternating crenels and merlons: gaps and solid blocks on top of a wall. Hoardings were wooden constructs that projected beyond the wall, allowing defenders to shoot at, or drop objects on, attackers at the base of the wall without having to lean perilously over the crenellations, thereby exposing themselves to retaliatory fire. Machicolations were stone projections on top of a wall with openings that allowed objects to be dropped on an enemy at the base of the wall in a similar fashion to hoardings. Arrowslits Arrowslits, also commonly called loopholes, were narrow vertical openings in defensive walls which allowed arrows or crossbow bolts to be fired on attackers. The narrow slits were intended to protect the defender by providing a very small target, but the size of the opening could also impede the defender if it was too small. A smaller horizontal opening could be added to give an archer a better view for aiming. Sometimes a sally port was included; this could allow the garrison to leave the castle and engage besieging forces. It was usual for the latrines to empty down the external walls of a castle and into the surrounding ditch. Postern A postern is a secondary door or gate in a concealed location, usually in a fortification such as a city wall. Great hall The great hall was a large, decorated room where a lord received his guests. The hall represented the prestige, authority, and richness of the lord. Events such as feasts, banquets, social or ceremonial gatherings, meetings of the military council, and judicial trials were held in the great hall. 
Sometimes the great hall existed as a separate building, in that case, it was called a hall-house. History Antecedents Historian Charles Coulson states that the accumulation of wealth and resources, such as food, led to the need for defensive structures. The earliest fortifications originated in the Fertile Crescent, the Indus Valley, Europe, Egypt, and China where settlements were protected by large walls. In Northern Europe, hill forts were first developed in the Bronze Age, which then proliferated across Europe in the Iron Age. Hillforts in Britain typically used earthworks rather than stone as a building material. Many earthworks survive today, along with evidence of palisades to accompany the ditches. In central and western Europe, oppida emerged in the 2nd century BC; these were densely inhabited fortified settlements, such as the oppidum of Manching. Some oppida walls were built on a massive scale, utilising stone, wood, iron and earth in their construction. The Romans encountered fortified settlements such as hill forts and oppida when expanding their territory into northern Europe. Their defences were often effective, and were only overcome by the extensive use of siege engines and other siege warfare techniques, such as at the Battle of Alesia. The Romans' own fortifications (castra) varied from simple temporary earthworks thrown up by armies on the move, to elaborate permanent stone constructions, notably the milecastles of Hadrian's Wall. Roman forts were generally rectangular with rounded corners – a "playing-card shape". In the medieval period, castles were influenced by earlier forms of elite architecture, contributing to regional variations. Importantly, while castles had military aspects, they contained a recognisable household structure within their walls, reflecting the multi-functional use of these buildings. Origins (9th and 10th centuries) The subject of the emergence of castles in Europe is a complex matter which has led to considerable debate. Discussions have typically attributed the rise of the castle to a reaction to attacks by Magyars, Muslims, and Vikings and a need for private defence. The breakdown of the Carolingian Empire led to the privatisation of government, and local lords assumed responsibility for the economy and justice. However, while castles proliferated in the 9th and 10th centuries the link between periods of insecurity and building fortifications is not always straightforward. Some high concentrations of castles occur in secure places, while some border regions had relatively few castles. It is likely that the castle evolved from the practice of fortifying a lordly home. The greatest threat to a lord's home or hall was fire as it was usually a wooden structure. To protect against this, and keep other threats at bay, there were several courses of action available: create encircling earthworks to keep an enemy at a distance; build the hall in stone; or raise it up on an artificial mound, known as a motte, to present an obstacle to attackers. While the concept of ditches, ramparts, and stone walls as defensive measures is ancient, raising a motte is a medieval innovation. A bank and ditch enclosure was a simple form of defence, and when found without an associated motte is called a ringwork; when the site was in use for a prolonged period, it was sometimes replaced by a more complex structure or enhanced by the addition of a stone curtain wall. 
Building the hall in stone did not necessarily make it immune to fire as it still had windows and a wooden door. This led to the elevation of windows to the second storey – to make it harder to throw objects in – and to move the entrance from ground level to the second storey. These features are seen in many surviving castle keeps, which were the more sophisticated version of halls. Castles were not just defensive sites but also enhanced a lord's control over his lands. They allowed the garrison to control the surrounding area, and formed a centre of administration, providing the lord with a place to hold court. Building a castle sometimes required the permission of the king or other high authority. In 864 the King of West Francia, Charles the Bald, prohibited the construction of castella without his permission and ordered them all to be destroyed. This is perhaps the earliest reference to castles, though military historian R. Allen Brown points out that the word castella may have applied to any fortification at the time. In some countries the monarch had little control over lords, or required the construction of new castles to aid in securing the land so was unconcerned about granting permission – as was the case in England in the aftermath of the Norman Conquest and the Holy Land during the Crusades. Switzerland is an extreme case of there being no state control over who built castles, and as a result there were 4,000 in the country. There are very few castles dated with certainty from the mid-9th century. Converted into a donjon around 950, Château de Doué-la-Fontaine in France is the oldest standing castle in Europe. 11th century From 1000 onwards, references to castles in texts such as charters increased greatly. Historians have interpreted this as evidence of a sudden increase in the number of castles in Europe around this time; this has been supported by archaeological investigation which has dated the construction of castle sites through the examination of ceramics. The increase in Italy began in the 950s, with numbers of castles increasing by a factor of three to five every 50 years, whereas in other parts of Europe such as France and Spain the growth was slower. In 950, Provence was home to 12 castles; by 1000, this figure had risen to 30, and by 1030 it was over 100. Although the increase was slower in Spain, the 1020s saw a particular growth in the number of castles in the region, particularly in contested border areas between Christian and Muslim lands. Despite the common period in which castles rose to prominence in Europe, their form and design varied from region to region. In the early 11th century, the motte and keep – an artificial mound with a palisade and tower on top – was the most common form of castle in Europe, everywhere except Scandinavia. While Britain, France, and Italy shared a tradition of timber construction that was continued in castle architecture, Spain more commonly used stone or mud-brick as the main building material. The Muslim invasion of the Iberian Peninsula in the 8th century introduced a style of building developed in North Africa reliant on tapial, pebbles in cement, where timber was in short supply. Although stone construction would later become common elsewhere, from the 11th century onwards it was the primary building material for Christian castles in Spain, while at the same time timber was still the dominant building material in north-west Europe. 
Historians have interpreted the widespread presence of castles across Europe in the 11th and 12th centuries as evidence that warfare was common, usually between local lords. Castles were introduced into England shortly before the Norman Conquest in 1066. Before the 12th century castles were as uncommon in Denmark as they had been in England before the Norman Conquest. The introduction of castles to Denmark was a reaction to attacks from Wendish pirates, and they were usually intended as coastal defences. The motte and bailey remained the dominant form of castle in England, Wales, and Ireland well into the 12th century. At the same time, castle architecture in mainland Europe became more sophisticated. The donjon was at the centre of this change in castle architecture in the 12th century. Central towers proliferated, and typically had a square plan with thick walls. Their decoration emulated Romanesque architecture, and sometimes incorporated double windows similar to those found in church bell towers. Donjons, which were the residence of the lord of the castle, evolved to become more spacious. The design emphasis of donjons changed to reflect a shift from functional to decorative requirements, imposing a symbol of lordly power upon the landscape. This sometimes led to compromising defence for the sake of display. Innovation and scientific design (12th century) Until the 12th century, stone-built and earth and timber castles were contemporary, but by the late 12th century the number of castles being built went into decline. This has been partly attributed to the higher cost of stone-built fortifications and to the obsolescence of timber and earthwork sites, which meant it was preferable to build in more durable stone. Although superseded by their stone successors, timber and earthwork castles were by no means useless. This is evidenced by the continual maintenance of timber castles over long periods, sometimes several centuries; Owain Glyndŵr's 11th-century timber castle at Sycharth was still in use by the start of the 15th century, its structure having been maintained for four centuries. At the same time there was a change in castle architecture. Until the late 12th century castles generally had few towers; a gateway with few defensive features such as arrowslits or a portcullis; a great keep or donjon, usually square and without arrowslits; and a shape dictated by the lay of the land (the result was often irregular or curvilinear structures). The design of castles was not uniform, but these were features that could be found in a typical castle in the mid-12th century. By the end of the 12th century or the early 13th century, a newly constructed castle could be expected to be polygonal in shape, with towers at the corners to provide enfilading fire for the walls. The towers would have protruded from the walls and featured arrowslits on each level to allow archers to target anyone nearing or at the curtain wall. These later castles did not always have a keep, but this may have been because the more complex design of the castle as a whole drove up costs and the keep was sacrificed to save money. The larger towers provided space for habitation to make up for the loss of the donjon. Where keeps did exist, they were no longer square but polygonal or cylindrical. 
Gateways were more strongly defended, with the entrance to the castle usually between two half-round towers which were connected by a passage above the gateway – although there was great variety in the styles of gateway and entrances – and one or more portcullis. A peculiar feature of Muslim castles in the Iberian Peninsula was the use of detached towers, called Albarrana towers, around the perimeter as can be seen at the Alcazaba of Badajoz. Probably developed in the 12th century, the towers provided flanking fire. They were connected to the castle by removable wooden bridges, so if the towers were captured the rest of the castle was not accessible. When seeking to explain this change in the complexity and style of castles, antiquarians found their answer in the Crusades. It seemed that the Crusaders had learned much about fortification from their conflicts with the Saracens and exposure to Byzantine architecture. There were legends such as that of Lalys – an architect from Palestine who reputedly went to Wales after the Crusades and greatly enhanced the castles in the south of the country – and it was assumed that great architects such as James of Saint George originated in the East. In the mid-20th century this view was cast into doubt. Legends were discredited, and in the case of James of Saint George it was proven that he came from Saint-Georges-d'Espéranche, in France. If the innovations in fortification had derived from the East, it would have been expected for their influence to be seen from 1100 onwards, immediately after the Christians were victorious in the First Crusade (1096–1099), rather than nearly 100 years later. Remains of Roman structures in Western Europe were still standing in many places, some of which had flanking round-towers and entrances between two flanking towers. The castle builders of Western Europe were aware of and influenced by Roman design; late Roman coastal forts on the English "Saxon Shore" were reused and in Spain the wall around the city of Ávila imitated Roman architecture when it was built in 1091. Historian Smail in Crusading warfare argued that the case for the influence of Eastern fortification on the West has been overstated, and that Crusaders of the 12th century in fact learned very little about scientific design from Byzantine and Saracen defences. A well-sited castle that made use of natural defences and had strong ditches and walls had no need for a scientific design. An example of this approach is Kerak. Although there were no scientific elements to its design, it was almost impregnable, and in 1187 Saladin chose to lay siege to the castle and starve out its garrison rather than risk an assault. During the late 11th and 12th centuries in what is now south-central Turkey the Hospitallers, Teutonic Knights and Templars established themselves in the Armenian Kingdom of Cilicia, where they discovered an extensive network of sophisticated fortifications which had a profound impact on the architecture of Crusader castles. Most of the Armenian military sites in Cilicia are characterized by: multiple bailey walls laid with irregular plans to follow the sinuosities of the outcrops; rounded and especially horseshoe-shaped towers; finely-cut often rusticated ashlar facing stones with intricate poured cores; concealed postern gates and complex bent entrances with slot machicolations; embrasured loopholes for archers; barrel, pointed or groined vaults over undercrofts, gates and chapels; and cisterns with elaborate scarped drains. 
Civilian settlements are often found in the immediate proximity of these fortifications. After the First Crusade, Crusaders who did not return to their homes in Europe helped found the Crusader states of the Principality of Antioch, the County of Edessa, the Kingdom of Jerusalem, and the County of Tripoli. The castles they founded to secure their acquisitions were designed mostly by Syrian master-masons. Their design was very similar to that of a Roman fort or Byzantine tetrapyrgia, which were square in plan and had square towers at each corner that did not project much beyond the curtain wall. The keep of these Crusader castles would have had a square plan and generally been undecorated. While castles were used to hold a site and control movement of armies, in the Holy Land some key strategic positions were left unfortified. Castle architecture in the East became more complex around the late 12th and early 13th centuries after the stalemate of the Third Crusade (1189–1192). Both Christians and Muslims created fortifications, and the character of each was different. Saphadin, the 13th-century ruler of the Saracens, created structures with large rectangular towers that influenced Muslim architecture and were copied again and again; however, they had little influence on Crusader castles. 13th to 15th centuries In the early 13th century, Crusader castles were mostly built by Military Orders including the Knights Hospitaller, Knights Templar, and Teutonic Knights. The orders were responsible for the foundation of sites such as Krak des Chevaliers, Margat, and Belvoir. Design varied not just between orders, but between individual castles, though it was common for those founded in this period to have concentric defences. The concept, which originated in castles such as Krak des Chevaliers, was to remove the reliance on a central strongpoint and to emphasise the defence of the curtain walls. There would be multiple rings of defensive walls, one inside the other, with the inner ring rising above the outer so that its field of fire was not completely obscured. If assailants made it past the first line of defence they would be caught in the killing ground between the inner and outer walls and have to assault the second wall. Concentric castles were widely copied across Europe; for instance, when Edward I of England – who had himself been on Crusade – built castles in Wales in the late 13th century, four of the eight he founded had a concentric design. Not all the features of the Crusader castles from the 13th century were emulated in Europe. For instance, it was common in Crusader castles to have the main gate in the side of a tower and for there to be two turns in the passageway, lengthening the time it took for someone to reach the outer enclosure. It is rare for this bent entrance to be found in Europe. One of the effects of the Livonian Crusade in the Baltic was the introduction of stone and brick fortifications. Although there were hundreds of wooden castles in Prussia and Livonia, the use of bricks and mortar was unknown in the region before the Crusaders. Until the 13th and start of the 14th centuries, their design was heterogeneous; however, this period saw the emergence of a standard plan in the region: a square plan, with four wings around a central courtyard. It was common for castles in the East to have arrowslits in the curtain wall at multiple levels; contemporary builders in Europe were wary of this as they believed it weakened the wall. 
Arrowslits did not compromise the wall's strength, but it was not until Edward I's programme of castle building that they were widely adopted in Europe. The Crusades also led to the introduction of machicolations into Western architecture. Until the 13th century, the tops of towers had been surrounded by wooden galleries, allowing defenders to drop objects on assailants below. Although machicolations performed the same purpose as the wooden galleries, they were probably an Eastern invention rather than an evolution of the wooden form. Machicolations were used in the East long before the arrival of the Crusaders, and perhaps as early as the first half of the 8th century in Syria. The greatest period of castle building in Spain was in the 11th to 13th centuries, and they were most commonly found in the disputed borders between Christian and Muslim lands. Conflict and interaction between the two groups led to an exchange of architectural ideas, and Spanish Christians adopted the use of detached towers. The Spanish Reconquista, driving the Muslims out of the Iberian Peninsula, was complete in 1492. Although France has been described as "the heartland of medieval architecture", the English were at the forefront of castle architecture in the 12th century. French historian François Gebelin wrote: "The great revival in military architecture was led, as one would naturally expect, by the powerful kings and princes of the time; by the sons of William the Conqueror and their descendants, the Plantagenets, when they became dukes of Normandy. These were the men who built all the most typical twelfth-century fortified castles remaining today". Despite this, by the beginning of the 15th century, the rate of castle construction in England and Wales went into decline. The new castles were generally of a lighter build than earlier structures and presented few innovations, although strong sites were still created such as that of Raglan in Wales. At the same time, French castle architecture came to the fore and led the way in the field of medieval fortifications. Across Europe – particularly the Baltic, Germany, and Scotland – castles were built well into the 16th century. Advent of gunpowder Artillery powered by gunpowder was introduced to Europe in the 1320s and spread quickly. Handguns, which were initially unpredictable and inaccurate weapons, were not recorded until the 1380s. Castles were adapted to allow small artillery pieces – averaging between  – to fire from towers. These guns were too heavy for a man to carry and fire, but if he supported the butt end and rested the muzzle on the edge of the gun port he could fire the weapon. The gun ports developed in this period show a unique feature, that of a horizontal timber across the opening. A hook on the end of the gun could be latched over the timber so the gunner did not have to take the full recoil of the weapon. This adaptation is found across Europe, and although the timber rarely survives, there is an intact example at Castle Doornenburg in the Netherlands. Gunports were keyhole shaped, with a circular hole at the bottom for the weapon and a narrow slit on top to allow the gunner to aim. This form is very common in castles adapted for guns, found in Egypt, Italy, Scotland, and Spain, and elsewhere in between. Other types of port, though less common, were horizontal slits – allowing only lateral movement – and large square openings, which allowed greater movement. 
The use of guns for defence gave rise to artillery castles, such as that of Château de Ham in France. Defences against guns were not developed until a later stage. Ham is an example of the trend for new castles to dispense with earlier features such as machicolations, tall towers, and crenellations. Bigger guns were developed, and in the 15th century became an alternative to siege engines such as the trebuchet. The benefits of large guns over trebuchets – the most effective siege engine of the Middle Ages before the advent of gunpowder – were those of a greater range and power. In an effort to make them more effective, guns were made ever bigger, although this hampered their ability to reach remote castles. By the 1450s guns were the preferred siege weapon, and their effectiveness was demonstrated by Mehmed II at the Fall of Constantinople. The response towards more effective cannons was to build thicker walls and to prefer round towers, as the curving sides were more likely to deflect a shot than a flat surface. While this sufficed for new castles, pre-existing structures had to find a way to cope with being battered by cannon. An earthen bank could be piled behind a castle's curtain wall to absorb some of the shock of impact. Often, castles constructed before the age of gunpowder were incapable of using guns as their wall-walks were too narrow. A solution to this was to pull down the top of a tower and to fill the lower part with the rubble to provide a surface for the guns to fire from. Lowering the defences in this way had the effect of making them easier to scale with ladders. A more popular alternative defence, which avoided damaging the castle, was to establish bulwarks beyond the castle's defences. These could be built from earth or stone and were used to mount weapons. Bastions and star forts (16th century) Around 1500, the innovation of the angled bastion was developed in Italy. With developments such as these, Italy pioneered permanent artillery fortifications, which took over from the defensive role of castles. From this evolved star forts, also known as trace italienne. The elite responsible for castle construction had to choose between the new type that could withstand cannon fire and the earlier, more elaborate style. The first was ugly and uncomfortable and the latter was less secure, although it did offer greater aesthetic appeal and value as a status symbol. The second choice proved to be more popular as it became apparent that there was little point in trying to make the site genuinely defensible in the face of cannon. For a variety of reasons, not least of which is that many castles have no recorded history, there is no firm number of castles built in the medieval period. However, it has been estimated that between 75,000 and 100,000 were built in western Europe; of these around 1,700 were in England and Wales and around 14,000 in German-speaking areas. Some true castles were built in the Americas by the Spanish and French colonies. The first stage of Spanish fort construction has been termed the "castle period", which lasted from 1492 until the end of the 16th century. Starting with Fortaleza Ozama, "these castles were essentially European medieval castles transposed to America". Among other defensive structures (including forts and citadels), castles were also built in New France towards the end of the 17th century. 
In Montreal, where artillery was not as developed as on the battlefields of Europe, some of the region's outlying forts were built like the fortified manor houses of France. Fort Longueuil, built from 1695 to 1698 by a baronial family, has been described as "the most medieval-looking fort built in Canada". The manor house and stables were within a fortified bailey, with a tall round turret in each corner. The "most substantial castle-like fort" near Montréal was Fort Senneville, built in 1692 with square towers connected by thick stone walls, as well as a fortified windmill. Stone forts such as these served as defensive residences, as well as imposing structures to prevent Iroquois incursions. Although castle construction faded towards the end of the 16th century, castles did not necessarily all fall out of use. Some retained a role in local administration and became law courts, while others are still handed down in aristocratic families as hereditary seats. A particularly famous example of this is Windsor Castle in England, which was founded in the 11th century and is home to the monarch of the United Kingdom. In other cases they still had a role in defence. Tower houses, which are closely related to castles and include pele towers, were defended towers that were permanent residences built in the 14th to 17th centuries. Especially common in Ireland and Scotland, they could be up to five storeys high; they succeeded common enclosure castles and were built by a greater social range of people. While unlikely to provide as much protection as a more complex castle, they offered security against raiders and other small threats. Later use and revival castles According to archaeologists Oliver Creighton and Robert Higham, "the great country houses of the seventeenth to twentieth centuries were, in a social sense, the castles of their day". Though there was a trend for the elite to move from castles into country houses in the 17th century, castles were not completely useless. In later conflicts, such as the English Civil War (1641–1651), many castles were refortified, although subsequently slighted to prevent them from being used again. Some country residences, which were not meant to be fortified, were given a castle appearance to scare away potential invaders, for example by adding turrets and using small windows. An example of this is the 16th-century Bubaqra Castle in Bubaqra, Malta, which was modified in the 18th century. Revival or mock castles became popular as a manifestation of a Romantic interest in the Middle Ages and chivalry, and as part of the broader Gothic Revival in architecture. Examples of these castles include Chapultepec in Mexico, Neuschwanstein in Germany, and Edwin Lutyens' Castle Drogo (1911–1930) – the last flicker of this movement in the British Isles. While churches and cathedrals in a Gothic style could faithfully imitate medieval examples, new country houses built in a "castle style" differed internally from their medieval predecessors. This was because to be faithful to medieval design would have left the houses cold and dark by contemporary standards. Artificial ruins, built to resemble remnants of historic edifices, were also a hallmark of the period. They were usually built as centrepieces in aristocratic planned landscapes. Follies were similar, although they differed from artificial ruins in that they were not part of a planned landscape, but rather seemed to have no reason for being built. 
Both drew on elements of castle architecture such as castellation and towers, but served no military purpose and were solely for display. Toy castles are a common children's attraction in playing fields and fun parks, such as the castle of the Playmobil FunPark in Ħal Far, Malta. Construction Once the site of a castle had been selected – whether a strategic position or one intended to dominate the landscape as a mark of power – the building material had to be chosen. An earth and timber castle was cheaper and easier to erect than one built from stone. The costs involved in construction are not well-recorded, and most surviving records relate to royal castles. A castle with earthen ramparts, a motte, timber defences and buildings could have been constructed by an unskilled workforce. The source of man-power was probably the local lordship, and the tenants would already have had the skills of felling trees, digging, and working timber necessary for an earth and timber castle. Although the workforce was possibly coerced into working for their lord, the construction of an earth and timber castle would not have been a drain on a lord's funds. In terms of time, it has been estimated that an average sized motte – high and wide at the summit – would have taken 50 people about 40 working days. An exceptionally expensive motte and bailey was that of Clones in Ireland, built in 1211 for UK£20. The high cost, relative to other castles of its type, was because labourers had to be imported. The cost of building a castle varied according to factors such as their complexity and transport costs for material. It is certain that stone castles cost a great deal more than those built from earth and timber. Even a very small tower, such as Peveril Castle, would have cost around UK£200. In the middle were castles such as Orford, which was built in the late 12th century for UK£1,400, and at the upper end were those such as Dover, which cost about UK£7,000 between 1181 and 1191. Spending on the scale of the vast castles such as Château Gaillard (an estimated UK£15,000 to UK£20,000 between 1196 and 1198) was easily supported by The Crown, but for lords of smaller areas, castle building was a very serious and costly undertaking. It was usual for a stone castle to take the best part of a decade to finish. The cost of a large castle built over this time (anywhere from UK£1,000 to UK£10,000) would take the income from several manors, severely impacting a lord's finances. Costs in the late 13th century were of a similar order, with castles such as Beaumaris and Rhuddlan costing UK£14,500 and UK£9,000 respectively. Edward I's campaign of castle-building in Wales cost UK£80,000 between 1277 and 1304, and UK£95,000 between 1277 and 1329. Renowned designer Master James of Saint George, responsible for the construction of Beaumaris, explained the cost. Not only were stone castles expensive to build in the first place, but their maintenance was a constant drain. They contained a lot of timber, which was often unseasoned and as a result needed careful upkeep. For example, it is documented that in the late 12th century repairs at castles such as Exeter and Gloucester cost between UK£20 and UK£50 annually. Medieval machines and inventions, such as the treadwheel crane, became indispensable during construction, and techniques of building wooden scaffolding were improved upon from Antiquity. When building in stone a prominent concern of medieval builders was to have quarries close at hand. 
There are examples of some castles where stone was quarried on site, such as Chinon, Château de Coucy and Château Gaillard. When it was built in 992 in France the stone tower at Château de Langeais was high, wide, and long with walls averaging . The walls contain of stone and have a total surface (both inside and out) of . The tower is estimated to have taken 83,000 average working days to complete, most of which was unskilled labour. Many countries had both timber and stone castles, however Denmark had few quarries and as a result most of its castles are earth and timber affairs, or later on built from brick. Brick-built structures were not necessarily weaker than their stone-built counterparts. Brick castles are less common in England than stone or earth and timber constructions, and often it was chosen for its aesthetic appeal or because it was fashionable, encouraged by the brick architecture of the Low Countries. For example, when Tattershall Castle in England was built between 1430 and 1450, there was plenty of stone available nearby, but the owner, Lord Cromwell, chose to use brick. About 700,000 bricks were used to build the castle, which has been described as "the finest piece of medieval brick-work in England". Most Spanish castles were built from stone, whereas castles in Eastern Europe were usually of timber construction. On the Construction of the Castle of Safed, written in the early 1260s, describes the construction of a new castle at Safed. It is "one of the fullest" medieval accounts of a castle's construction. Social centre Due to the lord's presence in a castle, it was a centre of administration from where he controlled his lands. He relied on the support of those below him, as without the support of his more powerful tenants a lord could expect his power to be undermined. Successful lords regularly held court with those immediately below them on the social scale, but absentees could expect to find their influence weakened. Larger lordships could be vast, and it would be impractical for a lord to visit all his properties regularly, so deputies were appointed. This especially applied to royalty, who sometimes owned land in different countries. To allow the lord to concentrate on his duties regarding administration, he had a household of servants to take care of chores such as providing food. The household was run by a chamberlain, while a treasurer took care of the estate's written records. Royal households took essentially the same form as baronial households, although on a much larger scale and the positions were more prestigious. An important role of the household servants was the preparation of food; the castle kitchens would have been a busy place when the castle was occupied, called on to provide large meals. Without the presence of a lord's household, usually because he was staying elsewhere, a castle would have been a quiet place with few residents, focused on maintaining the castle. As social centres castles were important places for display. Builders took the opportunity to draw on symbolism, through the use of motifs, to evoke a sense of chivalry that was aspired to in the Middle Ages amongst the elite. Later structures of the Romantic revival would draw on elements of castle architecture such as battlements for the same purpose. Castles have been compared with cathedrals as objects of architectural pride, and some castles incorporated gardens as ornamental features. 
The right to crenellate, when granted by a monarch – though it was not always necessary – was important not just because it allowed a lord to defend his property but because crenellations and other accoutrements associated with castles were prestigious through their use by the elite. Licences to crenellate were also proof of a relationship with or favour from the monarch, who was the one responsible for granting permission. Courtly love was the eroticisation of love between the nobility. Emphasis was placed on restraint between lovers. Though sometimes expressed through chivalric events such as tournaments, where knights would fight wearing a token from their lady, it could also be private and conducted in secret. The legend of Tristan and Iseult is one example of stories of courtly love told in the Middle Ages. It was an ideal of love between two people not married to each other, although the man might be married to someone else. It was not uncommon or ignoble for a lord to be adulterous – Henry I of England had over 20 bastards for instance – but for a lady to be promiscuous was seen as dishonourable. The purpose of marriage between the medieval elites was to secure land. Girls were married in their teens, but boys did not marry until they came of age. There is a popular conception that women played a peripheral role in the medieval castle household, and that it was dominated by the lord himself. This derives from the image of the castle as a martial institution, but most castles in England, France, Ireland, and Scotland were never involved in conflicts or sieges, so the domestic life is a neglected facet. The lady was given a dower of her husband's estates – usually about a third – which was hers for life, and her husband would inherit on her death. It was her duty to administer them directly, as the lord administered his own land. Despite generally being excluded from military service, a woman could be in charge of a castle, either on behalf of her husband or if she was widowed. Because of their influence within the medieval household, women influenced construction and design, sometimes through direct patronage; historian Charles Coulson emphasises the role of women in applying "a refined aristocratic taste" to castles due to their long-term residence. Locations and landscapes The positioning of castles was influenced by the available terrain. Whereas hill castles such as Marksburg were common in Germany, where 66 per cent of all known medieval castles were in highland areas while 34 per cent were on low-lying land, they formed a minority of sites in England. Because of the range of functions they had to fulfil, castles were built in a variety of locations. Multiple factors were considered when choosing a site, balancing the need for a defendable position with other considerations such as proximity to resources. For instance, many castles are located near Roman roads, which remained important transport routes in the Middle Ages, or could lead to the alteration or creation of new road systems in the area. Where available it was common to exploit pre-existing defences, such as building within a Roman fort or the ramparts of an Iron Age hillfort. A prominent site that overlooked the surrounding area and offered some natural defences may also have been chosen because its visibility made it a symbol of power. 
Urban castles were particularly important in controlling centres of population and production, especially with an invading force, for instance in the aftermath of the Norman Conquest of England in the 11th century the majority of royal castles were built in or near towns. As castles were not simply military buildings but centres of administration and symbols of power, they had a significant impact on the surrounding landscape. Placed by a frequently-used road or river, the toll castle ensured that a lord would get his due toll money from merchants. Rural castles were often associated with mills and field systems due to their role in managing the lord's estate, which gave them greater influence over resources. Others were adjacent to or in royal forests or deer parks and were important in their upkeep. Fish ponds were a luxury of the lordly elite, and many were found next to castles. Not only were they practical in that they ensured a water supply and fresh fish, but they were a status symbol as they were expensive to build and maintain. Although sometimes the construction of a castle led to the destruction of a village, such as at Eaton Socon in England, it was more common for the villages nearby to have grown as a result of the presence of a castle. Sometimes planned towns or villages were created around a castle. The benefits of castle building on settlements was not confined to Europe. When the 13th-century Safad Castle was founded in Galilee in the Holy Land, the 260 villages benefitted from the inhabitants' newfound ability to move freely. When built, a castle could result in the restructuring of the local landscape, with roads moved for the convenience of the lord. Settlements could also grow naturally around a castle, rather than being planned, due to the benefits of proximity to an economic centre in a rural landscape and the safety given by the defences. Not all such settlements survived, as once the castle lost its importance – perhaps succeeded by a manor house as the centre of administration – the benefits of living next to a castle vanished and the settlement depopulated. During and shortly after the Norman Conquest of England, castles were inserted into important pre-existing towns to control and subdue the populace. They were usually located near any existing town defences, such as Roman walls, although this sometimes resulted in the demolition of structures occupying the desired site. In Lincoln, 166 houses were destroyed to clear space for the castle, and in York agricultural land was flooded to create a moat for the castle. As the military importance of urban castles waned from their early origins, they became more important as centres of administration, and their financial and judicial roles. When the Normans invaded Ireland, Scotland, and Wales in the 11th and 12th centuries, settlement in those countries was predominantly non-urban, and the foundation of towns was often linked with the creation of a castle. The location of castles in relation to high status features, such as fish ponds, was a statement of power and control of resources. Also often found near a castle, sometimes within its defences, was the parish church. This signified a close relationship between feudal lords and the Church, one of the most important institutions of medieval society. Even elements of castle architecture that have usually been interpreted as military could be used for display. 
The water features of Kenilworth Castle in England – comprising a moat and several satellite ponds – forced anyone approaching a water castle entrance to take a very indirect route, walking around the defences before the final approach towards the gateway. Another example is that of the 14th-century Bodiam Castle, also in England; although it appears to be a state of the art, advanced castle it is in a site of little strategic importance, and the moat was shallow and more likely intended to make the site appear impressive than as a defence against mining. The approach was long and took the viewer around the castle, ensuring they got a good look before entering. Moreover, the gunports were impractical and unlikely to have been effective. Warfare As a static structure, castles could often be avoided. Their immediate area of influence was about and their weapons had a short range even early in the age of artillery. However, leaving an enemy behind would allow them to interfere with communications and make raids. Garrisons were expensive and as a result often small unless the castle was important. Cost also meant that in peacetime garrisons were smaller, and small castles were manned by perhaps a couple of watchmen and gate-guards. Even in war, garrisons were not necessarily large as too many people in a defending force would strain supplies and impair the castle's ability to withstand a long siege. In 1403, a force of 37 archers successfully defended Caernarfon Castle against two assaults by Owain Glyndŵr's allies during a long siege, demonstrating that a small force could be effective. Early on, manning a castle was a feudal duty of vassals to their magnates, and magnates to their kings, however this was later replaced with paid forces. A garrison was usually commanded by a constable whose peacetime role would have been looking after the castle in the owner's absence. Under him would have been knights who by benefit of their military training would have acted as a type of officer class. Below them were archers and bowmen, whose role was to prevent the enemy reaching the walls as can be seen by the positioning of arrowslits. If it was necessary to seize control of a castle an army could either launch an assault or lay siege. It was more efficient to starve the garrison out than to assault it, particularly for the most heavily defended sites. Without relief from an external source, the defenders would eventually submit. Sieges could last weeks, months, and in rare cases years if the supplies of food and water were plentiful. A long siege could slow down the army, allowing help to come or for the enemy to prepare a larger force for later. Such an approach was not confined to castles, but was also applied to the fortified towns of the day. On occasion, siege castles would be built to defend the besiegers from a sudden sally and would have been abandoned after the siege ended one way or another. If forced to assault a castle, there were many options available to the attackers. For wooden structures, such as early motte-and-baileys, fire was a real threat and attempts would be made to set them alight as can be seen in the Bayeux Tapestry. Projectile weapons had been used since antiquity and the mangonel and petraria – from Eastern and Roman origins respectively – were the main two that were used into the Middle Ages. The trebuchet, which probably evolved from the petraria in the 13th century, was the most effective siege weapon before the development of cannons. 
These weapons were vulnerable to fire from the castle as they had a short range and were large machines. Conversely, weapons such as trebuchets could be fired from within the castle due to the high trajectory of their projectiles, and would be protected from direct fire by the curtain walls. Ballistas or springalds were siege engines that worked on the same principles as crossbows. Originating in Ancient Greece, they used tension to project a bolt or javelin. Missiles fired from these engines had a lower trajectory than trebuchets or mangonels and were more accurate. They were more commonly used against the garrison than against the buildings of a castle. Eventually cannons developed to the point where they were more powerful and had a greater range than the trebuchet, and became the main weapon in siege warfare. Walls could be undermined by a sap. A mine leading to the wall would be dug and, once the target had been reached, the wooden supports preventing the tunnel from collapsing would be burned. It would cave in and bring down the structure above. Building a castle on a rock outcrop or surrounding it with a wide, deep moat helped prevent this. A counter-mine could be dug towards the besiegers' tunnel; assuming the two converged, this would result in underground hand-to-hand combat. Mining was so effective that during the siege of Margat in 1285, when the garrison were informed that a sap was being dug, they surrendered. Battering rams were also used, usually in the form of a tree trunk given an iron cap. They were used to force open the castle gates, although they were sometimes used against walls with less effect. As an alternative to the time-consuming task of creating a breach, an escalade could be attempted to capture the walls with fighting along the walkways behind the battlements. In this instance, attackers would be vulnerable to arrow fire. A safer option for those assaulting a castle was to use a siege tower, sometimes called a belfry. Once ditches around a castle were partially filled in, these wooden, movable towers could be pushed against the curtain wall. As well as offering some protection for those inside, a siege tower could overlook the interior of a castle, giving bowmen an advantageous position from which to unleash missiles. 
Castle
[ "Engineering" ]
12,873
[ "Construction", "Masonry" ]
49,569
https://en.wikipedia.org/wiki/Bayes%27%20theorem
Bayes' theorem (alternatively Bayes' law or Bayes' rule, after Thomas Bayes) gives a mathematical rule for inverting conditional probabilities, allowing one to find the probability of a cause given its effect. For example, if the risk of developing health problems is known to increase with age, Bayes' theorem allows the risk to someone of a known age to be assessed more accurately by conditioning it relative to their age, rather than assuming that the person is typical of the population as a whole. Based on Bayes' law, both the prevalence of a disease in a given population and the error rate of an infectious disease test must be taken into account to evaluate the meaning of a positive test result and avoid the base-rate fallacy. One of Bayes' theorem's many applications is Bayesian inference, an approach to statistical inference, where it is used to invert the probability of observations given a model configuration (i.e., the likelihood function) to obtain the probability of the model configuration given the observations (i.e., the posterior probability). History Bayes' theorem is named after Thomas Bayes, a minister, statistician, and philosopher. Bayes used conditional probability to provide an algorithm (his Proposition 9) that uses evidence to calculate limits on an unknown parameter. His work was published in 1763 as An Essay Towards Solving a Problem in the Doctrine of Chances. Bayes studied how to compute a distribution for the probability parameter of a binomial distribution (in modern terminology). After Bayes's death, his family gave his papers to a friend, the minister, philosopher, and mathematician Richard Price. Price significantly edited the unpublished manuscript for two years before sending it to a friend who read it aloud at the Royal Society on 23 December 1763. Price edited Bayes's major work "An Essay Towards Solving a Problem in the Doctrine of Chances" (1763), which appeared in Philosophical Transactions, and contains Bayes' theorem. Price wrote an introduction to the paper that provides some of the philosophical basis of Bayesian statistics and chose one of the two solutions Bayes offered. In 1765, Price was elected a Fellow of the Royal Society in recognition of his work on Bayes's legacy. On 27 April, a letter sent to his friend Benjamin Franklin was read out at the Royal Society, and later published, in which Price applies this work to population and computing 'life-annuities'. Independently of Bayes, Pierre-Simon Laplace used conditional probability to formulate the relation of an updated posterior probability from a prior probability, given evidence. He reproduced and extended Bayes's results in 1774, apparently unaware of Bayes's work, and summarized his results in Théorie analytique des probabilités (1812). The Bayesian interpretation of probability was developed mainly by Laplace. About 200 years later, Sir Harold Jeffreys put Bayes's algorithm and Laplace's formulation on an axiomatic basis, writing in a 1973 book that Bayes' theorem "is to the theory of probability what the Pythagorean theorem is to geometry". Stephen Stigler used a Bayesian argument to conclude that Bayes' theorem was discovered by Nicholas Saunderson, a blind English mathematician, some time before Bayes, but that is disputed. Martyn Hooper and Sharon McGrayne have argued that Richard Price's contribution was substantial. Statement of theorem Bayes' theorem is stated mathematically as the following equation: P(A | B) = P(B | A) P(A) / P(B), where A and B are events and P(B) ≠ 0. 
P(A | B) is a conditional probability: the probability of event A occurring given that B is true. It is also called the posterior probability of A given B. P(B | A) is also a conditional probability: the probability of event B occurring given that A is true. It can also be interpreted as the likelihood of A given a fixed B because P(B | A) = L(A | B). P(A) and P(B) are the probabilities of observing A and B respectively without any given conditions; they are known as the prior probability and marginal probability. Proof For events Bayes' theorem may be derived from the definition of conditional probability: P(A | B) = P(A ∩ B) / P(B), where P(A ∩ B) is the probability of both A and B being true. Similarly, P(B | A) = P(A ∩ B) / P(A). Solving for P(A ∩ B) and substituting into the above expression for P(A | B) yields Bayes' theorem: P(A | B) = P(B | A) P(A) / P(B). For continuous random variables For two continuous random variables X and Y, Bayes' theorem may be analogously derived from the definition of conditional density: fX|Y=y(x) = fX,Y(x, y) / fY(y) and fY|X=x(y) = fX,Y(x, y) / fX(x). Therefore, fX|Y=y(x) = fY|X=x(y) fX(x) / fY(y). General case Let PY|X=x be the conditional distribution of Y given X = x and let PX be the distribution of X. The joint distribution is then PX,Y(dx, dy) = PY|X=x(dy) PX(dx). The conditional distribution PX|Y=y of X given Y = y is then determined by PX|Y=y(A) = E(1A(X) | Y = y). Existence and uniqueness of the needed conditional expectation is a consequence of the Radon–Nikodym theorem. This was formulated by Kolmogorov in 1933. Kolmogorov underlines the importance of conditional probability, writing, "I wish to call attention to ... the theory of conditional probabilities and conditional expectations". Bayes' theorem determines the posterior distribution from the prior distribution. Uniqueness requires continuity assumptions. Bayes' theorem can be generalized to include improper prior distributions such as the uniform distribution on the real line. Modern Markov chain Monte Carlo methods have boosted the importance of Bayes' theorem, including in cases with improper priors. Examples Recreational mathematics Bayes' rule and computing conditional probabilities provide a method to solve a number of popular puzzles, such as the Three Prisoners problem, the Monty Hall problem, the Two Child problem, and the Two Envelopes problem. Drug testing Suppose a particular test for whether someone has been using cannabis is 90% sensitive, meaning the true positive rate (TPR) = 0.90. Therefore, it leads to 90% true positive results (correct identification of drug use) for cannabis users. The test is also 80% specific, meaning true negative rate (TNR) = 0.80. Therefore, the test correctly identifies 80% of non-use for non-users, but also generates 20% false positives, or false positive rate (FPR) = 0.20, for non-users. Assuming 0.05 prevalence, meaning 5% of people use cannabis, what is the probability that a random person who tests positive is really a cannabis user? The positive predictive value (PPV) of a test is the proportion of persons who are actually positive out of all those testing positive, and can be calculated from a sample as: PPV = True positive / Tested positive. If sensitivity, specificity, and prevalence are known, PPV can be calculated using Bayes' theorem. Let P(User | Positive) mean "the probability that someone is a cannabis user given that they test positive", which is what PPV means. We can write: P(User | Positive) = P(Positive | User) P(User) / P(Positive) = P(Positive | User) P(User) / [P(Positive | User) P(User) + P(Positive | Non-user) P(Non-user)] = (0.90 × 0.05) / (0.90 × 0.05 + 0.20 × 0.95) ≈ 0.19. The denominator is a direct application of the Law of Total Probability. In this case, it says that the probability that someone tests positive is the probability that a user tests positive times the probability of being a user, plus the probability that a non-user tests positive, times the probability of being a non-user. 
This is true because the classifications user and non-user form a partition of a set, namely the set of people who take the drug test. This combined with the definition of conditional probability results in the above statement. In other words, if someone tests positive, the probability that they are a cannabis user is only 19%—because in this group, only 5% of people are users, and most positives are false positives coming from the remaining 95%. If 1,000 people were tested: 950 are non-users, and 190 of them give a false positive (0.20 × 950); 50 are users, and 45 of them give a true positive (0.90 × 50). The 1,000 people thus have 235 positive tests, of which only 45 are genuine, about 19%. Sensitivity or specificity The importance of specificity can be seen by showing that even if sensitivity is raised to 100% and specificity remains at 80%, the probability that someone who tests positive is a cannabis user rises only from 19% to 21%, but if the sensitivity is held at 90% and the specificity is increased to 95%, the probability rises to 49%. Cancer rate If all patients with pancreatic cancer have a certain symptom, it does not follow that anyone who has that symptom has a 100% chance of getting pancreatic cancer. Assuming the incidence rate of pancreatic cancer is 1/100000, while 10/99999 healthy individuals have the same symptoms worldwide, the probability of having pancreatic cancer given the symptoms is 9.1%, and the other 90.9% could be "false positives" (that is, falsely said to have cancer; "positive" is a confusing term when, as here, the test gives bad news). Based on the incidence rate, out of 100,000 people, 1 would be expected to have pancreatic cancer (and, by assumption, the symptom), while about 10 of the remaining 99,999 would have the symptom without the cancer. These numbers can then be used to calculate the probability of having cancer when you have the symptoms: P(Cancer | Symptoms) = 1 / (1 + 10) ≈ 9.1%. Defective item rate A factory produces items using three machines—A, B, and C—which account for 20%, 30%, and 50% of its output, respectively. Of the items produced by machine A, 5% are defective, while 3% of B's items and 1% of C's are defective. If a randomly selected item is defective, what is the probability it was produced by machine C? Once again, the answer can be reached without using the formula by applying the conditions to a hypothetical number of cases. For example, if the factory produces 1,000 items, 200 will be produced by A, 300 by B, and 500 by C. Machine A will produce 5% × 200 = 10 defective items, B 3% × 300 = 9, and C 1% × 500 = 5, for a total of 24. Thus 24/1000 (2.4%) of the total output will be defective and the likelihood that a randomly selected defective item was produced by machine C is 5/24 (~20.83%). This problem can also be solved using Bayes' theorem: Let Xi denote the event that a randomly chosen item was made by the i-th machine (for i = A, B, C). Let Y denote the event that a randomly chosen item is defective. Then, we are given the following information: If the item was made by the first machine, then the probability that it is defective is 0.05; that is, P(Y | XA) = 0.05. Overall, we have P(Y | XA) = 0.05, P(Y | XB) = 0.03, P(Y | XC) = 0.01, and P(XA) = 0.20, P(XB) = 0.30, P(XC) = 0.50. To answer the original question, we first find P(Y). That can be done in the following way: P(Y) = Σi P(Y | Xi) P(Xi) = 0.05 × 0.20 + 0.03 × 0.30 + 0.01 × 0.50 = 0.024. Hence, 2.4% of the total output is defective. We are given that Y has occurred and we want to calculate the conditional probability of XC. By Bayes' theorem, P(XC | Y) = P(Y | XC) P(XC) / P(Y) = (0.01 × 0.50) / 0.024 = 5/24. Given that the item is defective, the probability that it was made by machine C is 5/24. C produces half of the total output but a much smaller fraction of the defective items. 
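The figures in the drug-testing example above are easy to check mechanically. The following is a minimal Python sketch, not part of the original article; the helper name ppv and the variable names are illustrative only. It reproduces the quoted positive predictive values from sensitivity, specificity, and prevalence using Bayes' theorem and the law of total probability.

def ppv(sensitivity, specificity, prevalence):
    # P(user | positive) by Bayes' theorem; the denominator is the law of total probability.
    p_pos_given_user = sensitivity
    p_pos_given_nonuser = 1 - specificity  # false positive rate
    p_pos = p_pos_given_user * prevalence + p_pos_given_nonuser * (1 - prevalence)
    return p_pos_given_user * prevalence / p_pos

print(round(ppv(0.90, 0.80, 0.05), 2))  # 0.19 - the baseline example
print(round(ppv(1.00, 0.80, 0.05), 2))  # 0.21 - perfect sensitivity, same specificity
print(round(ppv(0.90, 0.95, 0.05), 2))  # 0.49 - higher specificity helps far more

The same pattern of prior times likelihood, normalised over all hypotheses, also gives the 5/24 figure in the defective item example when the two hypotheses are replaced by the three machines.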
Hence the knowledge that the item selected was defective enables us to replace the prior probability P(XC) = 1/2 by the smaller posterior probability P(XC | Y) = 5/24. Interpretations The interpretation of Bayes' rule depends on the interpretation of probability ascribed to the terms. The two predominant interpretations are described below. Bayesian interpretation In the Bayesian (or epistemological) interpretation, probability measures a "degree of belief". Bayes' theorem links the degree of belief in a proposition before and after accounting for evidence. For example, suppose it is believed with 50% certainty that a coin is twice as likely to land heads as tails. If the coin is flipped a number of times and the outcomes observed, that degree of belief will probably rise or fall, but might remain the same, depending on the results. For proposition A and evidence B, P(A), the prior, is the initial degree of belief in A. P(A | B), the posterior, is the degree of belief after incorporating news that B is true. The quotient P(B | A) / P(B) represents the support B provides for A. For more on the application of Bayes' theorem under the Bayesian interpretation of probability, see Bayesian inference. Frequentist interpretation In the frequentist interpretation, probability measures a "proportion of outcomes". For example, suppose an experiment is performed many times. P(A) is the proportion of outcomes with property A (the prior) and P(B) is the proportion with property B. P(B | A) is the proportion of outcomes with property B out of outcomes with property A, and P(A | B) is the proportion of those with A out of those with B (the posterior). The role of Bayes' theorem can be shown with tree diagrams. The two diagrams partition the same outcomes by A and B in opposite orders, to obtain the inverse probabilities. Bayes' theorem links the different partitionings. Example An entomologist spots what might, due to the pattern on its back, be a rare subspecies of beetle. A full 98% of the members of the rare subspecies have the pattern, so P(Pattern | Rare) = 98%. Only 5% of members of the common subspecies have the pattern. The rare subspecies is 0.1% of the total population. How likely is the beetle having the pattern to be rare: what is P(Rare | Pattern)? From the extended form of Bayes' theorem (since any beetle is either rare or common), P(Rare | Pattern) = P(Pattern | Rare) P(Rare) / [P(Pattern | Rare) P(Rare) + P(Pattern | Common) P(Common)] = (0.98 × 0.001) / (0.98 × 0.001 + 0.05 × 0.999) ≈ 1.9%. Forms Events Simple form For events A and B, provided that P(B) ≠ 0, P(A | B) = P(B | A) P(A) / P(B). In many applications, for instance in Bayesian inference, the event B is fixed in the discussion and we wish to consider the effect of its having been observed on our belief in various possible events A. In such situations the denominator of the last expression, the probability of the given evidence B, is fixed; what we want to vary is A. Bayes' theorem shows that the posterior probabilities are proportional to the numerator, so the last equation becomes: P(A | B) ∝ P(A) P(B | A). In words, the posterior is proportional to the prior times the likelihood. If events A1, A2, ..., are mutually exclusive and exhaustive, i.e., one of them is certain to occur but no two can occur together, we can determine the proportionality constant by using the fact that their probabilities must add up to one. For instance, for a given event A, the event A itself and its complement ¬A are exclusive and exhaustive. 
Denoting the constant of proportionality by c, we have: P(A | B) = c P(A) P(B | A) and P(¬A | B) = c P(¬A) P(B | ¬A). Adding these two formulas we deduce that: 1 = c (P(A) P(B | A) + P(¬A) P(B | ¬A)), or c = 1 / (P(A) P(B | A) + P(¬A) P(B | ¬A)). Alternative form Another form of Bayes' theorem for two competing statements or hypotheses is: P(A | B) = P(B | A) P(A) / [P(B | A) P(A) + P(B | ¬A) P(¬A)]. For an epistemological interpretation: For proposition A and evidence or background B, P(A) is the prior probability, the initial degree of belief in A. P(¬A) is the corresponding initial degree of belief in not-A, that A is false, where P(¬A) = 1 − P(A). P(B | A) is the conditional probability or likelihood, the degree of belief in B given that A is true. P(B | ¬A) is the conditional probability or likelihood, the degree of belief in B given that A is false. P(A | B) is the posterior probability, the probability of A after taking into account B. Extended form Often, for some partition {Aj} of the sample space, the event space is given in terms of P(Aj) and P(B | Aj). It is then useful to compute P(B) using the law of total probability: P(B) = Σj P(B | Aj) P(Aj). Or (using the multiplication rule for conditional probability), P(B) = Σj P(B ∩ Aj). In the special case where A is a binary variable: P(B) = P(B | A) P(A) + P(B | ¬A) P(¬A). Random variables Consider a sample space Ω generated by two random variables X and Y with known probability distributions. In principle, Bayes' theorem applies to the events A = {X = x} and B = {Y = y}. Terms become 0 at points where either variable has finite probability density. To remain useful, Bayes' theorem can be formulated in terms of the relevant densities (see Derivation). Simple form If X is continuous and Y is discrete, fX|Y=y(x) = P(Y = y | X = x) fX(x) / P(Y = y), where each f is a density function. If X is discrete and Y is continuous, P(X = x | Y = y) = fY|X=x(y) P(X = x) / fY(y). If both X and Y are continuous, fX|Y=y(x) = fY|X=x(y) fX(x) / fY(y). Extended form A continuous event space is often conceptualized in terms of the numerator terms. It is then useful to eliminate the denominator using the law of total probability. For fY(y), this becomes an integral: fY(y) = ∫ fY|X=ξ(y) fX(ξ) dξ. Bayes' rule in odds form Bayes' theorem in odds form is: O(A1 : A2 | B) = O(A1 : A2) · Λ(A1 : A2 | B), where Λ(A1 : A2 | B) = P(B | A1) / P(B | A2) is called the Bayes factor or likelihood ratio. The odds between two events is simply the ratio of the probabilities of the two events. Thus: O(A1 : A2) = P(A1) / P(A2). Thus the rule says that the posterior odds are the prior odds times the Bayes factor; in other words, the posterior is proportional to the prior times the likelihood. In the special case that A1 = A and A2 = ¬A, one writes O(A) = O(A : ¬A) = P(A) / (1 − P(A)), and uses a similar abbreviation for the Bayes factor and for the conditional odds. The odds on A is by definition the odds for and against A. Bayes' rule can then be written in the abbreviated form O(A | B) = O(A) Λ(A | B), or, in words, the posterior odds on A equals the prior odds on A times the likelihood ratio for A given information B. In short, posterior odds equals prior odds times likelihood ratio. For example, if a medical test has a sensitivity of 90% and a specificity of 91%, then the positive Bayes factor is 90% / (100% − 91%) = 10. Now, if the prevalence of this disease is 9.09%, and if we take that as the prior probability, then the prior odds is about 1:10. So after receiving a positive test result, the posterior odds of having the disease becomes 1:1, which means that the posterior probability of having the disease is 50%. If a second test is performed in serial testing, and that also turns out to be positive, then the posterior odds of having the disease becomes 10:1, which means a posterior probability of about 90.91%. The negative Bayes factor can be calculated to be 91% / (100% − 90%) = 9.1, so if the second test turns out to be negative, then the posterior odds of having the disease is 1:9.1, which means a posterior probability of about 9.9%. 
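The serial-testing calculation just described can be sketched in a few lines of Python. This is an illustrative sketch rather than part of the article; the helper names to_odds and to_prob are assumptions, and odds are represented throughout as a single number in favour of the disease, so the 1:9.1 odds after a negative second test appear as the factor 1/9.1.

def to_odds(p):
    return p / (1 - p)

def to_prob(odds):
    return odds / (1 + odds)

sensitivity, specificity, prevalence = 0.90, 0.91, 0.0909
positive_bayes_factor = sensitivity / (1 - specificity)   # 10
negative_bayes_factor = (1 - sensitivity) / specificity    # 1/9.1: odds shrink after a negative result

prior_odds = to_odds(prevalence)                                # about 1:10
after_one_positive = prior_odds * positive_bayes_factor         # about 1:1, i.e. 50%
after_two_positives = after_one_positive * positive_bayes_factor        # about 10:1, i.e. 90.9%
after_positive_then_negative = after_one_positive * negative_bayes_factor  # about 1:9.1, i.e. 9.9%

print(to_prob(after_one_positive), to_prob(after_two_positives), to_prob(after_positive_then_negative))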
The example above can also be understood with more solid numbers: assume the patient taking the test is from a group of 1,000 people, 91 of whom have the disease (prevalence of 9.1%). If all 1,000 take the test, 82 of those with the disease will get a true positive result (sensitivity of 90.1%), 9 of those with the disease will get a false negative result (false negative rate of 9.9%), 827 of those without the disease will get a true negative result (specificity of 91.0%), and 82 of those without the disease will get a false positive result (false positive rate of 9.0%). Before taking any test, the patient's odds for having the disease is 91:909. After receiving a positive result, the patient's odds for having the disease is which is consistent with the fact that there are 82 true positives and 82 false positives in the group of 1,000. Correspondence to other mathematical frameworks Propositional logic Where the conditional probability is defined, it can be seen to capture the implication . The probabilistic calculus then mirrors or even generalizes various logical inference rules. Beyond, for example, assigning binary truth values, here one assigns probability values to statements. The assertion is captured by the assertion , i.e. that the conditional probability take the extremal probability value . Likewise, the assertion of a negation of an implication is captured by the assignment of . So, for example, if , then (if it is defined) , which entails , the implication introduction in logic. Similarly, as the product of two probabilities equaling necessitates that both factors are also , one finds that Bayes' theorem entails , which now also includes modus ponens. For positive values , if it equals , then the two conditional probabilities are equal as well, and vice versa. Note that this mirrors the generally valid . On the other hand, reasoning about either of the probabilities equalling classically entails the following contrapositive form of the above: . Bayes' theorem with negated gives . Ruling out the extremal case (i.e. ), one has and in particular . Ruling out also the extremal case , one finds they attain the maximum simultaneously: which (at least when having ruled out explosive antecedents) captures the classical contraposition principle . Subjective logic Bayes' theorem represents a special case of deriving inverted conditional opinions in subjective logic expressed as: where denotes the operator for inverting conditional opinions. The argument denotes a pair of binomial conditional opinions given by source , and the argument denotes the prior probability (aka. the base rate) of . The pair of derivative inverted conditional opinions is denoted . The conditional opinion generalizes the probabilistic conditional , i.e. in addition to assigning a probability the source can assign any subjective opinion to the conditional statement . A binomial subjective opinion is the belief in the truth of statement with degrees of epistemic uncertainty, as expressed by source . Every subjective opinion has a corresponding projected probability . The application of Bayes' theorem to projected probabilities of opinions is a homomorphism, meaning that Bayes' theorem can be expressed in terms of projected probabilities of opinions: Hence, the subjective Bayes' theorem represents a generalization of Bayes' theorem. 
Generalizations Bayes theorem for 3 events A version of Bayes' theorem for 3 events results from the addition of a third event , with on which all probabilities are conditioned: Derivation Using the chain rule And, on the other hand The desired result is obtained by identifying both expressions and solving for . Use in genetics In genetics, Bayes' rule can be used to estimate the probability that someone has a specific genotype. Many people seek to assess their chances of being affected by a genetic disease or their likelihood of being a carrier for a recessive gene of interest. A Bayesian analysis can be done based on family history or genetic testing to predict whether someone will develop a disease or pass one on to their children. Genetic testing and prediction is common among couples who plan to have children but are concerned that they may both be carriers for a disease, especially in communities with low genetic variance. Using pedigree to calculate probabilities Example of a Bayesian analysis table for a female's risk for a disease based on the knowledge that the disease is present in her siblings but not in her parents or any of her four children. Based solely on the status of the subject's siblings and parents, she is equally likely to be a carrier as to be a non-carrier (this likelihood is denoted by the Prior Hypothesis). The probability that the subject's four sons would all be unaffected is 1/16 (⋅⋅⋅) if she is a carrier and about 1 if she is a non-carrier (this is the Conditional Probability). The Joint Probability reconciles these two predictions by multiplying them together. The last line (the Posterior Probability) is calculated by dividing the Joint Probability for each hypothesis by the sum of both joint probabilities. Using genetic test results Parental genetic testing can detect around 90% of known disease alleles in parents that can lead to carrier or affected status in their children. Cystic fibrosis is a heritable disease caused by an autosomal recessive mutation on the CFTR gene, located on the q arm of chromosome 7. Here is a Bayesian analysis of a female patient with a family history of cystic fibrosis (CF) who has tested negative for CF, demonstrating how the method was used to determine her risk of having a child born with CF: because the patient is unaffected, she is either homozygous for the wild-type allele, or heterozygous. To establish prior probabilities, a Punnett square is used, based on the knowledge that neither parent was affected by the disease but both could have been carriers: Given that the patient is unaffected, there are only three possibilities. Within these three, there are two scenarios in which the patient carries the mutant allele. Thus the prior probabilities are and . Next, the patient undergoes genetic testing and tests negative for cystic fibrosis. This test has a 90% detection rate, so the conditional probabilities of a negative test are 1/10 and 1. Finally, the joint and posterior probabilities are calculated as before. After carrying out the same analysis on the patient's male partner (with a negative test result), the chance that their child is affected is the product of the parents' respective posterior probabilities for being carriers times the chance that two carriers will produce an affected offspring (). Genetic testing done in parallel with other risk factor identification Bayesian analysis can be done using phenotypic information associated with a genetic condition. 
When combined with genetic testing, this analysis becomes much more complicated. Cystic fibrosis, for example, can be identified in a fetus with an ultrasound looking for an echogenic bowel, one that appears brighter than normal on a scan. This is not a foolproof test, as an echogenic bowel can be present in a perfectly healthy fetus. Parental genetic testing is very influential in this case, where a phenotypic facet can be overly influential in probability calculation. In the case of a fetus with an echogenic bowel, with a mother who has been tested and is known to be a CF carrier, the posterior probability that the fetus has the disease is very high (0.64). But once the father has tested negative for CF, the posterior probability drops significantly (to 0.16). Risk factor calculation is a powerful tool in genetic counseling and reproductive planning but cannot be treated as the only important factor. As above, incomplete testing can yield falsely high probability of carrier status, and testing can be financially inaccessible or unfeasible when a parent is not present. See also Bayesian epistemology Inductive probability Quantum Bayesianism Why Most Published Research Findings Are False, a 2005 essay in metascience by John Ioannidis Regular conditional probability Bayesian persuasion Notes References Bibliography Further reading External links Bayesian statistics Probability theorems Theorems in statistics
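Returning to the cystic fibrosis example above, the carrier calculation can be sketched numerically. This is a minimal illustration assuming only the figures quoted in the text: a 2/3 versus 1/3 prior from the Punnett square, a 90% detection rate for the negative test, a partner whose "same analysis" yields the same posterior, and a 1/4 chance that two carriers have an affected child. The helper function and exact-fraction arithmetic are additions for illustration, not from the article.

```python
from fractions import Fraction

def posterior(priors, likelihoods):
    """Normalize prior x likelihood over the competing hypotheses."""
    joints = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joints)
    return [j / total for j in joints]

# Hypotheses for the unaffected patient: carrier vs. non-carrier.
priors = [Fraction(2, 3), Fraction(1, 3)]                      # from the Punnett square
likelihood_negative_test = [Fraction(1, 10), Fraction(1, 1)]   # 90% detection rate

carrier_post, noncarrier_post = posterior(priors, likelihood_negative_test)
print(carrier_post)   # 1/6: posterior probability that the patient is a carrier

# Assuming the partner's analysis gives the same posterior, the chance of an
# affected child is the product of both carrier probabilities times 1/4.
father_carrier_post = carrier_post
print(carrier_post * father_carrier_post * Fraction(1, 4))   # 1/144
```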
Bayes' theorem
[ "Mathematics" ]
5,420
[ "Mathematical problems", "Theorems in probability theory", "Mathematical theorems", "Theorems in statistics" ]
7,183,054
https://en.wikipedia.org/wiki/Norilsk%20Nickel
Norilsk Nickel (), or Nornickel, is a Russian nickel and palladium mining and smelting company. Its largest operations are located in the Norilsk–Talnakh area near the Yenisei River in the north of Siberia. It also has holdings in Nikel, Zapolyarny, and Monchegorsk on the Kola Peninsula, in Harjavalta in western Finland, and in South Africa. Headquartered in Moscow, Norilsk Nickel is the world's largest producer of refined nickel and the 11th largest copper producer. The company is listed on MICEX-RTS. As of March 2021, its key shareholders were Vladimir Potanin's Olderfrey Holdings Ltd (34.59%) and Oleg Deripaska's Rusal (27.82%). In December 2010, Norilsk Nickel made a share buyback offer for Rusal's 25% share in the company for $12 billion, but the offer was declined. In 2012, Potanin's Interros holding, Rusal, and Roman Abramovich signed a shareholder agreement on the size of dividend payouts to end a conflict over the matter, as well as issues around the company's broader strategy and management. The agreement expires on June 1, 2023, and the prospects of its extension or suspension are unclear. In March 2019, Abramovich sold a 1.7% stake in the company for $551 million, predominantly to British-based and Russian investors. Potanin and Deripaska's Rusal were blocked from purchasing any shares. In 2021, the company's revenue amounted to 856 billion rubles. History Mining began in the Norilsk area in the 1920s. The Soviet government established the Norilsk Combine in 1935 and passed control to the NKVD. In 1943, Norilsk produced 4,000 tonnes of refined nickel and in 1945 hit the target figure of 10,000 tonnes. The mining and metal production originally used forced labour from the Gulag system. In 1993, after the fall of the Soviet Union, a joint-stock company called RAO Norilsk Nickel was created. Two years later, control over the deeply indebted company, which was bleeding cash at a rate of about $2 million a day against the background of falling nickel prices, was sold to a private company, Interros. By the end of privatization in 1997, the company had moved into the black, and workers were being paid. The current average pay exceeds $1,000 per month with an annual paid leave of two to three months. Nevertheless, the working and living conditions in Norilsk remain harsh, although they are improving as the company shuts down old factories that are the source of excessive pollution. In July 2000, Norilsk Nickel joined forces with the St. Petersburg Research Institute of the Arctic and Antarctic (), to investigate the potential use of decommissioned nuclear powered submarines, both from the United States and Russia, to transport materials along the Northern Sea Route (). Overhaul and refit costs came to $72–80 million per submarine, which included modifying its ice-breaking bow to cut through ice up to 215 cm (85 in) thick in seawater and up to 150 cm (59 in) in the freshwater mouth of the Yenisei. Decommissioned Typhoon submarines were expected to transport up to 12,000 tonnes of supplies and nickel between Dudinka and either Murmansk or Arkhangelsk. In 2000, the Murmansk Shipping Company (MMP or MSCO) () provided icebreaker services at a charge of $11.35 per tonne of cargo. Three submarines - the project feasibility threshold - were scheduled for refit and overhaul between 2000 and 2003. However, the stakeholders failed to reach an agreement as to who would conduct and cover the refit and overhaul of the submarines. Furthermore, money was not the only issue. 
Under the existing international agreements, decommissioned nuclear-powered submarines from the two countries’ navies had to be dismantled. Should this obstacle be addressed, subsequent ownership of the refitted submarines also remained unclear: whether they would remain the assets of the Ministry of Defense or would be transferred to another governmental agency. One of the options suggested by Nornickel was to establish a joint transportation company that would lease the vessels. In 2002, Nornickel accounted for the most of MMP's shipping along the Northern Sea Route. In 2008, Aker Yards signed a contract with Norilsk Nickel for the delivery of four container/cargo ships for Arctic operations, with an option for a fifth. In 2002, MMC Norilsk Nickel began purchasing gold mining assets, which were spun off in 2005 as Polyus Gold. In 2003, the company took control of Stillwater Mining Company, the only palladium producer in the U.S. Stillwater operates a platinum group metals (PGM) facility in Stillwater, Montana. In November 2010, Norilsk Nickel announced the sale of Stillwater. Throughout 2007, Norilsk acquired a host of mining and metallurgical assets abroad, transforming into a multinational company with operations in Australia, Botswana, Finland, Russia, South Africa, and the United States. Norilsk Nickel signed its key deal on June 28, 2007, acquiring about 90 percent of Canada's LionOre Mining International Ltd, the world's tenth-largest nickel producer at the time. This takeover, valued at $6.4 billion, was the biggest foreign acquisition by a Russian company at the time, making Norilsk Nickel the world's largest nickel producer. On February 27, 2008, Norilsk Nickel diversified into the coal mining industry through North Star LLC by obtaining mining rights to the amount of 33.6 million rubles for the estimated 5.7 billion tonnes of coal at the Syradasai Field near the port of Dikson () in the Taymyrsky Dolgano-Nenetsky District (). In the coal mining industry, it competed with Rio Tinto and BHP Billiton. By the estimates of North Star LLC (), a firm affiliated with Nornickel, developing the field would require an investment of $1.5 billion, which including the necessary expansion of the port of Dikson, another Nornickel asset. The only competitor for the rights to the Syradasai Field was Golevskaya Mining Company LLC (). The Syradasai Field is 105 to 120 km southeast of Dikson in the Taimyr-Turukhansk support zone (). A 120-kilometer road and railway was expected to connect the deep-sea port on Cape Chaika to the massive coal deposit by 2019. CC VostokUgol () or Vostok Coal planned to export up to 10 million tonnes of coal annually from the open-pit mine to Western Europe and the Asia-Pacific regions. In 2016 Nornickel ranked below 65 other oil, gas and mining companies in a list of 92 involved in onshore resource extraction above the Arctic Circle, in terms of handling indigenous rights. In 2018, North Star LLC changed owners to become part of businessman Roman Trotsenko's AEON Group. Neither Nornickel nor AEON disclosed the transfer of ownership terms. In the Arctic Environmental Responsibility Index (AERI), Norilsk Nickel is ranked No. 38 out of 120 oil, gas, and mining companies involved in resource extraction north of the Arctic Circle. In April 2024, the United States and the United Kingdom announced a ban on imports of Russian aluminum, copper, and nickel. 
Due to sanctions, Norilsk Nickel planned to move some of its copper smelting to China and establish a joint venture with a Chinese company. Finished copper products would be sold as Chinese products to avoid Western sanctions. China is Norilsk Nickel's largest export market from 2023. Nickel is a critical metal in electric vehicle batteries, and palladium is critical element in catalytic converters, a component in natural gas vehicles. This plan was motivated not only by circumvent Western sanctions, but also China's significantly less stringent environmental standards than those in Russia. Operations Nornickel is Russia's largest non-ferrous metallurgy company and one of the 10 largest private enterprises in the country. In 2019, the company produced 229,000 tonnes of nickel, 499,000 tonnes of copper, 2.9 million ounces of palladium, and 0.7 million ounces of platinum. Globally, Nornickel ranks: First in nickel production (accounting for 14% of global and 96% of Russian production). Bloomberg hails Nornickel as the world's most efficient nickel producer First in palladium production with a share of 41% Third in platinum production with a share of 10% Nornickel also produces rhodium, cobalt, copper, silver, gold, iridium, ruthenium, selenium, tellurium, and sulfur. Proven and possible reserves: 6.5 million tonnes of nickel 11.6 million tonnes of copper 118 million troy ounces of platinum-group metals The company's revenue in 2020 reached $15.5 billion, with net profits of $3.6 billion. Formed 250 million years ago during the eruption of the Siberian Traps igneous province (STIP), the Norilsk-Talnakh nickel deposits are the largest nickel-copper-palladium deposits in the world. The STIP disgorged over 1 million cubic kilometers of lava, a large portion of it through a series of flat-lying lava conduits below Norilsk and the Talnakh Mountains. The Siberian Traps are considered to be responsible for the late-Permian mass extinction event. The district's first mineral resources were discovered in the 1840s when Alexander von Middendorff's expedition found the local coal deposits. In the 1860s, Friedrich Schmidt described the coal and surface copper ore found in the field that would later be called Norilsk 1. In the early days of the Soviet Union, Nikolay Urvantsev's expeditions revealed several industrially significant deposits. The 1930s saw the construction of the Norilsk Mining and Metallurgy Combine, which remains the pillar of local industry to date. The fields are located along the deep Norilsk-Khatanga Fault, and most mining operations employ underground methods. The area is believed to hold around 35% of the world's known nickel reserves, as well as 10% of its copper, 15% of its cobalt, and 40% of its platinum-group metals. The district's fields are divided into two clusters: the Norilsk Cluster in the southwest and the Talnakh Cluster in the northeast. In 2022 Norilsk Nickel reiterated its output guidance for the year and said that operations remain uninterrupted. In the first update since the invasion of Ukraine, the miner said first-quarter nickel production increased 10% year-on-year to 52,000 tons. Palladium output declined 8% to 706,000 ounces and platinum fell by 12% to 163,000 ounces, but only from a higher-than-normal level a year ago. Norilsk Ore Cluster The cluster is located below Norilsk's city center and to the south of it, in the north-eastern part of the Norilsk Geological Basin. 
In 2021, Norilsk Nickel estimated the mineral reserves of the cluster at 156.6 million tonnes of ore, 400,000 tonnes of nickel, 600,000 tonnes of copper, and 25.6 million troy ounces of platinum-group metals. The rights to some of the cluster's deposits belong to Russian Platinum, but the corporation is unable to start mining because Nornickel, which controls the remote area's infrastructure, is blocking access. The two conflicting parties have a protracted history of negotiating a joint mining enterprise. Norilsk 1 Field The district's first actively developed field is located in the south of Norilsk's city center and to the south of the city. It is 30 to 350 meters thick. The northern part of the deposit consists of two branches: the “Coal Stream” and the “Bear Stream”. Extraction has been ongoing since the 1940s at the Zapolyarny Mine through both underground and open-pit mining of the Coal Stream and Bear Stream quarries. The reserves of its northern section have mostly been depleted, and mining in the Coal Stream quarry has ceased. Russian Platinum obtained the mining rights to the southern section in 2012 but has not yet used them because of its conflict with Nornickel. Norilsk 2 Field The field is located near Mount Gudchikha to the east of Norilsk 1. In 1926, Nikolay Urvantsev discovered copper-nickel ore in the area, and mining began in the 1930s. However, the deposit turned out to be minor, and the decision was made to focus on Norilsk 1. Prospecting continued throughout the 1950s, but after the Talnakh Cluster was discovered, Norilsk 2 was abandoned. Maslov Field Located to the south of Norilsk 1, this field is believed to be an offshoot of the latter. Prospecting began here in the 1970s. The field stretches for over six kilometers from north to south and includes the northern and southern sections. Two by four kilometers in size, the northern section is up to 300 meters thick, while the southern estate, which is up to 400 meters thick, has an area of three by 1.5 kilometers[8]. The mining rights to the deposit belong to Nornickel, which in 2019 announced plans to launch an underground mining operation by 2029. Chernogorskoye Field The field is located to the east of the Maslov Field near Mount Chernaya. An intrusion with a mineral composition similar to that of Norilsk 1 is up to 200 meters thick. In 2021, Russian Platinum signed a memorandum with VEB.RF and VTB to develop the field. The plans include open-pit mining in the eastern field section with an option for the subsequent underground development of its western part. Talnakh Ore Cluster The cluster is located below the Talnakh District and to its northeast, in the southwest of the Kharayelakh Geological Basin. Following the discovery of its rich reserves of copper-and-nickel ore, the cluster became Norilsk's primary source of mineral resources. Its proven resources include over 100 kinds of ore minerals, many of which were previously unknown to science: talnakhite, godlevskite, shadlunite, taimyrite, sobolevskite, mayakite, and more. In 2021, Nornickel assessed the cluster's mineral reserves at 1,5 billion tonnes of ore, 11.2 million tonnes of nickel, 11.2 million tonnes of copper, and 231.7 million troy ounces of platinum-group metals. Talnakh Field The field stretches from north to south along the Norilsk-Khatanga Fault and includes its graben and the adjacent intrusions from the east. The primary development facilities are the Mayak, the Komsomolsky, and the Skalisty mines. 
Oktyabrskoye Field The field is located to the west of the Norilsk-Khatanga Fault. The primary development facilities are the Oktyabrsky and the Taimyrsky mines. The Oktyabrsky deposit accounts for about half of Norilsk Nickel's ore production. Production divisions The company currently has five core operational divisions in three countries: The Polar Division of MMC Norilsk Nickel and ancillary activities, located on the Taimyr Peninsula Kola MMC and ancillary activities, located on the Kola Peninsula (incorporating the Pechenganickel Combine in Nikel and Zapolyarny and the Severonickel Combine in Monchegorsk). The plant in Nikel closed in December 2020. Norilsk Nickel Harjavalta, Finland's only nickel refining plant, purchased from OM Group in 2007 Norilsk Nickel Africa, which includes stakes in mines in Botswana (85% of Tati Nickel) and in South Africa (50% of Nkomati), both formerly owned by LionOre Environmental problems Norilsk Nickel is known to be one of Russia's largest industrial polluters, releasing approximately 1.9 million tonnes of sulfur dioxide into the air annually as of 2020, accounting for 1.9% of global emissions. Ore is smelted on site in Norilsk. The smelting is directly responsible for severe pollution, including acid rain and smog. The pollution originating from the Kola division of the company was also affecting Norway, which has been offering financial support to clean up the operation since 1990. In December 2020, Norilsk Nickel shut down its old smelter in the town of Nikel on the Russia-Norway border. In 2008, Rosprirodnadzor (the Federal Environmental, Industrial, and Nuclear Supervision Service of Russia) demanded that a 4.35-billion ruble ($60-million) fine be imposed on Nornickel for polluting minor rivers with wastewater. The environmental problems at Norilsk stretch back for decades. Back in 2004, oligarch Mikhail Prokhorov claimed that Nornickel would resolve most of the area's environmental issues within 5–6 years. By 2008, this timeline had been moved to 2015. However, Nornickel claims to be a socially responsible business and invests in modernization. Norilsk Nickel has been working consistently to reduce emissions of major air pollutants. In 2006, the company reported an investment of more than $5 million in the maintenance and overhaul of its dust-and-gas recovery and removal systems. The company asserts a commitment of nearly $1.4 million for its air pollution prevention plan. However, according to the official statistics, emissions remain extremely high. In 2006, Blacksmith Institute, an international non-profit organization, included Norilsk in its list of the world's 10 most polluted places. Nornickel wrote a protest letter but to no avail. According to local environmental experts, in spite of minor reductions in overall pollution levels, the levels of SO2, HS, phenol, formaldehyde, and dust have increased, with the levels of nickel and copper showing 50% growth. The morbidity rate remains stable, though the mortality rate is decreasing. In 2010, Vladimir Putin visited Norilsk and complained about the pollution, threatening a “significant increase in environmental fines” if the company did not modernize its plant. By 2013, owner Vladimir Potanin had begun to invest in environmental measures. In June 2016, Norilsk shut down one of its factories, which was emitting 380,000 tonnes of sulfur dioxide every year, 25% of the total of sulfur emissions in the city, in an effort to clean up its environmental record. 
It also said it would invest 300 billion rubles to modernize manufacturing by 2020. In 2016, Norilsk Nickel admitted that a spillage at one of its facilities had been responsible for a river in the Russian Arctic turning blood-red. The heavy rains on 5 September 2016 caused a filtration dam at the Nadezhda Plant to overflow into the Daldykan River. Indigenous groups have accused the company of lax safety standards. At the end of 2016, Nornickel signed a contract with Canadian company SNC-Lavalin to introduce sulfur dioxide filtration and storage technologies on its plant in Norilsk in what was lauded as one of the largest environmental projects of its kind. Once the project reached completion in 2020, sulfur emissions dropped by up to 75%. In April 2018, amid rising pressure from the Russian government and Western investment funds, the company announced its plans to invest in a processing plant worth $1 billion, which would convert sulfur dioxide produced during the metal smelting process into gypsum. The plant will be finished in 2022, in time for the company to meet its target of reducing harmful emissions by 75% and avoid financial fines 100 times higher than the current ones. In 2019, the group's total environmental protection expenditures were reported to have rocketed by 117.9%. The cornerstone of Nornickel's environmental program is the $3.5-billion SO2 Project. Aimed at recycling toxic SO2 emissions, the goal of the project is to achieve a 75% cut in SO2 emissions in Nornickel's hometown of Norilsk by 2023, growing to 90% by 2025. In 2020, Nornickel presented a new environmental strategy with ambitious targets to be reached by 2030 in six environmental protection areas. To honor its commitments, the company shut down Kola MMC's smelting shop in Nikel in 2020, eliminating 100% of sulfur dioxide emissions near the Russia-Norway border, followed by its copper smelter in Monchegorsk in March 2021. Combined with Nornickel's other green initiatives, these steps are expected to ensure an 85% decrease in sulfur dioxide pollution in the Murmansk Region by late 2021. In December 2020, Norilsk Nickel reiterated plans to cut group sulfur dioxide emissions in the Norilsk area by 90% by 2025 from 2015 levels and earmarked $5.5 billion for environmental projects, including $3.6 billion for sulfur dioxide capture and processing. 2020 fuel spill On May 29, 2020, a Soviet-era fuel storage tank owned by Nornickel subsidiary Norilsk-Taimyr Energy (NTEK) collapsed, flooding the nearby Daldykan River with some 20,000 tonnes of diesel. Russian President Vladimir Putin declared a state of emergency. The diesel oil was intended for the NTEK coal-fired combined heat and power plant as backup fuel. The fuel storage tank failed when the underlying permafrost began to soften. An area of up to 350 square kilometers (135 square miles) was contaminated. The cleanup efforts were complicated by a lack of roads and the river being too shallow for boats or barges to pass. Former deputy head of Rosprirodnadzor Oleg Mitvol estimated the clean-up cost at about 100 billion rubles ($1.5 billion) and set a timeline of five to 10 years. In September 2020, the company reported having collected more than 90% of the leaked fuel. 
Environmental Resources Management, the international company which provides Norilsk Nickel with consulting services on environmental issues, identified the cause of the accident as subsidence resulting from the gradual melting of the permafrost on which the piles supporting the fuel storage tank stood. According to the results of the official investigation, some of the piles were shorter than the designed length and rested on the permafrost rather than being sunk into the bedrock. According to specialists, the average annual temperature in Russia is growing more than 2.5–2.8 times faster than the global average. Russia's Far North, including the Taymyr Peninsula, is heating up faster than anywhere else in the country, melting the permafrost on which many structures stand. However, Zhanna Petukhova, director of the Arctic Permafrost Research Center, says that the tanks do in fact stand on piles driven into the bedrock, rather than the permafrost. She believes the accident is more likely to have been due to the poor condition of equipment dating back to the Soviet era. In February 2021, the Krasnoyarsk Arbitration Court ordered Nornickel to pay 146 billion rubles ($2 billion) in compensation for the spill damage to support environmental projects in the Krasnoyarsk Territory. Nornickel had claimed the damages should be calculated at 21 billion rubles ($280 million). Finland Norilsk Nickel's nickel-cobalt refinery at Harjavalta, western Finland, released 66 tonnes of nickel as nickel sulphate into the local Kokemäenjoki (Kokemäki River) on 5–6 July 2014. Refinery coolant water recirculated from the river was accidentally contaminated by process water over a 30-hour period through an equipment failure. Nickel concentrations were 400 times normal levels, with the accident becoming the largest known leak in Finnish history. Elevated nickel values in river waters were recorded for ~20 days before declining to normal levels. In December 2020, the company reported, citing a research paper, that the population of mussels in Kokemäenjoki had been recovering, purporting that the water protection measures had been successful and the burden on the river had been reduced. Carbon footprint Norilsk Nickel reported Total CO2e emissions (Direct + Indirect) for 31 December 2020 at 9,699 Kt (-253 /-2.5% y-o-y). There is little evidence of a consistent declining trend as yet. Related organizations The in St Petersburg is in charge of the design and construction of Nornickel's facilities. Gipronickel does research in every field of metallurgy, including extraction, patenting, design and more. Norilsk Nickel uses the Yenisei River port of Dudinka to load its finished product on ships for export. The Moscow-based Interros Holding Company is the controlling shareholder of Nornickel. Nornickel also attempted to operate Nakety/Bogota, a nickel mine on the island of New Caledonia in the South Pacific, in partnership with Argosy Minerals of Australia but has withdrawn from this project. In 2016, Nornickel established the Global Palladium Fund to promote industrial demand for palladium and to reduce volatility in the palladium market. The fund's objective is to act as a platform to facilitate cooperation between major palladium holders. Competition Nornickel's chief competitors in the production of nickel and of palladium are Vale, BHP and Anglo American Platinum. Fleet The company's fleet provides sea transportation of cargo and concentrates from Norilsk to ports with rail connections. 
In 2008, Norilsk Nickel commissioned the construction of five ice-breaking cargo freighters. See also Nickel mining and extraction Copper mining and extraction Emily Ann and Maggie Hays nickel mines London Platinum and Palladium Market London Metal Exchange References External links The Moscow Times: Norilsk Nickel withdraws from Nakety Bogota project MBendi's MMC Norilsk Nickel information page The Metallurgical Complex at Norilsk in Siberia JSC MMC Norilsk Nickel Company History Russian metals firm admits spillage turned river blood red The Guardian, 2016. Mining companies of Russia Nickel mining companies Copper mining companies of Russia Palladium mining companies Platinum mining companies Companies based in Moscow Krasnoyarsk Krai Non-renewable resource companies established in 1993 1993 establishments in Russia Companies listed on the Moscow Exchange Companies in the MOEX Norilsk Vladimir Potanin
Norilsk Nickel
[ "Chemistry" ]
5,531
[ "Metallurgical industry of Russia", "Metallurgical industry by country" ]
7,183,233
https://en.wikipedia.org/wiki/Excessive%20daytime%20sleepiness
Excessive daytime sleepiness (EDS) is characterized by persistent sleepiness and often a general lack of energy, even during the day after apparently adequate or even prolonged nighttime sleep. EDS can be considered as a broad condition encompassing several sleep disorders where increased sleep is a symptom, or as a symptom of another underlying disorder like narcolepsy, circadian rhythm sleep disorder, sleep apnea or idiopathic hypersomnia. Some persons with EDS, including those with hypersomnias like narcolepsy and idiopathic hypersomnia, are compelled to nap repeatedly during the day; fighting off increasingly strong urges to sleep during inappropriate times such as while driving, while at work, during a meal, or in conversations. As the compulsion to sleep intensifies, the ability to complete tasks sharply diminishes, often mimicking the appearance of intoxication. During occasional unique and/or stimulating circumstances, a person with EDS can sometimes remain animated, awake and alert, for brief or extended periods of time. EDS can affect the ability to function in family, social, occupational, or other settings. A proper diagnosis of the underlying cause and ultimately treatment of symptoms and/or the underlying cause can help mitigate such complications. According to the National Sleep Foundation, around 20 percent of people experience EDS. Causes EDS can be a symptom of a number of factors and disorders. Specialists in sleep medicine are trained to diagnose them. Some are: Insufficient quality or quantity of night time sleep Obstructive sleep apnea Misalignments of the body's circadian pacemaker with the environment (e.g., jet lag, shift work, or other circadian rhythm sleep disorders) Another underlying sleep disorder, such as narcolepsy, sleep apnea, idiopathic hypersomnia, or restless legs syndrome Disorders such as clinical depression or atypical depression Tumors, head trauma, anemia, kidney failure, hypothyroidism, or an injury to the central nervous system Drug abuse Genetic predisposition Vitamin deficiency, such as biotin deficiency Particular classes of prescription and over-the-counter medication Long COVID Diagnosis An adult who is compelled to nap repeatedly during the day may have excessive daytime sleepiness (EDS); however, it is important to distinguish between occasional daytime sleepiness and EDS, which is chronic. A number of tools for screening for EDS have been developed. One is the Epworth Sleepiness Scale (ESS) which grades the results of a questionnaire with eight questions referring to situations encountered in daily life. The ESS generates a numerical score from zero (0) to 24 where a score of ten [10] or higher may indicate that the person should consult a specialist in sleep medicine for further evaluation. Another tool is the Multiple Sleep Latency Test (MSLT), which has been used since the 1970s. It is used to measure the time it takes from the start of a daytime nap period to the first signs of sleep, called sleep latency. Subjects undergo a series of five 20-minute sleeping opportunities with an absence of alerting factors at 2-hour intervals on one day. The test is based on the idea that the sleepier people are, the faster they will fall asleep. The Maintenance of Wakefulness Test (MWT) is also used to quantitatively assess daytime sleepiness. This test is performed in a sleep diagnostic center. The test is similar to the MSLT as it also relies on a measurement of initial sleep latency. 
However, during this test, the patient is instructed to try to stay awake under soporific conditions for a defined time. The use of electroencephalography (EEG) readings is essential for the objective diagnosis of EDS. The initial sleep latency employed in the MSLT and the MWT is mainly derived from EEG recordings. Moreover, power characteristics in the alpha-band of resting-state EEG readings, which correlate with somnolence, have also been shown to correlate with the presence of EDS. Treatment Treatment of excessive daytime sleepiness (EDS) relies on identifying and treating the underlying disorder, which may cure the person of the EDS. Drugs such as modafinil, armodafinil, pitolisant (Wakix), and sodium oxybate (Xyrem) oral solution have been approved as treatment for EDS symptoms in the United States. There is declining usage of other drugs such as methylphenidate (Ritalin), dextroamphetamine (Dexedrine), amphetamine, lisdexamfetamine (Vyvanse), methamphetamine (Desoxyn), and pemoline (Cylert), as these stimulants may have several adverse effects. If EDS is caused by obstructive sleep apnea (OSA), it is recommended that people with OSA use continuous positive airway pressure (CPAP) therapy, a breathing apparatus worn during sleep to keep the airway open, before starting intake of wake-promoting agents such as modafinil. See also Kleine–Levin syndrome References External links Sleep disorders de:Narkolepsie#Exzessive Tagesschläfrigkeit
Excessive daytime sleepiness
[ "Biology" ]
1,070
[ "Behavior", "Sleep", "Sleep disorders" ]
7,183,468
https://en.wikipedia.org/wiki/Viodentia
viodentia (sometimes written with an uppercase v) is a pseudonym used by the creator of FairUse4WM, a program that removes Microsoft's copy protection technology from Windows Media Video (".WMV") files. These files are used by popular music download sites such as Rhapsody, Yahoo! Music, and Napster. Background A number of prominent websites use DRM to ensure that media and other downloads cannot be copied for software piracy or other improper purposes. This copy protection system also has the effect of preventing what would otherwise be claimed as fair use: legitimate owners backing up paid downloads in case of loss or damage to their computer data, or using them on other devices where it is legal to do so. viodentia and FairUse4WM "viodentia" is the pseudonym of one or more individuals who wrote software that would enable users to remove the protection mechanism from their media, thus allowing it to be copied for legal use. Forum posts under the name "viodentia" stated in 2006 that the program was to enable exercise of fair-use rights only, and excluded import of cryptographic data needed for piracy usage. A lawsuit was later filed by Microsoft, on the basis that FairUse4WM contained proprietary computer code from Microsoft's Windows and/or was a derivative work of Microsoft's Windows Media Format SDK or other Microsoft DRM technologies. According to an interview published by the weblog Engadget, Viodentia does not live in the United States. Lawsuit On 22 September 2006, Microsoft filed a federal lawsuit against John Does 1-10 a/k/a "viodentia", hoping to identify the person or persons. An online post by Viodentia contains an implicit defense against Microsoft's allegations of copyright infringement: "FairUse4WM has been my own creation, and has never involved Microsoft source code. I link with Microsoft's static libraries provided with the compiler and various platform SDK files." Unable to identify or locate Viodentia, Microsoft dropped the lawsuit without prejudice in 2007. References External links FairUse4WM – a WM/DRM removal program, a thread at Doom9's Forum started by Viodentia and containing a number of comments from them Microsoft vs John Does 1-10 FairUse4WM Court Filings Multimedia software Digital rights management circumvention software Anonymity pseudonyms
Viodentia
[ "Technology" ]
504
[ "Multimedia", "Multimedia software" ]
7,183,778
https://en.wikipedia.org/wiki/Wide%20chord
Wide chord describes the fan blades on certain turbofan engines that have a blade design with a specific geometry - In layman's terms, they would be described as having wider blades than other jet engines. The technology was pioneered by Geoff Wilde at Rolls-Royce in the 1970s. Overview The main fan on a jet engine consists of a number of aerofoils mounted at the rotational center on the fan disk, and as the engine core rotates the fan blades accelerate an air mass and create the force to move forward which gives thrust (in accordance with Newton's third law). In theory the larger the fan diameter (the line from the tip of one fanblade to its opposite member) the greater the thrust. In practical applications, fan size is limited by the weight, the space available around the aircraft and by the increased drag (resistance) generated by the larger frontal area. In the race to achieve better fuel economy, more thrust and less weight & noise from jet engines, designers have refined the blade design and materials to extract more thrust for any given fan disk area. One significant improvement is to make blade chords wider and, more recently, alter the blade geometry to give it a scimitar-like shape. Further refinements include making the blades from a light material such as titanium and to manufacture them with a hollow cross-section. Modern jet engines such as the Rolls-Royce Trent 900 and the Engine Alliance GP7000, which both power the Airbus A380, are examples of engines with wide-chord fans. Key design considerations A wide chord fan has fewer, wider blades compared to the narrower blades on earlier technology fans. The blades are often hollow and made from titanium. The wide-chord fan blade was designed and developed at Rolls-Royce Barnoldswick in Lancashire. The manufacturing process uses superplastic forming, diffusion-bonded technology to achieve a light weight, strong design. References External links Jet engines
Wide chord
[ "Technology" ]
391
[ "Jet engines", "Engines" ]
7,184,741
https://en.wikipedia.org/wiki/Disk%27O
The Disk'O (also known as Skater or Surf's Up) is a type of flat ride manufactured by Zamperla of Italy. The ride is a larger version of the Rockin' Tug, also manufactured by Zamperla. Versions Ride On a traditional Disk'O, Mega Disk'O or Disk'O Coaster, riders sit on a circular platform with outward-facing seats. On a Skater or a Skater Coaster, riders sit on a rectangular platform with inward-facing seats. On a Surf's Up, riders stand on a rectangular platform. Regardless of the model, the ride experience is very similar: the platform moves back and forth along a halfpipe track while spinning. The Disk'O Coaster and the Skater Coaster both feature a small hill in the middle of the halfpipe. Installations References External links Zamperla Rides Amusement rides Zamperla Italian inventions
Disk'O
[ "Physics", "Technology" ]
183
[ "Physical systems", "Machines", "Amusement rides" ]
7,184,831
https://en.wikipedia.org/wiki/Primary%20pseudoperfect%20number
In mathematics, and particularly in number theory, N is a primary pseudoperfect number if it satisfies the Egyptian fraction equation 1/N + Σ 1/p = 1, where the sum is taken over only the prime divisors p of N. Properties Equivalently, N is a primary pseudoperfect number if it satisfies N = 1 + Σ N/p, with the sum again over the prime divisors p of N. Except for the primary pseudoperfect number N = 2, this expression gives a representation for N as the sum of distinct divisors of N. Therefore, each primary pseudoperfect number N (except N = 2) is also pseudoperfect. The eight known primary pseudoperfect numbers are 2, 6, 42, 1806, 47058, 2214502422, 52495396602, 8490421583559688410706771261086. The first four of these numbers are one less than the corresponding numbers in Sylvester's sequence, but then the two sequences diverge. It is unknown whether there are infinitely many primary pseudoperfect numbers, or whether there are any odd primary pseudoperfect numbers. The prime factors of primary pseudoperfect numbers sometimes provide solutions to Znám's problem, in which all elements of the solution set are prime. For instance, the prime factors of the primary pseudoperfect number 47058 form the solution set {2, 3, 11, 23, 31} to Znám's problem. However, the smaller primary pseudoperfect numbers 2, 6, 42, and 1806 do not correspond to solutions to Znám's problem in this way, as their sets of prime factors violate the requirement that no number in the set can equal one plus the product of the other numbers. Anne (1998) observes that there is exactly one solution set of this type that has k primes in it, for each k ≤ 8, and conjectures that the same is true for larger k. If a primary pseudoperfect number N is one less than a prime number, then N × (N + 1) is also primary pseudoperfect. For instance, 47058 is primary pseudoperfect, and 47059 is prime, so 47058 × 47059 = 2214502422 is also primary pseudoperfect. History Primary pseudoperfect numbers were first investigated and named by Butske, Jaje, and Mayernik (2000). Using computational search techniques, they proved the remarkable result that for each positive integer r up to 8, there exists exactly one primary pseudoperfect number with precisely r (distinct) prime factors, namely, the rth known primary pseudoperfect number. Those with 2 ≤ r ≤ 8, when reduced modulo 288, form the arithmetic progression 6, 42, 78, 114, 150, 186, 222, as was observed by Sondow and MacMillan (2017). See also Giuga number References External links Integer sequences Egyptian fractions
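The defining property is easy to verify by machine. The following sketch (Python, written for this article rather than taken from the references) checks the Egyptian fraction equation for the first few known primary pseudoperfect numbers using exact rational arithmetic:

```python
from fractions import Fraction

def prime_divisors(n):
    """Return the distinct prime divisors of n by trial division."""
    primes, d = [], 2
    while d * d <= n:
        if n % d == 0:
            primes.append(d)
            while n % d == 0:
                n //= d
        d += 1
    if n > 1:
        primes.append(n)
    return primes

def is_primary_pseudoperfect(n):
    """Check that 1/n plus the sum of 1/p over the prime divisors p of n equals 1."""
    total = Fraction(1, n) + sum(Fraction(1, p) for p in prime_divisors(n))
    return total == 1

for n in (2, 6, 42, 1806, 47058, 2214502422):
    print(n, is_primary_pseudoperfect(n))   # all True
```

The two largest known primary pseudoperfect numbers are omitted from the loop only because naive trial division would be slow for them; the same check applies in principle.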
Primary pseudoperfect number
[ "Mathematics" ]
588
[ "Sequences and series", "Integer sequences", "Mathematical structures", "Recreational mathematics", "Mathematical objects", "Combinatorics", "Numbers", "Number theory" ]
7,185,031
https://en.wikipedia.org/wiki/Trestolone
Trestolone, also known as 7α-methyl-19-nortestosterone (MENT), is an experimental androgen/anabolic steroid (AAS) and progestogen medication which has been under development for potential use as a form of hormonal birth control for men and in androgen replacement therapy for low testosterone levels in men but has never been marketed for medical use. It is given as an implant that is placed into fat. As trestolone acetate, an androgen ester and prodrug of trestolone, the medication can also be given by injection into muscle. Side effects Trestolone is an AAS, and hence is an agonist of the androgen receptor, the biological target of androgens like testosterone. It is also a progestin, or a synthetic progestogen, and hence is an agonist of the progesterone receptor, the biological target of progestogens like progesterone. Due to its androgenic and progestogenic activity, trestolone has antigonadotropic effects. These effects result in reversible suppression of sperm production and are responsible for the contraceptive effects of trestolone in men. Trestolone was first described in 1963. Subsequently, it was not studied again until 1990. Development of trestolone for potential clinical use started by 1993 and continued thereafter. No additional development appears to have been conducted since 2013. The medication was developed by the Population Council, a non-profit, non-governmental organization dedicated to reproductive health. Medical uses Trestolone is an experimental medication and is not currently approved for medical use. It has been under development for potential use as a male hormonal contraceptive and in androgen replacement therapy for low testosterone levels. The medication has been studied and developed for use as a subcutaneous implant. An androgen ester and prodrug of trestolone, trestolone acetate, has also been developed, for use via intramuscular injection. Side effects Trestolone may cause sexual dysfunction (e.g., decreased sex drive, reduced erectile function) and decreased bone mineral density due to estrogen deficiency. Pharmacology Pharmacodynamics As an AAS, trestolone is an agonist of the androgen receptor (AR), similarly to androgens like testosterone and dihydrotestosterone (DHT). Trestolone is not a substrate for 5α-reductase and hence is not potentiated or inactivated in so-called "androgenic" tissues like the skin, hair follicles, and prostate gland. As such, it has a high ratio of anabolic to androgenic activity, similarly to other nandrolone derivatives. Trestolone is a substrate for aromatase and hence produces the estrogen 7α-methylestradiol as a metabolite. However, trestolone has only weak estrogenic activity and an amount that would appear to be insufficient for replacement purposes, as evidenced by decreased bone mineral density in men treated with it for hypogonadism. Trestolone also has potent progestogenic activity. Both the androgenic and progestogenic activity of trestolone are thought to be involved in its antigonadotropic activity. Mechanism of action Spermatozoa are produced in the testes of males in a process called spermatogenesis. In order to render a man infertile, a hormone-based male contraceptive method must stop spermatogenesis by interrupting the release of gonadotropins from the pituitary gland. Even in low concentrations, trestolone is a potent inhibitor of the release of the gonadotropins, luteinizing hormone (LH) and follicle stimulating hormone (FSH). 
In order for spermatogenesis to occur in the testes, both FSH and testosterone must be present. By inhibiting release of FSH, trestolone creates an endocrine environment in which conditions for spermatogenesis are not ideal. Manufacture of sperm is further impaired by the suppression of LH, which in turn drastically curtails the production of testosterone. Sufficient regular doses of trestolone cause severe oligozoospermia or azoospermia, and therefore infertility, in most men. Trestolone-induced infertility has been found to be quickly reversible upon discontinuation. When LH release is inhibited, the amount of testosterone made in the testes declines dramatically. As a result of trestolone's gonadotropin-suppressing qualities, levels of serum testosterone fall sharply in men treated with sufficient amounts of the medication. Testosterone is the main hormone responsible for maintenance of male secondary sex characteristics. Normally, an inadequate testosterone level causes undesirable effects such as fatigue, loss of skeletal muscle mass, reduced libido, and weight gain. However, the androgenic and anabolic properties of trestolone largely ameliorate this essentially, trestolone replaces testosterone's role as the primary male hormone in the body. Pharmacokinetics The pharmacokinetic properties of trestolone, such as poor oral bioavailability and short elimination half-life, make it unsuitable for oral administration or long-term intramuscular injection. As such, trestolone must be administered parenterally via a different and more practical route such as subcutaneous implant, transdermal patch, or topical gel. Trestolone acetate, a prodrug of trestolone, can be administered via intramuscular injection. Chemistry Trestolone, also known as 7α-methyl-19-nortestosterone (MENT) or as 7α-methylestr-4-en-17β-ol-3-one, is a synthetic estrane steroid and a derivative of nandrolone (19-nortestosterone). It is a modification of nandrolone with a methyl group at the C7α position. Closely related AAS include 7α-methyl-19-norandrostenedione (MENT dione, trestione) (an androgen prohormone of trestolone) and dimethandrolone (7α,11β-dimethyl-19-nortestosterone) (the C11β methylated derivative of trestolone), as well as mibolerone (7α,17α-dimethyl-19-nortestosterone) and dimethyltrienolone (7α,17α-dimethyl-δ9,11-19-nortestosterone). The progestin tibolone (7α-methyl-17α-ethynyl-δ5(10)-19-nortestosterone) is also closely related to trestolone. History Trestolone was first described in 1963. However, it was not subsequently studied again until 1990. Development of trestolone for potential use in male hormonal contraception and androgen replacement therapy was started by 1993, and continued thereafter. No additional development appears to have been conducted since 2013. Trestolone was developed by the Population Council, a non-profit, non-governmental organization dedicated to reproductive health.. Society and culture Generic names Trestolone is the generic name of the drug and its . It is also commonly known as 7α-methyl-19-nortestosterone (MENT). References Abandoned drugs Secondary alcohols Anabolic–androgenic steroids Antigonadotropins Contraception for males Estranes Experimental methods of birth control Hormonal contraception Ketones Progestogens Synthetic estrogens
Trestolone
[ "Chemistry" ]
1,603
[ "Ketones", "Functional groups", "Drug safety", "Abandoned drugs" ]
7,185,405
https://en.wikipedia.org/wiki/Cahen%27s%20constant
In mathematics, Cahen's constant is defined as the value of an infinite series of unit fractions with alternating signs: C = 1/1 − 1/2 + 1/6 − 1/42 + 1/1806 − ..., that is, the sum of (−1)^k/(sk − 1) over k ≥ 0, approximately 0.6434105463. Here sk denotes Sylvester's sequence, which is defined recursively by s0 = 2 and sk+1 = sk² − sk + 1, so that it begins 2, 3, 7, 43, 1807, ... Combining these fractions in pairs leads to an alternative expansion of Cahen's constant as a series of positive unit fractions formed from the terms in even positions of Sylvester's sequence. This series for Cahen's constant forms its greedy Egyptian expansion: C = 1/2 + 1/7 + 1/1807 + ..., the sum of 1/s2k over k ≥ 0. This constant is named after Eugène Cahen (also known for the Cahen–Mellin integral), who was the first to introduce it and prove its irrationality. Continued fraction expansion The majority of naturally occurring mathematical constants have no known simple patterns in their continued fraction expansions. Nevertheless, the complete continued fraction expansion of Cahen's constant is known: its sequence of coefficients is defined by a simple recurrence relation, and all the partial quotients of this expansion are squares of integers. Davison and Shallit made use of the continued fraction expansion to prove that Cahen's constant is transcendental. Alternatively, one may express the partial quotients in the continued fraction expansion of Cahen's constant through the terms of Sylvester's sequence; this identity can be proved by induction, using the recursion that defines Sylvester's sequence. Best approximation order Cahen's constant has best approximation order 3. That means, there exist positive constants c and K such that the inequality |C − p/q| < K/q³ has infinitely many rational solutions p/q, while the inequality |C − p/q| < c/q³ has at most finitely many solutions p/q. This implies (but is not equivalent to) the fact that Cahen's constant has irrationality measure 3, which was first observed in earlier work. The proof considers the sequence of convergents to Cahen's constant and combines the recursion for the continued fraction coefficients with basic properties of infinite products; a numerical estimate of the resulting limits, together with a well-known inequality for continued fraction convergents and the fact that any solution of the first inequality is necessarily a convergent to Cahen's constant, then shows that the best approximation order is 3. Notes References External links Mathematical constants Real transcendental numbers
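A short numerical sketch (Python, written here for illustration; nothing in it comes from the cited references) computes the alternating series over Sylvester's sequence and compares it with the paired expansion of positive unit fractions:

```python
from fractions import Fraction

def sylvester(count):
    """First `count` terms of Sylvester's sequence: 2, 3, 7, 43, 1807, ..."""
    terms, s = [], 2
    for _ in range(count):
        terms.append(s)
        s = s * s - s + 1
    return terms

s = sylvester(8)

# Alternating series of unit fractions: 1/1 - 1/2 + 1/6 - 1/42 + ...
alternating = sum(Fraction((-1) ** k, s[k] - 1) for k in range(len(s)))

# Greedy Egyptian expansion: unit fractions from the even-indexed Sylvester terms.
egyptian = sum(Fraction(1, s[k]) for k in range(0, len(s), 2))

print(float(alternating))   # about 0.6434105463
print(float(egyptian))      # identical value, by the pairing identity
```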
Cahen's constant
[ "Mathematics" ]
499
[ "Mathematical constants", "Mathematical objects", "Numbers", "nan" ]
7,185,428
https://en.wikipedia.org/wiki/Random%20dynamical%20system
In the mathematical field of dynamical systems, a random dynamical system is a dynamical system in which the equations of motion have an element of randomness to them. Random dynamical systems are characterized by a state space S, a set of maps from S into itself that can be thought of as the set of all possible equations of motion, and a probability distribution Q on the set that represents the random choice of map. Motion in a random dynamical system can be informally thought of as a state evolving according to a succession of maps randomly chosen according to the distribution Q. An example of a random dynamical system is a stochastic differential equation; in this case the distribution Q is typically determined by noise terms. It consists of a base flow, the "noise", and a cocycle dynamical system on the "physical" phase space. Another example is discrete state random dynamical system; some elementary contradistinctions between Markov chain and random dynamical system descriptions of a stochastic dynamics are discussed. Motivation 1: Solutions to a stochastic differential equation Let be a -dimensional vector field, and let . Suppose that the solution to the stochastic differential equation exists for all positive time and some (small) interval of negative time dependent upon , where denotes a -dimensional Wiener process (Brownian motion). Implicitly, this statement uses the classical Wiener probability space In this context, the Wiener process is the coordinate process. Now define a flow map or (solution operator) by (whenever the right hand side is well-defined). Then (or, more precisely, the pair ) is a (local, left-sided) random dynamical system. The process of generating a "flow" from the solution to a stochastic differential equation leads us to study suitably defined "flows" on their own. These "flows" are random dynamical systems. Motivation 2: Connection to Markov Chain An i.i.d random dynamical system in the discrete space is described by a triplet . is the state space, . is a family of maps of . Each such map has a matrix representation, called deterministic transition matrix. It is a binary matrix but it has exactly one entry 1 in each row and 0s otherwise. is the probability measure of the -field of . The discrete random dynamical system comes as follows, The system is in some state in , a map in is chosen according to the probability measure and the system moves to the state in step 1. Independently of previous maps, another map is chosen according to the probability measure and the system moves to the state . The procedure repeats. The random variable is constructed by means of composition of independent random maps, . Clearly, is a Markov Chain. Reversely, can, and how, a given MC be represented by the compositions of i.i.d. random transformations? Yes, it can, but not unique. The proof for existence is similar with Birkhoff–von Neumann theorem for doubly stochastic matrix. Here is an example that illustrates the existence and non-uniqueness. Example: If the state space and the set of the transformations expressed in terms of deterministic transition matrices. Then a Markov transition matrix can be represented by the following decomposition by the min-max algorithm, In the meantime, another decomposition could be Formal definition Formally, a random dynamical system consists of a base flow, the "noise", and a cocycle dynamical system on the "physical" phase space. In detail. Let be a probability space, the noise space. 
Define the base flow as follows: for each "time" , let be a measure-preserving measurable function: for all and ; Suppose also that , the identity function on ; for all , . That is, , , forms a group of measure-preserving transformations of the noise . For one-sided random dynamical systems, one would consider only positive indices ; for discrete-time random dynamical systems, one would consider only integer-valued ; in these cases, the maps would only form a commutative monoid instead of a group. While true in most applications, it is not usually part of the formal definition of a random dynamical system to require that the measure-preserving dynamical system is ergodic. Now let be a complete separable metric space, the phase space. Let be a -measurable function such that for all , , the identity function on ; for (almost) all , is continuous; satisfies the (crude) cocycle property: for almost all , In the case of random dynamical systems driven by a Wiener process , the base flow would be given by . This can be read as saying that "starts the noise at time instead of time 0". Thus, the cocycle property can be read as saying that evolving the initial condition with some noise for s seconds and then evolving through a further t seconds with the same noise (as started from the s-seconds mark) gives the same result as evolving the initial condition through (t + s) seconds with that same noise. Attractors for random dynamical systems The notion of an attractor for a random dynamical system is not as straightforward to define as in the deterministic case. For technical reasons, it is necessary to "rewind time", as in the definition of a pullback attractor. Moreover, the attractor is dependent upon the realisation of the noise. See also Chaos theory Diffusion process Stochastic control References Stochastic differential equations Stochastic processes
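The connection between discrete i.i.d. random dynamical systems and Markov chains sketched in "Motivation 2" above can be made concrete with a short simulation. The following Python sketch is illustrative only: the two-state space, the three deterministic maps and their probability measure are hypothetical choices, not taken from the article, and the induced Markov transition matrix is simply the probability-weighted sum of the deterministic transition matrices.

```python
import numpy as np

# Hypothetical 2-state example: each deterministic transition matrix is binary
# with exactly one entry 1 in each row (every state has a unique image).
maps = [
    np.array([[1, 0], [1, 0]]),  # both states map to state 0
    np.array([[0, 1], [0, 1]]),  # both states map to state 1
    np.array([[1, 0], [0, 1]]),  # identity map
]
q = np.array([0.3, 0.5, 0.2])    # probability measure Q over the maps

# The induced Markov transition matrix is the Q-weighted average of the maps.
P = sum(p_i * m for p_i, m in zip(q, maps))
print(P)                          # [[0.5 0.5] [0.3 0.7]]

rng = np.random.default_rng(0)

def step(state):
    """Apply one independently chosen deterministic map to the current state."""
    m = maps[rng.choice(len(maps), p=q)]
    return int(np.argmax(m[state]))   # row `state` contains a single 1: the image of `state`

# Composing independent random maps yields a Markov chain: the empirical
# transition frequencies should approach P.
state, counts = 0, np.zeros((2, 2))
for _ in range(100_000):
    nxt = step(state)
    counts[state, nxt] += 1
    state = nxt
print(counts / counts.sum(axis=1, keepdims=True))
```

The non-uniqueness mentioned above shows up here as well: a different collection of deterministic maps and weights can induce the same transition matrix P.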
Random dynamical system
[ "Mathematics" ]
1,114
[ "Random dynamical systems", "Dynamical systems" ]
7,185,509
https://en.wikipedia.org/wiki/Category%20of%20finite-dimensional%20Hilbert%20spaces
In mathematics, the category FdHilb has all finite-dimensional Hilbert spaces for objects and the linear transformations between them as morphisms. Whereas the theory described by the usual category of Hilbert spaces, Hilb, is ordinary quantum mechanics, the corresponding theory on finite-dimensional Hilbert spaces is called fdQM. Properties This category is monoidal, possesses finite biproducts, and is dagger compact. According to a theorem of Selinger, the category of finite-dimensional Hilbert spaces is complete for dagger compact categories: an equation between morphisms expressed in the language of dagger compact categories holds in every dagger compact category if and only if it holds in FdHilb. Many ideas from Hilbert spaces, such as the no-cloning theorem, hold in general for dagger compact categories. See that article for additional details. References Monoidal categories Dagger categories Hilbert spaces
Category of finite-dimensional Hilbert spaces
[ "Physics", "Mathematics" ]
146
[ "Mathematical structures", "Category theory stubs", "Hilbert spaces", "Monoidal categories", "Quantum mechanics", "Category theory", "Categories in category theory", "Dagger categories" ]
7,185,671
https://en.wikipedia.org/wiki/Uniformly%20Cauchy%20sequence
In mathematics, a sequence of functions from a set S to a metric space M is said to be uniformly Cauchy if: For all , there exists such that for all : whenever . Another way of saying this is that as , where the uniform distance between two functions is defined by Convergence criteria A sequence of functions {fn} from S to M is pointwise Cauchy if, for each x ∈ S, the sequence {fn(x)} is a Cauchy sequence in M. This is a weaker condition than being uniformly Cauchy. In general a sequence can be pointwise Cauchy and not pointwise convergent, or it can be uniformly Cauchy and not uniformly convergent. Nevertheless, if the metric space M is complete, then any pointwise Cauchy sequence converges pointwise to a function from S to M. Similarly, any uniformly Cauchy sequence will tend uniformly to such a function. The uniform Cauchy property is frequently used when S is not just a set, but a topological space, and M is a complete metric space. The following theorem holds: Let S be a topological space and M a complete metric space. Then any uniformly Cauchy sequence of continuous functions fn : S → M tends uniformly to a unique continuous function f : S → M. Generalization to uniform spaces A sequence of functions from a set S to a uniform space U is said to be uniformly Cauchy if: For any entourage E of U, there exists such that, for all , whenever . See also Modes of convergence (annotated index) Functional analysis Convergence (mathematics)
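The distinction between pointwise and uniformly Cauchy can be checked numerically for the classical example fn(x) = xⁿ. The Python sketch below approximates the uniform distance on a finite grid; the grids, the comparison of indices n and 2n, and the helper sup_distance are illustrative assumptions rather than anything from the article.

```python
import numpy as np

def sup_distance(f, g, xs):
    """Approximate the uniform distance sup_x |f(x) - g(x)| on a finite grid xs."""
    return float(np.max(np.abs(f(xs) - g(xs))))

def f(n):
    return lambda x: x ** n          # f_n(x) = x^n

# On [0, 1/2] the sequence is uniformly Cauchy: sup |f_n - f_2n| <= (1/2)^n -> 0.
xs_small = np.linspace(0.0, 0.5, 1001)
print([round(sup_distance(f(n), f(2 * n), xs_small), 8) for n in (5, 10, 20, 40)])

# On [0, 1) it is pointwise Cauchy but not uniformly Cauchy: the sup distance
# between f_n and f_2n stays near 1/4 (attained where x^n is close to 1/2).
xs_large = np.linspace(0.0, 0.999, 100001)
print([round(sup_distance(f(n), f(2 * n), xs_large), 8) for n in (5, 10, 20, 40)])
```

Because M = ℝ is complete, the uniformly Cauchy behaviour on [0, 1/2] is exactly what guarantees uniform convergence there (to the zero function), in line with the theorem quoted above.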
Uniformly Cauchy sequence
[ "Mathematics" ]
337
[ "Sequences and series", "Functions and mappings", "Convergence (mathematics)", "Mathematical structures", "Functional analysis", "Mathematical analysis", "Mathematical analysis stubs", "Mathematical objects", "Mathematical relations" ]
7,186,101
https://en.wikipedia.org/wiki/The%20Foretelling
"The Foretelling" is the first episode of the BBC sitcom The Black Adder, the first series of the long-running comedy programme Blackadder. It marks Rowan Atkinson's debut as the character Edmund Blackadder, and is the first appearance of the recurring characters Baldrick (Tony Robinson) and Percy (Tim McInnerny). The comedy actor Peter Cook guest stars as King Richard III. The Black Adder is a historical comedy set in late Medieval England on the cusp of the Tudor Period, and centres on the eponymous "Black Adder", the pseudonym adopted from this episode onwards by Edmund Plantagenet, Duke of Edinburgh. The premise is that Henry Tudor did not become king in 1485, but instead rewrote history to portray himself as the man who killed Richard III. The show sets out to rectify the situation by telling the "real story" and presents the alternate history of King Richard IV. The script of this episode contains many lines and situations which borrow from or parody William Shakespeare's plays Richard III and Macbeth. Plot A prologue introduces the episode with a narrative describing the Tudor King Henry VII as one of history's greatest liars - along with Nicolaus Copernicus, instigator of the heliocentrism cosmology theory - and establishes the show's premise that he rewrote history to suit his own ends. The narrator dispels the popular depiction of King Richard III of England as a scheming murderer; he appears as a villainous hunchback, hobbling towards his young nephews with a dagger, but the dagger is revealed to be a toy and the hunchback is a sack of presents. A close-up of one of the children fades to a shot of the bearded Richard, Duke of York (Brian Blessed) roaring with laughter, as the narrator declares that he grew up to be "a big, strong boy", and that it was he who was crowned king after winning the Battle of Bosworth Field, not Henry. The story opens in England in the year 1485 on the eve of the Battle of Bosworth. A feast is held at the castle of King Richard III of England as his court prepares for the next day's battle with the forces led by Henry Tudor. The King (Peter Cook) gives a speech parodying the opening of Shakespeare's play. A young lord's overzealous cheering raises the King's attention, who asks Richard, Duke of York, about the cheerer's identity. Richard doesn't recognise him but his eldest son, Harry, informs him that it is his second son, Edmund – though Richard never calls him that. instand he calls Edmund other names such as "Edna", starting a running gag lasting throughout the series. He asks Edmund if he will be participating in the battle; Edmund's buffoonish answer makes the King uneasy, but Richard promises that he will place Edmund far away from the King. Edmund and his friend, Lord Percy Percy, Duke of Northumberland, are joined by a servant Baldrick, who with a bit of flattery manages to win enough favour with Edmund to be chosen as his squire for the morning battle. The next day, both Edmund and Baldrick oversleep. Once woken by Edmund's mother, Gertrude of Flanders, they rush to the battlefield, Edmund by horse and Baldrick by mule. Edmund is initially eager to fight but, observing the combatants from afar, he comes to the realisation that fighting could lead to death. He decides at that moment to remain a spectator and then hides behind a bush to relieve himself. Meanwhile, the King has won the battle but lost his horse. 
Telling the Duke of York that he will meet him back at the castle, he wanders off to search for another horse, stumbling across Edmund's steed. Noticing an attempt to steal his horse, Edmund draws his sword and decapitates the apparent thief, only recognising him as King Richard III afterwards. With Baldrick's help, Edmund hides the body in a cottage but forgets the head, which Percy brings, claiming it to be his triumph until realising whose head it is. Before they can escape, a wounded knight begs to be sheltered in exchange for his land and money, but Edmund and Baldrick shake him off. Returning to the castle, Edmund reveals that King Richard is dead, startling his mother and also his father, who has freshly returned from battle. Any doubts are dispelled by Harry, who brings the King's corpse back to the castle from the cottage. Edmund fears retribution for his crime but as everyone assumes Henry Tudor to be the murderer, Edmund escapes punishment, while his father is hailed as the new king, Richard IV. Edmund, now a royal prince, resolves to become more assertive, hoping to gain his father's respect and approval, and gives himself the title "The Black Adder" (at Baldrick's suggestion who dissuaded him from his first idea, "The Black Vegetable"). To his dismay, Edmund finds out that Percy brought the wounded knight from the cottage back to the castle, but after hearing of his wealth, Edmund lets him stay without asking any further questions. Later, Edmund finds himself haunted by the headless ghost of his great-uncle, who openly accuses him of beheading him and even calling him "Edna" in order to taunt him. During the celebratory banquet in honour of the new king, a portrait of Henry Tudor is presented for ridicule, and Edmund is horrified to learn that the wounded man he is sheltering is actually the enemy. Edmund rushes back to his room only to find Henry Tudor gone. Edmund pursues him but the ghost of Richard III chases Edmund into a foggy meadow, where he meets three witches who address the Black Adder as "Ruler of men, Ravisher of women, Slayer of kings" and predict that he shall one day become king. Edmund thus proclaims "History, here I come!" When he leaves the meadow, the witches remark among themselves that they had expected Henry Tudor to look different, before realising that they had prophesied to the wrong person again. Cast The closing credits of this episode list the cast members "in order of precedence". Peter Cook as Richard III Brian Blessed as Richard IV Peter Benson as Henry VII Robert East as Harry, Prince of Wales Rowan Atkinson as Edmund, Duke of Edinburgh Tim McInnerny as Percy, heir to the Duchy of Northumberland Elspet Gray as The Queen Philip Kendall as the Painter Kathleen St John as Goneril Barbara Miller as Regan Gretchen Franklin as Cordelia Tony Robinson as Baldrick Production "The Foretelling" featured a guest star appearance by veteran comedian Peter Cook as Richard III. Cook had previously worked with Rowan Atkinson, having appeared together in The Secret Policeman's Ball (1979) and Peter Cook & Co (1980). Cook's appearance in this episode as Richard III caused him some alarm; both producer John Lloyd and co-star Brian Blessed have recalled that Cook was very nervous about playing the part. Cook was also not fond of adhering to a script and his lines contained many improvisations. In the end, parts of Cook's performance took the form of a mock-heroic parody of Laurence Olivier's portrayal of the king in the 1955 film version of Shakespeare's play. 
References to Shakespeare This first episode of The Black Adder contains many references to the works of Shakespeare and, as with subsequent episodes in this series, the end credits include an acknowledgement of "additional dialogue by William Shakespeare". Most obviously, the script of "The Foretelling" draws on material from Richard III but a number of other aspects of the episode also parody Shakespeare's other works: The prologue introduces King Richard III at first as a deformed Shakespearean villain before revealing him to be a kindly and avuncular man who teases his young nephews with a pretend hump, humorously demolishing traditional portrayals of the character. The villainous role is instead taken on by Prince Edmund. Richard's queen consort and Edmund's mother is Gertrude of Flanders; in Hamlet, the protagonist's mother is Gertrude, Queen of Denmark. In the opening banquet scene, King Richard gives a speech which is a pastiche of Richard's opening soliloquy, "Now is the winter of our discontent ..." (Richard III, Act I, Scene I). Before the Battle of Bosworth, King Richard rouses his troops with a speech, "Once more unto the breach, dear friends, once more..." – words taken directly from King Henry V's speech at the Siege of Harfleur (Henry V, Act 3, Scene I). After the battle, Peter Cook's King Richard is heard cheerfully calling "A horse! a horse! my kingdom for a horse!" (Richard III, Act V, Scene IV) in a bathetic style, as if he is whistling for a pet dog. Upon discovering the decapitated body of King Richard, Prince Harry makes a sorrowful, mock-heroic speech which comprises one of Mark Antony's lines from Julius Caesar, "O! pardon me, thou bleeding piece of earth," (Julius Caesar, Act III, Scene I), and Horatio's line from Hamlet, "And flights of angels sing thee to their rest" (Hamlet, Act 5, Scene 2). The appearance of Richard III's ghost to haunt Prince Edmund during the victory banquet is based closely on the haunting of Macbeth by Banquo's ghost (Macbeth, Act III Scene IV). In the final scene, Prince Edmund confronts three witches who foretell that he will become king, in a parody of the Three Witches from Macbeth (Macbeth, Act III Scene IV). In the episode credits, the Black Adder witches are given the names of Goneril, Regan and Cordelia, the names of King Lear's three daughters. References External links 1980s British television series premieres 1983 British television episodes Blackadder episodes Cultural depictions of Edward V Cultural depictions of Henry VII of England Cultural depictions of Nicolaus Copernicus Fiction set in the 1480s Television shows based on Macbeth Television shows written by Richard Curtis Television shows written by Rowan Atkinson Works based on Richard III (play) Cultural depictions of Richard of Shrewsbury, Duke of York
The Foretelling
[ "Astronomy" ]
2,135
[ "Cultural depictions of astronomers", "Cultural depictions of Nicolaus Copernicus" ]
7,186,253
https://en.wikipedia.org/wiki/Extended%20precision
Extended precision refers to floating-point number formats that provide greater precision than the basic floating-point formats. Extended-precision formats support a basic format by minimizing roundoff and overflow errors in intermediate values of expressions on the base format. In contrast to extended precision, arbitrary-precision arithmetic refers to implementations of much larger numeric types (with a storage count that usually is not a power of two) using special software (or, rarely, hardware). Extended-precision implementations There is a long history of extended floating-point formats reaching back nearly to the middle of the last century. Various manufacturers have used different formats for extended precision for different machines. In many cases the format of the extended precision is not quite the same as a scale-up of the ordinary single- and double-precision formats it is meant to extend. In a few cases the implementation was merely a software-based change in the floating-point data format, but in most cases extended precision was implemented in hardware, either built into the central processor itself, or more often, built into the hardware of an optional, attached processor called a "floating-point unit" (FPU) or "floating-point processor" (FPP), accessible to the CPU as a fast input/output device. IBM extended-precision formats The IBM 1130, sold in 1965, offered two floating-point formats: a 32-bit "standard precision" format and a 40-bit "extended precision" format. The standard-precision format contains a 24-bit two's complement significand while the extended-precision format utilizes a 32-bit two's complement significand. The latter format makes full use of the CPU's 32-bit integer operations. The characteristic in both formats is an 8-bit field containing the power of two biased by 128. Floating-point arithmetic operations are performed by software, and double precision is not supported at all. The extended format occupies three 16-bit words, with the extra space simply ignored. The IBM System/360 supports a 32-bit "short" floating-point format and a 64-bit "long" floating-point format. The 360/85 and follow-on System/370 add support for a 128-bit "extended" format. These formats are still supported in the current design, where they are now called the "hexadecimal floating-point" (HFP) formats. Microsoft MBF extended-precision format Microsoft BASIC ports for the 6502 CPU, such as Commodore BASIC, AppleSoft BASIC, KIM-1 BASIC and MicroTAN BASIC, have supported an extended 40-bit variant of the floating-point format Microsoft Binary Format (MBF) since 1977. IEEE 754 extended-precision formats The IEEE 754 floating-point standard recommends that implementations provide extended-precision formats. The standard specifies the minimum requirements for an extended format but does not specify an encoding. The encoding is the implementor's choice. The IA-32, x86-64, and Itanium processors support what is by far the most influential format meeting this standard: the Intel 80-bit (64-bit significand) "double extended" format, described in the next section. The Motorola 6888x math coprocessors and the Motorola 68040 and 68060 processors also support a 64-bit significand extended-precision format (similar to the Intel format, although padded to a 96-bit format with 16 unused bits inserted between the exponent and significand fields, and values with exponent zero and bit 63 one are normalized values). 
The follow-on Coldfire processors do not support this 96-bit extended-precision format. The FPA10 math coprocessor for early ARM processors also supports a 64-bit significand extended-precision format (similar to the Intel format although padded to a 96-bit format with 16 zero bits inserted between the sign and the exponent fields), but without correct rounding. The x87 and Motorola 68881 80-bit formats meet the requirements of the IEEE 754-1985 double extended format, as does the IEEE 754 128-bit binary format. x86 extended-precision format The x86 extended-precision format is an 80-bit format first implemented in the Intel 8087 math coprocessor and is supported by all processors that are based on the x86 design that incorporate a floating-point unit (FPU). The Intel 8087 was the first x86 device which supported floating-point arithmetic in hardware. It was designed to support a 32-bit "single precision" format and a 64-bit "double-precision" format for encoding and interchanging floating-point numbers. The extended format was designed not to store data at higher precision, but rather to allow for the computation of temporary double results more reliably and accurately by minimising overflow and roundoff-errors in intermediate calculations. All the floating-point registers in the 8087 hold this format, and it automatically converts numbers to this format when loading registers from memory and also converts results back to the more conventional formats when storing the registers back into memory. To enable intermediate subexpression results to be saved in extended precision scratch variables and continued across programming language statements, and otherwise interrupted calculations to resume where they were interrupted, it provides instructions which transfer values between these internal registers and memory without performing any conversion, which therefore enables access to the extended format for calculations – also reviving the issue of the accuracy of functions of such numbers, but at a higher precision. The floating-point units (FPU) on all subsequent x86 processors have supported this format. As a result, software can be developed which takes advantage of the higher precision provided by this format. William Kahan, a primary designer of the x87 arithmetic and initial IEEE 754 standard proposal notes on the development of the x87 floating point: "An extended format as wide as we dared (80 bits) was included to serve the same support role as the 13 decimal internal format serves in Hewlett-Packard's 10 decimal calculators." Moreover, Kahan notes that 64 bits was the widest significand across which carry propagation could be done without increasing the cycle time on the 8087, and that the x87 extended precision was designed to be extensible to higher precision in future processors: "For now the 10 byte extended format is a tolerable compromise between the value of extra-precise arithmetic and the price of implementing it to run fast; very soon two more bytes of precision will become tolerable, and ultimately a 16 byte format. ... That kind of gradual evolution towards wider precision was already in view when IEEE Standard 754 for Floating-Point Arithmetic was framed." This 80-bit format uses one bit for the sign of the significand, 15 bits for the exponent field (i.e. the same range as the 128-bit quadruple precision IEEE 754 format) and 64 bits for the significand. 
The exponent field is biased by 16383, meaning that 16383 has to be subtracted from the value in the exponent field to compute the actual power of 2. An exponent field value of 32767 (all fifteen bits 1) is reserved so as to enable the representation of special states such as infinity and Not a Number. If the exponent field is zero, the value is a denormal number and the exponent of 2 is −16382. In the following table, "" is the value of the sign bit (0 means positive, 1 means negative), "" is the value of the exponent field interpreted as a positive integer, and "" is the significand interpreted as a positive binary number, where the binary point is located between bits 63 and 62. The "" field is the combination of the integer and fraction parts in the above diagram. {| class="wikitable" |+ Interpretation of the fields of an x86 Extended-Precision value ! Exponent !colspan=2| Significand !rowspan=2| Meaning |- !align="center"| !align="center"| !align="center"| |- !rowspan=3| all 0 |align="center" rowspan=2| 0 ||align="center"| 0 || Zero. The sign bit gives the sign of the zero, which usually is meaningless. |- |align="center"| non-zero || Denormal. The value is |- |align="center"| 1 || align="center" | anything || Pseudo Denormal. The 80387 and later properly interpret this value but will not generate it. The value is |- ! ! ! ! |- !rowspan=7| all 1 |align="center" rowspan=2| 00 || align="center" | 0 || Pseudo-infinity. The sign bit gives the sign of the infinity. The 8087 and 80287 treat this as Infinity. The 80387 and later treat this as an invalid operand. |- |align="center"| non-zero || Pseudo 'Not a Number'. The sign bit is meaningless. The 8087 and 80287 treat this as a Signaling Not a Number. The 80387 and later treat this as an invalid operand. |- |align="center"| 01 || align="center" | anything || Pseudo 'Not a Number'. The sign bit is meaningless. The 8087 and 80287 treat this as a Signaling Not a Number. The 80387 and later treat this as an invalid operand. |- |align="center" rowspan=2| 10 || align="center" | 0 || Infinity. The sign bit gives the sign of the infinity. The 8087 and 80287 treat this as a Signaling Not a Number. The 8087 and 80287 coprocessors used the pseudo-infinity representation for infinities. |- |align="center"| non-zero || Signalling 'Not a Number', the sign bit is meaningless. |- |align="center" rowspan=2| 11 || align="center" | 0 || Floating-point Indefinite, the result of invalid calculations such as square root of a negative number, logarithm of a negative number, , , infinity times 0, and others, when the processor has been configured to not generate exceptions for invalid operands. The sign bit is meaningless. This is a special case of a Quiet Not a Number. |- |align="center"| non-zero || Quiet 'Not a Number', the sign bit is meaningless. The 8087 and 80287 treat this as a Signaling Not a Number. |- ! ! ! ! |- !rowspan=2| any other |align="center"| 0 || anything || Unnormal. Only generated on the 8087 and 80287. The 80387 and later treat this as an invalid operand. The value is |- |align="center"| 1 || anything || Normalized value. The value is |} In contrast to the single- and double-precision formats, this format does not utilize an implicit/hidden bit. Rather, bit 63 contains the integer part of the significand and bits 62–0 hold the fractional part. Bit 63 will be 1 on all normalized numbers. 
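The field layout just described can be inspected directly. The Python sketch below decodes the sign, the 15-bit biased exponent and the 64-bit significand (with its explicit integer bit 63) from the raw bytes of a value; it assumes numpy's np.longdouble is the 80-bit x87 format stored in a 16-byte slot with 6 padding bytes, which is the usual situation on x86 Linux builds but not on every platform, and the sample values are arbitrary.

```python
import numpy as np

def decode_x87(value):
    """Split an x86 extended-precision value into sign, biased exponent and significand.

    Assumption: np.longdouble is the 80-bit x87 format padded to 16 bytes
    (typical on x86/x86-64 Linux); the low 10 bytes hold the 80-bit datum.
    """
    raw = np.longdouble(value).tobytes()[:10]
    bits = int.from_bytes(raw, "little")
    sign = bits >> 79                       # bit 79
    biased_exp = (bits >> 64) & 0x7FFF      # bits 78..64, bias 16383
    significand = bits & ((1 << 64) - 1)    # bits 63..0; bit 63 is the explicit integer bit
    return sign, biased_exp, significand

samples = (np.longdouble(1.0),              # normalized: integer bit 1, biased exponent 16383
           np.longdouble(-2.5),             # normalized, negative
           np.longdouble(2.0) ** -16400)    # falls in the denormal range (exponent field 0)
for v in samples:
    s, e, m = decode_x87(v)
    print(f"sign={s} biased_exp={e:5d} integer_bit={m >> 63} "
          f"fraction={m & ((1 << 63) - 1):#018x}")
```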
There were several advantages to this design when the 8087 was being developed: Calculations can be completed a little faster if all bits of the significand are present in the register. A 64-bit significand provides sufficient precision to avoid loss of precision when the results are converted back to double-precision format in the vast majority of cases. This format provides a mechanism for indicating precision loss due to underflow which can be carried through further operations. For example, the calculation generates the intermediate result which is a denormal and also involves precision loss. The product of all of the terms is which can be represented as a normalized number. The 80287 could complete this calculation and indicate the loss of precision by returning an "unnormal" result (exponent not 0, bit 63 = 0). Processors since the 80387 no longer generate unnormals and do not support unnormal inputs to operations. They will generate a denormal if an underflow occurs but will generate a normalized result if subsequent operations on the denormal can be normalized. Introduction to use The 80-bit floating-point format was widely available by 1984, after the development of C, Fortran and similar computer languages, which initially offered only the common 32- and 64-bit floating-point sizes. On the x86 design most C compilers now support 80-bit extended precision via the long double type, and this was specified in the C99 / C11 standards (IEC 60559 floating-point arithmetic (Annex F)). Compilers on x86 for other languages often support extended precision as well, sometimes via nonstandard extensions: For example, Turbo Pascal offers an Extended type, and several Fortran compilers have a REAL*10 type (analogous to REAL*4 and REAL*8). Such compilers also typically include extended-precision mathematical subroutines, such as square root and trigonometric functions, in their standard libraries. Working range The 80-bit floating-point format has a range (including subnormals) from approximately to . Although this format is usually described as giving approximately eighteen significant digits of precision (the floor of the minimum guaranteed precision), the use of decimal when talking about binary is unfortunate because most decimal fractions are recurring sequences in binary, just as 1/3 is in decimal. Thus, a value such as 10.15 is represented in binary as equivalent to 10.1499996185 etc. in decimal for single precision, but 10.15000000000000035527 etc. in double precision: inter-conversion will involve approximation, except for those few decimal fractions that represent an exact binary value, such as 0.625. For the 80-bit format, the decimal string is 10.1499999999999999996530553 etc. The last 9 is the eighteenth fractional digit and thus the twentieth significant digit of the string. Bounds on conversion between decimal and binary for the 80-bit format can be given as follows: If a decimal string with at most 18 significant digits is correctly rounded to an 80-bit IEEE 754 binary floating-point value (as on input) then converted back to the same number of significant decimal digits (as for output), then the final string will exactly match the original; while, conversely, if an 80-bit IEEE 754 binary floating-point value is correctly converted and (nearest) rounded to a decimal string with at least 21 significant decimal digits then converted back to binary format it will exactly match the original. 
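The 18-digit and 21-digit round-trip bounds quoted above can be checked empirically. The sketch below uses numpy's np.longdouble and np.format_float_positional and assumes np.longdouble is the 80-bit format on the host (true of typical x86 Linux builds; on platforms where long double is plain binary64 the demonstration degrades); the sample value 10.15 and the helper name to_decimal are illustrative choices.

```python
import numpy as np

ld = np.longdouble   # assumed to be the 80-bit extended format on this host

def to_decimal(x, digits):
    """Render x with the given number of significant decimal digits."""
    return np.format_float_positional(x, unique=False, precision=digits, fractional=False)

# A decimal string with at most 18 significant digits survives
# decimal -> 80-bit binary -> decimal unchanged.
s18 = "10.1500000000000000"            # 18 significant digits
print(to_decimal(ld(s18), 18))         # prints the same 18-digit string

# Conversely, 21 significant digits are enough for
# 80-bit binary -> decimal -> 80-bit binary to be exact.
x = ld("10.15")
s21 = to_decimal(x, 21)
print(s21, ld(s21) == x)               # the 21-digit string converts back to the same value
```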
These approximations are particularly troublesome when specifying the best value for constants in formulae to high precision, as might be calculated via arbitrary-precision arithmetic. Need for the 80-bit format A notable example of the need for a minimum of 64 bits of precision in the significand of the extended-precision format is the need to avoid precision loss when performing exponentiation on double-precision values. The x86 floating-point units do not provide an instruction that directly performs exponentiation: Instead they provide a set of instructions that a program can use in sequence to perform exponentiation using the equation: In order to avoid precision loss, the intermediate results "" and "" must be computed with much higher precision, because effectively both the exponent and the significand fields of must fit into the significand field of the intermediate result. Subsequently, the significand field of the intermediate result is split between the exponent and significand fields of the final result when is calculated. The following discussion describes this requirement in more detail. With a little unpacking, an IEEE 754 double-precision value can be represented as: where is the sign of the exponent (either 0 or 1), is the unbiased exponent, which is an integer that ranges from 0 to 1023, and is the significand which is a 53-bit value that falls in the range Negative numbers and zero can be ignored because the logarithm of these values is undefined. For purposes of this discussion does not have 53 bits of precision because it is constrained to be greater than or equal to one i.e. the hidden bit does not count towards the precision (Note that in situations where is less than 1, the value is actually a de-normal and therefore may have already suffered precision loss. This situation is beyond the scope of this article). Taking the log of this representation of a double-precision number and simplifying results in the following: This result demonstrates that when taking base 2 logarithm of a number, the sign of the exponent of the original value becomes the sign of the logarithm, the exponent of the original value becomes the integer part of the significand of the logarithm, and the significand of the original value is transformed into the fractional part of the significand of the logarithm. Because is an integer in the range 0 to 1023, up to 10 bits to the left of the radix point are needed to represent the integer part of the logarithm. Because falls in the range the value of will fall in the range so at least 52 bits are needed to the right of the radix point to represent the fractional part of the logarithm. Combining 10 bits to the left of the radix point with 52 bits to the right of the radix point means that the significand part of the logarithm must be computed to at least 62 bits of precision. In practice values of less than require 53 bits to the right of the radix point and values of less than require 54 bits to the right of the radix point to avoid precision loss. Balancing this requirement for added precision to the right of the radix point, exponents less than 512 only require 9 bits to the left of the radix point and exponents less than 256 require only 8 bits to the left of the radix point. The final part of the exponentiation calculation is computing The "intermediate result" consists of an integer part "" added to a fractional part "". 
If the intermediate result is negative then a slight adjustment is needed to get a positive fractional part because both "" and "" are negative numbers. For positive intermediate results: For negative intermediate results: Thus the integer part of the intermediate result ("" or plus a bias becomes the exponent of the final result and transformed positive fractional part of the intermediate result: or becomes the significand of the final result. In order to supply 52 bits of precision to the final result, the positive fractional part must be maintained to at least 52 bits. In conclusion, the exact number of bits of precision needed in the significand of the intermediate result is somewhat data dependent but 64 bits is sufficient to avoid precision loss in the vast majority of exponentiation computations involving double-precision numbers. The number of bits needed for the exponent of the extended-precision format follows from the requirement that the product of two double-precision numbers should not overflow when computed using the extended format. The largest possible exponent of a double-precision value is 1023 so the exponent of the largest possible product of two double-precision numbers is 2047 (an 11-bit value). Adding in a bias to account for negative exponents means that the exponent field must be at least 12 bits wide. Combining these requirements: 1 bit for the sign, 12 bits for the biased exponent, and 64 bits for the significand means that the extended-precision format would need at least 77 bits. Engineering considerations resulted in the final definition of the 80-bit format (in particular the IEEE 754 standard requires the exponent range of an extended-precision format to match that of the next largest, quad, precision format which is 15 bits). Another example of calculations that benefit from extended precision arithmetic are iterative refinement schemes, used to indirectly clean out errors accumulated in the direct solution during the typically very large number of calculations made for numerical linear algebra. Language support Some C / C++ implementations (e.g., GNU Compiler Collection (GCC), Clang, Intel C++) implement long double using 80-bit floating-point numbers on x86 systems. However, this is implementation-defined behavior and is not required, but allowed by the standard, as specified for IEEE 754 hardware in the C99 standard "Annex F IEC 60559 floating-point arithmetic". GCC also provides __float80 and __float128 types. Some Common Lisp implementations (e.g. CMU Common Lisp, Embeddable Common Lisp) implement long-float using 80-bit floating-point numbers on x86 systems. The D programming language implements real using the largest floating-point size implemented in hardware, for example 80 bits for x86 CPUs. On other machines, this will be the widest floating-point type natively supported by the CPU, or 64-bit double precision, whichever is wider. Turbo Pascal (and Object Pascal or Delphi) has an extended 80-bit type available in addition to real / single (32 bits) and double (64 bits), either natively (when a 80x87 coprocessor is present) or emulated (through the Turbo87 library); this extended type is available on 16-, 32-, and 64-bit platforms, possibly with padding. The Racket run-time system provides the 80-bit extflonum datatype on x86 systems. The Swift standard library provides the Float80 datatype. The PowerBASIC BASIC compiler provides EXT or EXTENDED 10-byte extended-precision floating-point data type. Zig provides a f80 type since version 0.10.0. 
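The exponentiation argument in the "Need for the 80-bit format" section can be illustrated numerically: computing x^y as 2^(y·log2 x) with only double-precision intermediates loses a few significant digits, because the integer part of y·log2 x displaces bits of the fractional part, while extended-precision intermediates preserve them. The sketch below uses np.longdouble as a stand-in for the 80-bit intermediates (a platform assumption), and the particular values of x and y are arbitrary.

```python
import numpy as np

def pow_via_log2(x, y, ftype):
    """Compute x**y as 2**(y * log2(x)), holding the intermediates in `ftype`."""
    x, y = ftype(x), ftype(y)
    return np.exp2(y * np.log2(x))

x, y = 1.9, 1000.0
# Reference computed directly in extended precision (still approximate, but much closer).
exact = np.longdouble(x) ** np.longdouble(y)

dbl = pow_via_log2(x, y, np.float64)      # double intermediates: y*log2(x) is about 926
ext = pow_via_log2(x, y, np.longdouble)   # extended intermediates keep ~11 extra bits

print("double intermediates:  ", dbl)
print("extended intermediates:", ext)
print("rel. error, double:  ", float(abs(np.longdouble(dbl) - exact) / exact))
print("rel. error, extended:", float(abs(ext - exact) / exact))
```

With double intermediates, rounding y·log2 x to 53 bits typically costs a few of the result's significant decimal digits, which is exactly the loss the 64-bit extended significand is designed to absorb.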
See also GNU MPFR – the GNU "Multiple Precision Floating-Point Reliably" library for C IBM hexadecimal floating-point IEEE 754 long double x87 Footnotes References Computer arithmetic Floating point types
Extended precision
[ "Mathematics" ]
4,676
[ "Computer arithmetic", "Arithmetic" ]
7,186,519
https://en.wikipedia.org/wiki/Hypothetical%20astronomical%20object
Various unknown astronomical objects have been hypothesized throughout recorded history. For example, in the 5th century BCE, the philosopher Philolaus defined a hypothetical astronomical object which he called the "Central Fire", around which he proposed other celestial bodies (including the Sun) moved. Types of hypothetical astronomical objects Hypothetical astronomical objects have been speculated to exist both inside and outside of the Solar System, and speculation has included different kinds of stars, planets, and other astronomical objects. For hypothetical astronomical objects in the Solar System, see: List of hypothetical Solar System objects For hypothetical stars, see: Hypothetical star For hypothetical brown dwarfs, see: List of brown dwarfs For hypothetical black holes, see: Hypothetical black hole For extrasolar moons, all of which are currently hypothetical, see: Extrasolar moon For stars, planets or moons whose existence is not accepted by science, see: Planetary objects proposed in religion, astrology, ufology and pseudoscience and Stars proposed in religion For hypothetical planets in fiction, see: Fictional planets of the Solar System Hypothetical planet types Hypothetical types of extrasolar planets include: References
Hypothetical astronomical object
[ "Astronomy" ]
222
[ "Astronomical hypotheses", "Hypothetical astronomical objects", "Astronomical objects", "Astronomical myths" ]
7,186,647
https://en.wikipedia.org/wiki/Facet%20%28geometry%29
In geometry, a facet is a feature of a polyhedron, polytope, or related geometric structure, generally of dimension one less than the structure itself. More specifically: In three-dimensional geometry, a facet of a polyhedron is any polygon whose corners are vertices of the polyhedron, and is not a face. To facet a polyhedron is to find and join such facets to form the faces of a new polyhedron; this is the reciprocal process to stellation and may also be applied to higher-dimensional polytopes. In polyhedral combinatorics and in the general theory of polytopes, a face that has dimension n − 1 (an (n − 1)-face or hyperface) is also called a facet. A facet of a simplicial complex is a maximal simplex, that is a simplex that is not a face of another simplex of the complex. For (boundary complexes of) simplicial polytopes this coincides with the meaning from polyhedral combinatorics. References External links Polyhedra Polyhedral combinatorics Polytopes Broad-concept articles
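For the simplicial-complex sense of the term, facets can be computed mechanically as the maximal simplices. The short Python sketch below does this for a small hypothetical complex (a filled triangle with an extra edge attached); the function name and the example complex are illustrative assumptions.

```python
def facets(simplices):
    """Return the facets (maximal simplices) of a complex given as vertex tuples."""
    sets = [frozenset(s) for s in simplices]
    # A simplex is a facet iff it is not a proper subset of another simplex.
    return [set(s) for s in sets if not any(s < t for t in sets)]

# Hypothetical complex: the filled triangle {1,2,3} plus the edge {3,4} hanging off it.
complex_ = [(1,), (2,), (3,), (4,), (1, 2), (1, 3), (2, 3), (3, 4), (1, 2, 3)]
print(facets(complex_))   # -> [{3, 4}, {1, 2, 3}]
```

In the polytope sense, by contrast, the facets of the triangle regarded as a 2-polytope would be its three edges, matching the (n − 1)-face meaning described above.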
Facet (geometry)
[ "Mathematics" ]
237
[ "Polyhedral combinatorics", "Combinatorics" ]
7,186,688
https://en.wikipedia.org/wiki/Trestolone%20acetate
Trestolone acetate (; developmental code names CDB-903, NSC-69948, U-15614; also known as 7α-methyl-19-nortestosterone 17β-acetate (MENT acetate) and 7α-methylestr-4-en-17β-ol-3-one 17β-acetate) is a synthetic and injected anabolic–androgenic steroid (AAS) and a derivative of nandrolone (19-nortestosterone) which was never marketed. It is an androgen ester – specifically, the C17 acetate ester of trestolone (7α-methyl-19-nortestosterone; MENT). The medication was first described in 1963. See also List of androgen esters References Abandoned drugs Acetate esters Androgen esters Anabolic–androgenic steroids Estranes Ketones Sex hormone esters and conjugates Progestogens Synthetic estrogens
Trestolone acetate
[ "Chemistry" ]
213
[ "Ketones", "Functional groups", "Drug safety", "Abandoned drugs" ]
7,187,296
https://en.wikipedia.org/wiki/Space%20Integrated%20GPS/INS
Space Integrated GPS/INS (SIGI) is a strapdown Inertial Navigation Unit (INU) developed and built by Honeywell International to control and stabilize spacecraft during flight. SIGI integrates global positioning and inertial navigation technology to provide three navigation solutions: pure inertial, GPS-only and blended GPS/INS. Current and Future Usage SIGI has been employed on the International Space Station, the Japanese H-II Transfer Vehicle (HTV), the Boeing X-37, the CST-100 Starliner and the X-40. SIGI is also proposed as the primary navigation system for Orion, which is scheduled to replace the Space Shuttle. See also Air navigation Spherical trigonometry Miniature Inertial Measurement Unit (MIMU) References External links Relative Navigation and Attitude Determination Near the International Space Station Avionics Radio navigation Navigational equipment Spacecraft communication
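The three solution types listed above differ in how they trade the long-term drift of inertial integration against the short-term noise of GPS fixes. The toy 1-D Python simulation below is only a sketch of that idea under assumed noise figures and a simple constant-gain blend; it is not Honeywell's SIGI algorithm, and every number in it (sample rate, accelerometer bias, GPS noise, gain) is a hypothetical choice.

```python
import numpy as np

rng = np.random.default_rng(1)
dt, n = 0.1, 600                      # 60 s at 10 Hz
true_v = 5.0
true_x = np.cumsum(np.full(n, true_v) * dt)

# Pure inertial: integrating a slightly biased accelerometer makes error grow with time.
accel_err = rng.normal(0.0, 0.02, n) + 0.01          # m/s^2 noise plus a constant bias
ins = true_x + np.cumsum(np.cumsum(accel_err) * dt) * dt

# GPS-only: unbiased but noisy position fixes.
gps = true_x + rng.normal(0.0, 3.0, n)

# Blended GPS/INS: propagate with INS increments, nudge toward each GPS fix.
gain, est, blended = 0.05, ins[0], np.empty(n)
for i in range(n):
    if i:
        est += ins[i] - ins[i - 1]    # INS propagation step
    est += gain * (gps[i] - est)      # GPS correction step
    blended[i] = est

for name, track in (("pure inertial", ins), ("GPS-only", gps), ("blended GPS/INS", blended)):
    rms = np.sqrt(np.mean((track - true_x) ** 2))
    print(f"{name:16s} RMS position error: {rms:5.2f} m")
```

In a real blended solution the correction would come from a Kalman filter that also estimates sensor biases, which is what lets the inertial side bridge GPS outages without accumulating large drift.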
Space Integrated GPS/INS
[ "Astronomy", "Technology", "Engineering" ]
178
[ "Spacecraft communication", "Spacecraft stubs", "Avionics", "Astronomy stubs", "Aircraft instruments", "Aerospace engineering" ]
7,187,555
https://en.wikipedia.org/wiki/Gammasphere
The Gammasphere is a third generation gamma ray spectrometer used to study rare and exotic nuclear physics. It consists of 110 Compton-suppressed large volume, high-purity germanium detectors arranged in a spherical shell. Gammasphere has been used to perform a variety of experiments in nuclear physics. Most experiments involve using heavy ion nuclear fusion to form a highly excited atomic nucleus. This nucleus may then emit protons, neutrons, or alpha particles followed by a shower of tens of gamma rays. Gammasphere is used to measure properties of these gamma-rays for tens of millions of such gamma ray showers. The resultant data are analyzed to gain a deeper understanding of the properties of nuclei. Gammasphere was built in the early 1990s and has operated at the 88-inch cyclotron at Berkeley National Laboratory and at Argonne National Laboratory. In the movie Hulk, Bruce Banner is zapped by a machine called the Gammasphere. The actual Gammasphere, which detects rather than emits gamma rays, was used as a model for the device shown in the movie. See also Canadian Penning Trap Mass Spectrometer Helical Orbit Spectrometer (HELIOS) References External links LBNL: LBL site. ANL site Gammasphere Online Booklet Homepage Spectrometers
Gammasphere
[ "Physics", "Chemistry" ]
262
[ "Spectrometers", "Spectroscopy", "Spectrum (physical sciences)" ]
7,188,879
https://en.wikipedia.org/wiki/Immortal%20DNA%20strand%20hypothesis
The immortal DNA strand hypothesis was proposed in 1975 by John Cairns as a mechanism for adult stem cells to minimize mutations in their genomes. This hypothesis proposes that instead of segregating their DNA during mitosis in a random manner, adult stem cells divide their DNA asymmetrically, and retain a distinct template set of DNA strands (parental strands) in each division. By retaining the same set of template DNA strands, adult stem cells would pass mutations arising from errors in DNA replication on to non-stem cell daughters that soon terminally differentiate (end mitotic divisions and become a functional cell). Passing on these replication errors would allow adult stem cells to reduce their rate of accumulation of mutations that could lead to serious genetic disorders such as cancer. Although evidence for this mechanism exists, whether it is a mechanism acting in adult stem cells in vivo is still controversial. Methods Two main assays are used to detect immortal DNA strand segregation: label-retention and label-release pulse/chase assays. In the label-retention assay, the goal is to mark 'immortal' or parental DNA strands with a DNA label such as tritiated thymidine or bromodeoxyuridine (BrdU). These types of DNA labels will incorporate into the newly synthesized DNA of dividing cells during S phase. A pulse of DNA label is given to adult stem cells under conditions where they have not yet delineated an immortal DNA strand. During these conditions, the adult stem cells are either dividing symmetrically (thus with each division a new 'immortal' strand is determined and in at least one of the stem cells the immortal DNA strand will be marked with DNA label), or the adult stem cells have not yet been determined (thus their precursors are dividing symmetrically, and once they differentiate into adult stem cells and choose an 'immortal' strand, the 'immortal strand' will already have been marked). Experimentally, adult stem cells are undergoing symmetric divisions during growth and after wound healing, and are not yet determined at neonatal stages. Once the immortal DNA strand is labelled and the adult stem cell has begun or resumed asymmetric divisions, the DNA label is chased out. In symmetric divisions (most mitotic cells), DNA is segregating randomly and the DNA label will be diluted out to levels below detection after five divisions. If, however, cells are using an immortal DNA strand mechanism, then all the labeled DNA will continue to co-segregate with the adult stem cell, and after five (or more) divisions will still be detected within the adult stem cell. These cells are sometimes called label-retaining cells (LRCs). In the label-release assay, the goal is to mark the newly synthesized DNA that is normally passed on to the daughter (non-stem) cell. A pulse of DNA label is given to adult stem cells under conditions where they are dividing asymmetrically. Under conditions of homeostasis, adult stem cells should be dividing asymmetrically so that the same number of adult stem cells is maintained in the tissue compartment. After pulsing for long enough to label all the newly replicated DNA, the DNA label is chased out (each DNA replication now incorporates unlabeled nucleotides) and the adult stem cells are assayed for loss of the DNA label after two cell divisions. If cells are using a random segregation mechanism, then enough DNA label should remain in the cell to be detected. 
If, however, the adult stem cells are using an immortal DNA strand mechanism, they are obligated to retain the unlabeled 'immortal' DNA, and will release all the newly synthesized labeled DNA to their differentiating daughter cells in two divisions. Some scientists have combined the two approaches, by first using one DNA label to label the immortal strands, allowing to adult stem cells to begin dividing asymmetrically, and then using a different DNA label to label the newly synthesized DNA. Thus, the adult stem cells will retain one DNA label and release the other within two divisions. Evidence Evidence for the immortal DNA strand hypothesis has been found in various systems. One of the earliest studies by Karl Lark et al. demonstrated co-segregation of DNA in the cells of plant root tips. Plant root tips labeled with tritiated thymidine tended to segregate their labeled DNA to the same daughter cell. Though not all the labeled DNA segregated to the same daughter, the amount of thymidine-labeled DNA seen in the daughter with less label corresponded to the amount that would have arisen from sister-chromatid exchange. Later studies by Christopher Potten et al. (2002), using pulse/chase experiments with tritiated thymidine, found long-term label-retaining cells in the small intestinal crypts of neonatal mice. These researchers hypothesized that long-term incorporation of tritiated thymidine occurred because neonatal mice have undeveloped small intestines, and that pulsing tritiated thymidine soon after the birth of the mice allowed the 'immortal' DNA of adult stem cells to be labeled during their formation. These long-term cells were demonstrated to be actively cycling, as demonstrated by incorporation and release of BrdU. Since these cells were cycling but continued to contain the BrdU label in their DNA, the researchers reasoned that they must be segregating their DNA using an immortal DNA strand mechanism. Joshua Merok et al. from the lab of James Sherley engineered mammalian cells with an inducible p53 gene that controls asymmetric divisions. BrdU pulse/chase experiments with these cells demonstrated that chromosomes segregated non-randomly only when the cells were induced to divide asymmetrically like adult stem cells. These asymmetrically dividing cells provide an in vitro model for demonstration and investigation of immortal strand mechanisms. Scientists have strived to demonstrate that this immortal DNA strand mechanism exists in vivo in other types of adult stem cells. In 1996 Nik Zeps published the first paper demonstrating label retaining cells were present in the mouse mammary gland and this was confirmed in 2005 by Gilbert Smith who also published evidence that a subset of mouse mammary epithelial cells could retain DNA label and release DNA label in a manner consistent with the immortal DNA strand mechanism. Soon after, scientists from the laboratory of Derek van der Kooy showed that mice have neural stem cells that are BrdU-retaining and continue to be mitotically active. Asymmetric segregation of DNA was shown using real-time imaging of cells in culture. In 2006, scientists in the lab of Shahragim Tajbakhsh presented evidence that muscle satellite cells, which are proposed to be adult stem cells of the skeletal muscle compartment, exhibited asymmetric segregation of BrdU-labelled DNA when put into culture. 
They also had evidence that demonstrated BrdU release kinetics consistent with an immortal DNA strand mechanism were operating in vivo, using juvenile mice and mice with muscle regeneration induced by freezing. These experiments supporting the immortal strand hypothesis, however, are not conclusive. While the Lark experiments demonstrated co-segregation, the co-segregation may have been an artifact of radiation from the tritium. Although Potten identified the cycling, label-retaining cells as adult stem cells, these cells are difficult to identify unequivocally as adult stem cells. While the engineered cells provide an elegant model for co-segregation of chromosomes, studies with these cells were done in vitro with engineered cells. Some features may not be present in vivo or may be absent in vitro. In May 2007 evidence in support of the Immortal DNA Strand theory was discovered by Michael Conboy et al., using the muscle stem/satellite cell model during tissue regeneration, where there is tremendous cell division during a relatively brief period of time. Using two BrdU analogs to label template and newly synthesized DNA strands, they saw that about half of the dividing cells in regenerating muscle sort the older "Immortal" DNA to one daughter cell and the younger DNA to the other. In keeping with the stem cell hypothesis, the more undifferentiated daughter typically inherited the chromatids with the older DNA, while the more differentiated daughter inherited the younger DNA. Experimental evidence against the immortal strand hypothesis is sparse. In one study, researchers incorporated tritiated thymidine into dividing murine epidermal basal cells. They followed the release of tritiated thymidine after various chase periods, but the pattern of release was not consistent with the immortal strand hypothesis. Although they found label-retaining cells, they were not within the putative stem cell compartment. With increasing lengths of time for the chase periods, these label-retaining cells were located farther from the putative stem cell compartment, suggesting that the label-retaining cells had moved. However, finding conclusive evidence against the immortal strand hypothesis has proven difficult. DNA template strand segregation was studied in the developing zebrafish. During larval development there was rapid depletion of older DNA template strands from stem cell niches in the retina, brain and intestine. Using high resolution microscopy, no evidence of asymmetric template strand segregation (in over 100 cell pairs) was found, making it improbable that in developing zebrafish asymmetric DNA segregation avoids mutational burden as proposed by the immortal strand hypothesis. Further models After Cairns first proposed the immortal DNA strand mechanism, the theory has undergone several updated refinements. In 2002, he proposed that in addition to using immortal DNA strand mechanisms to segregate DNA, when the immortal DNA strands of adult stem cells undergo damage, they will choose to die (apoptose) rather than use DNA repair mechanisms that are normally used in non-stem cells. Emmanuel David Tannenbaum and James Sherley developed a quantitative model describing how repair of point mutations might differ in adult stem cells. They found that in adult stem cells, repair was most efficient if they used an immortal DNA strand mechanism for segregating DNA, rather than a random segregation mechanism. 
This method would be beneficial because it avoids wrongly fixing DNA mutations in both DNA strands and propagating the mutation. Mechanisms The complete proof of a concept generally requires a plausible mechanism that could mediate the effect. Although controversial, there is a suggestion that this could be provided by the Dynein Motor. This paper is accompanied by a comment summarizing the findings and background. However, this work has highly respected biologists among its detractors as exemplified by a further comment on a paper by the same authors from 2006. The authors have rebutted the criticism. See also Telomere References DNA Stem cells Cell biology Developmental biology
Immortal DNA strand hypothesis
[ "Biology" ]
2,156
[ "Behavior", "Cell biology", "Developmental biology", "Reproduction" ]
7,189,098
https://en.wikipedia.org/wiki/Bordeaux%20Segalen%20University
Bordeaux Segalen University (; originally called University of Victor Segalen Bordeaux II) was one of four universities in Bordeaux (together with Bordeaux 1, Michel de Montaigne Bordeaux 3 and Montesquieu Bordeaux 4). In 2014, it merged with Bordeaux 1 and Bordeaux 4 to form the University of Bordeaux. Bordeaux Segalen was specialized in Life and Health Sciences and Human and Social Sciences. It consisted of three UFRs of medicine, one UFR of pharmacy, one of odontology, one of human and social sciences (psychology, sociology, ethnology, educational sciences, cognitive sciences), one of mathematics applied to human and life sciences, one of life sciences (human biology, biology of extreme environments, neurosciences), one of oenology, one of sports sciences, a higher school of biotechnology (ESTBB) and three institutes: one of public health (ISPED), one for hydrotherapy (in Dax), and one for cognitics (cognitive engineering - IdC, now ENSC). Bordeaux Segalen contained the UFR d'Oenologie, a reputed oenological institute founded in 1880 by Ulysse Gayon, the same year as the foundation of the similar faculty of the University of California at Davis. Since 2003, a team led by Dominique Martin of the Bordeaux University Hospital has been rehearsing for the first human operation in zero gravity, using Zero-G aircraft. The operation is part of a project to develop surgical robots in space that are guided via satellite by Earth-based doctors. The project is developed with backing from the European Space Agency (ESA). Presidency Succession of presidents: Prof. Henri Bricaud, elected on December 21, 1970 Pr Jacques Latrille, elected on December 19, 1975 Pr Jean Tavernier, elected October 19, 1980, re-elected February 15, 1982 Pr Dominique Ducassou, elected on December 17, 1987 Pr Jacques Beylot, elected on November 17, 1992 Pr Josy Reiffers, elected on November 17, 1997 Pr Bernard Bégaud, elected on September 30, 2002 Pr Manuel Tunon de Lara, elected on January 29, 2008. Points of interest Jardin botanique de Talence See also University of Bordeaux List of public universities in France by academy Victor Segalen References External links University of Bordeaux 2 1968 establishments in France Bordeaux 2
Bordeaux Segalen University
[ "Astronomy" ]
479
[ "Outer space stubs", "Outer space", "Astronomy stubs" ]
7,189,249
https://en.wikipedia.org/wiki/Benzethonium%20chloride
Benzethonium chloride, also known as hyamine is a synthetic quaternary ammonium salt. This compound is an odorless white solid, soluble in water. It has surfactant, antiseptic, and anti-infective properties and it is used as a topical antimicrobial agent in first aid antiseptics. It is also found in cosmetics and toiletries such as soap, mouthwashes, anti-itch ointments, and antibacterial moist towelettes. Benzethonium chloride is also used in the food industry as a hard surface disinfectant. Uses Antimicrobial Benzethonium chloride exhibits a broad spectrum of microbiocidal activity against bacteria, fungi, mold, and viruses. Independent testing shows that benzethonium chloride is highly effective against such pathogens as methicillin-resistant Staphylococcus aureus, Salmonella, Escherichia coli, Clostridioides difficile, hepatitis B virus, hepatitis C virus, herpes simplex virus (HSV), human immunodeficiency virus (HIV), respiratory syncytial virus (RSV), and norovirus. The US Food and Drug Administration (FDA) specifies that the safe and effective concentrations for benzethonium chloride are 0.1-0.2% in first aid products. Aqueous solutions of benzethonium chloride are not absorbed through the skin. It is not approved in the US and Europe for use as a food additive. Being a quaternary ammonium salt, it is more toxic than negatively charged surfactants. However, in a two-year study on rats, there was no evidence of carcinogenic activity. It is available under trade names Salanine, BZT, Diapp, Quatrachlor, Polymine D, Phemithyn, Antiseptol, Disilyn, Phermerol, and others. It is also found in several grapefruit seed extract preparations and can be used as a preservative, as in the anaesthetics ketamine and alfaxalone. Other uses In addition to its highly effective antimicrobial activity, benzethonium chloride contains a positively charged nitrogen atom covalently bonded to four carbon atoms. This positive charge attracts it to the skin and hair. This contributes to a soft, powdery after-feel on the skin and hair, as well as long-lasting persistent activity against micro-organisms. Also, this positively charged hydrophilic part of the molecule makes it a cationic detergent. Benzethonium chloride is also used to titrate the quantity of sodium dodecyl sulfate in a mixture of sodium dodecyl sulfate, sodium chloride and sodium sulfate, using dimidium bromide-sulphan blue as an indicator. It precipitates as turbidity with anionic polymers in aqueous solution, allowing it to be used to estimate the amount of such polymers present in a sample. This test is used in commercial and industrial water treatment, where polyacrylates, polymaleates, and sulfonated polymers are commonly employed as dispersants. Methylbenzethonium chloride A related compound () is used to treat Leishmania major infections. Regulation Some data has suggested that long-term exposure to antibacterial ingredients could contribute to bacterial resistance or hormonal effects. Furthermore, there is little evidence that the use of such ingredients in consumer soaps is actually more effective than plain soap and water. In September 2016, the Food and Drug Administration issued a ban on nineteen consumer antiseptic wash ingredients. A ruling on benzethonium chloride, along with two other similar ingredients, was deferred for a year to allow for more data collection. References Chlorides Quaternary ammonium compounds Cationic surfactants Antiseptics Disinfectants Preservatives Phenol ethers Benzyl compounds Glycol ethers Ethanolamines
Benzethonium chloride
[ "Chemistry" ]
839
[ "Chlorides", "Inorganic compounds", "Salts" ]
7,189,344
https://en.wikipedia.org/wiki/Populus%20trichocarpa
Populus trichocarpa, the black cottonwood, western balsam-poplar or California poplar, is a deciduous broadleaf tree species native to western North America. It is used for timber, and is notable as a model organism in plant biology. Description It is a large tree, growing to a height of and a trunk diameter over . It ranks 3rd in poplar species in the American Forests Champion Tree Registry. It is normally fairly short-lived, but some trees may live up to 400 years. A cottonwood in Willamette Mission State Park near Salem, Oregon, holds the national and world records. Last measured in April 2008, this black cottonwood was found to be standing at tall, around, with 527 points. The bark is grey and covered with lenticels, becoming thick and deeply fissured on old trees. The bark can become hard enough to cause sparks when cut with a chainsaw. The stem is grey in the older parts and light brown in younger parts. The crown is usually roughly conical and quite dense. In large trees, the lower branches droop downwards. Spur shoots are common. The wood has a light coloring and a straight grain. The leaves are usually long with a glossy, dark green upper side and glaucous, light grey-green underside; larger leaves may be up to long and may be produced on stump sprouts and very vigorous young trees. The leaves are alternate, elliptical with a crenate margin and an acute tip, and reticulate venation. The petiole is reddish. The buds are conical, long, narrow, and sticky, with a strong balsam scent in spring when they open. P. trichocarpa has an extensive and aggressive root system, which can invade and damage drainage systems. Sometimes, the roots can even damage the foundations of buildings by drying out the soil. In 2016, the first direct evidence was published indicating that wild P. trichocarpa fixes nitrogen. Reproduction Flowering and fruiting P. trichocarpa is normally dioecious; male and female catkins are borne on separate trees. The species reaches flowering age around 10 years. Flowers may appear in early March to late May in Washington and Oregon, and sometimes as late as mid-June in northern and interior British Columbia, Idaho, and Montana. Staminate catkins contain 30 to 60 stamens, elongated to 2 to 3 cm, and are deciduous. The pollen can be an allergen. Pistillate catkins at maturity are 8 to 20 cm long with rotund-ovate, tricarpellate subsessile fruits 5 to 8 mm long. Each capsule contains many minute seeds with long, white, cottony hairs. Seed production and dissemination The seed ripens and is disseminated by late May to late June in Oregon and Washington, but frequently not until mid-July in Idaho and Montana. Abundant seed crops are usually produced every year. Attached to its cotton, the seed is light and buoyant and can be transported long distances by wind and water. Although highly viable, longevity of P. trichocarpa seed under natural conditions may be as short as two weeks to a month. This can be increased with cold storage. Seedling development Moist seedbeds are essential for high germination, and seedling survival depends on continuously favorable conditions during the first month. Wet bottomlands of rivers and major streams frequently provide such conditions, particularly where bare soil has been exposed or new soil laid down. Germination is epigeal (above ground). P. trichocarpa seedlings do not usually become established in abundance after logging unless special measures are taken to prepare the bare, moist seedbeds required for initial establishment. 
Where seedlings become established in great numbers, they thin out naturally by age five because the weaker seedlings of this shade-intolerant species are suppressed. Vegetative reproduction Due to its high levels of rooting hormones, P. trichocarpa sprouts readily. After logging operations, it sometimes regenerates naturally from rooting of partially buried fragments of branches or from stumps. Sprouting from roots also occurs. The species also has the ability to abscise shoots complete with green leaves. These shoots drop to the ground and may root where they fall or may be dispersed by water transport. In some situations, abscission may be one means of colonizing exposed sandbars. Taxonomy "Trichocarpa" is Greek for "hairy fruits". These scientific names are now considered synonymous with P. trichocarpa: Distribution and habitat The native range of P. trichocarpa covers large sections of western North America. It extends from Southeast Alaska's Kodiak Island and Cook Inlet to latitude 62° 30° N., through British Columbia and the forested areas of Washington and Oregon, to the mountains in southern California and northern Baja California (31°N). It is also found inland, generally on the west side of the Rocky Mountains, in British Columbia, southwestern Alberta, western Montana, and north-to-central Idaho. Scattered small populations have been noted in southeastern Alberta, eastern Montana, western North Dakota, western Wyoming, Utah, and Nevada. Black cottonwood grows on alluvial sites, riparian habitats, and moist woods on mountain slopes, from sea level to elevations of . It often forms extensive stands on bottomlands of major streams and rivers at low elevations along the Pacific Coast, west of the Cascade Range. In eastern Washington and other dry areas, it is restricted to protected valleys and canyon bottoms, along streambanks, and edges of ponds and meadows. It grows on a variety of soils from moist silts, gravels, and sands to rich humus, loams, and occasionally clays. Black cottonwood is a pioneer species that grows best in full sunlight and commonly establishes on recently disturbed alluvium. Seeds are numerous and widely dispersed because of their cottony tufts, enabling the species to colonize even burn sites, if conditions for establishment are met. Seral communities dominated or codominated by cottonwood are maintained by periodic flooding or other types of soil disturbance. Black cottonwood has low drought tolerance; it is flood-tolerant but cannot tolerate brackish water or stagnant pools. P. trichocarpa has been one of the most successful introductions of trees to the otherwise almost treeless Faroe Islands. The species was imported from Alaska to Iceland in 1944 and has since become one of the most widespread trees in the country. Ecology Although the most populous cottonwood of the Pacific Northwest, it hybridizes with the region's three other species: balsam poplar, plains cottonwood, and narrowleaf cottonwood; all four have similar appearances and provide habitats for various animals. Cottonwoods are shade intolerant. Black cottonwood thrives by colonizing disturbed sites, but can be replaced by conifers. The wood is relatively weak and waterlogged, often splitting during freezes. It is susceptible to rot as well. Woodpeckers create cavities which various animals can use for nests. Larger birds nest in the large upper branches. Beavers use the trees as food and dam-building material. 
Cultivation It is grown as an ornamental tree, valued for its fast growth and scented foliage in spring, detectable from over 100 m distance. The roots are however invasive, and it can damage the foundations of buildings on shrinkable clay soils if planted nearby (Mitchel 1996). Branches can be added to potted plants to stimulate rooting. Uses Traditional The tree was and is significant for many Native American tribes of the Western United States. Some Native Americans consumed cottonwood inner bark and sap, feeding their horses the inner bark and foliage. The wood, roots and bark have been used for firewood, canoe making, rope, fish traps, baskets and structures. The gum-like sap was used as a glue or as waterproofing. The Quinault used it for post wood. The Cowlitz made the base (hearth board) of their fire-making tool, a bow drill, with its wood. The Squaxin cut young branches for building sweat lodges. Medicinal The tree had medicinal value as well. The Squaxin used the bark for sore throats and for the treatment of tuberculosis, as well as water and the bruised leaves as an antiseptic mixture. The Klallam used the buds for an eye treatment. For the Quinault, they extracted gum from the burls and applied it to cuts on the skin. Modern Commercial extracts are produced from the fragrant buds for use as a perfume in cosmetics. Lumber P. trichocarpa wood is light-weight and although not particularly strong, is strong for its weight. The wood material has short, fine cellulose fibres that are used in pulp for high-quality book and magazine paper. The wood is also excellent for production of plywood. Living trees are used as windbreaks. This species grows very quickly; trees in plantations in Great Britain have reached tall in 11 years, and tall in 28 years. It can reach suitable size for pulp production in 10–15 years and about 25 years for timber production. As a model species Populus trichocarpa has several qualities that makes it a good model species for trees: Model genome size (although significantly larger than the other model plant, Arabidopsis thaliana) Rapid growth (for a tree) Reaches reproductive maturity 4–6 years Economically important It represents a phenotypically diverse genus For these reasons, the species has been extensively studied. Its genome sequence was published in 2006. More than 121,000 expressed sequence tags have been sequenced from it. The wide range of topics studied by using P. trichocarpa include the effects of ethylene, lignin biosynthesis, drought tolerance, and wood formation. Cultural significance The Chehalis believed that the tree was intelligent and had a form of special physical agency, moving on its own without the need of wind. Due to this belief, they refused to use it for firewood. Genome The sequence of P. trichocarpa is that of an individual female specimen "Nisqually-1", named after the Nisqually River in Washington, where the specimen was collected. The sequencing was performed at the Joint Genome Institute using the shotgun method. The depth of the sequencing was about 7.5 x (meaning that each base pair was sequenced on average 7.5 times). Genome annotation was done primarily by the Joint Genome Institute, the Oak Ridge National Laboratory, the Umeå Plant Science Centre, and Genome Canada. Prior to the publication of P. trichocarpa genome the only available plant genomes were those of thale cress and rice, both of which are herbaceous. P. trichocarpa is the first woody plant genome to be sequenced. 
Considering the economic importance of wood and wood products, the availability of a tree genome was necessary. The sequence also allows evolutionary comparisons and the elucidation of basic molecular differences between herbaceous and woody plants. Characteristics Size: 485 million base pairs (human genome: 3 billion base pairs) Proportion of heterochromatin to euchromatin: 3:7 Number of chromosomes: 19 Number of putative genes: 45,555, the largest number of genes ever recorded (estimate in September 2008) Mitochondrial genome: 803,000 base pairs, 52 genes Chloroplast genome: 157,000 base pairs, 101 genes Somatic mosaicism Genome-wide analysis of 11 clumps of P. trichocarpa trees reveals significant genetic differences between the roots and the leaves and branches of the same tree. The variation within a specimen is as much as found between unrelated trees. These results may be important in resolving debate in evolutionary biology regarding somatic mutation (that evolution can occur within individuals, not solely among populations), with a variety of implications. References Further reading Populus genome at the JGI website Popgenie: The Populus Genome Integrative Explorer Plants for a Future: Populus trichocarpa Forbes, R. D. (2006). Morrisey Old Growth Cottonwood Forest (pdf file) Mitchell, A. F. (1996). Alan Mitchell's Trees of Britain. Collins. Davis, T. Neil. (1981). Cottonwood and Balsam Poplar Garden plants of North America Trees of Alberta Flora of the Sierra Nevada (United States) Plant models Ornamental trees Plants used in traditional Native American medicine trichocarpa Trees of Northern America Flora without expected TNC conservation status
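As a small worked illustration of the sequencing depth quoted above (only the ~485 Mb genome size and ~7.5x depth are taken from the text; the script itself is ours):

# Sketch: relating average sequencing depth (coverage) to total raw sequence.
genome_size_bp = 485_000_000   # P. trichocarpa assembly, ~485 million base pairs
average_depth = 7.5            # each base sequenced ~7.5 times on average

total_raw_bp = genome_size_bp * average_depth
print(f"~{total_raw_bp / 1e9:.1f} billion base pairs of raw shotgun sequence")  # ~3.6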
Populus trichocarpa
[ "Biology" ]
2,676
[ "Model organisms", "Plant models" ]
7,189,385
https://en.wikipedia.org/wiki/Tacit%20collusion
Tacit collusion is collusion between competitors who do not explicitly exchange information but achieve an agreement about coordination of conduct. There are two types of tacit collusion: concerted action and conscious parallelism. In concerted action, also known as concerted activity, competitors exchange some information without reaching any explicit agreement, while conscious parallelism implies no communication. In both types of tacit collusion, competitors agree to play a certain strategy without explicitly saying so. It is also called oligopolistic price coordination or tacit parallelism. A dataset of gasoline prices of BP, Caltex, Woolworths, Coles, and Gull from Perth gathered in the years 2001 to 2015 was used to show by statistical analysis the tacit collusion between these retailers. BP emerged as a price leader and influenced the behavior of the competitors. As a result, the timing of price jumps became coordinated and the margins started to grow in 2010. Conscious parallelism In competition law, some sources use conscious parallelism as a synonym for tacit collusion in order to describe pricing strategies among competitors in an oligopoly that occur without an actual agreement, or at least without any evidence of an actual agreement, between the players. In practice, one competitor will take the lead in raising or lowering prices. The others will then follow suit, raising or lowering their prices by the same amount, with the understanding that greater profits result. This practice can be harmful to consumers who, if the market power of the firm is used, can be forced to pay monopoly prices for goods that should be selling for only a little more than the cost of production. Nevertheless, it is very hard to prosecute because it may occur without any collusion between the competitors. Courts have held that no violation of the antitrust laws occurs where firms independently raise or lower prices, but that a violation can be shown when plus factors occur, such as firms being motivated to collude and taking actions against their own economic self-interests. This judicial approach is sometimes described as setting out a conspiracy theory. Price leadership Oligopolists usually try not to engage in price cutting, excessive advertising or other forms of competition. Thus, there may be unwritten rules of collusive behavior such as price leadership. Price leadership is a form of tacit collusion, whereby firms orient themselves to the price set by a leader. A price leader will then emerge and set the general industry price, with other firms following suit. For example, see the case of British Salt Limited and New Cheshire Salt Works Limited. Classical economic theory holds that Pareto efficiency is attained at a price equal to the incremental cost of producing additional units. Monopolies are able to extract optimum revenue by offering fewer units at a higher price. An oligopoly where each firm acts independently tends toward equilibrium at the ideal, but such covert cooperation as price leadership tends toward higher profitability for all, though it is an unstable arrangement. There exist two types of price leadership. In dominant firm price leadership, the price leader is the biggest firm. In barometric firm price leadership, the most reliable firm emerges as the best barometer of market conditions, or the firm could be the one with the lowest costs of production, leading other firms to follow suit.
Although this firm might not dominate the industry, its prices are believed to reflect market conditions most satisfactorily, as the firm would most likely be a good forecaster of economic changes. Auctions In repeated auctions, bidders might engage in tacit collusion to keep bids low. Profitable collusion is possible if the number of bidders is finite and the identity of the winner is publicly observable. It can be very difficult or even impossible for the seller to detect such collusion from the distribution of bids alone. In the case of spectrum auctions, some sources claim that tacit collusion is easily upset: "It requires that all the bidders reach an implicit agreement about who should get what. With thirty diverse bidders unable to communicate about strategy except through their bids, forming such unanimous agreement is difficult at best." Nevertheless, the Federal Communications Commission (FCC) has experimented with precautions for spectrum auctions such as restricting the visibility of bids, limiting the number of bids, and anonymous bidding. So-called click-box bidding, used by governmental agencies in spectrum auctions, restricts the number of valid bids and offers them as a list for a bidder to choose from. Click-box bidding was introduced in 1997 by the FCC to prevent bidders from signalling bidding information by embedding it into the digits of their bids. Economic theory predicts that such precautions make tacit collusion more difficult. In general, transparency in auctions always increases the risk of tacit collusion. Algorithms Once competitors are able to use algorithms to determine prices, tacit collusion between them poses a much greater danger. E-commerce is one of the main arenas for algorithmic tacit collusion, since complex pricing algorithms are essential for the development of e-commerce. European Commissioner Margrethe Vestager mentioned an early example of algorithmic tacit collusion in her speech on "Algorithms and Collusion" on March 16, 2017, described as follows: "A few years ago, two companies were selling a textbook called The Making of a Fly. One of those sellers used an algorithm which essentially matched its rival's price. That rival had an algorithm which always set a price 27% higher than the first. The result was that prices kept spiralling upwards, until finally someone noticed what was going on, and adjusted the price manually. By that time, the book was selling – or rather, not selling – for 23 million dollars a copy." An OECD Competition Committee Roundtable "Algorithms and Collusion" took place in June 2017 in order to address the risk of possible anti-competitive behaviour by algorithms. It is important to distinguish between simple algorithms intentionally programmed to raise prices in line with competitors and more sophisticated self-learning AI algorithms with more general goals. Self-learning AI algorithms might engage in tacit collusion without the knowledge of their human programmers, as a result of being tasked with determining optimal prices in any market situation. Duopoly example Tacit collusion is best understood in the context of a duopoly and the concept of game theory (namely, Nash equilibrium). Take, for example, two firms A and B, both of which play an advertising game over an indefinite number of periods (effectively, 'infinitely many'). Both of the firms' payoffs are contingent upon their own action, but more importantly on the action of their competitor.
They can choose to stay at the current level of advertising or choose a more aggressive advertising strategy. If either firm chooses low advertising while the other chooses high, then the low-advertising firm will suffer a great loss in market share while the other experiences a boost. If they both choose high advertising, then neither firm's market share will increase but their advertising costs will increase, thus lowering their profits. If they both choose to stay at the normal level of advertising, then sales will remain constant without the added advertising expense. Thus, both firms will experience a greater payoff if they both choose normal advertising (this set of actions is unstable, as both are tempted to defect to higher advertising to increase payoffs). A payoff matrix is presented with numbers given: Notice that the Nash equilibrium is for both firms to choose an aggressive advertising strategy. This is to protect themselves against lost sales. This game is an example of a prisoner's dilemma. In general, if the payoffs for colluding (normal, normal) are greater than the payoffs for cheating (aggressive, aggressive), then the two firms will want to collude (tacitly). Although this collusive arrangement is not an equilibrium in the one-shot game above, repeating the game allows the firms to sustain collusion over long time periods. This can be achieved, for example, if each firm's strategy is to undertake normal advertising so long as its rival does likewise, and to pursue aggressive advertising forever as soon as its rival has used an aggressive advertising campaign at least once (see: grim trigger) (this threat is credible since symmetric use of aggressive advertising is a Nash equilibrium of each stage of the game). Each firm must then weigh the short-term gain of $30 from 'cheating' against the long-term loss of $35 in all future periods that comes as part of its punishment. Provided that firms care enough about the future, collusion is an equilibrium of this repeated game. To be more precise, suppose that firms have a discount factor δ (with 0 < δ < 1). The discounted value of the cost of cheating and being punished indefinitely is 35δ/(1 − δ). The firms therefore prefer not to cheat (so that collusion is an equilibrium) if 30 ≤ 35δ/(1 − δ), that is, if δ ≥ 6/13. See also References Cartels Pricing Anti-competitive practices Competition law Bidding strategy Game theory Cheating in business Pricing controversies
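A minimal sketch of the grim-trigger calculation above, using only the $30 one-off gain from cheating and the $35 per-period punishment loss stated in the text (the function and variable names are ours):

# Sketch: when is tacit collusion sustainable under a grim-trigger strategy?
# Cheating gains 30 today but costs 35 in every future period once punishment starts.

def collusion_sustainable(delta, gain=30.0, per_period_loss=35.0):
    # sustainable if the one-off gain is no larger than the discounted future losses
    return gain <= per_period_loss * delta / (1.0 - delta)

critical_delta = 30.0 / (30.0 + 35.0)   # solves 30 = 35*d/(1-d), i.e. d = 6/13
print(f"collusion sustainable for discount factors >= {critical_delta:.3f}")
print(collusion_sustainable(0.5), collusion_sustainable(0.4))  # True, False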
Tacit collusion
[ "Mathematics" ]
1,817
[ "Game theory" ]
7,189,398
https://en.wikipedia.org/wiki/Chipcom
Chipcom Corporation was an early pioneering company in the Ethernet hub industry. Its products allowed local area networks to be aggregated in a single place instead of being distributed across the length of a single coaxial cable. It competed with now-defunct companies such as Cabletron Systems, SynOptics, Ungermann-Bass, David Systems, Digital Equipment Corporation, and American Photonics, all of which were early entrants in the "LAN Hub" industry. Chipcom was also involved in Token Ring, FDDI, and Asynchronous Transfer Mode (ATM). Some of Chipcom's innovations at the time are well documented in the trade press of the era, such as Computerworld. In 1995, Chipcom was acquired by 3Com for $700 million in stock, although Cabletron was also interested in buying the company. 3Com was itself acquired by Hewlett-Packard in 2011. The firm's CEO at the time of the acquisition was John Robert Held. References Ethernet Networking companies of the United States Defunct networking companies Computer companies established in 1983 Computer companies disestablished in 1995 Defunct companies based in Massachusetts Defunct computer companies of the United States Defunct computer hardware companies
Chipcom
[ "Technology" ]
241
[ "Computing stubs", "Computer network stubs" ]
7,189,698
https://en.wikipedia.org/wiki/Arp%20240
Arp 240 is a pair of interacting spiral galaxies located in the constellation Virgo. The two galaxies are listed together as Arp 240 in the Atlas of Peculiar Galaxies. The galaxy on the right is known as NGC 5257, while the galaxy on the left is known as NGC 5258. Both galaxies are distorted by the gravitational interaction, and both are connected by a tidal bridge, as can be seen in images of these galaxies. One supernova has been observed in NGC 5258: SN 2020dko (type Ia, mag. 19). References External links Unbarred spiral galaxies Intermediate spiral galaxies Peculiar galaxies Interacting galaxies Luminous infrared galaxies Virgo (constellation) NGC objects 08641 48330 240
Arp 240
[ "Astronomy" ]
146
[ "Virgo (constellation)", "Constellations" ]
7,189,886
https://en.wikipedia.org/wiki/Optimal%20stopping
In mathematics, the theory of optimal stopping or early stopping is concerned with the problem of choosing a time to take a particular action, in order to maximise an expected reward or minimise an expected cost. Optimal stopping problems can be found in areas of statistics, economics, and mathematical finance (related to the pricing of American options). A key example of an optimal stopping problem is the secretary problem. Optimal stopping problems can often be written in the form of a Bellman equation, and are therefore often solved using dynamic programming. Definition Discrete time case Stopping rule problems are associated with two objects: A sequence of random variables , whose joint distribution is something assumed to be known A sequence of 'reward' functions which depend on the observed values of the random variables in 1: Given those objects, the problem is as follows: You are observing the sequence of random variables, and at each step , you can choose to either stop observing or continue If you stop observing at step , you will receive reward You want to choose a stopping rule to maximize your expected reward (or equivalently, minimize your expected loss) Continuous time case Consider a gain process defined on a filtered probability space and assume that is adapted to the filtration. The optimal stopping problem is to find the stopping time which maximizes the expected gain where is called the value function. Here can take value . A more specific formulation is as follows. We consider an adapted strong Markov process defined on a filtered probability space where denotes the probability measure where the stochastic process starts at . Given continuous functions , and , the optimal stopping problem is This is sometimes called the MLS (which stand for Mayer, Lagrange, and supremum, respectively) formulation. Solution methods There are generally two approaches to solving optimal stopping problems. When the underlying process (or the gain process) is described by its unconditional finite-dimensional distributions, the appropriate solution technique is the martingale approach, so called because it uses martingale theory, the most important concept being the Snell envelope. In the discrete time case, if the planning horizon is finite, the problem can also be easily solved by dynamic programming. When the underlying process is determined by a family of (conditional) transition functions leading to a Markov family of transition probabilities, powerful analytical tools provided by the theory of Markov processes can often be utilized and this approach is referred to as the Markov method. The solution is usually obtained by solving the associated free-boundary problems (Stefan problems). A jump diffusion result Let be a Lévy diffusion in given by the SDE where is an -dimensional Brownian motion, is an -dimensional compensated Poisson random measure, , , and are given functions such that a unique solution exists. Let be an open set (the solvency region) and be the bankruptcy time. The optimal stopping problem is: It turns out that under some regularity conditions, the following verification theorem holds: If a function satisfies where the continuation region is , on , and on , where is the infinitesimal generator of then for all . Moreover, if on Then for all and is an optimal stopping time. These conditions can also be written is a more compact form (the integro-variational inequality): on Examples Coin tossing (Example where converges) You have a fair coin and are repeatedly tossing it. 
Each time, before it is tossed, you can choose to stop tossing it and get paid (in dollars, say) the average number of heads observed. You wish to maximise the amount you get paid by choosing a stopping rule. If Xi (for i ≥ 1) forms a sequence of independent, identically distributed random variables with Bernoulli distribution and if then the sequences , and are the objects associated with this problem. House selling (Example where does not necessarily converge) You have a house and wish to sell it. Each day you are offered for your house, and pay to continue advertising it. If you sell your house on day , you will earn , where . You wish to maximise the amount you earn by choosing a stopping rule. In this example, the sequence () is the sequence of offers for your house, and the sequence of reward functions is how much you will earn. Secretary problem (Example where is a finite sequence) You are observing a sequence of objects which can be ranked from best to worst. You wish to choose a stopping rule which maximises your chance of picking the best object. Here, if (n is some large number) are the ranks of the objects, and is the chance you pick the best object if you stop intentionally rejecting objects at step i, then and are the sequences associated with this problem. This problem was solved in the early 1960s by several people. An elegant solution to the secretary problem and several modifications of this problem is provided by the more recent odds algorithm of optimal stopping (Bruss algorithm). Search theory Economists have studied a number of optimal stopping problems similar to the 'secretary problem', and typically call this type of analysis 'search theory'. Search theory has especially focused on a worker's search for a high-wage job, or a consumer's search for a low-priced good. Parking problem A special example of an application of search theory is the task of optimal selection of parking space by a driver going to the opera (theater, shopping, etc.). Approaching the destination, the driver goes down the street along which there are parking spaces – usually, only some places in the parking lot are free. The goal is clearly visible, so the distance from the target is easily assessed. The driver's task is to choose a free parking space as close to the destination as possible without turning around so that the distance from this place to the destination is the shortest. Option trading In the trading of options on financial markets, the holder of an American option is allowed to exercise the right to buy (or sell) the underlying asset at a predetermined price at any time before or at the expiry date. Therefore, the valuation of American options is essentially an optimal stopping problem. Consider a classical Black–Scholes set-up and let be the risk-free interest rate and and be the dividend rate and volatility of the stock. The stock price follows geometric Brownian motion under the risk-neutral measure. When the option is perpetual, the optimal stopping problem is where the payoff function is for a call option and for a put option. The variational inequality is for all where is the exercise boundary. The solution is known to be (Perpetual call) where and (Perpetual put) where and On the other hand, when the expiry date is finite, the problem is associated with a 2-dimensional free-boundary problem with no known closed-form solution. Various numerical methods can, however, be used. 
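For instance, one such numerical method is backward induction on a binomial (Cox–Ross–Rubinstein) tree, where the early-exercise decision at each node is exactly an optimal stopping choice. A minimal sketch; all parameter values below are illustrative and not taken from the article:

import math

def american_put_binomial(spot, strike, rate, vol, maturity, steps, div_yield=0.0):
    # Cox-Ross-Rubinstein tree; at each node take the larger of the
    # continuation value and the immediate exercise payoff (optimal stopping).
    dt = maturity / steps
    up = math.exp(vol * math.sqrt(dt))
    down = 1.0 / up
    p_up = (math.exp((rate - div_yield) * dt) - down) / (up - down)
    discount = math.exp(-rate * dt)

    # payoffs at expiry, indexed by the number of up-moves j
    values = [max(strike - spot * up**j * down**(steps - j), 0.0) for j in range(steps + 1)]

    for i in range(steps - 1, -1, -1):
        for j in range(i + 1):
            continuation = discount * (p_up * values[j + 1] + (1.0 - p_up) * values[j])
            exercise = max(strike - spot * up**j * down**(i - j), 0.0)
            values[j] = max(continuation, exercise)
    return values[0]

# Illustrative inputs: at-the-money put, 5% rate, 20% volatility, 1 year, 500 steps.
print(round(american_put_binomial(100.0, 100.0, 0.05, 0.20, 1.0, 500), 4))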
See Black–Scholes model#American options for various valuation methods here, as well as Fugit for a discrete, tree based, calculation of the optimal time to exercise. See also Halting problem Markov decision process Optional stopping theorem Prophet inequality Stochastic control Sequential analysis References Citations Sources Thomas S. Ferguson, "Who solved the secretary problem?" Statistical Science, Vol. 4.,282–296, (1989) F. Thomas Bruss. "Sum the odds to one and stop." Annals of Probability, Vol. 28, 1384–1391,(2000) F. Thomas Bruss. "The art of a right decision: Why decision makers want to know the odds-algorithm." Newsletter of the European Mathematical Society, Issue 62, 14–20, (2006) Mathematical finance Sequential methods Dynamic programming
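Returning to the secretary problem discussed above, a small simulation sketch of the classical cutoff rule (reject roughly the first n/e candidates, then take the first candidate better than all of them); the estimated success probability should come out close to 1/e ≈ 0.37. All names and parameters below are ours:

import math, random

def secretary_success_rate(n=100, trials=20000, seed=1):
    # Estimate the probability that the 1/e cutoff rule picks the single best candidate.
    rng = random.Random(seed)
    cutoff = int(n / math.e)
    wins = 0
    for _ in range(trials):
        ranks = list(range(n))      # n - 1 is the best candidate
        rng.shuffle(ranks)
        best_of_sample = max(ranks[:cutoff])
        chosen = ranks[-1]          # forced to take the last one if nothing beats the sample
        for r in ranks[cutoff:]:
            if r > best_of_sample:
                chosen = r
                break
        wins += (chosen == n - 1)
    return wins / trials

print(round(secretary_success_rate(), 3))  # roughly 0.37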
Optimal stopping
[ "Mathematics" ]
1,556
[ "Applied mathematics", "Mathematical finance" ]
7,190,681
https://en.wikipedia.org/wiki/PPS.tv
PPS.tv (PPStream) is a Chinese peer-to-peer streaming video application. Since the target users are on the Chinese mainland, there is no official English version, and the vast majority of channels are from East Asia, mostly Mainland China, Japan, South Korea, Hong Kong, Taiwan, Malaysia, and Singapore. Programmes range from Chinese movies and Japanese anime to sports channels, as well as popular American TV shows and films. It had an 8.9% market share in China in Q3 2010, placing it third, behind Youku and Tudou. In May 2013, the online video business of PPS.tv was purchased by Baidu for $370 million. After the acquisition, PPS.tv continued to operate as a sub-brand under iQIYI, Baidu's online video platform. Applications The nature of peer-to-peer serving means that each user of the system is also a server. The upload speed of standard home broadband connections is usually a fraction of the download speed, so several upload sources may be required by each additional peer. Additionally, on services with high contention ratios or poorly configured switches, large numbers of people attempting to use the service may slow all internet usage to unusable speeds. Acting as an upload server to the limit of one's upload bandwidth also increases the round-trip time for webpage requests, making web browsing while using PPS.tv difficult. See also PPLive References External links Official site IQIYI Chinese entertainment websites File sharing networks Streaming television Peercasting Software that uses Qt Peer-to-peer software 2006 establishments in China
PPS.tv
[ "Technology" ]
335
[ "Multimedia", "Streaming television" ]
7,190,735
https://en.wikipedia.org/wiki/Verdier%20duality
In mathematics, Verdier duality is a cohomological duality in algebraic topology that generalizes Poincaré duality for manifolds. Verdier duality was introduced in 1965 by as an analog for locally compact topological spaces of Alexander Grothendieck's theory of Poincaré duality in étale cohomology for schemes in algebraic geometry. It is thus (together with the said étale theory and for example Grothendieck's coherent duality) one instance of Grothendieck's six operations formalism. Verdier duality generalises the classical Poincaré duality of manifolds in two directions: it applies to continuous maps from one space to another (reducing to the classical case for the unique map from a manifold to a one-point space), and it applies to spaces that fail to be manifolds due to the presence of singularities. It is commonly encountered when studying constructible or perverse sheaves. Verdier duality Verdier duality states that (subject to suitable finiteness conditions discussed below) certain derived image functors for sheaves are actually adjoint functors. There are two versions. Global Verdier duality states that for a continuous map of locally compact Hausdorff spaces, the derived functor of the direct image with compact (or proper) supports has a right adjoint in the derived category of sheaves, in other words, for (complexes of) sheaves (of abelian groups) on and on we have Local Verdier duality states that in the derived category of sheaves on Y. It is important to note that the distinction between the global and local versions is that the former relates morphisms between complexes of sheaves in the derived categories, whereas the latter relates internal Hom-complexes and so can be evaluated locally. Taking global sections of both sides in the local statement gives the global Verdier duality. These results hold subject to the compactly supported direct image functor having finite cohomological dimension. This is the case if there is a bound such that the compactly supported cohomology vanishes for all fibres (where ) and . This holds if all the fibres are at most -dimensional manifolds or more generally at most -dimensional CW-complexes. The discussion above is about derived categories of sheaves of abelian groups. It is instead possible to consider a ring and (derived categories of) sheaves of -modules; the case above corresponds to . The dualizing complex on is defined to be where p is the map from to a point. Part of what makes Verdier duality interesting in the singular setting is that when is not a manifold (a graph or singular algebraic variety for example) then the dualizing complex is not quasi-isomorphic to a sheaf concentrated in a single degree. From this perspective the derived category is necessary in the study of singular spaces. If is a finite-dimensional locally compact space, and the bounded derived category of sheaves of abelian groups over , then the Verdier dual is a contravariant functor defined by It has the following properties: Relation to classical Poincaré duality Poincaré duality can be derived as a special case of Verdier duality. Here one explicitly calculates cohomology of a space using the machinery of sheaf cohomology. Suppose X is a compact orientable n-dimensional manifold, k is a field and is the constant sheaf on X with coefficients in k. Let be the constant map to a point. Global Verdier duality then states To understand how Poincaré duality is obtained from this statement, it is perhaps easiest to understand both sides piece by piece. 
Let be an injective resolution of the constant sheaf. Then by standard facts on right derived functors is a complex whose cohomology is the compactly supported cohomology of X. Since morphisms between complexes of sheaves (or vector spaces) themselves form a complex we find that where the last non-zero term is in degree 0 and the ones to the left are in negative degree. Morphisms in the derived category are obtained from the homotopy category of chain complexes of sheaves by taking the zeroth cohomology of the complex, i.e. For the other side of the Verdier duality statement above, we have to take for granted the fact that when X is a compact orientable n-dimensional manifold which is the dualizing complex for a manifold. Now we can re-express the right hand side as We finally have obtained the statement that By repeating this argument with the sheaf kX replaced with the same sheaf placed in degree i we get the classical Poincaré duality See also Poincaré duality Six operations Coherent duality Derived category References , Exposés I and II contain the corresponding theory in the étale situation Topology Homological algebra Sheaf theory Duality theories
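For reference, the adjunction formulas whose symbols were lost in the plain-text extraction above can be written out as follows; this is a reconstruction in standard notation, not a verbatim quotation of the article:

% Global Verdier duality: Rf_! is left adjoint to the exceptional inverse image f^!
\operatorname{Hom}_{D(Y)}\bigl(Rf_{!}\mathcal{F},\,\mathcal{G}\bigr)
  \;\cong\; \operatorname{Hom}_{D(X)}\bigl(\mathcal{F},\,f^{!}\mathcal{G}\bigr)

% Local Verdier duality, an isomorphism in the derived category of sheaves on Y
Rf_{*}\,R\mathcal{H}\!om\bigl(\mathcal{F},\,f^{!}\mathcal{G}\bigr)
  \;\cong\; R\mathcal{H}\!om\bigl(Rf_{!}\mathcal{F},\,\mathcal{G}\bigr)

% Dualizing complex and Verdier dual, where p maps X to a point and k is the coefficient ring
\omega_{X} = p^{!}\,k, \qquad D_{X}(\mathcal{F}) = R\mathcal{H}\!om\bigl(\mathcal{F},\,\omega_{X}\bigr)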
Verdier duality
[ "Physics", "Mathematics" ]
1,007
[ "Mathematical structures", "Sheaf theory", "Topology", "Space", "Duality theories", "Geometry", "Category theory", "Fields of abstract algebra", "Spacetime", "Homological algebra" ]
7,190,885
https://en.wikipedia.org/wiki/Four-dimensionalism
In philosophy, four-dimensionalism (also known as the doctrine of temporal parts) is the ontological position that an object's persistence through time is like its extension through space. Thus, an object that exists in time has temporal parts in the various subregions of the total region of time it occupies, just like an object that exists in a region of space has at least one part in every subregion of that space. Four-dimensionalists typically argue for treating time as analogous to space, usually leading them to endorse the doctrine of eternalism. This is a philosophical approach to the ontological nature of time, according to which all points in time are equally "real", as opposed to the presentist idea that only the present is real. As some eternalists argue by analogy, just as all spatially distant objects and events are as real as those close to us, temporally distant objects and events are as real as those currently present to us. Perdurantism—or perdurance theory—is a closely related philosophical theory of persistence and identity, according to which an individual has distinct temporal parts throughout its existence, and the persisting object is the sum or set of all of its temporal parts. This sum or set is colloquially referred to as a "space-time worm", which has earned the perdurantist view the moniker of "the worm view". While all perdurantists are plausibly considered four dimensionalists, at least one variety of four dimensionalism does not count as perdurantist in nature. This variety, known as exdurantism or the "stage view", is closely akin to the perdurantist position. They also countenance a view of persisting objects that have temporal parts that succeed one another through time. However, instead of identifying the persisting object as the entire set or sum of its temporal parts, the exdurantist argues that any object under discussion is a single stage (time-slice, temporal part, etc.), and that the other stages or parts that comprise the persisting object are related to that part by a "temporal counterpart" relation. Though they have often been conflated, eternalism is a theory of what time is like and what times exist, while perdurantism is a theory about persisting objects and their identity conditions over time. Eternalism and perdurantism tend to be discussed together because many philosophers argue for a combination of eternalism and perdurantism. Sider (1997) uses the term four-dimensionalism to refer to perdurantism, but Michael Rea uses the term "four-dimensionalism" to mean the view that presentism is false as opposed to "perdurantism", the view that endurantism is false and persisting objects have temporal parts. Four-dimensionalism about material objects Four-dimensionalism is a name for different positions. One of these uses four-dimensionalism as a position of material objects with respect to dimensions. Four-dimensionalism is the view that in addition to spatial parts, objects have temporal parts. According to this view, four-dimensionalism cannot be used as a synonym for perdurantism. Perdurantists have to hold a four-dimensional view of material objects: it is impossible that perdurantists, who believe that objects persist by having different temporal parts at different times, do not believe in temporal parts. However, the reverse is not true. Four-dimensionalism is compatible with either perdurantism or exdurantism. A-series and B-series J.M.E. 
McTaggart in The Unreality of Time identified two descriptions of time, which he called the A-series and the B-series. The A-series identifies positions in time as past, present, or future, and thus assumes that the "present" has some objective reality, as in both presentism and the growing block universe. The B-series defines a given event as earlier or later than another event, but does not assume an objective present, as in four-dimensionalism. Much of the contemporary literature in the metaphysics of time has been taken to spring forth from this distinction, and thus takes McTaggart's work as its starting point. Contrast with three-dimensionalism Unlike the four dimensionalist, the three dimensionalist considers time to be a unique dimension that is not analogous to the three spatial dimensions: length, width and height. Whereas the four dimensionalist proposes that objects are extended across time, the three dimensionalist adheres to the belief that all objects are wholly present at any moment at which they exist. While the three dimensionalist agrees that the parts of an object can be differentiated based on their spatial dimensions, they do not believe an object can be differentiated into temporal parts across time. For example, in the three dimensionalist account, "Descartes in 1635" is the same object as "Descartes in 1620", and both are identical to Descartes, himself. However, the four dimensionalist considers these to be distinct temporal parts. Prominent arguments in favor of four-dimensionalism Several lines of argumentation have been advanced in favor of four-dimensionalism: Firstly, four-dimensional accounts of time are argued to better explain paradoxes of change over time (often referred to as the paradox of the Ship of Theseus) than three-dimensional theories. A contemporary account of this paradox is introduced in Ney (2014), but the original problem has its roots in Greek antiquity. A typical Ship of Theseus paradox involves taking some changeable object with multiple material parts, for example a ship, then sequentially removing and replacing its parts until none of the original components are left. At each stage of the replacement, the ship is presumably identical with the original, since the replacement of a single part need not destroy the ship and create an entirely new one. But, it is also plausible that an object with none of the same material parts as another is not identical with the original object. So, how can an object survive the replacement of any of its parts, and in fact all of its parts? The four-dimensionalist can argue that the persisting object is a single space-time worm which has all the replacement stages as temporal parts, or in the case of the stage view that each succeeding stage bears a temporal counterpart relation to the original stage under discussion. Secondly, problems of temporary intrinsics are argued to be best explained by four-dimensional views of time that involve temporal parts. As presented by David Lewis, the problem of temporary intrinsics involves properties of an object that are both had by that object regardless of how anything else in the world is (and thus intrinsic), and subject to change over time (thus temporary). Shape is argued to be one such property. So, if an object is capable of having a particular shape, and also changing its shape at another time, there must be some way for the same object to be, say, both round and square. 
Lewis argues that separate temporal parts having the incompatible properties best explains an object being able to change its shape in this way, because other accounts of three-dimensional time eliminate intrinsic properties by indexing them to times and making them relational instead of intrinsic. See also Extended modal realism Four-dimensional space Multiple occupancy view Rietdijk–Putnam argument advocating this position Spacetime World line Light cone References Sources Armstrong, David M. (1980) "Identity Through Time", pages 67,8 in Peter van Inwagen (editor), Time and Cause, D. Reidel. Hughes, C. (1986) "Is a Thing Just the Sum of Its Parts?", Proceedings of the Aristotelian Society 85: 213-33. Heller, Mark (1984). "Temporal Parts of Four Dimensional Objects", Philosophical Studies 46: 323-34. Reprinted in Rea 1997: 12.-330. Heller, Mark (1990) The Ontology of Physical Objects: Four-dimensional Hunks of Matter, Cambridge University Press. Heller, Mark (1992) "Things Change", Philosophy and Phenomenological Research 52: 695-304 Heller, Mark (1993) "Varieties of Four Dimensionalism", Australasian Journal of Philosophy 71: 47-59. Lewis, David (1983). "Survival and Identity", in Philosophical Papers, Volume 1, 55-7. Oxford University Press. With postscripts. Originally published in Amelie O. Rorty, editor (1976) The Identities of Persons University of California Press, pages 17-40. Lewis, David (1986a). On the Plurality of Worlds. Oxford: Basil Blackwell. Lewis, David (1986b). Philosophical Papers, Volume 2. Oxford: Oxford University Press. McTaggart John Ellis (1908) The Unreality of time, originally published in Mind: A Quarterly Review of Psychology and Philosophy 17: 456-473. (1976) "Survival and identity", pages 17-40 in editor, The identities of persons. Berkeley: University of California Press. Google books (2004) "A defense of presentism", pages 47-82 in editor, Oxford Studies in Metaphysics, Volume 1, Oxford University Press. Google books (2005) Review of Four-dimensionalism: an ontology of persistence and time by Theodore Sider, Ars Disputandi 5 (1985) "Can amoebae divide without multiplying?", Australasian Journal of Philosophy 63(3): 299–319. External links Rea, M. C., "Four Dimensionalism" in The Oxford Handbook for Metaphysics. Oxford Univ. Press. Describes presentism and four-dimensionalism. "Time" in the Internet Encyclopedia of Philosophy'' Theories of time Philosophy of physics Spacetime
Four-dimensionalism
[ "Physics", "Mathematics" ]
2,016
[ "Philosophy of physics", "Applied and interdisciplinary physics", "Vector spaces", "Space (mathematics)", "Theory of relativity", "Spacetime" ]
7,191,018
https://en.wikipedia.org/wiki/Intel%20Paragon
The Intel Paragon is a discontinued series of massively parallel supercomputers that was produced by Intel in the 1990s. The Paragon XP/S is a productized version of the experimental Touchstone Delta system that was built at Caltech, launched in 1992. The Paragon superseded Intel's earlier iPSC/860 system, to which it is closely related. The Paragon series is based on the Intel i860 RISC microprocessor. Up to 2048 (later, up to 4096) i860s are connected in a 2D grid. In 1993, an entry-level Paragon XP/E variant was announced with up to 32 compute nodes. The system architecture is a partitioned system, with the majority of the system comprising diskless compute nodes and a small number of I/O nodes and interactive service nodes. Since the bulk of the nodes have no permanent storage, it is possible to "Red/Black switch" the compute partition from classified to unclassified by disconnecting one set of I/O nodes with classified disks and then connecting an unclassified I/O partition. Intel intended the Paragon to run the OSF/1 AD distributed operating system on all processors. However, this was found to be inefficient in practice, and a lightweight kernel called SUNMOS was developed at Sandia National Laboratories to replace OSF/1 AD on the Paragon's compute processors. Oak Ridge National Laboratory operated a Paragon XP/S 150 MP, one of the largest Paragon systems, for several years. The prototype for the Intel Paragon was the Intel Delta, built by Intel with funding from DARPA and installed operationally at the California Institute of Technology in the late 1980s with funding from the National Science Foundation. The Delta was one of the few computers to sit significantly above the curve of Moore's Law. Compute nodes The compute node boards were produced in two variants: the GP16 with 16 MB of memory and two CPUs, and the MP16 with three CPUs. Each node has a B-NIC interface that connects to the mesh routers on the backplane. The compute nodes are diskless and perform all I/O over the mesh. During system software development, a light-pen was duct-taped to the status LED on one board and a timer interrupt was used to bit-bang a serial port. The B-NIC ASIC is the square chip with the circular heat-sink. I/O nodes The I/O boards have either SCSI drive interfaces or HiPPI network connections and are used to provide data to the compute nodes. They do not run any user applications. The MP64 I/O node has three i860 CPUs and an i960 CPU used in the disk controller. References External links Intel products Massively parallel computers Supercomputers Very long instruction word computing 32-bit computers Intel supercomputers
Intel Paragon
[ "Technology" ]
595
[ "Supercomputers", "Supercomputing" ]
7,191,551
https://en.wikipedia.org/wiki/Promegakaryocyte
A promegakaryocyte is a precursor cell for a megakaryocyte, the development of which proceeds as follows: CFU-Meg (hematopoietic stem cell/hemocytoblast) → megakaryoblast → promegakaryocyte → megakaryocyte Promegakaryocytes and other precursor cells to megakaryocytes arise from pluripotential hematopoietic progenitors, also known as hemocytoblasts. The megakaryoblast is then produced, followed by the promegakaryocyte, the granular megakaryocyte, and then the mature megakaryocyte. When it is in its promegakaryocyte stage, it is considered an undifferentiated cell. When the megakaryoblast matures into the promegakaryocyte, it undergoes endoreduplication and forms a promegakaryocyte which has multiple nuclei, azurophilic granules, and a basophilic cytoplasm. The promegakaryocyte has rotary motion, but no forward migration. Megakaryocyte pieces will eventually break off and begin circulating the body as platelets. Platelets are very important because of their role in blood clotting, immune response, and the formation of new blood vessels. References External links "Marrow aspirate, 10x. Promegakaryocyte" at ttuhsc.edu "Megakaryocytes: Promegakaryocyte" at bloodline.net Immune system Blood cells
Promegakaryocyte
[ "Biology" ]
336
[ "Immune system", "Organ systems" ]
7,191,630
https://en.wikipedia.org/wiki/Promonocyte
A promonocyte (or premonocyte) is a cell arising from a monoblast and developing into a monocyte. See also Pluripotential hemopoietic stem cell Additional images External links "Monocyte Development" at tulane.edu Slide at marist.edu - "Bone marrow smear" "Maturation Sequence" at hematologyatlas.com (Promonocyte is in seventh row.) Blood cells Immune system
Promonocyte
[ "Biology" ]
100
[ "Immune system", "Organ systems" ]
7,191,704
https://en.wikipedia.org/wiki/Cryo
Cryo- is from the Ancient Greek κρύος (krúos, “ice, icy cold, chill, frost”). Uses of the prefix Cryo- include: Physics and geology Cryogenics, the study of the production and behaviour of materials at very low temperatures and the study of producing extremely low temperatures Cryoelectronics, the study of superconductivity under cryogenic conditions and its applications Cryosphere, those portions of Earth's surface where water ice naturally occurs Cryotron, a switch that uses superconductivity Cryovolcano, a theoretical type of volcano that erupts volatiles instead of molten rock Biology and medicine Cryobiology, the branch of biology that studies the effects of low temperatures on living things Cryonics, the low-temperature preservation of people who cannot be sustained by contemporary medicine Cryoprecipitate, a blood-derived protein product used to treat some bleeding disorders Cryotherapy, medical treatment using cold Cryoablation, tissue removal using cold Cryosurgery, surgery using cold Cryo-electron microscopy (cryoEM), a technique that fires beams of electrons at proteins that have been frozen in solution, to deduce the biomolecules’ structure Other uses Cryo Interactive, a video game company Cryos, a planet in the video game Darkspore See also Kryo, a brand of CPUs by Qualcomm External links Cryogenics Cryobiology Cryonics Superconductivity
Cryo
[ "Physics", "Chemistry", "Materials_science", "Engineering", "Biology" ]
315
[ "Physical phenomena", "Phase transitions", "Applied and interdisciplinary physics", "Physical quantities", "Superconductivity", "Cryobiology", "Cryogenics", "Materials science", "Condensed matter physics", "Biochemistry", "Electrical resistance and conductance" ]
7,191,759
https://en.wikipedia.org/wiki/FutureCoal
FutureCoal is an international non-profit, non-governmental association based in London, United Kingdom. It was created to represent the global coal industry. The association was formerly called the World Coal Association (WCA) until 2023 and the World Coal Institute (WCI) until 2010. FutureCoal undertakes lobbying, organises workshops, and provides coal information to decision makers in international energy and environmental policy and research discussions, as well as supplying information to the general public and educational organisations on the benefits and issues surrounding the use of coal. It also promotes clean coal technologies. It has participated in a number of United Nations and International Energy Agency (IEA) workshops, boards, and forums, including the UN Commission on Sustainable Development, the UN Framework Convention on Climate Change, the IEA Working Party on Fossil Fuels, and the IEA Coal Industry Advisory Board. It is also part of the Carbon Sequestration Leadership Forum. It is co-author of a report on the future of coal in ASEAN nations. In 2019, the WCA appointed a new CEO, Michelle Manook who previously worked in mining services firm Orica. In November 2023, the WCA rebranded to "FutureCoal: The Global Alliance for Sustainable Coal". See also Carbon capture and storage Clean coal Coal lobby Coal mining Coal-mining region References External links World Coal Association World Coal Association members Coal in the United Kingdom Coal organizations International energy organizations International organisations based in London Organisations based in the City of Westminster Science and technology in London
FutureCoal
[ "Engineering" ]
307
[ "Coal organizations", "International energy organizations", "Energy organizations" ]
7,192,444
https://en.wikipedia.org/wiki/Hirzebruch%20signature%20theorem
In differential topology, an area of mathematics, the Hirzebruch signature theorem (sometimes called the Hirzebruch index theorem) is Friedrich Hirzebruch's 1954 result expressing the signature of a smooth closed oriented manifold by a linear combination of Pontryagin numbers called the L-genus. It was used in the proof of the Hirzebruch–Riemann–Roch theorem. Statement of the theorem The L-genus is the genus for the multiplicative sequence of polynomials associated to the characteristic power series Q(z) = √z / tanh(√z). The first two of the resulting L-polynomials are L_1 = p_1/3 and L_2 = (7p_2 − p_1^2)/45 (for further L-polynomials, see the references). By taking for the variables the Pontryagin classes p_i of the tangent bundle of a 4n-dimensional smooth closed oriented manifold M, one obtains the L-classes of M. Hirzebruch showed that the n-th L-class of M evaluated on the fundamental class of M, ⟨L_n(p_1, …, p_n), [M]⟩, is equal to σ(M), the signature of M (i.e. the signature of the intersection form on the 2n-th cohomology group of M): σ(M) = ⟨L_n(p_1, …, p_n), [M]⟩. Sketch of proof of the signature theorem René Thom had earlier proved that the signature was given by some linear combination of Pontryagin numbers, and Hirzebruch found the exact formula for this linear combination by introducing the notion of the genus of a multiplicative sequence. Since the rational oriented cobordism ring is equal to the polynomial algebra generated by the oriented cobordism classes of the even-dimensional complex projective spaces, it is enough to verify that σ(CP^{2i}) = ⟨L_i, [CP^{2i}]⟩ for all i (a worked check for CP^2 is sketched after the references below). Generalizations The signature theorem is a special case of the Atiyah–Singer index theorem for the signature operator. The analytic index of the signature operator equals the signature of the manifold, and its topological index is the L-genus of the manifold. By the Atiyah–Singer index theorem these are equal. References Sources F. Hirzebruch, The Signature Theorem. Reminiscences and recreation. Prospects in Mathematics, Annals of Mathematics Studies, Band 70, 1971, S. 3–31. Theorems in algebraic topology Theorems in differential topology
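As a concrete check of the cobordism argument above, the theorem can be verified by hand on the generator CP^2; this is a standard computation reproduced here for illustration:

% Signature side: H^2(CP^2; Z) = Z with intersection form (1), so
\sigma(\mathbb{CP}^2) = 1.

% L-genus side: c(T\mathbb{CP}^2) = (1 + x)^3 with x the hyperplane class, hence
p_1(\mathbb{CP}^2) = c_1^2 - 2c_2 = 9x^2 - 6x^2 = 3x^2,
\qquad
\langle L_1, [\mathbb{CP}^2]\rangle = \tfrac{1}{3}\,\langle p_1, [\mathbb{CP}^2]\rangle = \tfrac{1}{3}\cdot 3 = 1,

% in agreement with the signature theorem.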
Hirzebruch signature theorem
[ "Mathematics" ]
435
[ "Theorems in algebraic topology", "Theorems in differential topology", "Theorems in topology" ]
7,192,614
https://en.wikipedia.org/wiki/Cash%20carrier
Cash carriers were used in shops and department stores to carry customers' payments from the sales assistant to the cashier and to carry the change and receipt back again. The benefits of a "centralised" cash system were that it could be more closely supervised by management, there was less opportunity for pilfering (as change would be counted both by the cashier and by the sales assistant), and it freed up the assistant to attend to the customer and perhaps make further or better sales. Cash balls The earliest type was a two-piece hollow wooden ball which ran along sloping rails, carrying cash and sales docket or receipt. One set of rails sloped down from sales desk to cash office and another set sloped in the opposite direction. This was known as a cash railway. William Stickney Lamson of Lowell, Massachusetts patented this system in 1881. His invention soon attracted the interest of other shopkeepers, and in 1882 along with Meldon Stephen Giles, the Lamson Cash Carrier Company was incorporated in Boston. A working example can be seen in the Co-operative store at Beamish Museum in North East England, and one is still in its original location in the Up-To-Date Store, now a museum, at Coolamon, New South Wales. Wire carriers The next type was a carriage suspended on pulleys from a wire between sales desk, launched from a catapult. The best-known types were "Rapid Wire" and "Air-Line." Air-Line Company The Air-Line Company was based in the United States. It manufactured a Gipe designed system. A cord passed over multiple pulleys to propel the car. Lamsons took over Air-Line and cars usually have "Air-Line" on one side and "Lamson" on the other. Baldwin Baldwins were based in Chicago. Their cash carrier systems were usually known as "Baldwin Flyers". British Cash & Parcel Conveyors A British competitor to Lamson which eventually was subsumed. Dart Cash Dart Cash was a British company established by a grocer from Stoke on Trent, William Alfred Edwards. It was a simple gravity carrier patented in 1918. Later enhancements included a spring for propulsion. As well as wire systems, Dart also made pneumatic cash carriers. Gipe Gipe was an American company founded by Emanuel Clarence Gipe of Freeport, Illinois. Gipe installations were popular in England. The car had two sets of wheels: the upper set ran on one wire and the lower set below a second wire. It was propelled by pulling the wires apart at the sending station by a lever arrangement. Lamson The Lamson Company dominated the market. It was known at various times as the Lamson Cash Carrier Company, the Lamson Cash Railway Company, the Lamson Store Service Company, the Lamson Consolidated Store Service Company, the Lamson Company Inc. and in the UK the Lamson Engineering Company Ltd. Lamsons purchased the Rapid Service Store Railway Company of Detroit which licensed an invention by Robert McCarty of Detroit, Michigan and their system became known as Lamson Rapid Wire. They also made cable systems and pneumatic tube systems. Sturtevants Sturtevants of Boston, Massachusetts was an offshoot of an American company. They purchased part of Reid Brothers around the early 1920s and the pneumatic tube business of Cooke, Troughton and Simms. In 1949 the part that handled pneumatic tubes was acquired by Lamsons. Cable systems This system was developed by Joseph Martin of Vermont. In cable systems there was a continuously moving cable around the shop passing the counters and the cashier, driven by an electric motor. 
When a payment was to be sent, the sales assistant put it in a carrier and clipped it to the cable. The carrier was guided by light metal tracks. It was detached at the cashier's station, the transaction was dealt with, and the change and receipt were returned along the cable again. Twenty or more stations could easily be operated with a 1 horse-power motor. Lamsons offered two main types of system: the "Perfection" and the high-level "Preferred" where there was a "drop point" at the sales counter. The first shop to use the Lamson cable system was the Boston Store in Brockton (owned by James Edgar), which was founded in 1890. Although quite common in the United States, there were few installations in the United Kingdom. The best late survivor was at Joyners General Store in Moose Jaw, Saskatchewan, but the building burned down on New Year's Day, 2004. Pneumatic tube systems Several of the above companies also made pneumatic tube systems - see Lamson tube. They are still installed in a few shops. Modern pneumatic tube systems are also now used in supermarkets for moving cash in bulk from tills to the central cash office. An 1898 account of a pneumatic tube system installed in Kirkcaldie & Stains department store in Wellington, New Zealand, states:In the basement is a half horse-power Crossley Bros. gas engine, which works a rotary blower, and this in turn supplies the compressed air required for the whole of the system. Distributed throughout the premises are 19 "stations" situated behind the various counters, these stations consisting simply of a valve in the pneumatic tubes which are carried to all parts of the building. On the second floor, in the vicinity of the tea room, these pneumatic tubes are centralised, and present very much the appearance of the front of a pipe organ — in fact, the room already goes by the name of the "organ loft". Each "pipe" of the "organ" terminates in a valve, and in what may be called the "keyboard" of the "organ" are placed a number of small wells, used for the reception of all kinds of coins, from a halfpenny up to a sovereign. Here sits the lady cashier, who occupies a very important and responsible position. A customer, we will say, in the dress department makes a purchase, and hands the saleswoman a sovereign in payment of a bill for 15s 6d. The bill and the sovereign are placed in a small round box, known as a "poppet"; the saleswoman opens the valve of the station behind her counter, places in it the poppet (which is made to fit the tube), shuts the valve again, and, hey presto! the poppet and its contents are sent up the tube to the "organ loft" and almost into the hands of the cashier. That official quickly opens the poppet, puts in it the bill and the necessary change, opens the valve and places it inside, closes the valve, and away goes the poppet on its return journey, the whole transaction occupying but very few seconds. Each station and the poppets in use at it will be numbered, so that there is no possibility of the cashier sending a poppet to the wrong station, and the whole system promises to work with a degree of smoothness and swiftness which cannot fail to give the most complete satisfaction. Notes References External links Carrier Pneumatics Retail store elements Payment methods in retailing
Cash carrier
[ "Technology" ]
1,468
[ "Components", "Retail store elements" ]
7,192,871
https://en.wikipedia.org/wiki/Barium%20chromate
Barium chromate is a yellow sand-like powder with the formula BaCrO4. It is a known oxidizing agent and produces a green flame when heated, a result of the barium ions. History The first naturally occurring barium chromate was found in the country of Jordan. The brown crystals found perched on host rocks were named hashemite in honor of the Hashemite Kingdom of Jordan. The hashemite crystals range in color from light yellowish-brown to a darker greenish-brown and are usually less than 1 mm in length. The hashemite crystals are not composed of pure barium chromate but instead contain some small sulfur content as well. The different crystals contain a range of sulfur impurities ranging from the more pure dark crystals, Ba1.00(Cr0.93, S0.07)1.00O4, to the less pure light crystals, Ba1.00(Cr0.64, S0.36)1.00O4. Hashemite was found to be an isostructural chromate analog of baryte, BaSO4. Preparation and Reactions It can be synthesized by reacting barium hydroxide or barium chloride with potassium chromate: Ba(OH)2 + K2CrO4 → BaCrO4↓ + 2 KOH. Alternatively, it can be created by the interaction of barium chloride with sodium chromate. The precipitate is then washed, filtered, and dried. It is very insoluble in water, but is soluble in acids: Ksp = [Ba2+][CrO42−] = 2.1 × 10^−10. It can react with barium hydroxide in the presence of sodium azide to create barium chromate(V). The reaction releases oxygen and water: 4 BaCrO4 + 2 Ba(OH)2 → 2 Ba3(CrO4)2 + O2↑ + 2 H2O↑ (with NaN3 present). Common Uses Barium chromate has been found to be useful in many capacities. The compound is often used as a carrier for the chromium ions. One such case is the use of barium chromate as a sulfate scavenger in chromium electroplating baths. Over time the chromium concentration of the bath will decrease until the bath is no longer functional. Adding barium chromate enhances the life of the bath by adding to the chromic acid concentration. Barium chromate is an oxidizing agent, making it useful as a burn rate modifier in pyrotechnic compositions. It is especially useful in delay compositions such as delay fuses. Barium chromate is used as a corrosion inhibitive pigment when zinc-alloy electroplating surfaces. When mixed with solid fumaric acid, barium chromate can be used in the removal of impurities and residual moisture from organic dry-cleaning solvents or from petroleum fuels. Barium chromate is also used in the composition of a catalyst for alkane dehydrogenation. Barium chromate has also been used to color paints. The pigment known as lemon yellow often contained barium chromate mixed with lead sulfate. Due to its moderate tinting strength lemon yellow was not employed very frequently in oil painting. Pierre-Auguste Renoir and Claude Monet are known to have painted with lemon yellow. Research In 2004 a method was found for making single-crystalline ABO4 type nanorods. This method consisted of a modified template synthesis technique that was originally used for the synthesis of organic microtubules. Nanoparticles are allowed to grow in the pores of alumina membranes of various sizes. The varying sizes of the pores allow the growth to be controlled and cause the shapes to be reproducible. The alumina is then dissolved, leaving the nanoparticles behind intact. The synthesis can be carried out at room temperature, greatly reducing the cost and constrictions on conditions. In 2010, a study was conducted on four hexavalent chromium compounds to test the carcinogenic effects of chromium.
The chromium ions accumulate in the bronchial bifurcation sites, settling into the tissue and inducing tumors. Using zinc chromate as a standard, it was discovered that barium chromate is both genotoxic and cytotoxic. The cytotoxicity was determined to most likely be a result of the genotoxicity, but the cause of the genotoxicity is yet unknown. Safety Barium chromate is toxic. Chromates, when pulverized and inhaled, are carcinogens. References Further study Kühn, H. and Curran, M., Strontium, Barium and Calcium Chromates, in Artists' Pigments. A Handbook of Their History and Characteristics, Vol. 1: Feller, R.L. (Ed.) Oxford University Press 1986, p. 205 – 207. Lemon yellow, ColourLex Barium compounds Chromates Pyrotechnic oxidizers Oxidizing agents
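As a small illustration of the solubility product quoted in the article above (Ksp = 2.1 × 10^−10), the following Python sketch estimates the molar solubility of BaCrO4 in pure water. It ignores activity effects and chromate protonation, and the molar mass is an approximate value supplied here for the example, so the numbers are indicative only.

import math

# Solubility product of BaCrO4 as quoted in the text above.
KSP = 2.1e-10

# For BaCrO4 -> Ba(2+) + CrO4(2-), dissolving s mol/L gives [Ba2+] = [CrO4 2-] = s,
# so Ksp = s**2 and s = sqrt(Ksp).  (Activity coefficients and hydrolysis neglected.)
s = math.sqrt(KSP)                      # molar solubility in mol/L

MOLAR_MASS_BACRO4 = 253.32              # g/mol, approximate (Ba 137.33 + Cr 52.00 + 4 * O 16.00)
solubility_g_per_L = s * MOLAR_MASS_BACRO4

print(f"molar solubility ~ {s:.2e} mol/L")                          # ~1.4e-05 mol/L
print(f"mass solubility  ~ {solubility_g_per_L * 1000:.2f} mg/L")   # a few mg/L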
Barium chromate
[ "Chemistry" ]
1,102
[ "Chromates", "Redox", "Oxidizing agents", "Salts" ]
7,193,138
https://en.wikipedia.org/wiki/Bistriflimide
Bistriflimide, also known variously as bis(trifluoromethane)sulfonimide, bis(trifluoromethanesulfonyl)imide, bis(trifluoromethanesulfonyl)imidate (and variations thereof), informally and somewhat inaccurately as triflimide or triflimidate, or by the abbreviations TFSI or NTf2, is a non-coordinating anion with the chemical formula [(CF3SO2)2N]−. Its salts are typically referred to as being metal triflimidates. Applications The anion is widely used in ionic liquids (such as trioctylmethylammonium bis(trifluoromethylsulfonyl)imide), since it is less toxic and more stable than more "traditional" counterions such as tetrafluoroborate. This anion is also of importance in lithium-ion and lithium metal batteries (LiTFSI) because of its high dissociation and conductivity. It has the added advantage of suppressing crystallinity in poly(ethylene oxide), which increases the conductivity of that polymer below its melting point at 50 °C. Bistriflimidic acid The conjugate acid of bistriflimide, which is frequently referred to by the trivial name bistriflimidic acid (CAS: 82113-65-3), is a commercially available superacid. It is a crystalline compound, but is hygroscopic to the point of being deliquescent. Owing to its very high acidity and good compatibility with organic solvents it has been employed as a catalyst in a wide range of chemical reactions. Its pKa value in water cannot be accurately determined but in acetonitrile it has been estimated as −0.10 and in 1,2-dichloroethane −12.3 (relative to the pKa value of 2,4,6-trinitrophenol (picric acid), anchored to zero to crudely approximate the aqueous pKa scale), making it more acidic than triflic acid (pKaMeCN = 0.70, pKaDCE(relative to picric acid) = −11.4). Naming Developing an IUPAC name for bistriflimide that indicates the structure and reactivity is challenging, and changes to current names have been proposed. The main difficulty arises from the ambiguous use of the word amide to mean an acylated (including sulfonylated) amine or the anionic form of an amine. Likewise, imide can refer to a bisacylated amine or a twice deprotonated amine. Thus, depending on the system used, there is ambiguity as to whether amide or imide is being used to refer to the parent acid or the anion. (The anion has been referred to as an amidate or imidate in an attempt to distinguish it from the acid.) The complications in naming these compounds was highlighted in an article by the IUPAC. Since then, the IUPAC has recommended (2013) that derivatives of anionic nitrogen can be named as azanides, so bis(trifluoromethanesulfonyl)azanide would be an acceptable and unambiguous name for the bistriflimide anion. The parent acid, whose trivial name is triflimidic acid, would then be called bis(trifluoromethanesulfonyl)azane. The name 1,1,1-trifluoro-N-((trifluoromethyl)sulfonyl)methanesulfonamide is also an unambiguous IUPAC-acceptable name, though the symmetry of the molecule is not apparent from this construction. See also Triflic acid Triflidic acid Comins' reagent References Non-coordinating anions Sulfonamides Superacids Trifluoromethyl compounds
Bistriflimide
[ "Chemistry" ]
851
[ "Superacids", "Coordination chemistry", "Acids", "Non-coordinating anions" ]
2,180,494
https://en.wikipedia.org/wiki/Cut%20%28graph%20theory%29
In graph theory, a cut is a partition of the vertices of a graph into two disjoint subsets. Any cut determines a cut-set, the set of edges that have one endpoint in each subset of the partition. These edges are said to cross the cut. In a connected graph, each cut-set determines a unique cut, and in some cases cuts are identified with their cut-sets rather than with their vertex partitions. In a flow network, an s–t cut is a cut that requires the source and the sink to be in different subsets, and its cut-set only consists of edges going from the source's side to the sink's side. The capacity of an s–t cut is defined as the sum of the capacity of each edge in the cut-set. Definition A cut C = (S, T) is a partition of the vertex set V of a graph G = (V, E) into two subsets S and T. The cut-set of a cut C = (S, T) is the set of edges that have one endpoint in S and the other endpoint in T. If s and t are specified vertices of the graph G, then an s–t cut is a cut in which s belongs to the set S and t belongs to the set T. In an unweighted undirected graph, the size or weight of a cut is the number of edges crossing the cut. In a weighted graph, the value or weight is defined by the sum of the weights of the edges crossing the cut. A bond is a cut-set that does not have any other cut-set as a proper subset. Minimum cut A cut is minimum if the size or weight of the cut is not larger than the size of any other cut. The illustration on the right shows a minimum cut: the size of this cut is 2, and there is no cut of size 1 because the graph is bridgeless. The max-flow min-cut theorem proves that the maximum network flow and the sum of the cut-edge weights of any minimum cut that separates the source and the sink are equal. There are polynomial-time methods to solve the min-cut problem, notably the Edmonds–Karp algorithm. Maximum cut A cut is maximum if the size of the cut is not smaller than the size of any other cut. The illustration on the right shows a maximum cut: the size of the cut is equal to 5, and there is no cut of size 6, or |E| (the number of edges), because the graph is not bipartite (there is an odd cycle). In general, finding a maximum cut is computationally hard. The max-cut problem is one of Karp's 21 NP-complete problems. The max-cut problem is also APX-hard, meaning that there is no polynomial-time approximation scheme for it unless P = NP. However, it can be approximated to within a constant approximation ratio using semidefinite programming. Note that min-cut and max-cut are not dual problems in the linear programming sense, even though one gets from one problem to the other by changing min to max in the objective function. The max-flow problem is the dual of the min-cut problem. Sparsest cut The sparsest cut problem is to bipartition the vertices so as to minimize the ratio of the number of edges across the cut divided by the number of vertices in the smaller half of the partition. This objective function favors solutions that are both sparse (few edges crossing the cut) and balanced (close to a bisection). The problem is known to be NP-hard, and the best known approximation algorithm is an O(√log n) approximation due to Arora, Rao and Vazirani. Cut space The family of all cut sets of an undirected graph is known as the cut space of the graph. It forms a vector space over the two-element finite field of arithmetic modulo two, with the symmetric difference of two cut sets as the vector addition operation, and is the orthogonal complement of the cycle space.
If the edges of the graph are given positive weights, the minimum weight basis of the cut space can be described by a tree on the same vertex set as the graph, called the Gomory–Hu tree. Each edge of this tree is associated with a bond in the original graph, and the minimum cut between two nodes s and t is the minimum weight bond among the ones associated with the path from s to t in the tree. See also Connectivity (graph theory) Graph cuts in computer vision Split (graph theory) Vertex separator Bridge (graph theory) Cutwidth References Graph connectivity Combinatorial optimization
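To make the max-flow min-cut relationship and the Gomory–Hu tree described above concrete, here is a short Python sketch using the third-party networkx library. The example graph, its capacities, and all variable names are invented for illustration, and the exact networkx API may vary slightly between versions; treat it as a sketch rather than a definitive implementation.

import networkx as nx

# Small undirected graph with edge capacities (illustrative values only).
G = nx.Graph()
G.add_edge("a", "b", capacity=3)
G.add_edge("a", "c", capacity=2)
G.add_edge("b", "c", capacity=1)
G.add_edge("b", "d", capacity=2)
G.add_edge("c", "d", capacity=3)

# Minimum s-t cut: returns the cut value and the two sides of the vertex partition.
cut_value, (side_s, side_t) = nx.minimum_cut(G, "a", "d")
print("min a-d cut value:", cut_value)
print("partition:", side_s, side_t)

# Max-flow min-cut theorem: the maximum flow value equals the minimum cut value.
assert cut_value == nx.maximum_flow_value(G, "a", "d")

# Gomory-Hu tree: a single tree encoding minimum cuts between all vertex pairs.
T = nx.gomory_hu_tree(G, capacity="capacity")
print(sorted(T.edges(data="weight")))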
Cut (graph theory)
[ "Mathematics" ]
914
[ "Mathematical relations", "Graph connectivity", "Graph theory" ]
2,180,501
https://en.wikipedia.org/wiki/International%20Committee%20for%20Information%20Technology%20Standards
The InterNational Committee for Information Technology Standards (INCITS), (pronounced "insights"), is an ANSI-accredited standards development organization composed of Information technology developers. It was formerly known as the X3 and NCITS. INCITS is the central U.S. forum dedicated to creating technology standards. INCITS is accredited by the American National Standards Institute (ANSI) and is affiliated with the Information Technology Industry Council, a global policy advocacy organization that represents U.S. and global innovation companies. INCITS coordinates technical standards activity between ANSI in the US and joint ISO/IEC committees worldwide. This provides a mechanism to create standards that will be implemented in many nations. As such, INCITS' Executive Board also serves as ANSI's Technical Advisory Group for ISO/IEC Joint Technical Committee 1. JTC 1 is responsible for International standardization in the field of information technology. INCITS operates through consensus. Governance INCITS is guided by its Executive Board. The INCITS Executive Board established more than 40 Technical Committees, Task Groups and Expert Groups that are constantly developing standards for new technologies and updating standards for older products. Mission An open, collaborative community that enhances the competitiveness of U.S. organizations and brings technological advancement to society through the development and promotion of consensus-driven U.S. and global Information Technology standards. Standards development More than 2000 standards have been created and approved through the INCITS process, with many more in development. American National Standards are voluntary and serve U.S. interests well because all materially affected stakeholders have the opportunity to work together to create them. INCITS-approved standards only become mandatory when, and if, they are adopted or referenced by the government or when market forces make them imperative. Given the responsibilities and the expenditures associated with U.S. participation in international standards activities, INCITS considers participation as a "P" member of ISO/IEC JTC 1, as a declaration of support for the international committee's technical work. INCITS policy is to adopt as "Identical" American National Standards all ISO/IEC or ISO standards that fall within its program of work, with exceptions as outlined in our procedures. Accordingly, INCITS will adopt as "Identical" American National Standards all ISO/IEC or ISO standards that fall within its program of work. Similarly, INCITS will withdraw any such adopted American National Standard that has been withdrawn as an ISO/IEC or ISO International Standards. History INCITS was established in 1961 as the Accredited Standards Committee X3, Information Technology and is sponsored by Information Technology Industry Council (ITI), a trade association representing providers of information technology products and services then known as the Business Equipment Manufacturers Association (BEMA) and later renamed the Computer and Business Equipment Manufacturers' Association (CBEMA). The first organizational meeting was in February 1961 with ITI (CBEMA then) taking Secretariat responsibility. X3 was established under American National Standards Institute (ANSI) procedures. The forum was renamed Accredited Standards Committee NCITS, National Committee for Information Technology Standards in 1997, and the current name was approved in 2001. 
References External links Technical Committees, Task Groups, Study Groups INCITS/Artificial Intelligence INCITS/ATA Storage Interfaces (formerly known as INCITS/T13) INCITS/Biometrics (formerly known as INCITS/M1) INCITS/Biometrics Data Interchange (formerly known as INCITS/M1.7) INCITS/Biometric Performance Testing (formerly known as INCITS/M1.5) INCITS/Blockchain INCITS/Brain Computer Interfaces INCITS/Character Sets and Internationalization (formerly known as INCITS/L2) INCITS/Cloud Computing (formerly known as INCITS/Cloud38) INCITS/Cybersecurity and Privacy (formerly known as INCITS/CS1) INCITS/Data Management (formerly known as INCITS/DM32) INCITS/Fibre Channel (formerly known as INCITS/T11) INCITS/Fibre Channel Physical Variants (formerly known as INCITS/T11.2) INCITS/Fibre Channel Interconnection Schemes (formerly known as INCITS/T11.3) INCITS/Geographic Information Systems (GIS) (formerly known as INCITS/L1) INCITS/Graphics & Imaging (formerly known as INCITS/H3) INCITS/ID-Cards (formerly known as INCITS/B10) INCITS/Secure Identification Proximity Devices (formerly known as INCITS/B10.5) INCITS/Driver’s License/ID Cards (formerly known as INCITS/B10.8) INCITS/ID-Cards Test Methods (formerly known as INCITS/B10.11) INCITS/Inclusive Terminology INCITS/Internet of Things (IoT) INCITS/IT and Data Center Sustainability (formerly known as INCITS/ITS39) INCITS/Multimedia Coding (formerly known as INCITS/L3) INCITS/MPEG (formerly known as INCITS/L3.1) INCITS/JPEG (formerly known as INCITS/L3.2) INCITS/Programming Languages (formerly known as INCITS/PL22) INCITS/Fortran (formerly known as INCITS/PL22.3) INCITS/C Language (formerly known as INCITS/PL22.11) INCITS/C++ (formerly known as INCITS/PL22.16) INCITS/Networks (formerly known as INCITS/T3) INCITS/Office Equipment (formerly known as INCITS/W1) INCITS/SCSI (formerly known as INCITS/T10) INCITS/Software and Systems Engineering Others Homepage of INCITS, includes a list of INCITS standards Contact INCITS ANSI Accredited Standards Developers (ANSI Accredited SDO ) JTC 1 Homepage Charles A. Phillips Papers, 1959-1985 (Historical reference to BEMA) Organizations established in 1961 Standards organizations in the United States Information technology organizations
International Committee for Information Technology Standards
[ "Technology" ]
1,222
[ "Information technology", "Information technology organizations" ]
2,180,532
https://en.wikipedia.org/wiki/Hauptvermutung
The Hauptvermutung of geometric topology is a now refuted conjecture asking whether any two triangulations of a triangulable space have subdivisions that are combinatorially equivalent, i.e. the subdivided triangulations are built up in the same combinatorial pattern. It was originally formulated as a conjecture in 1908 by Ernst Steinitz and Heinrich Franz Friedrich Tietze, but it is now known to be false. History The non-manifold version was disproved by John Milnor in 1961 using Reidemeister torsion. The manifold version is true in dimensions at most 3. The cases of dimension 2 and dimension 3 were proved by Tibor Radó and Edwin E. Moise in the 1920s and 1950s, respectively. An obstruction to the manifold version was formulated by Andrew Casson and Dennis Sullivan in 1967–69 (originally in the simply-connected case), using the Rochlin invariant and the cohomology group H^3(M; Z/2). In dimension m ≥ 5, a homeomorphism f : N → M of m-dimensional piecewise linear manifolds has an invariant κ(f) ∈ H^3(M; Z/2) such that f is isotopic to a piecewise linear (PL) homeomorphism if and only if κ(f) = 0. In the simply-connected case with m ≥ 5, f is homotopic to a PL homeomorphism if and only if κ(f) = 0. This quantity is now seen as a relative version of the triangulation obstruction of Robion Kirby and Laurent C. Siebenmann, obtained in 1970. The Kirby–Siebenmann obstruction is defined for any compact m-dimensional topological manifold M again using the Rochlin invariant. For m ≥ 5, the manifold M has a PL structure (i.e., it can be triangulated by a PL manifold) if and only if its Kirby–Siebenmann class KS(M) ∈ H^4(M; Z/2) vanishes, and if this obstruction is 0, the PL structures are parametrized by H^3(M; Z/2). In particular there are only a finite number of essentially distinct PL structures on M. For compact simply-connected manifolds of dimension 4, Simon Donaldson found examples with an infinite number of inequivalent PL structures, and Michael Freedman found the E8 manifold which not only has no PL structure, but (by work of Casson) is not even homeomorphic to a simplicial complex. In 2013, Ciprian Manolescu proved that there exist compact topological manifolds of dimension 5 (and hence of any dimension greater than 5) that are not homeomorphic to a simplicial complex. Thus Casson's example illustrates a more general phenomenon that is not merely limited to dimension 4. Notes References External links Additional material, including original sources Disproved conjectures Geometric topology Structures on manifolds Surgery theory
Hauptvermutung
[ "Mathematics" ]
530
[ "Topology", "Geometric topology" ]
2,180,548
https://en.wikipedia.org/wiki/List%20of%20Fibre%20Channel%20standards
Fibre Channel 2005 FC-SATA (under development) FC-PI-2 INCITS 404 2004 FC-SP ANSI INCITS 1570-D FC-GS-4 (Fibre Channel Generic Services)ANSI INCITS 387. Includes the following standards: FC-GS-2 ANSI INCITS 288 (1999) FC-GS-3 ANSI INCITS 348 (2001) FC-SW-3 INCITS 384. Includes the following standards: FC-SW INCITS 321 (1998) FC-SW-2 INCITS 355 (2001) FC-DA INCITS TR-36. Includes the following standards: FC-FLA INCITS TR-20 (1998) FC-PLDA INCITS TR-19 (1998) 2003 FC-FS INCITS 373. Includes the following standards: FC-PH ANSI X3.230 (1994) FC-PH-2 ANSI X3.297 (1997) FC-PH-3 ANSI X3.303 (1998) FC-BB-2 INCITS 372 FC-SB-3 INCITS 374. Replaces: FC-SB ANSI X3.271 (1996) FC-SB-2 INCITS 374 (2001) 2002 FC-VI INCITS 357 FC-MI INCITS/TR-30 FC-PI INCITS 352 2001 FC-SB-2 INCITS 374. Replaced by: FC-SB-3 INCITS 374 (2003) FC-SW-2 INCITS 355. Replaced by: FC-SW-3 INCITS 384 (2004) FC-GS-3 ANSI INCITS 348. Replaced by: FC-GS-4 ANSI INCITS 387 (2004) 1999 FC-AL-2 INCITS 332 FC-TAPE INCITS TR-24 FC-GS-2 ANSI INCITS 288 (1999). Replaced by: FC-GS-4 ANSI INCITS 387 (2004) 1998 FC-PH-3 ANSI X3.303. Replaced by: FC-FS INCITS 373 (2003) FC-FLA INCITS TR-20. Replaced by: FC-DA INCITS TR-36 (2004) FC-PLDA INCITS TR-19. Replaced by: FC-DA INCITS TR-36 (2004) FC-SW INCITS 321. Replaced by: FC-SW-3 INCITS 384 (2004) 1997 FC-PH-2 ANSI X3.297. Replaced by: FC-FS INCITS 373 1996 FC-SB ANSI X3.271. Replaced by: FC-SB-3 INCITS 374 FC-AL ANSI X3.272 1994 FC-PH ANSI X3.230. Replaced by: FC-FS INCITS 373 (2003) Others: FC-LS: Fibre Channel Link Services FC-HBA API for Fibre Channel HBA management FC-GS-3 CT Fibre Channel Global Services Common Transport RFCs - Transmission of IPv6, IPv4, and Address Resolution Protocol (ARP) Packets over Fibre Channel, 2006 - Transmission of IPv6 Packets over Fibre Channel (Obsoleted by: RFC 4338) - IP and ARP over Fibre Channel (Obsoleted by: RFC 4338) - Securing Block Storage Protocols over IP SNMP-related specifications RFCs - MIB for Fibre-Channel Security Protocols (FC-SP) - Fibre Channel Registered State Change Notification (RSCN) MIB - Fibre-Channel Zone Server MIB - Fibre-Channel Fabric Configuration Server MIB - The Virtual Fabrics MIB - MIB for Fibre Channel's Fabric Shortest Path First (FSPF) Protocol - Fibre Channel Routing Information MIB - Fibre Channel Fabric Address Manager MIB - Fibre-Channel Name Server MIB - Definitions of Managed Objects for Internet Fibre Channel Protocol iFCP - Fibre Channel Management MIB - Definitions of Managed Objects for the Fabric Element in Fibre Channel Standard (Obsoleted by: RFC 4044) References Fibre Channel Fibre Channel standards Fibre Channel
List of Fibre Channel standards
[ "Technology" ]
834
[ "Computing-related lists" ]
2,180,593
https://en.wikipedia.org/wiki/Kirby%E2%80%93Siebenmann%20class
In mathematics, more specifically in geometric topology, the Kirby–Siebenmann class is an obstruction for topological manifolds to allow a PL-structure. The KS-class For a topological manifold M, the Kirby–Siebenmann class κ(M) is an element of the fourth cohomology group H^4(M; Z/2) of M (with coefficients in Z/2) that vanishes if M admits a piecewise linear structure. It is the only such obstruction, which can be phrased as the weak equivalence of TOP/PL with the Eilenberg–MacLane space K(Z/2, 3). The Kirby–Siebenmann class can be used to prove the existence of topological manifolds that do not admit a PL-structure. Concrete examples of such manifolds are E8 × T^n with n ≥ 1, where E8 stands for Freedman's E8 manifold. The class is named after Robion Kirby and Larry Siebenmann, who developed the theory of topological and PL-manifolds. See also Hauptvermutung References Homology theory Geometric topology Structures on manifolds Surgery theory
Kirby–Siebenmann class
[ "Mathematics" ]
198
[ "Topology stubs", "Topology", "Geometric topology" ]
2,180,754
https://en.wikipedia.org/wiki/Donaldson%27s%20theorem
In mathematics, and especially differential topology and gauge theory, Donaldson's theorem states that a definite intersection form of a compact, oriented, smooth manifold of dimension 4 is diagonalizable. If the intersection form is positive (negative) definite, it can be diagonalized to the identity matrix (negative identity matrix) over the . The original version of the theorem required the manifold to be simply connected, but it was later improved to apply to 4-manifolds with any fundamental group. History The theorem was proved by Simon Donaldson. This was a contribution cited for his Fields medal in 1986. Idea of proof Donaldson's proof utilizes the moduli space of solutions to the anti-self-duality equations on a principal -bundle over the four-manifold . By the Atiyah–Singer index theorem, the dimension of the moduli space is given by where is a Chern class, is the first Betti number of , and is the dimension of the positive-definite subspace of with respect to the intersection form. When is simply-connected with definite intersection form, possibly after changing orientation, one always has and . Thus taking any principal -bundle with , one obtains a moduli space of dimension five. This moduli space is non-compact and generically smooth, with singularities occurring only at the points corresponding to reducible connections, of which there are exactly many. Results of Clifford Taubes and Karen Uhlenbeck show that whilst is non-compact, its structure at infinity can be readily described. Namely, there is an open subset of , say , such that for sufficiently small choices of parameter , there is a diffeomorphism . The work of Taubes and Uhlenbeck essentially concerns constructing sequences of ASD connections on the four-manifold with curvature becoming infinitely concentrated at any given single point . For each such point, in the limit one obtains a unique singular ASD connection, which becomes a well-defined smooth ASD connection at that point using Uhlenbeck's removable singularity theorem. Donaldson observed that the singular points in the interior of corresponding to reducible connections could also be described: they looked like cones over the complex projective plane . Furthermore, we can count the number of such singular points. Let be the -bundle over associated to by the standard representation of . Then, reducible connections modulo gauge are in a 1-1 correspondence with splittings where is a complex line bundle over . Whenever we may compute: , where is the intersection form on the second cohomology of . Since line bundles over are classified by their first Chern class , we get that reducible connections modulo gauge are in a 1-1 correspondence with pairs such that . Let the number of pairs be . An elementary argument that applies to any negative definite quadratic form over the integers tells us that , with equality if and only if is diagonalizable. It is thus possible to compactify the moduli space as follows: First, cut off each cone at a reducible singularity and glue in a copy of . Secondly, glue in a copy of itself at infinity. The resulting space is a cobordism between and a disjoint union of copies of (of unknown orientations). The signature of a four-manifold is a cobordism invariant. Thus, because is definite: , from which one concludes the intersection form of is diagonalizable. Extensions Michael Freedman had previously shown that any unimodular symmetric bilinear form is realized as the intersection form of some closed, oriented four-manifold. 
Combining this result with the Serre classification theorem and Donaldson's theorem, several interesting results can be seen: 1) Any definite non-diagonalizable intersection form gives rise to a four-dimensional topological manifold with no differentiable structure (so it cannot be smoothed). 2) Two smooth simply-connected 4-manifolds are homeomorphic if and only if their intersection forms have the same rank, signature, and parity. See also Unimodular lattice Donaldson theory Yang–Mills equations Rokhlin's theorem Notes References Differential topology Theorems in topology Quadratic forms
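As a supplementary sketch, the dimension count behind the five-dimensional moduli space appearing in the proof sketch above can be made explicit. The index formula below is the standard one from the literature on anti-self-dual connections and is not spelled out in the article text, so take it as an illustrative statement rather than a quotation.

% Expected dimension of the moduli space of anti-self-dual SU(2)-connections
% with second Chern number k on a closed oriented 4-manifold M:
\dim \mathcal{M}_k \;=\; 8k \;-\; 3\bigl(1 - b_1(M) + b_+(M)\bigr).
% For M simply connected (b_1 = 0) with negative definite intersection form
% (b_+ = 0) and k = 1, this gives 8 - 3 = 5: the five-dimensional moduli
% space used in Donaldson's argument.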
Donaldson's theorem
[ "Mathematics" ]
841
[ "Mathematical theorems", "Quadratic forms", "Theorems in topology", "Topology", "Differential topology", "Mathematical problems", "Number theory" ]
2,180,929
https://en.wikipedia.org/wiki/Asherah%20pole
An Asherah pole is a sacred tree or pole that stood near Canaanite religious locations to honor the goddess Asherah. The relation of the literary references to an asherah and archaeological finds of Judaean pillar-figurines has engendered a literature of debate. The asherim were also cult objects related to the worship of Asherah, the consort of either Ba'al or, as inscriptions from Kuntillet ‘Ajrud and Khirbet el-Qom attest, Yahweh, and thus objects of contention among competing cults. Most English translations of the Hebrew Bible translate the Hebrew words ( ) or ( ) to "Asherah poles". References from the Hebrew Bible Asherim are mentioned in the Hebrew Bible in the books of Exodus, Deuteronomy, Judges, the Books of Kings, the second Book of Chronicles, and the books of Isaiah, Jeremiah, and Micah. The term often appears as merely , () referred to as "groves" in the King James Version, which follows the Septuagint rendering as (alsos), pl. (alsē) and the Vulgate , and "poles" in the New Revised Standard Version; no word that may be translated as "poles" appears in the text. Scholars have indicated, however, that the plural use of the term (English "Asherahs", translating Hebrew or ) provides ample evidence that reference is being made to objects of worship rather than a transcendent figure. The Hebrew Bible suggests that the poles were made of wood. In the sixth chapter of the Book of Judges, God is recorded as instructing the Israelite judge Gideon to cut down an Asherah pole that was next to an altar to Baal. The wood was to be used for a burnt offering. Deuteronomy 16:21 states that YHWH (rendered as "the ") hated Asherim whether rendered as poles: "Do not set up any [wooden] Asherah [pole] beside the altar you build to the your God" or as living trees: "You shall not plant any tree as an Asherah beside the altar of the Lord your God which you shall make". That Asherahs were not always living trees is shown in 1 Kings 14:23: "their asherim, beside every luxuriant tree". However, the record indicates that the Jewish people often departed from this ideal. For example, King Manasseh placed an Asherah pole in the Holy Temple (2 Kings 21:7). King Josiah's reforms in the late 7th century BC included the destruction of many Asherah poles (2 Kings 23:14). Exodus 34:13 states: "Break down their altars, smash their sacred stones and cut down their Asherim [Asherah poles]." Asherah poles in biblical archaeology Biblical archaeologists have suggested that until the 6th century BC the Israelite peoples had household shrines, or at least figurines, of Asherah, which are strikingly common in the archaeological remains. Thus, the pro-Yahwist prophets and priests were the "innovators" whilst Asherah worshippers were the "traditionalists". Joan E. Taylor suggests the temple menorah’s iconography can be traced to representations of a sacred tree, possibly “based on the form of an asherah, perhaps one associated in particular with Bethel.” However, Rachel Hachlili finds this hypothesis unlikely. Raphael Patai identified the pillar figurines with Asherah in The Hebrew Goddess. Purpose So far, the purpose of Asherah poles are unknown. Due to its role in Iron Age Yahwism, some suggest they were embodiments of Yahweh himself. Evidence for the latter includes pro-Yahwist kings like Jehu not destroying Asherah poles, despite violently suppressing non-Yahwist cults. In addition, the Yahwist inscription of Kuntillet ʿAjrud in the Sinai Peninsula pairs Yahweh with Asherah. 
Scholars believe Asherah is merely a cultic object or temple but others argue that it is a generic name for any consort of Yahweh. Ronald Hendel argues a middle ground is possible, where the Asherah pole is a symbol of the eponymous goddess but is believed to be the mediator between the worshipper and Yahweh, where she becomes the "effective bestower of blessing". Stéphanie Anthonioz says that early references to Asherah poles in the Hebrew Bible (i.e. ) were built on the awareness that Yahweh had a consort, from the perspective of many Israelites. With the exception of Deuteronomists, many Near Easterners believed symbols and cult images, like the Asherah pole, were reflections of the divine and the divine themselves in their anthropomorphized forms. See also Baetyl, type of sacred standing stone High place, raised place of worship Ceremonial pole Sacred trees and groves in Germanic paganism and mythology Matzevah, sacred pillar (Hebrew Bible) or Jewish headstone Kanrodai, sacred pillar in Japanese religions Xoanon Menhir, orthostat, or standing stone: upright stone, typically from the Bronze Age Stele, stone or wooden slab erected as a monument Trees in mythology Maqam Boaz and Jachin Judean pillar figures References Sources Ancient Israel and Judah Hebrew Bible objects Levantine mythology Religious objects Trees in religion Book of Exodus Book of Deuteronomy Book of Judges Books of Kings Books of Chronicles Book of Isaiah Book of Jeremiah Book of Micah Asherah Iconography
Asherah pole
[ "Physics" ]
1,151
[ "Religious objects", "Physical objects", "Matter" ]
2,181,039
https://en.wikipedia.org/wiki/Qubit%20field%20theory
A qubit field theory is a quantum field theory in which the canonical commutation relations involved in the quantisation of pairs of observables are relaxed. Specifically, it is a quantum field theory in which, unlike most other quantum field theories, the pair of observables is not required to always commute. Theory In many ordinary quantum field theories, constraining one observable to a fixed value results in the uncertainty of the other observable being infinite (c.f. uncertainty principle), and as a consequence there is potentially an infinite amount of information involved. In the situation of the standard position-momentum commutation (where the uncertainty principle is most commonly cited), this implies that a fixed, finite, volume of space has an infinite capacity to store information. However, Bekenstein's bound hints that the information storage capacity ought to be finite. Qubit field theory seeks to resolve this issue by removing the commutation restriction, allowing the capacity to store information to be finite; hence the name qubit, which derives from quantum-bit or quantised-bit. David Deutsch has presented a group of qubit field theories which, despite not requiring commutation of certain observables, still presents the same observable results as ordinary quantum field theory. J. Hruby has presented a supersymmetric extension. References External links Qubit Field Theory by David Deutsch Quantum field theory
Qubit field theory
[ "Physics" ]
301
[ "Quantum field theory", "Quantum mechanics", "Quantum physics stubs" ]
2,181,154
https://en.wikipedia.org/wiki/Lead%20hydrogen%20arsenate
Lead hydrogen arsenate, also called lead arsenate, acid lead arsenate or LA, chemical formula PbHAsO4, is an inorganic insecticide formerly used to control pests including gypsy moth, potato beetle and rats. Lead arsenate was the most extensively used arsenical insecticide. Two principal formulations of lead arsenate were marketed: basic lead arsenate (Pb5OH(AsO4)3, CASN: 1327-31-7) and acid lead arsenate (PbHAsO4). It is now banned for use as a pesticide in countries such as the US and UK as it is considered too toxic and persistent. Production and structure It is usually produced using the following reaction, which leads to formation of the desired product as a solid precipitate: Pb(NO3)2 + H3AsO4 → PbHAsO4 +2 HNO3 It has the same structure as the hydrogen phosphate PbHPO4. Like lead sulfate PbSO4, these salts are poorly soluble. Uses As an insecticide, it was introduced in 1898 used against the gypsy moth in Massachusetts. It represented a less soluble and less toxic alternative to then-used Paris Green, which is about 10x more toxic. It also adhered better to the surface of the plants, further enhancing and prolonging its insecticidal effect. Lead arsenate was widely used in Australia, Canada, New Zealand, US, England, France, North Africa, and many other areas, principally against the codling moth and snow-white linden moth. It was used mainly on apples, but also on other fruit trees, garden crops, turfgrasses, and against mosquitoes. In combination with ammonium sulfate, it was used in southern California as a winter treatment on lawns to kill crab grass seed. The search for a substitute was commenced in 1919, when it was found that its residues remain in the products despite washing their surfaces. Alternatives were found to be less effective or more toxic to plants and animals, until 1947 when DDT was found. US EPA banned use of lead arsenate on food crops in 1988. Safety LD50 is 1050 mg/kg (rat, oral). Morel mushrooms growing in old apple orchards that had been treated with lead arsenate may accumulate levels of toxic lead and arsenic that are unhealthy for human consumption. Lead arsenate was used as an insecticide in deciduous fruit trees from 1892 until around 1947 in Washington. Peryea et al. studied the distribution of Pb and As in these soils, concluding that these levels were above maximum tolerance levels. This indicates that these levels could be of environmental concern and potentially could be contaminating the groundwater in the area. See also Calcium arsenate References External links Case Studies in Environmental Medicine - Arsenic Toxicity Case Studies in Environmental Medicine - Lead Toxicity National Pollutant Inventory - Lead and Lead Compounds Fact Sheet Lead arsenate history Lead(II) compounds Hydrogen compounds Arsenates Inorganic insecticides
Lead hydrogen arsenate
[ "Chemistry" ]
624
[ "Inorganic insecticides", "Inorganic compounds" ]
2,181,194
https://en.wikipedia.org/wiki/Aladin%20Sky%20Atlas
Aladin is an interactive software sky atlas, created in France. It allows the user to visualize digitized astronomical images, superimpose entries from astronomical catalogues or databases, and interactively access related data and information from the SIMBAD database, the VizieR service and other archives for all known sources in the field. Created in 1999, Aladin has become a widely used VO portal capable of addressing challenges such as locating data of interest, accessing and exploring distributed datasets, visualizing multi-wavelength data. Compliance with existing or emerging VO standards, interconnection with other visualisation or analysis tools, and ability to easily compare heterogeneous data are key features allowing Aladin to be a powerful data exploration and integration tool, and a science enabler. Aladin is developed and maintained by the Centre de données astronomiques de Strasbourg (CDS) and released under the GNU GPL v3. See also Centre national de la recherche scientifique Observatory of Strasbourg SKY-MAP.ORG Stellarium References External links The Aladin Sky Atlas home page Free astronomy software Star atlases Centre de données astronomiques de Strasbourg 1999 software
Aladin Sky Atlas
[ "Astronomy" ]
243
[ "Centre de données astronomiques de Strasbourg", "Astronomy data and publications" ]
2,181,236
https://en.wikipedia.org/wiki/A%20Sidewalk%20Astronomer
A Sidewalk Astronomer is a 2005 documentary film about former Vedanta monk and amateur astronomer John Dobson. The film follows Dobson to state parks, astronomy clubs, and downtown streets as he promotes awareness of astronomy through his own personal style of sidewalk astronomy. The documentary includes voice overs by Dobson himself promoting his unorthodox views on religion and cosmology. Crews Produced and directed: Jeffrey Fox Jacobs Director of photography: Jeffrey Fox Jacobs Editor: Jeanne Vitale Music: John Angier Release: Jacobs Entertainment Inc Running time: 78 minutes Review "An inspiring film about an inspired teacher".. New York Times Screenings Shown at: Tribeca Film Festival 2005 ; Singapore Film Festival 2005; Maine Film Festival 2005; Avignon Film Festival 2005 ; Green Mountain Film Festival 2006 References External links Official Movie Website "A Stargazer Who Exhorts the World to Gaze With Him" by Dana Stevens The San Francisco Sidewalk Astronomers 2005 films Amateur astronomy 2005 documentary films American documentary films Biographical documentary films Documentary films about outer space 2000s English-language films 2000s American films English-language documentary films
A Sidewalk Astronomer
[ "Astronomy" ]
215
[ "Space art", "Documentary films about outer space" ]
2,181,360
https://en.wikipedia.org/wiki/Tarski%27s%20axioms
Tarski's axioms are an axiom system for Euclidean geometry, specifically for that portion of Euclidean geometry that is formulable in first-order logic with identity (i.e. is formulable as an elementary theory). As such, it does not require an underlying set theory. The only primitive objects of the system are "points" and the only primitive predicates are "betweenness" (expressing the fact that a point lies on a line segment between two other points) and "congruence" (expressing the fact that the distance between two points equals the distance between two other points). The system contains infinitely many axioms. The axiom system is due to Alfred Tarski who first presented it in 1926. Other modern axiomatizations of Euclidean geometry are Hilbert's axioms (1899) and Birkhoff's axioms (1932). Using his axiom system, Tarski was able to show that the first-order theory of Euclidean geometry is consistent, complete and decidable: every sentence in its language is either provable or disprovable from the axioms, and we have an algorithm which decides for any given sentence whether it is provable or not. Overview Early in his career Tarski taught geometry and researched set theory. His coworker Steven Givant (1999) explained Tarski's take-off point: From Enriques, Tarski learned of the work of Mario Pieri, an Italian geometer who was strongly influenced by Peano. Tarski preferred Pieri's system [of his Point and Sphere memoir], where the logical structure and the complexity of the axioms were more transparent. Givant then says that "with typical thoroughness" Tarski devised his system: What was different about Tarski's approach to geometry? First of all, the axiom system was much simpler than any of the axiom systems that existed up to that time. In fact the length of all of Tarski's axioms together is not much more than just one of Pieri's 24 axioms. It was the first system of Euclidean geometry that was simple enough for all axioms to be expressed in terms of the primitive notions only, without the help of defined notions. Of even greater importance, for the first time a clear distinction was made between full geometry and its elementary — that is, its first order — part. Like other modern axiomatizations of Euclidean geometry, Tarski's employs a formal system consisting of symbol strings, called sentences, whose construction respects formal syntactical rules, and rules of proof that determine the allowed manipulations of the sentences. Unlike some other modern axiomatizations, such as Birkhoff's and Hilbert's, Tarski's axiomatization has no primitive objects other than points, so a variable or constant cannot refer to a line or an angle. Because points are the only primitive objects, and because Tarski's system is a first-order theory, it is not even possible to define lines as sets of points. The only primitive relations (predicates) are "betweenness" and "congruence" among points. Tarski's axiomatization is shorter than its rivals, in a sense Tarski and Givant (1999) make explicit. It is more concise than Pieri's because Pieri had only two primitive notions while Tarski introduced three: point, betweenness, and congruence. Such economy of primitive and defined notions means that Tarski's system is not very convenient for doing Euclidean geometry. Rather, Tarski designed his system to facilitate its analysis via the tools of mathematical logic, i.e., to facilitate deriving its metamathematical properties.
Tarski's system has the unusual property that all sentences can be written in universal-existential form, a special case of the prenex normal form. This form has all universal quantifiers preceding any existential quantifiers, so that all sentences can be recast in the form This fact allowed Tarski to prove that Euclidean geometry is decidable: there exists an algorithm which can determine the truth or falsity of any sentence. Tarski's axiomatization is also complete. This does not contradict Gödel's first incompleteness theorem, because Tarski's theory lacks the expressive power needed to interpret Robinson arithmetic . The axioms Alfred Tarski worked on the axiomatization and metamathematics of Euclidean geometry intermittently from 1926 until his death in 1983, with Tarski (1959) heralding his mature interest in the subject. The work of Tarski and his students on Euclidean geometry culminated in the monograph Schwabhäuser, Szmielew, and Tarski (1983), which set out the 10 axioms and one axiom schema shown below, the associated metamathematics, and a fair bit of the subject. Gupta (1965) made important contributions, and Tarski and Givant (1999) discuss the history. Fundamental relations These axioms are a more elegant version of a set Tarski devised in the 1920s as part of his investigation of the metamathematical properties of Euclidean plane geometry. This objective required reformulating that geometry as a first-order theory. Tarski did so by positing a universe of points, with lower case letters denoting variables ranging over that universe. Equality is provided by the underlying logic (see First-order logic#Equality and its axioms). Tarski then posited two primitive relations: Betweenness, a triadic relation. The atomic sentence Bxyz denotes that the point y is "between" the points x and z, in other words, that y is a point on the line segment xz. (This relation is interpreted inclusively, so that Bxyz is trivially true whenever x=y or y=z). Congruence (or "equidistance"), a tetradic relation. The atomic sentence Cwxyz or commonly wx ≡ yz can be interpreted as wx is congruent to yz, in other words, that the length of the line segment wx is equal to the length of the line segment yz. Betweenness captures the affine aspect (such as the parallelism of lines) of Euclidean geometry; congruence, its metric aspect (such as angles and distances). The background logic includes identity, a binary relation denoted by =. The axioms below are grouped by the types of relation they invoke, then sorted, first by the number of existential quantifiers, then by the number of atomic sentences. The axioms should be read as universal closures; hence any free variables should be taken as tacitly universally quantified. Congruence axioms Reflexivity of Congruence Identity of Congruence Transitivity of Congruence Commentary While the congruence relation is, formally, a 4-way relation among points, it may also be thought of, informally, as a binary relation between two line segments and . The "Reflexivity" and "Transitivity" axioms above, combined, prove both: that this binary relation is in fact an equivalence relation it is reflexive: . it is symmetric . it is transitive . and that the order in which the points of a line segment are specified is irrelevant. . . . The "transitivity" axiom asserts that congruence is Euclidean, in that it respects the first of Euclid's "common notions". 
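Purely as an illustration of the notation just introduced (the symbolic forms below follow the standard presentation in the literature and are not a quotation from this article), the universal-existential sentence form and the three congruence axioms can be written as:

% Every sentence of the theory can be put in universal-existential form:
\forall u_1 \cdots \forall u_k \;\exists v_1 \cdots \exists v_m \;\varphi,
\qquad \varphi \ \text{quantifier-free}.
% The congruence axioms, with B for betweenness and \equiv for congruence:
\text{Reflexivity of Congruence:}\quad xy \equiv yx
\text{Identity of Congruence:}\quad xy \equiv zz \;\rightarrow\; x = y
\text{Transitivity of Congruence:}\quad (xy \equiv zu \,\wedge\, xy \equiv vw) \;\rightarrow\; zu \equiv vw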
The "Identity of Congruence" axiom states, intuitively, that if xy is congruent with a segment that begins and ends at the same point, x and y are the same point. This is closely related to the notion of reflexivity for binary relations. Betweenness axioms Identity of Betweenness The only point on the line segment is itself. Axiom of Pasch Axiom schema of Continuity Let φ(x) and ψ(y) be first-order formulae containing no free instances of either a or b. Let there also be no free instances of x in ψ(y) or of y in φ(x). Then all instances of the following schema are axioms: Let r be a ray with endpoint a. Let the first order formulae φ and ψ define subsets X and Y of r, such that every point in Y is to the right of every point of X (with respect to a). Then there exists a point b in r lying between X and Y. This is essentially the Dedekind cut construction, carried out in a way that avoids quantification over sets. Note that the formulae φ(x) and ψ(y) may contain parameters, i.e. free variables different from a, b, x, y. And indeed, each instance of the axiom scheme that does not contain parameters can be proven from the other axioms. Lower Dimension There exist three noncollinear points. Without this axiom, the theory could be modeled by the one-dimensional real line, a single point, or even the empty set. Congruence and betweenness Upper Dimension Three points equidistant from two distinct points form a line. Without this axiom, the theory could be modeled by three-dimensional or higher-dimensional space. Axiom of Euclid Three variants of this axiom can be given, labeled A, B and C below. They are equivalent to each other given the remaining Tarski's axioms, and indeed equivalent to Euclid's parallel postulate. A: Let a line segment join the midpoint of two sides of a given triangle. That line segment will be half as long as the third side. This is equivalent to the interior angles of any triangle summing to two right angles. B: Given any triangle, there exists a circle that includes all of its vertices. C: Given any angle and any point v in its interior, there exists a line segment including v, with an endpoint on each side of the angle. Each variant has an advantage over the others: A dispenses with existential quantifiers; B has the fewest variables and atomic sentences; C requires but one primitive notion, betweenness. This variant is the usual one given in the literature. Five Segment Begin with two triangles, xuz and x'u'z'. Draw the line segments yu and y'u', connecting a vertex of each triangle to a point on the side opposite to the vertex. The result is two divided triangles, each made up of five segments. If four segments of one triangle are each congruent to a segment in the other triangle, then the fifth segments in both triangles must be congruent. This is equivalent to the side-angle-side rule for determining that two triangles are congruent; if the angles uxz and u'x'z' are congruent (there exist congruent triangles xuz and x'u'z'), and the two pairs of incident sides are congruent (xu ≡ x'u' and xz ≡ x'z'), then the remaining pair of sides is also congruent (uz ≡ u'z). Segment Construction For any point y, it is possible to draw in any direction (determined by x) a line congruent to any segment ab. Discussion According to Tarski and Givant (1999: 192-93), none of the above axioms are fundamentally new. The first four axioms establish some elementary properties of the two primitive relations. 
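Similarly, three of the axioms described in this passage have short symbolic forms that may help the reader; they follow the usual presentation in the literature (the axiom of Pasch, the continuity schema, Upper Dimension, the Axiom of Euclid, and Five Segment are longer and are omitted here):

\text{Identity of Betweenness:}\quad Bxyx \;\rightarrow\; x = y
\text{Lower Dimension:}\quad \exists a\,\exists b\,\exists c\; (\neg Babc \,\wedge\, \neg Bbca \,\wedge\, \neg Bcab)
\text{Segment Construction:}\quad \forall x\,\forall y\,\forall a\,\forall b\;\exists z\; (Bxyz \,\wedge\, yz \equiv ab)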
For instance, Reflexivity and Transitivity of Congruence establish that congruence is an equivalence relation over line segments. The Identity of Congruence and of Betweenness govern the trivial case when those relations are applied to nondistinct points. The theorem xy≡zz ↔ x=y ↔ Bxyx extends these Identity axioms. A number of other properties of Betweenness are derivable as theorems including: Reflexivity: Bxxy ; Symmetry: Bxyz → Bzyx ; Transitivity: (Bxyw ∧ Byzw) → Bxyz ; Connectivity: (Bxyw ∧ Bxzw) → (Bxyz ∨ Bxzy). The last two properties totally order the points making up a line segment. The Upper and Lower Dimension axioms together require that any model of these axioms have dimension 2, i.e. that we are axiomatizing the Euclidean plane. Suitable changes in these axioms yield axiom sets for Euclidean geometry for dimensions 0, 1, and greater than 2 (Tarski and Givant 1999: Axioms 8(1), 8(n), 9(0), 9(1), 9(n) ). Note that solid geometry requires no new axioms, unlike the case with Hilbert's axioms. Moreover, Lower Dimension for n dimensions is simply the negation of Upper Dimension for n - 1 dimensions. When the number of dimensions is greater than 1, Betweenness can be defined in terms of congruence (Tarski and Givant, 1999). First define the relation "≤" (where is interpreted "the length of line segment is less than or equal to the length of line segment "): In the case of two dimensions, the intuition is as follows: For any line segment xy, consider the possible range of lengths of xv, where v is any point on the perpendicular bisector of xy. It is apparent that while there is no upper bound to the length of xv, there is a lower bound, which occurs when v is the midpoint of xy. So if xy is shorter than or equal to zu, then the range of possible lengths of xv will be a superset of the range of possible lengths of zw, where w is any point on the perpendicular bisector of zu. Betweenness can then be defined by using the intuition that the shortest distance between any two points is a straight line: The Axiom Schema of Continuity assures that the ordering of points on a line is complete (with respect to first-order definable properties). As was pointed out by Tarski, this first-order axiom schema may be replaced by a more powerful second-order Axiom of Continuity if one allows for variables to refer to arbitrary sets of points. The resulting second-order system is equivalent to Hilbert's set of axioms. (Tarski and Givant 1999) The Axioms of Pasch and Euclid are well known. The Segment Construction axiom makes measurement and the Cartesian coordinate system possible—simply assign the length 1 to some arbitrary non-empty line segment. Indeed, it is shown in (Schwabhäuser 1983) that by specifying two distinguished points on a line, called 0 and 1, we can define an addition, multiplication and ordering, turning the set of points on that line into a real-closed ordered field. We can then introduce coordinates from this field, showing that every model of Tarski's axioms is isomorphic to the two-dimensional plane over some real-closed ordered field. The standard geometric notions of parallelism and intersection of lines (where lines are represented by two distinct points on them), right angles, congruence of angles, similarity of triangles, tangency of lines and circles (represented by a center point and a radius) can all be defined in Tarski's system. Let wff stand for a well-formed formula (or syntactically correct first-order formula) in Tarski's system. 
Tarski and Givant (1999: 175) proved that Tarski's system is: Consistent: There is no wff such that it and its negation can both be proven from the axioms; Complete: Every wff or its negation is a theorem provable from the axioms; Decidable: There exists an algorithm that decides for every wff whether it is provable or disprovable from the axioms. This follows from Tarski's: Decision procedure for the real closed field, which he found by quantifier elimination (the Tarski–Seidenberg theorem); Axioms admitting the above-mentioned representation as a two-dimensional plane over a real closed field. This has the consequence that every statement of (second-order, general) Euclidean geometry which can be formulated as a first-order sentence in Tarski's system is true if and only if it is provable in Tarski's system, and this provability can be automatically checked with Tarski's algorithm. This, for instance, applies to all theorems in Euclid's Elements, Book I. An example of a theorem of Euclidean geometry which cannot be so formulated is the Archimedean property: to any two positive-length line segments S1 and S2 there exists a natural number n such that nS1 is longer than S2. (This is a consequence of the fact that there are real-closed fields that contain infinitesimals.) Other notions that cannot be expressed in Tarski's system are constructability with straightedge and compass and statements that talk about "all polygons", etc. Gupta (1965) proved Tarski's axioms independent, excepting Pasch and Reflexivity of Congruence. Negating the Axiom of Euclid yields hyperbolic geometry, while eliminating it outright yields absolute geometry. Full (as opposed to elementary) Euclidean geometry requires giving up a first-order axiomatization: replace φ(x) and ψ(y) in the axiom schema of Continuity with x ∈ A and y ∈ B, where A and B are universally quantified variables ranging over sets of points. Comparison with Hilbert's system Hilbert's axioms for plane geometry number 16, and include Transitivity of Congruence and a variant of the Axiom of Pasch. The only notion from intuitive geometry invoked in the remarks to Tarski's axioms is triangle. (Versions B and C of the Axiom of Euclid refer to "circle" and "angle," respectively.) Hilbert's axioms also require "ray," "angle," and the notion of a triangle "including" an angle. In addition to betweenness and congruence, Hilbert's axioms require a primitive binary relation "on," linking a point and a line. Hilbert uses two axioms of Continuity, and they require second-order logic. By contrast, Tarski's Axiom schema of Continuity consists of infinitely many first-order axioms. Such a schema is indispensable; Euclidean geometry in Tarski's (or equivalent) language cannot be finitely axiomatized as a first-order theory. Hilbert's system is therefore considerably stronger: every model is isomorphic to the real plane (using the standard notions of points and lines). By contrast, Tarski's system has many non-isomorphic models: for every real-closed field F, the plane F² provides one such model (where betweenness and congruence are defined in the obvious way). The first four groups of axioms of Hilbert's axioms for plane geometry are bi-interpretable with Tarski's axioms minus continuity. See also Euclidean geometry Euclidean space Notes References Schwabhäuser, W., Szmielew, W., and Tarski, A. (1983). Metamathematische Methoden in der Geometrie. Available as a 2007 reprint, Brouwer Press. Elementary geometry Foundations of geometry Mathematical axioms
Tarski's axioms
[ "Mathematics" ]
4,042
[ "Mathematical logic", "Mathematical axioms", "Elementary mathematics", "Elementary geometry", "Foundations of geometry" ]
2,181,434
https://en.wikipedia.org/wiki/Ping%20of%20death
A ping of death is a type of attack on a computer system that involves sending a malformed or otherwise malicious ping to a computer. In this attack, a host sends hundreds of ping requests with a packet size that is large or illegal to another host to try to take it offline or to keep it preoccupied responding with ICMP Echo replies. A correctly formed ping packet is typically 56 bytes in size, or 64 bytes when the Internet Control Message Protocol (ICMP) header is considered, and 84 bytes including the Internet Protocol (IP) version 4 header. However, any IPv4 packet (including pings) may be as large as 65,535 bytes. Some computer systems were never designed to properly handle a ping packet larger than the maximum packet size because it violates the Internet Protocol. Like other large but well-formed packets, a ping of death is fragmented into groups of 8 octets before transmission. However, when the target computer reassembles the malformed packet, a buffer overflow can occur, causing a system crash and potentially allowing the injection of malicious code. The excessive size prevents the machine from processing the packet effectively, disrupting operating system processes and leading to reboots or crashes. In early implementations of TCP/IP, this bug was easy to exploit and could affect a wide variety of systems including Unix, Linux, Mac, Windows, and peripheral devices. As systems began filtering out pings of death through firewalls and other detection methods, a different kind of ping attack known as ping flooding later appeared, which floods the victim with so many ping requests that normal traffic fails to reach the system (a basic denial-of-service attack). The ping of death attack has been largely neutralized by advancements in technology. Devices produced after 1998 include defenses against such attacks, rendering them resilient to this specific threat. However, in a notable development, a variant targeting IPv6 packets on Windows systems was identified, leading Microsoft to release a patch in mid-2013. Detailed information The maximum packet length of an IPv4 packet including the IP header is 65,535 (2¹⁶ − 1) bytes, a limitation imposed by the use of a 16-bit-wide IP header field that describes the total packet length. The underlying data link layer almost always poses limits to the maximum frame size (see MTU). In Ethernet, this is typically 1500 bytes. In such a case, a large IP packet is split across multiple IP packets (also known as IP fragments), so that each IP fragment will match the imposed limit. The receiver of the IP fragments will reassemble them into the complete IP packet and continue processing it as usual. When fragmentation is performed, each IP fragment needs to carry information about which part of the original IP packet it contains. This information is kept in the Fragment Offset field, in the IP header. The field is 13 bits long, and contains the offset of the data in the current IP fragment, in the original IP packet. The offset is given in units of 8 bytes. This allows a maximum offset of 65,528 bytes ((2¹³ − 1) × 8). Then when adding 20 bytes of IP header, the maximum will be 65,548 bytes, which exceeds the maximum packet size of 65,535 bytes. This means that an IP fragment with the maximum offset should have data no larger than 7 bytes, or else it would exceed the limit of the maximum packet length. 
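The arithmetic above, and the reassembly bound check described below, can be illustrated with a short sketch. The helper fragment_within_bounds is hypothetical, written only for this illustration and not taken from any real IP stack; the field names follow the IPv4 header fields mentioned in the text.

# Field widths from the IPv4 header, as described above.
MAX_IPV4_PACKET = 65_535           # 2**16 - 1, limit of the 16-bit Total Length field
FRAGMENT_UNIT = 8                  # Fragment Offset is counted in 8-byte units
MAX_OFFSET_UNITS = 2**13 - 1       # 13-bit Fragment Offset field

max_offset_bytes = MAX_OFFSET_UNITS * FRAGMENT_UNIT
print(max_offset_bytes)            # 65528
print(max_offset_bytes + 20)       # 65548 -- already past the 65,535-byte packet limit

def fragment_within_bounds(offset_units: int, total_length: int) -> bool:
    """Reassembly sanity check described below: the fragment's offset
    (converted to bytes) plus its Total Length field must not exceed 65,535."""
    return offset_units * FRAGMENT_UNIT + total_length <= MAX_IPV4_PACKET

print(fragment_within_bounds(0, 1500))                 # True: an ordinary first fragment
print(fragment_within_bounds(MAX_OFFSET_UNITS, 1500))  # False: oversized trailing fragment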
A malicious user can send an IP fragment with the maximum offset and with much more data than 8 bytes (as large as the physical layer allows it to be). When the receiver assembles all IP fragments, it will end up with an IP packet which is larger than 65,535 bytes. This may possibly overflow memory buffers which the receiver allocated for the packet, and can cause various problems. As is evident from the description above, the problem has nothing to do with ICMP, which is used only as payload, big enough to exploit the problem. It is a problem in the reassembly process of IP fragments, which may contain any type of protocol (TCP, UDP, IGMP, etc.). The correction of the problem is to add checks in the reassembly process. The check for each incoming IP fragment makes sure that the sum of "Fragment Offset" and "Total length" fields in the IP header of each IP fragment is smaller or equal to 65,535. If the sum is greater, then the packet is invalid, and the IP fragment is ignored. This check is performed by some firewalls, to protect hosts that do not have the bug fixed. Another fix for the problem is using a memory buffer larger than 65,535 bytes for the re-assembly of the packet. (This is essentially a breaking of the specification, since it adds support for packets larger than those allowed.) Ping of death in IPv6 In 2013, an IPv6 version of the ping of death vulnerability was discovered in Microsoft Windows. Windows TCP/IP stack did not handle memory allocation correctly when processing incoming malformed ICMPv6 packets, which could cause remote denial of service. This vulnerability was fixed in MS13-065 in August 2013. The CVE-ID for this vulnerability is . In 2020, another bug () in ICMPv6 was found around Router Advertisement, which could even lead to remote code execution. See also INVITE of Death LAND Ping flood ReDoS Smurf attack References External links Ping of death at Insecure.Org Denial-of-service attacks
Ping of death
[ "Technology" ]
1,143
[ "Denial-of-service attacks", "Computer security exploits" ]
2,181,563
https://en.wikipedia.org/wiki/Nuclear%20material
Nuclear material refers to the metals uranium, plutonium, and thorium, in any form, according to the IAEA. This is differentiated further into "source material", consisting of natural and depleted uranium, and "special fissionable material", consisting of enriched uranium (U-235), uranium-233, and plutonium-239. Uranium ore concentrates are considered to be a "source material", although these are not subject to safeguards under the Nuclear Non-Proliferation Treaty. According to the Nuclear Regulatory Commission (NRC), there are four different types of regulated nuclear materials: special nuclear material, source material, byproduct material and radium. Special nuclear material is plutonium, uranium-233, or uranium enriched in U-233 or U-235 above the concentrations found in nature. Source material is thorium, or uranium with a U-235 content equal to or less than that found in nature. Byproduct material is radioactive material that is not source or special nuclear material. It can be an isotope produced by a nuclear reactor, or the tailings and waste produced when uranium or thorium is extracted from an ore processed mainly for its source material content. Byproduct material can also be discrete sources of radium-226, or discrete sources of accelerator-produced isotopes or naturally occurring isotopes that pose a threat greater than or equal to that of a discrete source of radium-226. Radium is also a regulated nuclear material that is found in nature and produced by the radioactive decay of uranium. The half-life of radium is approximately 1,600 years. Different countries may use different terminology: in the United States of America, "nuclear material" most commonly refers to "special nuclear materials" (SNM), with the potential to be made into nuclear weapons as defined in the Atomic Energy Act of 1954. The "special nuclear materials" are plutonium-239, uranium-233, and enriched uranium (U-235). Note that the 1980 Convention on the Physical Protection of Nuclear Material definition of nuclear material does not include thorium. The NRC has a regulatory process for nuclear materials with five main components. Developing regulations and guidance for applicants and licensees Licensing, decommissioning and certification for applicants seeking to use nuclear materials, operate a nuclear facility, or decommission a facility and terminate its license Oversight of licensee operations and facilities to ensure that licensees comply with the safety requirements Evaluation of operational experience at licensed facilities or licensed activities Support for decisions by conducting research, holding hearings that address concerns, and obtaining independent reviews that support the NRC regulatory decisions The United States Department of Energy Office of Environmental Management (EM) manages and dispositions spent nuclear fuel and surplus nuclear materials. The EM Nuclear Materials Program safely and securely manages the spent nuclear fuel in its facilities while maintaining an inventory of the materials. The Nuclear Waste Policy Act defines procedures to evaluate and select locations for geological repositories to safely dispose of or store the radioactive waste. The EM also works with the National Nuclear Security Administration (NNSA) to dispose of the surplus, non-pit, weapons-usable plutonium-239. EM, with the NNSA, oversees the disposition of 21 metric tons of surplus highly enriched uranium, of which about 13.5 metric tons is spent nuclear fuel. 
See also Tube Alloys Institute of Nuclear Materials Management Material unaccounted for References Nuclear weapons
Nuclear material
[ "Physics" ]
690
[ "Materials", "Nuclear materials", "Matter" ]
2,181,790
https://en.wikipedia.org/wiki/X-ray%20microtomography
In radiography, X-ray microtomography uses X-rays to create cross-sections of a physical object that can be used to recreate a virtual model (3D model) without destroying the original object. It is similar to tomography and X-ray computed tomography. The prefix micro- (symbol: μ) is used to indicate that the pixel sizes of the cross-sections are in the micrometre range. These pixel sizes have also resulted in creation of its synonyms high-resolution X-ray tomography, micro-computed tomography (micro-CT or μCT), and similar terms. Sometimes the terms high-resolution computed tomography (HRCT) and micro-CT are differentiated, but in other cases the term high-resolution micro-CT is used. Virtually all tomography today is computed tomography. Micro-CT has applications both in medical imaging and in industrial computed tomography. In general, there are two types of scanner setups. In one setup, the X-ray source and detector are typically stationary during the scan while the sample/animal rotates. The second setup, much more like a clinical CT scanner, is gantry based where the animal/specimen is stationary in space while the X-ray tube and detector rotate around. These scanners are typically used for small animals (in vivo scanners), biomedical samples, foods, microfossils, and other studies for which minute detail is desired. The first X-ray microtomography system was conceived and built by Jim Elliott in the early 1980s. The first published X-ray microtomographic images were reconstructed slices of a small tropical snail, with pixel size about 50 micrometers. Working principle Imaging system Fan beam reconstruction The fan-beam system is based on a one-dimensional (1D) X-ray detector and an electronic X-ray source, creating 2D cross-sections of the object. Typically used in human computed tomography systems. Cone beam reconstruction The cone-beam system is based on a 2D X-ray detector (camera) and an electronic X-ray source, creating projection images that later will be used to reconstruct the image cross-sections. Open/Closed systems Open X-ray system In an open system, X-rays may escape or leak out, thus the operator must stay behind a shield, have special protective clothing, or operate the scanner from a distance or a different room. Typical examples of these scanners are the human versions, or designed for big objects. Closed X-ray system In a closed system, X-ray shielding is put around the scanner so the operator can put the scanner on a desk or special table. Although the scanner is shielded, care must be taken and the operator usually carries a dosimeter, since X-rays have a tendency to be absorbed by metal and then re-emitted like an antenna. Although a typical scanner will produce a relatively harmless volume of X-rays, repeated scannings in a short timeframe could pose a danger. Digital detectors with small pixel pitches and micro-focus x-ray tubes are usually employed to yield in high resolution images. Closed systems tend to become very heavy because lead is used to shield the X-rays. Therefore, the smaller scanners only have a small space for samples. 3D image reconstruction The principle Because microtomography scanners offer isotropic, or near isotropic, resolution, display of images does not need to be restricted to the conventional axial images. Instead, it is possible for a software program to build a volume by 'stacking' the individual slices one on top of the other. The program may then display the volume in an alternative manner. 
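As a rough illustration of the reconstruction principle described above, the following sketch simulates projections of a test image and then recovers a single cross-section with filtered back-projection. It uses scikit-image purely for convenience and a parallel-beam geometry; this is an illustrative assumption, not the cone-beam geometry or the vendor software used by most micro-CT scanners.

import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

# A synthetic 2D cross-section standing in for one slice of the scanned object.
image = shepp_logan_phantom()

# Simulate the projections (the sinogram) acquired as the sample rotates.
angles = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)
sinogram = radon(image, theta=angles)

# Filtered back-projection recovers the slice from its projections;
# stacking many such slices yields the 3D volume discussed above.
reconstruction = iradon(sinogram, theta=angles)

print(image.shape, sinogram.shape, reconstruction.shape)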
Image reconstruction software For X-ray microtomography, powerful open source software is available, such as the ASTRA toolbox. The ASTRA Toolbox is a MATLAB and python toolbox of high-performance GPU primitives for 2D and 3D tomography, from 2009 to 2014 developed by iMinds-Vision Lab, University of Antwerp and since 2014 jointly developed by iMinds-VisionLab, UAntwerpen and CWI, Amsterdam. The toolbox supports parallel, fan, and cone beam, with highly flexible source/detector positioning. A large number of reconstruction algorithms are available, including FBP, ART, SIRT, SART, CGLS. For 3D visualization, tomviz is a popular open-source tool for tomography. Volume rendering Volume rendering is a technique used to display a 2D projection of a 3D discretely sampled data set, as produced by a microtomography scanner. Usually these are acquired in a regular pattern, e.g., one slice every millimeter, and usually have a regular number of image pixels in a regular pattern. This is an example of a regular volumetric grid, with each volume element, or voxel represented by a single value that is obtained by sampling the immediate area surrounding the voxel. Image segmentation Where different structures have similar threshold density, it can become impossible to separate them simply by adjusting volume rendering parameters. The solution is called segmentation, a manual or automatic procedure that can remove the unwanted structures from the image. Typical use Archaeology Reconstructing fire-damaged artifacts, such as the En-Gedi Scroll and Herculaneum papyri Unpacking cuneiform tablets wrapped in clay envelopes and clay tokens Biomedical Both in vitro and in vivo small animal imaging Neurons Human skin samples Bone samples, including teeth, ranging in size from rodents to human biopsies Lung imaging using respiratory gating Cardiovascular imaging using cardiac gating Imaging of the human eye, ocular microstructures and tumors Tumor imaging (may require contrast agents) Soft tissue imaging Insects – Insect development Parasitology – migration of parasites, parasite morphology Tablet consistency checks Developmental biology Tracing the development of the extinct Tasmanian tiger during growth in the pouch Model and non-model organisms (elephants, zebrafish, and whales) Electronics Small electronic components. E.g. DRAM IC in plastic case. Microdevices Spray nozzle Composite materials and metallic foams Ceramics and Ceramic–Metal composites. Microstructural analysis and failure investigation Composite material with glass fibers 10 to 12 micrometres in diameter Polymers, plastics Plastic foam Diamonds Detecting defects in a diamond and finding the best way to cut it. Food and seeds 3-D imaging of foods Analysing heat and drought stress on food crops Bubble detection in squeaky cheese Wood and paper Piece of wood to visualize year periodicity and cell structure Building materials Concrete after loading Geology In geology it is used to analyze micro pores in the reservoir rocks, it can used in microfacies analysis for sequence stratigraphy. In petroleum exploration it is used to model the petroleum flow under micro pores and nano particles. It can give a resolution up to 1 nm. Sandstone Porosity and flow studies Fossils Vertebrates Invertebrates Microfossils Benthonic foraminifers Palaeography Digitally unfolding letters of correspondence which employed letterlocking. 
Space Locating stardust-like particles in aerogel using X-ray techniques Samples returned from asteroid 25143 Itokawa by the Hayabusa mission Stereo images Visualizing with blue and green or blue filters to see depth Others Cigarettes Social insect nests See also Synchrotron Mind uploading References External links MicroComputed Tomography: Methodology and Applications Synchrotron and non synchrotron X-ray microtomography threedimensional representation of bone ingrowth in calcium phosphate biomaterials Microfocus X-ray Computer Tomography in Materials Research Locating Stardust-like particles in aerogel using x-ray techniques Use of micro CT to study kidney stones Use of micro CT in ophthalmology Application of the Gatan X-ray Ultramicroscope (XuM) to the Investigation of Material and Biological Samples 3D Synchrotron X-ray microtomography of paint samples Medical imaging Materials science Microtomography Microtechnology X-ray computed tomography Articles containing video clips
X-ray microtomography
[ "Physics", "Chemistry", "Materials_science", "Technology", "Engineering" ]
1,635
[ "Applied and interdisciplinary physics", "Microtechnology", "Materials science", "Measuring instruments", "Microscopes", "nan", "Microscopy" ]
2,181,834
https://en.wikipedia.org/wiki/Aromatic-ring-hydroxylating%20dioxygenases
Aromatic-ring-hydroxylating dioxygenases (ARHD) incorporate two atoms of dioxygen (O2) into their substrates in the dihydroxylation reaction. The product is (substituted) cis-1,2-dihydroxycyclohexadiene, which is subsequently converted to (substituted) benzene glycol by a cis-diol dehydrogenase. A large family of multicomponent mononuclear (non-heme) iron oxygenases has been identified. Components of bacterial aromatic-ring dioxygenases constitute two different functional classes: hydroxylase components and electron transfer components. Hydroxylase components are either (αβ)n or (α)n oligomers. Two prosthetic groups, a Rieske-type [Fe2S2] center and a mononuclear iron, are associated with the α-subunit in the (αβ)n-type enzymes. Electron transfer components are composed of flavoprotein (NADH:ferredoxin oxidoreductase) and Rieske-type [Fe2S2] ferredoxin. In benzoate and toluate 1,2-dioxygenase systems, a single protein containing reductase and Rieske-type ferredoxin domains transfers the electrons from NADH to the hydroxylase component. In the phthalate 4,5-dioxygenase system, phthalate dioxygenase reductase (PDR) has the same function. PDR is a single protein comprising FMN-binding reductase and plant-type ferredoxin domains. Thus, the electron transfer in ARHD systems can be summarised as: Biochemical classification benzene 1,2-dioxygenase benzene + NADH + H+ + O2 = cis-cyclohexa-3,5-diene-1,2-diol + NAD+ phthalate 4,5-dioxygenase phthalate + NADH + H+ + O2 = cis-4,5-dihydroxycyclohexa-1(6),2-diene-1,2-dicarboxylate + NAD+ 4-sulfobenzoate 3,4-dioxygenase 4-sulfobenzoate + NADH + H+ + O2 = 3,4-dihydroxybenzoate + sulfite + NAD+ 4-chlorophenylacetate 3,4-dioxygenase 4-chlorophenylacetate + NADH + H+ + O2 = 3,4-dihydroxyphenylacetate + chloride + NAD+ benzoate 1,2-dioxygenase benzoate + NADH + H+ + O2 = 1,2-dihydroxycyclohexa-3,5-diene-1-carboxylate + NAD+ toluene dioxygenase toluene + NADH + H+ + O2 = (1S,2R)-3-methylcyclohexa-3,5-diene-1,2-diol + NAD+ naphthalene 1,2-dioxygenase naphthalene + NADH + H+ + O2 = (1R,2S)-1,2-dihydronaphthalene-1,2-diol + NAD+ terephthalate 1,2-dioxygenase terephthalate + NADH + H+ + O2 = (1R,6S)-dihydroxycyclohexa-2,4-diene-1,4-dicarboxylate + NAD+ biphenyl 2,3-dioxygenase biphenyl + NADH + H+ + O2 = (1S,2R)-3-phenylcyclohexa-3,5-diene-1,2-diol + NAD+ Structure The crystal structure of the hydroxylase component of naphthalene 1,2-dioxygenase from Pseudomonas has been determined. The protein is an (αβ)3 hexamer. The β-subunit belongs to the α+β class. It has no prosthetic groups and its role in catalysis is unknown. The α-subunit can be divided into two domains: a Rieske domain that contains the [Fe2S2] center and the catalytic domain that contains the active site mononuclear iron. The Rieske domain (residues 38-158) consists of four β-sheets. The overall fold is very similar to that of the soluble fragment of the Rieske protein from bovine heart mitochondrial cytochrome bc1 complex. In the [Fe2S2] center, Fe1 is coordinated by two cysteine residues (Cys-81 and Cys-101) while Fe2 is coordinated by Nδ atoms of two histidine residues (His-83 and His-104). The catalytic domain belongs to the α+β class and is dominated by a nine-stranded antiparallel β-sheet. The iron of the active site is located at the bottom of a narrow channel, approximately 15 Å from the protein surface. The mononuclear iron is coordinated by His-208, His-213, Asp-362 (bidentate) and a water molecule. The geometry can be described as a distorted octahedral with one ligand missing. 
The structure of the hexamer suggests cooperativity between adjacent α-subunits, where electrons from the [Fe2S2] center in one α-subunit (A) are transferred to the mononuclear iron in the adjacent α-subunit (B) through AspB-205, which is hydrogen-bonded to HisA-104 of the Rieske center and HisB-208 of the active site. References External links - structure of naphthalene 1,2-dioxygenase from Pseudomonas putida - structure of biphenyl 2,3-dioxygenase from Rhodococcus sp. strain RHA1 - InterPro entry for Bacterial ring hydroxylating dioxygenase, alpha subunit Metalloproteins
Aromatic-ring-hydroxylating dioxygenases
[ "Chemistry" ]
1,352
[ "Metalloproteins", "Bioinorganic chemistry" ]
2,182,059
https://en.wikipedia.org/wiki/Z%C3%B6llner%20illusion
The Zöllner illusion is an optical illusion named after its discoverer, German astrophysicist Johann Karl Friedrich Zöllner. In 1860, Zöllner sent his discovery in a letter to physicist and scholar Johann Christian Poggendorff, editor of Annalen der Physik und Chemie, who subsequently discovered the related Poggendorff illusion in Zöllner's original drawing. One depiction of the illusion consists of a series of parallel, black diagonal lines which are crossed with short, repeating lines, the direction of the crossing lines alternating between horizontal and vertical. This creates the illusion that the black lines are not parallel. The shorter lines are on an angle to the longer lines, and this angle helps to create the impression that one end of the longer lines is nearer to the viewer than the other end. This is similar to the way the Wundt illusion appears. It may be that the Zöllner illusion is caused by this impression of depth. This illusion is similar to the Hering illusion, Poggendorff illusion, Müller-Lyer illusion, and Café wall illusion. All these illusions demonstrate how lines can seem to be distorted by their background. References External links A demonstration of the Zöllner illusion that allows for adjusting the angle of the shorter lines Optical illusions
Zöllner illusion
[ "Physics" ]
265
[ "Optical phenomena", "Physical phenomena", "Optical illusions" ]
2,182,188
https://en.wikipedia.org/wiki/Air-operated%20valve
An air-operated valve, also known as a pneumatic valve, is a type of power-operated pipe valve that uses air pressure to perform a function similar to that of a solenoid valve. The difference between an air-operated (pneumatic) valve and a solenoid valve is the actuating element: air-operated valves use air pressure, while solenoid valves use electricity. As air pressure is increased, the compressed air starts to push against the piston or diaphragm walls, which causes the valve to actuate. Whether the valve opens or closes depends on the application. These valves are used for many functions in pneumatic systems, but most often serve one of two functions. The first activates a part of the system when a specific pressure is reached. The second prevents damage by maintaining a constant pressure or flow rate inside a system, or releasing pressure when it reaches excessive levels. Types Air-operated valves may be 2-way, 3-way and 4-way. 2-way valves can be either normally closed or normally open. These valves have two ports through which they regulate the flow of air into a system, and they often provide a simple on-off function. 3-way valves can be normally closed, normally open, or offer a universal function where gas can be diverted through a third opening to move the valve into the normally closed or normally open position. 3-way valves pressurize and exhaust one outlet port to control a single-acting cylinder or pilot another valve. Three-way valves may be used in pairs to operate a double-acting cylinder, thus replacing a four-way valve. A primary benefit of the 3-way valve is that it conserves compressed air in high-cycle applications. 4-way valves are used for systems that require higher air pressure. Four-way valves are the most commonly used components for directional control in a pneumatic system. The 4-way valve can have four or five ports, with different positions and uses. Their most common function is to regulate the motion of a cylinder, motor, or other power components. Applications Pneumatic systems support production lines, mechanical clamps, train doors, and many other parts of industrial businesses. See also Control valve Solenoid valve References Valves
Air-operated valve
[ "Physics", "Chemistry" ]
464
[ "Physical systems", "Valves", "Hydraulics", "Piping" ]
2,182,298
https://en.wikipedia.org/wiki/Sander%20illusion
The Sander illusion or Sander's parallelogram is an optical illusion described by the German psychologist Friedrich Sander (1889–1971) in 1926. However, it had been published earlier by Matthew Luckiesh in his 1922 book Visual Illusions: Their Causes, Characteristics, and Applications . The diagonal line bisecting the larger, left-hand parallelogram appears to be considerably longer than the diagonal line bisecting the smaller, right-hand parallelogram, but it is the same length. One possible reason for this illusion is that the diagonal lines around the blue lines give a perception of depth, and when the blue lines are included in that depth, they are perceived as different lengths. References Optical illusions
Sander illusion
[ "Physics" ]
147
[ "Optical phenomena", "Physical phenomena", "Optical illusions" ]
2,182,309
https://en.wikipedia.org/wiki/Humphry%20Bowen
Humphry John Moule Bowen (22 June 1929 – 9 August 2001) was a British botanist and chemist. Early life and education Bowen was born in Oxford, son of the chemist Edmund Bowen and Edith Bowen (née Moule). He attended the Dragon School, gaining a scholarship to Rugby School and then a demyship to Magdalen College, Oxford. He won the Gibbs Prize in 1949 and completed a DPhil in chemistry at Oxford University in 1953 before starting his professional career as a chemist. Bowen was also a proficient amateur actor in his early years, appearing with a young Ronnie Barker at Oxford. Research career His first post was with the Atomic Energy Research Establishment (AERE) near the village of Harwell where he lived, working at the Wantage Research Laboratory, then in Berkshire. His early work started an interest in radioisotopes and trace elements that he maintained throughout his working life. While at AERE, he spent several months in 1956 attending the British nuclear tests at Maralinga in Australia to study the environmental effects of radiation. Bowen realized that the calibration of different instruments intended to measure trace elements was an important issue that needed addressing. His solution was to produce a good supply of a material which later became known as Bowen's Kale. This was a dried, crushed homogenate of the plant kale that was stable and consistent enough to be distributed as a research calibration standard, probably the first successful example of such a standard. In 1964, he was appointed as a lecturer in the chemistry department at the University of Reading. Later he was promoted to Reader in analytical chemistry in 1974. At Reading, Bowen undertook consultancy for Dunlop, investigating potential uses for their products. When the Torrey Canyon oil disaster occurred in 1967, he realized that it might be possible to use foam booms to block the oil from spreading in the English Channel. His original experiments were conducted in a small bucket in his laboratory. Although not entirely successful at the time due to the rough seas, this lateral thinking combined his interest in chemistry with his love of nature, and the approach has since been effectively deployed to protect ports and harbours against encroaching oil slicks. Bowen wrote a number of professional books in the field of chemistry, including two editions of Trace Elements in Biochemistry (1966 and 1976). In 1968, Bowen noted that the paint used for yellow line road markings can contain chromate pigment, which may cause urban pollution as it deteriorates. He pointed out that hexavalent chromium in dust can cause dermatitis and ulceration of the skin, inflammation of the nasal mucosa and larynx, and lung cancer. From 1951 onwards, Bowen was a long-serving member of the Botanical Society of the British Isles (BSBI). He was meetings secretary for a period and the official recorder of plants for the counties of Berkshire and Dorset, producing Floras for both counties. He retired to Winterborne Kingston in Dorset at the end of his life. He was also one of the leading contributors of botanical data for the Flora of Oxfordshire. He acted as an expert botanical guide on tours around Europe, especially Greece and Turkey. Humphry Bowen donated a large collection of lichens from Berkshire and Oxfordshire to the Museum of Reading in the 1970s. 
He established the Bowen Cup at the University of Reading in 1988, an annual prize for the student in the Department of Chemistry at the University who achieves the top marks in Part II Analytical Chemistry. See also Bowen's son, Jonathan Bowen, a computer scientist. George Claridge Druce, the Victorian botanist who also wrote floras for more than one county. Tottles. Bibliography H. J. M. Bowen, Trace Elements in Biochemistry. Academic Press, 1966. H. J. M. Bowen, Properties of Solids and their Structures. McGraw-Hill, 1967. H. J. M. Bowen, Environmental Chemistry of the Elements. Academic Press, 1979. . References External links 1929 births 2001 deaths Scientists from Oxford People educated at The Dragon School People educated at Rugby School Alumni of Magdalen College, Oxford English nature writers English botanists English chemists English science writers Analytical chemists Tour guides Academics of the University of Reading
Humphry Bowen
[ "Chemistry" ]
861
[ "Analytical chemists" ]
2,182,370
https://en.wikipedia.org/wiki/Formic%20acid%20%28data%20page%29
This page provides supplementary chemical data on formic acid. Material Safety Data Sheet The handling of this chemical may require notable safety precautions. It is highly recommended that you seek the Material Safety Data Sheet (MSDS) for this chemical from a reliable source and follow its directions. MSDS from FLUKA in the SDSdata.org database Science Stuff Structure and properties Thermodynamic properties Vapor pressure of liquid Table data obtained from CRC Handbook of Chemistry and Physics, 44th ed. The "(s)" notation indicates temperature of solid/vapor equilibrium. Otherwise the data refer to the temperature of liquid/vapor equilibrium. Distillation data Spectral data Safety data References Chemical data pages Chemical data pages cleanup
Formic acid (data page)
[ "Chemistry" ]
143
[ "Chemical data pages", "nan" ]
2,182,690
https://en.wikipedia.org/wiki/Chorda%20tympani
Chorda tympani is a branch of the facial nerve that carries gustatory (taste) sensory innervation from the front of the tongue and parasympathetic (secretomotor) innervation to the submandibular and sublingual salivary glands. Chorda tympani has a complex course from the brainstem, through the temporal bone and middle ear, into the infratemporal fossa, and ending in the oral cavity. Structure Chorda tympani fibers emerge from the pons of the brainstem as part of the intermediate nerve of the facial nerve. The facial nerve exits the cranial cavity through the internal acoustic meatus and enters the facial canal. Within the facial canal, chorda tympani branches off the facial nerve and enters the lateral wall of the tympanic cavity within the middle ear, where it runs across the tympanic membrane (from posterior to anterior) and medial to the neck of the malleus. Chorda tympani then exits the skull by descending through the petrotympanic fissure into the infratemporal fossa. Here it joins the lingual nerve, a branch of the mandibular nerve (CN V3). Traveling with the lingual nerve, the fibers of chorda tympani enter the sublingual space to reach the anterior 2/3 of the tongue and submandibular ganglion. The special sensory fibers originate from the taste buds in the anterior 2/3 of the tongue and carry taste information to the nucleus of solitary tract of the brainstem, where taste information from facial, glossopharyngeal, and vagus nerves is integrated. The preganglionic parasympathetic fibers originate in the superior salivary nucleus of the brainstem and project to the submandibular ganglion to synapse with postganglionic fibers which go on to innervate the submandibular and sublingual salivary glands. Function The chorda tympani carries two types of nerve fibers from their origin from the facial nerve to the lingual nerve that carries them to their destinations: Special sensory fibers providing taste sensation from the anterior two-thirds of the tongue. Preganglionic parasympathetic fibers to the submandibular ganglion, providing secretomotor innervation to two salivary glands: the submandibular gland and sublingual gland and to the vessels of the tongue, which when stimulated, cause a dilation of blood vessels of the tongue. Taste The chorda tympani is one of three cranial nerves that are involved in taste. The taste system involves a complicated feedback loop, with each nerve acting to inhibit the signals of other nerves. There are similarities between the tastes the chorda tympani picks up in sweeteners between mice and primates, but not rats. Relating research results to humans is therefore not always consistent. Sodium chloride is detected and recognized most by the chorda tympani nerve. The recognition and responses to sodium chloride in the chorda tympani is mediated by amiloride-sensitive sodium channels. The chorda tympani has a relatively low response to quinine and varied responses to hydrochloride. The chorda tympani is less responsive to sucrose than is the greater petrosal nerve. Chorda tympani transection The chorda tympani nerve carries its information to the nucleus of solitary tract, and shares this area with the greater petrosal, glossopharyngeal, and vagus nerves. When the greater petrosal and glossopharyngeal nerves are cut, regardless of age, the chorda tympani nerve takes over the space in the terminal field. This takeover of space by the chorda tympani is believed to be the nerve reverting to its original state before competition and pruning. 
The chorda tympani, as part of the peripheral nervous system, is not as plastic in early ages. In a study done by Hosley et al. and a study done by Sollars, it has been shown that when the nerve is cut at a young age, the related taste buds are not likely to grow back to full strength. In a bilateral transection of the chorda tympani in mice, the preference for sodium chloride increases compared to before the transection. Also avoidance of higher concentrations of sodium chloride is eliminated. The amiloride-sensitive channels responsible for salt recognition and response is functional in adult rats but not neonatal rats. This explains part of the change in preference of sodium chloride after a chorda tympani transection. The chorda tympani innervates the fungiform papillae on the tongue. According to a study done by Sollars et al. in 2002, when the chorda tympani has been transected early in postnatal development some of the fungiform papillae undergo a structural change to become more “filiform-like”. When some of the other papillae grow back, they do so without a pore. Dysfunction Injury to the chorda tympani nerve leads to loss or distortion of taste from anterior 2/3 of tongue. However, taste from the posterior 1/3 of tongue (supplied by the glossopharyngeal nerve) remains intact. The chorda tympani appears to exert a particularly strong inhibitory influence on other taste nerves, as well as on pain fibers in the tongue. When the chorda tympani is damaged, its inhibitory function is disrupted, leading to less inhibited activity in the other nerves. Additional images References External links () Photo at Washington University in St. Louis Facial nerve Ear Cranial nerves Otorhinolaryngology Nervous system
Chorda tympani
[ "Biology" ]
1,190
[ "Organ systems", "Nervous system" ]
2,182,714
https://en.wikipedia.org/wiki/Pregnenolone
Pregnenolone (P5), or pregn-5-en-3β-ol-20-one, is an endogenous steroid and precursor/metabolic intermediate in the biosynthesis of most of the steroid hormones, including the progestogens, androgens, estrogens, glucocorticoids, and mineralocorticoids. In addition, pregnenolone is biologically active in its own right, acting as a neurosteroid. In addition to its role as a natural hormone, pregnenolone has been used as a medication and supplement; for information on pregnenolone as a medication or supplement, see the pregnenolone (medication) article. Biological function Pregnenolone and its 3β-sulfate, pregnenolone sulfate, like dehydroepiandrosterone (DHEA), DHEA sulfate, and progesterone, belong to the group of neurosteroids that are found in high concentrations in certain areas of the brain, and are synthesized there. Neurosteroids affect synaptic functioning, are neuroprotective, and enhance myelinization. Pregnenolone and its sulfate ester may improve cognitive and memory function. In addition, they may have protective effects against schizophrenia. Biological activity Neurosteroid activity Pregnenolone is an allosteric endocannabinoid, as it is a negative allosteric modulator of the CB1 receptor. Pregnenolone is involved in a natural negative feedback loop against CB1 receptor activation in animals. It prevents CB1 receptor agonists like tetrahydrocannabinol, the main active constituent in cannabis, from fully activating the CB1. A related compound AEF0117 has been derived from pregnenolone and is more specific for this type of activity. Pregnenolone has been found to bind with high, nanomolar affinity to microtubule-associated protein 2 (MAP2) in the brain. In contrast to pregnenolone, pregnenolone sulfate did not bind to microtubules. However, progesterone did and with similar affinity to pregnenolone, although unlike pregnenolone, it did not increase binding of MAP2 to tubulin. Pregnenolone was found to induce tubule polymerization in neuronal cultures and to increase neurite growth in PC12 cells treated with nerve growth factor. As such, pregnenolone may control formation and stabilization of microtubules in neurons and may affect both neural development during prenatal development and neural plasticity during aging. Although pregnenolone itself does not possess these activities, its metabolite pregnenolone sulfate is a negative allosteric modulator of the GABAA receptor as well as a positive allosteric modulator of the NMDA receptor. In addition, pregnenolone sulfate has been shown to activate the transient receptor potential M3 (TRPM3) ion channel in hepatocytes and pancreatic islets causing calcium entry and subsequent insulin release. Nuclear receptor activity Pregnenolone has been found to act as an agonist of the pregnane X receptor. Pregnenolone has no progestogenic, corticosteroid, estrogenic, androgenic, or antiandrogenic activity. Biochemistry Biosynthesis Pregnenolone is synthesized from cholesterol. This conversion involves hydroxylation of the side chain at the C20 and C22 positions, with cleavage of the side chain. The enzyme performing this task is cytochrome P450scc, located in the mitochondria, and controlled by anterior pituitary trophic hormones, such as adrenocorticotropic hormone, follicle-stimulating hormone, and luteinizing hormone, in the adrenal glands and gonads. 
There are two intermediates in the transformation of cholesterol into pregnenolone, 22R-hydroxycholesterol and 20α,22R-dihydroxycholesterol, and all three steps in the transformation are catalyzed by P450scc. Pregnenolone is produced mainly in the adrenal glands, the gonads, and the brain. Although pregnenolone is also produced in the gonads and brain, most circulating pregnenolone is derived from the adrenal cortex. To assay conversion of cholesterol to pregnenolone, radiolabeled cholesterol has been used. Pregnenolone product can be separated from cholesterol substrate using Sephadex LH-20 minicolumns. Distribution Pregnenolone is lipophilic and readily crosses the blood–brain barrier. This is in contrast to pregnenolone sulfate, which does not cross the blood–brain barrier. Metabolism Pregnenolone undergoes further steroid metabolism in one of several ways: Pregnenolone can be converted into progesterone. The critical enzyme step is two-fold using a 3β-hydroxysteroid dehydrogenase and a Δ5-4 isomerase. The latter transfers the double bond from C5 to C4 on the A ring. Progesterone is the entry into the Δ4 pathway, resulting in production of 17α-hydroxyprogesterone and androstenedione, precursor to testosterone and estrone. Aldosterone and corticosteroids are also derived from progesterone or its derivatives. Pregnenolone can be converted to 17α-hydroxypregnenolone by the enzyme 17α-hydroxylase (CYP17A1). Using this pathway, termed Δ5 pathway, the next step is conversion to dehydroepiandrosterone (DHEA) via 17,20-lyase (CYP17A1). DHEA is the precursor of androstenedione. Pregnenolone can be converted to androstadienol by 16-ene synthase (CYP17A1). Pregnenolone can be converted to pregnenolone sulfate by steroid sulfotransferase, and this conversion can be reversed by steroid sulfatase. Levels Normal circulating levels of pregnenolone are as follows: Men: 10 to 200 ng/dL Women: 10 to 230 ng/dL Children: 10 to 48 ng/dL Adolescent boys: 10 to 50 ng/dL Adolescent girls: 15 to 84 ng/dL Mean levels of pregnenolone have been found not to significantly differ in postmenopausal women and elderly men (40 and 39 ng/dL, respectively). Studies have found that pregnenolone levels are not significantly changed after surgical or medical castration in men, which is in accordance with the fact that pregnenolone is mainly derived from the adrenal glands. Conversely, medical castration has been found to partially suppress pregnenolone levels in premenopausal women. Similarly, an adrenalectomized premenopausal woman showed incompletely diminished circulating pregnenolone levels. Chemistry Pregnenolone is also known chemically as pregn-5-en-3β-ol-20-one. Like other steroids, it consists of four interconnected cyclic hydrocarbons. The compound contains ketone and hydroxyl functional groups, two methyl branches, and a double bond at C5, in the B cyclic hydrocarbon ring. Like many steroid hormones, it is hydrophobic. The sulfated derivative, pregnenolone sulfate, is water-soluble. 3β-Dihydroprogesterone (pregn-4-en-3β-ol-20-one) is an isomer of pregnenolone in which the C5 double bond has been replaced with a C4 double bond. History Pregnenolone was first synthesized by Adolf Butenandt and colleagues in 1934. References Sterols CB1 receptor antagonists Glycine receptor agonists Ketones Neurosteroids Pregnane X receptor agonists Pregnanes Sigma agonists Steroid hormones CB1 receptor negative allosteric modulators
Pregnenolone
[ "Chemistry" ]
1,711
[ "Ketones", "Functional groups" ]
2,182,716
https://en.wikipedia.org/wiki/Fourier-transform%20ion%20cyclotron%20resonance
Fourier-transform ion cyclotron resonance mass spectrometry is a type of mass analyzer (or mass spectrometer) for determining the mass-to-charge ratio (m/z) of ions based on the cyclotron frequency of the ions in a fixed magnetic field. The ions are trapped in a Penning trap (a magnetic field with electric trapping plates), where they are excited (at their resonant cyclotron frequencies) to a larger cyclotron radius by an oscillating electric field orthogonal to the magnetic field. After the excitation field is removed, the ions are rotating at their cyclotron frequency in phase (as a "packet" of ions). These ions induce a charge (detected as an image current) on a pair of electrodes as the packets of ions pass close to them. The resulting signal is called a free induction decay (FID), transient or interferogram, and consists of a superposition of sine waves. The useful signal is extracted from this data by performing a Fourier transform to give a mass spectrum. History FT-ICR was invented by Melvin B. Comisarow and Alan G. Marshall at the University of British Columbia. The first paper appeared in Chemical Physics Letters in 1974. The inspiration was earlier developments in conventional ICR and Fourier-transform nuclear magnetic resonance (FT-NMR) spectrometry. Marshall has continued to develop the technique at The Ohio State University and Florida State University. Theory The physics of FTICR is similar to that of a cyclotron at least in the first approximation. In the simplest idealized form, the relationship between the cyclotron frequency and the mass-to-charge ratio is given by f = qB / (2πm), where f = cyclotron frequency, q = ion charge, B = magnetic field strength and m = ion mass. This is more often represented as an angular frequency, ω = qB / m, where ω is the angular cyclotron frequency, which is related to the frequency by the definition ω = 2πf. Because of the quadrupolar electrical field used to trap the ions in the axial direction, this relationship is only approximate. The axial electrical trapping results in axial oscillations within the trap with the (angular) frequency ω_z = √(qk/m), where k is a constant similar to the spring constant of a harmonic oscillator and is dependent on applied voltage, trap dimensions and trap geometry. The electric field and the resulting axial harmonic motion reduce the cyclotron frequency and introduce a second radial motion called magnetron motion that occurs at the magnetron frequency. The cyclotron motion is still the frequency being used, but the relationship above is not exact due to this phenomenon. The natural angular frequencies of motion are ω± = (ωc / 2) ± √((ωc / 2)² − (ω_z² / 2)) (with ωc = qB / m the unperturbed cyclotron frequency from above), where ω_z is the axial trapping frequency due to the axial electrical trapping, ω+ is the reduced cyclotron (angular) frequency and ω− is the magnetron (angular) frequency. Again, ω+ is what is typically measured in FTICR. The meaning of this equation can be understood qualitatively by considering the case where ω_z is small, which is generally true. In that case the value of the radical is just slightly less than ωc / 2, and the value of ω+ is just slightly less than ωc (the cyclotron frequency has been slightly reduced). For ω− the value of the radical is the same (slightly less than ωc / 2), but it is being subtracted from ωc / 2, resulting in a small number equal to ωc − ω+ (i.e. the amount that the cyclotron frequency was reduced by). Instrumentation FTICR-MS differs significantly from other mass spectrometry techniques in that the ions are not detected by hitting a detector such as an electron multiplier but only by passing near detection plates. 
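As a quick numerical illustration of the idealized relationship above (ignoring the trapping-field corrections), the following sketch converts an observed cyclotron frequency into a mass-to-charge ratio. The field strength and frequency are made-up illustrative values, not taken from any particular instrument.

import math

E_CHARGE = 1.602176634e-19      # elementary charge in coulombs
DALTON = 1.66053906660e-27      # unified atomic mass unit in kilograms

B = 7.0                         # magnetic field strength in tesla (illustrative value)
f = 1.07e6                      # observed cyclotron frequency in hertz (illustrative value)

# Idealized relation f = qB / (2*pi*m), rearranged to m/q = B / (2*pi*f).
m_over_q = B / (2 * math.pi * f)                  # kilograms per coulomb

# Convert to m/z in daltons per elementary charge, the unit used on mass spectra.
m_over_z = m_over_q * E_CHARGE / DALTON
print(f"m/z is approximately {m_over_z:.1f} Da")  # roughly 100 Da for these values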
Additionally the masses are not resolved in space or time as with other techniques but only by the ion cyclotron resonance (rotational) frequency that each ion produces as it rotates in a magnetic field. Thus, the different ions are not detected in different places as with sector instruments or at different times as with time-of-flight instruments, but all ions are detected simultaneously during the detection interval. This provides an increase in the observed signal-to-noise ratio owing to the principles of Fellgett's advantage. In FTICR-MS, resolution can be improved either by increasing the strength of the magnet (in teslas) or by increasing the detection duration. Cells A review of different cell geometries with their specific electric configurations is available in the literature. However, ICR cells can belong to one of the following two categories: closed cells or open cells. Several closed ICR cells with different geometries were fabricated and their performance has been characterized. Grids were used as end caps to apply an axial electric field for trapping ions axially (parallel to the magnetic field lines). Ions can be either generated inside the cell or can be injected to the cell from an external ionization source. Nested ICR cells with double pair of grids were also fabricated to trap both positive and negative ions simultaneously. The most common open cell geometry is a cylinder, which is axially segmented to produce electrodes in the shape of a ring. The central ring electrode is commonly used for applying radial excitation electric field and detection. DC electric voltage is applied on the terminal ring electrodes to trap ions along the magnetic field lines. Open cylindrical cells with ring electrodes of different diameters have also been designed. They proved not only capable in trapping and detecting both ion polarities simultaneously, but also they succeeded to separate positive from negative ions radially. This presented a large discrimination in kinetic ion acceleration between positive and negative ions trapped simultaneously inside the new cell. Several ion axial acceleration schemes were recently written for ion–ion collision studies. Stored-waveform inverse Fourier transform Stored-waveform inverse Fourier transform (SWIFT) is a method for the creation of excitation waveforms for FTMS. The time-domain excitation waveform is formed from the inverse Fourier transform of the appropriate frequency-domain excitation spectrum, which is chosen to excite the resonance frequencies of selected ions. The SWIFT procedure can be used to select ions for tandem mass spectrometry experiments. Applications Fourier-transform ion cyclotron resonance (FTICR) mass spectrometry is a high-resolution technique that can be used to determine masses with high accuracy. Many applications of FTICR-MS use this mass accuracy to help determine the composition of molecules based on accurate mass. This is possible due to the mass defect of the elements. FTICR-MS is able to achieve higher levels of mass accuracy than other forms of mass spectrometer, in part, because a superconducting magnet is much more stable than radio-frequency (RF) voltage. Another place that FTICR-MS is useful is in dealing with complex mixtures, such as biomass or waste liquefaction products, since the resolution (narrow peak width) allows the signals of two ions with similar mass-to-charge ratios (m/z) to be detected as distinct ions. 
This high resolution is also useful in studying large macromolecules such as proteins with multiple charges, which can be produced by electrospray ionization. For example, attomole level of detection of two peptides has been reported. These large molecules contain a distribution of isotopes that produce a series of isotopic peaks. Because the isotopic peaks are close to each other on the m/z axis, due to the multiple charges, the high resolving power of the FTICR is extremely useful. FTICR-MS is very useful in other studies of proteomics as well. It achieves exceptional resolution in both top-down and bottom-up proteomics. Electron-capture dissociation (ECD), collisional-induced dissociation (CID), and infrared multiphoton dissociation (IRMPD) are all utilized to produce fragment spectra in tandem mass spectrometry experiments. Although CID and IRMPD use vibrational excitation to further dissociate peptides by breaking the backbone amide linkages, which are typically low in energy and weak, CID and IRMPD may also cause dissociation of post-translational modifications. ECD, on the other hand, allows specific modifications to be preserved. This is quite useful in analyzing phosphorylation states, O- or N-linked glycosylation, and sulfating. References External links What's in an Oil Drop? An Introduction to Fourier Transform Ion Cyclotron Resonance (FT-ICR) for Non-scientists National High Magnetic Field Laboratory Scottish Instrumentation Resource Centre for Advanced Mass Spectrometry Fourier-transform Ion Cyclotron Resonance (FT-ICR) FT-ICR Introduction University of Bristol Mass spectrometry Measuring instruments
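As a numerical illustration of the idealized cyclotron-frequency relationship from the Theory section above (f = qB/(2πm), ignoring the small correction from the axial trapping field), here is a short Python sketch. It is not part of the original article; the magnetic field strength and ion mass used are arbitrary example values.

```python
# Illustrative sketch only: idealized FT-ICR relation f = qB / (2*pi*m),
# neglecting the reduced-cyclotron correction caused by the trapping field.
import math

ELEMENTARY_CHARGE = 1.602176634e-19   # C
ATOMIC_MASS_UNIT = 1.66053906660e-27  # kg

def cyclotron_frequency_hz(mass_da: float, charge_state: int, b_tesla: float) -> float:
    """Unperturbed cyclotron frequency (Hz) for an ion of the given mass (Da) and charge."""
    m_kg = mass_da * ATOMIC_MASS_UNIT
    q = charge_state * ELEMENTARY_CHARGE
    return q * b_tesla / (2 * math.pi * m_kg)

def mass_to_charge_da(frequency_hz: float, b_tesla: float) -> float:
    """Invert the same relation: m/z (Da per elementary charge) from a measured frequency."""
    return ELEMENTARY_CHARGE * b_tesla / (2 * math.pi * frequency_hz * ATOMIC_MASS_UNIT)

# Example: a singly charged ion of 500 Da in a 7 T magnet.
f = cyclotron_frequency_hz(500.0, 1, 7.0)
print(f"cyclotron frequency ≈ {f / 1000:.0f} kHz")          # roughly 215 kHz
print(f"round-trip m/z ≈ {mass_to_charge_da(f, 7.0):.1f}")  # 500.0
```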
Fourier-transform ion cyclotron resonance
[ "Physics", "Chemistry", "Technology", "Engineering" ]
1,808
[ "Spectrum (physical sciences)", "Instrumental analysis", "Mass", "Measuring instruments", "Mass spectrometry", "Matter" ]
2,182,805
https://en.wikipedia.org/wiki/Periodinane
Periodinanes, also known as λ5-iodanes, are organoiodine compounds with iodine in the +5 oxidation state. These compounds are described as hypervalent because the iodine center has more than 8 valence electrons. Periodinane compounds The λ5-iodanes such as the Dess-Martin periodinane have square pyramidal geometry with 4 heteroatoms in basal positions and one apical phenyl group. Iodoxybenzene or iodylbenzene, C6H5IO2, is a known oxidizing agent. Dess-Martin periodinane (1983) is another powerful oxidant and an improvement of the IBX acid, which was already in existence in 1983. The IBX acid is prepared from 2-iodobenzoic acid, potassium bromate, and sulfuric acid and is insoluble in most solvents, whereas the Dess-Martin reagent, prepared by reaction of the IBX acid with acetic anhydride, is very soluble. The oxidation mechanism ordinarily consists of a ligand exchange reaction followed by a reductive elimination. Uses The predominant use of periodinanes is as oxidizing reagents replacing toxic reagents based on heavy metals. See also Carbonyl oxidation with hypervalent iodine reagents Dess–Martin oxidation References External links Hypervalent Iodine Chemistry Oxidizing agents
Periodinane
[ "Chemistry" ]
283
[ "Redox", "Oxidizing agents" ]
2,182,941
https://en.wikipedia.org/wiki/Silicide
A silicide is a type of chemical compound that combines silicon and a usually more electropositive element. Silicon is more electropositive than carbon. In terms of their physical properties, silicides are structurally closer to borides than to carbides. Because of size differences, however, silicides are not isostructural with borides and carbides. Bonds in silicides range from conductive metal-like structures to covalent or ionic. Silicides of all non-transition metals have been described except beryllium. Silicides are used in interconnects. Structure Silicon atoms in silicides can have many possible organizations: Isolated silicon atoms: electrically conductive (or semiconductive) CrSi, MnSi, FeSi, CoSi Si2 pairs: hafnium and thorium silicides Si4 tetrahedra: KSi, RbSi, CsSi Sin chains: USi, CaSi, SrSi, YSi Planar hexagonal graphite-like Si layers: β-USi2, silicides of other lanthanoids and actinoids Corrugated hexagonal Si layers: CaSi2 Open three-dimensional Si skeletons: SrSi2, ThSi2, α-USi2 Preparation and reactivity Most silicides are produced by direct combination of the elements. A silicide prepared by a self-aligned process is called a salicide. This is a process in which silicide contacts are formed only in those areas in which deposited metal (which after annealing becomes a metal component of the silicide) is in direct contact with silicon; hence, the process is self-aligned. It is commonly implemented in MOS/CMOS processes for ohmic contacts of the source, drain, and poly-Si gate. Alkali and alkaline earth metals Group 1 and 2 silicides, e.g. Na2Si and Ca2Si, react with water, yielding hydrogen and/or silanes. Magnesium silicide reacts with hydrochloric acid to give silane: Mg2Si + 4 HCl → SiH4 + 2 MgCl2 Group 1 silicides are even more reactive. For example, sodium silicide, Na2Si, reacts rapidly with water to yield sodium silicate, Na2SiO3, and hydrogen gas. Rubidium silicide is pyrophoric, igniting in contact with air. Transition metals and other elements The transition metal silicides are usually inert to aqueous solutions. At red heat, they react with potassium hydroxide, fluorine, and chlorine. Mercury, thallium, bismuth, and lead are immiscible with liquid silicon. Applications Silicide thin films have applications in microelectronics due to their high electrical conductivity, thermal stability, corrosion resistance, and compatibility with photolithographic wafer processes. For example, silicides formed over layers of polysilicon, called polycides, are commonly used as an interconnect material in integrated circuits for their high conductivity. Silicides formed through the salicide process also see use as a low work function metal in ohmic and Schottky contacts. High work function metals are often not ideal for use in metal–semiconductor junctions directly due to Fermi-level pinning, where the Schottky barrier potential of the junction becomes locked around 0.7–0.8 V. For this reason low forward-voltage Schottky diodes and ohmic interconnects between a semiconductor and a metal often utilize a thin layer of silicide at the metal–semiconductor interface. List (incomplete) Nickel silicide, NiSi Sodium silicide, NaSi Magnesium silicide, Mg2Si Platinum silicide, PtSi (platinum is actually more electronegative than silicon) Titanium silicide, TiSi2 Tungsten silicide, WSi2 Molybdenum disilicide, MoSi2 Neptunium silicide, NpSi2 See also Binary compounds of silicon References Anions
Silicide
[ "Physics", "Chemistry" ]
867
[ "Ions", "Matter", "Anions" ]
2,183,007
https://en.wikipedia.org/wiki/Startle%20response
In animals, including humans, the startle response is a largely unconscious defensive response to sudden or threatening stimuli, such as sudden noise or sharp movement, and is associated with negative affect. Usually the onset of the startle response is a startle reflex reaction. The startle reflex is a brainstem reflectory reaction (reflex) that serves to protect vulnerable parts, such as the back of the neck (whole-body startle) and the eyes (eyeblink) and facilitates escape from sudden stimuli. It is found across many different species, throughout all stages of life. A variety of responses may occur depending on the affected individual's emotional state, body posture, preparation for execution of a motor task, or other activities. The startle response is implicated in the formation of specific phobias. Startle reflex Neurophysiology A startle reflex can occur in the body through a combination of actions. A reflex from hearing a sudden loud noise will happen in the primary acoustic startle reflex pathway consisting of three main central synapses, or signals that travel through the brain. First, there is a synapse from the auditory nerve fibers in the ear to the cochlear root neurons (CRN). These are the first acoustic neurons of the central nervous system. Studies have shown a direct correlation to the amount of decrease of the startle to the number of CRNs that were killed. Second, there is a synapse from the CRN axons to the cells in the nucleus reticularis pontis caudalis (PnC) of the brain. These are neurons that are located in the pons of the brainstem. A study done to disrupt this portion of the pathway by the injection of PnC inhibitory chemicals has shown a dramatic decrease in the amount of startle by about 80 to 90 percent. Third, a synapse occurs from the PnC axons to the motor neurons in the facial motor nucleus or the spinal cord that will directly or indirectly control the movement of muscles. The activation of the facial motor nucleus causes a jerk of the head while an activation in the spinal cord causes the whole body to startle. During neuromotor examinations of newborns, it is noted that, for a number of techniques, the patterns of the startle reaction and the Moro reflex may significantly overlap, the notable distinction being the absence of arm abduction (spreading) during startle responses. Reflexes There are many various reflexes that can occur simultaneously during a startle response. The fastest reflex recorded in humans happens within the masseter muscle or jaw muscle. The reflex was measured by electromyography which records the electrical activity during movement of the muscles. This also showed the response latency, or the delay between the stimulus and the response recorded, was found to be about 14 milliseconds. The blink of the eye which is the reflex of the orbicularis oculi muscle was found to have a latency of about 20 to 40 milliseconds. Out of larger body parts, the head is quickest in a movement latency in a range from 60 to 120 milliseconds. The neck then moves almost simultaneously with a latency of 75 to 121 milliseconds. Next, the shoulder jerks at 100 to 121 milliseconds along with the arms at 125 to 195 milliseconds. Lastly the legs respond with a latency of 145 to 395 milliseconds. This type of cascading response correlates to how the synapses travel from the brain and down the spinal cord to activate each motor neuron. Acoustic startle reflex The acoustic startle reflex is thought to be caused by an auditory stimulus greater than 80 decibels. 
The reflex is typically measured by electromyography, brain imaging or sometimes positron emission tomography. There are many brain structures and pathways thought to be involved in the reflex. The amygdala, hippocampus, bed nucleus of the stria terminalis (BNST) and anterior cingulate cortex are all thought to play a role in modulating the reflex. The anterior cingulate cortex in the brain is largely thought to be the main area associated with emotional response and awareness, which can contribute to the way an individual reacts to startle-inducing stimuli. Along with the anterior cingulate cortex, the amygdala and the hippocampus are known to have implications in this reflex. The amygdala is known to have a role in the "fight-or-flight response", and the hippocampus functions to form memories of the stimulus and the emotions associated with it. The role of the BNST in the acoustic startle reflex may be attributed to specific areas within the nucleus responsible for stress and anxiety responses. Activation of the BNST by certain hormones is thought to promote a startle response The auditory pathway for this response was largely elucidated in rats in the 1980s. The basic pathway follows the auditory pathway from the ear up to the nucleus of the lateral lemniscus (LLN) from where it activates a motor centre in the reticular formation. This centre sends descending projections to lower motor neurones of the limbs. In slightly more detail this corresponds to ear (cochlea) → cranial nerve VIII (auditory) → cochlear nucleus (ventral/inferior) → LLN → caudal pontine reticular nucleus (PnC). The whole process has a less than 10ms latency. There is no involvement of the superior/rostral or inferior/caudal colliculus in the reaction that "twitches" the hindlimbs, but these may be important for adjustment of pinnae and gaze towards the direction of the sound, or for the associated blink. Application in occupational settings A study undertaken in 2005 by researchers at the Department of Aviation and Logistics, University of Southern Queensland, looked at the performance of aircraft pilots following unexpected critical events. Analysing a number of recent aircraft accidents, the authors identified the negative impact of the startle response as causal or contributory in these accidents. The authors argued that fear resulting from threat, especially if life-threatening, prompted startle effects which had a serious negative impact on pilots' performances. The study considered training strategies to address this, including exposing pilots to unexpected critical events more often, enabling them to improve their responses. See also Startle-evoked movement Escape response Jumping Frenchmen of Maine Jump scare Prepulse inhibition – attenuation of the startle response after a weaker preceding prepulse stimulus Surprise (emotion) References , review Ethology Reflexes Self-defense
Startle response
[ "Biology" ]
1,363
[ "Behavioural sciences", "Ethology", "Behavior" ]
2,183,025
https://en.wikipedia.org/wiki/Aluminide
Aluminides are intermetallic compounds of aluminium. Since aluminium is near the nonmetals on the periodic table, it can bond with metals differently than other metals. The properties of an aluminide are between those of a metal alloy and those of an ionic compound. Aluminides are used as bond coats in thermal barrier coating systems. Examples Magnesium aluminide, MgAl Titanium aluminide, TiAl Iron aluminides, including Fe3Al and FeAl Nickel aluminide, Ni3Al See :Category:Aluminides for a list. References Intermetallics
Aluminide
[ "Physics", "Chemistry", "Materials_science" ]
131
[ "Inorganic compounds", "Metallurgy", "Inorganic compound stubs", "Alloys", "Intermetallics", "Condensed matter physics", "Aluminides" ]
2,183,056
https://en.wikipedia.org/wiki/4-Dimethylaminopyridine
4-Dimethylaminopyridine (DMAP) is a derivative of pyridine with the chemical formula (CH3)2NC5H4N. This white solid is of interest because it is more basic than pyridine, owing to the resonance stabilisation from the NMe2 substituent. Because of its basicity, DMAP is a useful nucleophilic catalyst for a variety of reactions such as esterifications with anhydrides, the Baylis-Hillman reaction, hydrosilylations, tritylation, the Steglich rearrangement, Staudinger synthesis of β-lactams and many more. Chiral DMAP analogues are used in kinetic resolution experiments of mainly secondary alcohols and Evans auxiliary type amides. Preparation DMAP can be prepared in a two-step procedure from pyridine, which is first oxidized to 4-pyridylpyridinium cation. This cation then reacts with dimethylamine: Esterification catalyst In the case of esterification with acetic anhydrides the currently accepted mechanism involves three steps. First, DMAP and acetic anhydride react in a pre-equilibrium reaction to form an ion pair of acetate and the acetylpyridinium ion. In the second step the alcohol adds to the acetylpyridinium, and elimination of pyridine forms an ester. Here the acetate acts as a base to remove the proton from the alcohol as it nucleophilically adds to the activated acylpyridinium. The bond from the acetyl group to the catalyst gets cleaved to generate the catalyst and the ester. The described bond formation and breaking process runs synchronous concerted without the appearance of a tetrahedral intermediate. The acetic acid formed will then protonate the DMAP. In the last step of the catalytic cycle the auxiliary base (usually triethylamine or pyridine) deprotonates the protonated DMAP, reforming the catalyst. The reaction runs through the described nucleophilic reaction pathway irrespective of the anhydride used, but the mechanism changes with the pKa value of the alcohol used. For example, the reaction runs through a base-catalyzed reaction pathway in the case of a phenol. In this case, DMAP acts as a base and deprotonates the phenol, and the resulting phenolate ion adds to the anhydride. Safety DMAP has a relatively high toxicity and is particularly dangerous because of its ability to be absorbed through the skin. It is also corrosive. Related compound 4-Pyrrolidinylpyridine References Further reading Reagents for organic chemistry 4-Aminopyridines Catalysts Dimethylamino compounds
4-Dimethylaminopyridine
[ "Chemistry" ]
608
[ "Catalysis", "Catalysts", "Chemical kinetics", "Reagents for organic chemistry" ]
2,183,504
https://en.wikipedia.org/wiki/ATREX
The ATREX engine (Air Turbo Ramjet Engine with eXpander cycle) developed in Japan is an experimental precooled jet engine that works as a turbojet at low speeds and a ramjet up to Mach 6.0. ATREX uses liquid hydrogen fuel in a fairly exotic single-fan arrangement. The liquid hydrogen fuel is pumped through a heat exchanger in the air-intake, simultaneously heating the liquid hydrogen and cooling the incoming air. This cooling of the incoming air is critical in achieving a reasonable efficiency. The hydrogen then continues through a second heat exchanger positioned after the combustion section, where the hot exhaust is used to further heat the hydrogen, turning it into a very high-pressure gas. This gas is then passed through the tips of the fan, providing driving power to the fan at subsonic speeds. After mixing with the air, the hydrogen is burned in the combustion chamber. Development of this engine was later de-emphasized in favor of the new hypersonic precooled turbojet engine (PCTJ). See also RB545 Skylon References External links Official ATREX project site (English version, archived version, July 9, 2005) Space program of Japan Jet engines
ATREX
[ "Technology" ]
243
[ "Jet engines", "Engines" ]
2,183,554
https://en.wikipedia.org/wiki/DEC%20RADIX%2050
RADIX 50 or RAD50 (also referred to as RADIX50, RADIX-50 or RAD-50) is an uppercase-only character encoding created by Digital Equipment Corporation (DEC) for use on their DECsystem, PDP, and VAX computers. RADIX 50's 40-character repertoire (050 in octal) can encode six characters plus four additional bits into one 36-bit machine word (PDP-6, PDP-10/DECsystem-10, DECSYSTEM-20), three characters plus two additional bits into one 18-bit word (PDP-9, PDP-15), or three characters into one 16-bit word (PDP-11, VAX). The actual encoding differs between the 36-bit and 16-bit systems. 36-bit systems In 36-bit DEC systems RADIX 50 was commonly used in symbol tables for assemblers or compilers which supported six-character symbol names from a 40-character alphabet. This left four bits to encode properties of the symbol. For its similarities to the SQUOZE character encoding scheme used in IBM's SHARE Operating System for representing object code symbols, DEC's variant was also sometimes called DEC Squoze; however, IBM SQUOZE packed six characters of a 50-character alphabet plus two additional flag bits into one 36-bit word. RADIX 50 was not normally used in 36-bit systems for encoding ordinary character strings; file names were normally encoded as six six-bit characters, and full ASCII strings as five seven-bit characters and one unused bit per 36-bit word. 18-bit systems RADIX 50 (also called Radix 50₈ format, i.e., 50 in octal) was used in Digital's 18-bit PDP-9 and PDP-15 computers to store symbols in symbol tables, leaving two extra bits per 18-bit word ("symbol classification bits"). 16-bit systems Some strings in DEC's 16-bit systems were encoded as 8-bit bytes, while others used RADIX 50 (then also called MOD40). In RADIX 50, strings were encoded in successive words as needed, with the first character within each word located in the most significant position. For example, using the PDP-11 encoding, the string "ABCDEF", with character values 1, 2, 3, 4, 5, and 6, would be encoded as a word containing the value 1×40² + 2×40¹ + 3×40⁰ = 1683, followed by a second word containing the value 4×40² + 5×40¹ + 6×40⁰ = 6606. Thus, 16-bit words encoded values ranging from 0 (three spaces) to 63999 ("999"). (A short illustrative encoding sketch in Python follows this article.) When there were fewer than three characters in a word, the last word for the string was padded with trailing spaces. There were several minor variations of this encoding with differing interpretations of the 27, 28, 29 code points. Where RADIX 50 was used for filenames stored on media, the code points represent the , , characters, and will be shown as such when listing the directory with utilities such as DIR. When encoding strings in the PDP-11 assembler and other PDP-11 programming languages the code points represent the , , characters, and are encoded as such with the default RAD50 macro in the global macros file, and this encoding was used in the symbol tables. Some early documentation for the RT-11 operating system considered the code point 29 to be undefined. The use of RADIX 50 was the source of the filename size conventions used by Digital Equipment Corporation PDP-11 operating systems. Using RADIX 50 encoding, six characters of a filename could be stored in two 16-bit words, while three more extension (file type) characters could be stored in a third 16-bit word. Similarly, a three-character device name such as "DL1" could also be stored in a 16-bit word. 
The period that separated the filename and its extension, and the colon separating a device name from a filename, were implied (i.e., they were not stored and were always assumed to be present). See also Base 40 Base conversion Chen–Ho encoding Densely packed decimal (DPD) Hertz encoding Packed BCD Six-bit character code Split octal References Further reading External links https://github.com/turbo/ptt-its/blob/master/doc/info/midas.25 Character encoding Character sets Digital Equipment Corporation
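The following Python sketch illustrates the 16-bit packing arithmetic described above. It is not DEC code: the character table uses the commonly documented RAD50 ordering (space, A–Z, three punctuation code points, then 0–9), and, as the article notes, the interpretation of code points 27–29 varied between systems, so the "$", "." and "%" slots below are an assumption.

```python
# Illustrative sketch of PDP-11 style RADIX 50 packing: three characters per 16-bit word,
# value = c1*40**2 + c2*40 + c3.  The 27-29 code points ("$", ".", "%") are the commonly
# documented ordering; they varied between DEC systems, as noted in the article.
RAD50_CHARS = " ABCDEFGHIJKLMNOPQRSTUVWXYZ$.%0123456789"

def rad50_encode(text: str) -> list[int]:
    """Pack an uppercase string into 16-bit words, padding the last word with spaces."""
    text = text.upper()
    if len(text) % 3:
        text += " " * (3 - len(text) % 3)
    words = []
    for i in range(0, len(text), 3):
        a, b, c = (RAD50_CHARS.index(ch) for ch in text[i:i + 3])
        words.append(a * 40 * 40 + b * 40 + c)
    return words

def rad50_decode(words: list[int]) -> str:
    """Unpack 16-bit words back into their three-character groups."""
    out = []
    for w in words:
        out.append(RAD50_CHARS[w // 1600] + RAD50_CHARS[(w // 40) % 40] + RAD50_CHARS[w % 40])
    return "".join(out)

print(rad50_encode("ABCDEF"))      # [1683, 6606], matching the worked example above
print(rad50_decode([1683, 6606]))  # 'ABCDEF'
```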
DEC RADIX 50
[ "Technology" ]
956
[ "Natural language and computing", "Character encoding" ]
2,183,637
https://en.wikipedia.org/wiki/Watchtower
A watchtower or watch tower is a type of fortification used in many parts of the world. It differs from a regular tower in that its primary use is military and from a turret in that it is usually a freestanding structure. Its main purpose is to provide a high, safe place from which a sentinel or guard may observe the surrounding area. In some cases, non-military towers, such as religious towers, may also be used as watchtowers. History Military watchtowers The Romans built numerous towers as part of a system of communications, one example being the towers along Hadrian's Wall in Britain. Romans built many lighthouses, such as the Tower of Hercules in northern Spain, which survives to this day as a working building, and the equally famous lighthouse at Dover Castle, which survives to about half its original height as a ruin. In medieval Europe, many castles and manor houses, or similar fortified buildings, were equipped with watchtowers. In some of the manor houses of western France, the watchtower equipped with arrow or gun loopholes was one of the principal means of defense. A feudal lord could keep watch over his domain from the top of his tower. In southern Saudi Arabia and Yemen, small stone and mud towers called "qasaba" were constructed as either watchtowers or keeps in the Asir mountains. Furthermore, in Najd, a watchtower, called "Margab", was used to watch for approaching enemies far in distance and shout calling warnings from atop. Scotland saw the construction of Peel towers that combined the function of watchtower with that of a keep or tower house that served as the residence for a local notable family. Mediterranean countries, and Italy in particular, saw the construction of numerous coastal watchtowers since the early Middle Ages, connected to the menace of Saracen attacks from the various Muslim states existing at the time (such as the Balearic Islands, Ifriqiya or Sicily). Later (starting from the 16th century) many were restored or built against the Barbary pirates. Similarly, the city state of Hamburg gained political power in the 13th century over a remote island 150 kilometers down the Elbe river estuary to erect the Great Tower Neuwerk by 1310 to protect its trading routes. They also claimed customs at the watchtower protecting the passage. Some notable examples of military Mediterranean watchtowers include the towers that the Knights of Malta had constructed on the coasts of Malta. These towers ranged in size from small watchtowers to large structures armed with numerous cannons. They include the Wignacourt, de Redin, and Lascaris towers, named for the Grand Master, such as Martin de Redin, that commissioned each series. The name of Tunisia's second biggest city, Sfax, is the berber-punic translation from the greek "Taphroúria" (Ταφρούρια) meaning watchtower, which may mean that the 9th century Muslim town was built as an extension of what is currently known as the Kasbah, one of the corners of the surviving complete rampart of the medina. In the Channel Islands, the Jersey Round Towers and the Guernsey loophole towers date from the late 18th century. They were erected to give warning of attacks by the French. The Martello towers that the British built in the UK and elsewhere in the British Empire were defensive fortifications that were armed with cannon and that were often within line of sight of each other. One of the last Martello towers to be built was Fort Denison in Sydney harbour. 
The most recent descendants of the Martello Towers are the flak towers that the various combatants erected in World War II as mounts for anti-aircraft artillery. Modern warfare In modern warfare the relevance of watchtowers has decreased due to the availability of alternative forms of military intelligence, such as reconnaissance by spy satellites and unmanned aerial vehicles. However watch towers have been used in counter-insurgency wars to maintain a military presence in conflict areas in case such as by the French Army in French Indochina, by the British Army and the RUC in Northern Ireland and the IDF in Gaza and West Bank. Non-military watchtowers An example of the non-military watchtower in history is the one of Jerusalem. Though the Hebrews used it to keep a watch for approaching armies, the religious authorities forbade the taking of weapons up into the tower as this would require bringing weapons through the temple. Rebuilt by King Herod, that Watchtower was renamed after Mark Antony, his friend who battled against Gaius Julius Caesar Octavianus (later Augustus) and lost. See also Blockhouse Diaolou Fire lookout tower Observation towers are similar constructions being usually outside of fortifications. A similar use have also Control towers on airports or harbours. References External links Fortified towers by type
Watchtower
[ "Engineering" ]
973
[ "Structural engineering", "Towers" ]
2,183,802
https://en.wikipedia.org/wiki/Redcedar%20bolt
Redcedar bolts are relatively small (1 foot x 1 foot x 1 foot is common) cubes of Western Redcedar which are later processed into redcedar roof shingles. References Roofs Woodworking
Redcedar bolt
[ "Technology", "Engineering" ]
44
[ "Structural system", "Structural engineering", "Roofs" ]
2,183,884
https://en.wikipedia.org/wiki/Push%20Access%20Protocol
Push Access Protocol (or PAP) is a protocol defined in WAP-164 of the Wireless Application Protocol (WAP) suite from the Open Mobile Alliance. PAP is used for communicating with the Push Proxy Gateway, which is usually part of a WAP Gateway. PAP is intended for use in delivering content from Push Initiators to Push Proxy Gateways for subsequent delivery to narrow band devices, including mobile phones and pagers. Example messages include news, stock quotes, weather, traffic reports, and notification of events such as email arrival. With Push functionality, users are able to receive information without having to request it. In many cases it is important for the user to get the information as soon as it is available. The Push Access Protocol is not intended for use over the air. PAP is designed to be independent of the underlying transport protocol. PAP specifies the following possible operations between the Push Initiator and the Push Proxy Gateway: Submit a Push Cancel a Push Query for status of a Push Query for wireless device capabilities Result notification The interaction between the Push Initiators and the Push Proxy Gateways is in the form of XML messages. Operations Push Submission The purpose of the Push Submission is to deliver a Push message from a Push Initiator to a PPG, which should then deliver the message to a user agent in a device on the wireless network. The Push message contains a control entity and a content entity, and MAY contain a capabilities entity. The control entity is an XML document that contains control information (push-message) for the PPG to use in processing the message for delivery. The content entity represents content to be sent to the wireless device. The capabilities entity contains client capabilities assumed by the Push Initiator and is in the RDF [RDF] format as defined in the User Agent Profile [UAPROF]. The PPG MAY use the capabilities information to validate that the message is appropriate for the client. The response to the push request is an XML document (push-response, section 9.3) that indicates initial acceptance or failure. At minimum the PPG MUST validate against the DTD [XML] the control entity in the message and report the result in the response. The PPG MAY indicate, using progress-note (if requested by the Push initiator in the progress-notes-requested attribute), that other validations have been completed. The contents and number of progress-notes are implementation specific. A typical response message may contain progress notes for each stage of internal processing. The processing stages used are implementation specific. There are provisions in the Push message to specify multiple recipients. The response message corresponds to the submit message, so there is one response message for one push message, regardless of the number of addresses specified. If the Push Initiator desires information related to the final outcome of the delivery, then it MUST request a result notification information in the push submission and provide a return address (e.g. URL). Result Notification This operation is used by the PPG to inform the initiator of the final outcome of a push submission, if requested by the Push Initiator. This notification (arrow 5, below) tells the Push Initiator that the message was sent (transmitted, as in arrow 3), delivered (confirmation received from wireless device, as in arrow 4), it expired, was cancelled, or there was an error. 
If there was a processing error, the notification SHOULD be sent immediately upon detection of the error to the Push Initiator and the message should not be sent to the client. Otherwise, the notification MUST be sent after the message delivery process has been completed. The delivery process is considered completed when the message is no longer a candidate for delivery, e.g. the message has expired. If the push submission is indicated as rejected in step two in figure 3, then no result notification will be sent. The Push Initiator MUST have provided a return address (e.g. URL) during the push operation for this notification to be possible. Push Cancellation The purpose of the Push Cancellation is to allow the Push Initiator to attempt to cancel a previously submitted push message. The Push Initiator initiates this operation. The PPG responds with an indication of whether the request was successful or not. Status Query The status query operation allows the Push Initiator to request the current status of a message that has been previously submitted. If status is requested for a message which is addressed to multiple recipients, the PPG MUST send back a single response containing status query results for each of the recipients. Client Capabilities Query This operation allows the Push Initiator to query the PPG for the capabilities of a specific device. The response is a multipart/related document containing the ccq-response (section 9.11) element in an XML document and, in the second entity, the actual client capabilities information in RDF [RDF] as defined in the User Agent Profile [UAPROF]. The PPG MAY add to the capabilities reported if the PPG is willing to perform transformations to the formats supported by the client. For example, if a client has JPG support but not GIF and a PPG is willing to convert GIF files to JPG, then the PPG may report that the client can support JPG and GIF files. The capabilities reported may be the combined PPG and client capabilities and they may have been derived from session capabilities or retrieved from a CC/PP server. Capabilities may also be derived using implementation dependent means. Addressing There are three addresses to be considered by the Push Initiator: the push proxy gateway address, the wireless device address, and the result notification address. The push proxy gateway address must be known by the Push Initiator. This address is needed at the layer below the push access protocol. The push proxy gateway is addressed using a unique address that depends on the underlying protocol. For example, when the underlying protocol is HTTP, a URL [RFC1738] is used. The device addressing information is included as part of the message content (XML tagged content). Any character allowed in an RFC822 address may appear in the device address field. In addition, a “notify-requested-to” address may be provided by the Push Initiator when required so that the push proxy gateway can later respond to the Push Initiator with result notification. Multiple Recipient Addressing There are scenarios in which a Push Initiator may want to send identical messages to multiple recipients. Rather than submitting multiple identical push messages, one to each recipient, the Push Initiator may submit a single push message addressed to multiple recipients. This section is intended to clarify behaviour related to operations on multiple recipients. 
When the PPG returns the push-response message, after a push submission to multiple recipients, the response corresponds to the message, regardless of the number of recipients specified in the push submission (there is one response for each push submission). When a Push Initiator requests status (section 9.8) with multiple addresses specified, the PPG MUST reply with a single statusquery-response (section 9.9) containing the individual statuses. The same is true when only a push-id is specified (no address specified) in the query for status of a multiple recipient message. Result notifications (section 9.6) MUST be sent by the PPG for each individual recipient, if result notification is requested by the Push Initiator during the submission of a message to multiple recipients. In cases where a message is sent to multiple recipients and later a cancel is requested by the initiator, the PPG MAY send back individual responses related to each of the multiple recipients or it MAY send responses related to many or all of the recipients. Support of multiple addresses is OPTIONAL in a PPG. Multicast/Broadcast Addresses There are scenarios in which a single address submitted by a PI may be expanded by a PPG into multiple addresses for delivery. In addition, a single address transmitted on a wireless network may be received by multiple devices (e.g. broadcast). This type of service is expected for the distribution of information of interest to a broad population (e.g. news, weather, and traffic). This section is intended to clarify behaviour related to operations involving multicast and broadcast addresses. Since the address expansion is done in the PPG or in the wireless network, the behaviour between the PI and the PPG is identical to behaviour as if the address were not expanded. The response contains the individual address as submitted by the PI. Message Format The push access protocol is independent of the transport used. PAP messages carry control information, and in the case of a push submission, also content and optionally client capabilities information. Control information includes command/response messages between the PPG and the Push Initiator, and parameters passed to the PPG for use in sending content to the wireless device. Examples of this type of information include the wireless device address, the delivery priority of the message, etc. This information is not normally delivered to the wireless device. Content is information that is intended for the wireless device. This information might be intelligible only to the wireless device (e.g. may be encrypted by the Push Initiator or may be application data for an application unknown to the PPG) or it may be recognisable by the PPG (e.g. HTML or WML). The PPG may be configured to perform some transformation on recognisable content (e.g. HTML to WML) for certain wireless devices. The other category of information is client capability information as specified in the User Agent Profile [UAPROF]. When more than control is carried in a message, the format of the message is a MIME multipart/related [RFC2387] compound object. When only control information (e.g. for message responses) is carried in a message, the format of the message is a simple application/xml entity. All information is transported within a single message body. 
In the multipart messages, the first entity contains all push related control information in an XML document, the second entity contains the content for the wireless device, the third entity, if present, contains UAPROF client capabilities. The format of the content entity is specified in [PushMsg]. Control Entity Format The control entity is a MIME body part which holds an XML document containing one pap element as defined in section 9.1. The control entity MUST be included in every PAP request and response. The control entity MUST be the first entity in the MIME multipart/related message. Content Entity Format The content entity is a MIME body part containing the content to be sent to the wireless device. The content type is not defined by the PAP, but can be any type as long as it is described by MIME. The content entity is included only in the push submission and is not included in any other operation request or response. The content entity MUST be the second entity in the MIME multipart/related message. Capabilities Entity Format The capabilities entity is a MIME body part containing the Push Initiator's assumed subset of the capabilities of the wireless device/user agent. The capabilities format is specified in the User Agent Profile [UAPROF]. The capabilities entity, if present, MUST be the third entity in the Push Submission MIME multipart/related message and MUST be the second entity in a Client Capabilities Query response. References External links Push Access Protocol Specification, Version 29-Apr-2001 Push technology Wireless Application Protocol
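To make the message layout described above more concrete, the following Python sketch assembles a minimal push submission as a MIME multipart/related body: an XML control entity first, then a content entity. It is illustrative only: the pap, push-message and address element names are taken from the text above, but the specific attributes, the address format and the content type are assumptions and should be checked against the WAP-164 specification and its DTD.

```python
# Hedged sketch of a PAP push submission body (control entity + content entity).
# Element/attribute details beyond "pap", "push-message", "address" and the multipart
# layout are assumptions, not quotations from the WAP-164 specification.
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

control_xml = """<?xml version="1.0"?>
<pap>
  <push-message push-id="example-push-0001"
                notify-requested-to="https://pi.example.com/notify">
    <address address-value="WAPPUSH=+15551234567/TYPE=PLMN@ppg.example.com"/>
  </push-message>
</pap>
"""

content = "<wml><card><p>Example pushed content</p></card></wml>"

msg = MIMEMultipart("related")                 # multipart/related compound object
msg.attach(MIMEText(control_xml, "xml"))       # first entity: control information
msg.attach(MIMEText(content, "vnd.wap.wml"))   # second entity: content for the device

print(msg.as_string()[:400])                   # show the start of the generated MIME body
```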
Push Access Protocol
[ "Technology" ]
2,380
[ "Wireless networking", "Wireless Application Protocol" ]
2,183,934
https://en.wikipedia.org/wiki/Conditional%20access
Conditional access (CA) is a term commonly used in relation to software and to digital television systems. Conditional access is an evaluation to ensure the person who is seeking access to content is authorized to access the content. Access is managed by requiring certain criteria to be met before granting access to the content. In software Conditional access is a function that lets you manage people's access to the software in question, such as email, applications, and documents. It is usually offered as SaaS (Software-as-a-Service) and deployed in organizations to keep company data safe. By setting conditions on the access to this data, the organization has more control over who accesses the data and where and in what way the information is accessed. When setting up conditional access, access can be limited to or prevented based on the policy defined by the system administrator. For example, a policy might require access is available from certain networks, or access is blocked when a specific web browser is requesting the access. In digital television Under the Digital Video Broadcasting (DVB) standard, conditional access system (CAS) standards are defined in the specification documents for DVB-CA (conditional access), DVB-CSA (the common scrambling algorithm) and DVB-CI (the Common Interface). These standards define a method by which one can obfuscate a digital-television stream, with access provided only to those with valid decryption smart-cards. The DVB specifications for conditional access are available from the standards page on the DVB website. This is achieved by a combination of scrambling and encryption. The data stream is scrambled with a 48-bit secret key, called the control word. Knowing the value of the control word at a given moment is of relatively little value, as under normal conditions, content providers will change the control word several times per minute. The control word is generated automatically in such a way that successive values are not usually predictable; the DVB specification recommends using a physical process for that. In order for the receiver to unscramble the data stream, it must be permanently informed about the current value of the control word. In practice, it must be informed slightly in advance, so that no viewing interruption occurs. Encryption is used to protect the control word during transmission to the receiver: the control word is encrypted as an entitlement control message (ECM). The CA subsystem in the receiver will decrypt the control word only when authorised to do so; that authority is sent to the receiver in the form of an entitlement management message (EMM). The EMMs are specific to each subscriber, as identified by the smart card in his receiver, or to groups of subscribers, and are issued much less frequently than ECMs, usually at monthly intervals. This being apparently not sufficient to prevent unauthorized viewing, TPS has lowered this interval down to about 12 minutes. This can be different for every provider, BSkyB uses a term of 6 weeks. When Nagravision 2 was hacked, Digital+ started sending a new EMM every three days to make unauthorized viewing more cumbersome. The contents of ECMs and EMMs are not standardized and as such they depend on the conditional access system being used. The control word can be transmitted through different ECMs at once. This allows the use of several conditional access systems at the same time, a DVB feature called simulcrypt, which saves bandwidth and encourages multiplex operators to cooperate. 
DVB Simulcrypt is widespread in Europe; some channels, like the CNN International Europe from the Hot Bird satellites, can use 7 different CA systems in parallel. The decryption cards are read, and sometimes updated with specific access rights, either through a conditional-access module (CAM), a PC card-format card reader meeting DVB-CI standards, or through a built-in ISO/IEC 7816 card reader, such as that in the Sky Digibox. Several companies provide competing CA systems; ABV, VideoGuard, Irdeto, Nagravision, Conax, Viaccess, Synamedia, Mediaguard (a.k.a. SECA) are among the most commonly used CA systems. Due to the common usage of CA in DVB systems, many tools to aid in or even directly circumvent encryption exist. CAM emulators and multiple-format CAMs exist which can either read several card formats or even directly decrypt a compromised encryption scheme. Most multiple format CAMs and all CAMs that directly decrypt a signal are based on reverse engineering of the CA systems. A large proportion of the systems currently in use for DVB encryption have been opened to full decryption at some point, including Nagravision, Conax, Viaccess, Mediaguard (v1) as well as the first version of VideoGuard. Conditional access in North America In Canada and United States, the standard for conditional access is provided with CableCARDs whose specification was developed by the cable company consortium CableLabs. Cable companies in the United States are required by the Federal Communications Commission to support CableCARDs. Standards exist for two-way communication (M-card), but satellite television has separate standards. Next-generation approaches in the United States eschew such physical cards and employ schemes using downloadable software for conditional access such as DCAS. The main appeal of such approaches is that the access control may be upgraded dynamically in response to security breaches without requiring expensive exchanges of physical conditional-access modules. Another appeal is that it may be inexpensively incorporated into non-traditional media display devices such as portable media players. Conditional access systems Conditional access systems include: Analog systems EuroCrypt Nagravision Videocipher VideoCrypt Digital systems See also Access control, the same principle applied outside of television. B-CAS CableCARD Card sharing Compression Networks Conditional-access module DigiCipher 2 Digital rights management Pirate decryption PowerVu Smart card Television encryption Viaccess Videocipher VideoGuard Pairing Smartcard References External links CAS history in Spanish CA ID list on dvbservices.com Digital television Digital rights management Broadcast engineering
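The control-word / ECM / EMM layering described above can be summarised with a toy sketch. The code below is purely illustrative: XOR stands in for the real DVB-CSA scrambling and for the proprietary ECM/EMM encryption of actual CA systems, and the key sizes and message formats are simplified assumptions, not any vendor's format.

```python
# Toy illustration of the CA layering described above: a frequently changing control word
# scrambles the content, an ECM carries that control word encrypted under a service key,
# and an EMM delivers the service key to an authorised subscriber card.  XOR is a stand-in
# for the real ciphers; this is not an implementation of any actual CA system.
import os

def xor(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

service_key = os.urandom(6)              # delivered to the smart card via an EMM
emm = {"subscriber": "card-1234", "service_key": service_key}

control_word = os.urandom(6)             # 48-bit control word, changed every few seconds
ecm = xor(control_word, service_key)     # ECM: control word encrypted under the service key

payload = b"transport stream packet ..."
scrambled = xor(payload, control_word)   # broadcast side: scramble with the control word

# Receiver side: an authorised card recovers the control word from the ECM and descrambles.
recovered_cw = xor(ecm, emm["service_key"])
print(xor(scrambled, recovered_cw))      # b'transport stream packet ...'
```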
Conditional access
[ "Engineering" ]
1,260
[ "Broadcast engineering", "Electronic engineering" ]
2,184,047
https://en.wikipedia.org/wiki/Beryllium%20fluoride
Beryllium fluoride is the inorganic compound with the formula BeF2. This white solid is the principal precursor for the manufacture of beryllium metal. Its structure resembles that of quartz, but BeF2 is highly soluble in water. Properties Beryllium fluoride has distinctive optical properties. In the form of fluoroberyllate glass, it has the lowest refractive index for a solid at room temperature, 1.275. Its dispersive power is the lowest for a solid, at 0.0093, and the nonlinear coefficient is also the lowest, at 2 × 10−14. Structure and bonding The structure of solid BeF2 resembles that of cristobalite. Be2+ centers are four-coordinate and tetrahedral, and the fluoride centers are two-coordinate. The Be-F bond lengths are about 1.54 Å. Analogous to SiO2, BeF2 can also adopt a number of related structures. An analogy also exists between BeF2 and AlF3: both adopt extended structures at mild temperature. Gaseous and liquid BeF2 Gaseous beryllium fluoride adopts a linear structure, with a Be-F distance of 143 pm. BeF2 reaches a vapor pressure of 10 Pa at 686 °C, 100 Pa at 767 °C, 1 kPa at 869 °C, 10 kPa at 999 °C, and 100 kPa at 1172 °C. Molecular BeF2 in the gaseous state is isoelectronic with carbon dioxide. As a liquid, beryllium fluoride has a tetrahedral structure. The density of liquid BeF2 decreases near its freezing point, as Be2+ and F− ions begin to coordinate more strongly with one another, leading to the expansion of voids between formula units. Production The processing of beryllium ores generates impure Be(OH)2. This material reacts with ammonium bifluoride to give ammonium tetrafluoroberyllate: Be(OH)2 + 2 (NH4)HF2 → (NH4)2BeF4 + 2 H2O Tetrafluoroberyllate is a robust ion, which allows its purification by precipitation of various impurities as their hydroxides. Heating purified (NH4)2BeF4 gives the desired product: (NH4)2BeF4 → 2 NH3 + 2 HF + BeF2 In general, the reactivity of BeF2 towards fluoride is quite analogous to the reactions of SiO2 with oxides. Applications Reduction of BeF2 at 1300 °C with magnesium in a graphite crucible provides the most practical route to metallic beryllium: BeF2 + Mg → Be + MgF2 Beryllium chloride is not a useful precursor because of its volatility. Niche uses Beryllium fluoride is used in biochemistry, particularly in protein crystallography, as a mimic of phosphate. Thus, ADP and beryllium fluoride together tend to bind to ATP sites and inhibit protein action, making it possible to crystallise proteins in the bound state. Beryllium fluoride forms a basic constituent of the preferred fluoride salt mixture used in liquid-fluoride nuclear reactors. Typically beryllium fluoride is mixed with lithium fluoride to form a base solvent (FLiBe), into which fluorides of uranium and thorium are introduced. Beryllium fluoride is exceptionally chemically stable, and LiF/BeF2 mixtures (FLiBe) have low melting points (360–459 °C) and the best neutronic properties of fluoride salt combinations appropriate for reactor use. The MSRE used two different mixtures in the two cooling circuits. Safety Beryllium compounds are highly toxic. The increased toxicity of beryllium in the presence of fluoride was noted as early as 1949. The LD50 in mice is about 100 mg/kg by ingestion and 1.8 mg/kg by intravenous injection. 
References External links IARC Monograph "Beryllium and Beryllium Compounds" National Pollutant Inventory: Beryllium and compounds fact sheet National Pollutant Inventory: Fluoride and compounds fact sheet Hazards of Beryllium fluoride MSDS from which the LD50 figures Beryllium compounds Fluorides Alkaline earth metal halides
Beryllium fluoride
[ "Chemistry" ]
906
[ "Highly-toxic chemical substances", "Harmful chemical substances", "Fluorides", "Salts" ]
2,184,383
https://en.wikipedia.org/wiki/Boiling-point%20elevation
Boiling-point elevation is the phenomenon whereby the boiling point of a liquid (a solvent) will be higher when another compound is added, meaning that a solution has a higher boiling point than a pure solvent. This happens whenever a non-volatile solute, such as a salt, is added to a pure solvent, such as water. The boiling point can be measured accurately using an ebullioscope. Explanation The boiling point elevation is a colligative property, which means that boiling point elevation is dependent on the number of dissolved particles but not their identity. It is an effect of the dilution of the solvent in the presence of a solute. It is a phenomenon that happens for all solutes in all solutions, even in ideal solutions, and does not depend on any specific solute–solvent interactions. The boiling point elevation happens both when the solute is an electrolyte, such as various salts, and a nonelectrolyte. In thermodynamic terms, the origin of the boiling point elevation is entropic and can be explained in terms of the vapor pressure or chemical potential of the solvent. In both cases, the explanation depends on the fact that many solutes are only present in the liquid phase and do not enter into the gas phase (except at extremely high temperatures). The effect of the solute on the solvent's vapor pressure is described by Raoult's law, while the corresponding changes in free energy and chemical potential are described by the Gibbs free energy. In terms of vapor pressure, a liquid boils when its vapor pressure equals the surrounding pressure. A nonvolatile solute lowers the solvent's vapor pressure, meaning a higher temperature is needed for the vapor pressure to reach the surrounding pressure, so the boiling point is elevated. In terms of chemical potential, at the boiling point the liquid and gas phases have the same chemical potential. Adding a nonvolatile solute lowers the solvent's chemical potential in the liquid phase, but the gas phase remains unaffected. This shifts the equilibrium between phases to a higher temperature, elevating the boiling point. Relationship to freezing-point depression Freezing-point depression is analogous to boiling point elevation, though the magnitude of freezing-point depression is higher for the same solvent and solute concentration. Both phenomena extend the liquid range of a solvent in the presence of a solute. Related equations for calculating the boiling point The extent of boiling-point elevation can be calculated by applying the Clausius–Clapeyron relation and Raoult's law together with the assumption of the non-volatility of the solute. The result is that in dilute ideal solutions, the extent of boiling-point elevation is directly proportional to the molal concentration (amount of substance per mass) of the solution according to the equation: ΔTb = Kb · bc where ΔTb, the boiling point elevation, is defined as Tb (solution) − Tb (pure solvent). Kb is the ebullioscopic constant, which is dependent on the properties of the solvent. It can be calculated as Kb = RTb²M/ΔHv, where R is the gas constant, Tb is the boiling temperature of the pure solvent [in K], M is the molar mass of the solvent, and ΔHv is the heat of vaporization per mole of the solvent. bc is the colligative molality, calculated by taking dissociation into account since the boiling point elevation is a colligative property, dependent on the number of particles in solution. 
This is most easily done by using the van 't Hoff factor i as bc = bsolute · i, where bsolute is the molality of the solution. The factor i accounts for the number of individual particles (typically ions) formed by a compound in solution. Examples: i = 1 for sugar in water i = 1.9 for sodium chloride in water, due to the near-full dissociation of NaCl into Na+ and Cl− (often simplified as 2) i = 2.3 for calcium chloride in water, due to nearly full dissociation of CaCl2 into Ca2+ and 2Cl− (often simplified as 3) Non-integer i factors result from ion pairs in solution, which lower the effective number of particles in the solution. Equation after including the van 't Hoff factor ΔTb = Kb · bsolute · i (A short worked numerical sketch of this equation follows this article.) The above formula becomes less accurate at high concentrations, owing to the nonideality of the solution. If the solute is volatile, one of the key assumptions used in deriving the formula no longer holds, since the derivation is for solutions of non-volatile solutes in a volatile solvent. In the case of volatile solutes it is more appropriate to treat the system as a mixture of volatile compounds, and the effect of the solute on the boiling point must be determined from the phase diagram of the mixture. In such cases, the mixture can sometimes have a lower boiling point than either of the pure components; a mixture with a minimum boiling point is a type of azeotrope. Ebullioscopic constants Values of the ebullioscopic constants Kb for selected solvents: Uses Together with the formula above, the boiling-point elevation can be used to measure the degree of dissociation or the molar mass of the solute. This kind of measurement is called ebullioscopy (Latin-Greek "boiling-viewing"). However, superheating is a factor that can affect the precision of the measurement and is hard to avoid because of the decrease in molecular mobility. Therefore, ΔTb is hard to measure precisely, even though superheating can be partially overcome by the use of a Beckmann thermometer. In reality, cryoscopy is used more often because the freezing point is often easier to measure with precision. See also Colligative properties Freezing point depression Dühring's rule List of boiling and freezing information of solvents References Amount of substance Chemical properties Physical chemistry
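As a worked illustration of the equation above (not part of the original article), the short Python sketch below computes the boiling-point elevation for 1 mol of NaCl dissolved in 1 kg of water, using the commonly tabulated ebullioscopic constant of water, Kb ≈ 0.512 K·kg/mol, and the van 't Hoff factor i ≈ 1.9 quoted in the text.

```python
# Worked example of ΔTb = Kb · b_solute · i, using i ≈ 1.9 for NaCl (from the article)
# and the commonly tabulated ebullioscopic constant of water, Kb ≈ 0.512 K·kg/mol.
def boiling_point_elevation(kb: float, molality: float, vant_hoff_i: float = 1.0) -> float:
    """Return ΔTb (in kelvin) for the given ebullioscopic constant, molality and i factor."""
    return kb * molality * vant_hoff_i

KB_WATER = 0.512        # K·kg/mol (assumed tabulated value, not given in the article text)
molality_nacl = 1.0     # 1 mol NaCl per kg of water

delta_tb = boiling_point_elevation(KB_WATER, molality_nacl, vant_hoff_i=1.9)
print(f"ΔTb ≈ {delta_tb:.2f} K")   # ≈ 0.97 K, so the solution boils near 100.97 °C at 1 atm
```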
Boiling-point elevation
[ "Physics", "Chemistry", "Mathematics" ]
1,251
[ "Scalar physical quantities", "Applied and interdisciplinary physics", "Physical quantities", "Quantity", "Chemical quantities", "Amount of substance", "nan", "Wikipedia categories named after physical quantities", "Physical chemistry" ]
2,184,799
https://en.wikipedia.org/wiki/Fucose
Fucose is a hexose deoxy sugar with the chemical formula C6H12O5. It is found on N-linked glycans on the mammalian, insect and plant cell surface. Fucose is the fundamental sub-unit of the seaweed polysaccharide fucoidan. The α(1→3) linked core of fucoidan is a suspected carbohydrate antigen for IgE-mediated allergy. Two structural features distinguish fucose from other six-carbon sugars present in mammals: the lack of a hydroxyl group on the carbon at the 6-position (C-6) (thereby making it a deoxy sugar) and the L-configuration. It is equivalent to 6-deoxy-L-galactose. In fucose-containing glycan structures (fucosylated glycans), fucose can exist as a terminal modification or serve as an attachment point for adding other sugars. In human N-linked glycans, fucose is most commonly linked α-1,6 to the reducing terminal β-N-acetylglucosamine. However, fucose at the non-reducing termini linked α-1,2 to galactose forms the H antigen, the substructure of the A and B blood group antigens. Fucose is released from fucose-containing polymers by an enzyme called α-fucosidase, which is found in lysosomes. L-Fucose has several potential applications in cosmetics, pharmaceuticals, and dietary supplements. Fucosylation of antibodies has been shown to reduce binding to the Fc receptor of natural killer cells and thereby reduce antibody-dependent cellular cytotoxicity. Therefore, afucosylated monoclonal antibodies, designed to recruit the immune system to cancer cells, have been manufactured in cell lines deficient in the enzyme for core fucosylation (FUT8), thereby enhancing in vivo cell killing. See also Digitalose, the methyl ether of D-fucose Fucitol Fucosidase Fucosyltransferase Verotoxin-producing Escherichia coli References Aldohexoses Deoxy sugars Pyranoses
Fucose
[ "Chemistry" ]
474
[ "Deoxy sugars", "Carbohydrates" ]
2,184,818
https://en.wikipedia.org/wiki/Bischler%E2%80%93M%C3%B6hlau%20indole%20synthesis
The Bischler–Möhlau indole synthesis, also often referred to as the Bischler indole synthesis, is a chemical reaction that forms a 2-aryl-indole from an α-bromo-acetophenone and excess aniline; it is named after August Bischler and Richard Möhlau. Despite its long history, this classical reaction has received relatively little attention in comparison with other methods for indole synthesis, owing to the reaction's harsh conditions, poor yields and unpredictable regioselectivity. Recently, milder methods have been developed, including the use of lithium bromide as a catalyst and an improved procedure involving the use of microwave irradiation. History What is now known as the Bischler-Möhlau indole synthesis was discovered and formulated through the separate, but complementary, findings of the German scientist Richard Möhlau in 1882 and the Russia-born German chemist August Bischler (with partner H. Brion) in 1892. The two researchers did not collaborate with each other, but instead independently developed very similar procedures starting from an aromatic ketone with an excess of an aniline and ultimately producing an indole product. The images below depict the original indole synthesis equations written by Möhlau and Bischler, respectively: Because both scientists published their indole syntheses within the same decade, the general process was given the name Bischler-Möhlau indole synthesis. The original procedure is known to give inconsistent results and yields, but it has been modified into newer indole synthesis procedures: Buu-Hoï Modified Indole Synthesis Blackhall and Thomson Modified Indole Synthesis Japp and Murray Modified Indole Synthesis Reaction mechanism The first two steps involve the reaction of the α-bromo-acetophenone with molecules of aniline to form intermediate 4. The charged aniline forms a sufficiently good leaving group for an electrophilic cyclization to form intermediate 5, which quickly aromatizes and tautomerizes to give the desired indole 7. See also Fischer indole synthesis Bischler–Napieralski reaction References Indole forming reactions Name reactions
Bischler–Möhlau indole synthesis
[ "Chemistry" ]
445
[ "Name reactions", "Ring forming reactions", "Organic reactions" ]
2,184,922
https://en.wikipedia.org/wiki/James%20Orr%20%28theologian%29
James Orr (1844–6 September 1913) was a Scottish Presbyterian minister and professor of church history and then theology. He was an influential defender of evangelical doctrine and a contributor to The Fundamentals. Biography Orr was born in Glasgow and spent his childhood in Manchester and Leeds. He was orphaned and became an apprentice bookbinder, but went on to enter Glasgow University in 1865. In 1870, he obtained an M.A. in Philosophy of Mind, and after graduating from the theological college of the United Presbyterian Church, he was ordained a minister in Hawick. In 1885 he received a D.D. from Glasgow University, and in the early 1890s delivered a series of lectures that later became the influential The Christian View of God and the World. He was appointed professor of Church history in 1891 at the theological college of the United Presbyterian Church. He was one of the primary promoters of the union of the United Presbyterian Church with the Free Church of Scotland, and he represented the United Presbyterians in the unification talks. After they joined in 1900, he moved to Free Church College (now Trinity College, Glasgow), as professor of apologetics and theology. He lectured widely in both Britain and the United States. Views Orr was a vocal critic of theological liberalism (of Albrecht Ritschl especially) and helped establish Christian fundamentalism. His lectures and writings upheld the doctrines of the virgin birth and resurrection of Jesus, and the infallibility of the Bible. In contrast to modern fundamentalists and his friend B. B. Warfield, he did not agree with the position of Biblical inerrancy. Like Warfield, but also unlike modern Christian fundamentalists, he advocated a position which he called "theistic evolution". Orr wrote that "evolution is coming to be recognized as but a new name for 'creation', only that the creative power now works from within, instead of, as in the old conception, in an external plastic fashion." In his book Revelation and Inspiration (1910), he wrote that evolution is not in conflict with the Christian theistic view of the world. Bibliography The Christian View of God and the World (1893) online version The Ritschlian Theology and the Evangelical Faith (1897) Neglected Factors in the Study of the Early Progress of Christianity (1899) The Progress of Dogma (1902) David Hume (1903) Ritschlianism; Expository and Critical Essays (1903) New Testament Apocryphal Writings (London 1903); Protevangelium of James: on the birth of Mary | Gospel of Thomas: miracles of the infancy | Gospel of Pseudo-Matthew | Gospel of Nicodemus | Gospel of Peter | Acts of Paul and Thecla | The Falling Asleep of Mary. 182 pp. God's Image in Man and its Defacement in Light of Modern Denials (1905) Problem of the Old Testament Considered with Reference to Recent Criticism (1906) The Bible under Trial. Apologetic Papers in View of Present Day Assaults on Holy Scripture (1907) online version The Virgin Birth of Christ Hodder and Stoughton, London (1907) The Resurrection of Jesus (1908) Side-Lights on Christian Doctrine (1909) Revelation and Inspiration (1910) Sin as a Problem To-Day (1910) The History and Literature of the Early Church (1913) "The Holy Scriptures and Modern Negations", "The Early Narratives of Genesis", "Science and Christian Faith", and "The Virgin Birth of Christ", in The Fundamentals: A testimony to the truth, R.A. Torrey and A.C. Dixon (eds) (1917) online version The International Standard Bible Encyclopedia (ed.) (1939) Secondary Sources Coke, Tom S. 
“Reconsidering James Orr.” Reformed Journal, vol. 30, no. 12, Dec. 1980, pp. 20–22. Davies, William Walter. “The Battle of the Critics.” Methodist Review, vol. 88, Sept. 1906, pp. 827–830. Dorrien, Gary J. The Remaking of Evangelical Theology (Louisville, KY: Westminster John Knox Press, 1998). Eyre-Todd, “Rev. James Orr.” In Who’s Who in Glasgow in 1909 (Cambridge: Chadwyck-Healey, 1987). Hoefel, Robert J. “B B Warfield and James Orr: A Study in Contrasting Approaches to Scripture.” Christian Scholar’s Review, vol. 16, no. 1, Sept. 1986, pp. 40–52. Hoefel, Robert J. The Doctrine of Inspiration in the Writings of James Orr and B.B. Warfield: A Study in Contrasting Approaches to Scripture (Ph.D. Diss.: Fuller Theological Seminary, 1983). Livingstone, David N. “B B Warfield, the Theory of Evolution and Early Fundamentalism.” The Evangelical Quarterly, vol. 58, no. 1, Jan. 1986, pp. 69–83. McGrath, Gavin Basil. “James Orr’s Endorsement of Theistic Evolution.” Perspectives on Science and Christian Faith, vol. 51, no. 2, June 1999, pp. 114–120. Neely, Alan P. “James Orr and the Question of Inerrancy.” The Proceedings of the Conference on Biblical Inerrancy 1987, 1987, pp. 261–272. Schaff, Philip. “Orr, James.” In New Schaff-Herzog Encyclopedia of Religious Knowledge (Grand Rapids, MI: Baker Book House, 1977). Scorgie, Glen G. A Call for Continuity: The Theological Contributions of James Orr. Mercer Univ Pr, 1988. Scorgie, Glen G. “James Orr, Defender of the Church’s Faith.” Crux, vol. 22, no. 3, Sept. 1986, pp. 22–27. Sell, Alan P. F. Defending and Declaring the Faith: Some Scottish Examples, 1860–1920 (Colorado: Helmers & Howard, 1987). Shatzer, Jacob. “Theological Interpretation of Scripture and Evangelicals: An Apology for The Fundamentals.” Pro Ecclesia, vol. 22, no. 1, Wint 2013, pp. 88–102. Toon, Peter. “The Development of Doctrine: An Evangelical Perspective.” Reformed Journal, vol. 23, no. 3, Mar. 1973, pp. 7–12. Wright, David F. “Soundings in the Doctrine of Scripture in British Evangelicalism in the First Half of the Twentieth Century.” Tyndale Bulletin, vol. 31, 1980, pp. 87–106. Zaspel, Fred G. “B. B. Warfield on Creation and Evolution.” Themelios, vol. 35, no. 2, July 2010, pp. 198–211. Zorn, Raymond O. “The Christian View of God and the World.” Book Review. Mid-America Journal of Theology, vol. 8, no. 2, Fall 1992, pp. 217–218.z Notes References Gary J. Dorien, The Remaking of Evangelical Theology, Westminster John Knox Press, 1998. George Eyre-Todd, "Rev. James Orr", in Who's Who in Glasgow 1909. Jeff MacDonald, "Book Review of A Call for Continuity: The Theological Contribution of James Orr", Layman Online, 26 May 2005. Gavin Basil McGrath, "James Orr's Endorsement of Theistic Evolution", Perspectives on Science and Christian Faith 51.2 (June 1999): 114-121. Philip Schaff, "Orr, James", New Schaff-Herzog Encyclopedia of Religious Knowledge, 1953. Glen G. Scorgie, A Call for Continuity: The Theological Contribution of James Orr, Regent College Publishing, 2004. () External links Three essays by Professor James Orr - Essays #11-13: Bible under Trial, An Instructive Object Lesson and "Presuppositions" in OT Criticism 1844 births 1913 deaths Alumni of the University of Glasgow Clergy from Glasgow Scottish Calvinist and Reformed theologians 20th-century Scottish historians Theistic evolutionists 19th-century Scottish historians Ministers of the United Presbyterian Church (Scotland) Ministers of the United Free Church of Scotland
James Orr (theologian)
[ "Biology" ]
1,691
[ "Non-Darwinian evolution", "Theistic evolutionists", "Biology theories" ]
2,185,021
https://en.wikipedia.org/wiki/Serre%27s%20property%20FA
In mathematics, Property FA is a property of groups first defined by Jean-Pierre Serre. A group G is said to have property FA if every action of G on a tree has a global fixed point. Serre shows that if a group has property FA, then it cannot split as an amalgamated product or HNN extension; indeed, if G is contained in an amalgamated product then it is contained in one of the factors. In particular, a finitely generated group with property FA has finite abelianization. Property FA is equivalent for countable G to the three properties: G is not an amalgamated product; G does not have Z as a quotient group; G is finitely generated. For general groups G the third condition may be replaced by requiring that G not be the union of a strictly increasing sequence of subgroups. Examples of groups with property FA include SL3(Z) and more generally G(Z) where G is a simply-connected simple Chevalley group of rank at least 2. The group SL2(Z) is an exception, since it is isomorphic to the amalgamated product of the cyclic groups C4 and C6 along C2. Any quotient group of a group with property FA has property FA. If some subgroup of finite index in G has property FA then so does G, but the converse does not hold in general. If N is a normal subgroup of G and both N and G/N have property FA, then so does G. It is a theorem of Watatani that Kazhdan's property (T) implies property FA, but not conversely. Indeed, any subgroup of finite index in a T-group has property FA. Examples The following groups have property FA: A finitely generated torsion group; SL3(Z); The Schwarz group for integers A,B,C ≥ 2; SL2(R) where R is the ring of integers of an algebraic number field which is not Q or an imaginary quadratic field. The following groups do not have property FA: SL2(Z); SL2(RD) where RD is the ring of integers of an imaginary quadratic field of discriminant not −3 or −4. References English translation: Properties of groups Trees (graph theory)
Serre's property FA
[ "Mathematics" ]
464
[ "Mathematical structures", "Algebraic structures", "Properties of groups" ]
2,185,600
https://en.wikipedia.org/wiki/Storage%20Management%20Initiative%20%E2%80%93%20Specification
The Storage Management Initiative Specification, commonly called SMI-S, is a computer data storage management standard developed and maintained by the Storage Networking Industry Association (SNIA). It has also been ratified as an ISO standard. SMI-S is based upon the Common Information Model and the Web-Based Enterprise Management standards defined by the Distributed Management Task Force, which define management functionality via HTTP. The most recent approved version of SMI-S is available on the SNIA website. The main objective of SMI-S is to enable broad interoperable management of heterogeneous storage vendor systems. The current version is SMI-S 1.8.0 Rev 5. Over 1,350 storage products are certified as conformant to SMI-S. Basic concepts SMI-S defines CIM management profiles for storage systems. The entire SMI Specification is categorized in profiles and subprofiles. A profile describes the behavioral aspects of an autonomous, self-contained management domain. SMI-S includes profiles for Arrays, Switches, Storage Virtualizers, Volume Management and several other management domains. In DMTF parlance, an SMI-S provider is an implementation for a specific profile or set of profiles. A subprofile describes a part of a management domain, and can be a common part in more than one profile. At a very basic level, SMI-S entities are divided into two categories: Clients are management software applications that can reside virtually anywhere within a network, provided they have a communications link (either within the data path or outside the data path) to providers. Servers are the devices under management. Servers can be disk arrays, virtualization engines, host bus adapters, switches, tape drives, etc. SMI-S timeline 2000 – Collection of computer storage industry leaders led by Roger Reich begins building an interoperable management backbone for storage and storage networks (named Bluefin) in a small consortia called the Partner Development Process. 2002 – Bluefin donated by the consortia to the Storage Networking Industry Association (SNIA) and later renamed to Storage Management Initiative – Specification or SMI-S. SMI-S 1.0 publicly announced by the SNIA. 2003 – The Storage Management Initiative launches formal industry wide specification development, interoperability testing and demonstrations programs, as well as conformance testing systems and certifications. Work proceeds in the SNIA SMI Technical Steering Committee and related TWGs. 2004 – SMI-S 1.0.2 becomes an ANSI standard. Initial development of SMI-S 1.1.0 started. 2005 – SMI-S 1.0.2 submitted to ISO. 2006 – SMI-S 1.0.3 accepted as an ISO standard. SMI-S 1.1.0 published as a SNIA Technical Position. Working Drafts developed for SMI-S 1.2.0. 2007 – SMI-S 1.2.0 published as a SNIA Technical Position. Working Drafts developed for SMI-S 1.3.0 and SMI-S 1.4.0. 2008 – SMI-S 1.1.1 published as an ANSI standard and submitted to ISO for consideration as an ISO standard. SMI-S 1.3.0 published as a SNIA Technical Position. 2009 – SMI-S 1.4.0 published as a SNIA Technical Position. Working Drafts developed for SMI-S 1.5.0. 2010 – SMI-S 1.5.0 published as a SNIA Technical Position. Working Drafts developed for SMI-S 1.6.0. 2011 – SMI-S 1.1.1 published as an ISO standard, ISO/IEC 24775:2011. SMI-S 1.3.0 published as an ANSI standard: INCITS 388-2011. Development continues on SMI-S 1.6.0 and 1.6.1 in SNIA Technical Work Groups. Discussions are being conducted re a possible SMI-S V2.0. 2012 – SMI-S 1.6.0 published as a SNIA Technical Position. 
Five interoperability plugfests held. 2013 – Working Drafts developed for SMI-S 1.6.1. Five interoperability plugfests held, including one international plugfest (US and China). 2014 – Eight books that comprise SMI-S 1.5.0 published as an ISO standard: Information technology -- Storage management. SNIA SMI-S 1.6.1 Rev 5 published as a SNIA Technical Position. Working Drafts developed for SMI-S 1.7.0 Rev 1. Six interoperability plugfests held, including two international plugfests (US and China). 2015 – Working Drafts developed for SMI-S 1.7.0. Six interoperability plugfests held, including one in China. 2016 – SMI-S 1.7.0 Rev 5 is published as a SNIA Technical Position. Multiple interoperability plugfests held. 2018 – SMI-S 1.8.0 Rev 3 is published as a SNIA Technical Position. Multiple interoperability plugfests held. 2019 – SMI-S 1.8.0 Rev 4 is published as a SNIA Technical Position. Multiple interoperability plugfests held. 2020 – SMI-S 1.8.0 Rev 5 is published as a SNIA Technical Position. Multiple interoperability plugfests held. Open source projects pywbem - An open source library written in Python. It provides storage management software developers and system administrators with an easy-to-use method of accessing Common Information Model (CIM) objects and operations in Web-Based Enterprise Management (WBEM) servers, such as those found in SMI-S and other CIM-based environments. pywbem GitHub Library - A repository of pywbem projects on GitHub. pywbem Documentation - An overview of pywbem projects, community issues and feature requests. StorageIM SMI-S monitor client for SMI-enabled Arrays, Switches, HBAs and Storage Libraries. SBLIM Umbrella project for a collection of systems management tools to enable WBEM on Linux. See also CIM — Common Information Model WBEM — Web-Based Enterprise Management SNIA — Storage Networking Industry Association SCVMM System Center 2012 - Virtual Machine Manager Storage Management Initiative Specification (SMI-S) – SNIA SMI-S website References External links Storage Management Initiative Specification (SMI-S) provides good material both at the overview and detail level. Storage Management Initiative Specification (SMI-S) Releases – Approved specifications of SMI-S. SMI Developers Group provides information to assist developers working with SMI. Storage Management Lab Program (SM Lab) provides information about the program that runs SMI-S interoperability plugfests. SMI-S Conformance Testing Program (SMI-S CTP) describes how SNIA validates that a member company's products (software or hardware) conform to a specific version of SMI-S. DMTF Standard Publications contains a list of published DMTF standards. Computer data storage American National Standards Institute standards
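As a rough illustration of how a client-side library such as pywbem is typically used against an SMI-S provider, the sketch below enumerates the registered profiles a server advertises. The server URL, credentials and the interop namespace are placeholders (assumptions, not taken from this article), and real deployments may expose profiles under a different namespace.

```python
# Sketch of an SMI-S discovery call with pywbem (placeholder URL and credentials).
import pywbem

conn = pywbem.WBEMConnection(
    "https://smi-provider.example.com:5989",   # hypothetical SMI-S provider
    creds=("admin", "password"),               # placeholder credentials
    default_namespace="interop",               # common choice, but provider-dependent
    no_verification=True,                      # skip TLS verification for this sketch only
)

# CIM_RegisteredProfile instances describe which SMI-S profiles the server implements.
for profile in conn.EnumerateInstances("CIM_RegisteredProfile"):
    print(profile["RegisteredName"], profile["RegisteredVersion"])
```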
Storage Management Initiative – Specification
[ "Technology" ]
1,508
[ "American National Standards Institute standards", "Computer standards" ]
13,339,410
https://en.wikipedia.org/wiki/Hydroxyl%20aluminium%20bis%282-ethylhexanoate%29
Hydroxyl aluminium bis(2-ethylhexanoate) is a chemical substance derived from 2-ethylhexanoic acid and aluminium(III). Nominally it is the coordination complex with the formula Al(OH)(O2CCHEt(CH2)3CH3)2 where Et = ethyl. The composition is not a homogeneous compound. It is used as a thickening agent in various products, including in napalm. It is slightly hygroscopic. References Aluminium compounds Ethylhexanoates
Hydroxyl aluminium bis(2-ethylhexanoate)
[ "Chemistry" ]
115
[ "Organic compounds", "Organic compound stubs", "Organic chemistry stubs" ]
13,339,992
https://en.wikipedia.org/wiki/ARC-ECRIS
ARC-ECRIS is an Electron Cyclotron Resonance Ion Source (ECRIS) based on arc-shaped coils, unlike the conventional ECRIS, which is based on a multipole magnet (usually a hexapole magnet) inside a solenoid magnet. Arc-shaped coils were first used in the 1960s in fusion experiments, for example at the Lawrence Livermore National Laboratory (MFTF, Baseball II, ...) and later in Japan (GAMMA10, ...). In 2006 the JYFL ion source group designed, constructed and tested a similar plasma trap to produce highly charged heavy ion beams. The first tests were promising and showed that a stable plasma can be confined in an arc-coil magnetic field structure (see references). References External links YouTube video of a conventional ECRIS plasma (hexapolar magnetic field) Ion source
ARC-ECRIS
[ "Physics" ]
179
[ "Spectrum (physical sciences)", "Ion source", "Mass spectrometry", "Particle physics", "Particle physics stubs" ]
13,340,022
https://en.wikipedia.org/wiki/Bacillus%20virus%20phi29
Bacillus virus Φ29 (bacteriophage Φ29) is a double-stranded DNA (dsDNA) bacteriophage with a prolate icosahedral head and a short tail that belongs to the genus Salasvirus, order Caudovirales, and family Salasmaviridae. They are in the same order as phages PZA, Φ15, BS32, B103, M2Y (M2), Nf, and GA-1. First discovered in 1965, the Φ29 phage is the smallest Bacillus phage isolated to date and is among the smallest known dsDNA phages. Φ29 has a unique DNA packaging motor structure that employs prohead packaging RNA (pRNA) to guide the translocation of the phage genome during replication. This novel structure system has inspired ongoing research in nanotechnology, drug delivery, and therapeutics. In nature, the Φ29 phage infects Bacillus subtilis, a species of gram-positive, endospore-forming bacteria that is found in soil, as well as the gastrointestinal tracts of various marine and terrestrial organisms, including human beings. History In 1965, American microbiologist Dr. Bernard Reilly discovered the Φ29 phage in Dr. John Spizizen's lab at the University of Minnesota. Due to its small size and complex morphology, it has become an ideal model for the study of many processes in molecular biology, such as morphogenesis, viral DNA packaging, viral replication, and transcription. Structure The structure of Φ29 is composed of seven main proteins: the terminal protein (p3), the head or capsid protein (p8), the head or capsid fiber protein (p8.5), the distal tail knob (p9), the portal or connector protein (p10), the tail tube or lower collar proteins (p11), and the tail fibers or appendage proteins (p12*). The main difference between Φ29's structure and that of other phages is its use of pRNA in its DNA packaging motor. DNA packaging motor The Φ29 DNA packaging motor packages the phage genome into the procapsid during viral replication. The Φ29 packaging motor is structurally composed of the procapsid and the connector proteins, which interact with the pRNA, the packaging enzyme (gp16), and the packaging substrate (genomic DNA-gp3). Because the process of genome packaging is energy-intensive, it must be facilitated by an ATP-powered motor that converts chemical energy to mechanical energy through ATP hydrolysis. The Φ29 packaging motor is able to generate approximately 57 piconewtons (pN) of force, making it one of the most powerful biomotors studied to date. pRNA The Φ29 pRNA is a highly versatile molecule that can polymerize into dimers, trimers, tetramers, pentamers, and hexamers. Early studies such as Anderson (1990) and Trottier (1998) hypothesized that pRNA formed intermolecular hexamers, but these studies had a solely genetic basis rather than a microscopy based approach. In the year 2000, a study by Simpson et al. employed cryo-electron microscopy to determine that, in vivo, only a pentamer or smaller polymer could spatially fit in the virus. Ultimately, single isomorphous replacement with anomalous scattering (SIRAS) crystallography was used to determine that the in vivo structure is a tetramer ring. This discovery aligned with what was known about the structural geometry and necessary flexibility of the packaging motor's three-way junction. When pRNA is in this tetramer ring form, it works as a part of the DNA packaging motor to transport DNA molecules to their destination location within the prohead capsule. Specifically, the functional domains of pRNA bind to the gp16 packaging enzyme and the structural connector molecule to aid in the translocation of DNA through the prohead channel. 
After DNA packaging is complete, the pRNA dissociates and is degraded. Genome and replication The Φ29 phage has a linear dsDNA genome consisting of 19,285 bases. Both 5’ ends of the genome are capped with a covalently bonded terminal protein (p3) that complexes with DNA polymerase during replication. Φ29 is one of many phages with a DNA polymerase that has a different structure and function compared to standard DNA polymerases in other organisms. Φ29 forms a replication complex involving the p3 terminal protein, the dAMP nucleotide, and its own DNA polymerase to synthesize DNA in a 5’ to 3’ direction. This replication process also employs a sliding-back mechanism towards the 3’ end of the genome that uses a repeating TTT motif to move the replication complex backward without altering the template sequence. This allows the initiation of DNA replication to be more accurate by having the polymerase complex check a specific sequence before beginning the elongation process. Applications Nanoparticle assembly Versatility in RNA structure and function provides the ability to assemble nanoparticles for nanomedicinal therapeutics. The pRNA in bacteriophage Φ29 can use its three-way junction in order to self-assemble into nanoparticles. One major challenge of using pRNA-derived nanoparticles is large-scale production, as most industries are currently unequipped to handle industrial pRNA synthesis. This is primarily because RNA nanotechnology is still an emerging field that lacks industrial application and manufacturing optimization of small RNAs. Drug delivery Φ29’s DNA packaging system, using pRNA, incorporates a motor for the delivery of therapeutic molecules like ribozymes and aptamers. The small size of pRNA-derived nanoparticles also helps to deliver drugs in tight spaces like blood vessels. The main difficulty in using aptamer-based drug delivery is sourcing unique aptamers and other multimers for specific treatments for diseases that potentially degrade therapeutic multimers and nanoparticles in vivo. Nanoparticles need to be stabilized as delivery mechanisms in order to adapt to microenvironments that may result in loss of therapeutic cargo. Triple-negative breast cancer treatment Triple-negative breast cancer (TNBC) is an aggressive form of breast cancer that accounts for ten to fifteen percent of all breast cancer cases. Chemotherapy is the only viable current treatment for TNBC because the loss of target receptors inherent to the disease causes cancer cells to resist therapeutic pharmaceuticals. The three-way junction in the Φ29 DNA packaging motor can help sensitize TNBC cells to chemotherapy using a siRNA drug delivery mechanism to inhibit TNBC growth and volume. This treatment can also be combined with anti-cancer drugs like Doxorubicin to enhance therapeutic effects. See also Bacteriophage Bacteriophage pRNA Φ29 DNA polymerase References Model organisms Podoviridae Bacillus phages
Bacillus virus phi29
[ "Biology" ]
1,446
[ "Model organisms", "Biological models" ]
13,340,538
https://en.wikipedia.org/wiki/Compulsory%20stock%20obligation
In the UK, a compulsory stock obligation (CSO) is a minimum stock of fuel reserves that must be held by a supplier in the United Kingdom against shortages or interruptions in supply. The scheme is administered by the Department of Trade and Industry (DTI). The CSO is based on actual net imports. Description The compulsory stock obligation was put in place due to EU regulations, including EU Directive 2009/119/EC. Companies incur an obligation if they supply 100,000 tonnes of fuel or more per annum. This obligation is assessed as a holding of 67.5 days' stock (50 days for the UK). History Section 6 of the Energy Act 1976 allows the Secretary of State for Energy and Climate Change to require oil suppliers to hold a minimum level of oil stocks. The UK has released these stocks three times: during the lead-up to the Gulf War in 1991, following the impact of Hurricanes Rita and Katrina in the US in 2005, and during the civil disruption in Libya in 2011. References Petroleum in the United Kingdom
Compulsory stock obligation
[ "Chemistry" ]
220
[ "Petroleum", "Petroleum stubs" ]
13,341,540
https://en.wikipedia.org/wiki/Asymmetric%20norm
In mathematics, an asymmetric norm on a vector space is a generalization of the concept of a norm. Definition An asymmetric norm on a real vector space $X$ is a function $p : X \to [0, +\infty)$ that has the following properties: Subadditivity, or the triangle inequality: $p(x + y) \le p(x) + p(y)$ for all $x, y \in X$. Nonnegative homogeneity: $p(rx) = r\,p(x)$ for every $x \in X$ and every non-negative real number $r \ge 0$. Positive definiteness: $p(x) > 0$ unless $x = 0$. Asymmetric norms differ from norms in that they need not satisfy the equality $p(-x) = p(x)$. If the condition of positive definiteness is omitted, then $p$ is an asymmetric seminorm. A weaker condition than positive definiteness is non-degeneracy: that for $x \ne 0$, at least one of the two numbers $p(x)$ and $p(-x)$ is not zero. Examples On the real line $\mathbb{R}$, the function $p$ given by $p(x) = |x|$ for $x \le 0$ and $p(x) = 2x$ for $x \ge 0$ is an asymmetric norm but not a norm. In a real vector space $X$, the Minkowski functional $p_B$ of a convex subset $B \subseteq X$ that contains the origin is defined by the formula $p_B(x) = \inf\{ r > 0 : x \in rB \}$ for $x \in X$. This functional is an asymmetric seminorm if $B$ is an absorbing set, which means that $\bigcup_{r > 0} rB = X$ and ensures that $p_B(x)$ is finite for each $x \in X$. Correspondence between asymmetric seminorms and convex subsets of the dual space If $B \subseteq \mathbb{R}^n$ is a convex set that contains the origin, then an asymmetric seminorm $p$ can be defined on $\mathbb{R}^n$ by the formula $p(x) = \max_{\varphi \in B} \langle \varphi, x \rangle$. For instance, if $B$ is the square with vertices $(\pm 1, \pm 1)$, then $p$ is the taxicab norm $p(x_1, x_2) = |x_1| + |x_2|$. Different convex sets yield different seminorms, and every asymmetric seminorm on $\mathbb{R}^n$ can be obtained from some convex set, called its dual unit ball. Therefore, asymmetric seminorms are in one-to-one correspondence with convex sets that contain the origin. The seminorm $p$ is positive definite if and only if $B$ contains the origin in its topological interior, degenerate if and only if $B$ is contained in a linear subspace of dimension less than $n$, and symmetric if and only if $B = -B$. More generally, if $X$ is a finite-dimensional real vector space and $B$ is a compact convex subset of the dual space $X^*$ that contains the origin, then $p(x) = \max_{\varphi \in B} \varphi(x)$ is an asymmetric seminorm on $X$. See also References S. Cobzas, Functional Analysis in Asymmetric Normed Spaces, Frontiers in Mathematics, Basel: Birkhäuser, 2013; . Linear algebra Norms (mathematics)
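The correspondence with the dual unit ball can be checked numerically. The sketch below evaluates the support-function formula for the square with vertices (±1, ±1) and compares it with the taxicab norm; the function names are illustrative only.

```python
# Sketch: the asymmetric seminorm p_B(x) = max over phi in B of <phi, x>,
# evaluated for B = square with vertices (±1, ±1); it reproduces the taxicab norm.
from itertools import product

VERTICES = list(product((-1.0, 1.0), repeat=2))   # the four points (±1, ±1)

def seminorm_from_dual_ball(x, vertices=VERTICES):
    # For a polytope B it suffices to maximize the inner product over the vertices.
    return max(phi[0] * x[0] + phi[1] * x[1] for phi in vertices)

for x in [(3.0, -4.0), (-2.5, 0.0), (1.0, 1.0)]:
    assert abs(seminorm_from_dual_ball(x) - (abs(x[0]) + abs(x[1]))) < 1e-12
print("p_B agrees with the taxicab norm on the sample points")
```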
Asymmetric norm
[ "Mathematics" ]
442
[ "Linear algebra", "Mathematical analysis", "Norms (mathematics)", "Algebra" ]
13,341,622
https://en.wikipedia.org/wiki/Unparticle%20physics
In theoretical physics, unparticle physics is a speculative theory that conjectures a form of matter that cannot be explained in terms of particles using the Standard Model of particle physics, because its components are scale invariant. Howard Georgi proposed this theory in two 2007 papers, "Unparticle Physics" and "Another Odd Thing About Unparticle Physics". His papers were followed by further work by other researchers into the properties and phenomenology of unparticle physics and its potential impact on particle physics, astrophysics, cosmology, CP violation, lepton flavour violation, muon decay, neutrino oscillations, and supersymmetry. Background All particles exist in states that may be characterized by a certain energy, momentum and mass. In most of the Standard Model of particle physics, particles of the same type cannot exist in another state with all these properties scaled up or down by a common factor – electrons, for example, always have the same mass regardless of their energy or momentum. But this is not always the case: massless particles, such as photons, can exist with their properties scaled equally. This immunity to scaling is called "scale invariance". The idea of unparticles comes from conjecturing that there may be "stuff" that does not necessarily have zero mass but is still scale-invariant, with the same physics regardless of a change of length (or equivalently energy). This stuff is unlike particles, and described as unparticle. The unparticle stuff is equivalent to particles with a continuous spectrum of mass. Such unparticle stuff has not been observed, which suggests that if it exists, it must couple with normal matter weakly at observable energies. Since the Large Hadron Collider (LHC) team announced it will begin probing a higher energy frontier in 2009, some theoretical physicists have begun to consider the properties of unparticle stuff and how it may appear in LHC experiments. One of the great hopes for the LHC is that it might come up with some discoveries that will help us update or replace our best description of the particles that make up matter and the forces that glue them together. Properties Unparticles would have properties in common with neutrinos, which have almost zero mass and are therefore nearly scale invariant. Neutrinos barely interact with matter – most of the time physicists can infer their presence only by calculating the "missing" energy and momentum after an interaction. By looking at the same interaction many times, a probability distribution is built up that tells more specifically how many and what sort of neutrinos are involved. They couple very weakly to ordinary matter at low energies, and the effect of the coupling increases as the energy increases. A similar technique could be used to search for evidence of unparticles. According to scale invariance, a distribution containing unparticles would become apparent because it would resemble a distribution for a fractional number of massless particles. This scale invariant sector would interact very weakly with the rest of the Standard Model, making it possible to observe evidence for unparticle stuff, if it exists. The unparticle theory is a high-energy theory that contains both Standard Model fields and Banks–Zaks fields, which have scale-invariant behavior at an infrared point. The two fields can interact through the interactions of ordinary particles if the energy of the interaction is sufficiently high. 
These particle interactions would appear to have "missing" energy and momentum that would not be detected by the experimental apparatus. Certain distinct distributions of missing energy would signify the production of unparticle stuff. If such signatures are not observed, bounds on the model can be set and refined. Experimental indications Unparticle physics has been proposed as an explanation for anomalies in superconducting cuprate materials, where the charge measured by ARPES appears to exceed predictions from Luttinger's theorem for the quantity of electrons. References External links Particle physics Theoretical physics Fringe physics
Unparticle physics
[ "Physics" ]
827
[ "Theoretical physics", "Particle physics" ]
13,341,682
https://en.wikipedia.org/wiki/Canrenoic%20acid
Canrenoic acid is a synthetic steroidal antimineralocorticoid which was never marketed. See also Potassium canrenoate Canrenone References Antimineralocorticoids Carboxylic acids Conjugated dienes Enones Pregnanes Spirolactones Steroidal antiandrogens Tertiary alcohols
Canrenoic acid
[ "Chemistry" ]
72
[ "Carboxylic acids", "Functional groups" ]
13,341,876
https://en.wikipedia.org/wiki/Threat
A threat is a communication of intent to inflict harm or loss on another person. Intimidation is a tactic used between conflicting parties to make the other timid or psychologically insecure for coercion or control. The act of intimidation for coercion is considered a threat. Threatening or threatening behavior (or criminal threatening behavior) is the crime of intentionally or knowingly putting another person in fear of bodily injury. Some of the more common types of threats forbidden by law are those made with an intent to obtain a monetary advantage or to compel a person to act against their will. In most U.S. states, it is an offense to threaten to (1) use a deadly weapon on another person; (2) injure another's person or property; or (3) injure another's reputation. Law Brazil In Brazil, the crime of threatening someone, defined as a threat to cause unjust and grave harm, is punishable by a fine or three months to one year in prison, as described in the Brazilian Penal Code, article 147. Brazilian law does not treat as a crime a threat that was proffered in a heated discussion. Germany The German Strafgesetzbuch § 241 punishes the crime of threat with a prison term of up to three years or a fine. United States In the United States, federal law criminalizes certain true threats transmitted via the U.S. mail or in interstate commerce. It also criminalizes threatening government officials of the United States. Some U.S. states criminalize cyberbullying. Threats of bodily harm are considered assault. State of Texas In the state of Texas, it is not necessary that the person threatened actually perceive a threat for a threat to exist for legal purposes. True threat A true threat is threatening communication that can be prosecuted under the law. It is distinct from a threat that is made in jest. The U.S. Supreme Court has held that true threats are not protected under the U.S. Constitution based on three justifications: preventing fear, preventing the disruption that follows from that fear, and diminishing the likelihood that the threatened violence will occur. See also References Harassment and bullying Speech crimes Psychological abuse
Threat
[ "Biology" ]
448
[ "Harassment and bullying", "Behavior", "Aggression" ]
13,342,572
https://en.wikipedia.org/wiki/Single-input%20single-output%20system
In control engineering, a single-input and single-output (SISO) system is a simple single-variable control system with one input and one output. In radio, it is the use of only one antenna both in the transmitter and receiver. Details SISO systems are typically less complex than multiple-input multiple-output (MIMO) systems. It is usually also easier to make order-of-magnitude or trend predictions "on the fly" or "on the back of an envelope" for SISO systems, whereas MIMO systems have too many interactions to be traced through quickly, thoroughly, and effectively by inspection. Frequency-domain techniques for analysis and controller design dominate SISO control system theory. The Bode plot, Nyquist stability criterion, Nichols plot, and root locus are the usual tools for SISO system analysis. Controllers can be designed through polynomial design and root locus design methods, to name just two of the more popular. Often SISO controllers will be PI, PID, or lead-lag. See also Control theory References Control engineering Transfer functions
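As a small illustration of the frequency-domain tools mentioned above, the sketch below builds a SISO transfer function and computes its Bode data with SciPy; the particular plant G(s) = 1/(s² + 2s + 1) is an arbitrary example chosen for illustration, not one taken from the article.

```python
# Sketch: a SISO plant G(s) = 1 / (s^2 + 2s + 1) analysed with SciPy's signal tools.
from scipy import signal

plant = signal.TransferFunction([1.0], [1.0, 2.0, 1.0])  # one input, one output

# Frequency-domain view (Bode data) and a time-domain step response.
w, mag_db, phase_deg = signal.bode(plant)
t, y = signal.step(plant)

print(f"Low-frequency gain: {mag_db[0]:.1f} dB")
print(f"Step response settles near {y[-1]:.2f}")
```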
Single-input single-output system
[ "Engineering" ]
215
[ "Control engineering" ]
13,342,698
https://en.wikipedia.org/wiki/Robbins%20lemma
In statistics, the Robbins lemma, named after Herbert Robbins, states that if X is a random variable having a Poisson distribution with parameter λ, and f is any function for which the expected value E(f(X)) exists, then E(X f(X − 1)) = λ E(f(X)), or equivalently E(X f(X)) = λ E(f(X + 1)). Robbins introduced this proposition while developing empirical Bayes methods. References Theorems in statistics Lemmas Poisson distribution
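A quick Monte Carlo check of the identity is sketched below; the choices of f and of λ are arbitrary and purely illustrative.

```python
# Sketch: numerically checking Robbins' lemma  E[X f(X - 1)] = λ E[f(X)]
# for X ~ Poisson(λ), with an arbitrary test function f.
import numpy as np

rng = np.random.default_rng(0)
lam, n = 3.5, 1_000_000
x = rng.poisson(lam, size=n)

f = lambda k: 1.0 / (2.0 + k)          # any f with E[f(X)] finite will do

lhs = np.mean(x * f(x - 1))            # estimate of E[X f(X - 1)]
rhs = lam * np.mean(f(x))              # estimate of λ E[f(X)]
print(lhs, rhs)                        # the two estimates agree up to Monte Carlo error
```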
Robbins lemma
[ "Mathematics" ]
74
[ "Mathematical problems", "Mathematical theorems", "Theorems in statistics", "Lemmas" ]
13,343,202
https://en.wikipedia.org/wiki/Aging-associated%20diseases
An aging-associated disease (commonly termed age-related disease, ARD) is a disease that is seen with increasing frequency with increasing senescence. Aging-associated diseases are essentially complications of senescence, distinguished from the aging process itself because all adult animals age (with rare exceptions) but not all adult animals experience all age-associated diseases. The term does not refer to age-specific diseases, such as the childhood diseases chicken pox and measles, only diseases of the elderly. They are also not accelerated aging diseases, all of which are genetic disorders. Examples of aging-associated diseases are atherosclerosis and cardiovascular disease, cancer, arthritis, cataracts, osteoporosis, type 2 diabetes, hypertension and Alzheimer's disease. The incidence of all of these diseases increases exponentially with age. Of the roughly 150,000 people who die each day across the globe, about two thirds—100,000 per day—die of age-related causes. In industrialized nations, the proportion is higher, reaching 90%. Patterns of differences By age 3, about 30% of rats have had cancer, whereas by age 85 about 30% of humans have had cancer. Humans, dogs and rabbits get Alzheimer's disease, but rodents do not. Elderly rodents typically die of cancer or kidney disease, but not of cardiovascular disease. In humans, the relative incidence of cancer increases exponentially with age for most cancers, but levels off or may even decline by age 60–75 (although colon/rectal cancer continues to increase). People with the so-called segmental progerias are vulnerable to different sets of diseases. Those with Werner's syndrome experience osteoporosis, cataracts, and cardiovascular disease, but not neurodegeneration or Alzheimer's disease; those with Down syndrome have type 2 diabetes and Alzheimer's disease, but not high blood pressure, osteoporosis or cataracts. In Bloom syndrome, those affected most often die of cancer. Research Aging (senescence) increases vulnerability to age-associated diseases, whereas genetics determines vulnerability or resistance between species and individuals within species. Some age-related changes (like graying hair) are said to be unrelated to an increase in mortality. But some biogerontologists believe that the same underlying changes that cause graying hair also increase mortality in other organ systems and that understanding the incidence of age-associated disease will advance knowledge of the biology of senescence just as knowledge of childhood diseases advanced knowledge of human development. Strategies for engineered negligible senescence (SENS) is an emerging research strategy that aims to repair "root causes" of age-related illness and degeneration, as well as develop medical procedures to periodically repair all such damage in the human body, thereby maintaining a youth-like state indefinitely. The SENS programme has identified seven types of aging-related damage, and feasible solutions have been outlined for each. Some critics argue that the SENS agenda is optimistic at best, and that the aging process is too complex and little-understood for SENS to be scientific or implementable in the foreseeable future. It has been proposed that age-related diseases are mediated by vicious cycles. On the basis of extensive research, DNA damage has emerged as a major culprit in cancer and numerous other diseases related to aging. DNA damage can initiate the development of cancer or other aging-related diseases depending on several factors.
These include the type, amount, and location of the DNA damage in the body, the type of cell experiencing the damage and its stage in the cell cycle, and the specific DNA repair processes available to react to the damage. Types Macular degeneration Age-related macular degeneration (AMD) is a disease that affects the eyes and can lead to vision loss through breakdown of the central part of the retina called the macula. Degeneration can occur in one eye or both and can be classified as either wet (neovascular) or dry (atrophic). Wet AMD is commonly caused by blood vessels near the retina that lead to swelling of the macula. The cause of dry AMD is less clear, but it is thought to be partly caused by breakdown of light-sensitive cells and tissue surrounding the macula. A major risk factor for AMD is being over the age of 60. Alzheimer's Alzheimer's disease is classified as a "protein misfolding" disease. Aging causes mutations in protein folding, and as a result causes deposits of abnormal modified proteins to accumulate in specific areas of the brain. In Alzheimer's, deposits of beta-amyloid and hyperphosphorylated tau protein form extracellular plaques and intracellular tangles. These deposits are shown to be neurotoxic and cause cognitive impairment due to their initiation of destructive biochemical pathways. Atherosclerosis Atherosclerosis is categorized as an aging disease and is brought about by vascular remodeling, the accumulation of plaque, and the loss of arterial elasticity. Over time, these processes can stiffen the vasculature. For these reasons, older age is listed as a major risk factor for atherosclerosis. Specifically, the risk of atherosclerosis increases for men above 45 years of age and women above 55 years of age. Benign prostatic hyperplasia Benign prostatic hyperplasia (BPH) is a noncancerous enlargement of the prostate gland due to increased growth. An enlarged prostate can result in incomplete or complete blockage of the bladder and interferes with a man's ability to urinate properly. Symptoms include overactive bladder, decreased stream of urine, hesitancy in urinating, and incomplete emptying of the bladder. By age 40, 10% of men will have signs of BPH, and by age 60 this percentage increases five-fold. Men over the age of 80 have over a 90% chance of developing BPH, and almost 80% of men will develop BPH in their lifetime. Cancer Although it is possible for cancer to strike at any age, most patients with invasive cancer are over 65, and the most significant risk factor for developing cancer is age. According to cancer researcher Robert A. Weinberg, "If we lived long enough, sooner or later we all would get cancer." Some of the association between aging and cancer is attributed to immunosenescence, errors accumulated in DNA over a lifetime and age-related changes in the endocrine system. Aging's effect on cancer is complicated by factors such as DNA damage and inflammation promoting it and factors such as vascular aging and endocrine changes inhibiting it. Parkinson's Parkinson's disease, or simply Parkinson's, is a long-term degenerative disorder of the central nervous system that mainly affects the motor system. The disease has many complications, including anxiety, dementia, and depression. Parkinson's disease typically occurs in people over the age of 60, of whom about one percent are affected. The prevalence of Parkinson's disease dementia also increases with age, and to a lesser degree, duration of the disease.
Exercise in middle age may reduce the risk of PD later in life. Stroke Stroke was the second most frequent cause of death worldwide in 2011, accounting for 6.2 million deaths (~11% of the total). Although stroke can occur at any age, including in childhood, the risk of stroke increases exponentially from 30 years of age, and the cause varies by age. Advanced age is one of the most significant stroke risk factors. 95% of strokes occur in people age 45 and older, and two-thirds of strokes occur in those over the age of 65. A person's risk of dying if he or she does have a stroke also increases with age. Endocrine diseases Studies in animal models show that clearance of senescent cells improves multiple age-related endocrine disorders. Osteoporosis Bone density declines with age. By the age of 85 years, ~70% of women and 30% of men have osteoporosis, defined as a bone density less than or equal to 2.5 standard deviations below that of young adults. Metabolic syndrome The metabolic syndrome is the co-occurrence of metabolic risk factors for type 2 diabetes and cardiovascular disease (abdominal obesity, hyperglycemia, dyslipidemia, and hypertension). The prevalence of the metabolic syndrome increases with age, reaching close to 50% of people over 60 years old in the USA. See also Accelerated aging disease Alliance for Aging Research Gerontology Senescence References External links Segmental Progeria Geriatrics Senescence
Aging-associated diseases
[ "Chemistry", "Biology" ]
1,769
[ "Senescence", "Aging-associated diseases", "Metabolism", "Cellular processes" ]
13,343,248
https://en.wikipedia.org/wiki/Muffler%20man
Muffler men are large molded fiberglass sculptures that are placed as advertising icons, roadside attractions, or for decorative purposes, predominantly in the United States. Standing approximately tall, the first figure was a Paul Bunyan character designed to hold an axe. Derivatives of that figure were widely used to hold full-sized car mufflers, tires, or other items promoting various roadside businesses. International Fiberglass of Venice, California, constructed most Muffler Men. While the fiberglass figures are no longer manufactured, many still exist throughout a number of states across the United States, with some also in Canada. At least four remain on U.S. Route 66, including Chicken Boy and Gemini Giant. Muffler Men have made appearances as characters in the comic strip "Zippy the Pinhead" by Bill Griffith, often in conversation with Zippy. Two books have been devoted to the distinctive roadside figures, and the July 2012 issue of AAA New Mexico Journey devoted its front cover to their 50th anniversary. History Boatbuilder Steve Dashew established International Fiberglass in 1963 by purchasing and renaming Bob Prewitt's workshop, Prewitt Fiberglass. The oversized fiberglass men, women, and dinosaurs began as a sideline. The first of the figures, a Paul Bunyan holding an oversized axe to promote a restaurant, was created by Bob Prewitt in 1962 for the Lumberjack Café on Route 66 in Flagstaff, Arizona. Bill Swan, who worked for Prewitt, helped to design the face of the first Paul Bunyan Muffler Man. As the fiberglass molds for this initial figure existed when Dashew acquired the company, similar characters could be readily created by keeping the same basic characteristics (such as the right palm up, left palm down position in which the original Bunyan lumberjack figure held his axe) with minor variation. Various fiberglass molds allowed different heads, limbs, or torsos to be substituted to create multiple variant characters. Some would promote food, others automotive products. A fifteen-foot Amish man standing over a diner in Lancaster County, Pennsylvania, and a Uniroyal gal in a skirt or bikini were among the many variants. Thousands of the oversize figures would be deployed in a little over a decade at a typical cost of $1000–$2800 each. Some would be customised as promotions of individual roadside businesses on the US Highway system. Many were created to advertise franchise and chain brands, such as the Enco and Humble tigers and the Phillips Petroleum cowboys. A novelty fiberglass dinosaur figure was most often seen promoting Sinclair Oil stations, but also appeared at various miniature golf courses. When businesses closed or were sold, often the figures would be repainted and adapted to represent different characters or were relocated. The statues have become natives, Vikings, football players and sports mascots, country bumpkins, cooks and chefs, cowboys, soldiers, sea pirates, and astronauts. The use of roadside novelties represented a means for independent businesses to differentiate themselves in an era before two-lane highways were bypassed by freeways; businesses located directly on the main road would rely heavily on neon signage, promotional displays, and gimmicks to make themselves more visible to passing motorists. Increases in costs to deliver the lightweight but oversized figures proved problematic, and business declined with the 1973 oil crisis. International Fiberglass was sold and closed permanently in 1976.
Many of the characters, such as a Texaco Big Friend, initially created for a cancelled service station chain promotion, would become rare after International Fiberglass ceased operations. List of muffler men See also Chicken Boy in Highland Park, California Gemini Giant in Wilmington, Illinois U.S. Route 66 in Illinois. A Giant Hot Dog Statue was relocated from the former Bunyon's in Cicero to Atlanta and restored by volunteers. References External links "Muffler Men" at Roadside America website http://americangiants.wordpress.com https://www.rightpalmup.com Roadside attractions in the United States Transport culture Fiberglass sculptures Public art in the United States
Muffler man
[ "Physics" ]
823
[ "Physical systems", "Transport", "Transport culture" ]
13,343,317
https://en.wikipedia.org/wiki/Photoinitiator
In chemistry, a photoinitiator is a molecule that creates reactive species (free radicals, cations or anions) when exposed to radiation (UV or visible). Synthetic photoinitiators are key components in photopolymers (for example, photo-curable coatings, adhesives and dental restoratives). Some small molecules in the atmosphere can also act as photoinitiators by decomposing to give free radicals (in photochemical smog). For instance, nitrogen dioxide (NO2) is produced in large quantities by gasoline-burning internal combustion engines. NO2 in the troposphere gives smog its brown coloration and catalyzes production of toxic ground-level ozone (O3). Molecular oxygen (O2) also serves as a photoinitiator in the stratosphere, breaking down into atomic oxygen and combining with O2 in order to form the ozone in the ozone layer. Reactions Photoinitiators can create reactive species by different pathways, including photodissociation and electron transfer. As an example of dissociation, hydrogen peroxide can undergo homolytic cleavage, with the O–O bond cleaving to form two hydroxyl radicals. Certain azo compounds (such as azobisisobutyronitrile) can also photolytically cleave, forming two alkyl radicals and nitrogen gas. These free radicals can then promote other reactions. Atmospheric photoinitiators Peroxides Since molecular oxygen can abstract H atoms from certain radicals, the HOO· radical is easily created. This particular radical can further abstract H atoms, creating H2O2, or hydrogen peroxide; peroxides can further cleave photolytically into two hydroxyl radicals. More commonly, HOO· can react with free oxygen atoms to yield a hydroxyl radical (·OH) and oxygen gas. In both cases, the ·OH radicals formed can serve to oxidize organic compounds in the atmosphere. Nitrogen dioxide Nitrogen dioxide can also be photolytically cleaved by photons of wavelength less than 400 nm, producing atomic oxygen and nitric oxide. Atomic oxygen is a highly reactive species and can abstract an H atom from anything, including water. Nitrogen dioxide can be regenerated through a reaction between certain peroxy-containing radicals and NO. Molecular oxygen In the stratosphere, molecular oxygen is an important photoinitiator that begins the ozone-production process in the ozone layer. Oxygen can be photolyzed into atomic oxygen by light with wavelength less than 240 nm. Atomic oxygen can then combine with more molecular oxygen to form ozone. However, ozone can also be photolyzed back into O and O2. Furthermore, atomic oxygen and ozone can combine to form two molecules of O2. This set of reactions governs the production of ozone and can be combined to calculate its equilibrium concentration. Commercial photoinitiators and uses AIBN Azobisisobutyronitrile is a white powder often used as a photoinitiator for vinyl-based polymers such as polyvinyl chloride, also known as PVC. Because this particular photoinitiator produces nitrogen gas upon decomposition, it is often used as a blowing agent to change the shape and/or texture of plastics. Benzoyl peroxide Benzoyl peroxide, much like azobisisobutyronitrile, is a white powder used as a photoinitiator in various commercial and industrial processes, including plastics production. Unlike AIBN, however, benzoyl peroxide produces oxygen gas upon decomposing, giving this compound a host of medical uses as well. Upon contact with the skin, benzoyl peroxide breaks down, producing oxygen gas, among other things. The oxygen gas is absorbed into the pores of the skin, where it kills off the acne-causing bacterium Cutibacterium acnes.
In addition, the free radicals produced can break down dead skin cells. Clearing out these dead cells prevents pore blockage and, by extension, acne breakouts. 2,2-Dimethoxy-2-phenylacetophenone Camphorquinone Camphorquinone (CQ) is a photosensitiser used with an amine system; upon light irradiation it generates primary radicals. These free radicals then attack the double bonds of resin monomers, resulting in polymerization. The physical properties of the cured resins are affected by the generation of primary radicals during the initial stage of polymerization. Irgacure 819 Irgacure 819 (BAPO, bis(2,4,6-trimethylbenzoyl)-phenylphosphine oxide) is a Norrish-type photoinitiator used in polymerization processes such as two-photon polymerization. When exposed to light it forms four radicals (2, 3, 5) per decomposed molecule (1), making it highly efficient in initiating polymerization. The second set of radicals forms through abstraction or chain transfer, further driving the reaction. See also Radical initiator References Bibliography Air pollution Atmospheric chemistry
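The stratospheric reactions described under "Molecular oxygen" above are essentially the Chapman cycle, and a standard textbook steady-state treatment (sketched here as an illustration, not taken from this article; the rate labels j1, k2, j3 and k4 are assumed names) yields the equilibrium ozone concentration:

```latex
% Chapman-cycle sketch (standard textbook treatment, stated here as an illustration)
\begin{align*}
\mathrm{O_2} + h\nu &\xrightarrow{\;j_1\;} 2\,\mathrm{O} \\
\mathrm{O} + \mathrm{O_2} + \mathrm{M} &\xrightarrow{\;k_2\;} \mathrm{O_3} + \mathrm{M} \\
\mathrm{O_3} + h\nu &\xrightarrow{\;j_3\;} \mathrm{O_2} + \mathrm{O} \\
\mathrm{O} + \mathrm{O_3} &\xrightarrow{\;k_4\;} 2\,\mathrm{O_2}
\end{align*}
% Assuming steady state for O and for odd oxygen (O + O3), the ozone ratio is
\[
\frac{[\mathrm{O_3}]}{[\mathrm{O_2}]} \approx \sqrt{\frac{j_1\,k_2\,[\mathrm{M}]}{j_3\,k_4}}
\]
```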
Photoinitiator
[ "Chemistry" ]
1,024
[ "nan" ]
13,343,892
https://en.wikipedia.org/wiki/IEEE%20MultiMedia
IEEE MultiMedia is a quarterly peer-reviewed scientific journal published by the IEEE Computer Society and covering multimedia technologies. Topics of interest include image processing, video processing, audio analysis, text retrieval and understanding, data mining and analysis, and data fusion. It was established in 1994 and the current editor-in-chief is Shu-Ching Chen (Florida International University). The 2018 impact factor was 3.556. External links Multimedia Computer science journals Multimedia Academic journals established in 1994 Quarterly journals English-language journals
IEEE MultiMedia
[ "Technology" ]
102
[ "Multimedia" ]
13,344,659
https://en.wikipedia.org/wiki/Memory%20safety
Memory safety is the state of being protected from various software bugs and security vulnerabilities when dealing with memory access, such as buffer overflows and dangling pointers. For example, Java is said to be memory-safe because its runtime error detection checks array bounds and pointer dereferences. In contrast, C and C++ allow arbitrary pointer arithmetic, with pointers implemented as direct memory addresses and no provision for bounds checking; they are therefore potentially memory-unsafe.

History
Memory errors were first considered in the context of resource management and time-sharing systems, in an effort to avoid problems such as fork bombs. Developments were mostly theoretical until the Morris worm, which exploited a buffer overflow in fingerd. The field of computer security developed quickly thereafter, escalating with a multitude of new attacks, such as the return-to-libc attack, and defense techniques, such as the non-executable stack and address space layout randomization. Randomization prevents most buffer overflow attacks and requires the attacker to use heap spraying or other application-dependent methods to obtain addresses, although its adoption has been slow. However, deployments of the technology are typically limited to randomizing libraries and the location of the stack.

Impact
In 2019, a Microsoft security engineer reported that 70% of all security vulnerabilities were caused by memory safety issues. In 2020, a team at Google similarly reported that 70% of all "severe security bugs" in Chromium were caused by memory safety problems. Many other high-profile vulnerabilities and exploits in critical software have ultimately stemmed from a lack of memory safety, including Heartbleed and a long-standing privilege escalation bug in sudo. The pervasiveness and severity of vulnerabilities and exploits arising from memory safety issues have led several security researchers to describe identifying them as "shooting fish in a barrel".

Approaches
Some modern high-level programming languages are memory-safe by default, though not completely, since they only check their own code and not the systems they interact with. Automatic memory management in the form of garbage collection is the most common technique for preventing memory safety problems, since it prevents common errors such as use-after-free for all data allocated within the language runtime. When combined with automatic bounds checking on all array accesses and no support for raw pointer arithmetic, garbage-collected languages provide strong memory safety guarantees (though the guarantees may be weaker for low-level operations explicitly marked unsafe, such as use of a foreign function interface). However, the performance overhead of garbage collection makes these languages unsuitable for certain performance-critical applications.
For languages that use manual memory management, memory safety is not usually guaranteed by the runtime. Instead, memory safety properties must either be guaranteed by the compiler via static program analysis and automated theorem proving, or carefully managed by the programmer at runtime. For example, the Rust programming language implements a borrow checker to ensure memory safety, while C and C++ provide no memory safety guarantees. The substantial amount of software written in C and C++ has motivated the development of external static analysis tools such as Coverity, which offers static memory analysis for C.
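As a concrete illustration of the kind of code these approaches target, here is a minimal C sketch (hypothetical code, not drawn from any cited source) showing two memory-unsafe operations that C permits: an out-of-bounds write and a use of freed memory. The comments note what a memory-safe language or a runtime checker would do instead.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        /* Out-of-bounds write: C performs no bounds checking, so the last
           iteration silently writes past the end of the array. A memory-safe
           language would raise an error here; a runtime checker such as
           AddressSanitizer aborts with a diagnostic. */
        int buf[4];
        for (int i = 0; i <= 4; i++) {   /* off-by-one: writes buf[4] */
            buf[i] = i;
        }
        printf("%d\n", buf[0]);

        /* Use after free: the pointer still holds the old address (a dangling
           pointer), and C allows reading through it anyway. */
        char *msg = malloc(16);
        if (msg == NULL) return 1;
        strcpy(msg, "hello");
        free(msg);
        printf("%c\n", msg[0]);          /* undefined behavior */

        return 0;
    }

Compiled with a checker enabled (for example, gcc -fsanitize=address), both defects are reported at the faulting instruction, which is the style of runtime detection discussed in this section.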
DieHard, its redesign DieHarder, and the Allinea Distributed Debugging Tool are special heap allocators that allocate objects in their own random virtual memory page, allowing invalid reads and writes to be stopped and debugged at the exact instruction that causes them. Protection relies upon hardware memory protection, so overhead is typically not substantial, although it can grow significantly if the program makes heavy use of allocation. Randomization provides only probabilistic protection against memory errors, but can often be easily implemented in existing software by relinking the binary.
The memcheck tool of Valgrind uses an instruction set simulator and runs the compiled program in a memory-checking virtual machine, providing guaranteed detection of a subset of runtime memory errors. However, it typically slows the program down by a factor of 40, and it must be explicitly informed of custom memory allocators.
With access to the source code, libraries exist that collect and track legitimate values for pointers ("metadata") and check each pointer access against the metadata for validity, such as the Boehm garbage collector. In general, memory safety can be safely assured using tracing garbage collection and the insertion of runtime checks on every memory access; this approach has overhead, but less than that of Valgrind. All garbage-collected languages take this approach.
For C and C++, many tools exist that perform a compile-time transformation of the code to add memory safety checks at runtime, such as CheckPointer and AddressSanitizer, which imposes an average slowdown factor of 2. BoundWarden is a newer spatial memory enforcement approach that uses a combination of compile-time transformation and runtime concurrent monitoring techniques.
Fuzz testing is well suited to finding memory safety bugs and is often used in combination with dynamic checkers such as AddressSanitizer.

Classification of memory safety errors
Many different types of memory errors can occur:

Spatial
Buffer overflow – out-of-bounds writes can corrupt the content of adjacent objects, internal data (such as bookkeeping information for the heap), or return addresses.
Buffer over-read – out-of-bounds reads can reveal sensitive data or help attackers bypass address space layout randomization.

Temporal
Use after free – dereferencing a dangling pointer that stores the address of an object that has been deleted.
Double free – repeated calls to free may prematurely free a new object at the same address. If the exact address has not been reused, other corruption may occur, especially in allocators that use free lists.
Uninitialized variables – a variable that has not been assigned a value is used. It may contain sensitive information or bits that are not valid for the type. Wild pointers arise when a pointer is used prior to initialization to some known state; they show the same erratic behaviour as dangling pointers, though they are less likely to stay undetected.
Invalid free – passing an invalid address to free can corrupt the heap.
Mismatched free – when multiple allocators are in use, attempting to free memory with a deallocation function of a different allocator.

Contributing bugs
Depending on the language and environment, other types of bugs can contribute to memory unsafety:
Stack exhaustion – occurs when a program runs out of stack space, typically because of too deep recursion.
A guard page typically halts the program, preventing memory corruption, but functions with large stack frames may bypass the page, and kernel code may not have the benefit of guard pages.
Heap exhaustion – the program tries to allocate more memory than the amount available. In some languages, this condition must be checked for manually after each allocation.
Memory leak – failing to return memory to the allocator may set the stage for heap exhaustion (above). Failing to run the destructor of an RAII object may lead to unexpected results, but is not itself considered a memory safety error.
Null pointer dereference – dereferencing a null pointer usually causes an exception or program termination, but it can cause corruption in operating system kernels, in systems without memory protection, or when the dereference involves a large or negative offset. In C++, because dereferencing a null pointer is undefined behavior, compiler optimizations may cause other checks to be removed, leading to vulnerabilities elsewhere in the code (see the sketch below).

Some lists may also include race conditions (concurrent reads and writes to shared memory) as part of memory safety (e.g., for access control). The Rust programming language prevents many kinds of memory-based race conditions by default, because it ensures that a value has either one writer or any number of readers, but never both at once. Many other programming languages, such as Java, do not automatically prevent memory-based race conditions, yet are still generally considered "memory safe" languages. Therefore, countering race conditions is generally not considered necessary for a language to be considered memory safe.

References

Software bugs Computer security exploits Programming language implementation
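As an illustration of the null-pointer point above, the following hypothetical C function (for illustration only; the name and signature are assumptions, not from any cited source) dereferences its argument before checking it. Because dereferencing a null pointer is undefined behavior, an optimizing compiler is allowed to assume the pointer is non-null and delete the later check.

    #include <stddef.h>

    /* Hypothetical helper, for illustration only. */
    int read_flags(const int *flags) {
        int value = *flags;      /* undefined behavior if flags == NULL */

        if (flags == NULL) {     /* the compiler may conclude this test is
                                    always false and remove it entirely */
            return -1;
        }
        return value;
    }

If callers rely on the removed check, a null argument leads straight to the dereference, turning a defensive test into a latent crash or, on systems without memory protection, into memory corruption.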
Memory safety
[ "Technology" ]
1,662
[ "Computer security exploits" ]