Dataset schema (column, type, observed range):
id — int64, 39 to 79M
url — string, length 31 to 227 characters
text — string, length 6 to 334k characters
source — string, length 1 to 150 characters
categories — list, length 1 to 6
token_count — int64, 3 to 71.8k
subcategories — list, length 0 to 30
7,204,246
https://en.wikipedia.org/wiki/James%20Kitson%2C%201st%20Baron%20Airedale
James Kitson, 1st Baron Airedale (22 September 1835 – 16 March 1911), PC, DSc, was an industrialist, locomotive builder, Liberal Party politician and a Member of Parliament for the Colne Valley. He was known as Sir James Kitson from 1886, until he was elevated to the peerage in 1907. Lord Airedale was a prominent Unitarian in Leeds, Yorkshire. Life James Kitson's parents were James Kitson (1807–1885), a self-made locomotive manufacturer who founded Kitson and Company, and his first wife Ann. They had several children. One of them, Emily, married the royal obstetrician William Smoult Playfair in 1864, and became inadvertently involved in a court case with implications for medical ethics that resonate today. Another of the children was Arthur Octavius Kitson, whose wife was the subject of the court case. Kitson attended school in Wakefield and studied chemistry and natural sciences at University College London. The loss of his first wife Emily in 1873 was devastating. His sister-in-law Clara Talbot (née Cliff, died 1905) and her husband Grosvenor Talbot (1835–1926) were described as "lifelines", "tending to the grieving man and looking after his children". Kitson and Talbot were college friends. Four years later, Kitson's older brother Frederick, a gifted engineer, also died. In 1885 Kitson purchased Gledhow Hall in Gledhow, Leeds. He redecorated the hall and entertained lavishly, including playing host to Prime Minister William Gladstone and his son Herbert, who was a witness at Kitson's second marriage to Mary Laura Smith in 1881. He commissioned Burmantofts Pottery to create an elaborate bathroom with faience in honour of a visit from the Prince of Wales circa 1885. Career In 1854, when Kitson was aged nineteen, his father bought the ironworks at Monk Bridge and put him and his elder brother, Frederick, in charge. Monkbridge was amalgamated with their father's Airedale Foundry in 1858. In 1886 the business became a limited liability company under family control with £250,000 in capital. Frederick Kitson withdrew from the business because of ill health several years before his death in 1877. Their father retired in 1876, but James Kitson had in reality run the firm since 1862. The Airedale Foundry built nearly 6,000 locomotives for use in Britain and abroad from its founding until the end of the 19th century. The company diversified into manufacturing stationary engines for agricultural use and steam engines for tramways. From the 1880s, the Monkbridge works made steel using the Siemens–Martin open-hearth process. The Airedale Foundry and Monkbridge Works both employed about 2,000 workers in 1911. In connection with his business interests Kitson was a member of the Institution of Mechanical Engineers from 1859 and was president of the Iron Trade Association. He was president of the Iron and Steel Institute in 1889 and was awarded the institute's Bessemer gold medal in 1903. Between 1899 and 1901, he was a member of the council of the Institution of Civil Engineers. Kitson's other interests included the London and Northern Steamship Company and the Yorkshire Banking Company. He was a director of the London City and Midland Bank and president of the Baku Russian Petroleum Company. He was also a director of the North Eastern Railway Company and president of the Leeds Chamber of Commerce from 1880 to 1881. Financial success allowed Kitson the time, money and influence to pursue other interests, including politics. 
He was president of the Leeds Liberal Association and ran the election campaign for William Ewart Gladstone. In 1880, Kitson was a committee member of the Leeds Trained Nurses Institution. He served as MP for Colne Valley from 1892 until 1907, supporting education, Irish Home Rule, and the provision of old age pensions. Kitson was a member of the Institution of Civil Engineers and the Institution of Mechanical Engineers. He supported the Mechanics' Institute and the Yorkshire College, the forerunner of the University of Leeds, which awarded him an honorary doctorate (DSc) in 1904. Kitson was never a member of Leeds Council but was the city's first lord mayor in 1896–97. He was created a baronet in 1886 and was sworn of the Privy Council in 1906. On 17 July 1907 Kitson was raised to the peerage as the first Baron Airedale, of Gledhow in the West Riding of the County of York. Kitson was appointed Honorary Colonel of the 3rd (Volunteer) Battalion, The Prince of Wales's Own (West Yorkshire Regiment) on 20 December 1902. Death Airedale died following a heart attack at the Hotel Meurice in Paris on 16 March 1911; he had been returning home by train from the south of France. His funeral service was held at Mill Hill Chapel on 22 March before his body was taken for burial to Roundhay Church along a route lined by 4,000 workpeople. A subsequent memorial service at St Margaret's Church in Westminster was attended by a hundred MPs. Mill Hill Chapel The Kitsons were closely linked to Mill Hill Chapel in Leeds City Square. In 1897 Kitson paid for an extension to the vestry. William Morris designed a window which was dedicated to his mother Ann Kitson, who died in 1865. Archibald Keightley Nicholson created a memorial window to Lord Airedale representing the continuation of Christianity. In the early 20th century Lord Airedale was a member of the chapel's small, politically active and very influential congregation. Kitson contributed to a Parliamentary inquiry into the Religious Education for Dissenting Protestants in 1899. Family Kitson married Emily Christina Cliff (1837–1873) on 20 September 1860. Emily was involved in the establishment of the Yorkshire Ladies Council of Education alongside Frances Lupton. Kitson and his wife Emily had issue: Sir Albert Ernest Kitson, 2nd Baron Airedale (1863–1944); James Clifford Kitson (6 December 1864 – 25 September 1942); Charles Clifford Kitson, twin of James Clifford (born 6 December 1864); Emily (born 1866); Edward Christian (born 1873); (Alice) Hilda (1872–1944). Kitson married Mary Laura Smith (died 1939) on 1 June 1881 and had issue: Sir Roland Dudley Kitson, 3rd Baron Airedale (1882–1958); Olive Mary (born 1887). Mayors and lord mayors Several members of the Kitson family were mayor or Lord Mayor of Leeds: in 1860 and 1861, James Kitson; in 1896 and 1897, his son, Sir James Kitson MP (later the 1st Baron Airedale); in 1908 (and briefly in 1910), Frederick J Kitson; in 1942, Jessie Beatrice Kitson. Lord Airedale's father, James Kitson, was Mayor of Leeds in 1860–1861. A generation later it was his son who became the first Lord Mayor in 1896–1897. In 1908 the lord mayor was Frederick James Kitson, Lord Airedale's nephew. In late 1942, the elected lord mayor died suddenly, and the council asked a fourth Kitson to take over: Jessie Beatrice Kitson (born 1877), daughter of John Hawthorn Kitson (died 1899), the younger brother of the first Lord Airedale. 
Arms References Oxford Dictionary of National Biography Retrieved 25 June 2008 External links Parliamentary Archives, Papers of James Kitson, 1st Baron Airedale of Gledhow 1835 births 1911 deaths Liberal Party (UK) MPs for English constituencies 1 Members of the Privy Council of the United Kingdom Presidents of the Liberal Party (UK) UK MPs 1892–1895 UK MPs 1895–1900 UK MPs 1900–1906 UK MPs 1906–1910 UK MPs who were granted peerages Alumni of University College London English mechanical engineers Mayors of Leeds Lord mayors of Leeds English Unitarians Bessemer Gold Medal Peers created by Edward VII
James Kitson, 1st Baron Airedale
[ "Chemistry" ]
1,559
[ "Bessemer Gold Medal", "Chemical engineering awards" ]
7,204,363
https://en.wikipedia.org/wiki/Interchange%20of%20limiting%20operations
In mathematics, the study of interchange of limiting operations is one of the major concerns of mathematical analysis, in that two given limiting operations, say L and M, cannot be assumed to give the same result when applied in either order. One of the historical sources for this theory is the study of trigonometric series. Formulation In symbols, the assumption LM = ML, where the left-hand side means that M is applied first, then L, and vice versa on the right-hand side, is not a valid equation between mathematical operators, under all circumstances and for all operands. An algebraist would say that the operations do not commute. The approach taken in analysis is somewhat different. Conclusions that assume limiting operations do 'commute' are called formal. The analyst tries to delineate conditions under which such conclusions are valid; in other words, mathematical rigour is established by the specification of some set of sufficient conditions for the formal analysis to hold. This approach justifies, for example, the notion of uniform convergence. It is relatively rare for such sufficient conditions to be also necessary, so that a sharper piece of analysis may extend the domain of validity of formal results. Professionally speaking, therefore, analysts push the envelope of techniques, and expand the meaning of well-behaved for a given context. G. H. Hardy wrote that "The problem of deciding whether two given limit operations are commutative is one of the most important in mathematics". An opinion apparently not in favour of the piece-wise approach, but of leaving analysis at the level of heuristic, was that of Richard Courant. Examples Examples abound, one of the simplest being that for a double sequence a_{m,n}: it is not necessarily the case that the operations of taking the limits as m → ∞ and as n → ∞ can be freely interchanged. For example, take a_{m,n} = 2^(m−n), in which taking the limit first with respect to n gives 0, while taking it first with respect to m gives ∞. Many of the fundamental results of infinitesimal calculus also fall into this category: the symmetry of partial derivatives, differentiation under the integral sign, and Fubini's theorem deal with the interchange of differentiation and integration operators. One of the major reasons why the Lebesgue integral is used is that theorems exist, such as the dominated convergence theorem, that give sufficient conditions under which integration and limit operation can be interchanged. Necessary and sufficient conditions for this interchange were discovered by Federico Cafiero. List of related theorems Interchange of limits: Moore–Osgood theorem. Interchange of limit and infinite summation: Tannery's theorem. Interchange of limit and derivatives: if a sequence of functions fn converges at at least one point and the sequence of derivatives fn′ converges uniformly, then fn converges uniformly as well, say to some function f, and the limiting function of the derivatives is f′. While this is often shown using the mean value theorem for real-valued functions, the same method can be applied for higher-dimensional functions by using the mean value inequality instead. 
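The double-sequence example is easy to check numerically. The short sketch below (a Python illustration added here, not part of the article) evaluates a_{m,n} = 2^(m−n) with one index held large, suggesting how the two iterated limits head to 0 and to infinity respectively.

```python
# Numerical illustration (not from the article) of the double sequence
# a_{m,n} = 2**(m - n), whose two iterated limits disagree.

def a(m, n):
    return 2.0 ** (m - n)

# Inner limit over n first: for each fixed m the values sink to 0,
# so lim_{m->inf} lim_{n->inf} a(m, n) = 0.
print([a(m, 10_000) for m in range(1, 6)])   # all effectively 0.0

# Inner limit over m first: for each fixed n the values blow up,
# so lim_{n->inf} lim_{m->inf} a(m, n) = +infinity.
print([a(1_000, n) for n in range(1, 6)])    # astronomically large values
```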
Interchange of partial derivatives: Schwarz's theorem Interchange of integrals: Fubini's theorem Interchange of limit and integral: Dominated convergence theorem Vitali convergence theorem Fichera convergence theorem Cafiero convergence theorem Fatou's lemma Monotone convergence theorem for integrals (Beppo Levi's lemma) Interchange of derivative and integral: Leibniz integral rule See also Iterated limit Uniform convergence Notes Mathematical analysis Limits (mathematics)
Interchange of limiting operations
[ "Mathematics" ]
706
[ "Mathematical analysis" ]
7,204,577
https://en.wikipedia.org/wiki/Pullback%20attractor
In mathematics, the attractor of a random dynamical system may be loosely thought of as a set to which the system evolves after a long enough time. The basic idea is the same as for a deterministic dynamical system, but requires careful treatment because random dynamical systems are necessarily non-autonomous. This requires one to consider the notion of a pullback attractor or attractor in the pullback sense. Set-up and motivation Consider a random dynamical system on a complete separable metric space , where the noise is chosen from a probability space with base flow . A naïve definition of an attractor for this random dynamical system would be to require that for any initial condition , as . This definition is far too limited, especially in dimensions higher than one. A more plausible definition, modelled on the idea of an omega-limit set, would be to say that a point lies in the attractor if and only if there exists an initial condition, , and there is a sequence of times such that as . This is not too far from a working definition. However, we have not yet considered the effect of the noise , which makes the system non-autonomous (i.e. it depends explicitly on time). For technical reasons, it becomes necessary to do the following: instead of looking seconds into the "future", and considering the limit as , one "rewinds" the noise seconds into the "past", and evolves the system through seconds using the same initial condition. That is, one is interested in the pullback limit . So, for example, in the pullback sense, the omega-limit set for a (possibly random) set is the random set Equivalently, this may be written as Importantly, in the case of a deterministic dynamical system (one without noise), the pullback limit coincides with the deterministic forward limit, so it is meaningful to compare deterministic and random omega-limit sets, attractors, and so forth. Several examples of pullback attractors of non-autonomous dynamical systems are presented analytically and numerically. Definition The pullback attractor (or random global attractor) for a random dynamical system is a -almost surely unique random set such that is a random compact set: is almost surely compact and is a -measurable function for every ; is invariant: for all almost surely; is attractive: for any deterministic bounded set , almost surely. There is a slight abuse of notation in the above: the first use of "dist" refers to the Hausdorff semi-distance from a point to a set, whereas the second use of "dist" refers to the Hausdorff semi-distance between two sets, As noted in the previous section, in the absence of noise, this definition of attractor coincides with the deterministic definition of the attractor as the minimal compact invariant set that attracts all bounded deterministic sets. Theorems relating omega-limit sets to attractors The attractor as a union of omega-limit sets If a random dynamical system has a compact random absorbing set , then the random global attractor is given by where the union is taken over all bounded sets . Bounding the attractor within a deterministic set Crauel (1999) proved that if the base flow is ergodic and is a deterministic compact set with then -almost surely. References Further reading Random dynamical systems
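Because the inline formulas in this entry were lost in extraction, the block below restates the pullback omega-limit set in the notation customary for the field, assuming the cocycle is written φ(t, ω) and the base flow θ_t; it is a conventional formulation added for orientation, not a quotation of the article's own equations.

```latex
% Pullback omega-limit set of a (possibly random) set B, in assumed standard
% notation: cocycle \varphi(t,\omega), base flow \theta_t.
\Omega_B(\omega) \;=\; \bigcap_{t \ge 0} \overline{\bigcup_{s \ge t}
  \varphi\bigl(s, \theta_{-s}\omega\bigr)\, B\bigl(\theta_{-s}\omega\bigr)}
```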
Pullback attractor
[ "Mathematics" ]
705
[ "Random dynamical systems", "Dynamical systems" ]
7,204,602
https://en.wikipedia.org/wiki/Pfister%20form
In mathematics, a Pfister form is a particular kind of quadratic form, introduced by Albrecht Pfister in 1965. In what follows, quadratic forms are considered over a field F of characteristic not 2. For a natural number n, an n-fold Pfister form over F is a quadratic form of dimension 2^n that can be written as a tensor product of binary quadratic forms, ⟨⟨a1, a2, ..., an⟩⟩ = ⟨1, −a1⟩ ⊗ ⟨1, −a2⟩ ⊗ ... ⊗ ⟨1, −an⟩, for some nonzero elements a1, ..., an of F. (Some authors omit the signs in this definition; the notation here simplifies the relation to Milnor K-theory, discussed below.) An n-fold Pfister form can also be constructed inductively from an (n−1)-fold Pfister form q and a nonzero element a of F, as q ⊗ ⟨1, −a⟩. So the 1-fold and 2-fold Pfister forms look like: ⟨⟨a⟩⟩ = ⟨1, −a⟩ and ⟨⟨a, b⟩⟩ = ⟨1, −a, −b, ab⟩. For n ≤ 3, the n-fold Pfister forms are norm forms of composition algebras. In that case, two n-fold Pfister forms are isomorphic if and only if the corresponding composition algebras are isomorphic. In particular, this gives the classification of octonion algebras. The n-fold Pfister forms additively generate the n-th power I^n of the fundamental ideal of the Witt ring of F. Characterizations A quadratic form q over a field F is multiplicative if, for vectors of indeterminates x and y, we can write q(x)·q(y) = q(z) for some vector z of rational functions in the x and y over F. Isotropic quadratic forms are multiplicative. For anisotropic quadratic forms, Pfister forms are multiplicative, and conversely. For n-fold Pfister forms with n ≤ 3, this had been known since the 19th century; in that case z can be taken to be bilinear in x and y, by the properties of composition algebras. It was a remarkable discovery by Pfister that n-fold Pfister forms for all n are multiplicative in the more general sense here, involving rational functions. For example, he deduced that for any field F and any natural number n, the set of sums of 2^n squares in F is closed under multiplication, using that the quadratic form ⟨1, 1, ..., 1⟩ with 2^n entries equal to 1 is an n-fold Pfister form (namely, ⟨⟨−1, −1, ..., −1⟩⟩). Another striking feature of Pfister forms is that every isotropic Pfister form is in fact hyperbolic, that is, isomorphic to a direct sum of copies of the hyperbolic plane ⟨1, −1⟩. This property also characterizes Pfister forms, as follows: If q is an anisotropic quadratic form over a field F, and if q becomes hyperbolic over every extension field E such that q becomes isotropic over E, then q is isomorphic to aφ for some nonzero a in F and some Pfister form φ over F. Connection with K-theory Let kn(F) be the n-th Milnor K-group modulo 2. There is a homomorphism from kn(F) to the quotient I^n/I^(n+1) in the Witt ring of F, given by {a1, ..., an} ↦ ⟨⟨a1, ..., an⟩⟩ (mod I^(n+1)), where the image is an n-fold Pfister form. The homomorphism is surjective, since the Pfister forms additively generate I^n. One part of the Milnor conjecture, proved by Orlov, Vishik and Voevodsky, states that this homomorphism is in fact an isomorphism kn(F) ≅ I^n/I^(n+1). That gives an explicit description of the abelian group I^n/I^(n+1) by generators and relations. The other part of the Milnor conjecture, proved by Voevodsky, says that kn(F) (and hence I^n/I^(n+1)) maps isomorphically to the Galois cohomology group H^n(F, F2). Pfister neighbors A Pfister neighbor is an anisotropic form σ which is isomorphic to a subform of aφ for some nonzero a in F and some Pfister form φ with dim φ < 2 dim σ. The associated Pfister form φ is determined up to isomorphism by σ. Every anisotropic form of dimension 3 is a Pfister neighbor; an anisotropic form of dimension 4 is a Pfister neighbor if and only if its discriminant in F*/(F*)^2 is trivial. 
A field F has the property that every 5-dimensional anisotropic form over F is a Pfister neighbor if and only if it is a linked field. Notes References , Ch. 10 Quadratic forms
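As a concrete instance of the multiplicativity property described above, the 1-fold Pfister form ⟨1, −a⟩ satisfies a classical two-square-style identity in which z is bilinear in x and y; the identity below is a standard computation added here for illustration.

```latex
% Multiplicativity of the 1-fold Pfister form <1, -a>:
% the product of two represented values is again represented, with z bilinear in x and y.
(x_1^2 - a x_2^2)(y_1^2 - a y_2^2)
  = (x_1 y_1 + a\, x_2 y_2)^2 - a\,(x_1 y_2 + x_2 y_1)^2 .
```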
Pfister form
[ "Mathematics" ]
976
[ "Quadratic forms", "Number theory" ]
7,204,666
https://en.wikipedia.org/wiki/Norm%20form
In mathematics, a norm form is a homogeneous form in n variables constructed from the field norm of a field extension L/K of degree n. That is, writing N for the norm mapping to K, and selecting a basis e1, ..., en for L as a vector space over K, the form is given by N(x1e1 + ... + xnen) in variables x1, ..., xn. In number theory norm forms are studied as Diophantine equations, where they generalize, for example, the Pell equation. For this application the field K is usually the rational number field, the field L is an algebraic number field, and the basis is taken of some order in the ring of integers OL of L. See also Trace form References Field (mathematics) Diophantine equations Homogeneous polynomials
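A worked example (added for illustration) makes the Pell connection explicit: for a quadratic extension L = K(√d) with basis {1, √d}, the norm form is the familiar binary form behind the Pell equation.

```latex
% Norm form of L = K(\sqrt{d}) with respect to the basis \{1, \sqrt{d}\}:
N_{L/K}(x_1 + x_2\sqrt{d}) = (x_1 + x_2\sqrt{d})(x_1 - x_2\sqrt{d}) = x_1^2 - d\,x_2^2 .
% Over K = \mathbb{Q}, setting the norm equal to 1 for a nonsquare d > 0
% gives the Pell equation x_1^2 - d x_2^2 = 1.
```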
Norm form
[ "Mathematics" ]
177
[ "Diophantine equations", "Mathematical objects", "Equations", "Number theory" ]
7,204,725
https://en.wikipedia.org/wiki/Base%20flow%20%28random%20dynamical%20systems%29
In mathematics, the base flow of a random dynamical system is the dynamical system defined on the "noise" probability space that describes how to "fast forward" or "rewind" the noise when one wishes to change the time at which one "starts" the random dynamical system. Definition In the definition of a random dynamical system, one is given a family of maps θ_s : Ω → Ω on a probability space (Ω, F, P). The measure-preserving dynamical system (Ω, F, P, θ) is known as the base flow of the random dynamical system. The maps θ_s are often known as shift maps since they "shift" time. The base flow is often ergodic. The parameter s may be chosen to run over R (a two-sided continuous-time dynamical system); [0, +∞) (a one-sided continuous-time dynamical system); Z (a two-sided discrete-time dynamical system); or N ∪ {0} (a one-sided discrete-time dynamical system). Each map θ_s is required to be an (F, F)-measurable function and to preserve the measure P: for all E in F, P(θ_s^(−1)(E)) = P(E). Furthermore, as a family, the maps satisfy the relations θ_0 = id_Ω, the identity function on Ω, and θ_s ∘ θ_t = θ_(s+t) for all s and t for which the three maps in this expression are defined. In particular, θ_s ∘ θ_(−s) = id_Ω if θ_(−s) exists. In other words, the maps θ_s form a commutative monoid (in the one-sided cases) or a commutative group (in the two-sided cases). Example In the case of a random dynamical system driven by a Wiener process W, where Ω is the two-sided classical Wiener space, the base flow θ_s shifts the Wiener paths in time (the standard shift formula is sketched below). This can be read as saying that θ_s "starts the noise at time s instead of time 0". References Random dynamical systems
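The shift formula in the Wiener-process example was lost in extraction; the block below gives the standard Wiener shift, which is presumably the formula intended, rather than a quotation of the article.

```latex
% Standard Wiener shift: \theta_s restarts the noise at time s.
\bigl(\theta_s \omega\bigr)(t) = \omega(s + t) - \omega(s), \qquad s, t \in \mathbb{R}.
```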
Base flow (random dynamical systems)
[ "Mathematics" ]
338
[ "Random dynamical systems", "Dynamical systems" ]
7,204,781
https://en.wikipedia.org/wiki/Comparison%20of%20business%20integration%20software
This article is a comparison of notable business integration and business process automation software. General Scope Scope of this comparison: Service-oriented architecture implementations; Message-oriented middleware and message brokers; Enterprise service bus implementations; BPEL implementations; Enterprise application integration software. General information Compatibility and interoperability Operating system support Hardware support Supported hardware depends on supported operating systems. Database support Web servers support See also List of application servers List of BPEL engines List of BPMN 2.0 engines Notes Footnotes References Daryl C. Plummer, David W. McCoy, Charles Abrams. Magic Quadrant for the Integrated Service Environment Market, 2006. Gartner, research G00137074. Software comparisons Enterprise application integration
Comparison of business integration software
[ "Technology", "Engineering" ]
145
[ "IT infrastructure", "Computing comparisons", "Software engineering", "Software comparisons", "Middleware" ]
7,204,793
https://en.wikipedia.org/wiki/ICE%20demolition%20protocol
The ICE Demolition Protocol is a British waste management protocol of the Institution of Civil Engineers (ICE), produced by EnviroCentre in partnership with London Remade. It came out of a joint ICE and Institute of Waste Management group called the Resource Sustainability Initiative. The first edition was published in 2003 and the second in 2008, although the second version does not supersede the first. References Demolition Institution of Civil Engineers Sustainability in the United Kingdom
ICE demolition protocol
[ "Engineering" ]
91
[ "Construction", "Demolition" ]
7,204,913
https://en.wikipedia.org/wiki/Carbon-to-nitrogen%20ratio
A carbon-to-nitrogen ratio (C/N ratio or C:N ratio) is a ratio of the mass of carbon to the mass of nitrogen in organic residues. It can, amongst other things, be used in analysing sediments and soil, including soil organic matter and soil amendments such as compost. Sediments In the analysis of sediments, C/N ratios are a proxy for paleoclimate research, having different uses depending on whether the sediment cores are terrestrial-based or marine-based. Carbon-to-nitrogen ratios indicate the degree of nitrogen limitation of plants and other organisms. They can identify whether molecules found in the sediment under study come from land-based or algal plants. Further, they can distinguish between different land-based plants, depending on the type of photosynthesis they undergo. Therefore, the C/N ratio serves as a tool for understanding the sources of sedimentary organic matter, which can lead to information about the ecology, climate, and ocean circulation at different times in Earth's history. Ranges C/N ratios in the range of 4–10:1 usually come from marine sources, whereas higher ratios are likely to come from a terrestrial source. Vascular plants from terrestrial sources tend to have C/N ratios greater than 20. The lack of cellulose, which has a chemical formula of (C6H10O5)n, and the greater amount of proteins in algae versus vascular plants cause this significant difference in the C/N ratio. Instruments Examples of devices that can be used to measure this ratio are the CHN analyzer and the continuous-flow isotope ratio mass spectrometer (CF-IRMS). However, for more practical applications, desired C/N ratios can be achieved by blending commonly used substrates of known C/N content, which are readily available and easy to use. By sediment type Marine Organic matter that is deposited in marine sediments contains a key indicator as to its source and the processes it underwent before reaching the floor, as well as after deposition: its carbon-to-nitrogen ratio. In the global oceans, freshly produced algae in the surface ocean typically have a carbon-to-nitrogen ratio of about 4 to 10. However, it has been observed that only 10% of this organic matter (algae) produced in the surface ocean sinks to the deep ocean without being degraded by bacteria in transit, and only about 1% is permanently buried in the sediment. An important process called sediment diagenesis accounts for the other 9% of organic carbon that sank to the deep ocean floor but was not permanently buried; that is, 9% of the total organic carbon produced is degraded in the deep ocean. The microbial communities utilizing the sinking organic carbon as an energy source are partial to nitrogen-rich compounds, because many of these bacteria are nitrogen-limited and prefer nitrogen over carbon. As a result, the carbon-to-nitrogen ratio of sinking organic carbon in the deep ocean is elevated compared to fresh surface ocean organic matter that has not been degraded. An exponential increase in C/N ratios is observed with increasing water depth, with C/N ratios reaching ten at intermediate water depths of about 1000 meters and up to 15 in the deep ocean (deeper than about 2500 meters). This elevated C/N signature is preserved in the sediment until another form of diagenesis, post-depositional diagenesis, alters its C/N signature once again. Post-depositional diagenesis occurs in organic-carbon-poor marine sediments where bacteria can oxidize organic matter in aerobic conditions as an energy source. 
The oxidation reaction proceeds as follows: CH2O + H2O → CO2 + 4H+ + 4e−, with standard free energy of –27.4 kJ mol−1 (half-reaction). Once all of the oxygen is used up, bacteria can carry out an anoxic sequence of chemical reactions as an energy source, all with negative ∆G°r values, with the reaction becoming less favorable as the chain of reactions proceeds. The same principle described above explains the preferential degradation of nitrogen-rich organic matter within the sediments, as it is more labile and in higher demand. This principle has been utilized in paleoceanographic studies to identify core sites that have not experienced much microbial activity or contamination by terrestrial sources with much higher C/N ratios. Lastly, ammonia, the product of the second reduction reaction, which reduces nitrate and produces nitrogen gas and ammonia, is readily adsorbed on clay mineral surfaces and protected from bacteria. This has been proposed to explain lower-than-expected C/N signatures of organic carbon in sediments undergoing post-depositional diagenesis. Ammonium produced from the remineralisation of organic material exists in elevated concentrations (1 to >14 μM) within cohesive shelf sea sediments found in the Celtic Sea (depth: 1–30 cm). The sediment depth exceeds 1 m, and the site would be suitable for conducting paleolimnology experiments with C:N. Lacustrine Unlike in marine sediments, diagenesis does not pose a large threat to the integrity of the C/N ratio in lacustrine sediments. Though wood from living trees around lakes has consistently higher C/N ratios than wood buried in sediment, the change in elemental composition is not large enough to remove the vascular versus non-vascular plant signals due to the refractory nature of terrestrial organic matter. Abrupt shifts in the C/N ratio down-core can be interpreted as shifts in the organic source material. For example, two studies on Mangrove Lake, Bermuda, and Lake Yunoko, Japan, show irregular, abrupt fluctuations in C/N between about 11 and 18. These fluctuations are attributed to shifts from mainly algal dominance to land-based vascular dominance. Results of studies that show abrupt shifts in algal dominance and vascular dominance often lead to conclusions about the state of the lake during these distinct periods of isotopic signatures. Times in which algal signals dominate lakes suggest a deep-water lake, while times in which vascular plant signals dominate lakes suggest the lake is shallow, dry, or marshy. Using the C/N ratio in conjunction with other sediment observations, such as physical variations, D/H isotopic analyses of fatty acids and alkanes, and δ13C analyses on similar biomarkers can lead to further regional climate interpretations that describe the more significant phenomena at play. Soil In microbial communities like soil, the C:N ratio is a key indicator as it describes a balance between energetic foods (represented by carbon) and material to build protein with (represented by nitrogen). An optimal C:N ratio of around 24:1 provides for higher microbial activity. The C:N ratio of soil can be modified by the addition of materials such as compost, manure, and mulch. A feedstock with a near-optimal C:N ratio will be consumed quickly. Any excess C will cause the N originally in the soil to be consumed, competing with the plant for nutrients (immobilization) – at least temporarily until the microbes die. 
Any excess N, on the other hand, will usually just be left behind (mineralization), but too much excess may result in leaching losses. The recommended C:N ratio for soil materials is, therefore, 30:1. A soil test may be done to find the C:N ratio of the soil itself. The C:N ratio of microbes themselves is generally around 10:1. A lower ratio is correlated with higher soil productivity. Compost The role of the C:N ratio in compost feedstock is similar to that of soil feedstock. The recommendation is around 20–30:1. The microbes prefer a ratio of 30–35:1, but the carbon is usually not completely digested (especially in the case of lignin feedstock), hence the lowered ratio. An imbalance of the C:N ratio causes a slowdown in the composting process and a drop in temperature. When the C:N ratio is less than 15:1, outgassing of ammonia may occur, creating odor and losing nitrogen. A finished compost has a C:N ratio of around 10:1. Estimating C and N contents of feedstocks The C and N contents of feedstocks are generally known from lookup tables listing common types of feedstock. It is important to deduct the moisture content if the listed value is for dry material. For foodstuffs with a nutrition analysis, the N content may be estimated from the protein content as N% ≈ protein% / 6.25, reversing the crude protein calculation. The C content may be estimated from crude ash content (often reported in animal feed) or from the reported macronutrient levels. Given the C:N ratio and one of the C and N contents, the other content may be calculated using the very definition of the ratio. When only the ratio is known, one must estimate the total C+N% or one of the contents to get both values. Managing mixed feedstocks The C:N ratio of mixed feedstocks is calculated by summing their C and N amounts together and dividing the two results. For compost, moisture is also an important factor. References External links C/N calculator Composting Soil chemistry Geochemistry
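The mixed-feedstock arithmetic just described is simple enough to capture in a few lines of code. The sketch below is illustrative only; the feedstock masses and C/N percentages are made-up placeholder values, not figures from the article.

```python
# Illustrative sketch of the feedstock arithmetic described above.
# Feedstock numbers are hypothetical placeholders, not values from the article.

def nitrogen_from_protein(protein_pct: float) -> float:
    """Estimate N% by reversing the crude-protein convention (protein = N x 6.25)."""
    return protein_pct / 6.25

def mixed_cn_ratio(feedstocks):
    """C:N of a mix: sum C and N masses across feedstocks, then divide.

    Each feedstock is a tuple (dry_mass_kg, carbon_pct, nitrogen_pct).
    """
    total_c = sum(mass * c_pct / 100 for mass, c_pct, _ in feedstocks)
    total_n = sum(mass * n_pct / 100 for mass, _, n_pct in feedstocks)
    return total_c / total_n

# Example: blending a high-carbon material with a high-nitrogen material.
mix = [
    (10.0, 45.0, 0.7),  # straw-like feedstock: 45% C, 0.7% N (placeholder values)
    (8.0, 40.0, 2.5),   # manure-like feedstock: 40% C, 2.5% N (placeholder values)
]
print(round(mixed_cn_ratio(mix), 1))          # ~28.5, near the compost target range
print(round(nitrogen_from_protein(20.0), 2))  # 20% protein -> 3.2% N
```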
Carbon-to-nitrogen ratio
[ "Chemistry" ]
1,891
[ "Soil chemistry", "nan" ]
7,205,021
https://en.wikipedia.org/wiki/Ettringite
Ettringite is a hydrous calcium aluminium sulfate mineral with formula: . It is a colorless to yellow mineral crystallizing in the trigonal system. The prismatic crystals are typically colorless, turning white on partial dehydration. It is part of the ettringite-group which includes other sulfates such as thaumasite and bentorite. Discovery and occurrence Ettringite was first described in 1874 by , for an occurrence near the Ettringer Bellerberg Volcano, Ettringen, Rheinland-Pfalz, Germany. It occurs within metamorphically altered limestone adjacent to igneous intrusive rocks or within xenoliths. It also occurs as weathering crusts on larnite in the Hatrurim Formation of Israel. It occurs associated with portlandite, afwillite and hydrocalumite at Scawt Hill, Ireland and with afwillite, hydrocalumite, mayenite and gypsum in the Hatrurim Formation. It has also been reported from the Zeilberg quarry, Maroldsweisach, Bavaria; at Boisséjour, near Clermont-Ferrand, Puy-de-Dôme, Auvergne, France; the N’Chwaning mine, Kuruman district, Cape Province, South Africa; in the US, occurrences were found in spurrite-merwinite-gehlenite skarn at the 910 level of the Commercial quarry, Crestmore, Riverside County, California and in the Lucky Cuss mine, Tombstone, Arizona. Ettringite is also sometimes referred in the ancient French literature as Candelot salt, or Candlot salt. Occurrence in cement In concrete chemistry, ettringite is a hexacalcium aluminate trisulfate hydrate, of general formula when noted as oxides: or . Ettringite is formed in the hydrated Portland cement system as a result of the reaction of tricalcium aluminate () with calcium sulfate, both present in Portland cement. The addition of gypsum () to clinker during the grinding operation to obtain the crushed powder of Portland cement is essential to avoid the flash setting of concrete during its early hydration. Indeed, the tricalcium aluminate () is the most reactive phase of the four main mineral phases present in Portland cement (, , , and ). hydration is very exothermic and also occurs very fast in the fresh concrete mix as the temperature quickly increases with the progress of the hydration reaction. The effect of gypsum addition is to promote the formation of a thin impervious film of ettringite at the surface of the grains, passivating their surface, and so slowing down their hydration. The addition of gypsum to Portland cement is needed to control the concrete setting. Ettringite, the most prominent representative of AFt phases or (), can also be synthesized in aqueous solution by reacting stoichiometric amounts of calcium oxide, aluminium oxide, and sulfate. In the cement system, the presence of ettringite depends on the ratio of calcium sulfate to tri-calcium aluminate (); when this ratio is low, ettringite forms during early hydration and then converts to the calcium aluminate monosulfate (AFm phases or ()). When the ratio is intermediate, only a portion of the ettringite converts to AFm and both can coexist, while ettringite is unlikely to convert to AFm at high ratios. The following standard abbreviations are used to designate the different oxide phases in the cement chemist notation (CCN): C = CaO S = A = F = = H = K = N = m = mono t = tri AFt and AFm phases AFt: abbreviation for "alumina, ferric oxide, tri-substituted" or (). It represents a group of calcium aluminate hydrates. AFt has the general formula where X represents a doubly charged anion or, sometimes, two singly charged anions. 
Ettringite is the most common and prominent member of the AFt group (X in this case denoting sulfate), and often simply called Alumina Ferrite tri-sulfate (AFt). AFm: abbreviation for "alumina, ferric oxide, mono-substituted" or (). It represents another group of calcium aluminate hydrates with general formula where X represents a singly charged anion or 'half' a doubly charged anion. X may be one of many anions. The most important anions involved in Portland cement hydration are hydroxyl (), sulfate (), and carbonate (). Structure The mineral ettringite has a columnar structure that runs parallel to the c axis – the needle axis – with the sulfate ions and water molecules lying between the columns; the space group is P31c. The ettringite crystal system is trigonal; crystals are elongated and needle-like in shape, and disorder or twinning is common, which affects the intercolumn material. The first X-ray diffraction crystallographic study was done by Bannister, Hey and Bernal (1936), which found that the crystal unit cell is of a hexagonal form with a = 11.26 Å and c = 21.48 Å, with space group P63/mmc and Z = 2, where Z is the number of formula units per unit cell. From observations on dehydration and chemical formulas there were suggestions of the structure being composed of and , where between them lie ions and molecules. Further X-ray studies ensued; namely Wellin (1956), which determined the crystal structure of thaumasite, and Besjak and Jelenic (1966), which gave confirmation of the structural nature of ettringite. An ettringite sample extracted from Scawt Hill was analysed by C. E. Tilley; the crystal was , with specific gravity of , and possessed five prism faces of the form m{100} and a small face a{110}, with no pyramidal or basal faces. Upon X-ray diffraction, a Laue diagram along the c-axis revealed a hexagonal axis with vertical planes of symmetry; this study showed that the structure has a hexagonal and not a rhombohedral lattice. Further studies conducted on synthetic ettringite by use of X-ray and powder diffraction confirmed earlier assumptions and analyses. Upon analyzing the structure of both ettringite and thaumasite, it was deduced that both minerals have hexagonal structures, but different space groups. Ettringite crystals have space group P31c with a = 11.224 Å, c = 21.108 Å, while thaumasite crystals fall into space group P63 with a = 11.04 Å, c = 10.39 Å. While these two minerals form a solid solution, the difference in space groups leads to discontinuities in unit cell parameters. Differences between the structures of ettringite and thaumasite arise from the columns of cations and anions. Ettringite cation columns are composed of , which run parallel to the c axis, with the other columns of sulfate anions and water molecules in channels parallel to these columns. In contrast, thaumasite, containing a hexacoordinated silicon complex of (a rare octahedral configuration for Si), consists of a cylindrical column of in the c axis, with sulfate and carbonate anions in channels between these columns, which contain water molecules as well. Further research Ongoing research on ettringite and cement phase minerals is performed to find new ways to immobilize toxic anions (e.g., borate, selenate and arsenate) and heavy metals to avoid their dispersion in soils and the environment; this can be achieved by using the proper cement phases whose crystal lattice can accommodate these elements. For example, copper immobilization at high pH can be achieved through the formation of C-S-H/C-A-H and ettringite. 
The crystal structure of ettringite Ca6Al2(SO4)3(OH)12·26H2O can incorporate a variety of divalent ions: Cu2+, Pb2+, Cd2+ and Zn2+, which can substitute for Ca2+. See also Cement Cement chemists notation Concrete References Aluminium minerals Calcium minerals Cement Concrete 26 Sulfate minerals Geology of Riverside County, California Crestmore Heights, California Trigonal minerals Minerals in space group 159 Minerals described in 1874
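The cement chemist notation list earlier in this entry lost its right-hand sides in extraction; for orientation, the usual CCN assignments are reproduced below. This is the standard convention, not necessarily the article's exact original list.

```latex
% Standard cement chemist notation (CCN) oxide abbreviations (usual convention):
C = \mathrm{CaO},\quad S = \mathrm{SiO_2},\quad A = \mathrm{Al_2O_3},\quad F = \mathrm{Fe_2O_3},\quad
\bar{S} = \mathrm{SO_3},\quad H = \mathrm{H_2O},\quad K = \mathrm{K_2O},\quad N = \mathrm{Na_2O}.
```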
Ettringite
[ "Chemistry", "Engineering" ]
1,799
[ "Structural engineering", "Concrete", "Hydrate minerals", "Hydrates" ]
7,205,041
https://en.wikipedia.org/wiki/Ye%27elimite
Ye'elimite is the naturally occurring form of anhydrous calcium sulfoaluminate, . It gets its name from Har Ye'elim in Israel in the Hatrurim Basin west of the Dead Sea where it was first found in nature by Shulamit Gross, an Israeli mineralogist and geologist who studied the Hatrurim Formation. The mineral is cubic, with 16 formula units per unit cell, and a cell dimension of 1.8392 nm, and is readily detected and quantified in mixtures by powder x-ray diffraction. Occurrence in cement It is alternatively called "Klein's Compound", after Alexander Klein of the University of California, Berkeley, who experimented with sulfoaluminate cements around 1960, although it was first described in 1957 by Ragozina. Ye'elimite is most commonly encountered as a constituent of sulfoaluminate cements, in which it is manufactured on the million-tonne-per-annum scale. It also occasionally occurs adventitiously in Portland-type cements. It is thus an anhydrous mineral of the cement clinker whose idealized oxide formula is also written in the cement chemist notation (CCN). On hydration in the presence of calcium and sulfate ions, it forms the insoluble, fibrous mineral ettringite, which provides the strength in sulfoaluminate concretes, monosulfoaluminate, and aluminium hydroxide. It is manufactured by heating the appropriate quantities of finely-ground alumina, calcium carbonate and calcium sulfate to between 1100 and 1300 °C, preferably in the presence of small quantities of fluxing materials, such as Fe2O3. On heating above 1350 °C, ye'elimite begins to decompose to tricalcium aluminate, calcium oxide, sulfur dioxide and oxygen. See also Other rare minerals also discovered in the Hatrurim Formation: Brownmillerite () Gehlenite () Other phases also present in the cement clinker: Larnite: calcium olivine (, , or belite) Mayenite () Calcium aluminates: Monocalcium aluminate (CA) Tricalcium aluminate () Dodecacalcium hepta-aluminate () Calcium aluminate cements References Aluminates Aluminium minerals Calcium minerals Cement Concrete Sulfate minerals Cubic minerals Minerals in space group 214
Ye'elimite
[ "Engineering" ]
510
[ "Structural engineering", "Concrete" ]
7,205,088
https://en.wikipedia.org/wiki/Absorbing%20set%20%28random%20dynamical%20systems%29
In mathematics, an absorbing set for a random dynamical system is a subset of the phase space. A dynamical system is a system in which a function describes the time dependence of a point in a geometrical space. The absorbing set eventually contains the image of any bounded set under the cocycle ("flow") of the random dynamical system. As with many concepts related to random dynamical systems, it is defined in the pullback sense. Definition Consider a random dynamical system φ on a complete separable metric space (X, d), where the noise is chosen from a probability space (Ω, Σ, P) with base flow θ : R × Ω → Ω. A random compact set K : Ω → 2^X is said to be absorbing if, for all d-bounded deterministic sets B ⊆ X, there exists a (finite) random time τB : Ω → [0, +∞) such that φ(t, θ−tω)B ⊆ K(ω) for all t ≥ τB(ω). This is a definition in the pullback sense, as indicated by the use of the negative time shift θ−t. See also Glossary of areas of mathematics Lists of mathematics topics Mathematics Subject Classification Outline of mathematics References (See footnote (e) on p. 104) Random dynamical systems
Absorbing set (random dynamical systems)
[ "Mathematics" ]
250
[ "Random dynamical systems", "Dynamical systems" ]
7,205,441
https://en.wikipedia.org/wiki/Caloboletus%20calopus
Caloboletus calopus, commonly known as the bitter bolete, bitter beech bolete or scarlet-stemmed bolete, is a fungus of the bolete family, found in Asia, Northern Europe and North America. Appearing in coniferous and deciduous woodland in summer and autumn, the stout fruit bodies are attractively coloured, with a beige to olive cap up to 15 cm (6 in) across, yellow pores, and a reddish stipe up to long and wide. The pale yellow flesh stains blue when broken or bruised. Christiaan Persoon first described Boletus calopus in 1801. Modern molecular phylogenetics showed that it was only distantly related to the type species of Boletus and required placement in a new genus; Caloboletus was erected in 2014, with C. calopus designated as the type species. Although Caloboletus calopus is not typically considered edible due to an intensely bitter taste that does not disappear with cooking, there are reports of it being consumed in eastern Europe. Its red stipe distinguishes it from Boletus edulis. Taxonomy Caloboletus calopus was originally published under the name Boletus olivaceus by Jacob Christian Schäffer in 1774, but this name is unavailable for use as it was later sanctioned for another species. Johann Friedrich Gmelin's 1792 synonym Boletus lapidum is also illegitimate. Christiaan Hendrik Persoon described the mushroom in 1801; its specific name is derived from the Greek καλος/kalos ("pretty") and πους/pous ("foot"), referring to its brightly coloured stipe. The German name, Schönfußröhrling or "pretty-foot bolete", is a literal translation. Alternate common names are scarlet-stemmed bolete and bitter beech bolete. Other synonyms include binomials resulting from generic transfers to Dictyopus by Lucien Quélet in 1886, and Tubiporus by René Maire in 1937. Boletus frustosus, originally published as a distinct species by Wally Snell and Esther Dick in 1941, was later described as a variety of B. calopus by Orson K. Miller and Roy Watling in 1968. Estadès and Lannoy described the variety ruforubraporus and the form ereticulatus from Europe in 2001. In his 1986 infrageneric classification of the genus Boletus, Rolf Singer placed C. calopus as the type species of the section Calopodes, which includes species characterised by having a whitish to yellowish flesh, bitter taste, and a blue staining reaction in the tube walls. Other species in section Calopodes include C. radicans, C. inedulis, B. peckii, and B. pallidus. Genetic analysis published in 2013 showed that C. calopus and many (but not all) red-pored boletes were part of a dupainii clade (named for Boletus (now Rubroboletus) dupainii), well-removed from the core group of the type species B. edulis and relatives within the Boletineae. This indicated it needed placement in a new genus. This took place in 2014, when B. calopus was transferred to (and designated the type species of) the new genus Caloboletus by Italian mycologist Alfredo Vizzini. Description Up to 15 cm (6 in) or rarely 20 cm (8 in) in diameter, the cap is beige to olive and initially almost globular before opening out to a hemispherical and then convex shape. The surface of the cap is smooth or has minute hairs, and sometimes develops cracks with age. The cap cuticle hangs over the cap margin. The pore surface is initially pale yellow before deepening to an olive-yellow in maturity, and quickly turns blue when it is injured. The pores, numbering one or two per millimetre, are circular when young but become more angular as the mushroom ages. The tubes are up to deep. 
The attractively coloured stipe is typically yellow above to pink-red below, with a straw-coloured network (reticulation) near the top or over the upper half; occasionally the entire stipe is reddish. It measures long by thick, and is either fairly equal in width throughout, or thicker towards the base. Sometimes, the reddish stipe colour of mature mushrooms or harvested specimens that are a few days old disappears completely, and is replaced with ochre-brown tones. The pale yellow flesh stains blue when broken, the discolouration spreading out from the damaged area. Its smell can be strong, and has been likened to ink. The spore print is olive to olive-brown. Spores are smooth and elliptical, measuring 13–19 by 5–6 μm. The basidia (spore-bearing cells) are club-shaped, four-spored, and measure 30–38 by 9–12 μm. The cystidia are club-shaped to spindle-shaped, hyaline, and measure 25–40 by 10–15 μm. Variety frustosus is morphologically similar to the main type, but its cap becomes areolate (marked out into small areas by cracks and crevices) in maturity. Its spores are slightly smaller too, measuring 11–15 by 4–5.5 μm. In the European form ereticulatus, the reticulations on the upper stipe are replaced with fine reddish granules, while the variety ruforubraporus has pinkish-red pores. Similar species The overall colouration of Caloboletus calopus, with its pale cap, yellow pores and red stipe, is not shared with any other bolete. Large pale specimens resemble Suillellus luridus, and the cap of Rubroboletus satanas is a similar colour but this species has red pores. Fruit bodies in poor condition could be confused with Xerocomellus chrysenteron but the stipes of this species are not reticulated. Edible species such as B. edulis lack a red stipe. It closely resembles the similarly inedible C. radicans, which lacks the redness on the stipe. Like C. calopus, the western North American species C. rubripes also has a bitter taste, similarly coloured cap, and yellowish pores that bruise blue, but it lacks reticulation on its reddish stipe. Found in northwestern North America, B. coniferarum lacks reddish or pinkish colouration in its yellow reticulate stipe, and has a darker, olive-grey to deep brown cap. Two eastern North American species, C. inedulis and C. roseipes, also have an appearance similar to C. calopus. C. inedulis produces smaller fruit bodies with a white to greyish-white cap, while C. roseipes associates solely with hemlock. C. firmus, found in the eastern United States, eastern Canada, and Costa Rica, has a pallid cap colour, reddish stipe, and bitter taste, but unlike C. calopus, has red pores and lacks stipe reticulation. C. panniformis, a Japanese species described as new to science in 2013, bears a resemblance to C. calopus, but can be distinguished by its rough cap surface, or microscopically by the amyloid-staining cells in the flesh of the cap, and morphologically distinct cystidia on the stipe. Distribution and habitat An ectomycorrhizal species, Caloboletus calopus grows in coniferous and deciduous woodland, often at higher altitudes, especially under beech and oak. Fruit bodies occur singly or in large groups. The species grows on chalky ground from July to December, in Northern Europe, and North America's Pacific Northwest and Michigan. In North America, its range extends south to Mexico. Variety frustosus is known from California and the Rocky Mountains of Idaho. 
In 1968, after comparing European and North American collections, Miller and Watling suggested that the typical form of C. calopus does not occur in the United States. Similar comparisons by other authors have led them to the opposite conclusion, and the species has since been included in several North American field guides. The bolete has been recorded from the Black Sea region in Turkey, from under Populus ciliata and Abies pindrow in Rawalpindi and Nathia Gali in Pakistan, Yunnan Province in China, Korea, and Taiwan. Biochemistry Although it is an attractive-looking bolete, Caloboletus calopus is not considered edible on account of its very bitter taste, which does not disappear upon cooking. There are reports of it being eaten in far eastern Russia and Ukraine. The bitter taste is largely due to the compounds calopin and a δ-lactone derivative, O-acetylcyclocalopin A. These compounds contain a structural motif known as a 3-methylcatechol unit, which is rare in natural products. A total synthesis of calopin was reported in 2003. The frustosus variety is reported as causing severe sickness in Europe. The pulvinic acid derivatives atromentic acid, variegatic acid, and xerocomic acid are present in B. calopus mushrooms. These compounds inhibit cytochrome P450, major enzymes involved in drug metabolism and bioactivation. Other compounds found in the fruit bodies include calopin B, and the sesquiterpenoid compounds cyclopinol and boletunones A and B. The latter two highly oxygenated compounds have significant free-radical scavenging activity in vitro. The compounds 3-octanone (47.0% of total volatile compounds), 3-octanol (27.0%), 1-octen-3-ol (15.0%), and limonene (3.6%) are the predominant volatile components that give the fruit body its odour. See also List of North American boletes References External links Inedible fungi calopus Fungi of Asia Fungi of Europe Fungi of North America Fungi described in 1801 Taxa named by Christiaan Hendrik Persoon Fungus species
Caloboletus calopus
[ "Biology" ]
2,136
[ "Fungi", "Fungus species" ]
7,205,933
https://en.wikipedia.org/wiki/Non-drying%20oil
A non-drying oil is an oil which does not harden and remains liquid when it is exposed to air. This is as opposed to a drying oil, which hardens (through polymerization) completely, or a semi-drying oil, which partially hardens. Oils with an iodine number of less than 115 are considered non-drying. Uses Non-drying oil is often used as a base in anti-climb paint, a type of slippery coating used to prevent climbing on its surface. Another use would be in baby oil. Examples Almond oil Babassu oil Baobab oil Castor oil Cocoa butter Coconut oil Colza oil Macadamia oil Nahar seed oil Mineral oil Olive oil Peanut oil Tea seed oil Tiger nut oil Petroleum References Oils Crime prevention Coatings Visual arts materials Painting materials Wood finishing materials
Non-drying oil
[ "Chemistry" ]
165
[ "Oils", "Carbohydrates", "Coatings" ]
7,205,947
https://en.wikipedia.org/wiki/Young%20stellar%20object
Young stellar object (YSO) denotes a star in its early stage of evolution. This class consists of two groups of objects: protostars and pre-main-sequence stars. Classification by spectral energy distribution A star forms by accumulation of material that falls onto a protostar from a circumstellar disk or envelope. Material in the disk is cooler than the surface of the protostar, so it radiates at longer wavelengths of light, producing excess infrared emission. As material in the disk is depleted, the infrared excess decreases. Thus, YSOs are usually classified into evolutionary stages based on the slope of their spectral energy distribution in the mid-infrared, using a scheme introduced by Lada (1987). He proposed three classes (I, II and III), based on the value of the spectral index α = d log(λF_λ) / d log(λ). Here λ is wavelength, and F_λ is flux density. The index α is calculated in the wavelength interval of 2.2–20 μm (the near- and mid-infrared region). Andre et al. (1993) discovered a class 0: objects with strong submillimeter emission, but very faint at these infrared wavelengths. Greene et al. (1994) added a fifth class of "flat spectrum" sources. Class 0 sources – undetectable in the near- and mid-infrared interval used for α; Class I sources have α > 0.3; Flat spectrum sources have 0.3 > α > −0.3; Class II sources have −0.3 > α > −1.6; Class III sources have α < −1.6. This classification scheme roughly reflects the evolutionary sequence. It is believed that the most deeply embedded Class 0 sources evolve towards the Class I stage, dissipating their circumstellar envelopes. Eventually they become optically visible on the stellar birthline as pre-main-sequence stars. Class II objects have circumstellar disks and correspond roughly to classical T Tauri stars, while Class III stars have lost their disks and correspond approximately to weak-line T Tauri stars. Objects in an intermediate stage, where disks can only be detected at longer wavelengths, are known as transition-disk objects. Characteristics YSOs are also associated with early star evolution phenomena: jets and bipolar outflows, disk winds, masers, Herbig–Haro objects, and protoplanetary disks (circumstellar disks or proplyds). Classification of YSOs by mass These stars may be differentiated by mass: massive YSOs, intermediate-mass YSOs, and brown dwarfs. Gallery See also Bok globule References External links Star formation Star types
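The spectral-index scheme above reduces to a small decision rule; the sketch below (illustrative only, using the customary boundary values 0.3, −0.3 and −1.6 quoted above) shows one way to encode it. Class 0 sources are excluded because they are defined by submillimetre emission rather than by α.

```python
# Hypothetical helper illustrating the YSO infrared spectral-index classes.
# Boundary values follow the customary Greene et al. (1994) scheme quoted above.

def yso_class(alpha: float) -> str:
    """Classify a YSO from its 2.2-20 micron spectral index alpha = dlog(lambda*F_lambda)/dlog(lambda)."""
    if alpha > 0.3:
        return "Class I"
    if alpha >= -0.3:
        return "flat spectrum"
    if alpha >= -1.6:
        return "Class II"
    return "Class III"

print(yso_class(0.8))    # Class I: envelope-dominated, rising mid-infrared SED
print(yso_class(-1.0))   # Class II: disk-bearing, roughly a classical T Tauri star
```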
Young stellar object
[ "Astronomy" ]
488
[ "Star types", "Astronomical classification systems" ]
7,206,152
https://en.wikipedia.org/wiki/Sail%20Tower
The Sail Tower (, Beit HaMifras), officially District Government Center - Building B () is a skyscraper and government building in Haifa, Israel. It is part of Haifa's District Government Center (responsible for the Haifa District), named after Yitzhak Rabin. Its construction began in 1999 and was completed on February 28, 2002. It has 29 floors and stands at 137 m (405 ft). As such, it was the tallest skyscraper in Haifa until 2003, surpassed by the IEC Tower. Counting antennas, it is still the tallest building in Haifa. Without the antennas, the sails of the Sail Tower reach 113 m, and its main roof is at 95 m. The District Government Center in Haifa was planned to combine new and old elements. In contrast with the modern Sail Tower, the promenade leading up to it was designed in an older Middle Eastern style, including mosaic on the floors depicting the history of Haifa. One of the maps depicted dates back to 1773. Inclusion in media In Social Quantum's mobile app, Megapolis, a building called "Zodiac building" appears. The in game building is designed after the Sail building in Haifa. See also List of skyscrapers in Israel Vasco da Gama Tower, skyscraper of similar appearance in Lisbon, Portugal (sail) JW Marriott Panama, skyscraper of similar appearance in Panama City, Panama (sail) W Barcelona, skyscraper of similar appearance in Barcelona, Spain (sail) Spinnaker Tower, skyscraper of similar appearance in Portsmouth, United Kingdom (sail) Burj Al Arab, skyscraper of similar appearance in Dubai, United Arab Emirates (sail) Elite Plaza Business Center, skyscraper of similar appearance in Yerevan, Armenia (sail) References External links Buildings and structures in Haifa Skyscraper office buildings in Israel Government buildings completed in 2002 Postmodern architecture 2002 establishments in Israel Commemoration of Yitzhak Rabin
Sail Tower
[ "Engineering" ]
378
[ "Postmodern architecture", "Architecture" ]
7,206,492
https://en.wikipedia.org/wiki/Directive%20on%20the%20legal%20protection%20of%20designs
Directive 98/71/EC of the European Parliament and of the Council of 13 October 1998 on the legal protection of designs is a European Union directive in the field of industrial design rights, made under the internal market provisions of the Treaty of Rome. It sets harmonised standards for eligibility and protection of most types of registered design. Eligible designs A design is defined as "the appearance of the whole or a part of a product resulting from the features of, in particular, the lines, contours, colours, shape, texture and/or materials of the product itself and/or its ornamentation" (Art. 2). Designs may be protected if: they are novel, that is if no identical design has been made available to the public; they have individual character, that is the "informed user" would find it different from other designs which are available to the public. Where a design forms part of a more complex product, the novelty and individual character of the design are judged on the part of the design which is visible during normal use. Designs are not protected insofar as their appearance is wholly determined by their technical function, or by the need to interconnect with other products to perform a technical function (the "must-fit" exception). However, modular systems such as Lego or Meccano may be protected [Art. 8(3)]. Design right protection The holder of a registered design right has the exclusive right to authorise or prohibit others from using the design in any way, notably by producing, importing, selling or using products based on the design. However, rightholders may not prevent private and non-commercial use, use for research or use for teaching. There is also an exception for foreign-registered ships and aeroplanes, based on the principles of maritime sovereignty. Protection under a registered design right lasts initially for one or more periods of five years, and may be renewed up to a maximum total of twenty-five years. In respect of a given product, the rights are exhausted when it is sold with the consent of the rightholder (the first-sale doctrine). Protection by a registered design right does not affect any other intellectual property rights in the product, notably unregistered design rights, patents and trade marks. The question of copyright protection is left to the laws of the Member States, which apply varying criteria of originality to the copyright protection of "applied art"; the point, however, is that the existence of the registered design right does not stop the design also being eligible for copyright protection. Implementation Review The Directive leaves the question of component parts mostly without harmonisation, given the widely varying practices between Member States. See also Industrial design rights in the European Union Community design External links Text of directive with metadata Legal protection of designs Industrial design Intellectual property law of the European Union 1998 in law 1998 in the European Union
Directive on the legal protection of designs
[ "Engineering" ]
581
[ "Industrial design", "Design engineering", "Design" ]
7,206,721
https://en.wikipedia.org/wiki/Pumping%20%28computer%20systems%29
Pumping, when referring to computer systems, is an informal term for transmitting a data signal more than one time per clock signal. Overview Early types of system memory (RAM), such as SDRAM, transmitted data on only the rising edge of the clock signal. With the advent of double data rate synchronous dynamic RAM or DDR SDRAM, the data was transmitted on both rising and falling edges. However, quad-pumping has been used for a while for the front-side bus (FSB) of a computer system. This works by transmitting data at the rising edge, peak, falling edge, and trough of each clock cycle. Intel computer systems (and others) use this technology to reach effective FSB speeds of 1600 MT/s (million transfers per second), even though the FSB clock speed is only 400 MHz (cycles per second). A phase-locked loop in the CPU then multiplies the FSB clock by a factor in order to get the CPU speed. Example: A Core 2 Duo E6600 processor is listed as 2.4 GHz with a 1066 MHz FSB. The FSB is known to be quad-pumped, so its clock frequency is 1066/4 = 266 MHz. Therefore, the CPU multiplier is 2400/266, or 9×. The DDR2 RAM that it is compatible with is known to be double-pumped and to have an Input/Output Bus twice that of the true FSB frequency (effectively transferring data 4 times a clock cycle), so to run the system synchronously (see front-side bus) the type of RAM that is appropriate is quadruple 266 MHz, or DDR2-1066 (PC2-8400 or PC2-8500, depending on the manufacturer's labeling). References Computer memory
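The worked example above can be reproduced with a few lines of arithmetic. The Python sketch below restates the E6600 figures quoted in the text; the variable names are invented for illustration.

cpu_speed_mhz = 2400          # advertised CPU frequency (Core 2 Duo E6600)
effective_fsb_mts = 1066      # quad-pumped FSB, in million transfers per second

fsb_clock_mhz = effective_fsb_mts / 4            # quad-pumped -> ~266 MHz true clock
cpu_multiplier = cpu_speed_mhz / fsb_clock_mhz   # ~9x

# DDR2 is double-pumped and its I/O bus runs at twice the FSB clock,
# so it effectively transfers data four times per FSB clock cycle.
ddr2_transfers_mts = 4 * fsb_clock_mhz           # ~1066 MT/s, i.e. "DDR2-1066"

print(round(fsb_clock_mhz), round(cpu_multiplier), round(ddr2_transfers_mts))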
Pumping (computer systems)
[ "Technology" ]
377
[ "Computing stubs", "Computer hardware stubs" ]
7,206,727
https://en.wikipedia.org/wiki/Sound%20amplification%20by%20stimulated%20emission%20of%20radiation
Sound amplification by stimulated emission of radiation (SASER) refers to a device that emits acoustic radiation. It focuses sound waves in a way that they can serve as accurate and high-speed carriers of information in many kinds of applications—similar to uses of laser light. Acoustic radiation (sound waves) can be emitted by using the process of sound amplification based on stimulated emission of phonons. Sound (or lattice vibration) can be described by a phonon just as light can be considered as photons, and therefore one can state that the SASER is the acoustic analogue of the laser. In a SASER device, a source (e.g., an electric field as a pump) produces sound waves (lattice vibrations, phonons) that travel through an active medium. In this active medium, a stimulated emission of phonons leads to amplification of the sound waves, resulting in a sound beam coming out of the device. The sound wave beams emitted from such devices are highly coherent. The first successful SASERs were developed in 2009. Terminology Instead of a feedback-built wave of electromagnetic radiation (i.e., a laser beam), a SASER delivers a sound wave. SASER may also be referred to as phonon laser, acoustic laser or sound laser. Uses and applications SASERs could have wide applications. Apart from facilitating the investigation of terahertz-frequency ultrasound, the SASER is also likely to find uses in optoelectronics (electronic devices that detect and control light—as a method of transmitting a signal from one end to the other of, for instance, a fiber-optic link), as a method of signal modulation and/or transmission. Such devices could be high precision measurement instruments and they could lead to high energy focused sound. Using SASERs to manipulate electrons inside semiconductors could theoretically result in terahertz-frequency computer processors, much faster than the current chips. History This concept becomes easier to grasp by analogy with laser theory. Theodore Maiman operated the first functioning LASER on May 16, 1960 at Hughes Research Laboratories, Malibu, California. A device that operates according to the central idea of the "sound amplification by stimulated emission of radiation" theory is the thermoacoustic laser. This is a half-open pipe with a heat differential across a special porous material inserted in the pipe. Much like a light laser, a thermoacoustic SASER has a high-Q cavity and uses a gain medium to amplify coherent waves. For further explanation see thermoacoustic heat engine. The possibility of phonon laser action had been proposed in a wide range of physical systems such as nanomechanics, semiconductors, nanomagnets and paramagnetic ions in a lattice. Finding materials that support stimulated emission was needed for the development of the SASER. The generation of coherent phonons in a double-barrier semiconductor heterostructure was first proposed around 1990. The transformation of the electric potential energy into a vibrational mode of the lattice is remarkably facilitated by the electronic confinement in a double-barrier structure. On this basis, physicists were searching for materials in which stimulated emission, rather than spontaneous emission, is the dominant decay process. A device was first experimentally demonstrated in the gigahertz range in 2009. In 2010, two independent groups announced two different devices that produce coherent phonons at any frequency in the range megahertz to terahertz. One group from the University of Nottingham consisted of A.J. 
Kent and his colleagues R.P. Beardsley, A.V. Akimov, W. Maryam and M. Henini. The other group, from the California Institute of Technology (Caltech), consisted of Ivan S. Grudinin, Hansuek Lee, O. Painter and Kerry J. Vahala, who implemented a study on phonon laser action in a tunable two-level system. The University of Nottingham device operates at about 440 GHz, while the Caltech device operates in the megahertz range. According to a member of the Nottingham group, the two approaches are complementary and it should be possible to use one device or the other to create coherent phonons at any frequency in the megahertz to terahertz range. A significant result arises from the operating frequency of these devices. The differences between the two devices suggest that SASERs could be made to operate over a wide range of frequencies. Work on the SASER continues at the University of Nottingham, the Lashkarev Institute of Semiconductor Physics at the National Academy of Sciences of Ukraine, and Caltech. In 2023 researchers using a Paul trap coaxed two ions into forming a phonon laser containing fewer than 10 phonons, placing it firmly in the quantum regime, whereas previous phonon lasers had had at least 10,000 phonons. Design The central idea of the SASER is based on sound waves. The set-up needed for the implementation of sound amplification by stimulated emission of radiation is similar to an oscillator. An oscillator can produce oscillations without any external feed mechanism. An example is a common sound amplification system with a microphone, amplifier and speaker. When the microphone is in front of the speaker, we hear an annoying whistle. This whistle is generated without extra contribution from the sound source, and is self-reinforced and self-sufficient while the microphone is somewhere in front of the speaker. This phenomenon, known as the Larsen effect, is the result of a positive feedback. In general, every oscillator consists of three main parts. These are the power source or pump, the amplifier and the positive feedback leading to the output. The corresponding parts in a SASER device are the excitation or pumping mechanism, the active (amplifying) medium, and the feedback leading to acoustic radiation. Pumping can be performed, for instance, with an alternating electric field or with some mechanical vibrations of resonators. The active medium should be a material in which sound amplification can be induced. An example of a feedback mechanism into the active medium is the existence of superlattice layers that reflect the phonons back and force them to bounce repeatedly to amplify sound. Therefore, to proceed to an understanding of a SASER design we need to imagine it in analogy with a laser device. In a laser, the active medium is placed between two mirror surfaces (reflectors) of a Fabry–Pérot interferometer. A spontaneously emitted photon inside this interferometer can force excited atoms to emit a photon of the same frequency, momentum, polarization and phase. Because the momentum (as a vector) of the photon is nearly parallel to the axis of the mirrors, it is possible for photons to repeat multiple reflections and force more and more photons to follow them, producing an avalanche effect. The number of photons of this coherent laser beam increases and competes with the number of photons that perish due to losses. 
The basic necessary condition for the generation of laser radiation is the population inversion, which can be achieved either by exciting atoms through collisions or by absorption of external radiation. A SASER device mimics this procedure using a source-pump to induce a sound beam of phonons. This sound beam propagates not in an optical cavity, but in a different active medium. An example of an active medium is the superlattice. A superlattice can consist of multiple ultra-thin lattices of two different semiconductors. These two semiconductor materials have different band gaps, and form quantum wells—which are potential wells that confine particles to move in two dimensions instead of three, forcing them to occupy a planar region. In the superlattice, a new set of selection rules is formed that affects the conditions for charge flow through the structure. When this set-up is excited by a source, the phonons start to multiply while they reflect on the lattice levels, until they escape from the lattice structure in the form of an ultrahigh-frequency phonon beam. Namely, a concerted emission of phonons can lead to coherent sound, and an example of concerted phonon emission is the emission coming from quantum wells. This parallels the laser, where coherent light can build up by the concerted stimulated emission of light from many atoms. A SASER device transforms the electric potential energy into a single vibrational mode of the lattice (a phonon). The medium where the amplification takes place consists of stacks of thin layers of semiconductors that together form quantum wells. In these wells, electrons can be excited by parcels of ultrasound of millielectronvolts of energy. This amount of energy is equivalent to a frequency of 0.1 to 1 THz. Physics Just as light is a wave motion that is considered as composed of particles called photons, we can think of the normal modes of vibration in a solid as being particle-like. The quantum of lattice vibration is called the phonon. In lattice dynamics we want to find the normal modes of vibration of a crystal. In other words, we need to calculate the energies E (or frequencies ω) of the phonons as a function of their wave vector k. The relationship between frequency ω and wave vector k is called the phonon dispersion. Light and sound are similar in various ways. They both can be thought of in terms of waves, and they both come in quantum mechanical units. In the case of light we have photons while in sound we have phonons. Both sound and light can be produced as random collections of quanta (e.g. light emitted by a light bulb) or orderly waves that travel in a coordinated form (e.g. laser light). This parallelism implies that lasers should be as feasible with sound as they are with light. In the 21st century, it is easy to produce low frequency sound in the range that humans can hear (~20 kHz), in either a random or orderly form. However, at the terahertz frequencies in the regime of phonon laser applications, more difficulties arise. The problem stems from the fact that sound travels much slower than light. This means that the wavelength of sound is much shorter than that of light at a given frequency. Instead of resulting in orderly, coherent phonons, laser structures that can produce terahertz sound tend to emit phonons randomly. Researchers have overcome the problem of terahertz frequencies by following various approaches. 
Scientists at Caltech have overcome this problem by assembling a pair of microscopic cavities that only permit specific frequencies of phonons to be emitted. This system can also be tuned to emit phonons of different frequencies by changing the relative separation of the microcavities. On the other hand, the group from the University of Nottingham took a different approach. They have built their device out of electrons moving through a series of structures known as quantum wells. Briefly, as an electron hops from one quantum well to another neighbouring well it produces a phonon. External energy pumping (e.g. a light beam or voltage) can excite an electron. Relaxation of an electron from one of the upper states may occur by emission of either a photon or a phonon. This is determined by the density of states of phonons and photons. Density of states is the number of states per unit volume in an interval of energy (E, E + dE) that are available to be occupied by electrons. Both phonons and photons are bosons and thus they obey Bose–Einstein statistics. This means that bosons with the same energy can occupy the same state, and both phonons and photons carry integer spin. There are more allowed states available for occupancy in a phonon field than in a photon field. Therefore, since the density of terminal states in the phonon field exceeds that in a photon field (by up to ~10^5), phonon emission is by far the more likely event. We could also imagine a concept where the excitation of an electron briefly leads to vibration of the lattice and thus to phonon generation. The vibration energy of the lattice can take discrete values for every excitation. Each of these "excitation packages" is called a phonon. An electron does not stay in an excited state for too long. It readily releases energy to return to its stable low energy state. The electrons release energy in any random direction and at any time (after their excitation). At some particular times, some electrons get excited while others lose energy in a way that the average energy of the system is the lowest possible. By pumping energy into the system we can achieve a population inversion. This means that there are more excited electrons than electrons in the lowest energy state in the system. As an electron releases energy (e.g. a phonon), it interacts with another excited electron, causing it to release its energy too. Therefore, we have a stimulated emission, which means a lot of energy (e.g., acoustic radiation, phonons) is released at the same time. One can mention that the stimulated emission is a procedure where we have a spontaneous and an induced emission at the same time. The induced emission comes from the pumping procedure and then is added to the spontaneous emission. A SASER device should consist of a pumping mechanism and an active medium. The pumping procedure can be induced for example by an alternating electric field or with some mechanical vibrations of resonators, followed by acoustic amplification in the active medium. The fact that a SASER operates on principles remarkably similar to a laser can lead to an easier way of understanding the relevant operation circumstances. Instead of a feedback-built potent wave of electromagnetic radiation, a SASER delivers a potent sound wave. Some methods for sound amplification at GHz–THz frequencies have been proposed so far. Some have been explored only theoretically and others have been explored in non-coherent experiments. 
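The wavelength mismatch mentioned above follows directly from λ = v/f. The Python sketch below assumes a sound velocity of 5000 m/s, a typical order of magnitude for a solid chosen purely for illustration, and compares acoustic and optical wavelengths at the frequencies relevant here.

sound_speed = 5.0e3      # m/s, assumed typical value for a semiconductor
light_speed = 3.0e8      # m/s

for freq_hz in (100e9, 1e12):                    # the 100 GHz - 1 THz range
    acoustic_nm = sound_speed / freq_hz * 1e9
    optical_um = light_speed / freq_hz * 1e6
    print(f"{freq_hz/1e9:.0f} GHz: sound ~{acoustic_nm:.0f} nm, light ~{optical_um:.0f} um")
# 100 GHz: sound ~50 nm, light ~3000 um; 1000 GHz: sound ~5 nm, light ~300 um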
We note that acoustic waves of 100 GHz to 1 THz have wavelengths in the nanometre range. Sound amplification according to the experiment carried out at the University of Nottingham could be based on an induced cascade of electrons in semiconductor superlattices. The energy levels of electrons are confined in the superlattice layers. As the electrons hop between gallium arsenide quantum wells in the superlattice they emit phonons. Then, one phonon going in produces two phonons coming out of the superlattice. This process can be stimulated by other phonons and then give rise to an acoustic amplification. Upon the addition of electrons, short-wavelength (in the terahertz range) phonons are produced. Since the electrons are confined to the quantum wells existing within the lattice, the transmission of their energy depends upon the phonons they generate. As these phonons strike other layers in the lattice, they excite electrons, which produce further phonons, which go on to excite more electrons, and so on. Eventually, a very narrow beam of high-frequency ultrasound exits the device. Semiconductor superlattices are used as acoustic mirrors. These superlattice structures must be of the right size, obeying the theory of the multilayer distributed Bragg reflector, in analogy with multilayer dielectric mirrors in optics. Proposed schemes and devices Basic understanding of the SASER development requires the evaluation of some proposed examples of SASER devices and SASER theoretical schemes. Liquid with gas bubbles as the active medium In this proposed theoretical scheme, the active medium is a liquid dielectric (e.g. ordinary distilled water) in which dispersed particles are uniformly distributed. Electrolysis produces gas bubbles that serve as the dispersed particles. A pump wave excited in the active medium produces a periodic variation of the volumes of the dispersed particles (gas bubbles). Since the initial spatial distribution of the particles is uniform, the waves emitted by the particles are added with different phases and give zero on the average. Nevertheless, if the active medium is located in a resonator, then a standing mode can be excited in it. Particles then bunch under the action of the acoustic radiation forces. In this case, the oscillations of the bubbles are self-synchronized and the useful mode amplifies. The similarity of this scheme with the free-electron laser is useful for understanding its theoretical concepts. In an FEL, electrons move through periodic magnetic systems, producing electromagnetic radiation. The radiation of the electrons is initially incoherent, but then, on account of the interaction with the useful electromagnetic wave, the electrons start to bunch in phase and the radiation becomes coherent. Thus, the electromagnetic field is amplified. We note that, in the case of the piezoelectric radiators usually used to generate ultrasound, only the working surface radiates and therefore the working system is two-dimensional. On the other hand, a sound amplification by stimulated emission of radiation device is a three-dimensional system, since the entire volume of the active medium radiates. The active medium, a gas–liquid mixture, fills the resonator. The bubble density in the liquid is initially distributed uniformly in space. As it propagates in such a medium, the pump wave leads to the appearance of an additional quasi-periodic wave. This wave is coupled with the spatial variation of the bubble density under the action of radiation pressure forces. 
Hence, the wave amplitude and the bubble density vary slowly compared with the period of the oscillations. In the theoretical scheme where the usage of resonators is essential, the SASER radiation passes through the resonator walls, which are perpendicular to the direction of propagation of the pump wave. According to an example of an electrically pumped SASER, the active medium is confined between two planes, which are defined by the solid walls of the resonator. The radiation then propagates along an axis parallel to the plane defined by the two resonator walls. The static electric field acting on the liquid with gas bubbles results in the deformation of dielectrics and therefore leads to a change in the volumes of the particles. We note that the electromagnetic waves in the medium propagate with a velocity much greater than the velocity of sound in the same medium. This leads to the assumption that the effective pump wave acting on the bubbles does not depend on the spatial coordinates. The pressure of the pump wave in the system leads to both the appearance of a backward wave and a dynamical instability of the system. Mathematical analyses have shown that two types of losses must be overcome for the generation of oscillations to start. Losses of the first type are associated with the dispersion of energy inside the active medium, and losses of the second type are due to radiation at the ends of the resonator. These types of losses are inversely proportional to the amount of energy stored in the resonator. In general, the disparity of the radiators does not play a role in a mathematical calculation of the starting conditions. Bubbles with resonance frequencies close to the pump frequency make the main contribution to the gain of the useful mode. In contrast, the determination of the starting pressure in ordinary lasers is independent of the number of radiators. The useful mode grows with the number of particles but sound absorption increases at the same time. These two factors neutralize each other. Bubbles play the main role in the energy dispersion in a SASER. A relevant suggested scheme of sound amplification by stimulated emission of radiation using gas bubbles as the active medium was introduced around 1995. The pumping is created by mechanical oscillations of a cylindrical resonator and the phase bunching of bubbles is realized by acoustic radiation forces. A notable fact is that gas bubbles can only oscillate under an external action, but not spontaneously. According to other proposed schemes, the electrostriction oscillations of the dispersed particle volumes in the cylindrical resonator are realized by an alternating electromagnetic field. However, a SASER scheme with an alternating electric field as the pump has a limitation. A very large amplitude of electric field (up to tens of kV/cm) is required to realize the amplification. Such values approach the electrical breakdown strength of liquid dielectrics. Hence, a study proposes a SASER scheme without this limitation. The pumping is created by radial mechanical pulsations of a cylinder. This cylinder contains an active medium—a liquid dielectric with gas bubbles. The radiation emits through the faces of the cylinder. Narrow-gap indirect semiconductors and excitons in coupled quantum wells A proposal for the development of a phonon laser on resonant phonon transitions has been introduced by a group at the Institute of Spectroscopy in Moscow, Russia. Two schemes for steady stimulated phonon generation were mentioned. 
The first scheme exploits a narrow-gap indirect semiconductor or an analogous indirect gap semiconductor heterostructure, where the tuning into resonance of the one-phonon transition of electron–hole recombination can be carried out by external pressure, magnetic or electric fields. The second scheme uses a one-phonon transition between direct and indirect exciton levels in coupled quantum wells. We note that an exciton is an electrically neutral quasiparticle that describes an elementary excitation of condensed matter. It can transport energy without transporting net electric charge. The tuning into the resonance of this transition can be accomplished by engineering the dispersion of the indirect exciton by external in-plane magnetic and normal electric fields. The magnitude of the phonon wave vector in the second proposed scheme is supposed to be determined by the magnitude of the in-plane magnetic field. Therefore, such a SASER is tunable (i.e. its wavelength of operation can be altered in a controlled manner). Common semiconductor lasers can be realised only in direct gap semiconductors. The reasoning behind this is that an electron–hole pair near the minima of their bands in an indirect gap semiconductor can recombine only with the production of a phonon and a photon, due to energy and momentum conservation laws. This kind of process is weak in comparison with electron–hole recombination in a direct semiconductor. Consequently, the pumping of these transitions has to be very intense so as to obtain steady laser generation. Hence, the lasing transition with production of only one particle – a photon – must be resonant. This means that the lasing transition must be allowed by momentum and energy conservation laws to generate in a steady form. Photons have negligible wave vectors and therefore the band extremes have to be in the same position of the Brillouin zone. On the other hand, for devices such as SASERs, acoustic phonons have a considerable dispersion. This leads to the statement that the levels on which the laser should operate must be displaced relative to each other in k-space. K-space refers to a space where quantities are described in terms of momentum and frequency instead of position and time. The conversion between real space and k-space is a mathematical transformation called the Fourier transform, and thus k-space can also be called Fourier space. We note that the difference in energy of the phonon lasing levels has to be smaller than the Debye energy in the semiconductor. Here we can think of the Debye energy as the maximum energy associated with the vibrational modes of the lattice. Such levels can be formed by conduction and valence bands in narrow gap indirect semiconductors. Narrow-gap indirect semiconductor as a SASER system The energy gap in an ordinary semiconductor varies only slightly under the influence of pressure or a magnetic field and thus does not deserve any consideration. On the other hand, in narrow-gap semiconductors this variation of energy is considerable and therefore external pressure or a magnetic field may serve the purpose of tuning into the resonance of the one-phonon interband transition. Note that an interband transition is a transition between the conduction and valence bands. This scheme considers indirect semiconductors instead of direct semiconductors. The reasoning behind this comes from the fact that, due to the k-selection rule in semiconductors, interband transitions with the production of only one phonon can be only those that produce an optical phonon. 
However, optical phonons have a short lifetime (they split into two due to anharmonicity) and therefore they add some important complications. Here we can note that even in the case of a multi-stage process of acoustic phonon creation it is possible to create a SASER. Examples of narrow-gap indirect semiconductors that can be used are the chalcogenides PbTe, PbSe and PbS, with energy gaps of 0.15–0.3 eV. For the same scheme, the usage of a semiconductor heterostructure (layers of different semiconductors) with a narrow gap that is indirect in momentum space between valence and conduction bands may be more effective. This could be more promising since the spatial separation of the layers provides a possibility of tuning the interband transition into resonance by an external electric field. An essential statement here is that this proposed phonon laser can operate only if the temperature is much lower than the energy gap in the semiconductor. During the analysis of this theoretical scheme several assumptions were introduced for simplicity. The method of pumping keeps the system electro-neutral, and the dispersion laws of electrons and holes are assumed to be parabolic and isotropic. The phonon dispersion law is required to be linear and isotropic as well. Since the entire system is electro-neutral, the process of pumping creates electrons and holes at the same rate. A mathematical analysis leads to an equation for the average number of electron–hole pairs per phonon mode per unit volume. In the low-loss limit, this equation gives us a pumping rate for the SASER that is rather moderate in comparison with usual phonon lasers on a p–n transition. Tunable exciton transition in coupled quantum wells It has been mentioned that a quantum well is basically a potential well that confines particles to move in two dimensions instead of three, forcing them to occupy a planar region. In coupled quantum wells there are two possible ways for electrons and holes to be bound into an exciton: the indirect exciton and the direct exciton. In an indirect exciton, the electron and hole are in different quantum wells, in contrast with a direct exciton, where the electron and hole are located in the same well. In a case where the quantum wells are identical, both levels have a two-fold degeneracy. The direct exciton level is lower than the level of the indirect exciton because of the greater Coulomb interaction. Also, an indirect exciton has an electric dipole moment normal to the coupled quantum well, and thus a moving indirect exciton has an in-plane magnetic moment perpendicular to its velocity. Any interaction of its electric dipole with the normal electric field lowers one of the indirect exciton sub-levels, and in sufficiently strong electric fields the moving indirect exciton becomes the ground excitonic level. With these mechanisms in mind, one can select the velocity at which the magnetic dipole interacts with the in-plane magnetic field. This displaces the minimum of the dispersion law away from the radiation zone. The importance of this lies in the fact that an electric field normal to the coupled quantum wells and an in-plane magnetic field can control the dispersion of the indirect exciton. A normal electric field is needed for tuning the transition direct exciton → indirect exciton + phonon into resonance, and its magnitude is a linear function of the magnitude of the in-plane magnetic field. We note that the mathematical analysis of this scheme considers longitudinal acoustic (LA) phonons instead of transverse acoustic (TA) phonons. 
This keeps the numerical estimates simpler. Generally, transverse acoustic (TA) phonons would be preferable because TA phonons have lower energy and a longer lifetime than LA phonons. Therefore, their interaction with the electronic subsystem is weak. In addition, simpler quantitative evaluations assume that the direct exciton level is pumped by laser irradiation. A further analysis of the scheme can help us to establish differential equations for the direct exciton, indirect exciton and phonon modes. The solution of these equations shows that, separately, the phonon and indirect exciton modes have no definite phase and only the sum of their phases is defined. The aim here is to check whether the operation of this scheme with a rather moderate pumping rate can hold against the fact that excitons in coupled quantum wells have low dimensionality in comparison to phonons. Hence, phonons not confined in the coupled quantum well are considered. An example is the longitudinal optical (LO) phonons in an AlGaAs/GaAs heterostructure; the phonons present in this proposed system are therefore three-dimensional. Differences in the dimensionalities of phonons and excitons cause the upper level to transform into many states of the phonon field. By applying this information to the specific equations we arrive at the desired result. There is no additional requirement for the laser pumping despite the difference in phonon and exciton dimensionalities. A tunable two-level system Phonon laser action has been proposed in a wide range of physical systems (e.g. semiconductors). A 2012 publication from the Department of Applied Physics at the California Institute of Technology (Caltech) introduces a demonstration of a compound micro-cavity system, coupled with a radio-frequency mechanical mode, which operates in close analogy to a two-level laser system. This compound micro-cavity system can also be called a "photonic molecule". Hybridized orbitals of an electrical system are replaced by optical supermodes of this photonic molecule, while the transitions between their corresponding energy levels are induced by a phonon field. For typical conditions of the optical micro-resonators, the photonic molecule behaves as a two-level laser system. Nevertheless, there is a bizarre inversion between the roles of the active medium and the cavity modes (laser field). The medium becomes purely optical and the laser field is provided by the material as a phonon mode. An inversion produces gain, causing phonon laser action above a pump power threshold of around 7 μW. The proposed device is characterized by a continuously tunable gain spectrum that selectively amplifies mechanical modes from radio-frequency to microwave rates. Viewed as a Brillouin process, the system accesses a regime in which the phonon plays the role of the Stokes wave. Stokes wave refers to a non-linear and periodic surface wave on an inviscid fluid (an ideal fluid assumed to have no viscosity) layer of constant mean depth. For this reason it should also be possible to controllably switch between phonon and phonon laser regimes. Compound optical microcavity systems provide beneficial spectral controls. These controls impact both phonon laser action and cooling, and define some finely spaced optical levels whose transition energies are proportional to phonon energies. These level spacings are continuously tunable by a significant adjustment of the optical coupling. 
Therefore, amplification and cooling occur around a tunable line center, in contrast with some cavity optomechanical phenomena. The creation of these finely spaced levels does not require increasing the optical microcavity dimensions. Hence, these finely spaced levels do not affect the optomechanical interaction strength to a significant degree. The approach uses intermodal coupling induced by radiation pressure and can also provide a spectrally selective means to detect phonons. Moreover, some evidence of intermodal cooling is observed in this kind of experiment, and thus there is an interest in optomechanical cooling. Overall, an extension to multilevel systems using multiple coupled resonators is possible. Two-level system In a two-level system, the particles have only two available energy levels, separated by some energy difference: ΔE = E2 − E1 = hν, where ν is the frequency of the associated electromagnetic wave of the photon emitted and h is the Planck constant. Also note: E2 > E1. These two levels are the excited (upper) and ground (lower) states. When a particle in the upper state interacts with a photon matching the energy separation of the levels, the particle may decay, emitting another photon with the same phase and frequency as the incident photon. Therefore, by pumping energy into the system we can have a stimulated emission of radiation—which means that the pump forces the system to release a big amount of energy at a specific time. A fundamental requirement for lasing, namely the population inversion, is not actually possible in a two-level system, and therefore a two-level laser is not possible. In a two-level atom the pump is, in a way, the laser itself. Coherent terahertz amplification in a Stark ladder superlattice The amplification of coherent terahertz sound in a Wannier–Stark ladder superlattice was achieved in 2009, according to a publication from the School of Physics and Astronomy at the University of Nottingham. The Wannier–Stark effect exists in superlattices. Electron states in quantum wells respond sensitively to moderate electric fields, either by the quantum confined Stark effect in the case of wide barriers or by Wannier–Stark localization in the case of a superlattice. Both effects lead to large changes of the optical properties near the absorption edge, which are useful for intensity modulation and optical switching. Namely, from a mathematical point of view, if an electric field is applied to a superlattice, the relevant Hamiltonian exhibits an additional scalar potential. If an eigenstate exists, then the states corresponding to wave functions shifted by an integer number of superlattice periods are eigenstates of the Hamiltonian as well. These states are equally spaced both in energy and real space and form the so-called Wannier–Stark ladder. In the proposed scheme, the application of an electrical bias to a semiconductor superlattice increases the amplitude of coherent folded phonons generated by an optical pulse. This increase of the amplitude is observed for those biases in which the energy drop per period of the superlattice is greater than the phonon energy. If the superlattice is biased such that the energy drop per period of the superlattice exceeds the width of the electronic minibands (the Wannier–Stark regime), the electrons become localized in the quantum wells and vertical electron transport takes place via hopping between neighboring quantum wells, which may be phonon assisted. 
As it had been shown previously, under these conditions stimulated phonon emission can become the dominant phonon-assisted hopping process for phonons of an energy value close to the Stark splitting. Thus, coherent phonon amplification is theoretically possible in this type of system. Together with the increase in amplitude, the spectrum of the bias-induced oscillations is narrower than the spectrum of the coherent phonons at zero bias. This shows that coherent amplification of phonons due to stimulated emission takes place in the structure under electrical pumping. A bias voltage is applied to a weakly coupled n-doped GaAs/AlAs superlattice and increases the amplitude of the coherent hypersound oscillations generated by a femtosecond optical pulse. Evidence of hypersound amplification by stimulated emission of phonons emerges in a system where an inversion of the electron populations for phonon-assisted transitions exists. This evidence is provided by the bias-induced amplitude increase and the experimentally observed spectral narrowing of the superlattice phonon mode with a frequency of 441 GHz. The main aim of this type of experiment is to demonstrate that coherent amplification of THz sound can be realized. The THz stimulated-phonon-induced transitions between the electron superlattice states lead to this coherent amplification, provided a population inversion is present. An essential step towards coherent generation ("sasing") of THz sound and other active hypersound devices has been provided by this achievement of THz sound amplification. Generally, in a device where the threshold for "sasing" is achieved, the technique described by this proposed scheme could be used to measure the coherence time of the emitted hypersound (a rough numerical illustration of the energy and bias-field scales involved is sketched below). See also Acoustics Laser Maser Optoelectronics Ultrasound References and notes Further reading and works referred to B.A. Glavin, V.A. Kochelap, T.L. Linnik, P. Walker, A.J. Kent and M. Henini, Monochromatic terahertz acoustic phonon emission from piezoelectric superlattices, Journal of Physics: Conference Series 92 (2007). K. Vahala, M. Herrmann, S. Knunz, V. Batteiger, G. Saathoff, T. W. Hansch and Th. Udem, A phonon laser. Transducers Acoustics
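The energy and field scales in the Nottingham experiment described above follow from E = hν and from requiring the potential drop per superlattice period to match the phonon energy. In the Python sketch below the superlattice period is an assumed, illustrative value, not the one from the actual device.

h = 6.626e-34        # Planck constant, J*s
e = 1.602e-19        # elementary charge, C

phonon_freq = 441e9                  # Hz, the superlattice phonon mode quoted above
period = 10e-9                       # m, assumed superlattice period (illustrative)

phonon_energy_eV = h * phonon_freq / e       # ~1.8e-3 eV, i.e. ~1.8 meV
# Setting the energy drop per period e*F*d equal to the phonon energy gives
# F = (energy in eV) / d, in volts per metre; divide by 1e5 to get kV/cm.
bias_field_kV_cm = phonon_energy_eV / period / 1e5

print(f"phonon energy ~ {phonon_energy_eV*1e3:.2f} meV, "
      f"bias field ~ {bias_field_kV_cm:.2f} kV/cm for a {period*1e9:.0f} nm period")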
Sound amplification by stimulated emission of radiation
[ "Physics" ]
7,672
[ "Classical mechanics", "Acoustics" ]
7,206,776
https://en.wikipedia.org/wiki/Origanum%20%C3%97%20hybridum
Origanum × hybridum, synonym Origanum × pulchellum, is an ornamental plant of hybrid origin. Its two parents are O. dictamnus and O. sipyleum. It is known as the showy marjoram or the showy oregano. References External links DFT Digital Library - Vascular Plant Images: Origanum pulchellum hybridum Hybrid plants
Origanum × hybridum
[ "Biology" ]
84
[ "Hybrid plants", "Plants", "Hybrid organisms" ]
7,206,824
https://en.wikipedia.org/wiki/Truck%20scale
A truck scale (US), weighbridge (non-US) or railroad scale is a large set of scales, usually mounted permanently on a concrete foundation, that is used to weigh entire rail or road vehicles and their contents. By weighing the vehicle both empty and when loaded, the load carried by the vehicle can be calculated. The key components that a weighbridge uses to make the weight measurement are load cells. Weight certification in the United States Commercial scales have to be National Type Evaluation Program (NTEP) approved or certified. The certification is issued by the National Conference on Weights and Measures (NCWM), in accordance with the National Institute of Standards and Technology (NIST) "Handbook 44" specifications and tolerances, through Conformity Assessment and the Verified Conformity Assessment Program (VCAP) Legal for trade Handbook 44: General Code paragraph G-A.1.; and the NIST Handbook 130 (Uniform Weights and Measures Law; Section 1.13.) define Commercial Weighing and Measuring Equipment as follows; NTEP approved scales are generally considered those scales which are intended by the manufacturer for use in commercial applications where products are sold by weight. NTEP Approved is also known as Legal for Trade or complies with Handbook 44. NTEP scales are commonly used for applications ranging from weighing cold cuts at the deli and fruit at the roadside farm stand, to shipping centers for determining shipping cost, to weighing gold and silver, and more. Rail weighbridge A rail weighbridge is used to weigh rolling stock including railroad cars, goods wagons and locomotives, empty or loaded. When loaded, the net weight of the cargo is the gross weight less the tare weight, when known. It is also used to weigh trams. There are different types, but all of them have electronic sensors built into the track that measure the weight. All designs have in common that there must be a sufficient approach and departure distance in front of and behind the respective scale. All of them can measure independently of the direction of travel and whether the train is being pushed or pulled. In principle, a distinction is made between three different types of construction: 1. Dynamic track weighbridge The dynamic weighbridge consists of one or more weighbridges that can be connected together. The construction of the weighbridge is similar to that of static track scales, with load cells and a weighing platform. The rails are applied to the weighing platform and are designed with rail bevelling. Rail switches are integrated into the rails to detect the position of the wagons on the scale. Together with the weighing terminal and the software, the weight of the individual wagons or the bogies is determined dynamically during the passage at up to 10 km/h. Advantages: Weighing accuracy class up to 0.2 for individual wagon weights in accordance with calibration regulations and OIML-R 106, Due to the modular design, liquids can also be dynamically weighed in a verifiable manner, Suitable as a static reference scale for calibration, thus saving costs with every recalibration, A weighbridge is very robust and durable because it is constructed like a static track scale. Disadvantages: No determination of wheel load and axle loads; however, the design can be expanded to include integrated axle load and wheel load measurement with force sensors in the track. 2. 
Dynamic track scales with strain gauges in the track For dynamic track scales with force sensors, several force sensors are drilled and pressed into the track. When a train passes over the scales at up to 30 km/h, the rail is deformed by the mass of the vehicle. The change in material stress deforms the sensor, in which strain gauges are mounted as in a classic load cell. Thus, the weight of the individual wheelset or bogie can be calculated from the specific deformation behaviour of the rail. Advantages: Can be used as a wheel load scale and axle load scale, Higher measuring speeds possible than with the other two designs, Comparatively inexpensive due to the use of only a small amount of hardware and little track construction work. Disadvantages: Not calibratable, Accuracy depends on passing speed, Can only be used for solids. 3. Dynamic track scales based on weighing sleepers A dynamic track weigher based on weighing sleepers is, like the strain-gauge-in-rail weigher, a gapless construction without rail cuts. In simple terms, several sleepers are removed from the track and replaced by weighing sleepers. Load cells are installed in these sleepers. Compared to the weighbridge, the gapless (and thus force-coupled) design means that the scale cannot be statically adjusted, but can only operate purely dynamically. This requires a very stable substructure without a jump in stiffness. The difference from the scale with strain gauges in the rail is that calibratable sensors can be used for this variant, and the scale is therefore calibratable. Advantages: Weighing accuracy class up to 0.2 for individual wagon weights in accordance with calibration regulations and OIML-R 106, Like the scales with strain gauges in the rail, the hardware volume is low, Modular design also enables legal-for-trade dynamic weighing of liquids. Disadvantages: Static reference scale required for dynamic calibration, which increases the costs for recalibration, Costly substructure/track construction work required (to ensure long-term stability, a resin-based ballast bonding, a procedure that creates an almost fixed track, is usually used for the weighing track). Types Electronic (deep pit type) Electronic (pit less type) Digital (deep pit type) Digital (shallow pit) Digital (pit less type) Rail Weighbridge Movable Weighbridge Mechanical weighbridge Mechanical (digital type) Electro-mechanical Portable weighbridge Axle scales Portable ramp end scales In-Motion weighbridge Design concept Truck scales can be surface mounted, with a ramp leading up a short distance and the weighing equipment underneath, or they can be pit mounted, with the weighing equipment and platform in a pit so that the weighing surface is level with the road. They are typically built from steel or concrete and by nature are extremely robust. In earlier versions the bridge is installed over a rectangular pit that contains levers that ultimately connect to a balance mechanism. The most complex portion of this type is the arrangement of levers underneath the weighbridge, since the response of the scale must be independent of the distribution of the load. Modern devices use multiple load cells that connect to electronic equipment to totalize the sensor inputs. In either type of semi-permanent scale the weight readings are typically recorded in a nearby hut or office. Many weighbridges are now linked to a PC which runs truck scale software capable of printing tickets and providing reporting features. 
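Totalizing the load cells and deriving the net load from a stored tare, as described in the design section above, amounts to very little arithmetic. In the Python sketch below the cell readings and the tare value are invented purely for illustration.

# Readings from the individual load cells under the weighbridge deck, in kilograms
load_cell_readings = [8120.0, 8090.0, 7980.0, 8010.0]

gross_weight = sum(load_cell_readings)       # what the totalizer reports
tare_weight = 14500.0                        # empty-vehicle weight on record, kg
net_weight = gross_weight - tare_weight      # weight of the load actually carried

print(f"gross {gross_weight:.0f} kg, tare {tare_weight:.0f} kg, net {net_weight:.0f} kg")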
Uses Truck scales can be used for two main purposes: Selling or charging by weight over the bridge (Trade Approved) Check weighing both axle weights and gross vehicle weights. This helps to stop axle overloading and possible heavy fines. They are used in industries that manufacture or move bulk items, such as in mines or quarries, garbage dumps / recycling centers, bulk liquid and powder movement, household goods, and electrical equipment. Since the weight of the vehicle carrying the goods is known (and can be ascertained quickly if it is not known, by the simple expedient of weighing the empty vehicle) they are a quick and easy way to measure the flow of bulk goods in and out of different locations. A single axle truck scale or axle weighing system can be used to check individual axle weights and gross vehicle weights to determine whether the vehicle is safe to travel on the public highway without being stopped and fined by the authorities for being overloaded. Similar to the full-size truck scale, these systems can be pit mounted with the weighing surface flush to the level of the roadway, or surface mounted. For many uses (such as at police over-the-road truck weigh stations or temporary road intercepts) weighbridges have been largely supplanted by simple and thin electronic weigh cells, over which a vehicle is slowly driven. A computer records the output of the cell and accumulates the total vehicle weight. By weighing the force of each axle it can be assured that the vehicle is within statutory limits, which typically will impose a total vehicle weight, a maximum weight within an axle span limit and an individual axle limit. The former two limits ensure the safety of bridges while the latter protects the road surface (a minimal numerical check of such limits is sketched at the end of this article). Portable versions Portable truck scales can also be found in use around the world. A portable truck scale will have a lower framework that can be placed on non-typical surfaces such as dirt. These scales retain the same level of accuracy as a pit-type scale, with accuracy of up to ±1%. The first recorded portable truck scales in the US were units operated by the Weight Patrol of the Los Angeles Motor Patrol in 1929. Four such weighing units were used, with one under each of the truck's wheels. Each unit could record up to . Technological advancement Digital Load cells : Digital load cells have replaced traditional analog ones due to their superior accuracy, faster response times, and better resistance to environmental factors. These load cells offer real-time weight data with reduced signal interference. Weighbridge Software Integration : Weighbridge software has been developed to streamline data collection, analysis, and reporting. This software simplifies integration with other business systems, improving compliance tracking, inventory management, and billing. Remote Monitoring & Connectivity : Weighbridges now feature remote monitoring capabilities, allowing users to access weight data and system status in real-time from a distance. This feature enhances efficiency by providing preventive maintenance and troubleshooting capabilities. In-Motion weighing : In-motion weighbridge systems have revolutionized truck weighing by allowing vehicles to be weighed while moving slowly over the scale. This eliminates the need for stopping for weighing, improving traffic flow and saving time. RFID Technology : RFID technology is being integrated into weighbridge systems to automate the identification of vehicles and goods. 
This improves data accuracy, speeds up the weighing process, and reduces errors. Imaging : Advanced camera systems capture images of vehicles and their loads during the weighing process. This visual evidence can be useful in dispute resolution, record-keeping, and verification. Data Analytics & Reporting : Weighbridge technology now includes powerful data analytics tools that help organizations draw insights from weight data. These insights can aid in making informed operational decisions, identifying patterns, and optimizing load distribution. Mobile Apps & Cloud Integration : Mobile applications allow users to interact with weighbridge systems remotely and access reports, alerts, and real-time weight data. Integration with the cloud ensures secure data storage and cross-platform accessibility. Sustainability : Weighbridge designs now incorporate solar-powered systems and energy-saving components to minimize their environmental impact. Enhanced Durability & Construction : Weighbridge construction materials have advanced to withstand heavy usage, harsh weather conditions, and corrosive environments, resulting in longer lifespans and reduced maintenance requirements. See also On-board scale Tare weight Weigh lock Weigh station Weighing scale References Bridges Weighing instruments Measuring instruments
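As referenced in the uses section above, checking a vehicle against statutory limits is a comparison per axle plus one on the total. The limit values and axle readings below are invented for illustration and do not correspond to any particular jurisdiction.

axle_weights = [6200.0, 9800.0, 9700.0]     # kg, one reading per axle (illustrative)

MAX_AXLE_KG = 10000.0       # individual axle limit (assumed)
MAX_GROSS_KG = 26000.0      # gross vehicle weight limit (assumed)

gross = sum(axle_weights)
axles_ok = all(w <= MAX_AXLE_KG for w in axle_weights)
gross_ok = gross <= MAX_GROSS_KG

print(f"gross {gross:.0f} kg, axle limits respected: {axles_ok}, gross limit respected: {gross_ok}")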
Truck scale
[ "Physics", "Technology", "Engineering" ]
2,241
[ "Structural engineering", "Weighing instruments", "Mass", "Measuring instruments", "Bridges", "Matter" ]
7,206,827
https://en.wikipedia.org/wiki/Suspended%20solids
Suspended solids refers to small solid particles which remain in suspension in water as a colloid or due to motion of the water. Suspended solids can be removed by sedimentation if their size or density is comparatively large, or by filtration. It is used as one indicator of water quality and of the strength of sewage, or wastewater in general. It is an important design parameter for sewage treatment processes. It is sometimes abbreviated SS, but is not to be confused with settleable solids, also abbreviated SS, which contribute to the blocking of sewer pipes. Explanation Suspended solids are important as pollutants and pathogens are carried on the surface of particles. The smaller the particle size, the greater the total surface area per unit mass of particles, and so the higher the pollutant load that is likely to be carried. Removal Removal of suspended solids is generally achieved through the use of sedimentation and/or water filters (usually at a municipal level). By eliminating most of the suspended solids in a water supply, the water is usually rendered close to drinking quality. This is followed by disinfection to ensure that any free floating pathogens, or pathogens associated with the small remaining amount of suspended solids, are rendered ineffective. Effectiveness of filtering The use of a very simple cloth filter, consisting of a folded cotton sari, drastically reduces the load of cholera carried in the water, and is suitable for use by the very poor; in this case, an appropriate technology method of disinfection might be added, such as solar water disinfection. A major exception to this generalization is arsenic contamination of groundwater, as arsenic is a very serious pollutant which is soluble, and thus not removed when suspended solids are removed. This makes it very difficult to remove, and finding an alternative water source is often the most realistic option. See also Bottom trawling Total suspended solids Turbidity References Water
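The link between particle size and pollutant-carrying capacity can be made concrete with the specific surface area of a sphere, 3/(ρr). The Python sketch below assumes idealised spherical particles with a mineral-like density of 2650 kg/m^3; both the shape and the density are illustrative assumptions, not values from the article.

density = 2650.0                       # kg/m^3, assumed quartz-like particle density

for radius_um in (100.0, 10.0, 1.0):   # roughly sand-, silt- and clay-sized particles
    r = radius_um * 1e-6               # metres
    specific_area = 3.0 / (density * r)   # m^2 per kg: (4*pi*r^2) / (density * 4/3*pi*r^3)
    print(f"r = {radius_um:>5.0f} um  ->  {specific_area:8.1f} m^2/kg")
# Each tenfold reduction in radius gives ten times the surface area per kilogram of solids.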
Suspended solids
[ "Environmental_science" ]
391
[ "Water", "Hydrology" ]
7,207,519
https://en.wikipedia.org/wiki/Functional%20square%20root
In mathematics, a functional square root (sometimes called a half iterate) is a square root of a function with respect to the operation of function composition. In other words, a functional square root of a function g is a function f satisfying f(f(x)) = g(x) for all x. Notation Notations expressing that f is a functional square root of g are f = g^(1/2), or rather f = g^(∘1/2) (see Iterated function#Fractional_iterates_and_flows,_and_negative_iterates), although the former leaves the usual ambiguity with taking the function to that power in the multiplicative sense, just as f^2 = f ∘ f can be misinterpreted as x ↦ f(x)^2. History The functional square root of the exponential function (now known as a half-exponential function) was studied by Hellmuth Kneser in 1950. The solutions of f(f(x)) = x over the real numbers (the involutions of the real numbers) were first studied by Charles Babbage in 1815, and this equation is called Babbage's functional equation. A particular solution is f(x) = a − x for any constant a. Babbage noted that for any given solution f, its functional conjugate Ψ^(−1) ∘ f ∘ Ψ by an arbitrary invertible function Ψ is also a solution. In other words, the group of all invertible functions on the real line acts on the subset consisting of solutions to Babbage's functional equation by conjugation. Solutions A systematic procedure to produce arbitrary functional n-roots (including arbitrary real, negative, and infinitesimal n) of functions relies on the solutions of Schröder's equation. Infinitely many trivial solutions exist when the domain of a root function f is allowed to be sufficiently larger than that of g. Examples f(x) = 2x^2 is a functional square root of g(x) = 8x^4. A functional square root of the nth Chebyshev polynomial, T_n(x), is f(x) = cos(√n arccos(x)), which in general is not a polynomial. f(x) = x/(√2 + x(1 − √2)) is a functional square root of g(x) = x/(2 − x). A half iterate of the sine function can be obtained from Schröder's equation, although such a half iterate is not unique. See also Iterated function Function composition Abel equation Schröder's equation Flow (mathematics) Superfunction Fractional calculus Half-exponential function References Functional analysis Functional equations
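The polynomial and Chebyshev examples above can be checked numerically; this is a minimal sketch in which the sample points and tolerances are arbitrary choices, and the Chebyshev domain is restricted so the inner arccos(cos(·)) round-trips exactly.

```python
# Minimal sketch verifying two functional square roots numerically.
# Sample points and tolerances are arbitrary; the domain for the Chebyshev
# example is restricted so that sqrt(n)*arccos(x) stays within [0, pi].

import math

def compose(f, x):
    return f(f(x))

# Example 1: f(x) = 2x^2 is a functional square root of g(x) = 8x^4.
f1 = lambda x: 2 * x ** 2
g1 = lambda x: 8 * x ** 4
assert all(math.isclose(compose(f1, x), g1(x)) for x in (-1.5, -0.3, 0.0, 0.7, 2.0))

# Example 2: f(x) = cos(sqrt(n) * arccos(x)) is a functional square root of
# the n-th Chebyshev polynomial T_n(x) = cos(n * arccos(x)).
n = 3
f2 = lambda x: math.cos(math.sqrt(n) * math.acos(x))
T3 = lambda x: 4 * x ** 3 - 3 * x          # closed form of T_3
assert all(math.isclose(compose(f2, x), T3(x), abs_tol=1e-12)
           for x in (0.30, 0.55, 0.80, 0.99))

print("both functional square roots check out on the sampled points")
```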
Functional square root
[ "Mathematics" ]
463
[ "Functions and mappings", "Mathematical analysis", "Functional analysis", "Functional equations", "Mathematical objects", "Equations", "Mathematical relations" ]
7,207,607
https://en.wikipedia.org/wiki/Jinitiator
Jinitiator is a Java virtual machine (JVM) made and distributed by Oracle Corporation. It allows a web-enabled Oracle Forms client application to be run inside a web browser. This JVM is called only when a web-based Oracle application is accessed. This behavior is implemented by a plug-in or an ActiveX control, depending on the browser. The first two numbers of the version roughly follow the Sun Java numbering convention. This means that, for instance, Jinitiator 1.3.1.25 is based upon JDK 1.3 or later. The main reason for Oracle to develop Jinitiator was to support Oracle Forms on the web despite bugs in early releases of the JDK. In 2007 Oracle announced that, for the upcoming release of Forms version 11, Jinitiator would no longer be needed and that users should migrate to the Sun Java plug-in. In January 2010, a product obsolescence desupport notice was posted saying that JInitiator would no longer be supported and that all users should upgrade. Since version 10.1.2.0.2 of Forms in 2010, Oracle began working closely with Sun to completely phase out Jinitiator. The latest version (released in 2008) is 1.3.1.30 and is still available at the Oracle website. Obsolete versions of Jinitiator can be made to work under Windows 7 with Internet Explorer 9, but this approach is not supported or recommended by Oracle. References External links Vulnerability Note VU#474433: Oracle JInitiator ActiveX control stack buffer overflows Oracle software Java virtual machine
Jinitiator
[ "Technology" ]
329
[ "Computing stubs", "Software stubs" ]
7,208,118
https://en.wikipedia.org/wiki/Stellar%20mass%20loss
Stellar mass loss is a phenomenon observed in stars by which stars lose some mass over their lives. Mass loss can be caused by triggering events that cause the sudden ejection of a large portion of the star's mass. It can also occur when a star gradually loses material to a binary companion or due to strong stellar winds. Massive stars are particularly susceptible to losing mass in the later stages of evolution. The amount and rate of mass loss varies widely based on numerous factors. Stellar mass loss plays a very important role in stellar evolution, the composition of the interstellar medium, nucleosynthesis as well as understanding the populations of stars in clusters and galaxies. Causes Every star undergoes some mass loss in its lifetime. This could be caused by its own stellar wind, or by interactions with the outside environment. Additionally, massive stars are particularly vulnerable to significant mass loss and can be influenced by a number of factors, including: Gravitational attraction of a binary companion Coronal mass ejection-type events Ascension to red giant or red supergiant status Some of these causes are discussed below, along with the consequences of such phenomena. Solar wind The solar wind is a stream of plasma released from the upper atmosphere of the Sun. The high temperatures of the corona allow charged particles and other atomic nuclei to gain the energy needed to escape the Sun's gravity. The sun loses mass due to the solar wind at a very small rate, on the order of 10^-14 solar masses per year. The solar wind carries trace amounts of the nuclei of heavy elements fused in the core of the sun, revealing the inner workings of the sun while also carrying information about the solar magnetic field. In 2021, the Parker Solar Probe measured 'sound speed' and magnetic properties of the solar wind plasma environment. Binary Mass Transfer Often when a star is a member of a pair of close-orbiting binary stars, the tidal attraction of the gasses near the center of mass is sufficient to pull gas from one star onto its partner. This effect is especially prominent when the partner is a white dwarf, neutron star, or black hole. Mass loss in binary systems has particularly interesting outcomes. If the secondary star in the system overflows its Roche lobe, it loses mass to the primary, greatly altering their evolution. If the primary star is a white dwarf, the system can develop into a Type Ia supernova. Another scenario for the same system is the formation of a cataclysmic variable or a 'nova'. If the accreting star is a neutron star or a black hole, the resultant system is an X-ray binary. A study in 2012 found that more than 70% of all massive stars exchange mass with a companion, which leads to a binary merger in one-third of the cases. Since the trajectory of evolution of these stars is greatly altered due to the mass loss to the companion, models of stellar evolution are focusing on replicating these observations. Mass ejection Certain classes of stars, especially Wolf-Rayet stars, are sufficiently massive and, as they evolve, their radius increases. This causes their hold on their upper layers to weaken, allowing small disturbances to blast large amounts of the outer layers into space. Events such as solar flares and coronal mass ejections are mere blips on the mass loss scale for low mass stars (like our sun). However, these same events cause catastrophic ejection of stellar material into space for massive stars like Wolf-Rayet stars.
Such stars lose mass prolifically, shedding much of it into the surrounding interstellar medium over their lifetimes. As they are stripped of their hydrogen envelopes, they continue to expel heavier elements such as helium, carbon, nitrogen and oxygen, with some of the most massive stars ejecting elements as heavy as aluminum. Red giant mass loss Stars which have entered the red giant phase are notorious for rapid mass loss. As above, the gravitational hold on the upper layers is weakened, and they may be shed into space by violent events such as the beginning of a helium flash in the core. The final stage of a red giant's life will also result in prodigious mass loss as the star loses its outer layers to form a planetary nebula. The structures of these nebulae provide insight into the history of the mass loss of the star. Over-densities and under-densities reveal the periods where the star was actively losing mass while the distribution of these clumps in space hints at the physical cause of the loss. Uniform spherical shells in the nebula point towards symmetric stellar winds while asymmetry and lack of uniform structure point to mass ejections and stellar flares as the cause. This phenomenon takes on a new scale when looking at AGB stars. Stars found on the Asymptotic giant branch of the Hertzsprung–Russell diagram are the most prone to mass loss in the later stages of their evolution compared to others. This phase is when the greatest amount of mass is lost for a single star that does not go on to explode in a supernova. See also Red giant Red supergiant Betelgeuse Coronal mass ejection Helium flash External Links and Further Reading Simulation of a Red Supergiant displaying instability and mass loss A Review of Stellar Mass Loss in Massive Stars Effects of Mass Loss of Intermediate stars on the Interstellar Medium References Concepts in stellar astronomy Stellar phenomena
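To put the solar-wind figure quoted in the Solar wind section above in context, the sketch below converts an assumed mass flux into solar masses per year; the flux is an illustrative round number, not a measurement from the article.

```python
# Minimal sketch: converting an assumed solar-wind mass flux into solar masses
# per year. The flux below (~1.5 billion kg/s) is an illustrative round number,
# not a value taken from the article.

SOLAR_MASS_KG = 1.989e30            # mass of the Sun in kilograms
SECONDS_PER_YEAR = 3.156e7          # one year in seconds
ASSUMED_WIND_FLUX_KG_PER_S = 1.5e9  # assumed solar-wind mass flux

mass_lost_per_year_kg = ASSUMED_WIND_FLUX_KG_PER_S * SECONDS_PER_YEAR
mass_lost_per_year_msun = mass_lost_per_year_kg / SOLAR_MASS_KG

print(f"~{mass_lost_per_year_kg:.2e} kg/yr "
      f"= ~{mass_lost_per_year_msun:.1e} solar masses per year")
# -> roughly 2e-14 solar masses per year, i.e. on the order of 10^-14
```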
Stellar mass loss
[ "Physics" ]
1,088
[ "Concepts in stellar astronomy", "Physical phenomena", "Stellar phenomena", "Concepts in astrophysics" ]
7,208,255
https://en.wikipedia.org/wiki/Rostelecom
Rostelecom (Ростелеком) is Russia’s largest provider of digital services for a wide variety of consumers, households, private businesses, government and municipal authorities, and other telecom providers. Rostelecom interconnects all local public operators’ networks into a single national network for long-distance service. In other words, if one makes a long-distance call or originates Internet contact to or from Russia, it is likely that Rostelecom is providing part of the service. The company's stock trades primarily on the Moscow Exchange. History Prior to 1990 the Ministry of Communications of the USSR provided telecommunications services. On June 26, 1990, the Ministry established a state-owned joint-stock company Sovtelekom, which obtained the rights to operate the telecommunications network of the USSR. On December 30, 1992, by order of the State Property Committee of Russia, the state-owned enterprise Rostelecom was organized, consisting of 20 state long-distance and international call enterprises, as well as the communication equipment enterprise Intertelekom. Throughout the 1990s, the company, which was part of Svyazinvest, was the sole long-distance operator in Russia. Alongside it, local companies operated in the different regions of Russia under the umbrella of Svyazinvest while Rostelecom connected between their networks. In 2001, these companies were merged to form a number of regional incumbent telecommunications operators: CentreTelecom, SibirTelecom, Dalsvyaz, Uralsvyazinform, VolgaTelecom, North-West Telecom, Southern Telecommunications Company and Dagsvyazinform. In 2011, Svyazinvest was liquidated and the regional subsidiaries were merged into Rostelecom. In 2021, the company's revenue amounted to 351 billion rubles. On October 18, 2006, Rostelecom received a quality certificate for its IP-MPLS network and became a backbone ISP. In December 2006, Rostelecom and the Japanese telecommunications company KDDI signed an agreement under the "Transit Europe - Asia" project to build a Nakhodka - Naoetsu line with a total bandwidth of 640 Gbit/s, up from the previous 560 Mbit/s. Sanctions On 24 February 2022, in response to the Russian invasion of Ukraine, the Office of Foreign Assets Control (OFAC) of the United States Department of the Treasury imposed sanctions against Rostelecom. Ownership Owners of Rostelecom ordinary (voting) shares as of November 2021: Federal Agency for State Property Management (38.2%) JSC Telecom Investments (20.98%) VTB Bank (8.44%) Vnesheconombank (3.96%) Operations PJSC Rostelecom is the largest integrated digital services and products provider, operating in all segments of the telecommunications market in Russia. The Company serves millions of households, state and private enterprises across the country. Rostelecom is a key strategic innovator that provides solutions in the following fields: E-Government, cybersecurity, data centres and cloud computing, biometry, healthcare, education and housing & utility services. In the summer of 2019, it was announced that Rostec plans to develop digital healthcare together with PJSC Rostelecom. Land network The company's network is based on extant Russian fiber-optic cable lines - FOCL. By cable the network is connected to countries in Europe and East Asia. Fiber-optic cable lines cross the Russian Federation along the routes «Moscow — Novorossiysk», «Moscow — Khabarovsk» and «Moscow — Saint Petersburg».
IP transit has been allocated to a separate company, RTComm, using Rostelecom's STM-16 FOCL resources, but Rostelecom is building its own STM-64 (9.953 Gbit/s) network, which, as of August 2006, covered Rostov-on-Don, Krasnodar, Volgograd, Stavropol, and planned to cover the whole of Russia by the end of 2006. Rostelecom had 29.2 million local fixed-line voice subscribers, 12.4 million mobile voice subscribers, 7.4 million fixed-line broadband subscribers and 5.5 million pay-TV subscribers at the end of the first quarter of 2010. Satellite network Using the services of the Russian Orbital Group, Rostelecom has built its satellite system for its Eastern region, comprising 11 land stations in Siberia and the Russian Far East. Satellite service for the Western region is being built at this time. Cellular network Throughout the 1990s Rostelecom created subsidiaries that operated cellular networks in different regions of the country, including NSS, Baikalvestkom, Yeniseikom, SkyLink, Volgograd GSM and Akos, which provided mobile services on the territory of 59 regions of Russia, serving more than 13.5 million subscribers. During the 2010s, Rostelecom and its subsidiaries built mobile networks of the third generation in 27 regions of Russia. In total, it planned to install more than 8,000 base stations. Suppliers of equipment and solutions for the 3G+ network are Ericsson and Huawei. In April 2013 the company announced the launch of 3G+ networks in the Sverdlovsk, Kurgan and Chelyabinsk regions, in the south of the Tyumen Oblast and in the Yamalo-Nenets Autonomous Area. This launch followed the introduction of 3G+ services in Perm Krai. Rostelecom's 3G+ network was installed using HSPA+ technology, providing data transfer speeds of up to 21 Mbit/s, with the possibility of upgrading the network to reach speeds of up to 42 Mbit/s if demand requires. The 3G+ network is LTE-ready, so that only minor modifications will be required before the company can roll out its 4G (LTE) network in the future. In June 2013 Rostelecom launched the first part of its LTE network in Sochi for the 2014 Winter Olympics. The company also launched LTE networks in 8 other regions besides Krasnodar Krai by the end of 2013, including Khanty-Mansi Autonomous Okrug, Republic of Khakassia, Republic of North Ossetia–Alania, Sakhalin Oblast, Chukotka Autonomous Okrug, Nenets Autonomous Okrug and the Jewish Autonomous Oblast. In December 2013, Rostelecom's board approved a plan to merge its mobile business into Tele2 Russia, a former division of the Nordic telecoms group Tele2, which sold it in April 2013 to VTB Bank due to the lack of 3G and 4G data licences, limiting its future growth prospects. Rostelecom would get a 45% voting stake in the new company, T2 RTK Holding, in exchange for contributing its standalone mobile subsidiaries and assets, including SkyLink. Tele2 Russia, owned by state-controlled bank VTB and Russian businessmen Yuri Kovalchuk and Alexei Mordashov, will have 55%. Rostelecom and Tele2 Russia together have around 38 million mobile subscribers, or a combined market share of 16%. During the second stage, Rostelecom spun off its integrated mobile businesses into its new wholly owned subsidiary, RT-Mobile, which was expected to have Rostelecom's mobile licences, including the LTE licences, re-issued to it. Analysts said the deal makes sense as "Rostelecom has been less efficient in rolling out mobile networks. 
By relying on the Tele2 team in mobile expansion, Rostelecom removes risks, while remaining open to an upside". In February 2014 Rostelecom and Tele2 signed a framework agreement on the integration of mobile assets into the authorized capital of the joint venture "T2 Rus Holding". At the first stage of integration, Rostelecom transferred the cellular subsidiaries it owned: "Sky Link", "Nizhny Novgorod Cellular Communications", "Baikalwestcom", "Volgograd GSM", "Yenisei Telecom" and ICCO. Network infrastructure Backbone network Regional backhaul network International networks Access networks (FTTB, GPON) Controversy In April 2017, Rostelecom (AS12389) originated 50 prefixes belonging to numerous other autonomous systems (AS). This caused Internet traffic normally destined for these organizations to instead be routed to Rostelecom. The hijacked prefixes belonged to financial institutions (most notably MasterCard and Visa), other telecom companies, and a variety of other organizations. What makes the list of affected networks 'curious' is the high number of financial institutions, such as MasterCard, Visa, Fortis, and Alfa-Bank. The other notable characteristic of this event is that the advertisement included several more prefixes that were more specifically defined than the prefixes normally announced, which makes it less likely that these were unintentionally leaked. In 2017, state-owned Rostelecom was selected to run a Russian national biometric database, with Russian legislators adopting a law to oblige banks and state agencies to enter their customers' biometric information, including facial images and voice samples, into the database. See also List of telecommunications regulatory bodies References External links Telecommunications companies of Russia Internet service providers of Russia Mobile phone companies of Russia Digital television Streaming television Government-owned companies of Russia Companies based in Moscow Telecommunications companies established in 1993 1993 establishments in Russia Companies listed on the Moscow Exchange Companies in the MOEX Russian brands Russian entities subject to U.S. Department of the Treasury sanctions
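The route-hijacking mechanism described in the Controversy section relies on BGP's preference for the most specific matching prefix; the sketch below illustrates longest-prefix-match selection using made-up documentation prefixes and AS numbers, not the actual prefixes involved in the 2017 incident.

```python
# Minimal sketch of longest-prefix-match route selection, illustrating why a
# more specific BGP announcement attracts traffic away from the legitimate
# origin. Prefixes and AS numbers are made-up documentation values, not the
# networks involved in the 2017 incident.

import ipaddress

# Routing table: prefix -> origin AS (hypothetical values).
routes = {
    ipaddress.ip_network("203.0.113.0/24"): "AS64500 (legitimate origin)",
    ipaddress.ip_network("198.51.100.0/22"): "AS64501 (legitimate origin)",
}

def best_route(dst: str):
    """Pick the covering prefix with the longest prefix length (most specific)."""
    addr = ipaddress.ip_address(dst)
    candidates = [net for net in routes if addr in net]
    return max(candidates, key=lambda net: net.prefixlen, default=None)

target = "198.51.100.10"
print("before hijack:", routes[best_route(target)])

# A more specific /24 inside the legitimate /22 is announced by another AS...
routes[ipaddress.ip_network("198.51.100.0/24")] = "AS64512 (hijacker)"

# ...and longest-prefix match now sends the traffic to the new origin.
print("after hijack: ", routes[best_route(target)])
```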
Rostelecom
[ "Technology" ]
2,011
[ "Multimedia", "Streaming television" ]
7,209,279
https://en.wikipedia.org/wiki/Glass%20fiber%20reinforced%20concrete
Glass fiber reinforced concrete (GFRC) is a type of fiber-reinforced concrete. The product is also known as glassfibre reinforced concrete or GRC in British English. Glass fiber concretes are mainly used in exterior building façade panels and as architectural precast concrete. Somewhat similar materials are fiber cement siding and cement boards. Composition GRC (Glass fibre-reinforced concrete) ceramic consists of high-strength, alkali-resistant glass fibre embedded in a concrete & ceramic matrix. In this form, both fibres and matrix retain their physical and chemical identities, while offering a synergistic combination of properties that cannot be achieved with either of the components acting alone. In general, fibres are the principal load-carrying members, while the surrounding matrix keeps them in the desired locations and orientation, acting as a load transfer medium between the fibres and protecting them from environmental damage. The fibres provide reinforcement for the matrix and other useful functions in fibre-reinforced composite materials. Glass fibres can be incorporated into a matrix either in continuous or discontinuous (chopped) lengths. Durability was poor with the original type of glass fibres since the alkalinity of cement reacts with its silica. In the 1970s alkali-resistant glass fibres were commercialized. Alkali resistance is achieved by adding zirconia to the glass. The higher the zirconia content the better the resistance to alkali attack. AR glass fibres should have a Zirconia content of more than 16% to be in compliance with internationally recognized specifications (EN, ASTM, PCI, GRCA, etc). Laminates A widely used application for fibre-reinforced concrete is structural laminate, obtained by adhering and consolidating thin layers of fibres and matrix into the desired thickness. The fibre orientation in each layer as well as the stacking sequence of various layers can be controlled to generate a wide range of physical and mechanical properties for the composite laminate. GFRC cast without steel framing is commonly used for purely decorative applications such as window trims, decorative columns, exterior friezes, or limestone-like wall panels. Properties The design of glass-fibre-reinforced concrete panels uses a knowledge of its basic properties under tensile, compressive, bending and shear forces, coupled with estimates of behavior under secondary loading effects such as creep, thermal response and moisture movement. There are a number of differences between structural metal and fibre-reinforced composites. For example, metals in general exhibit yielding and plastic deformation, whereas most fibre-reinforced composites are elastic in their tensile stress-strain characteristics. However, the dissimilar nature of these materials provides mechanisms for high-energy absorption on a microscopic scale comparable to the yielding process. Depending on the type and severity of external loads, a composite laminate may exhibit gradual deterioration in properties but usually does not fail in a catastrophic manner. Mechanisms of damage development and growth in metal and composite structure are also quite different. Other important characteristics of many fibre-reinforced composites are their non-corroding behavior, high damping capacity and low coefficients of thermal expansion. Glass-fibre-reinforced concrete architectural panels have the general appearance of pre-cast concrete panels, but differ in several significant ways. 
For example, the GFRC panels, on average, weigh substantially less than pre-cast concrete panels due to their reduced thickness. Their low weight decreases loads superimposed on the building’s structural components, making construction of the building frame more economical. Sandwich panels A sandwich panel is a composite of three or more materials bonded together to form a structural panel. It takes advantage of the shear strength of a low-density core material and the high compressive and tensile strengths of the GFRC facing to obtain high strength-to-weight ratios. The theory of sandwich panels and functions of the individual components may be described by making an analogy to an I-beam. The core in a sandwich panel is comparable to the web of an I-beam, which supports the flanges and allows them to act as a unit. The web of the I-beam and the core of the sandwich panels carry the beam shear stresses. The core in a sandwich panel differs from the web of an I-beam in that it maintains continuous support for the facings, allowing the facings to be worked up to or above their yield strength without crimping or buckling. The bonds between the core and facings must be capable of transmitting shear loads between these two components, thus making the entire structure an integral unit. The load-carrying capacity of a sandwich panel can be increased dramatically by introducing light steel framing. Light steel stud framing is similar to conventional steel stud framing for walls, except that the frame is encased in a concrete product. Here, the sides of the steel frame are covered with two or more layers of GFRC, depending on the type and magnitude of external loads. The strong and rigid GFRC provides full lateral support on both sides of the studs, preventing them from twisting and buckling laterally. The resulting panel is lightweight in comparison with traditionally reinforced concrete, yet is strong and durable and can be easily handled. Technical specifications GFRC Material Properties Typical strength properties of GRC Uses GFRC is a versatile material owing to its strength, low weight, and design flexibility. It is most commonly used in the construction industry, both in demanding applications such as architectural cladding suspended several stories above street level and in decorative applications such as interior furniture pieces, GRC jali and elevation screens. Glass fiber reinforced concrete not only reduces the cost of concrete but also enhances its strength. References Concrete Composite materials Fibre-reinforced cementitious materials
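The weight advantage of thin GFRC skins over solid precast panels described above can be illustrated with a simple areal-mass comparison; the thicknesses and densities below are assumed typical values, not figures from the article.

```python
# Minimal sketch: areal mass (kg per m^2 of facade) of a thin GFRC skin versus
# a solid precast concrete panel. Thicknesses and densities are assumed
# typical values for illustration, not figures from the article.

def areal_mass(thickness_m: float, density_kg_m3: float) -> float:
    """Mass per square metre of a flat panel of uniform thickness."""
    return thickness_m * density_kg_m3

gfrc_skin = areal_mass(0.015, 2100.0)   # assumed 15 mm skin, ~2100 kg/m^3
precast = areal_mass(0.100, 2400.0)     # assumed 100 mm panel, ~2400 kg/m^3

print(f"GFRC skin:     {gfrc_skin:6.1f} kg/m^2")
print(f"Precast panel: {precast:6.1f} kg/m^2")
print(f"Weight saving: {100 * (1 - gfrc_skin / precast):.0f}%")
```

Under these assumptions the skin weighs a small fraction of the solid panel per unit area, which is the source of the reduced superimposed loads mentioned above.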
Glass fiber reinforced concrete
[ "Physics", "Engineering" ]
1,230
[ "Structural engineering", "Composite materials", "Materials", "Concrete", "Matter" ]
7,209,369
https://en.wikipedia.org/wiki/Fiber-reinforced%20concrete
Fiber-reinforced concrete or fibre-reinforced concrete (FRC) is concrete containing fibrous material which increases its structural integrity. It contains short discrete fibers that are uniformly distributed and randomly oriented. Fibers include steel fibers, glass fibers, synthetic fibers and natural fibers – each of which lends varying properties to the concrete. In addition, the character of fiber-reinforced concrete changes with varying concretes, fiber materials, geometries, distribution, orientation, and densities. Historical perspective The concept of using fibers as reinforcement is not new. Fibers have been used as reinforcement since ancient times. Historically, horsehair was used in mortar and straw in mudbricks. In the 1900s, asbestos fibers were used in concrete. In the 1950s, the concept of composite materials came into being and fiber-reinforced concrete was one of the topics of interest. Once the health risks associated with asbestos were discovered, there was a need to find a replacement for the substance in concrete and other building materials. By the 1960s, steel, glass (GFRC), and synthetic (such as polypropylene) fibers were used in concrete. Research into new fiber-reinforced concretes continues today. Fibers are usually used in concrete to control cracking due to plastic shrinkage and to drying shrinkage. They also reduce the permeability of concrete and thus reduce bleeding of water. Some types of fibers produce greater impact, abrasion, and shatter resistance in concrete. Larger steel or synthetic fibers can replace rebar or steel completely in certain situations. Fiber-reinforced concrete has all but completely replaced rebar in the underground construction industry, such as in tunnel segments, where almost all tunnel linings are fiber reinforced in lieu of rebar. This may, in part, be due to issues relating to oxidation or corrosion of steel reinforcements. This can occur in climates that are subjected to water or intense and repeated moisture (see Surfside Building Collapse). Indeed, some fibers actually reduce the compressive strength of concrete. Lignocellulosic fibers in a cement matrix can degrade due to the hydrolysis of lignin and hemicelluloses. The amount of fibers added to a concrete mix is expressed as a percentage of the total volume of the composite (concrete and fibers), termed "volume fraction" (Vf). Vf typically ranges from 0.1 to 3%. The aspect ratio (l/d) is calculated by dividing fiber length (l) by its diameter (d). Fibers with a non-circular cross section use an equivalent diameter for the calculation of aspect ratio. If the fiber's modulus of elasticity is higher than that of the matrix (concrete or mortar binder), the fibers help to carry the load by increasing the tensile strength of the material. Increasing the aspect ratio of the fiber usually augments the flexural strength and toughness of the matrix. Greater fiber length results in better anchorage within the matrix, and a finer diameter increases the number of fibers for a given dosage. To ensure that each fiber strand is effective, it is recommended to use fibers longer than the maximum aggregate size. Normal concrete contains aggregate that makes up roughly 35-45% of the mix, so fibers longer than the maximum aggregate size are more effective. However, fibers that are too long and not properly treated at the time of processing tend to "ball" in the mix and create workability problems. Fibers are added for the long-term durability of concrete. Glass and polyester decompose in the alkaline conditions of concrete unless protected by additives or surface treatments. 
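The mix-design quantities defined above — volume fraction (Vf) and aspect ratio (l/d), including the equivalent diameter for non-circular fibers — can be illustrated with a short worked sketch; all dimensions and dosages below are assumed example values, not data from the article.

```python
# Minimal sketch of the mix-design quantities defined above: fiber volume
# fraction (Vf) and aspect ratio (l/d), including the equivalent diameter used
# for non-circular fibers. All dimensions and dosages are assumed example
# values, not data from the article.

import math

def volume_fraction(fiber_volume_m3: float, total_volume_m3: float) -> float:
    """Vf = fiber volume / total composite volume, expressed as a percent."""
    return 100.0 * fiber_volume_m3 / total_volume_m3

def aspect_ratio(length_m: float, diameter_m: float) -> float:
    """l/d for a round fiber."""
    return length_m / diameter_m

def equivalent_diameter(cross_section_area_m2: float) -> float:
    """Diameter of a circle with the same cross-sectional area (non-round fibers)."""
    return math.sqrt(4.0 * cross_section_area_m2 / math.pi)

# Assumed example: 1 m^3 of concrete containing 0.5% fibers by volume,
# with 50 mm long round steel fibers of 0.8 mm diameter.
print(f"Vf           = {volume_fraction(0.005, 1.0):.2f} %")
print(f"aspect l/d   = {aspect_ratio(0.050, 0.0008):.0f}")

# A flat fiber 2.0 mm x 0.3 mm in cross-section uses an equivalent diameter.
d_eq = equivalent_diameter(0.0020 * 0.0003)
print(f"equivalent d = {d_eq * 1000:.2f} mm, aspect l/d = {aspect_ratio(0.050, d_eq):.0f}")
```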
The High Speed 1 tunnel linings incorporated concrete containing 1 kg/m3 or more of polypropylene fibers, of diameter 18 & 32 μm, giving the benefits noted below. Adding fine-diameter polypropylene fibers not only provides reinforcement in the tunnel lining, but also prevents "spalling" and damage to the lining in case of an accidental fire. Benefits Glass fibers can: Improve concrete strength at low cost. Add tensile reinforcement in all directions, unlike rebar. Add a decorative look as they are visible in the finished concrete surface. Polypropylene and nylon fibers can: Improve mix cohesion, improving pumpability over long distances Improve freeze-thaw resistance Improve resistance to explosive spalling in case of a severe fire Improve impact- and abrasion-resistance Increase resistance to plastic shrinkage during curing Improve structural strength Reduce steel reinforcement requirements Improve ductility Reduce crack widths and control the crack widths tightly, thus improving durability Steel fibers can: Improve structural strength Reduce steel reinforcement requirements Reduce crack widths and control the crack widths tightly, thus improving durability Improve impact- and abrasion-resistance Improve freeze-thaw resistance Natural (lignocellulosic, LC) fibers and/or particles can: Improve ductility Contribute to crack control via bridging Reduce the negative environmental impact of the materials (GWP - global warming potential) Reduce weight LC (plant-based) fibers and particles can degrade in a cement matrix Blends of both steel and polymeric fibers are often used in construction projects in order to combine the benefits of both products; structural improvements provided by steel fibers and the resistance to explosive spalling and plastic shrinkage improvements provided by polymeric fibers. In certain specific circumstances, steel fiber or macro synthetic fibers can entirely replace traditional steel reinforcement bar ("rebar") in reinforced concrete. This is most common in industrial flooring but also in some other precasting applications. Typically, these are corroborated with laboratory testing to confirm that performance requirements are met. Care should be taken to ensure that local design code requirements are also met, which may impose minimum quantities of steel reinforcement within the concrete. There are increasing numbers of tunnelling projects using precast lining segments reinforced only with steel fibers. Micro-rebar has also been recently tested and approved to replace traditional reinforcement in vertical walls designed in accordance with ACI 318 Chapter 14. Some developments At least half of the concrete in a typical building component protects the steel reinforcement from corrosion. Concrete using only fiber as reinforcement can result in a saving of concrete, thereby reducing the greenhouse effect associated with it. FRC can be molded into many shapes, giving designers and engineers greater flexibility. High performance FRC (HPFRC) claims it can sustain strain-hardening up to several percent strain, resulting in a material ductility at least two orders of magnitude higher than that of normal concrete or standard fiber-reinforced concrete. HPFRC also claims a unique cracking behavior. When loaded to beyond the elastic range, HPFRC maintains crack width to below 100 μm, even when deformed to several percent tensile strains. Field trials of HPFRC by the Michigan Department of Transportation resulted in early-age cracking. 
Recent studies on high-performance fiber-reinforced concrete in a bridge deck found that adding fibers provided residual strength and controlled cracking. There were fewer and narrower cracks in the FRC even though the FRC had more shrinkage than the control. Residual strength is directly proportional to the fiber content. The use of natural fibers has become a topic of research mainly due to the expected positive environmental impact, recyclability, and economy. The degradation of natural fibers and particles in a cement matrix is a concern. Some studies were performed using waste carpet fibers in concrete as an environmentally friendly use of recycled carpet waste. A carpet typically consists of two layers of backing (usually fabric from polypropylene tape yarns), joined by CaCO3 filled styrene-butadiene latex rubber (SBR), and face fibers (majority being nylon 6 and nylon 66 textured yarns). Such nylon and polypropylene fibers can be used for concrete reinforcement. Other ideas are emerging to use recycled materials as fibers: recycled polyethylene terephthalate (PET) fiber, for example. Standards International The following are several international standards for fiber-reinforced concrete: BS EN 14889-1:2006 – Fibres for Concrete. Steel Fibres. Definitions, specifications & conformity BS EN 14845-1:2007 – Test methods for fibres in concrete ASTM A820-16 – Standard Specification for Fiber-Reinforced Concrete (superseded) ASTM C1116/C1116M - Standard Specification for Fiber-Reinforced Concrete ASTM C1018-97 – Standard Test Method for Flexural Toughness and First-Crack Strength of Fiber-Reinforced Concrete (Using Beam With Third-Point Loading) (Withdrawn 2006) Canada CSA A23.1-19 Annex U - Ultra High Performance Concrete (with and without Fiber Reinforcement) CSA S6-19, 8.1 - Design Guideline for Ultra High Performance Concrete See also Fiber-reinforced plastic Glass-reinforced plastic Reinforced concrete Steel fibre-reinforced shotcrete Textile-reinforced concrete References Citations Books Composite materials Reinforced concrete Glass applications Building materials Fibre-reinforced cementitious materials
Fiber-reinforced concrete
[ "Physics", "Engineering" ]
1,795
[ "Building engineering", "Composite materials", "Construction", "Materials", "Building materials", "Matter", "Architecture" ]
7,210,481
https://en.wikipedia.org/wiki/Ernest%20Earl%20Lockhart
Ernest Earl Lockhart (September 10, 1912 – July 26, 2006) was a chemist and explorer. Early life and education Ernest Earl Lockhart was born in Boston, Massachusetts (USA) on September 10, 1912. He grew up in the Hyde Park section of Boston; Lockhart was the youngest of three children of Clinton Daniel Lockhart and Celeste Althea Westhaver, who both emigrated from Nova Scotia, Canada. E.E. Lockhart was educated in the Boston public schools, at the Chauncy Hall School, and then at Massachusetts Institute of Technology, where he earned three degrees, culminating with a PhD in biochemistry in 1938. Career and achievements Following a year of study on fellowship at the Biochemical Institute in Stockholm, Sweden, E.E. Lockhart served as the physiologist on Rear Admiral Richard Evelyn Byrd’s United States Antarctic Service Expedition of 1939-1941 to the South Pole. For this service he received a special medal authorized by the Congress of the United States. A memorable experience on the expedition was a four-month, 400-mile field trip by dog team. He was the radio operator for his four-man party. Mount Lockhart, a mountain in the Fosdick range, is named after him. Upon his return home, E.E. Lockhart began a career of research and teaching at M.I.T. in the field of food technology and nutrition. In 1955 he left M.I.T. to become research director at the Coffee Brewing Institute, a trade organization located in New York City. In 1965, he became assistant research director of the Coca Cola Company in Atlanta, Georgia, where he lived until his retirement in 1978. Earl was a co-founder of the International Life Sciences Institute, a worldwide foundation that seeks to improve the well-being of the general public through the advancement of science. Later life and death Ernest Earl Lockhart retired to his Cape Cod home in West Dennis, Massachusetts, where he and his wife Helen lived for his remaining twenty-seven years. He died at his home on July 26, 2006, at the age of 93. A family ceremony in his honor was held on the nearby Bass River, on September 10, 2006, the ninety-fourth anniversary of his birth. References Mount Lockhart The Tech, Tuesday October 18, 1949 Time magazine, Monday, Dec. 1, 1941 Biochemical Journal, Volume 33, part 4, April 1939 1912 births 2006 deaths American biochemists American food chemists People from Hyde Park, Boston People from Barnstable County, Massachusetts Massachusetts Institute of Technology School of Science alumni Chapel Hill – Chauncy Hall School alumni
Ernest Earl Lockhart
[ "Chemistry" ]
538
[ "Food chemists", "American food chemists" ]
7,210,758
https://en.wikipedia.org/wiki/Integral%20cryptanalysis
In cryptography, integral cryptanalysis is a cryptanalytic attack that is particularly applicable to block ciphers based on substitution–permutation networks. It was originally designed by Lars Knudsen as a dedicated attack against Square, so it is commonly known as the Square attack. It was also extended to a few other ciphers related to Square: CRYPTON, Rijndael, and SHARK. Stefan Lucks generalized the attack to what he called a saturation attack and used it to attack Twofish, which is not at all similar to Square, having a radically different Feistel network structure. Forms of integral cryptanalysis have since been applied to a variety of ciphers, including Hierocrypt, IDEA, Camellia, Skipjack, MISTY1, MISTY2, SAFER++, KHAZAD, and FOX (now called IDEA NXT). Unlike differential cryptanalysis, which uses pairs of chosen plaintexts with a fixed XOR difference, integral cryptanalysis uses sets or even multisets of chosen plaintexts of which part is held constant, and another part varies through all possibilities. For example, an attack might use 256 chosen plaintexts that have all but 8 of their bits the same, but all differ in those 8 bits. Such a set necessarily has an XOR sum of 0, and the XOR sums of the corresponding sets of ciphertexts provide information about the cipher's operation. This contrast between the differences of pairs of texts and the sums of larger sets of texts inspired the name "integral cryptanalysis", borrowing the terminology of calculus. References Cryptographic attacks
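The 256-plaintext example above works because every value 0–255 appears exactly once in the varying byte, so each bit position is set an even number of times and the XOR sum of the set vanishes; a minimal sketch follows, in which the fixed plaintext bytes are arbitrary.

```python
# Minimal sketch of the structured plaintext set described above: 256 plaintexts
# identical except for one byte that takes every value 0..255. The XOR of the
# whole set is zero in every bit position. The fixed bytes are arbitrary.

from functools import reduce

FIXED = bytes.fromhex("00112233445566")  # 7 arbitrary constant bytes
plaintexts = [FIXED + bytes([v]) for v in range(256)]  # final byte runs 0..255

def xor_sum(blocks):
    """Byte-wise XOR of a list of equal-length byte strings."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

print(xor_sum(plaintexts).hex())  # -> '0000000000000000'
assert xor_sum(plaintexts) == bytes(8)
```

An attacker applies the same XOR-sum check to the corresponding ciphertexts (or to intermediate values after partial decryption with a guessed key) to distinguish the cipher from a random permutation.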
Integral cryptanalysis
[ "Technology" ]
331
[ "Cryptographic attacks", "Computer security exploits" ]
7,210,869
https://en.wikipedia.org/wiki/B%C3%A9ton%20brut
Béton brut () is architectural concrete that is left unfinished after being cast, displaying the patterns and seams imprinted on it by the formwork. Béton brut is not a material itself, but rather a way of using concrete. The term comes from French and means "raw concrete". History The use of béton brut was pioneered by modernist architects such as Auguste Perret and Le Corbusier. Le Corbusier coined the term béton brut during the construction of Unité d'Habitation in Marseille, France built in 1952. The term began to spread widely after the British architectural critic Reyner Banham associated it with Brutalism in his 1966 book, The New Brutalism: Ethic or Aesthetic?, which characterized a recent cluster of new architectural designs, particularly in Europe. Béton brut became popular among modern architects, leading to the appreciation of the brutalist architecture style, which thrived in the 1950s–1970s. Brutalism stems from the philosophies of modern architecture that promote the truth to materials, which is achieved by their raw expression. The essence of the philosophy is seen in the imperfections of béton brut which stem from the idea to create an aesthetic based on the exposure of a building's components, including the frame, sheathing, and mechanical systems. The result is the visibility of the imprinted seams and construction methods of the formwork used to mold the concrete. This style of concrete is a part of structural expressionism, which emerged as steel structures became more advanced and viable. Fabrication After being cast, concrete will usually have a finishing treatment that smooths its surface, ridding it of any imperfections. In the case of béton brut, the concrete is left unfinished, expressing the pattern left by the formwork. Formwork is used in concrete construction as the frame for a structure in which fresh concrete is poured to then harden and take on the desired shape. Aesthetic of concrete surfaces can be varied with different formwork sheathing (e.g. board shuttering, smooth formwork, form liner, form moulds, filter fleeces). The type of material used to create the formwork (i.e. glass, wood, steel etc.) will have effects on the appearance of the final product. When Corbusier coined the term, he was specifically responding to board-marked concrete, which he used to construct many of his post-World War II buildings. When the formwork is lined with wood it is called board form. When lumber is used to create the formwork, the concrete picks up the grain structure as it sets, resulting in a texture on the poured concrete that resembles the wood. It is important to use the same type of wood throughout the job, especially on larger buildings where the molds may get repeated uses, because the lumber can absorb moisture, which may possibly affect the color of the concrete. Other raw patterns can be created by using textured metal formwork, or having the aggregate bush or pick hammered. Wood-imprinted concrete is still popular in landscaping, especially in some western European countries. Surface processing techniques (e.g., washed concrete surfaces, photo concrete, acidified surfaces) can also be used to create the aesthetic of béton brut. Particularly high-quality poured concrete, achieved by leaving enough room between the formwork and the reinforcing bars for the concrete to flow freely, is called Sichtbeton in German and cemento a vista in Italian. Both terms translate roughly to "concrete for viewing". 
Examples Church of Notre Dame du Raincy (1922–23) by Auguste Perret Unité d'Habitation Habitat 67, by Moshe Safdie, Montreal, Canada Reinanzaka House (1924) by Antonin Raymond University of Illinois at Chicago (East side of campus designed by Walter Netsch of Skidmore, Owings & Merrill) The Evergreen State College Rudolph Hall, The Yale School of Architecture, Yale University, New Haven, CT Sainte-Bernadette-du-Banlay church, Nevers, France, architect Claude Parent Boston City Hall, Boston, MA Royal National Theatre, London at University of Malaya the Main Entrance to the War Memorial Complex, Brest Fortress the Ilinden Memorial in North Macedonia University of Massachusetts Dartmouth (Designed by Paul Rudolph) See also Brutalist architecture Truth to materials References External links Examples of use in brutalist buildings in Ontario Concrete +Beton brut
Béton brut
[ "Engineering" ]
909
[ "Structural engineering", "Concrete" ]
7,210,930
https://en.wikipedia.org/wiki/Embodied%20imagination
Embodied imagination is a therapeutic and creative form of working with dreams and memories pioneered by Dutch Jungian psychoanalyst Robert Bosnak and based on principles first developed by Swiss psychiatrist Carl Jung, especially in his work on alchemy, and on the work of American archetypal psychologist James Hillman, who focused on soul as a simultaneous multiplicity of autonomous states. Technique The technique of embodied imagination takes dreaming as the paradigm for all work with images. While dreaming, everyone experiences dreams as embodied events in time and space; that is, the dreamer is convinced that he or she is experiencing a real event in a real environment. Bosnak describes how a dream "instantaneously presents a total world, so real that you are convinced you are awake. You don't just think so, you know it in the same way you now know you are awake reading this book." So from the perspective of dreaming, the image is a place. Based on this notion, the dreamer can re-enter the landscape of the dream and flash back into its images to more fully and deeply explore and experience them. The dreamer explores the images of the dream while in a hypnagogic state, a state of consciousness between waking and sleeping. While in this state, the dreamer is asked a series of questions that help him or her to re-experience the dream by describing details of its landscape and image. Once fully immersed in the images that the dream environment presents, the dreamer is then also invited to feel and identify the feelings and sensations manifested in the body from a variety of dream perspectives. Perspectives explored are both that of the dream ego as well any "others" that appear in the dream. These "others" may be, for example, another person, an animal, or a physical object. Approaching dream figures in this way is consistent with archetypal psychologist James Hillman's prescription for therapeutic work in regard to the phenomena of psychic multiplicity. Drawing upon Carl Jung's realization that "the ego complex is not the only complex in the psyche," Hillman described the psyche to be not a singular unified whole defined by the ego point of view, but rather a self-organizing multiplicity of autonomous selves. In the technique of embodied imagination, for each of these "selves" or "states" representing various perspectives, the dreamer then feels, identifies, and locates the feelings and sensations in his or her body. At the conclusion of the dreamwork session, the dreamer simultaneously holds in conscious awareness these differentiated and complex states of embodied feeling and sensation. The act of holding these multiple disparate states at the same time creates a psychical tension from which a completely new image or feeling state spontaneously emerges from the dreamer's psyche. This new image or state presents a completely new and previously unknown awareness to the dreamer, one through which the dreamer often feels changed, transformed, or greatly expanded in the ability to embody and feel intensely. Professional societies On November 3, 2006, the International Society for Embodied Imagination was founded at a conference in Guangzhou. See also Alchemy Archetypal psychology Contemporary dream interpretation James Hillman Polytheistic myth as psychology References Notes Further reading Bosnak, Robert. (October 2003). Embodied imagination, Journal of Contemporary Psychoanalysis, Volume 39, Number 4. Bosnak, Robert. (2007). Embodiment: Creative imagination in medicine, art and travel. 
London: Routledge. Bosnak, Robert. (Spring 2006). Sulphur dreaming. Spring: A Journal of Archetype and Culture, Volume 74, pp. 91–106. Bromberg, Philip M. (October 2003). On being one's dream: Some reflections on Robert Bosnak's "Embodied imagination." Journal of Contemporary Psychoanalysis, Volume 39, Number 4. Corbin, Henry. (1972). Mundus imaginalis, or the Imaginary and the Imaginal (Ruth Horine, Trans.). Spring: An Annual of Archetypal Psychology and Jungian Thought, pp. 1–19. Hillman, James. (1975). Re-visioning psychology. New York: Harper and Row. Schwartz-Salant, Nathan, Ed. (1995). Jung on alchemy, Princeton, NJ: Princeton University Press. Sonenberg, Janet. (2003). Dreamwork for actors. London/New York: Routledge. White, Judy and Jill Fischer. Embodied Imagination® in Barrett, Diedre and McNamara, Patrick, editors. Encyclopedia of Sleep and Dreams [2 volumes]: The Evolution, Function, Nature, and Mysteries of Slumber, Greenwood, 2012. External links Embodied imagination in the work with dreams and memories, Video lectures by Robert Bosnak The International Association for the Study of Dreams Dream Personal life Psychotherapy by type Acting techniques Analytical psychology Carl Jung Alchemy Memory
Embodied imagination
[ "Biology" ]
1,000
[ "Dream", "Behavior", "Sleep" ]
7,211,500
https://en.wikipedia.org/wiki/Species%20Survival%20Plan
The American Species Survival Plan or SSP program was developed in 1981 by the (American) Association of Zoos and Aquariums to help ensure the survival of selected species in zoos and aquariums, most of which are threatened or endangered in the wild. SSP program SSP programs focus on animals that are near threatened, threatened, endangered, or otherwise in danger of extinction in the wild, when zoo and zoology conservationists believe captive breeding programs will aid in their chances of survival. These programs help maintain healthy and genetically diverse animal populations within the Association of Zoos and Aquariums-accredited zoo community. AZA accredited zoos and AZA conservation partners that are involved in SSP programs engage in cooperative population management and conservation efforts that include research, conservation genetics, public education, reintroduction, and in situ or field conservation projects. The process for selecting recommended species is guided by Taxon Advisory Groups, whose sole objective is to curate Regional Collection Plans for the conservation needs of a species and how AZA institutions will cooperate to reach those needs. Today, there are almost 300 existing SSP programs. The SSP has been met with widespread success in ensuring that, should a species population become functionally extinct in its natural habitat, a viable population still exists within a zoological setting. This has also led to AZA species reintroduction programs, examples of which include the black-footed ferret, the California condor, the northern riffleshell, the golden lion tamarin, the Karner blue butterfly, the Oregon spotted frog, the palila finch, the red wolf, and the Wyoming toad. SSP master plan An SSP master plan is a document produced by the SSP coordinator (generally a zoo professional under the guidance of an elected management committee) for a certain species. This document sets ex situ population goals and other management recommendations to achieve the maximum genetic diversity and demographic stability for a species, given transfer and space constraints. See also European Endangered Species Programme List of SSP programs As of 2023, there are 290 species that are a part of the Species Survival Plan program. 
Aardvark Addax Agouti, Brazilian Alligator, Chinese Anteater, Giant Aracari, Curl-Crested Aracari, Green Argus, Great Armadillo, Screaming Armadillo, Six-banded Armadillo, Southern three-banded Baboon, Hamadryas Barbet, Red-and-yellow Bat, Egyptian fruit Bat, Rodrigues fruit Bat, Straw-colored fruit Bear, Andean spectacled Bear, Sloth Beaver, American Binturong Bird-of-paradise, Raggiana Bluebird, Fairy Boa, Jamaican Bongo, Eastern Bonobo Bushbaby, Mohol Cacique, Yellow-rumped Callimico Capybara Cardinal, Red-capped Cassowary, Southern (double-wattled) Cat, Pallas Cat, Sand Cheetah Chimpanzee Chuckwalla, San Esteban Cobra, King Colobus, Angolan Colobus, Guereza Condor, Andean Coua, Crested Crane, Black crowned Crane, Demoiselle Crane, Grey crowned Crane, Red crowned Crane, Wattled Crane, White-naped Curassow, Blue-billed Curassow, Helmeted Deer, Western tufted Dikkop, Spotted Dog, African painted Dove, Beautiful fruit Dove, Black-naped fruit Dove, Luzon bleeding heart Dove, Mindanao bleeding heart Dragon, Komodo Duck, Spotted whistling Duck, West Indian whistling Duiker, Blue Duiker, Yellow-backed Elephant, African Elephant, Asian Flamingo, Caribbean Flamingo, Chilean Flamingo, Greater Flamingo, Lesser Fossa Fox, Bat-eared Fox, Fennec Fox, Swift Frog, Panamanian golden (ahogado) Frog, Panamanian Golden (sora) Frogmouth, Tawny Gazelle, Addra Gecko, Giant leaf-tailed Gecko, Henkel's leaf-tailed Gharial, Sunda Gibbon, Lar (white-handed) Gibbon, White-cheeked Giraffe, Generic Giraffe, Masai Goose, African pygmy Goose, Red-breasted Goose, Swan Gorilla, Western Lowland Guineafowl, Crested Hamerkop Heron, Boat-billed Hippopotamus, Pygmy Hippopotamus, River Hog, Red River Honeyeater, Blue-Faced Hornbill, Abyssinian Ground Hornbill, Red-Billed Hornbill, Rhinoceros Hornbill, Southern Ground Hornbill, Trumpeter Hornbill, Wrinkled Horse, Asian Wild Hwamei, Chinese Hyena, spotted Hyrax, Rock Ibis, African Sacred Ibis, Hadada Ibis, Scarlet Ibis, Waldrapp Iguana, Fiji Banded Iguana, Grand Cayman Blue Iguana, Jamaican Jaguar Jay, Plush Crested Kangaroo, Red Kangaroo, Western Gray Kookaburra, Laughing Kudu, Lesser Langur, Francois' Lapwing, Masked Lapwing, Spur-Winged Laughingthrush, White-Crested Leiothrix, Red-Billed Lemur, Black and White Ruffed Lemur, Collared Lemur, Mongoose Lemur, Red Ruffed Lemur, Ring-Tailed Leopard, Clouded Leopard, Snow Liocichla, Scarlet-Faced Lion Lizard, Caiman Lizard, Chinese crocodile Lizard, Rio Fuerte Beaded Loris, Pygmy Slow Lynx, Canada Macaque, Japanese Macaw, Blue-Throated Macaw, Hyacinth Macaw, Red-Fronted Magpie, Azure-Winged Mandrill Mara, Patagonian Marmoset, Geoffroy's Meerkat Merganser, Scaly-Sided Monitor, Black Tree Monkey, Bolivian Gray Titi Monkey, Common Squirrel Monkey, DeBrazza's Monkey, Mexican Spider Monkey, Robust Black Spider Monkey, Southern Black Howler Motmot, Blue-Crowned Muntjac, Reeves' Myna, Bali Ocelot Okapi Orangutan, Bornean Orangutan, Sumatran Oropendola, Crested Oryx, Scimitar-Horned Otter, Asian Small-Clawed Otter, North American River Owl, Burrowing Owl, Snowy Owl, Spectacled Panda, red (fulgens) Panda, red (refulgens) Partridge, Crested Wood Peccary, Chacoan Pelican, Pink-Backed Penguin, African Penguin, Chinstrap Penguin, Gentoo (ellsworthi) Penguin, Humboldt Penguin, King Penguin, Magellanic Penguin, Southern Rockhopper Pheasant, Palawan Peacock Pheasant, Vietnam Pigeon, Green-Naped Pheasant Pigeon, Nicobar Pigeon, Victoria Crowned Pochard, Baer's Porcupine, Cape Porcupine, Crested Porcupine, North American Porcupine, Prehensile-Tailed 
Pudu, Chilean (Southern) Puffin, Tufted Rattlesnake, Aruba Island Rattlesnake, Eastern Massasauga Rattlesnake, Santa Catalina Island Ray, Spotted Eagle Rhinoceros, Eastern Black Rhinoceros, Greater One-Horned Rhinoceros, Southern White Ringtail Roadrunner, Greater Roller, Blue-Bellied Saki, White-Faced Screamer, Southern Sea Lion, California Seahorse, Big Bellied Seahorse, Lined Seal, Harbor Seriema, Red-Legged Serval Shama, White-rumped Shark, Sand Tiger Shark, Zebra Siamang Skink, Prehensile-Tailed Sloth, Hoffman's Two-Toed Sloth, Linne's Two-Toed Snake, Eastern Indigo Spoonbill, African Spoonbill, Roseate Squirrel, Prevost's Starling, Emerald Starling, Golden-Breasted Starling, Violet-Backed (Amethyst) Stilt, Black-Necked Stingray, White-Blotched River Stork, Abdim's (White-Bellied) Stork, Marabou Stork, Saddle-Billed Stork, White Sunbittern Swan, Coscoroba Swan, Trumpeter Takin, Sichuan Tamandua, Southern Tamarin, Bearded Emperor Tamarin, Cotton-Top Tamarin, Golden Lion Tanager, Blue-Grey Tanager, Silver-Beaked Tanager, Turquoise Tapir, Malayan (Asian) Teal, Marbled Tenrec, Lesser Madagascar Hedgehog Tern, Inca Tiger, Amur Tiger, Malayan Tiger, Sumatran Tortoise, African Pancake Tortoise, Brown Forest Tortoise, Burmese Black Tortoise, Burmese Star Tortoise, Egyptian Tortoise, Galapagos (microphyes) Tortoise, Home's Hinge-back Tortoise, Madagascar Flat-Tailed Tortoise, Madagascar spider (Common) Tortoise, Madagascar spider (Northern) Tortoise, Radiated Toucan, Keel-Billed Toucan, Toco Tragopan, Cabot's Tree Kangaroo, Matschie's Tree Shrew, Northern Troupial Trumpeter, Grey-Winged Turaco, Great Blue Turaco, Lady Ross' Turaco, Red-Crested Turaco, Violaceous Turaco, White-Cheeked Turtle, Black-Breasted Leaf Turtle, Blanding's Turtle, Coahuilan Box Turtle, Indochinese Box Turtle, Malaysian Giant Turtle, McCord's Box Turtle, Pan's Box Turtle, Rote Island Snake-Necked Turtle, Spiny Turtle, Spotted Vulture, Cinereous Vulture, King Wallaby, Bennett's (Red-necked) Wallaby, Tammar Wallaroo, Common Warthog, Common Weaver, White-Headed Buffalo Wolf, Maned Woodhoopoe, Green Zebra, Grevy's Zebra, Hartmann's mountain Zebra, Plains Notes References External links AZA website Animal breeding organizations Wildlife conservation Zoology Zoos
Species Survival Plan
[ "Biology" ]
2,061
[ "Wildlife conservation", "Zoology", "Biodiversity" ]
7,211,838
https://en.wikipedia.org/wiki/Pop%20pop%20boat
A pop-pop boat (also known as a flash-steamer, hot-air-boat, or toc-toc after a German version from the 1920s) is a toy with a simple steam engine without moving parts, typically powered by a candle or vegetable oil burner. The name comes from the noise made by some versions of the boats. Initially patented in 1891, the concept has undergone a number of changes and subsequent patents. The engine consists of a boiler and one or more exhaust tubes, in which an oscillation of the water is established in the tubes to eject water out the exhaust tubes in pulses to propel the boat. History Credit for the first pop pop boat is usually given to a Frenchman named Thomas Piot. In 1891, Piot filed a patent application in the UK for a simple pop pop boat using a small boiler and two exhaust tubes. A 1975 article by Basil Harley mentions a similar boat seen in a French journal from 1880, indicating that this type of toy may have existed for many years prior to Piot's patent. In 1915, an American named Charles J. McHugh filed a patent application for the diaphragm type of engine, which was an improvement to Piot's design. In 1920, William Purcell filed a patent for the coiled tube type of engine. This type of engine has been common over the years in homemade pop pop boats, due to its simplicity of construction. The Cub Scout book (published by the Boy Scouts of America) contained a project called a "Jet Boat" for many years. This project used a coil type of engine based on Purcell's design which was placed in a wooden hull. Many commercial pop pop boats have also used this type of engine, due to its low cost. McHugh filed for another patent in 1926. This was again a diaphragm engine design, refined so that it could be more easily fabricated commercially. In 1934, Paul Jones filed a patent for another diaphragm design which could be produced industrially from simple stamped parts. Many pop pop boats produced in the 1920s had a single exhaust pipe. Designs using two exhaust pipes are easier to fill, and have been much more common over the years. Pop pop boats were popular for many years, especially in the 1940s and 1950s. Pop pop boats declined in popularity along with other tin toys in the latter half of the 20th century as plastic toys took over much of the market. While they are no longer produced in such large numbers, pop pop boats continue to be produced. These toys have come in many varieties over the years. Some have been simple and inexpensive, while others have been much more ornate and artistic. As with many toys, pop pop boats are often sought by collectors, and the prices paid vary depending on rarity and design. Design and construction A pop pop boat is powered by a simple heat engine. This engine, sometimes referred to as a pulsating water engine, consists of a boiler and one or more exhaust tubes. A heating element of some sort is placed under the boiler. Candles or small oil burners are commonly used. While a single exhaust tube may be used, two exhaust tubes are much more commonly used. This is because the boiler and the exhaust tubes have to be filled with water, and using two tubes allows water to be injected into one tube while air inside the engine escapes through the other tube. The boiler and exhaust tubes are usually made out of metal, with tin or copper being common. When heat is applied to the boiler, water in the boiler evaporates, producing steam. 
The expanding steam is suddenly pushed out of the boiler, making a "pop" sound, and pushes some of the water out of the exhaust tube, propelling the boat forward. The boiler is now dry, and cannot, therefore, generate any more steam. The momentum of the column of water in the exhaust tube keeps it moving outward, so that the pressure inside the boiler drops below atmospheric pressure. In the case of a diaphragm type engine, the boiler also bulges inward at this point, also making a popping sound. The pressure outside the boiler now forces water back into the boiler. This water then boils and the cycle repeats. The popping noise is more pronounced when a diaphragm-type boiler is used: coil-type boilers are much quieter. Any air in the boiler can act as a spring and support the oscillation of the water, but if too much air enters the boiler, the oscillation stops because all the water has been displaced, and no steam can be generated. Water contains some dissolved air, which can build up in the engine during operation. Therefore, engines must "burp" out air periodically in order to run for a long time. In pop pop boats with two exhaust tubes, the water is expelled from both tubes during the first phase of the cycle, and drawn in from both tubes during the second phase of the cycle. The water does not circulate in through one tube and out through the other. The internal-combustion analog of the pop pop boat engine is the valveless pulse jet. Commercial pop pop boats have usually been made out of tinplate. The hull of the boat may be made out of any material that floats. Homemade pop pop boats are often made out of wood. Boiler designs vary. Simple metal containers in the shape of a box or cylinder are common. A more efficient boiler can be made by using a metal pan whose top is a slightly concave diaphragm made out of a thin, springy metal. Many pop pop boats have used a single tube of metal, which is formed into a coil in its center and left straight on both ends to form the exhausts. The coil in this version functions as the boiler. Principle of operation The operation of the pop pop boat may seem surprising, since one might expect that if water is going in and out through the exhaust tube, the boat should merely shake back and forth. But while the water pushed out carries away with it momentum, which must be balanced (by Newton's third law) by an opposite momentum on the part of the boat, the water sucked in quickly impinges on the boiler tank and transfers its momentum to the boat. The initial reaction force on the boat (which would pull it backwards) is therefore cancelled by the pushing of the water when it hits the inside of the boiler. The result is that the inflow of water causes no appreciable force on the boat. Some authors have argued that the reason why the pop pop boat works is that the water being propelled out the back of the boat forms a narrow jet, while the water being drawn back in on the second half of the cycle is drawn in from all directions. This asymmetry may be seen as well in the way in which one blows out a candle: it is easy to extinguish a candle by blowing on it, since all of the air expelled is moving in a concentrated, directional jet. However, it is difficult to put out the flame by sucking in air, the air being sucked in coming from all directions. This observation, though correct, may be misleading as an explanation of why the pop pop boat moves forward. 
The asymmetry of the shapes of the inflow and the outflow is a consequence of the viscosity of water, whereas the boat would be able to operate in an ideal fluid. Furthermore, as they pass through the exhausts, the inflowing and the outflowing water carry the same momentum (in opposite directions), relative to the boat. The important difference is that the momentum of the outflow is expelled, whereas the momentum of the inflow is soon transferred to the boat. The sucking/blowing asymmetry does make the boat more efficient, even if it's not the principle on which it operates. The physics of the operation of the pop pop boat is similar to that of the Feynman sprinkler, a submerged sprinkler which is seen to turn weakly or not at all as water is sucked in through it. In both cases, the reaction force on the solid device caused by the sucking in of the fluid is balanced by the fluid impinging on the inside of the device. Cultural impact The pop pop boat featured prominently in the 2008 Japanese animated fantasy film Ponyo. Toy boats with a diaphragm type engine, like the one shown in the film, were produced and sold as a tie-in when the movie was released. See also References External links The Pop Pop Pages including extensive references and vendors 4Physics.com description Pop pop boat information, including history and operating instructions Steamboats Boilers Powered toys Steam engines Jet engines French inventions
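The momentum argument above lends itself to a rough, order-of-magnitude estimate of the thrust. The short Python sketch below is not from the source; the tube diameter, peak water speed and tube count are all illustrative assumptions. It simply credits the outgoing jet with a momentum flux of rho * A * v^2 and, per the discussion above, assigns no net force to the intake half of the cycle.

import math

# Order-of-magnitude sketch of pop pop boat thrust, following the momentum
# argument in the text: outgoing water leaves as a directed jet (momentum flux
# rho * A * v^2), while incoming water gives its momentum back to the boiler,
# so the intake half of the cycle contributes almost no net force.
# All numbers are illustrative assumptions, not measurements.

RHO_WATER = 1000.0      # kg/m^3
TUBE_DIAMETER = 0.003   # m, assumed inner diameter of each exhaust tube
PEAK_SPEED = 0.6        # m/s, assumed peak water speed in the tube
N_TUBES = 2             # the common two-tube layout described above

tube_area = math.pi * (TUBE_DIAMETER / 2.0) ** 2

# For a sinusoidal oscillation u(t) = u0*sin(2*pi*f*t), thrust rho*A*u(t)^2 is
# produced only during the outflow half-cycle; averaged over a full cycle this
# gives rho*A*u0^2/4 per tube (independent of the oscillation frequency).
mean_thrust = N_TUBES * RHO_WATER * tube_area * PEAK_SPEED ** 2 / 4.0

print(f"tube area  : {tube_area * 1e6:.2f} mm^2")
print(f"mean thrust: {mean_thrust * 1e3:.2f} mN")

With these assumed numbers the thrust comes out around a millinewton, which is in line with the gentle push such toys produce; the point of the sketch is only to show where the blow/suck asymmetry enters the estimate, not to model a real boat.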
Pop pop boat
[ "Physics", "Chemistry", "Technology" ]
1,771
[ "Physical quantities", "Engines", "Powered toys", "Power (physics)", "Jet engines", "Boilers", "Pressure vessels" ]
7,212,817
https://en.wikipedia.org/wiki/Crystallization%20adjutant
A crystallization adjutant is a material used to promote crystallization, normally in a context where a material does not crystallize naturally from a pure solution. Additives in Macromolecular Crystallization In macromolecular crystallography, the term additive is used instead of adjutant. An additive can either interact directly with the protein and become incorporated at a fixed position in the resulting crystal, or have a role within the disordered solvent, which in protein crystals constitutes roughly 50% of the lattice volume. Polyethylene glycols of various molecular weights and high-ionic strength salts such as ammonium sulfate and sodium citrate that induce protein precipitation when used in high concentrations are classified as precipitants, while certain other salts such as zinc sulfate or calcium sulfate that may cause a protein to precipitate vigorously even when used in small amounts are considered adjutants. Crystallization adjutants are considered additives when they are effective at relatively low concentrations. The distinction between buffers and adjutants is also fuzzy. Buffer molecules can become part of the lattice (for example, HEPES becomes incorporated in crystals of human neutrophil collagenase) but their main use is to maintain the rather precise pH requirements for crystallization that many proteins have. Commonly used buffers such as citrate have a high ionic strength, and at the typical buffer concentrations they also act as precipitants. Various species such as Ca2+ and Zn2+ are a biological requirement for certain proteins to fold correctly, and certain co-factors are needed to maintain a well-defined conformation. Certain strategies, like replacing precipitants and buffers with others intended to have a similar effect, have been used to differentiate between the roles played in protein crystallization by the various components in the crystallization solution. Additives for Membrane Protein Crystallization For membrane proteins, the situation is more complicated because the system that is being crystallized is not the membrane protein itself but the micellar system in which the membrane protein is embedded. The size of the protein-detergent mixed micelles is affected by both additives and detergents, which will strongly influence the crystals obtained. In addition to varying the concentration of primary detergents, additives (lipids and alcohols) and secondary detergents can be used to modulate the size and shape of the detergent micelles. By reducing the size of the mixed micelles, lattice-forming protein-protein contacts are encouraged. Lipid cubic phases, spontaneous self-assembling liquid crystals or lipid mesophases have been used successfully in the crystallization of integral membrane proteins. Temperature, salts, detergents and various additives are used in this system to tailor the cubic phase to suit the target protein. Typical detergents used are n-dodecyl-β-d-maltopyranoside, n-decyl-β-d-glucopyranoside, lauryldimethylamine oxide LDAO, n-hexyl-β-d-glucopyranoside, n-nonyl-β-d-glucopyranoside and n-octyl-β-d-glucopyranoside; the various lipids are dioleoyl phosphatidylcholine, dioleoyl phosphatidylethanolamine and monoolein. References External links A list of adjutants from a German Crystallography laboratory The 'Jeffamine' group of compounds, a number of which are commonly used adjutants Crystallography
Crystallization adjutant
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
732
[ "Crystallography", "Condensed matter physics", "Materials science" ]
7,212,867
https://en.wikipedia.org/wiki/Ohio%20Supercomputer%20Center
The Ohio Supercomputer Center (OSC) is a supercomputer facility located on the western end of the Ohio State University campus, just north of Columbus. Established in 1987, the OSC partners with Ohio universities, labs and industries, providing students and researchers with high performance computing, advanced cyberinfrastructure, research and computational science education services. OSC is a member organization of the Ohio Technology Consortium, the technology and information division of the Ohio Department of Higher Education. OSC works with an array of statewide/regional/national communities, including education, academic research, industry, and state government. The Center's research programs are primarily aligned with three of several key areas of research identified by the state to be well positioned for growth and success, such as the biosciences, advanced materials and energy/environment. OSC is funded through the Ohio Department of Higher Education by the state operating and capital budgets of the Ohio General Assembly. History OSC was established by the Ohio Board of Regents (now the Ohio Department of Higher Education) in 1987 as a statewide resource designated to place Ohio's research universities and private industry in the forefront of computational research. Also in 1987, the OSC networking initiative — known today as OARnet — provided the first network access to the Center’s first Cray supercomputer. In 1988, OSC launched the Center’s Industrial Interface Program to serve businesses interested in accessing the supercomputer. Battelle Memorial Institute, located just south of Ohio State, became OSC’s first industrial user. Today, the Center continues to offer HPC services to researchers in industry, primarily through its AweSim industrial engagement program. In the summer of 1989, 20 talented high school students attended the first Governor’s Summer Institute. Today, OSC offers summer STEM education programs through Summer Institute and Young Women's Summer Institute, which began in 2000. Later in the fall of 1989, OSC engineers installed a $22 million Cray Y-MP8/864 system, which was deemed the largest and fastest supercomputer in the world for a short time. The seven-ton system was able to calculate 200 times faster than many mainframes at that time. Directors of the Center: William McCurdy, Ph.D., OSC Acting Director, 1986–87 Charles Bender, Ph.D., OSC Executive Director, 1987-2002 Al Stutz, OSC Acting Director, 2001 Russell Pitzer, Ph.D., OSC Interim Director, 2001-2003 Stanley Ahalt, Ph.D., OSC Executive Director, 2003-2009 Ashok Krishnamurthy, Ph.D., OSC Interim Co-executive Director, 2009-2012 Steven Gordon, Ph.D., OSC Interim Co-executive Director, 2009-2012 Pankaj Shah, OSC Executive Director, 2012-2015 David Hudak, Ph.D., OSC Interim Executive Director, 2015-2018, Executive Director 2018- Systems Production systems (Mar.
2022) include: Pitzer Cluster (installed 2019): A 10,240-core Dell Intel Gold 6148 machine + 19,104-core Dual Intel Xeon 8268 machine 224 nodes have 40 cores per node and 192 GB of memory per node 340 nodes have 48 cores per node and 192 GB of memory per node 32 nodes have 40 cores, 384 GB of memory, and 2 NVIDIA Volta V100 GPUs 42 nodes have 48 cores, 384 GB of memory, and 2 NVIDIA Volta V100 GPUs 4 nodes have 48 cores, 768 GB of memory, and 4 NVIDIA Volta V100s w/32GB GPU memory and NVLink 4 nodes have 80 cores and 3.0 TB of memory for large Symmetric Multiprocessing (SMP) style jobs Theoretical system peak performance of 3940 teraflops (CPU only) Owens Cluster (installed 2016): A 23,392-core Dell Intel Xeon E5-2680 v4 machine 648 nodes have 28 cores per node and 128 GB of memory per node 16 nodes have 48 cores and 1.5 TB of memory for large Symmetric Multiprocessing (SMP) style jobs 160 nodes have 28 cores, 128 GB of memory, and 1 NVIDIA Tesla P100 GPU Theoretical system peak performance 750 teraflops (CPU only) Storage Systems 5.3 Petabytes of storage 5.5 Petabytes of tape backup References External links Ohio Supercomputer Center website Ohio Technology Consortium website Information technology organizations based in North America Buildings and structures in Columbus, Ohio Science and technology in the United States Supercomputer sites Supercomputers
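As a worked-arithmetic aside, a "theoretical system peak performance (CPU only)" figure of the kind quoted above is normally obtained by multiplying node count, cores per node, clock frequency and double-precision floating-point operations per core per cycle, then summing over node types. The sketch below is illustrative only: the 2.5 GHz clock and 32 FLOPs-per-cycle values are assumptions rather than figures from the article, and only two of the Pitzer node groups are included, so it will not reproduce the quoted totals.

# Illustrative sketch of how a CPU-only theoretical peak figure is computed.
# Clock frequency and FLOPs-per-cycle below are assumed values, not taken from
# the article, and only two node groups are included for brevity.

def peak_tflops(nodes: int, cores_per_node: int, clock_ghz: float,
                flops_per_cycle: int) -> float:
    """Peak double-precision teraflops for one homogeneous group of nodes."""
    return nodes * cores_per_node * clock_ghz * flops_per_cycle / 1000.0

node_groups = [
    # (nodes, cores per node, assumed clock in GHz, assumed DP FLOPs/cycle)
    (224, 40, 2.5, 32),   # resembling the 40-core Pitzer nodes listed above
    (340, 48, 2.5, 32),   # resembling the 48-core Pitzer nodes listed above
]

total = sum(peak_tflops(*group) for group in node_groups)
print(f"assumed CPU-only peak for these two groups: {total:.0f} TFLOPS")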
Ohio Supercomputer Center
[ "Technology" ]
972
[ "Supercomputers", "Supercomputing" ]
7,212,941
https://en.wikipedia.org/wiki/Fritz%20Leonhardt
Fritz Leonhardt (12 July 1909 – 30 December 1999) was a German structural engineer who made major contributions to 20th-century bridge engineering, especially in the development of cable-stayed bridges. His book Bridges: Aesthetics and Design is well known throughout the bridge engineering community. Biography Born in Stuttgart in 1909, Leonhardt studied at Stuttgart University and Purdue University. In 1934 he joined the German Highway Administration, working with Paul Bonatz amongst others. He was appointed at the remarkably young age of 28 as the Chief Engineer for the Cologne-Rodenkirchen Bridge. In 1954 he formed the consulting firm Leonhardt und Andrä, and from 1958 to 1974 taught the design of reinforced concrete and prestressed concrete at Stuttgart University. He was President of the University from 1967 to 1969. He received Honorary Doctorates from six universities, honorary membership of several important engineering universities, and won a number of prizes including the Werner von Siemens Ring, the Honorary Medal Emil Mörsch, the Freyssinet Medal of the FIP, and the Gold Medal of the Institution of Structural Engineers. In 1988, he was awarded an Honorary Degree (Doctor of Science) by the University of Bath. Throughout his career, Leonhardt was as dedicated to research as to design, and his major contributions to bridge engineering technology included: development of a launching system for prestressed concrete bridges, first used in his 1963 bridge over the Caroní River in Ciudad Guayana, Venezuela the 'Hi-Am' anchor for cable stays, in collaboration with the Swiss firm B.B.R.V. anchorages in prestressed concrete experiments during the 1930s on steel orthotropic decks. Major works His major structures include the Cologne-Rodenkirchen Bridge, Stuttgart Television Tower, Hamburg's Alster-Schwimmhalle and various cable-stayed bridges in Düsseldorf. He also worked on the design of several cable-stayed bridges abroad, including the Pasco-Kennewick bridge (1978) in the U.S., and the Helgeland Bridge (1981) in Norway. Fritz Leonhardt Prize This prize was established in 1999 on the 90th anniversary of Leonhardt's birth, to recognise outstanding achievements in structural engineering. The first prize was awarded to Michel Virlogeux. Subsequent winners have included Jörg Schlaich (2002), René Walter (2005), and William F. Baker (engineer) (2009). Bibliography Brücken / Bridges (4th edition), Deutsche Verlags-Anstalt, Stuttgart (Germany), , 1994 (first published 1982). Ponts/Puentes, Presses polytechniques et universitaires romandes, Lausanne (Switzerland), , 1986. Notes External links Fritz Leonhardt Symposium 2009 – University of Stuttgart Bridge Design and Engineering: Fritz Leonhardt, Master of Bridges Structures of Leonhardt, Andrä and Partners IStructE Gold Medal winners Bridge engineers German civil engineers Structural engineers 1909 births 1999 deaths Werner von Siemens Ring laureates Commanders Crosses of the Order of Merit of the Federal Republic of Germany Recipients of the Order of Merit of Baden-Württemberg 20th-century German engineers Engineers from Stuttgart
Fritz Leonhardt
[ "Engineering" ]
643
[ "Structural engineering", "Structural engineers" ]
1,546,501
https://en.wikipedia.org/wiki/Doxylamine
Doxylamine is an antihistamine medication used to treat insomnia and allergies, and—in combination with pyridoxine (vitamin B6)—to treat morning sickness in pregnant women. It is available over-the-counter and is typically sold under such brand names as Equate or Unisom, among others; and it is used in nighttime cold medicines (e.g., NyQuil) and pain medications containing acetaminophen and/or codeine to help with sleep. The medication is supplied as the salt doxylamine succinate and is taken by mouth. Doxylamine and other first-generation antihistamines are the most widely used sleep medications in the world. Typical side effects of doxylamine (at recommended doses) include dizziness, drowsiness, grogginess, and dry mouth, among others. As an antihistamine, doxylamine is an inverse agonist of the histamine H1 receptor. As a first-generation antihistamine, it typically crosses the blood–brain barrier into the brain, thereby producing a suite of sedative and hypnotic effects that are mediated by the central nervous system. (N.b.: An agonist is a molecule that activates certain receptors (i.e., specific cellular proteins) in a cell to produce a specific pharmacological response, causing the cell to modify its activity—while an inverse agonist targets the same receptors as those of a given agonist, but causes a response opposite to that caused by the agonist. An antagonist blocks the action of a given agonist.) Doxylamine is also a potent anticholinergic, meaning that it causes delirium at high doses—i.e., at much higher doses than recommended. (Specifically it is an antagonist of the muscarinic acetylcholine receptors M1 through M5.) These sedative and deliriant effects have in some cases led to using the drug recreationally. Doxylamine was first described in 1948 or 1949. Medical uses Doxylamine is an antihistamine used to treat sneezing, runny nose, watery eyes, hives, skin rash, itching, and other cold or allergy symptoms. It is also used as a short-term treatment for insomnia. Insomnia The first-generation sedating antihistamines diphenhydramine, doxepin, doxylamine, and pyrilamine are the most widely used medications in the world for preventing and treating insomnia. As of 2004, doxylamine and diphenhydramine, which are both over-the-counter medications, were the agents most commonly used to treat short-term insomnia. As of 2008 and 2017, over-the-counter antihistamines were not recommended by the American Academy of Sleep Medicine for treatment of chronic insomnia "due to the relative lack of efficacy and safety data". Neither version of their guidelines explicitly included or mentioned doxylamine, although diphenhydramine was discussed. A 2015 systematic review of over-the-counter sleep aids including doxylamine found little evidence to inform the use of doxylamine for treatment of insomnia. A major systematic review and network meta-analysis of medications for the treatment of insomnia published in 2022 found that doxylamine had an effect size (standardized mean difference (SMD)) against placebo for treatment of insomnia at 4weeks of 0.47 (95% CI 0.06 to 0.89). The certainty of evidence was rated as moderate. No data were available for doxylamine in terms of longer-term treatment (3months).
For comparison, the other sedating medicines assessed, doxepin and trimipramine (both of which are tricyclic antidepressants) had effect sizes (SMD) at 4weeks of 0.30 (95% CI –0.05 to 0.64) (very low certainty evidence) and 0.55 (95% CI –0.11 to 1.21) (very low certainty evidence), respectively. Doses of doxylamine that have been used for sleep range from 5 to 50mg, with 25mg being the typical dose. Morning sickness Doxylamine is used in the combination drug pyridoxine/doxylamine to treat morning sickness (nausea and vomiting of pregnancy). It is the only medication approved by the United States Food and Drug Administration for the treatment of morning sickness. Available forms Doxylamine is used medically as doxylamine succinate, the succinate salt of doxylamine, and is available both alone (brand names Decapryn, Doxy-Sleep-Aid, Unisom) and in combination with pyridoxine (a form of vitamin B6) (brand names Bendectin, Bonjesta, Diclegis). Doxylamine is available alone as immediate-release oral tablets containing 25mg doxylamine succinate. Oral tablets containing 12.5mg doxylamine succinate as well as oral capsules containing 25mg doxylamine succinate were also previously available but were discontinued. The combination of doxylamine and pyridoxine is available in the form of extended- and delayed-release oral tablets containing 10 to 20mg doxylamine succinate and 10 to 20mg pyridoxine hydrochloride. Doxylamine alone is available over-the-counter, whereas doxylamine in combination with pyridoxine is a prescription-only medication. Doxylamine is also available in over-the-counter nighttime cold medicine products such as NyQuil Cold & Flu (contains acetaminophen, doxylamine succinate 6.25 to 12.5mg, and dextromethorphan hydrobromide), where it serves as the sedating component. Contraindications The fetal safety rating of doxylamine is "A" (no evidence of risk). Side effects Side effects of doxylamine include dizziness, drowsiness, and dry mouth, among others. Doxylamine is a potent anticholinergic and has a side-effect profile common to such drugs, including blurred vision, dry mouth, constipation, muscle incoordination, urinary retention, mental confusion, and delirium. Because of its relatively long elimination half-life (10–12hours), doxylamine is associated with next-day effects including sedation, drowsiness, grogginess, dry mouth, and tiredness when used as a hypnotic. This may be described as a "hangover effect". The shorter elimination half-life of diphenhydramine (4–8hours) compared to doxylamine may give it an advantage over doxylamine as a sleep aid in this regard. Antihistamines like doxylamine are sedating initially but tolerance occurs with repeated use and can result in rebound insomnia upon discontinuation. Occasional case reports of coma and rhabdomyolysis have been reported with doxylamine. This is in contrast to diphenhydramine. Studies of doxylamine's carcinogenicity in mice and rats have produced positive results for both liver and thyroid cancer, especially in the mouse. The carcinogenicity of the drug in humans is not well-studied, and the International Agency for Research on Cancer lists the drug as "not classifiable as to its carcinogenicity to humans". Continuous and/or cumulative use of anticholinergic medications, including first-generation antihistamines, is associated with a higher risk of cognitive decline and dementia in older people. Overdose Doxylamine is generally safe for administration to healthy adults. 
Doses of doxylamine of up to 1,600mg/day for 6months have been given to adults with schizophrenia, with little toxicity encountered. The median lethal dose (LD50) is estimated to be 50–500mg/kg in humans. Symptoms of overdose may include dry mouth, dilated pupils, insomnia, night terrors, euphoria, hallucinations, seizures, rhabdomyolysis, and death. Fatalities have been reported from doxylamine overdose. These have been characterized by coma, tonic-clonic (or grand mal) seizures and cardiopulmonary arrest. Children appear to be at a high risk for cardiopulmonary arrest. A toxic dose for children of more than 1.8mg/kg has been reported. A 3-year-old child died 18 hours after ingesting 1,000mg doxylamine succinate. Rarely, an overdose results in rhabdomyolysis and acute kidney injury. Pharmacology Pharmacodynamics Doxylamine acts primarily as an antagonist or inverse agonist of the histamine H1 receptor. This action is responsible for its antihistamine and sedative properties. To a lesser extent, doxylamine acts as an antagonist of the muscarinic acetylcholine receptors, an action responsible for its anticholinergic and (at high doses) deliriant effects. Pharmacokinetics The bioavailability of doxylamine is 24.7% for oral administration and 70.8% for intranasal administration. The Tmax of doxylamine is 1.5 to 2.5 hours. Its elimination half-life is 10 to 12hours (range 7 to 15hours). Doxylamine is metabolized in the liver primarily by the cytochrome P450 enzymes CYP2D6, CYP1A2, and CYP2C9. The main metabolites are N-desmethyldoxylamine, N,N-didesmethyldoxylamine, and doxylamine N-oxide. Doxylamine is eliminated 60% in the urine and 40% in feces. Chemistry Doxylamine is a member of the ethanolamine class of antihistamines. Other antihistamines from this group include bromodiphenhydramine, carbinoxamine, clemastine, dimenhydrinate, diphenhydramine, orphenadrine, and phenyltoloxamine. History Doxylamine is a first-generation antihistamine; it was discovered by Nathan Sperber and colleagues and was first reported in 1948 or 1949. It has been the antihistamine component of NyQuil since 1966. Bendectin, a combination of doxylamine, pyridoxine (vitamin B6), and dicyclomine (an anticholinergic antispasmodic agent), was marketed for treatment of morning sickness in 1956. This product was reformulated in 1976 to remove dicyclomine. The reformulated product was voluntarily discontinued by the manufacturer in the United States in 1983 due to concerns about an alleged association with congenital limb defects. However, these concerns have not been supported by studies. In 2013, doxylamine/pyridoxine was reintroduced in the United States under the brand name Diclegis. The combination was not removed from the market in Canada, where it had been marketed since 1979. Society and culture Formulations Doxylamine is primarily used as the succinic acid salt, doxylamine succinate. It is the sedating ingredient of NyQuil (generally in combination with dextromethorphan and acetaminophen). In Commonwealth countries, such as Australia, Canada, South Africa, and the United Kingdom, doxylamine is available prepared with paracetamol (acetaminophen) and codeine under the brand names Dolased, Propain Plus, Syndol, or Mersyndol, as treatment for tension headache and other types of pain.
Doxylamine succinate is used in general over-the-counter sleep-aids branded as Somnil (South Africa), Dozile, Donormyl, Lidène (France, Russian Federation), Dormidina (Spain, Portugal), Restavit, Unisom-2, Sominar (Thailand), Sleep Aid (generic, Australia) and Dorminox (Poland). In the United States: Doxylamine succinate is the active ingredient in many over-the-counter sleep aids branded under various names. Doxylamine succinate and pyridoxine (Vitamin B6) are the ingredients of Diclegis, approved by the FDA in April 2013 becoming the only drug approved for morning sickness with a class A safety rating for pregnancy (no evidence of risk). In Canada: Doxylamine succinate and pyridoxine (vitamin B6) are the ingredients of Diclectin, which is used to prevent morning sickness. It is also available in combination with vitamin B6 and folic acid under the brand name Evanorm (marketed by Ion Healthcare). In India Doxylamine preparations are available typically in combination with pyridoxine which may also contain folic acid. Doxylamine usage is thus restricted for pregnant women. References 2-Pyridyl compounds Antiemetics Antihistamines Dimethylamino compounds Ethers H1 receptor antagonists Hypnotics M1 receptor antagonists M3 receptor antagonists M4 receptor antagonists M5 receptor antagonists Sedatives
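Referring back to the pharmacokinetics section above, the "next-day" or "hangover" effects attributed to doxylamine's 10 to 12 hour elimination half-life follow from simple first-order elimination arithmetic: the fraction of a dose remaining after a time t is 0.5 raised to the power t divided by the half-life. The sketch below is purely illustrative (a single dose, an assumed 8-hour sleep period, and an assumed mid-range 6-hour value for diphenhydramine) and is not dosing guidance.

# First-order elimination arithmetic for the half-life figures quoted above.
# Illustrative only; assumes a single dose and an 8-hour sleep period.

def fraction_remaining(hours_elapsed: float, half_life_hours: float) -> float:
    """Fraction of drug remaining after hours_elapsed, given the half-life."""
    return 0.5 ** (hours_elapsed / half_life_hours)

SLEEP_HOURS = 8.0  # assumed sleep period

for label, half_life in [("doxylamine (10 h)", 10.0),
                         ("doxylamine (12 h)", 12.0),
                         ("diphenhydramine (assumed 6 h)", 6.0)]:
    f = fraction_remaining(SLEEP_HOURS, half_life)
    print(f"{label}: about {f:.0%} of the dose remains after {SLEEP_HOURS:.0f} h")

On this crude picture, more than half of a bedtime dose of doxylamine is still in the body on waking, while the shorter-half-life comparator has fallen to well under half, which is the shape of the argument made in the side-effects section above.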
Doxylamine
[ "Chemistry", "Biology" ]
2,875
[ "Hypnotics", "Behavior", "Functional groups", "Organic compounds", "Ethers", "Sleep" ]
1,546,503
https://en.wikipedia.org/wiki/WinCustomize
WinCustomize is a website that provides content for users to customize Microsoft Windows. The site hosts thousands of skins, themes, icons, wallpapers, and other graphical content to modify the Windows graphical user interface. There is some premium or paid content, however, the vast majority of the content is free for users to download. Site history WinCustomize was launched in March 2001 by Brad Wardell and Pat Ford, both of whom work at Stardock. After the dot-com recession had taken down many popular skin sites, WinCustomize quickly grew in popularity due to a combination of wide variety of content, uptime reliability, and being the preferred content destination by Stardock customers. The site has grown at a far greater pace than its founders had anticipated. It has managed to avoid having to put many limitations on users or having to resort to pop-up advertising because of its corporate patron Stardock subsidizing its costs. This growth has prompted several site redesigns to offer improved functionality and reliability to users. Since launch, WinCustomize has undergone several iterations: WinCustomize 2k5 — Launched at the end of 2004, WinCustomize was redesigned for improved stability, and added functionality, such as personal pages for subscribers, an articles' system, tutorials etc. WinCustomize 2k7 — Launched January 15, 2007, WC2k7 was a fundamental rewrite using ASP.NET. The focus was to build a foundation that was easier to maintain and, in the future, expand. WinCustomize v6 — Planned for Late 2008/Early 2009, the WC v6 project aims to be a major revision to how users navigate and interact with the site and the community as a whole. Where 2k7 was focused on the core codebase, v6 is focused on the user interface and experience. In July 2007 the WinCustomize Wiki was launched. WinCustomize 2010 — WinCustomize 2010 was launched on April 20, 2010. This major revision represents a major change in the sites look and navigation for users. A guided tour of the new site was published for users. Popular skinning programs Programs heavily associated with Windows customization include: WindowBlinds – enables users to customize the look and feel of the Windows GUI. Winamp – A skinnable media player from Nullsoft. IconPackager – A program that enables users to change their Windows icons. Rainlendar — A skinnable calendar and small desktop application (widgets & monitoring) program. DesktopX — A program that enables users to build their own desktop with objects and widgets. Windows Media Player — Microsoft's skinnable media player. LiteStep – A noted desktop shell replacement for Windows. ObjectDock — provides a dock that adds functionality to the Windows interface, similar but not an emulation of the dock in Mac OS X. LogonStudio — alters the Windows XP and Windows Vista welcome screen. BootSkin — alters the Windows 2000 and Windows XP boot screen. References External links WinCustomize forums WinCustomize Wiki Digital art Art websites Online databases Computing websites Stardock software
WinCustomize
[ "Technology" ]
646
[ "Computing websites" ]
1,546,530
https://en.wikipedia.org/wiki/Breather
In physics, a breather is a nonlinear wave in which energy concentrates in a localized and oscillatory fashion. This contradicts the expectations derived from the corresponding linear system for infinitesimal amplitudes, which tends towards an even distribution of initially localized energy. A discrete breather is a breather solution on a nonlinear lattice. The term breather originates from the characteristic that most breathers are localized in space and oscillate (breathe) in time. The opposite situation, oscillation in space that is localized in time, is also denoted as a breather. Overview A breather is a localized periodic solution of either continuous media equations or discrete lattice equations. The exactly solvable sine-Gordon equation and the focusing nonlinear Schrödinger equation are examples of one-dimensional partial differential equations that possess breather solutions. Discrete nonlinear Hamiltonian lattices in many cases support breather solutions. Breathers are solitonic structures. There are two types of breathers: standing or traveling ones. Standing breathers correspond to localized solutions whose amplitude varies in time (they are sometimes called oscillons). A necessary condition for the existence of breathers in discrete lattices is that the breather's main frequency and all its multiples are located outside of the phonon spectrum of the lattice. Example of a breather solution for the sine-Gordon equation The sine-Gordon equation is a nonlinear dispersive partial differential equation for a field u that is a function of the spatial coordinate x and time t. An exact standing-breather solution, found by using the inverse scattering transform and written out below, is periodic in time t for ω < 1 and decays exponentially when moving away from x = 0. Example of a breather solution for the nonlinear Schrödinger equation The focusing nonlinear Schrödinger equation is a dispersive partial differential equation for a complex field u as a function of x and t. Here i denotes the imaginary unit. One of the breather solutions (the Kuznetsov-Ma breather) is periodic in space x and approaches the uniform value a when moving away from the focus time t = 0. These breathers exist only for values of the modulation parameter b below a critical value. Note that a limiting case of the breather solution is the Peregrine soliton. See also Breather surface Soliton References and notes Waves
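The sine-Gordon equation, its standing breather, and the focusing nonlinear Schrödinger equation referred to above can be written, in one standard normalization (which may differ from the normalization the original article used), as follows; u, x, t and ω are the quantities already named in the text.

% Standard forms, given here as a hedged reconstruction; the original
% article's normalization and sign conventions may differ.
\[
  \frac{\partial^2 u}{\partial t^2}
  - \frac{\partial^2 u}{\partial x^2}
  + \sin u = 0
  \qquad \text{(sine-Gordon equation)}
\]
\[
  u(x,t) = 4\arctan\!\left(
    \frac{\sqrt{1-\omega^{2}}\,\cos(\omega t)}
         {\omega\,\cosh\!\left(\sqrt{1-\omega^{2}}\,x\right)}
  \right),
  \qquad 0 < \omega < 1,
\]
% periodic in t and exponentially localized around x = 0, as stated in the text.
\[
  i\,\frac{\partial u}{\partial t}
  + \tfrac{1}{2}\,\frac{\partial^{2} u}{\partial x^{2}}
  + |u|^{2}\,u = 0
  \qquad \text{(focusing nonlinear Schr\"odinger equation, one common normalization)}
\]

The Kuznetsov-Ma expression mentioned in the text is not written out here, since its exact parametrization in terms of a and b depends on the normalization chosen for the Schrödinger equation.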
Breather
[ "Physics" ]
491
[ "Waves", "Physical phenomena", "Motion (physics)" ]
1,546,802
https://en.wikipedia.org/wiki/Servants%27%20hall
The servants' hall is a common room for domestic workers in a great house, typically referring to the servants' dining room. If there is no separate sitting room, the servants' hall doubles as the place servants may spend their leisure hours and serves as both sitting room and dining room. Background Meals in the servants' hall were sometimes very formal affairs, depending on the size and formality of the household. At dinner in a formal house, the butler and housekeeper presided over the table much as the master and lady of the house did 'above stairs' (i.e., in the rooms occupied by the employer). In Victorian England, the strict rules of precedence were mirrored by the domestic staff in grand or formal homes in the seating arrangements of the Servants' Hall. A senior servant such as the lady's maid took the place of honour but would have to "go lower" (i.e. take a place further down the table) if the employer of a visiting servant outranked the mistress of the house. See also Servants' quarters References Domestic work Rooms
Servants' hall
[ "Engineering" ]
218
[ "Rooms", "Architecture" ]
1,546,863
https://en.wikipedia.org/wiki/Hans%20Reichenbach
Hans Reichenbach (September 26, 1891 – April 9, 1953) was a leading philosopher of science, educator, and proponent of logical empiricism. He was influential in the areas of science, education, and of logical empiricism. He founded the Gesellschaft für empirische Philosophie (Society for Empirical Philosophy) in Berlin in 1928, also known as the "Berlin Circle". Carl Gustav Hempel, Richard von Mises, David Hilbert and Kurt Grelling all became members of the Berlin Circle. In 1930, Reichenbach and Rudolf Carnap became editors of the journal Erkenntnis. He also made lasting contributions to the study of empiricism based on a theory of probability; the logic and the philosophy of mathematics; space, time, and relativity theory; analysis of probabilistic reasoning; and quantum mechanics. In 1951, he authored The Rise of Scientific Philosophy, his most popular book. Early life Hans was the second son of a Jewish merchant, Bruno Reichenbach, who had converted to Protestantism. He married Selma Menzel, a school mistress, who came from a long line of Protestant professionals which went back to the Reformation. His elder brother Bernard played a significant role in the left communist movement. His younger brother, Herman was a music educator. After completing secondary school in Hamburg, Hans Reichenbach studied civil engineering at the Hochschule für Technik Stuttgart, and physics, mathematics and philosophy at various universities, including Berlin, Erlangen, Göttingen and Munich. Among his teachers were Ernst Cassirer, David Hilbert, Max Planck, Max Born, Edmund Husserl, and Arnold Sommerfeld. Political activism Reichenbach was active in youth movements and student organizations. He joined the Freistudentenschaft in 1910. He attended the founding conference of the Freideutsche Jugend umbrella group at Hoher Meissner in 1913. He published articles about the university reform, the freedom of research, and against anti-Semitic infiltrations in student organizations. His older brother Bernard shared in this activism and went on to become a member of the Communist Workers' Party of Germany, representing this organisation on the Executive Committee of the Communist International. Hans wrote the Platform of the Socialist Student Party, Berlin which was published in 1918. The party had remained clandestine until the November Revolution when it was formally founded with him as chairman. He also worked with Karl Wittfogel, Alexander Schwab and his other brother Herman at this time. In 1919 his text Student und Sozialismus: mit einem Anhang: Programm der Sozialistischen Studentenpartei was published by Hermann Schüller, an activist with the League for Proletarian Culture. However following his attending lectures by Albert Einstein in 1919, he stopped participating in political groups. Academic career Reichenbach received a degree in philosophy from the University of Erlangen in 1915 and his PhD dissertation on the theory of probability, titled Der Begriff der Wahrscheinlichkeit für die mathematische Darstellung der Wirklichkeit (The Concept of Probability for the Mathematical Representation of Reality) and supervised by Paul Hensel and Max Noether, was published in 1916. Reichenbach served during World War I on the Russian front, in the German army radio troops. In 1917 he was removed from active duty, due to an illness, and returned to Berlin. While working as a physicist and engineer, Reichenbach attended Albert Einstein's lectures on the theory of relativity in Berlin from 1917 to 1920. 
In 1920 Reichenbach began teaching at the Technische Hochschule Stuttgart as Privatdozent. In the same year, he published his first book (which was accepted as his habilitation in physics at the Technische Hochschule Stuttgart) on the philosophical implications of the theory of relativity, The Theory of Relativity and A Priori Knowledge (Relativitätstheorie und Erkenntnis Apriori), which criticized the Kantian notion of synthetic a priori. He subsequently published Axiomatization of the Theory of Relativity (1924), From Copernicus to Einstein (1927) and The Philosophy of Space and Time (1928), the last stating the logical positivist view on the theory of relativity. Reichenbach distinguishes between axioms of connection and of coordination. Axioms of connection are those scientific laws which specify specific relations between specific physical things, like Maxwell’s equations. They describe empirical laws. Axioms of coordination are those laws which describe all things and are a priori, like Euclidean geometry and are “general rules according to which the connections take place”. For example the axioms of connection of gravitational equations are based upon the axioms of coordination of arithmetic. Another distinction of his was between the 'context of discovery' and 'context of justification'. The way scientists come up with ideas is not always the same as the way they justify them, and so as separate objects of study Reichenbach distinguished between them. In 1926, with the help of Albert Einstein, Max Planck and Max von Laue, Reichenbach became assistant professor in the physics department of the University of Berlin. He gained notice for his methods of teaching, as he was easily approached and his courses were open to discussion and debate. This was highly unusual at the time, although the practice is nowadays a common one. In 1928, Reichenbach founded the so-called "Berlin Circle" (; ). Among its members were Carl Gustav Hempel, Richard von Mises, David Hilbert and Kurt Grelling. The Vienna Circle manifesto lists 30 of Reichenbach's publications in a bibliography of closely related authors. In 1930 he and Rudolf Carnap began editing the journal Erkenntnis. When Adolf Hitler became Chancellor of Germany in 1933, Reichenbach was immediately dismissed from his appointment at the University of Berlin under the government's so called "Race Laws" due to his Jewish ancestry. Reichenbach himself did not practise Judaism, and his mother was a German Protestant, but he nevertheless suffered problems. He thereupon emigrated to Turkey, where he headed the department of philosophy at Istanbul University. He introduced interdisciplinary seminars and courses on scientific subjects, and in 1935 he published The Theory of Probability. In 1938, with the help of Charles W. Morris, Reichenbach moved to the United States to take up a professorship at the University of California, Los Angeles in its Philosophy Department. Reichenbach helped establish UCLA as a leading philosophy department in the United States in the post-war period. Carl Hempel, Hilary Putnam, and Wesley Salmon were perhaps his most prominent students. During his time there, he published several of his most notable books, including Philosophic Foundations of Quantum Mechanics in 1944, Elements of Symbolic Logic in 1947, and The Rise of Scientific Philosophy (his most popular book) in 1951. Reichenbach died unexpectedly of a heart attack on April 9, 1953. 
He was living in Los Angeles at the time, and had been working on problems in the philosophy of time and on the nature of scientific laws. As part of this he proposed a three part model of time in language, involving speech time, event time and — critically — reference time, which has been used by linguists since for describing tenses. This work resulted in two books published posthumously: The Direction of Time and Nomological Statements and Admissible Operations. Archives Hans Reichenbach manuscripts, photographs, lectures, correspondence, drawings and other related materials are maintained by the Archives of Scientific Philosophy, Special Collections, University Library System, University of Pittsburgh. Much of the content has been digitized. Some more notable content includes: Correspondence to Nagel, 1934-1938 Philosophy Congress Responses to Questionnaire Weyl's Extension of the Riemannian Concept of Space, Appendix Selected publications 1916. Der Begriff der Wahrscheinlichkeit für die mathematische Darstellung der Wirklichkeit (Ph.D. dissertation, University of Erlangen). 1920. Relativitätstheorie und Erkenntnis Apriori (habilitation thesis, Technische Hochschule Stuttgart). English translation: 1965. The theory of relativity and a priori knowledge. University of California Press. 1922. "Der gegenwärtige Stand der Relativitätsdiskussion." English translation: "The present state of the discussion on relativity" in Reichenbach (1959). 1924. Axiomatik der relativistischen Raum-Zeit-Lehre. English translation: 1969. Axiomatization of the theory of relativity. University of California Press. 1924. "Die Bewegungslehre bei Newton, Leibniz und Huyghens." English translation: "The theory of motion according to Newton, Leibniz, and Huyghens" in Reichenbach (1959). 1927. Von Kopernikus bis Einstein. Der Wandel unseres Weltbildes. English translation: 1942, From Copernicus to Einstein. Alliance Book Co. 1928. Philosophie der Raum-Zeit-Lehre. English translation: Maria Reichenbach, 1957, The Philosophy of Space and Time. Dover. 1930. Atom und Kosmos. Das physikalische Weltbild der Gegenwart. English translation: 1932, Atom and cosmos: the world of modern physics. G. Allen & Unwin, ltd. 1931. "Ziele und Wege der heutigen Naturphilosophie." English translation: "Aims and methods of modern philosophy of nature" in Reichenbach (1959). 1935. Wahrscheinlichkeitslehre: eine Untersuchung über die logischen und mathematischen Grundlagen der Wahrscheinlichkeitsrechnung. English translation: 1949, The theory of probability, an inquiry into the logical and mathematical foundations of the calculus of probability. University of California Press. 1938. Experience and prediction: an analysis of the foundations and the structure of knowledge. University of Chicago Press. 1942. From Copernicus to Einstein. Dover 1980: 1944. Philosophic Foundations of Quantum Mechanics. University of California Press. Dover 1998: 1947. Elements of Symbolic Logic. Dover 1980: 1948. "Philosophy and physics" in Faculty research lectures, 1946. University of California Press. 1949. "The philosophical significance of the theory of relativity" in Schilpp, P. A., ed., Albert Einstein: philosopher-scientist. Evanston: The Library of Living Philosophers. 1951. The Rise of Scientific Philosophy. University of California Press. 1954. Nomological statements and admissible operations. North Holland. 1956. The Direction of Time. University of California Press. Dover 1971. 1959. Modern philosophy of science: Selected essays by Hans Reichenbach. 
Routledge & Kegan Paul. Greenwood Press 1981: 1978. Selected writings, 1909–1953: with a selection of biographical and autobiographical sketches (Vienna circle collection). Dordrecht: Reidel. Springer paperback vol 1: 1979. Hans Reichenbach, logical empiricist (Synthese library). Dordrecht: Reidel. 1991. Erkenntnis Orientated: A Centennial volume for Rudolf Carnap and Hans Reichenbach. Kluwer. Springer 2003: 1991. Logic, language, and the structure of scientific theories: proceedings of the Carnap-Reichenbach centennial, University of Konstanz, 21–24 May 1991. University of Pittsburgh Press. See also American philosophy List of American philosophers References Sources Adolf Grünbaum, 1963, Philosophical Problems of Space and Time. Alfred A. Knopf. Ch. 3. Günther Sandner, The Berlin Group in the Making: Politics and Philosophy in the Early Works of Hans Reichenbach and Kurt Grelling. Proceedings of 10th International Congress of the International Society for the History of Philosophy of Science (HOPOS), Ghent, July 2014. (Abstract .) Carl Hempel, 1991, Hans Reichenbach remembered, Erkenntnis 35: 5–10. Wesley Salmon, 1977, "The philosophy of Hans Reichenbach," Synthese 34: 5–88. Wesley Salmon (ed.), 1979, Hans Reichenbach: Logical Empiricist. Springer. Wesley Salmon, 1991, "Hans Reichenbach's vindication of induction," Erkenntnis 35: 99–122. External links The Rise of Scientific Philosophy Descriptive summary & full searchable text at Google Book Search The Internet Encyclopedia of Philosophy: Hans Reichenbach by Mauro Murzi. The Stanford Encyclopedia of Philosophy: Hans Reichenbach by Clark Glymour and Frederick Eberhardt The Stanford Encyclopedia of Philosophy: "Reichenbach's Common Cause Principle" by Frank Arntzenius. Guide to the Hans Reichenbach Collection at the University of Pittsburgh's Archive of Scientific Philosophy "Reichenbach's Theory of Tense and its Application to English" 1891 births 1953 deaths 20th-century American male writers 20th-century American philosophers 20th-century American physicists 20th-century American educators 20th-century American essayists 20th-century German male writers 20th-century German philosophers 20th-century German physicists American logicians American male essayists American male non-fiction writers Jewish emigrants from Nazi Germany to the United States American socialists Analytic philosophers Philosophers of probability Empiricists German epistemologists 20th-century German educators German logicians German male essayists German male non-fiction writers German physicists German socialists History of logic Academic staff of the Humboldt University of Berlin German expatriates in Turkey Expatriate academics in Turkey Academic staff of Istanbul University Logical positivism German philosophers of education 20th-century German educational theorists German philosophers of language Philosophers of logic Philosophers of mathematics Philosophers of time German philosophy academics Philosophy writers Probability theorists German quantum physicists UCLA Department of Philosophy faculty University of California, Los Angeles faculty Vienna Circle Writers from Hamburg Humboldt University of Berlin alumni
Hans Reichenbach
[ "Mathematics" ]
2,863
[ "Philosophers of mathematics", "Mathematical logic", "Logical positivism" ]
1,546,930
https://en.wikipedia.org/wiki/Graphoscope
A graphoscope was a 19th-century device used in parlors to enhance the viewing of photographs and text. The graphoscope is thought to be based on an 1864 patent of Charles John Rowsell. These novelty items consisted of a single magnifying glass, often in a wooden frame, in an overall construction that could collapse into a compact rectangular form. A photo/card holder was usually also included. The KOMBI camera often included a graphoscope in its design for better film viewing. Many devices combined a stereoscope and a graphoscope. See also Zograscope Sources https://web.archive.org/web/20120204093105/http://www.eyeantiques.com/ViewingInstruments/Graphoscope.htm http://www.bdcmuseum.org.uk/explore/item/69068/ https://web.archive.org/web/20160305080514/http://www.georgeglazer.com/archives/decarts/instruments/stereoscope.html https://web.archive.org/web/20091026224453/http://geocities.com/mbarel.geo/kombi.html Graphoscope. History and how it works. References Magnifiers
Graphoscope
[ "Technology", "Engineering" ]
290
[ "Magnifiers", "Measuring instruments" ]
1,547,057
https://en.wikipedia.org/wiki/EHealth
eHealth describes healthcare services which are supported by digital processes, communication or technology such as electronic prescribing, Telehealth, or Electronic Health Records (EHRs). The term "eHealth" originated in the 1990s, initially conceived as "Internet medicine," but has since evolved to have a broader range of technologies and innovations aimed at enhancing healthcare delivery and accessibility. According to the World Health Organization (WHO), eHealth encompasses not only internet-based healthcare services but also modern advancements such as artificial intelligence, mHealth (mobile health), and telehealth, which collectively aim to improve accessibility and efficiency in healthcare delivery. Usage of the term varies widely. A study in 2005 found 51 unique definitions of eHealth, reflecting its diverse applications and interpretations. While some argue that it is interchangeable with health informatics as a broad term covering electronic/digital processes in health, others use it in the narrower sense of healthcare practice specifically facilitated by the Internet. It also includes health applications and links on mobile phones, referred to as mHealth or m-Health. . Key components of eHealth include electronic health records (EHRs), telemedicine, health information exchange, mobile health applications, wearable devices, and online health information. For example, diabetes monitoring apps allow patients to track health metrics in real time, bridging the gap between home and clinical care. These technologies enable healthcare providers, patients, and other stakeholders to access, manage, and exchange health information more effectively, leading to improved communication, decision-making, and overall healthcare outcomes. Types The term can encompass a range of services or systems that are at the edge of medicine/healthcare and information technology, including: Electronic health record: enabling the communication of patient data between different healthcare professionals (GPs, specialists etc.); Computerized physician order entry: a means of requesting diagnostic tests and treatments electronically and receiving the results ePrescribing: access to prescribing options, printing prescriptions to patients and sometimes electronic transmission of prescriptions from doctors to pharmacists Clinical decision support system: providing information electronically about protocols and standards for healthcare professionals to use in diagnosing and treating patients Telemedicine: physical and psychological diagnosis and treatments at a distance, including telemonitoring of patients functions and videoconferencing; Telerehabilitation: providing rehabilitation services over a distance through telecommunications. Telesurgery: use robots and wireless communication to perform surgery remotely. Teledentistry: exchange clinical information and images over a distance. Consumer health informatics: use of electronic resources on medical topics by healthy individuals or patients; Health knowledge management: e.g. 
in an overview of latest medical journals, best practice guidelines or epidemiological tracking (examples include physician resources such as Medscape and MDLinx); Virtual healthcare teams: consisting of healthcare professionals who collaborate and share information on patients through digital equipment (for transmural care) mHealth or m-Health: includes the use of mobile devices in collecting aggregate and patient-level health data, providing healthcare information to practitioners, researchers, and patients, real-time monitoring of patient vitals, and direct provision of care (via mobile telemedicine); Medical research using grids: powerful computing and data management capabilities to handle large amounts of heterogeneous data. Health informatics / healthcare information systems: also often refer to software solutions for appointment scheduling, patient data management, work schedule management and other administrative tasks surrounding health. There can also be integrated data collection platforms for devices and standards, which require extended research. Internet Based Sources for Public Health Surveillance (Infoveillance). Contested Definition Several authors have noted the variable usage of the term, from being specific to the use of the Internet in healthcare to covering any use of computers in healthcare more generally. Various authors have considered the evolution of the term and its usage and how this maps to changes in health informatics and healthcare generally. Oh et al., in a 2005 systematic review of the term's usage, offered the definition of eHealth as a set of technological themes in health today, more specifically based on commerce, activities, stakeholders, outcomes, locations, or perspectives. One thing that all sources seem to agree on is that e-health initiatives do not originate with the patient, though the patient may be a member of a patient organization that seeks to do this, as in the e-Patient movement. eHealth literacy eHealth literacy is defined as "the ability to seek, find, understand and appraise health information from electronic sources and apply knowledge gained to addressing or solving a health problem." This concept encompasses six types of literacy: traditional (literacy and numeracy), information, media, health, computer, and scientific. Of these, media and computer literacies are unique to the Internet context. eHealth media literacy includes awareness of media bias, the ability to discern both explicit and implicit meanings from media messages, and the capability to derive accurate information from digital content. While eHealth literacy involves the ability to use technology, it is extremely important to have the skills to critically evaluate online health information. This makes media literacy a critical part of successfully using eHealth. Having the composite skills of eHealth literacy allows health consumers to achieve positive outcomes from using the Internet for health purposes. eHealth literacy has the potential to both protect consumers from harm and empower them to fully participate in informed health-related decision making. People with high levels of eHealth literacy are also more aware of the risk of encountering unreliable information on the Internet. On the other hand, the extension of digital resources to the health domain in the form of eHealth literacy can also create new gaps between health consumers. eHealth literacy hinges not on the mere access to technology, but rather on the skill to apply the accessed knowledge.
The efficiency of eHealth also relies heavily on the efficiency and ease of use of the technology used by the patient. A high understanding of technology will not overcome the obstacles posed by overcomplicated technology when patients are physically or mentally hindered. The population of elderly people surpassed the number of children for the first time in history in 2018. A more multi-faceted approach is necessary for this age group, because they are more susceptible to chronic disease, contraindications of medication, and other age-related setbacks like forgetfulness. eHealth offers services that can be very helpful for all of these scenarios, making an elderly patient's quality of life substantially better with proper use. Data exchange One of the factors hindering the widespread acceptance of e-health tools is the concern about privacy, particularly regarding EPRs (electronic patient records). This concern has to do with the confidentiality of the data, as well as with non-confidential data that may be vulnerable to unauthorized access. Each medical practice has its own jargon and diagnostic tools, so to standardize the exchange of information, various coding schemes may be used in combination with international medical standards. Systems that deal with these transfers are often referred to as Health Information Exchange (HIE). Of the forms of e-health already mentioned, there are roughly two types: front-end data exchange and back-end exchange. Front-end exchange typically involves the patient, while back-end exchange does not. A common example of a rather simple front-end exchange is a patient taking a photo of a healing wound with a mobile phone and sending it via email to the family doctor for review. Such an action may avoid the cost of an expensive visit to the hospital. A common example of a back-end exchange is when a patient on vacation visits a doctor who then may request access to the patient's health records, such as medicine prescriptions, x-ray photographs, or blood test results. Such an action may reveal allergies or other prior conditions that are relevant to the visit. Thesaurus Successful e-health initiatives such as e-Diabetes have shown that for data exchange to be facilitated either at the front-end or the back-end, a common thesaurus is needed for terms of reference. Various medical practices in chronic patient care (such as for diabetic patients) already have a well-defined set of terms and actions, which makes standard communication exchange easier, whether the exchange is initiated by the patient or the caregiver. In general, explanatory diagnostic information (such as the standard ICD-10) may be exchanged insecurely, and private information (such as personal information from the patient) must be secured. E-health manages both flows of information, while ensuring the quality of the data exchange. Early adopters Patients living with long-term conditions (also called chronic conditions) over time often acquire a high level of knowledge about the processes involved in their own care, and often develop a routine in coping with their condition. For these types of routine patients, front-end e-health solutions tend to be relatively easy to implement. E-mental health E-mental health is frequently used to refer to internet-based interventions and support for mental health conditions. However, it can also refer to the use of information and communication technologies that also includes the use of social media, landline and mobile phones. 
These services can range from providing information to offering peer support, computer-based programs, virtual applications, games, and real-time interaction with trained clinicians. Additionally, services can be delivered through telephones and interactive voice response (IVR). Mental disorders, including alcohol and drug use disorders, mood disorders such as depression, dementia, schizophrenia, and anxiety disorders can all be addressed through e-mental health services. The majority of e-mental health interventions have focused on the treatment of depression and anxiety. There are also e-mental health programs available for other interventions such as smoking cessation, gambling, and post-disaster mental health. Advantages and disadvantages E-mental health has a number of advantages such as being low cost, easily accessible and providing anonymity to users. However, there are also a number of disadvantages such as concerns regarding treatment credibility, user privacy and confidentiality. Online security involves the implementation of appropriate safeguards to protect user privacy and confidentiality. This includes appropriate collection and handling of user data, the protection of data from unauthorized access and modification and the safe storage of data. Technical difficulties are another potential disadvantage. With almost all forms of technology, there will be unintended difficulties or malfunctions, including on tablets, computers, and wireless medical devices. eHealth also depends heavily on the patient having functional Wi-Fi, an issue that often cannot be fixed without expert help. E-mental health has been gaining momentum in academic research as well as in practical arenas in a wide variety of disciplines such as psychology, clinical social work, family and marriage therapy, and mental health counseling. Testifying to this momentum, the E-Mental Health movement has its own international organization, the International Society for Mental Health Online. However, e-mental health implementation into clinical practice and healthcare systems remains limited and fragmented. Programs There are at least five programs currently available to treat anxiety and depression. Several programs have been identified by the UK National Institute for Health and Care Excellence as cost-effective for use in primary care. These include Fearfighter, a text-based cognitive behavioral therapy program to treat people with phobias, and Beating the Blues, an interactive text, cartoon and video CBT program for anxiety and depression. Two programs have been supported for use in primary care by the Australian Government. The first is Anxiety Online, a text-based program for the anxiety, depressive and eating disorders, and the second is THIS WAY UP, a set of interactive text, cartoon and video programs for the anxiety and depressive disorders. Another is iFightDepression, a multilingual, free-to-use, web-based tool for self-management of less severe forms of depression, for use under the guidance of a GP or psychotherapist. There are a number of online programs relating to smoking cessation. QuitCoach is a personalised quit plan based on the user's responses to questions regarding giving up smoking, tailored individually each time the user logs into the site. Freedom From Smoking takes users through lessons that are grouped into modules that provide information and assignments to complete. 
The modules guide participants through steps such as preparing to quit smoking, stopping smoking and preventing relapse. Other internet programs have been developed specifically as part of research into treatment for specific disorders. For example, an online self-directed therapy for problem gambling was developed to specifically test this as a method of treatment. All participants were given access to a website. The treatment group was provided with behavioural and cognitive strategies to reduce or quit gambling. This was presented in the form of a workbook which encouraged participants to self-monitor their gambling by maintaining an online log of gambling and gambling urges. Participants could also use a smartphone application to collect self-monitoring information. Finally, participants could also choose to receive motivational email or text reminders of their progress and goals. An internet-based intervention was also developed for use after Hurricane Ike in 2009. During this study, 1,249 disaster-affected adults were randomly recruited to take part in the intervention. Participants were given a structured interview then invited to access the web intervention using a unique password. Access to the website was provided for a four-month period. As participants accessed the site, they were randomly assigned to either the intervention or a comparison condition. Those assigned to the intervention were provided with modules consisting of information regarding effective coping strategies to manage mental health and health risk behaviour. eHealth programs have been found to be effective in treating borderline personality disorder (BPD). Cybermedicine Cybermedicine is the use of the Internet to deliver medical services, such as medical consultations and drug prescriptions. It is the successor to telemedicine, wherein doctors would consult and treat patients remotely via telephone or fax. Cybermedicine is already being used in small projects where images are transmitted from a primary care setting to a medical specialist, who comments on the case and suggests which intervention might benefit the patient. A field that lends itself to this approach is dermatology, where images of an eruption are communicated to a hospital specialist who determines if referral is necessary. The field has also expanded to include online "ask the doctor" services that allow patients direct, paid access to consultations (with varying degrees of depth) with medical professionals (examples include Bundoo.com, Teladoc, and Ask The Doctor). A Cyber Doctor, known in the UK as a Cyber Physician, is a medical professional who conducts consultations via the internet, treating virtual patients whom they may never meet face to face. This is a new area of medicine, which has been utilized by the armed forces and by teaching hospitals offering online consultations to patients before they decide whether to travel for unique medical treatment offered only at a particular medical facility. Self-monitoring healthcare devices Self-monitoring is the use of sensors or tools which are readily available to the general public to track and record personal data. The sensors are usually wearable devices and the tools are digitally available through mobile device applications. Self-monitoring devices were created to make personal data instantly available to the individual for analysis. As of now, fitness and health monitoring are the most popular applications for self-monitoring devices. 
The biggest benefit of self-monitoring devices is that they eliminate the need for third-party facilities such as hospitals to run tests, which are both expensive and lengthy. These devices are an important advancement in the field of personal health management. Self-monitoring devices, like fitness trackers, have also been shown to help manage chronic diseases, providing users with real-time data that supports ongoing care and better disease management. Self-monitoring healthcare devices exist in many forms. An example is the Nike+ FuelBand, which is a modified version of the original pedometer. This device is wearable on the wrist and allows one to set a personal goal for a daily energy burn. It records the calories burned and the number of steps taken for each day while simultaneously functioning as a watch. To add to the ease of the user interface, it includes both numeric and visual indicators of whether or not the individual has achieved his or her daily goal. Finally, it is also synced to an iPhone app which allows for tracking and sharing of personal records and achievements. Other monitoring devices have more medical relevance. A well-known device of this type is the blood glucose monitor. The use of this device is restricted to diabetic patients and allows users to measure the blood glucose levels in their body. It is extremely quantitative and the results are available instantaneously. However, this device is not as independent a self-monitoring device as the Nike+ FuelBand because it requires some patient education before use. One needs to be able to make connections between the levels of glucose and the effect of diet and exercise. In addition, the users must also understand how the treatment should be adjusted based on the results. In other words, the results are not just static measurements. The demand for self-monitoring health devices is skyrocketing, as wireless health technologies have become especially popular in the last few years. In fact, it was expected that by 2016, self-monitoring health devices would account for 80% of wireless medical devices. The key selling point for these devices is the mobility of information for consumers. The accessibility of mobile devices such as smartphones and tablets has increased significantly within the past decade. This has made it easier for users to access real-time information in a number of peripheral devices. There is still much room for improvement in self-monitoring healthcare devices. Although most of these wearable devices have been excellent at providing direct data to the individual user, the biggest remaining task is how to use these data effectively. Although the blood glucose monitor allows the user to take action based on the results, measurements such as the pulse rate, EKG signals, and calories do not necessarily serve to actively guide an individual's personal healthcare management. Consumers are interested in qualitative feedback in addition to the quantitative measurements recorded by the devices. Integrating self-monitoring devices with healthcare providers can help close this gap by allowing healthcare professionals to track their patients' data remotely, which in turn allows for more personalized care and timely interventions. eHealth During COVID-19 The COVID-19 pandemic made it extremely difficult for vast numbers of people to receive adequate healthcare in person. 
Elderly citizens and people with chronic health conditions were at greater risk than the average healthy person and were therefore more adversely affected than most. The switch from in-person to telehealth appointments and interventions was necessary to reduce the risks of spreading and/or contracting the disease. The forced use of telehealth during the pandemic highlighted its strengths and weaknesses, which accelerated the progression of this medium. The user feedback on eHealth during the COVID-19 pandemic was very positive, and consequently many patients and healthcare providers reported that they would continue to use this method of healthcare following the pandemic. In developing countries eHealth in general, and telemedicine in particular, is a vital resource to remote regions of emerging and developing countries but is often difficult to establish because of the lack of communications infrastructure. For example, in Benin, hospitals can often become inaccessible due to flooding during the rainy season. Across Africa, low population density, along with severe weather conditions and the difficult financial situation in many African states, has meant that the majority of African people are badly disadvantaged in medical care. Telemedicine in Nepal is becoming a popular tool for improving health care delivery across its difficult landscape. In many regions there is not only a significant lack of facilities and trained health professionals, but also no access to eHealth because there is also no internet access in remote villages, or even a reliable electricity supply. Approximately 13 percent of people who live in Kenya have health insurance. A majority of the total health expenditure in sub-Saharan Africa was paid out-of-pocket, which forces millions into poverty yearly. A Kenyan service by the name of M-PESA may offer a solution to this problem. This mobile platform provides full transparency of patients' needs and allows access to medical products and the ability to efficiently manage their funding. Internet connectivity, and the benefits of eHealth, can be brought to these regions using satellite broadband technology, and satellite is often the only solution where terrestrial access may be limited, or poor quality, and one that can provide a fast connection over a vast coverage area. Evaluation While eHealth has become an indispensable facet of healthcare in the past five years, there are still barriers preventing it from reaching its full potential. Knowledge of the socio-economic performance of eHealth is limited, and findings from evaluations are often challenging to transfer to other settings. Socio-economic evaluations of some narrow types of mHealth can rely on health economic methodologies, but larger-scale eHealth may have too many variables, and tortuous, intangible cause-and-effect links may need a wider approach. There are no international guidelines for the usage of eHealth due to many variables such as ignorance on the matter, infrastructure issues, quality of healthcare professionals and lack of healthcare plans. The effectiveness of eHealth also depends on the patient's condition. Some researchers believe that online healthcare may be most efficient as a supplement to in-person care. 
See also Personal Science Human Enhancement Quantified self Center for Telehealth and E-Health Law eHealthInsurance EUDRANET European Institute for Health Records Health 2.0 Telehealth Seth Roberts References Further reading External links Health informatics Telemedicine
EHealth
[ "Biology" ]
4,473
[ "Health informatics", "Medical technology" ]
1,547,135
https://en.wikipedia.org/wiki/C.mmp
The C.mmp was an early multiple instruction, multiple data (MIMD) multiprocessor system developed at Carnegie Mellon University (CMU) by William Wulf (1971). The notation C.mmp came from the PMS notation of Gordon Bell and Allen Newell, where a central processing unit (CPU) was designated as C, a variant was noted by the dot notation, and mmp stood for Multi-Mini-Processor. The machine is on display at CMU, in Wean Hall, on the ninth floor. Structure Sixteen Digital Equipment Corporation PDP-11 minicomputers were used as the processing elements, named Compute Modules (CMs) in the system. Each CM had a local memory of 8K and a local set of peripheral devices. One of the challenges was that a device was only available through its unique connected processor, so the input/output (I/O) system (designed by Roy Levin) hid the connectivity of the devices and routed the requests to the hosting processor. If a processor went down, the devices connected to its Unibus became unavailable, which became a problem in overall system reliability. Processor 0 (the boot processor) had the disk drives attached. Each of the Compute Modules shared these communication pathways: An Interprocessor bus – used to distribute system-wide clock, interrupt, and process control messaging among the CMs; A 16×16 crossbar switch – used to connect the 16 CMs on one side and 16 banks of shared memory on the other. If all 16 processors were accessing different banks of memory, the memory accesses would all be concurrent. If two or more processors were trying to access the same bank of memory, one of them would be granted access on one cycle and the remainder would be serviced on subsequent memory cycles. Since the PDP-11 had a logical address space of 16 bits, an additional address translation unit was added to expand the address space to 25 bits for the shared memory space. The Unibus architecture provided 18 bits of physical address, and the two high-order bits were used to select one of four relocation registers, which selected a bank of memory. Properly managing these registers was one of the challenges of programming the operating system (OS) kernel. The original C.mmp design used magnetic-core memory, but during its lifetime, higher-performance dynamic random-access memory (DRAM) became available and the system was upgraded. The original processors were PDP-11/20 processors, but in the final system, only five of these were used; the remaining 11 were PDP-11/40 processors, which were modified by having extra writeable microcode space. All modifications to these machines were designed and built at CMU. Most of the 11/20 modifications were custom changes to the wire-wrapped backplane, but because the PDP-11/40 was implemented in microcode, a separate proc-mod board was designed that intercepted certain instructions and implemented the protected operating system requirements. For example, it was necessary, for operating system integrity, that the stack pointer register never be odd. On the 11/20, this was done by clipping the lead to the low-order bit of the stack register. On the 11/40, any access to the stack was intercepted by the proc-mod board and generated an illegal data access trap if the low-order bit was 1. Operating system The operating system (OS) was named Hydra. It was a capability-based, object-oriented, multi-user microkernel. System resources were represented as objects and protected through capabilities. 
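The two-level address formation described above can be illustrated with a short, self-contained sketch. Splitting the 18-bit Unibus address into a 2-bit register selector and a 16-bit offset follows the text; treating the relocation register contents as a 9-bit frame number that is simply prepended to the offset is an assumption made here so that the result comes out 25 bits wide, not a documented detail of the real hardware.

```python
# Illustrative sketch of C.mmp-style relocation: map an 18-bit Unibus address into
# the 25-bit shared-memory space. The two high-order bits select one of four
# relocation registers, as described in the text; the 9-bit frame width is assumed.

relocation_registers = [0b000000000, 0b000000001, 0b000000010, 0b000000011]  # assumed frame numbers

def translate(unibus_address: int) -> int:
    if not 0 <= unibus_address < (1 << 18):
        raise ValueError("Unibus addresses are 18 bits wide")
    selector = unibus_address >> 16         # two high-order bits pick a relocation register
    offset = unibus_address & 0xFFFF        # low 16 bits: offset within the selected bank
    frame = relocation_registers[selector]  # assumed 9-bit frame number
    return (frame << 16) | offset           # 9 + 16 = 25-bit shared-memory address

print(hex(translate(0b10_0000000000001010)))  # selector 2, offset 0x000A -> 0x2000a
```

Because the operating system could reload these registers, the 16-bit window each processor saw into the shared space could be changed on the fly; as the text notes, managing that correctly was one of the harder parts of the kernel.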
The OS and most application software were written in the programming language BLISS-11, which required cross-compiling on a PDP-10. The OS used very little assembly language. Among the programming languages available on the system was an ALGOL 68 variant which included extensions supporting parallel computing, to make good use of the C.mmp. The ALGOL compiler ran natively under the Hydra OS. Reliability Because overall system reliability depended on having all 16 CPUs running, there were serious problems with overall hardware reliability. If the mean time between failures (MTBF) of one processor was 24 hours, then the overall system MTBF was only 24/16 hours, or about 90 minutes. Overall, the system usually ran for between two and six hours. Many of these failures were due to timing glitches in the many custom circuits added to the processors. Great effort was expended to improve hardware reliability, and when a processor was noticeably failing, it was partitioned out, and would run diagnostics for several hours. When it had passed a first set of diagnostics, it was partitioned back in as an I/O processor and would not run application code (but its peripheral devices were now available); it continued to run diagnostics. If it passed these after several more hours, it was reinstated as a full member of the processor set. Similarly, if a block of memory (one page) was detected as faulty, it was removed from the pool of available pages, and until otherwise notified, the OS would ignore this page. Thus, the OS became an early example of a fault-tolerant system, able to deal with the hardware problems that inevitably arose. References Capability systems History of computing Parallel computing
C.mmp
[ "Technology" ]
1,078
[ "History of computing", "Capability systems", "Computers", "Computer systems" ]
1,547,157
https://en.wikipedia.org/wiki/Population%20size
In population genetics and population ecology, population size (usually denoted N) is a countable quantity representing the number of individual organisms in a population. Population size is directly associated with the amount of genetic drift, and is the underlying cause of effects like population bottlenecks and the founder effect. Genetic drift is the major source of loss of genetic diversity within populations; it drives fixation and can potentially lead to speciation events. Genetic drift Of the five conditions required to maintain Hardy–Weinberg equilibrium, infinite population size will always be violated; this means that some degree of genetic drift is always occurring. Smaller population size leads to increased genetic drift; it has been hypothesized that this gives these groups an evolutionary advantage for acquisition of genome complexity. An alternate hypothesis posits that while genetic drift plays a larger role in small populations developing complexity, selection is the mechanism by which large populations develop complexity. Population bottlenecks and founder effect Population bottlenecks occur when population size reduces for a short period of time, decreasing the genetic diversity in the population. The founder effect occurs when few individuals from a larger population establish a new population; it also decreases the genetic diversity, and was originally outlined by Ernst Mayr. The founder effect is a unique case of genetic drift, as the smaller founding population has decreased genetic diversity that will move alleles within the population more rapidly towards fixation. Modeling genetic drift Genetic drift is typically modeled in lab environments using bacterial populations or digital simulation. In digital organisms, a generated population undergoes evolution based on varying parameters, including differential fitness, variation, and heredity set for individual organisms. Rozen et al. use separate bacterial strains on two different media, one with simple nutrient components and one with nutrients noted to help populations of bacteria evolve more heterogeneity. A digital simulation based on the bacterial experimental design was also used, with assorted assignments of fitness and effective population sizes comparable to those of the bacteria, in both small and large population designations. Within both simple and complex environments, smaller populations demonstrated greater population variation than larger populations, which showed no significant fitness diversity. Smaller populations had increased fitness and adapted more rapidly in the complex environment, while large populations adapted faster than small populations in the simple environment. These data demonstrate that the consequences of increased variation within small populations are dependent on the environment: more challenging or complex environments allow variance present within small populations to confer greater advantage. Analysis demonstrates that smaller populations have more significant levels of fitness from heterogeneity within the group regardless of the complexity of the environment; adaptive responses are increased in more complex environments. Adaptations in asexual populations are also not limited by mutations, as genetic variation within these populations can drive adaptation. 
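To make the relationship between population size and drift concrete, here is a minimal Wright–Fisher sketch; this is a standard textbook model, not the simulation used in the studies cited above, and the population sizes, starting allele frequency, and generation count are illustrative values only.

```python
import random

def wright_fisher(n_individuals, p0=0.5, generations=200, seed=1):
    """Track one biallelic locus; each generation resamples 2N gene copies at random."""
    rng = random.Random(seed)
    p = p0
    for _ in range(generations):
        copies = 2 * n_individuals
        count = sum(1 for _ in range(copies) if rng.random() < p)  # binomial sampling
        p = count / copies
        if p in (0.0, 1.0):  # allele lost or fixed: drift has run its course
            break
    return p

for n in (10, 100, 1000):
    outcomes = [wright_fisher(n, seed=s) for s in range(50)]
    finished = sum(1 for p in outcomes if p in (0.0, 1.0))
    print(f"N={n:5d}: {finished}/50 replicates reached fixation or loss within 200 generations")
```

Smaller populations reach fixation or loss far sooner, which is the drift effect the bacterial and digital experiments described above are probing.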
Although small populations tend to face more challenges because of limited access to widespread beneficial mutations, adaptation within these populations is less predictable and allows them to be more plastic in their environmental responses. Fitness increase over time in small asexual populations is known to be strongly positively correlated with population size and mutation rate, and fixation probability of a beneficial mutation is inversely related to population size and mutation rate. LaBar and Adami use digital haploid organisms to assess differing strategies for accumulating genomic complexity. This study demonstrated that both drift and selection are effective in small and large populations, respectively, but that this success is dependent on several factors. Data from the observation of insertion mutations in this digital system demonstrate that small populations evolve larger genome sizes from fixation of deleterious mutations and large populations evolve larger genome sizes from fixation of beneficial mutations. Small populations were noted to have an advantage in attaining full genomic complexity due to drift-driven phenotypic complexity. When deletion mutations were simulated, only the largest populations had any significant fitness advantage. These simulations demonstrate that smaller populations fix deleterious mutations by increased genetic drift. This advantage is likely limited by high rates of extinction. Larger populations evolve complexity through mutations that increase expression of particular genes; removal of deleterious alleles does not limit the development of more complex genomes in the larger groups, and large numbers of insertion mutations resulting in beneficial or non-functional elements within the genome were not required. When deletion mutations occur more frequently, the largest populations have an advantage, which suggests that larger populations generally have an evolutionary advantage for the development of new traits. Critical mutation rate Critical mutation rate, or error threshold, limits the number of mutations that can exist within a self-replicating molecule before genetic information is destroyed in later generations. Contrary to the findings of previous studies, critical mutation rate has been noted to be dependent on population size in both haploid and diploid populations. When populations have fewer than 100 individuals, the critical mutation rate can be exceeded, but doing so leads to a loss of genetic material, which results in further population decline and a likelihood of extinction. This ‘speed limit’ is common within small, adapted asexual populations and is independent of mutation rate. Effective population size (Ne) The effective population size (Ne) is defined as "the number of breeding individuals in an idealized population that would show the same amount of dispersion of allele frequencies under random genetic drift or the same amount of inbreeding as the population under consideration." Ne is usually less than N (the absolute population size), and this has important applications in conservation genetics. Overpopulation refers to any case in which the population of a species exceeds the carrying capacity of its ecological niche. See also Carrying capacity Holocene extinction event Lists of organisms by population Overpopulation Population growth rate References Ecological metrics Population genetics Countable quantities
Population size
[ "Physics", "Mathematics" ]
1,128
[ "Scalar physical quantities", "Physical quantities", "Metrics", "Quantity", "Ecological metrics", "Dimensionless quantities", "Countable quantities" ]
1,547,225
https://en.wikipedia.org/wiki/Adrenergic%20agonist
An adrenergic agonist is a drug that stimulates a response from the adrenergic receptors. The five main categories of adrenergic receptors are α1, α2, β1, β2, and β3, although there are more subtypes; agonists vary in specificity between these receptors and may be classified accordingly. However, there are also other mechanisms of adrenergic agonism. Epinephrine and norepinephrine are endogenous and broad-spectrum. More selective agonists are more useful in pharmacology. An adrenergic agent is a drug, or other substance, which has effects similar to, or the same as, epinephrine (adrenaline). Thus, it is a kind of sympathomimetic agent. Alternatively, it may refer to something which is susceptible to epinephrine, or similar substances, such as a biological receptor (specifically, the adrenergic receptors). Receptors Directly acting adrenergic agonists act on adrenergic receptors. All adrenergic receptors are G-protein coupled, activating signal transduction pathways. The G-protein-coupled receptor can affect the function of adenylate cyclase or phospholipase C; an agonist of the receptor will upregulate the effects on the downstream pathway (it will not necessarily upregulate the pathway itself). The receptors are broadly grouped into α and β receptors. There are two subclasses of α-receptor, α1 and α2, which are further subdivided into α1A, α1B, α1D, α2A, α2B and α2C. The α2C receptor has been reclassed from α1C, due to its greater homology with the α2 class, giving rise to the somewhat confusing nomenclature. The β receptors are divided into β1, β2 and β3. The receptors are classed physiologically, though pharmacological selectivity for receptor subtypes exists and is important in the clinical application of adrenergic agonists (and, indeed, antagonists). From an overall perspective, α1 receptors activate phospholipase C (via Gq), increasing the activity of protein kinase C (PKC); α2 receptors inhibit adenylate cyclase (via Gi), decreasing the activity of protein kinase A (PKA); β receptors activate adenylate cyclase (via Gs), thus increasing the activity of PKA. Agonists of each class of receptor elicit these downstream responses. Uptake and storage Indirectly acting adrenergic agonists affect the uptake and storage mechanisms involved in adrenergic signalling. Two uptake mechanisms exist for terminating the action of adrenergic catecholamines: uptake 1 and uptake 2. Uptake 1 occurs at the presynaptic nerve terminal to remove the neurotransmitter from the synapse. Uptake 2 occurs at postsynaptic and peripheral cells to prevent the neurotransmitter from diffusing laterally. There is also enzymatic degradation of the catecholamines by two main enzymes, monoamine oxidase and catechol-O-methyltransferase. Respectively, these enzymes oxidise monoamines (including catecholamines) and methylate the hydroxyl groups of the phenyl moiety of catecholamines. These enzymes can be targeted pharmacologically. Inhibitors of these enzymes act as indirect agonists of adrenergic receptors as they prolong the action of catecholamines at the receptors. Structure–activity relationship In general, a primary or secondary aliphatic amine separated by two carbons from a substituted benzene ring is minimally required for high agonist activity. Mechanisms A great number of drugs are available which can affect adrenergic receptors. Other drugs affect the uptake and storage mechanisms of adrenergic catecholamines, prolonging their action. 
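The receptor-to-pathway relationships summarized in the Receptors section can be restated as a small lookup table; this sketch is purely illustrative, and the identifiers are arbitrary rather than any standard pharmacology nomenclature.

```python
# Downstream signalling of the adrenergic receptor classes, as summarized in the text.
ADRENERGIC_PATHWAYS = {
    "alpha1": {"g_protein": "Gq", "effector": "phospholipase C",   "downstream": "PKC activity increases"},
    "alpha2": {"g_protein": "Gi", "effector": "adenylate cyclase", "downstream": "PKA activity decreases"},
    "beta":   {"g_protein": "Gs", "effector": "adenylate cyclase", "downstream": "PKA activity increases"},
}

def describe(receptor_class: str) -> str:
    p = ADRENERGIC_PATHWAYS[receptor_class]
    return (f"{receptor_class}: couples to {p['g_protein']}, acts on {p['effector']}; "
            f"{p['downstream']} when an agonist binds")

for receptor in ADRENERGIC_PATHWAYS:
    print(describe(receptor))
```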
The following headings provide some useful examples to illustrate the various ways in which drugs can enhance the effects of adrenergic receptors. Direct action These drugs act directly on one or more adrenergic receptors. According to receptor selectivity, they are of two types: Non-selective: drugs that act on more than one receptor; these are: Adrenaline (almost all adrenergic receptors). Noradrenaline (acts on α1, α2, β1). Isoprenaline (acts on β1, β2, β3). Dopamine (acts on α1, α2, β1, D1, D2). Selective: drugs which act on a single receptor only; these are further classified into α-selective and β-selective. α1 selective: phenylephrine, methoxamine, midodrine, oxymetazoline. α2 selective: α-methyldopa, clonidine, brimonidine, dexmedetomidine, guanfacine. β1 selective: dobutamine. β2 selective: salbutamol/albuterol, terbutaline, salmeterol, formoterol, pirbuterol, clenbuterol. Indirect action These are agents that increase neurotransmission by the endogenous chemicals epinephrine and norepinephrine. The most common mechanisms of action include competitive and non-competitive reuptake inhibition and the action of releasing agents. Examples include methylphenidate, atomoxetine, cocaine, and some amphetamine-based stimulants such as 4-hydroxyamphetamine. Mixed action Ephedrine Pseudoephedrine Precursors/Prodrugs Droxidopa or L-DOPS L-DOPA See also Adrenergic receptor Alpha adrenergic agonist List of adrenergic drugs References External links Virtual Chembook article on adrenergic drugs Endocrine system
Adrenergic agonist
[ "Biology" ]
1,266
[ "Organ systems", "Endocrine system" ]
1,547,308
https://en.wikipedia.org/wiki/Subjunctive%20possibility
Subjunctive possibility (also called alethic possibility) is a form of modality studied in modal logic. Subjunctive possibilities are the sorts of possibilities considered when conceiving counterfactual situations; subjunctive modalities are modalities that bear on whether a statement might have been or could be true—such as might, could, must, possibly, necessarily, contingently, essentially, accidentally, and so on. Subjunctive possibilities include logical possibility, metaphysical possibility, nomological possibility, and temporal possibility. Subjunctive possibility and other modalities Subjunctive possibility is contrasted with (among other things) epistemic possibility (which deals with how the world may be, for all we know) and deontic possibility (which deals with how the world ought to be). Epistemic possibility The contrast with epistemic possibility is especially important to draw, since in ordinary language the same phrases ("it's possible," "it can't be", "it must be") are often used to express either sort of possibility. But they are not the same. We do not know whether Goldbach's conjecture is true or not (no-one has come up with a proof yet); so it is (epistemically) possible that it is true and it is (epistemically) possible that it is false. But if it is, in fact, provably true (as it may be, for all we know), then it would have to be (subjunctively) necessarily true; what being provable means is that it would not be (logically) possible for it to be false. Similarly, it might not be at all (epistemically) possible that it is raining outside—we might know beyond a shadow of a doubt that it is not—but that would hardly mean that it is (subjunctively) impossible for it to rain outside. This point is also made by Norman Swartz and Raymond Bradley. Deontic possibility There is some overlap in language between subjunctive possibilities and deontic possibilities: for example, we sometimes use the statement "You can/cannot do that" to express (i) what it is or is not subjunctively possible for you to do, and we sometimes use it to express (ii) what it would or would not be right for you to do. The two are less likely to be confused in ordinary language than subjunctive and epistemic possibility, as there are some important differences in the logic of subjunctive modalities and deontic modalities. In particular, subjunctive necessity entails truth: if people logically must do such and such, then you can infer that they actually do it. But in this non-ideal world, a deontic ‘must’ carries no such guarantee: that people morally must do such and such does not mean that they actually do it. Types of subjunctive possibility There are several different types of subjunctive modality, which can be classified as broader or narrower than one another depending on how restrictive the rules for what counts as "possible" are. Some of the most commonly discussed are: Logical possibility is usually considered the broadest sort of possibility; a proposition is said to be logically possible if there is no logical contradiction involved in its being true. "Dick Cheney is a bachelor" is logically possible, though in fact false; most philosophers have thought that statements like "If I flap my arms very hard, I will fly" are logically possible, although they are nomologically impossible. 
"Dick Cheney is a married bachelor," on the other hand, is logically impossible; anyone who is a bachelor is therefore not married, so this proposition is logically self-contradictory (though the sentence isn't, because it is logically possible for "bachelor" to mean "married man"). Metaphysical possibility is either equivalent to logical possibility or narrower than it (what a philosopher thinks the relationship between the two is depends, in part, on the philosopher's view of logic). Some philosophers have held that discovered identities such as Kripke's "Water is H2O" are metaphysically necessary but not logically necessary (they would claim that there is no formal contradiction involved in "Water is not H2O" even though it turns out to be metaphysically impossible). Nomological possibility is possibility under the actual laws of nature. Most philosophers since David Hume have held that the laws of nature are metaphysically contingent—that there could have been different natural laws than the ones that actually obtain. If so, then it would not be logically or metaphysically impossible, for example, for you to travel to Alpha Centauri in one day; it would just have to be the case that you could travel faster than the speed of light. But of course there is an important sense in which this is not possible; given that the laws of nature are what they are, there is no way that you could do it. (Some philosophers, such as Sydney Shoemaker , have argued that the laws of nature are in fact necessary, not contingent; if so, then nomological possibility is equivalent to metaphysical possibility.) Temporal possibility is possibility given the actual history of the world. David Lewis could have chosen to take his degree in Accounting rather than Philosophy; but there is an important sense in which he cannot now. The "could have" expresses the fact that there is no logical, metaphysical, or even nomological impossibility involved in Lewis's having a degree in Economics instead of Philosophy; the "cannot now" expresses the fact that that possibility is no longer open to becoming actual, given that the past is as it actually is. Similarly David Lewis could have taken a degree in Economics but not in, say, Aviation (because it was not taught at Harvard) or Cognitive Neuroscience (because the so-called 'conceptual space' for such a major did not exist). There is some debate whether this final type of possibility in fact constitutes a type of possibility distinct from Temporal, and is sometimes called Historical Possibility by thinkers like Ian Hacking. References Modal logic Possibility Concepts in metaphysics Concepts in logic
Subjunctive possibility
[ "Mathematics" ]
1,279
[ "Mathematical logic", "Modal logic" ]
1,547,359
https://en.wikipedia.org/wiki/Injective%20sheaf
In mathematics, injective sheaves of abelian groups are used to construct the resolutions needed to define sheaf cohomology (and other derived functors, such as sheaf Ext). There is a further group of related concepts applied to sheaves: flabby (flasque in French), fine, soft (mou in French), acyclic. In the history of the subject they were introduced before the 1957 "Tohoku paper" of Alexander Grothendieck, which showed that the abelian category notion of injective object sufficed to found the theory. The other classes of sheaves are historically older notions. The abstract framework for defining cohomology and derived functors does not need them. However, in most concrete situations, resolutions by acyclic sheaves are often easier to construct. Acyclic sheaves therefore serve computational purposes, for example in the Leray spectral sequence. Injective sheaves An injective sheaf J is a sheaf that is an injective object of the category of abelian sheaves; in other words, homomorphisms from a sheaf A to J can always be extended to any sheaf B containing A. The category of abelian sheaves has enough injective objects: this means that any sheaf is a subsheaf of an injective sheaf. This result of Grothendieck follows from the existence of a generator of the category (it can be written down explicitly, and is related to the subobject classifier). This is enough to show that right derived functors of any left exact functor exist and are unique up to canonical isomorphism. For technical purposes, injective sheaves are usually superior to the other classes of sheaves mentioned above: they can do almost anything the other classes can do, and their theory is simpler and more general. In fact, injective sheaves are flabby (flasque), soft, and acyclic. However, there are situations where the other classes of sheaves occur naturally, and this is especially true in concrete computational situations. The dual concept, projective sheaves, is not used much, because in a general category of sheaves there are not enough of them: not every sheaf is the quotient of a projective sheaf, and in particular projective resolutions do not always exist. This is the case, for example, when looking at the category of sheaves on projective space in the Zariski topology. This causes problems when attempting to define left derived functors of a right exact functor (such as Tor). This can sometimes be done by ad hoc means: for example, the left derived functors of Tor can be defined using a flat resolution rather than a projective one, but it takes some work to show that this is independent of the resolution. Not all categories of sheaves run into this problem; for instance, the category of sheaves on an affine scheme contains enough projectives. Acyclic sheaves An acyclic sheaf over X is one such that all higher sheaf cohomology groups vanish. The cohomology groups of any sheaf can be calculated from any acyclic resolution of it (this goes by the name of the de Rham–Weil theorem). Fine sheaves A fine sheaf over X is one with "partitions of unity"; more precisely, for any open cover of the space X we can find a family of homomorphisms from the sheaf to itself with sum 1 such that each homomorphism is 0 outside some element of the open cover. Fine sheaves are usually only used over paracompact Hausdorff spaces X. Typical examples are the sheaf of germs of continuous real-valued functions over such a space, or smooth functions over a smooth (paracompact Hausdorff) manifold, or modules over these sheaves of rings. 
Also, fine sheaves over paracompact Hausdorff spaces are soft and acyclic. One can find a resolution of a sheaf on a smooth manifold by fine sheaves using the Alexander–Spanier resolution. As an application, consider a real manifold X of dimension n. There is the following resolution of the constant sheaf ℝ by the fine sheaves of (smooth) differential forms: 0 → ℝ → Ω^0 → Ω^1 → ⋯ → Ω^n → 0. This is a resolution, i.e. an exact complex of sheaves, by the Poincaré lemma. The cohomology of X with values in ℝ can thus be computed as the cohomology of the complex of globally defined differential forms: H^k(X, ℝ) ≅ H^k(Ω^•(X), d), the de Rham cohomology of X. Soft sheaves A soft sheaf over X is one such that any section over any closed subset of X can be extended to a global section. Soft sheaves are acyclic over paracompact Hausdorff spaces. Flasque or flabby sheaves A flasque sheaf (also called a flabby sheaf) is a sheaf F with the following property: if X is the base topological space on which the sheaf is defined and U ⊆ V are open subsets, then the restriction map F(V) → F(U) is surjective, as a map of groups (rings, modules, etc.). Flasque sheaves are useful because (by definition) their sections extend. This means that they are some of the simplest sheaves to handle in terms of homological algebra. Any sheaf has a canonical embedding into the flasque sheaf of all possibly discontinuous sections of the étalé space, and by repeating this we can find a canonical flasque resolution for any sheaf. Flasque resolutions, that is, resolutions by means of flasque sheaves, are one approach to defining sheaf cohomology. Flasque sheaves are soft and acyclic. Flasque is a French word that has sometimes been translated into English as flabby. References "Sheaf cohomology and injective resolutions" on MathOverflow Algebraic geometry Homological algebra Sheaf theory
Injective sheaf
[ "Mathematics" ]
1,232
[ "Mathematical structures", "Fields of abstract algebra", "Sheaf theory", "Category theory", "Topology", "Algebraic geometry", "Homological algebra" ]
1,547,360
https://en.wikipedia.org/wiki/Villarceau%20circles
In geometry, Villarceau circles are a pair of circles produced by cutting a torus obliquely through its center at a special angle. Given an arbitrary point on a torus, four circles can be drawn through it. One is in a plane parallel to the equatorial plane of the torus and another perpendicular to that plane (these are analogous to lines of latitude and longitude on the Earth). The other two are Villarceau circles. They are obtained as the intersection of the torus with a plane that passes through the center of the torus and touches it tangentially at two antipodal points. If one considers all these planes, one obtains two families of circles on the torus. Each of these families consists of disjoint circles that cover each point of the torus exactly once and thus forms a 1-dimensional foliation of the torus. The Villarceau circles are named after the French astronomer and mathematician Yvon Villarceau (1813–1883), who wrote about them in 1848. Example Consider a horizontal torus in xyz space, centered at the origin and with major radius 5 and minor radius 3. That means that the torus is the locus of some vertical circles of radius three whose centers are on a circle of radius five in the horizontal xy plane. Points on this torus satisfy this equation: (√(x² + y²) − 5)² + z² = 3². Slicing with the z = 0 plane produces two concentric circles, x² + y² = 2² and x² + y² = 8², the outer and inner equator. Slicing with the x = 0 plane produces two side-by-side circles, (y − 5)² + z² = 3² and (y + 5)² + z² = 3². Two example Villarceau circles can be produced by slicing with the plane 3y = 4z. One is centered at (+3, 0, 0) and the other at (−3, 0, 0); both have radius five. They can be written in parametric form as (x, y, z) = (3 + 5 cos t, 4 sin t, 3 sin t) and (x, y, z) = (−3 + 5 cos t, 4 sin t, 3 sin t). The slicing plane is chosen to be tangent to the torus at two points while passing through its center. It is tangent at (0, 16/5, 12/5) and at (0, −16/5, −12/5). The angle of slicing is uniquely determined by the dimensions of the chosen torus. Rotating any one such plane around the z-axis gives all of the Villarceau circles for that torus. Existence and equations A proof of the circles’ existence can be constructed from the fact that the slicing plane is tangent to the torus at two points. One characterization of a torus is that it is a surface of revolution. Without loss of generality, choose a coordinate system so that the axis of revolution is the z axis. Begin with a circle of radius r in the yz plane, centered at (0, R, 0): x = 0, (y − R)² + z² = r². Sweeping this circle around the z-axis replaces y by (x² + y²)^(1/2), and clearing the square root produces a quartic equation for the torus: (x² + y² + z² + R² − r²)² = 4R²(x² + y²). The cross-section of the swept surface in the yz plane now includes a second circle, with equation (y + R)² + z² = r². This pair of circles has two common internal tangent lines, with slope at the origin found from the right triangle with hypotenuse R and opposite side r (which has its right angle at the point of tangency). Thus, on these tangent lines, z/y equals ±r / (R² − r²)^(1/2), and choosing the plus sign produces the equation of a plane bitangent to the torus: z (R² − r²)^(1/2) = r y. We can calculate the intersection of this plane with the torus analytically, and thus show that the result is a symmetric pair of circles of radius R centered at (x, y, z) = (r, 0, 0) and (−r, 0, 0). A parametric description of these circles is (x, y, z) = (±r + R cos t, (R² − r²)^(1/2) sin t, r sin t). These circles can also be obtained by starting with a circle of radius R in the xy-plane, centered at (r,0,0) or (-r,0,0), and then rotating this circle about the x-axis by an angle of arcsin(r/R). A treatment along these lines can be found in Coxeter (1969). 
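As a quick numerical check of the example and the parametric description above, the sketch below samples points on both Villarceau circles of the R = 5, r = 3 torus and verifies that they satisfy the torus equation; it is illustrative only.

```python
import math

R, r = 5.0, 3.0  # major and minor radii of the example torus

def on_torus(x, y, z, tol=1e-9):
    """Check (sqrt(x^2 + y^2) - R)^2 + z^2 = r^2 up to rounding error."""
    return abs((math.hypot(x, y) - R) ** 2 + z ** 2 - r ** 2) < tol

def villarceau_point(t, sign=1):
    """Point on one of the two Villarceau circles: center (sign*r, 0, 0), radius R."""
    return (sign * r + R * math.cos(t),
            math.sqrt(R * R - r * r) * math.sin(t),
            r * math.sin(t))

for sign in (1, -1):
    assert all(on_torus(*villarceau_point(2 * math.pi * k / 100, sign)) for k in range(100))
print("all sampled points lie on the torus")
```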
A more abstract — and more flexible — approach was described by Hirsch (2002), using algebraic geometry in a projective setting. In the homogeneous quartic equation for the torus, setting w to zero gives the intersection with the “plane at infinity”, and reduces the equation to (x² + y² + z²)² = 0. This intersection is a double point, in fact a double point counted twice. Furthermore, it is included in every bitangent plane. The two points of tangency are also double points. Thus the intersection curve, which theory says must be a quartic, contains four double points. But we also know that a quartic with more than three double points must factor (it cannot be irreducible), and by symmetry the factors must be two congruent conics, which are the two Villarceau circles. Hirsch extends this argument to any surface of revolution generated by a conic, and shows that intersection with a bitangent plane must produce two conics of the same type as the generator when the intersection curve is real. Filling space and the Hopf fibration The torus plays a central role in the Hopf fibration of the 3-sphere, S³, over the ordinary sphere, S², which has circles, S¹, as fibers. When the 3-sphere is mapped to Euclidean 3-space by stereographic projection, the inverse image of a circle of latitude on S² under the fiber map is a torus, and the fibers themselves are Villarceau circles. Banchoff has explored such a torus with computer graphics imagery. One of the unusual facts about the circles making up the Hopf fibration is that each links through all the others, not just through the circles in its own torus but through the circles making up all the tori filling all of space; Berger has a discussion and drawing. Further properties Mannheim (1903) showed that the Villarceau circles meet all of the parallel circular cross-sections of the torus at the same angle, a result that he said a Colonel Schoelcher had presented at a congress in 1891. See also Hopf fibration Toric section Vesica piscis Citations References External links Flat Torus in the Three-Sphere The circles of the torus (Les cercles du tore) Circles Toric sections Fiber bundles
Villarceau circles
[ "Mathematics" ]
1,343
[ "Circles", "Pi" ]
1,547,519
https://en.wikipedia.org/wiki/Acierage
Metal plating
Acierage
[ "Chemistry" ]
5
[ "Metallurgical processes", "Coatings", "Metal plating" ]
1,547,778
https://en.wikipedia.org/wiki/Identity%20by%20descent
A DNA segment is identical by state (IBS) in two or more individuals if they have identical nucleotide sequences in this segment. An IBS segment is identical by descent (IBD) in two or more individuals if they have inherited it from a common ancestor without recombination, that is, the segment has the same ancestral origin in these individuals. DNA segments that are IBD are IBS by definition, but segments that are not IBD can still be IBS due to the same mutations in different individuals or recombinations that do not alter the segment. Theory All individuals in a finite population are related if traced back long enough and will, therefore, share segments of their genomes IBD. During meiosis, segments of IBD are broken up by recombination. Therefore, the expected length of an IBD segment depends on the number of generations since the most recent common ancestor at the locus of the segment. The length of IBD segments that result from a common ancestor n generations in the past (therefore involving 2n meioses) is exponentially distributed with mean 1/(2n) Morgans (M). The expected number of IBD segments decreases with the number of generations since the common ancestor at this locus. For a specific DNA segment, the probability of being IBD decreases as 2^(−2n), since in each meiosis the probability of transmitting this segment is 1/2. Applications Identified IBD segments can be used for a wide range of purposes. As noted above, the amount (length and number) of IBD sharing depends on the familial relationships between the tested individuals. Therefore, one application of IBD segment detection is to quantify relatedness. Measurement of relatedness can be used in forensic genetics, but can also increase information in genetic linkage mapping and help to decrease bias from undocumented relationships in standard association studies. Another application of IBD is genotype imputation and haplotype phase inference. Long shared segments of IBD that are broken up by short regions may be indicative of phasing errors. IBD mapping IBD mapping is similar to linkage analysis, but can be performed without a known pedigree on a cohort of unrelated individuals. IBD mapping can be seen as a new form of association analysis that increases the power to map genes or genomic regions containing multiple rare disease susceptibility variants. Using simulated data, Browning and Thompson showed that IBD mapping has higher power than association testing when multiple rare variants within a gene contribute to disease susceptibility. Via IBD mapping, genome-wide significant regions in isolated populations as well as outbred populations were found where standard association tests failed. Houwen et al. used IBD sharing to identify the chromosomal location of a gene responsible for benign recurrent intrahepatic cholestasis in an isolated fishing population. Kenny et al. also used an isolated population to fine-map a signal found by a genome-wide association study (GWAS) of plasma plant sterol (PPS) levels, a surrogate measure of cholesterol absorption from the intestine. Francks et al. were able to identify a potential susceptibility locus for schizophrenia and bipolar disorder with genotype data of case-control samples. Lin et al. found a genome-wide significant linkage signal in a dataset of multiple sclerosis patients. Letouzé et al. used IBD mapping to look for founder mutations in cancer samples. IBD in population genetics Detection of natural selection in the human genome is also possible via detected IBD segments. 
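Before turning to selection, the quantitative relationships stated in the Theory section can be made concrete with a short sketch; the generation counts below are illustrative values, not figures from the article.

```python
# Expected IBD segment length and per-segment sharing probability as functions of n,
# using the relationships from the Theory section: segment length from a common
# ancestor n generations back is exponential with mean 1/(2n) Morgans, and the
# probability that a specific segment is transmitted IBD scales as 2^(-2n).
for n in (1, 5, 10, 25):
    mean_length_cm = 100 * (1 / (2 * n))   # 1 Morgan = 100 centimorgans
    p_specific_segment = 2 ** (-2 * n)
    print(f"n={n:2d} generations: mean segment length ≈ {mean_length_cm:5.1f} cM, "
          f"P(specific segment IBD) ≈ {p_specific_segment:.2e}")
```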
Selection will usually tend to increase the number of IBD segments among individuals in a population. By scanning for regions with excess IBD sharing, regions in the human genome that have been under strong, very recent selection can be identified. In addition to that, IBD segments can be useful for measuring and identifying other influences on population structure. Gusev et al. showed that IBD segments can be used with additional modeling to estimate demographic history, including bottlenecks and admixture. Using similar models, Palamara et al. and Carmi et al. reconstructed the demographic history of Ashkenazi Jewish and Kenyan Maasai individuals. Botigué et al. investigated differences in African ancestry among European populations. Ralph and Coop used IBD detection to quantify the common ancestry of different European populations, and Gravel et al. similarly tried to draw conclusions about the genetic history of populations in the Americas. Ringbauer et al. utilized the geographic structure of IBD segments to estimate dispersal within Eastern Europe during the last centuries. Using the 1000 Genomes data, Hochreiter found differences in IBD sharing between African, Asian and European populations, as well as IBD segments that are shared with ancient genomes such as the Neanderthal or Denisovan genomes. Methods and software Programs for the detection of IBD segments in unrelated individuals: RAPID: Ultra-fast Identity by Descent Detection in Biobank-Scale Cohorts using Positional Burrows–Wheeler Transform; Parente: identifies IBD segments between pairs of individuals in unphased genotype data; BEAGLE/fastIBD: finds segments of IBD between pairs of individuals in genome-wide SNP data; BEAGLE/RefinedIBD: finds IBD segments in pairs of individuals using a hashing method and evaluates their significance via a likelihood ratio; IBDseq: detects pairwise IBD segments in sequencing data; GERMLINE: discovers IBD segments in pairs of individuals in linear time; DASH: builds upon pairwise IBD segments to infer clusters of individuals likely to be sharing a single haplotype; PLINK: a tool set for whole-genome association and population-based linkage analyses, including a method for pairwise IBD segment detection; Relate: estimates the probability of IBD between pairs of individuals at a specific locus using SNPs; MCMC_IBDfinder: based on Markov chain Monte Carlo (MCMC) for finding IBD segments in multiple individuals; IBD-Groupon: detects group-wise IBD segments based on pairwise IBD relationships; HapFABIA: identifies very short IBD segments characterized by rare variants in large sequencing data simultaneously in multiple individuals. See also Association mapping Genetic association Genetic linkage Genome-wide association study Identity by type Linkage disequilibrium Population genetics References Classical genetics Population genetics Human genome projects
Identity by descent
[ "Biology" ]
1,303
[ "Human genome projects", "Genome projects" ]
1,547,826
https://en.wikipedia.org/wiki/Mother%20Nature
Mother Nature (sometimes known as Mother Earth or the Earth Mother) is a personification of nature that focuses on the life-giving and nurturing aspects of nature by embodying it, in the form of a mother or mother goddess. European concept traditions Greek concept The Mycenaean Greek: Ma-ka (transliterated as ma-ga), "Mother Gaia", written in Linear B syllabic script (13th or 12th century BC), is the earliest known instance of the concept of earth as a mother. Greek myth of the seasons In Greek mythology, Persephone, daughter of Demeter (goddess of the harvest), was abducted by Hades (god of the dead), and taken to the underworld as his queen. The myth goes on to describe Demeter as so distraught that no crops would grow and the "entire human race [would] have perished of cruel, biting hunger if Zeus had not been concerned" (Larousse 152). According to myth, Zeus forced Hades to return Persephone to her mother, but while in the underworld, Persephone had eaten pomegranate seeds, the food of the dead and thus, she must then spend part of each year with Hades in the underworld. The myth continues that Demeter's grief for her daughter in the realm of the dead, was reflected in the barren winter months and her joy when Persephone returned was reflected in the bountiful summer months Ancient Rome Roman Epicurean poet Lucretius opened his didactic poem De rerum natura by addressing Venus as a veritable mother of nature. Lucretius used Venus as "a personified symbol for the generative aspect of nature". This largely had to do with the nature of Lucretius' work, which presents a nontheistic understanding of the world that eschewed superstition. Post-classical concept The pre-Socratic philosophers abstracted the entirety of phenomena of the world as singular: physis, and this was inherited by Aristotle. The word "nature" comes from the Latin word, "natura", meaning birth or character [see nature (philosophy)]. In English, its first recorded use (in the sense of the entirety of the phenomena of the world) was in 1266. "Natura" and the personification of Mother Nature were widely popular in the Middle Ages. As a concept, seated between the properly divine and the human, it can be traced to Ancient Greece, though Earth (or "Eorthe" in the Old English period) may have been personified as a goddess. The Norse also had a goddess called Jörð (Jord, or Erth). Medieval Christian thinkers did not see nature as inclusive of everything, but thought that it had been created by God; earth lay below the unchanging heavens and moon. Nature lay somewhere in the center, with agents above her (angels), and below her (demons and hell). Therefore mother nature became only a personification, not a goddess. Basque mythology Amalur (sometimes Ama Lur or Ama Lurra) was believed to be the goddess of the earth in the religion of the ancient Basque people. She was described as the mother of Ekhi, the sun, and Ilazki, the moon. Her name meant "mother earth" or "mother land"; the 1968 Basque documentary Ama lur was a celebration of the Basque countryside. Indigenous peoples of America Algonquian legend says that "beneath the clouds lives the Earth-Mother from whom is derived the Water of Life, who at her bosom feeds plants, animals and human" (Larousse 428). She is otherwise known as Nokomis, the Grandmother. In Inca mythology, Mama Pacha or Pachamama was a fertility goddess who presided over planting and harvesting. 
Pachamama is usually translated as "Mother Earth" but a more literal translation would be "Mother Universe" (in Aymara and Quechua mama = mother / pacha = world, space-time or the universe). It was believed that Pachamama and her husband, Inti, were the most benevolent deities and were worshiped in parts of the Andean mountain ranges (stretching from present day Ecuador to Chile and Argentina). In her book Coateteleco, pueblo indígena de pescadores ("Coatetelco, indigenous fishing town", Cuernavaca, Morelos: Vettoretti, 2015), Teódula Alemán Cleto states, En nuestra cultura prehispánica el respeto y la fe a nuestra madre naturaleza fueron primordiales para vivir en plena armonía como seres humanos. ("In our [Mexican] prehispanic culture, respect and faith in our Mother Nature [emphasis added] were paramount to living in full harmony as human beings.") Southeast Asia In the Mainland Southeast Asian countries of Cambodia, Laos and Thailand, earth (terra firma) is personified as Phra Mae Thorani, but is believed that her role in Buddhist mythology differs considerably from that of Mother Nature. In the Malay Archipelago, that role has been filled by Dewi Sri, The Rice-mother in the East Indies. Popular culture In the early 1970s, a television ad featured character actress Dena Dietrich as Mother Nature. Vexed by an off-screen narrator who informs her she has mistaken Chiffon margarine for butter, she responded with the trademarked slogan "It's not nice to fool Mother Nature" (underscored by thunder and lightning). Mother Nature is featured in The Year Without a Santa Claus voiced by Rhoda Mann. This version was the mother of Heat Miser and Snow Miser. When Mrs. Claus is unable to get them to compromise on a deal regarding snow in Southtown and a brief warm-up at the North Pole, she goes to Mother Nature for help. Mother Nature intimidates her children to doing as Mrs. Claus asks from them. Mother Nature appears in the live action remake of The Year Without a Santa Claus, portrayed by Carol Kane. Mother Nature appears in the 2008 sequel A Miser Brothers' Christmas voiced by Patricia Hamilton. Besides Heat Miser and Snow Miser, she is also shown to be the mother of Earthquake, Thunder and Lightning, the Tides, and North Wind. In the story after Santa Claus gets injured during one of the Miser Brothers' feuds (with some part of North Wind's henchmen secretly sabotaging Santa's new sleigh), she and Mrs. Claus make the Miser Brothers work at Santa's workshop to make it up to Santa Claus. Mother Nature appeared as a recurring character in The Smurfs voiced by June Foray. She resides in a cottage in the Smurfs' forest. Mother Nature is often mentioned in the Garfield comic strip. Mother Earth appears in The Earth Day Special, portrayed by Bette Midler. In the story when she falls from the sky and faints due to the problems with nature, she is rushed to the hospital where she is tended to by Doogie Howser and other doctors. Mother Nature was featured in Happily Ever After, voiced by Phyllis Diller. She was depicted as the most powerful force of good in the movie, having complete control over nature, as well as the ability to create creatures from potions she made in her sanctuary. Mother Nature is a recurring character in The New Woody Woodpecker Show, voiced by B. J. Ward. She was depicted as a fairy who often makes sure that Woody Woodpecker is doing his part in nature. 
Mother Nature was a supporting character in The Santa Clause 2 and The Santa Clause 3: The Escape Clause, portrayed by Aisha Tyler. She was shown as the head leader of the Council of Legendary Figures (which also consists of Santa Claus, the Easter Bunny, Cupid, Father Time, the Sandman, the Tooth Fairy, and Jack Frost). Mother Nature was featured in John Hancock written by Bo Bissett. She was referred to as Tara, a tribute to her name in Roman Mythology which was Terra or Terra Mater. Mother Nature was a recurring character featured in Stargate SG-1 where she was portrayed as an ascended Ancient called Oma Desala. The animated film Epic featured a character named Queen Tara (voiced by Beyoncé Knowles) who was a Mother Nature-like being. Mother Nature was a character in the Guardians of Childhood series by William Joyce. The long-lost daughter of the Boogieman Pitch, she was a young woman who could control phenomenons of nature. She stayed hidden while she watched the world. Her character was expanded in the latest book The Sandman and the War of Dreams. Mother Nature appeared in a major recurring role in the seventh season of Once Upon a Time. Mother Nature was a title for the leader of the dryads. In the story, the previous Mother Nature was Mother Flora (portrayed by Gabrielle Miller). Following the death of Mother Flora at the hands of some humans, Gothel (portrayed by Emma Booth) became the next Mother Nature. Jamie Lee Curtis co-wrote a graphic novel called "Mother Nature". See also Amalur Atabey (goddess) Ecofeminism Father Time Gaia hypothesis Jörð Mother goddess Pantheism Prakṛti References External links Allegory Comparative mythology Personifications of nature Symbols Earth goddesses
Mother Nature
[ "Mathematics" ]
1,931
[ "Symbols" ]
1,547,915
https://en.wikipedia.org/wiki/Combo%20%28video%20games%29
In video games, a combo (short for combination) is a set of actions performed in sequence, usually with strict timing limitations, that yields a significant benefit or advantage. The term originates from fighting games where it is based upon the concept of a striking combination. It has since been applied more generally to a wide variety of genres, such as puzzle games, shoot 'em ups, and sports games. Combos are commonly used as an essential gameplay element, but can also serve as a high score or attack power modifier, or simply as a way to exhibit a flamboyant playing style. In fighting games, combo specifically indicates a timed sequence of moves which produce a cohesive series of hits, each of which leaves the opponent unable to block. History John Szczepaniak of Hardcore Gaming 101 considers Data East's DECO Cassette System arcade title Flash Boy (1981), a scrolling action game based on the manga and anime series Astro Boy, to have a type of combo mechanic. When the player punches an enemy and it explodes, debris can destroy other enemies. The use of combo attacks originated with Technōs Japan's beat 'em up arcade games, Renegade in 1986 and Double Dragon in 1987. In contrast to earlier games that let players knock out enemies with a single blow, the opponents in Renegade and Double Dragon could take much more punishment, requiring a succession of punches, with the first hit temporarily immobilizing the enemy, making him unable to defend himself against successive punches. Combo attacks would later become more dynamic in Capcom's Final Fight, released in 1989. Fighting games The earliest known competitive fighting game that used a combo system was Culture Brain's Shanghai Kid in 1985; when the spiked speech balloon that reads "RUSH!" pops up during battle, the player has a chance to rhythmically perform a series of combos called "rush-attacking". The combo notion was reintroduced to competitive fighting games with Street Fighter II (1991) by Capcom, when skilled players learned that they could combine several attacks that left no time for the computer player to recover if they timed them correctly. Combos were a design accident; lead producer Noritaka Funamizu noticed that extra strikes were possible during a bug check on the car-smashing bonus stage. He thought that the timing required was too difficult to make it a useful game feature, but left it in as a hidden one. Combos have since become a design priority in almost all fighting games, and range from the simplistic to the highly intricate. The first game to count the hits of each combo, and reward the player for performing them, was Super Street Fighter II. Rhythm games In rhythm games, combo measures how many consecutive notes have received at least the second-worst judgment (i.e. any judgment other than the worst). Never receiving the worst judgment in the entire song is called a full combo or a no miss. Receiving the best judgment for all notes in the song is called a full perfect combo or an all perfect. Some rhythm games have an internal judgment that is tighter than the best judgment, e.g. Critical Perfect in Maimai or S-Critical in Sound Voltex. Receiving an internal judgment for all notes in a song is called a 理論値 (rironchi, literally "theoretical value", i.e. a theoretical-maximum score). Other uses Many other types of video games include a combo system involving chains of tricks or other maneuvers, usually in order to build up bonus points to obtain a high score. Examples include the Tony Hawk's Pro Skater series, the Crazy Taxi series, and Pizza Tower. 
The first game with score combos was Data East's 1981 DECO Cassette System arcade game Flash Boy. Combos are a main feature in many puzzle games, such as Columns, Snood and Magical Drop. Primarily they are used as a scoring device, but in the modes of play that are level-based, they are used to more quickly gain levels. Shoot 'em ups have increasingly incorporated combo systems, such as in Ikaruga, as have hack-and-slash games, such as Dynasty Warriors. See also Konami Code Fighting game terms at Wiktionary References Video game terminology
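The rhythm-games paragraph above defines a combo purely in terms of per-note judgments, so the bookkeeping can be shown in a few lines. This is an illustrative sketch only: the judgment labels are invented for the example and do not come from any particular game.

```python
# Judgment labels are hypothetical, ordered best ... worst.
WORST = "miss"

def max_combo(judgments):
    """Longest run of consecutive notes judged better than the worst judgment,
    matching the definition of combo in the rhythm-games paragraph above."""
    best_run = run = 0
    for j in judgments:
        run = run + 1 if j != WORST else 0
        best_run = max(best_run, run)
    return best_run

def is_full_combo(judgments):
    """True when the worst judgment never occurs in the whole song."""
    return all(j != WORST for j in judgments)

def is_all_perfect(judgments, best="perfect"):
    """True when every note receives the best judgment."""
    return all(j == best for j in judgments)

song = ["perfect", "great", "miss", "good", "perfect"]
print(max_combo(song))        # -> 2
print(is_full_combo(song))    # -> False
```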
Combo (video games)
[ "Technology" ]
835
[ "Computing terminology", "Video game terminology" ]
1,548,014
https://en.wikipedia.org/wiki/Mister%20Hyde%20%28Marvel%20Comics%29
Mister Hyde (Calvin Zabo) is a supervillain appearing in American comic books published by Marvel Comics. Created by writer Stan Lee and artist Don Heck, the character first appeared in Journey into Mystery #99 (December 1963). Calvin Zabo is a supervillain known under the codename of Mister Hyde. He is the father of the superhero Daisy Johnson. The character has also been a member of the Masters of Evil. Calvin Zabo appeared in the second season of the television series Agents of S.H.I.E.L.D., portrayed by Kyle MacLachlan. Development Concept and creation Calvin Zabo / Mister Hyde is inspired by Dr. Jekyll and Mr. Hyde from the 1886 Gothic novella Strange Case of Dr Jekyll and Mr Hyde written by Robert Louis Stevenson. Publication history Calvin Zabo debuted in Journey into Mystery #99 (December 1963), created by Stan Lee and Don Heck. He has appeared as a regular character in Thunderbolts since issue #157, and remained with the team after the title transitioned into Dark Avengers beginning with issue #176. Fictional character biography Calvin Zabo was born in Trenton, New Jersey and became a morally abject but brilliant biochemist who discovered the effects of hormones on human physiology. His favorite storybook was Stevenson's 1886 classic, Strange Case of Dr Jekyll and Mr Hyde. He convinced himself that the experiment Dr. Jekyll performed in the story could actually be accomplished and became obsessed with the idea of unleashing his full bestial nature in a superhuman form. However, he needed money to do this, so he robbed his various employers systematically. Though he was too intelligent to be caught, the medical community became suspicious due to his tendency of always being employed by organizations which were subsequently robbed. Zabo eventually sought work as a surgeon in the hospital where Donald Blake worked as a directing physician, yet Blake would not allow him that job due to his history. Zabo became so enraged that Blake would not give him the position, even though he did indeed intend to rob the organization, and swore revenge. He eventually became successful in creating his formula and turned himself into a massive, Hulk-like creature he called Mister Hyde, named after the character in the story. In this new form, Hyde found out he had immense strength, allowing him to crush cars and tear through steel as though it were made of cardboard. With his new superhuman powers he sought out Blake, whom he tried to kill by throwing him from a window, but Blake transformed into Thor by striking his cane on the wall and survived, with Thor claiming that he had saved Blake. Hyde, hearing of this on the radio, decided to eliminate Thor. He tried framing Thor for a bank robbery by using his vast strength to rip open a bank vault while disguised as Thor. While Blake and Jane Foster were out, Hyde met and kidnapped them at gunpoint. As Blake, he got tied up next to a bomb that would explode in 24 hours unless Hyde defused it. Hyde was then defeated by Thor while attempting to steal a Polaris submarine to roam the seas like a pirate. After Thor defeated Hyde, the authorities saw his brute strength and realized he must have impersonated Thor, but Thor forced himself to let him escape, as Jane thought Blake was still in danger. Hyde went into business as a full-time professional supervillain and teamed up with the Cobra to get revenge upon Thor, but they were both defeated, despite getting Thor's hammer from him briefly. 
With the Cobra, he was bailed out and employed by Loki to kidnap Jane Foster and they battled Thor again. Loki doubled their powers to try to help them. Loki showed Thor where Jane was being held. The house had many traps set up for Thor, and Jane was almost killed in an explosion. Thor was able to defeat both villains, and Hyde was caught by a ray which paralyzed him. Both Hyde and the Cobra were jailed and Jane's life was saved by an Asgardian formula Balder sent to Thor. Hyde and the Cobra escaped prison, but were eventually recaptured by Daredevil. They teamed with the Jester to get revenge, but were defeated again. With the Scorpion, Hyde then battled Captain America and the Falcon. Teaming with the Cobra again, Hyde attempted to acquire Cagliostro's serum. While serving in prison following this failed attempt, Hyde was ensnared by the mind-control power of the Purple Man, and forced to battle Daredevil in an arena alongside the Cobra, the Jester and the Gladiator. Tiring of their repeated failures, the Cobra elected to sever their partnership when he escaped from Ryker's Island, taking the time to taunt Hyde before leaving. For a long time, Hyde never forgave him for the slight. With Batroc the Leaper, Hyde later blackmailed New York City with a hijacked supertanker and attempted to destroy the entire city to kill the Cobra. Eventually, he was defeated by Captain America with Batroc's aid. Hyde again stalked the Cobra seeking revenge, and this time battled Spider-Man. He was imprisoned again, but escaped Ryker's Island and battled Spider-Man and the Black Cat during another attempt on the Cobra's life. Hyde later battled Daredevil again. Hyde later became a member of Baron Helmut Zemo's incarnation of the Masters of Evil and invaded Avengers Mansion along with them, torturing the Black Knight and Edwin Jarvis. With Goliath and the Wrecking Crew, he nearly killed Hercules, but was defeated by the Avengers. Hyde later attempted an escape from the Vault alongside Titania, Vibro, the Griffin, and the Armadillo, but was defeated and recaptured by the Captain. He eventually escaped from the Vault alongside the Wizard and others. Hyde was later defeated in single combat by the Cobra, who earned Hyde's respect as a result. Hyde later fought with The Professor and received head trauma that limited his ability to transform. He was subsequently caught by the police when he coincidentally checked into the same hotel as the one where the Daily Bugle staff were attending Robbie Robertson's retirement party, allowing Peter Parker to defeat him using an improvised costume. Shortly after, Hyde had several run-ins with the Ghost Rider in which he was defeated with the Penance Stare. Hyde briefly helped the symbiote-bonded Toxin track the Cobra after a prison escape by providing a piece of skin for the symbiote to track with. Zabo was being held in the Raft (the Ryker's Island Prison complex) 6 months after the events of "Avengers Disassembled." When a prison break was caused by Electro, Zabo emerged in his Hyde persona, fought with Daredevil, and was knocked unconscious by Luke Cage. At one point, Zabo was discovered by the Young Avengers to be selling a derivative of his Hyde formula on the street as one of the various illegal substances known as Mutant Growth Hormone. Zabo grafts abilities similar to the powers of Spider-Man to homeless teenagers. 
After Spider-Man revealed his identity during the "Civil War" storyline, Zabo sought to recreate the circumstances of Spider-Man's "birth", by taking in orphans off the street, imbuing them with spider-powers, and seeing whether or not the teenagers would give in to their darker impulses. During the ensuing battle with Spider-Man, Hyde pulled webbing off his face, taking his eyelids with it, and was hit in the face with hydrochloric acid, courtesy of one of his own Guinea pigs. It is stated by Spider-Man that he has been left blinded and had his face ruined as a result. Dr. Curt Connors was later seen aiding Spider-Man in a cure for Zabo, one of Zabo's test subjects. Hyde's daughter is Daisy Johnson, who is a member of S.H.I.E.L.D.; her mother was apparently a sex worker whose services Calvin Zabo frequented and the girl was put up for adoption after birth. Daisy subsequently manifested superpowers inherited from Zabo's mutated genetic code. The Hood hired him as part of his criminal organization to take advantage of the split in the superhero community caused by the Superhuman Registration Act. Later, he was seen with Cobra (who was now operating as King Cobra), Firebrand and the Mauler, who attacked Yellowjacket, the Constrictor and other Initiative staff and trainees. Hyde worked with Boomerang, Tiger Shark, and Whirlwind to manipulate Venom III into procuring Norman Osborn's fortunes. This was thwarted by Venom and Green Goblin as Osborn threw a bomb into Hyde's mouth, causing him to spit out blood. Osborn then warned Hyde and the other villains that if they ever cross him again, he will kill everyone that they ever loved before they are tortured to death. Hyde joins the Grim Reaper's new Lethal Legion, claiming embarrassment over Norman Osborn blowing a bomb up in his mouth. Hyde appears as a member of the Hood's crime syndicate during an attack on the New Avengers. Hyde was selected to be a part of the "beta team" of the Thunderbolts, alongside Boomerang, the Shocker, Gunna and Centurius. Later, Hyde began a drug operation in California where he came into conflict with Robbie Reyes after his car had some of Hyde's pills inside. Hyde's mercenaries chase Robbie down during the race to retrieve the car and the pills. Robbie is gunned down by the mercenaries when he mistakes them for police and they torch the scene. Robbie is revived as a demonic being called the Ghost Rider and defeats Zabo, becoming something of a local hero and urban legend. Having regrouped and refined his Hyde formula into new blue pills, Calvin Zabo gradually takes over the L.A. criminal underground with his "Blue Hyde Brigade", which includes Guero and his gang, longtime enemies of Robbie, calling themselves the "Blue Krüe." During the "Avengers: Standoff!" storyline, Hyde was an inmate of Pleasant Hill, a gated community established by S.H.I.E.L.D. Mister Hyde was knocked out by Warwolf. During the "Opening Salvo" part of the Secret Empire storyline, Hyde is recruited by Baron Helmut Zemo to join the Army of Evil. During HYDRA's takeover of the United States, Hyde is one of a few Army of Evil members not in a stasis pod and is shown leading a group of HYDRA soldiers to invade New Attilan and capture the Inhumans. He, alongside HYDRA's Avengers, catch his daughter Daisy and her team, the Secret Warriors. During interrogation, Daisy uses her powers to destroy the Helicarrier they are in, forcing Hyde to retreat. Following the "Gang War" storyline, Mister Hyde is seen as an inmate at Ravencroft. 
When nurse Shay Marken is feeding the detained inmates, Mister Hyde claims that he is in his Calvin Zabo form, which does not fool Nurse Marken. Mister Hyde then vows to rip Nurse Marken apart as she states that she is now addressing Mister Hyde. In Immortal Thor, Hyde allies with Grey Gargoyle, King Cobra, and Radioactive Man. After Grey Gargoyle petrifies Thor, Mister Hyde shatters his body. However, Thor regenerates with the Enchantress' help. With further help from the Enchantress, Sif, and Magni of Earth-3515, Thor defeats the villains, who are taken to Daedalus LLC, a subsidiary of Roxxon. Powers and abilities The transformation of Calvin Zabo into his Mister Hyde persona is driven by growth hormones triggered by ingestion of a chemical formula. As his body adjusted to its new form, Hyde's strength, stamina, durability, and healing were all boosted to uncommon levels. Hyde's powers are great enough that he can stand up and face Joe Fixit in a fight. He was shown tearing apart an armored car door with ease. Through further experimental procedures over the years, his abilities have been increased beyond their original limits. Zabo must consume his special serum periodically in order to change from one identity to the other. However, mental stress or pain could impair this transformation into Hyde. He employs a wristwatch-like device supplied with the formula that injects itself directly into his bloodstream, thus enabling him to transform himself at the push of a button. Due to the nature of these transformations, Hyde's skin is warped. This gives his face a distorted look reminiscent of Lon Chaney's make-up used in The Phantom of the Opera. Zabo is also an intelligent research scientist with a Ph.D. in medicine and biochemistry. When assuming his Hyde form, he loses those skills. Reception Marc Buxton of Den of Geek ranked Mister Hyde 15th in their "Marvel's 31 Best Monsters" list and called him a "monstrous force worthy of his classic monster namesake." Other versions Age of Apocalypse In the timeline of the Age of Apocalypse storyline, Mister Hyde (as well as the Cobra) is a near-feral, cannibalistic "scavenger". He is known to prowl graveyards and attack anyone entering his territory. Elseworlds Mister Hyde appeared in the Elseworlds crossover comic book Daredevil/Batman: Eye for an Eye. Two-Face partnered with Hyde for a series of technological robberies. In truth, Two-Face had implanted Hyde's brain with the material needed to "grow" an experimental "organic" computer chip and fed Hyde pills to keep him enraged. Once grown, the chip would kill Hyde, its growth also weakening Hyde's strength as his energy is diverted to support the chip (Batman noting during the fight that Hyde should normally have a punch that could knock Superman into orbit). Hyde berates Two-Face, proud that he has abandoned his own past as Zabo, and insults Two-Face for hanging onto his Harvey Dent side, as well as for using a coin to decide between right and wrong. Two-Face is glad the process will kill Hyde. In the end, Daredevil uses his past friendship with Dent to talk Two-Face into supplying the antidote for the chip, which saves Hyde's life. House of M Mister Hyde appears as a member of the Hood's Masters of Evil. Before the Red Guard attacks Santo Rico, Hyde leaves the team alongside the Cobra, Crossbones, and Thunderball. Hyde was later seen as an Army scientist. Marvel Zombies A zombified Mister Hyde appears in Marvel Zombies 4. 
He is seen attacking the new Midnight Sons, trying to bite one of them, but he is quickly killed by the Man-Thing when he rips the zombie Hyde apart and then, holding a huge boulder, drops it down on him, crushing the zombie Hyde to death instantly. Thor: The Mighty Avenger Mister Hyde is the antagonist of the first two issues of this alternate universe retelling of Thor's origin. Thor, confused and partially amnesiac, stops Hyde from hassling an innocent woman. This drives Hyde into an obsession with Thor's new friend, a museum employee named Jane Foster. In other media Television Calvin Zabo / Mister Hyde appears in "The Mighty Thor" segment of The Marvel Super Heroes, voiced by Henry Comor. Calvin Johnson appears in the second season of Agents of S.H.I.E.L.D., portrayed by Kyle MacLachlan. This version, initially known as the "Doctor", uses a formula described as being primarily composed of "anabolic-androgenic steroids, a liver enzyme blocker, various metabolic enhancers, methamphetamines, gorilla testosterone, and a drop of peppermint", with a minimum of one milligram of adrenaline being required to achieve its full effect. Additionally, he is the husband of an Inhuman named Jiaying. Throughout his appearances, he joins forces with Jiaying to seek revenge on Daniel Whitehall for dissecting her and S.H.I.E.L.D. for denying him his revenge until he eventually realizes the error of his ways and saves Daisy Johnson from Jiaying by killing the latter for her. Following this, Phil Coulson alters his memory, which allows him to start over with a new identity and take up work as a veterinarian named "Winslow". As of the fourth season, Glenn Talbot successfully tasked scientists with recreating Zabo's formula and empowering Jeffrey Mace as part of "Project: Patriot". Video games Calvin Zabo / Mister Hyde appears as a boss in Iron Man and X-O Manowar in Heavy Metal, voiced by Tim Jones. Calvin Zabo / Mister Hyde appears as a boss and playable character in Marvel Avengers Alliance 2. Calvin Zabo / Mister Hyde appears as a boss in Marvel Heroes. The Lizard breaks him out of prison to keep his human side dormant. In exchange, Zabo injects the Lizard with his Hyde formula to make him stronger and so they can combine their respective formulas and poison the Bronx Zoo's water supply to create reptilian-animal hybrids, only to be defeated by the players. Cal Johnson / Mister Hyde appears as a playable character in Lego Marvel's Avengers via the "Agents of S.H.I.E.L.D." DLC. References External links Mister Hyde at Marvel.com Characters created by Don Heck Characters created by Stan Lee Comics characters introduced in 1963 Fictional biochemists Fictional biologists Fictional characters from New Jersey Fictional mad scientists Fictional medical specialists Marvel Comics characters with accelerated healing Marvel Comics characters with superhuman durability or invulnerability Marvel Comics characters with superhuman senses Marvel Comics characters with superhuman strength Marvel Comics male supervillains Marvel Comics mutates Marvel Comics scientists Marvel Comics supervillains Marvel Comics television characters Works based on Strange Case of Dr Jekyll and Mr Hyde
Mister Hyde (Marvel Comics)
[ "Chemistry" ]
3,678
[ "Fictional biochemists", "Biochemists" ]
1,548,091
https://en.wikipedia.org/wiki/Rafael%20Bombelli
Rafael Bombelli (baptised on 20 January 1526; died 1572) was an Italian mathematician. Born in Bologna, he is the author of a treatise on algebra and is a central figure in the understanding of imaginary numbers. He was the one who finally managed to address the problem with imaginary numbers. In his 1572 book, L'Algebra, Bombelli solved equations using the method of del Ferro/Tartaglia. He introduced the rhetoric that preceded the representative symbols +i and -i and described how they both worked. Life Rafael Bombelli was baptised on 20 January 1526 in Bologna, Papal States. He was born to Antonio Mazzoli, a wool merchant, and Diamante Scudieri, a tailor's daughter. The Mazzoli family was once quite powerful in Bologna. When Pope Julius II came to power, in 1506, he exiled the ruling family, the Bentivoglios. The Bentivoglio family attempted to retake Bologna in 1508, but failed. Rafael's grandfather participated in the coup attempt, and was captured and executed. Later, Antonio was able to return to Bologna, having changed his surname to Bombelli to escape the reputation of the Mazzoli family. Rafael was the oldest of six children. Rafael received no college education, but was instead taught by an engineer-architect by the name of Pier Francesco Clementi. Bombelli felt that none of the works on algebra by the leading mathematicians of his day provided a careful and thorough exposition of the subject. Instead of another convoluted treatise that only mathematicians could comprehend, Rafael decided to write a book on algebra that could be understood by anyone. His text would be self-contained and easily read by those without higher education. Bombelli died in 1572 in Rome. Bombelli's Algebra In the book that was published in 1572, entitled Algebra, Bombelli gave a comprehensive account of the algebra known at the time. He was the first European to write down the way of performing computations with negative numbers. The following is an excerpt from the text: "Plus times plus makes plus Minus times minus makes plus Plus times minus makes minus Minus times plus makes minus Plus 8 times plus 8 makes plus 64 Minus 5 times minus 6 makes plus 30 Minus 4 times plus 5 makes minus 20 Plus 5 times minus 4 makes minus 20" As was intended, Bombelli used simple language, as can be seen above, so that anybody could understand it. But at the same time, he was thorough. Notation Bombelli introduced, for the first time in a printed text (in Book II of his Algebra), a form of index notation in which the equation x³ = 6x + 40 appeared as 1U3 a. 6U1 p. 40, in which he wrote the U3 as a raised bowl-shape (like the curved part of the capital letter U) with the number 3 above it. Full symbolic notation was developed shortly thereafter by the French mathematician François Viète. Complex numbers Perhaps more importantly than his work with algebra, however, the book also includes Bombelli's monumental contributions to complex number theory. Before he writes about complex numbers, he points out that they occur in solutions of equations of the form x³ = ax + b, given that (a/3)³ > (b/2)², which is another way of stating that the discriminant of the cubic is negative. The solution of this kind of equation requires taking the cube root of the sum of one number and the square root of some negative number. Before Bombelli delves into using imaginary numbers practically, he goes into a detailed explanation of the properties of complex numbers. Right away, he makes it clear that the rules of arithmetic for imaginary numbers are not the same as for real numbers. 
This was a big accomplishment, as even numerous subsequent mathematicians were extremely confused on the topic. Bombelli avoided confusion by giving a special name to square roots of negative numbers, instead of just trying to deal with them as regular radicals like other mathematicians did. This made it clear that these numbers were neither positive nor negative. This kind of system avoids the confusion that Euler encountered. Bombelli called the imaginary number i "plus of minus" and used "minus of minus" for -i. Bombelli had the foresight to see that imaginary numbers were crucial and necessary to solving quartic and cubic equations. At the time, people cared about complex numbers only as tools to solve practical equations. As such, Bombelli was able to get solutions using Scipione del Ferro's rule, even in casus irreducibilis, where other mathematicians such as Cardano had given up. In his book, Bombelli explains complex arithmetic as follows: "Plus by plus of minus, makes plus of minus. Minus by plus of minus, makes minus of minus. Plus by minus of minus, makes minus of minus. Minus by minus of minus, makes plus of minus. Plus of minus by plus of minus, makes minus. Plus of minus by minus of minus, makes plus. Minus of minus by plus of minus, makes plus. Minus of minus by minus of minus makes minus." After dealing with the multiplication of real and imaginary numbers, Bombelli goes on to talk about the rules of addition and subtraction. He is careful to point out that real parts add to real parts, and imaginary parts add to imaginary parts. Reputation Bombelli is generally regarded as the inventor of complex numbers, as no one before him had made rules for dealing with such numbers, and no one believed that working with imaginary numbers would have useful results. Upon reading Bombelli's Algebra, Leibniz praised Bombelli as an ". . . outstanding master of the analytical art." Crossley writes in his book, "Thus we have an engineer, Bombelli, making practical use of complex numbers perhaps because they gave him useful results, while Cardan found the square roots of negative numbers useless. Bombelli is the first to give a treatment of any complex numbers. . . It is remarkable how thorough he is in his presentation of the laws of calculation of complex numbers. . ." In honor of his accomplishments, a Moon crater was named Bombelli. Bombelli's method of calculating square roots Bombelli used a method related to simple continued fractions to calculate square roots. He did not yet have the concept of a continued fraction, and below is the algorithm of a later version given by Pietro Cataldi (1613). The method for finding √n begins with √n = a + x, where a is a whole number and x is a remainder to be determined, from which it can be shown that x = (n − a²)/(2a + x). Repeated substitution of the expression on the right-hand side for x into itself yields a continued fraction √n = a + (n − a²)/(2a + (n − a²)/(2a + ...)) for the root, but Bombelli is more concerned with better approximations for x. The value chosen for a is either of the whole numbers whose squares n lies between. The method gives the following convergents for √13, while the actual value is 3.605551275...: 3 + 2/3, 3 + 3/5, 3 + 20/33, 3 + 66/109, 3 + 109/180 and 3 + 720/1189. The last convergent equals 3.605550883... . Bombelli's method should be compared with the formulas and results used by Heron and Archimedes. The result used by Archimedes in his determination of the value of √3 can be found by using 1 and 0 for the initial values of x. 
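The substitution x ← (n − a²)/(2a + x) described above is easy to check numerically. The sketch below is my own illustration in modern notation (the function name and the use of exact fractions are mine, not Bombelli's or Cataldi's); it reproduces the √13 convergents quoted above and also spot-checks a few of Bombelli's "plus of minus" sign rules using Python's built-in complex numbers.

```python
from fractions import Fraction

def bombelli_sqrt_convergents(n, steps=6):
    """Approximate sqrt(n) by repeatedly substituting x <- (n - a^2)/(2a + x),
    starting from x = 0, where a is the whole number with a^2 <= n < (a+1)^2."""
    a = int(n ** 0.5)
    x = Fraction(0)
    convergents = []
    for _ in range(steps):
        x = Fraction(n - a * a) / (2 * a + x)
        convergents.append(a + x)
    return convergents

for c in bombelli_sqrt_convergents(13):
    print(c, float(c))
# last convergent: 4287/1189 = 3.6055508830...  (sqrt(13) = 3.6055512754...)

# A few of Bombelli's sign rules, with i as "plus of minus" and -i as "minus of minus":
assert 1j * 1j == -1       # plus of minus by plus of minus makes minus
assert -1j * -1j == -1     # minus of minus by minus of minus makes minus
assert 1j * -1j == 1       # plus of minus by minus of minus makes plus
```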
References Footnotes Citations Sources Morris Kline, Mathematical Thought from Ancient to Modern Times, 1972, Oxford University Press, New York, David Eugene Smith, A Source Book in Mathematics, 1959, Dover Publications, New York, Daniel J. Curtin, et al., Rafael Bombelli's L'Algebra, 1996, https://www.people.iup.edu/gsstoudt/history/bombelli/bombelli.pdf External links L'Algebra, Libri I, II, III, IV e V, original Italian texts. Background 1526 births 1572 deaths 16th-century Italian mathematicians Algebraists Scientists from Bologna
Rafael Bombelli
[ "Mathematics" ]
1,577
[ "Algebra", "Algebraists" ]
1,548,123
https://en.wikipedia.org/wiki/Fractional-order%20integrator
A fractional-order integrator or just simply fractional integrator is an integrator device that calculates the fractional-order integral or derivative (usually called a differintegral) of an input. The order of differentiation or integration is a real or complex parameter. The fractional integrator is useful in fractional-order control where the history of the system under control is important to the control system output. Overview The differintegral function includes the integer-order differentiation and integration functions, and allows a continuous range of functions around them. The differintegral parameters are a, t, and q. The parameters a and t describe the range over which to compute the result. The differintegral parameter q may be any real number or complex number. If q is greater than zero, the differintegral computes a derivative. If q is less than zero, the differintegral computes an integral. The integer-order integration can be computed as a Riemann–Liouville differintegral, where the weight of each element in the sum is the constant unit value 1, which is equivalent to the Riemann sum. To compute an integer-order derivative, the weights in the summation would be zero, with the exception of the most recent data points, where (in the case of the first unit derivative) the weight of the data point at t − 1 is −1 and the weight of the data point at t is 1. The sum of the points in the input function using these weights results in the difference of the most recent data points. These weights are computed using ratios of the Gamma function incorporating the number of data points in the range [a,t], and the parameter q. Digital devices Digital devices have the advantage of being versatile, and are not susceptible to unexpected output variation due to heat or noise. The discrete nature of a computer, however, does not allow for all of history to be computed. Some finite range [a,t] must exist. Therefore, the number of data points that can be stored in memory (N) determines the oldest data point in memory, so that the value a is never more than N samples old. The effect is that any history older than a is completely forgotten, and no longer influences the output. A solution to this problem is the Coopmans approximation, which allows old data to be forgotten more gracefully (though still with exponential decay, rather than with the power-law decay of a purely analog device). Analog devices Analog devices have the ability to retain history over longer intervals. This translates into the parameter a staying constant, while t increases. There is no error due to round-off, as in the case of digital devices, but there may be error in the device due to leakages, and also unexpected variations in behavior caused by heat and noise. An example fractional-order integrator is a modification of the standard integrator circuit, where a capacitor is used as the feedback impedance on an opamp. By replacing the capacitor with an RC ladder circuit, a half-order integrator, that is, one with q = −1/2 (a transfer function proportional to 1/√s), can be constructed. See also Signal analysis Fourier series References Cybernetics Fractional calculus
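The weight construction sketched above (Gamma-function ratios applied across the stored window [a, t]) can be illustrated with the discrete Grünwald–Letnikov form of the differintegral. This is a hedged sketch of that general idea, not code from any named controller library; the Grünwald–Letnikov recurrence is one standard way to generate such weights.

```python
import math

def gl_weights(q, n_points):
    """Grünwald–Letnikov coefficients w_k = (-1)**k * C(q, k), built with the
    recurrence w_k = w_{k-1} * (1 - (q + 1) / k).  For q < 0 the weights realize
    a fractional-order integral; for q > 0 a fractional-order derivative."""
    w = [1.0]
    for k in range(1, n_points):
        w.append(w[-1] * (1.0 - (q + 1.0) / k))
    return w

def differintegral(samples, q, dt):
    """Differintegral of order q at the newest sample, using the whole stored
    history (the finite window [a, t] discussed above).  samples[0] is oldest."""
    w = gl_weights(q, len(samples))
    acc = sum(wk * x for wk, x in zip(w, reversed(samples)))  # w[0] -> newest
    return acc * dt ** (-q)

# Half-order integral (q = -0.5) of f(t) = 1 evaluated near t = 1 should be
# close to t**0.5 / Gamma(1.5) = 1.1283...
dt = 0.001
samples = [1.0] * int(1 / dt)
print(differintegral(samples, -0.5, dt), 1 / math.gamma(1.5))
```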
Fractional-order integrator
[ "Mathematics" ]
656
[ "Fractional calculus", "Calculus" ]
1,548,201
https://en.wikipedia.org/wiki/Polyalkylimide
Polyalkylimide is a polymer whose structure contains no free monomers. It is used in permanent dermal fillers to treat soft tissue deficits such as facial lipoatrophy, gluteal atrophy, acne, and scars. In plastic and reconstructive surgery it is used for building facial volume in the cheeks, chin, jaw, and lips. Reports of infections and migration of polyalkylimide in the face have led Canada to remove it from the market, and the manufacturer of Biolcamid to cease production. A class action lawsuit was filed against the company. See also Plastic Surgery References Polymers Plastic surgery filler
Polyalkylimide
[ "Chemistry", "Materials_science" ]
138
[ "Polymer stubs", "Polymers", "Organic chemistry stubs", "Polymer chemistry" ]
1,548,236
https://en.wikipedia.org/wiki/Menuconfig
make menuconfig is one of five similar tools that can assist a user in configuring the Linux kernel before building, a necessary step before compiling the source code. make menuconfig, with a menu-driven user interface, allows the user to choose which features and modules to compile. It is normally invoked using the command make menuconfig; menuconfig is a target in the Linux Makefile. Overview make menuconfig was not in the first version of Linux. The predecessor tool is a question-and-answer-based utility (make config, make oldconfig). Variations of the tool for Linux configuration include make xconfig, which requires Qt; make gconfig, which uses GTK+; and make nconfig, which is similar to make menuconfig. All these tools use the Kconfig language internally. Kconfig is also used in other projects, such as Das U-Boot, a bootloader for embedded devices, Buildroot, a tool for generating embedded Linux systems, and BusyBox, a single-executable shell utility toolbox for embedded systems. Advantages over earlier versions Despite being a simple design, make menuconfig offers considerable advantages over the question-and-answer-based configuration tool make config, the most notable being a basic search system and the ability to load and save files with filenames different from ".config". make menuconfig allows navigation forwards or backwards directly between features, rather than make config's approach of listing every single option one by one, which requires pressing the Enter key repeatedly to view all options. If the user is satisfied with a previous .config file, make oldconfig uses this previous file to answer all questions that it can, only interactively presenting the new features. This is intended for a version upgrade, but may be appropriate at other times. make menuconfig is a light load on system resources unlike make xconfig (which uses Qt as of version 2.6.31.1, formerly Tk) or make gconfig, which utilizes GTK+. Instead of editing the .config by hand, make menuconfig shows the descriptions of each feature (by pressing the "Help" button while on a menu option), and adds some (primitive in version 2.6.31.1) dependency checking. The help information is distributed throughout the kernel source tree in the various files called Kconfig. Dependencies To use make menuconfig, the Linux kernel source, a make tool, a C compiler, and the ncurses library are required. Key strokes Symbols To the left of the features is the setting (y, M, or empty) enclosed in two punctuation marks. Note that the supplied dependency information is primitive: it does not tell you the names of the dependent features. menuconfig in the kernel-build workflow The user is encouraged to read the Linux README, since there are also many other make targets (beyond modules_install and install). Each will configure the kernel, but with different features activated, or using a different interactive interface, such as tinyconfig or allyesconfig. simple (but effective) workflow make menuconfig. Next, build the compressed kernel and its modules (a long process): make. Install using your favorite method, such as make modules_install and make install. See also GNU Compiler Collection TUI References The make menuconfig tool itself. 
Linux From Scratch How to Build a Minimal Linux System Creating custom kernels with Debian's kernel-package system Cross compiling Linux on IBM System z How to roll your own Linux Building A Kernel The Traditional Way The Linux Kernel HOWTO Kconfig language External links The Linux Kernel Archives Linux kernel Linux configuration utilities Configuration management Build automation Free software that uses ncurses
Menuconfig
[ "Engineering" ]
824
[ "Systems engineering", "Configuration management" ]
1,548,293
https://en.wikipedia.org/wiki/Metaplasia
Metaplasia () is the transformation of a cell type to another cell type. The change from one type of cell to another may be part of a normal maturation process, or caused by some sort of abnormal stimulus. In simplistic terms, it is as if the original cells are not robust enough to withstand their environment, so they transform into another cell type better suited to their environment. If the stimulus causing metaplasia is removed or ceases, tissues return to their normal pattern of differentiation. Metaplasia is not synonymous with dysplasia, and is not considered to be an actual cancer. It is also contrasted with heteroplasia, which is the spontaneous abnormal growth of cytologic and histologic elements. Today, metaplastic changes are usually considered to be an early phase of carcinogenesis, specifically for those with a history of cancers or who are known to be susceptible to carcinogenic changes. Metaplastic change is thus often viewed as a premalignant condition that requires immediate intervention, either surgical or medical, lest it lead to cancer via malignant transformation. Causes When cells are faced with physiological or pathological stresses, they respond by adapting in any of several ways, one of which is metaplasia. It is a benign (i.e. non-cancerous) change that occurs as a response to change of milieu (physiological metaplasia) or chronic physical or chemical irritation. One example of pathological irritation is cigarette smoke, which causes the mucus-secreting ciliated pseudostratified columnar respiratory epithelial cells that line the airways to be replaced by stratified squamous epithelium, or a stone in the bile duct that causes the replacement of the secretory columnar epithelium with stratified squamous epithelium (squamous metaplasia). Metaplasia is an adaptation that replaces one type of epithelium with another that is more likely to be able to withstand the stresses it is faced with. It is also accompanied by a loss of endothelial function, and in some instances considered undesirable; this undesirability is underscored by the propensity for metaplastic regions to eventually turn cancerous if the irritant is not eliminated. The cell of origin for many types of metaplasias are controversial or unknown. For example, there is evidence supporting several different hypotheses of origin in Barrett's esophagus. They include direct transdifferentiation of squamous cells to columnar cells, the stem cell changing from esophageal type to intestinal type, migration of gastric cardiac cells, and a population of resident embryonic cells present through adulthood. Significance in disease Normal physiological metaplasia, such as that of the endocervix, is highly desirable. The medical significance of metaplasia is that in some sites where pathological irritation is present, cells may progress from metaplasia, to develop dysplasia, and then malignant neoplasia (cancer). Thus, at sites where abnormal metaplasia is detected, efforts are made to remove the causative irritant, thereby decreasing the risk of progression to malignancy. The metaplastic area must be carefully monitored to ensure that dysplastic change does not begin to occur. A progression to significant dysplasia indicates that the area could need removal to prevent the development of cancer. Examples Barrett's esophagus is an abnormal change in the cells of the lower esophagus, thought to be caused by damage from chronic stomach acid exposure. 
Intestinal metaplasia Intestinal metaplasia is a premalignant condition that increases the risk for subsequent gastric cancer. Intestinal metaplasia lesions with an active DNA damage response will likely undergo extended latency in the premalignant state until further damaging hits override the DNA damage response leading to clonal expansion and progression. The DNA damage response includes expression of proteins that detect DNA damages and activate downstream responses like DNA repair, cell cycle checkpoints or apoptosis. See also Epigenetics Induced stem cells List of biological development disorders Pleomorphism Reprogramming Transdifferentiation Notes The AMA Home Medical Encyclopedia, Random House, p. 683 Robbins and Cotran - Pathologic Basis of Disease, 7th Edition, Saunders, p. 10 Prof. Dr. Clark S., Australian Cancer institute, premalignant conditions. 1st edition pages(321-376). Reviewed. References External links Histopathology Oncology Induced stem cells
Metaplasia
[ "Chemistry", "Biology" ]
974
[ "Stem cell research", "Induced stem cells", "Histopathology", "Microscopy" ]
1,548,303
https://en.wikipedia.org/wiki/Quinacridone
Quinacridone is an organic compound used as a pigment. Numerous derivatives constitute the quinacridone pigment family, which finds extensive use in industrial colorant applications such as robust outdoor paints, inkjet printer ink, tattoo inks, artists' watercolor paints, and color laser printer toner. As pigments, the quinacridones are insoluble. The development of this family of pigments supplanted the alizarin dyes. Synthesis The name indicates that the compounds are a fusion of acridone and quinoline, although they are not made that way. Classically the parent is prepared from the 2,5-dianilide of terephthalic acid (C6H2(NHPh)2(CO2H)2). Condensation of succinosuccinate esters with aniline followed by cyclization affords dihydroquinacridone, which are readily dehydrogenated. The latter is oxidized to quinacridone. Derivatives of quinacridone can be readily obtained by employing substituted anilines. Linear cis-Quinacridones can be prepared from isophthalic acid. Derivatives Quinacridone-based pigments are used to make high performance paints. Quinacridones were first sold as pigments by Du Pont in 1958. Quinacridones are considered "high performance" pigments because they have exceptional color and weather fastness. Major uses for quinacridones include automobile and industrial coatings. Nanocrystalline dispersions of quinacridone pigments functionalized with solubilizing surfactants are the most common magenta printing ink. Typically deep red to violet in color, the hue of quinacridone is affected not only by the R-groups on the molecule but by the crystal form of the solid. For example, the γ crystal modification of unsubstituted quinacridone provides a strong red shade that has excellent color fastness and resistance to solvation. Another important modification is the β phase which provides a maroon shade that is also more weather resistant and light-fast. Both crystal modifications are more thermodynamically stable than the α crystal phase. The γ crystal modification is characterized by a criss-cross lattice where each quinacridone molecule hydrogen-bonds to four neighbors via single H-bonds. The β phase, meanwhile, consists of linear chains of molecules with double H-bonds between each quinacridone molecule and two neighbors. Basic modifications to the chemical structure of quinacridones include the addition of CH3 and Cl substituents. Some magenta shades of quinacridone are labeled under the proprietary name "Thio Violet" and "Acra Violet". Semiconductor properties Quinacridone derivatives exhibit intense fluorescence in the dispersed state, and high carrier mobility. These properties complement good photo-, thermal, and electrochemical stability. These properties are desired for optoelectronic applications including organic light-emitting diodes (OLEDs), organic solar cells (OSCs), and organic field-effect transistors (OFETs). Due to interplay of intermolecular H-bonding and pi-pi stacking, quinacridone can form a self-assembling, supramolecular organic semiconductor. References Additional reading Organic semiconductors Organic pigments Diketones Heterocyclic compounds with 5 rings Nitrogen heterocycles Acridines
Quinacridone
[ "Chemistry" ]
723
[ "Semiconductor materials", "Molecular electronics", "Organic semiconductors" ]
1,548,510
https://en.wikipedia.org/wiki/Fractional-order%20control
Fractional-order control (FOC) is a field of control theory that uses the fractional-order integrator as part of the control system design toolkit. The use of fractional calculus can improve and generalize well-established control methods and strategies. The fundamental advantage of FOC is that the fractional-order integrator weights history using a function that decays with a power-law tail. The effect is that the contribution of the entire past is computed for each iteration of the control algorithm. This creates a "distribution of time constants", the upshot of which is that there is no particular time constant, or resonance frequency, for the system. In fact, the fractional integral operator is different from any integer-order rational transfer function, in the sense that it is a non-local operator that possesses an infinite memory and takes into account the whole history of its input signal. Fractional-order control shows promise in many controlled environments that suffer from the classical problems of overshoot and resonance, as well as time-diffuse applications such as thermal dissipation and chemical mixing. Fractional-order control has also been demonstrated to be capable of suppressing chaotic behaviors in mathematical models of, for example, muscular blood vessels and robotics. Initiated in the 1980s by Prof. Oustaloup's group, the CRONE approach is one of the most developed control-system design methodologies that uses fractional-order operator properties. See also Differintegral Fractional calculus Fractional-order system External links Dr. YangQuan Chen's latest homepage for the applied fractional calculus (AFC) Dr. YangQuan Chen's page about fractional calculus on Google Sites References Control theory Cybernetics
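The "power-law tail" mentioned above is the property that separates a fractional integrator from an ordinary first-order lag, and it can be made concrete by comparing impulse-response kernels: a fractional integrator of order λ has kernel t^(λ−1)/Γ(λ), while a first-order filter forgets the past exponentially. The sketch below is only an illustration of that contrast (it is not the CRONE methodology, and the parameter values are arbitrary).

```python
import math

def power_law_kernel(t, lam):
    """Impulse response of a fractional integrator of order lam (0 < lam < 1):
    h(t) = t**(lam - 1) / Gamma(lam), which decays with a power-law tail."""
    return t ** (lam - 1) / math.gamma(lam)

def exponential_kernel(t, tau):
    """Impulse response of an ordinary first-order lag with time constant tau:
    h(t) = exp(-t / tau) / tau, which forgets the past exponentially fast."""
    return math.exp(-t / tau) / tau

for t in (1.0, 10.0, 100.0):
    print(f"t = {t:>5}:  power-law {power_law_kernel(t, 0.5):.3g}"
          f"   exponential {exponential_kernel(t, 1.0):.3g}")
```

At t = 100 the exponential kernel is already below 10^(-43) while the half-order kernel is still about 0.056, which is the "infinite memory" described above.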
Fractional-order control
[ "Mathematics" ]
357
[ "Applied mathematics", "Control theory", "Applied mathematics stubs", "Dynamical systems" ]
1,548,669
https://en.wikipedia.org/wiki/Euclidean%20tilings%20by%20convex%20regular%20polygons
Euclidean plane tilings by convex regular polygons have been widely used since antiquity. The first systematic mathematical treatment was that of Kepler in his Harmonices Mundi (Latin: The Harmony of the World, 1619). Notation of Euclidean tilings Euclidean tilings are usually named using Cundy & Rollett's notation. This notation represents (i) the number of vertices, (ii) the number of polygons around each vertex (arranged clockwise) and (iii) the number of sides to each of those polygons. For example: 3⁶; 3⁶; 3⁴.6 tells us there are 3 vertices with 2 different vertex types, so this tiling would be classed as a '3-uniform (2-vertex types)' tiling. Broken down, 3⁶; 3⁶ (both of different transitivity class), or (3⁶)², tells us that there are 2 vertices (denoted by the superscript 2), each with 6 equilateral 3-sided polygons (triangles). The final vertex, 3⁴.6, has 4 more contiguous equilateral triangles and a single regular hexagon. However, this notation has two main problems related to ambiguous conformation and uniqueness. First, when it comes to k-uniform tilings, the notation does not explain the relationships between the vertices. This makes it impossible to generate a covered plane given the notation alone. And second, some tessellations have the same nomenclature; they are very similar, but it can be noticed that the relative positions of the hexagons are different. Therefore, the second problem is that this nomenclature is not unique for each tessellation. In order to solve those problems, GomJau-Hogg's notation is a slightly modified version of the research and notation presented in 2012 about the generation and nomenclature of tessellations and double-layer grids. Antwerp v3.0, a free online application, allows for the infinite generation of regular polygon tilings through a set of shape placement stages and iterative rotation and reflection operations, obtained directly from GomJau-Hogg's notation. Regular tilings Following Grünbaum and Shephard (section 1.3), a tiling is said to be regular if the symmetry group of the tiling acts transitively on the flags of the tiling, where a flag is a triple consisting of a mutually incident vertex, edge and tile of the tiling. This means that, for every pair of flags, there is a symmetry operation mapping the first flag to the second. This is equivalent to the tiling being an edge-to-edge tiling by congruent regular polygons. There must be six equilateral triangles, four squares or three regular hexagons at a vertex, yielding the three regular tessellations. C&R: Cundy & Rollett's notation GJ-H: Notation of GomJau-Hogg Archimedean, uniform or semiregular tilings Vertex-transitivity means that for every pair of vertices there is a symmetry operation mapping the first vertex to the second. If the requirement of flag-transitivity is relaxed to one of vertex-transitivity, while the condition that the tiling is edge-to-edge is kept, there are eight additional tilings possible, known as Archimedean, uniform or semiregular tilings. Note that there are two mirror image (enantiomorphic or chiral) forms of the 3⁴.6 (snub hexagonal) tiling, only one of which is shown in the following table. All other regular and semiregular tilings are achiral. C&R: Cundy & Rollett's notation GJ-H: Notation of GomJau-Hogg Grünbaum and Shephard distinguish the description of these tilings as Archimedean as referring only to the local property of the arrangement of tiles around each vertex being the same, and that as uniform as referring to the global property of vertex-transitivity. 
Though these yield the same set of tilings in the plane, in other spaces there are Archimedean tilings which are not uniform. Plane-vertex tilings There are 17 combinations of regular convex polygons that form 21 types of plane-vertex tilings. Polygons in these meet at a point with no gap or overlap. Listing by their vertex figures, one has 6 polygons, three have 5 polygons, seven have 4 polygons, and ten have 3 polygons. Three of them can make regular tilings (6³, 4⁴, 3⁶), and eight more can make semiregular or Archimedean tilings (3.12.12, 4.6.12, 4.8.8, (3.6)², 3.4.6.4, 3.3.4.3.4, 3.3.3.4.4, 3.3.3.3.6). Four of them can exist in higher k-uniform tilings (3.3.4.12, 3.4.3.12, 3.3.6.6, 3.4.4.6), while six cannot be used to completely tile the plane by regular polygons with no gaps or overlaps; they only tile the plane entirely when irregular polygons are included (3.7.42, 3.8.24, 3.9.18, 3.10.15, 4.5.20, 5.5.10). k-uniform tilings Such periodic tilings may be classified by the number of orbits of vertices, edges and tiles. If there are k orbits of vertices, a tiling is known as k-uniform or k-isogonal; if there are t orbits of tiles, as t-isohedral; if there are e orbits of edges, as e-isotoxal. k-uniform tilings with the same vertex figures can be further identified by their wallpaper group symmetry. 1-uniform tilings include the 3 regular tilings and the 8 semiregular ones, which have 2 or more types of regular polygon faces. There are 20 2-uniform tilings, 61 3-uniform tilings, 151 4-uniform tilings, 332 5-uniform tilings and 673 6-uniform tilings. Each can be grouped by the number m of distinct vertex figures, which are also called m-Archimedean tilings. Finally, if the number of types of vertices is the same as the uniformity (m = k below), then the tiling is said to be Krotenheerdt. In general, the uniformity is greater than or equal to the number of types of vertices (m ≥ k), as different types of vertices necessarily have different orbits, but not vice versa. Setting m = n = k, there are 11 such tilings for n = 1; 20 such tilings for n = 2; 39 such tilings for n = 3; 33 such tilings for n = 4; 15 such tilings for n = 5; 10 such tilings for n = 6; and 7 such tilings for n = 7. Below is an example of a 3-uniform tiling: 2-uniform tilings There are twenty (20) 2-uniform tilings of the Euclidean plane (also called 2-isogonal tilings or demiregular tilings). Vertex types are listed for each. If two tilings share the same two vertex types, they are given subscripts 1,2. Higher k-uniform tilings k-uniform tilings have been enumerated up to 6. There are 673 6-uniform tilings of the Euclidean plane. Brian Galebach's search reproduced Krotenheerdt's list of 10 6-uniform tilings with 6 distinct vertex types, as well as finding 92 of them with 5 vertex types, 187 of them with 4 vertex types, 284 of them with 3 vertex types, and 100 with 2 vertex types. Fractalizing k-uniform tilings There are many ways of generating new k-uniform tilings from old k-uniform tilings. For example, notice that the 2-uniform [3.12.12; 3.4.3.12] tiling has a square lattice, the 4(3-1)-uniform [343.12; (3.122)3] tiling has a snub square lattice, and the 5(3-1-1)-uniform [334.12; 343.12; (3.12.12)3] tiling has an elongated triangular lattice. These higher-order uniform tilings use the same lattice but possess greater complexity. The fractalizing basis for these tilings is as follows: The side lengths are dilated by a factor of . 
This can similarly be done with the truncated trihexagonal tiling as a basis, with a corresponding dilation of . Fractalizing examples Tilings that are not edge-to-edge Convex regular polygons can also form plane tilings that are not edge-to-edge. Such tilings can be considered edge-to-edge if they are regarded as tilings by nonregular polygons with adjacent collinear edges. There are seven families of isogonal figures, each family having a real-valued parameter determining the overlap between sides of adjacent tiles or the ratio between the edge lengths of different tiles. Two of the families are generated from shifted squares, in either progressive or zig-zagging positions. Grünbaum and Shephard call these tilings uniform, although this contradicts Coxeter's definition of uniformity, which requires edge-to-edge regular polygons. Such isogonal tilings are actually topologically identical to the uniform tilings, with different geometric proportions. See also Grid (spatial index) Uniform tilings in hyperbolic plane List of uniform tilings Wythoff symbol Tessellation Wallpaper group Regular polyhedron (the Platonic solids) Semiregular polyhedron (including the Archimedean solids) Hyperbolic geometry Penrose tiling Tiling with rectangles Lattice (group) References Order in Space: A design source book, Keith Critchlow, 1970 Chapter X: The Regular Polytopes Dale Seymour and Jill Britton, Introduction to Tessellations, 1989, pp. 50–57 External links Euclidean and general tiling links: n-uniform tilings, Brian Galebach Euclidean plane geometry Regular tilings Tessellation
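The counts of plane-vertex configurations quoted above follow from one condition: the interior angles of the regular polygons meeting at a point must sum to exactly 360°. The short Python sketch below is illustrative only; it enumerates the 17 combinations (the 42-gon cap used for the search comes from the extreme 3.7.42 vertex, and distinguishing the cyclic orderings of each combination is what yields the 21 vertex types).

```python
from fractions import Fraction

def interior_angle(n):
    """Interior angle of a regular n-gon, in degrees, as an exact fraction."""
    return Fraction(180 * (n - 2), n)

def search(remaining, min_sides, chosen, found):
    """Enumerate non-decreasing lists of polygon sizes whose interior
    angles sum to exactly `remaining` degrees."""
    if remaining == 0:
        found.append(tuple(chosen))
        return
    for n in range(min_sides, 43):       # no plane-vertex figure uses more than a 42-gon
        angle = interior_angle(n)
        if angle > remaining:            # angles only grow with n, so stop here
            break
        chosen.append(n)
        search(remaining - angle, n, chosen, found)
        chosen.pop()

combinations = []
search(Fraction(360), 3, [], combinations)
print(len(combinations))                 # 17 combinations of regular polygons
for combo in combinations:
    print(".".join(map(str, combo)))     # e.g. 3.3.3.3.3.3, 3.12.12, 4.8.8, ...
```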
Euclidean tilings by convex regular polygons
[ "Physics", "Mathematics" ]
2,191
[ "Tessellation", "Planes (geometry)", "Euclidean plane geometry", "Symmetry" ]
1,548,695
https://en.wikipedia.org/wiki/Vision%20mixer
A vision mixer is a device used to select between different live video sources and, in some cases, to composite live video sources together to create visual effects. In most of the world, both the equipment and its operator are called a vision mixer or video mixer; however, in the United States, the equipment is called a video switcher, production switcher or video production switcher, and its operator is known as a technical director. The role of the vision mixer for video is similar to what a mixing console does for audio. Typically a vision mixer would be found in a video production environment such as a production control room of a television studio, production truck or post-production facility. Capabilities and usage Besides hard cuts (switching directly between two input signals), mixers can also generate a variety of other transitions, from simple dissolves to pattern wipes. Additionally, most vision mixers can perform keying operations (called mattes in this context) and generate color signals. Vision mixers may include digital video effects (DVE) and still store functionality. Most vision mixers are targeted at the professional market, with analog models having component video connections and digital ones using serial digital interface (SDI) or SMPTE 2110. They are used in live television, such as outside broadcasting, with video tape recording (VTR) and video servers for linear video editing, even though the use of vision mixers in video editing has been largely supplanted by computer-based non-linear editing systems. While professional analog mixers work with component video inputs, consumer video switchers may use composite video or S-Video; these are often used for VJing, presentations, and small multi-camera productions. Operation The most basic part of a vision mixer is a bus, which is a signal path consisting of multiple video inputs that feed a single output. On the panel, a bus is represented by a row of buttons; pressing one of those buttons selects the video signal in that bus. Older video mixers had two equivalent buses (called the A and B bus; such a mixer is known as an A/B mixer), and one of these buses could be selected as the main out (or program) bus. Most modern mixers, however, have one bus that is always the program bus, the second main bus being the preview (sometimes called preset) bus. These mixers are called flip-flop mixers, since the selected source of the preview and program buses can be exchanged. Some switchers allow the operator to select A/B or flip-flop modes. Both the preview and program buses usually have their own video monitors displaying the video selected. Another main feature of a vision mixer is the transition lever, also called a T-bar or fader bar. This lever, similar to an audio fader, is used to transition between two buses. Note that in a flip-flop mixer, the position of the main transition lever does not indicate which bus is active, since the program bus is always the active or hot bus. Instead of moving the lever by hand, a button (commonly labeled mix, auto or auto trans) can be used, which performs the transition over a user-defined period of time. Another button, usually labeled cut or take, swaps the preview signal to the program signal instantaneously. The type of transition used can be selected in the transition section. Common transitions include dissolves (similar to an audio crossfade) and pattern wipes. A third bus used for compositing is the key bus. 
A mixer may have more than one key bus, but often they share only one set of buttons. Here, one signal can be selected for keying over the program bus. The digital on-screen graphic image that will be seen in the program is called the fill, while the mask used to cut the key's translucence is called the source. The source type (e.g. chrominance, luminance, pattern or split) can be selected in the keying section of the mixer. Usually, a key is turned on and off the same way a transition is performed. For this, the transition section can be switched from program mode to key mode. These three main buses together form the basic mixer section, called Program/Preset or P/P. Bigger production mixers may have a number of additional sections of this type, which are called Mix/Effects (M/E for short) sections and are numbered. Any M/E section can be selected as a source in the P/P stage, making mixer operations much more versatile, since effects or keys can be composed offline in an M/E and then go live at the push of one button. After the P/P section, there is another keying stage called the downstream keyer (DSK). It is mostly used for keying text or graphics and has its own cut and mix buttons. The signal before the DSK keyer is called the clean feed. After the DSK is one last stage that overrides any signal with black, usually called Fade To Black or FTB. Modern vision mixers may also have additional functions, such as serial communications with the ability to use proprietary communications protocols, control of auxiliary channels for routing video signals to destinations other than the program out, macro programming, and DVE capabilities. Mixers are often equipped with effects memory registers, which can store a snapshot of any part of a complex mixer configuration and then recall the setup with one button press. Setup Since vision mixers combine video signals from various sources, such as VTRs and professional video cameras, it is very important that all these sources are in proper synchronization with one another. In professional analog facilities all the equipment is genlocked with black and burst or tri-level sync from a video-signal generator. Signals which cannot be synchronized (either because they originate outside the facility or because the particular equipment doesn't accept external sync) must go through a frame synchronizer. Some vision mixers have internal frame synchronizers; alternatively, these can be separate pieces of equipment, such as a time base corrector. If the mixer is used for video editing, the editing console (which usually controls the vision mixer remotely) must also be synchronized. Most larger vision mixers separate the control panel from the actual hardware that performs the mixer functions because of noise, temperature and cable-length considerations. With such mixers, the control panel is located in the production control room, while the main unit, to which all cables are connected, is often located in a machine room alongside the other hardware. Manufacturers Analog Way (manufacturer) Barco (manufacturer) Blackmagic Design: ATEM Broadcast Pix Datavideo EVS Broadcast Equipment: Dyvi Focus Enhancements (Videonics, former) FOR-A Grass Valley Guramex Kramer Electronics Ltd. 
NewTek (Video Toaster and TriCaster, bought by Vizrt) Panasonic Philips (Broadcast Television Systems Inc., broadcast division bought by Thomson SA and later integrated into Grass Valley) Roland Corporation Ross Video Snell (former, bought by Grass Valley) Sony See also Audio router Mixing console Patch panel Video router References Sources Luff, John: "Production switchers". Broadcast Engineering, November 1, 2002 Moore, Jeff: "Production Switcher Primer". Ross Video Production Switcher Primer. VideoSolutions group. "ODYSSEY Mixers Family". Monarch Innovative Technology Pvt Ltd. "Sony Vision Mixer DVS-7000". thameside.tv External links Outside broadcast director setting up their vision mixer for an upcoming sports OB Outside broadcast director using vision mixer during recreation of 1970s sports coverage Video equipment collection Television terminology Television technology Television occupations Film and video technology ja:スイッチャー (映像製作)
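As an informal illustration of the flip-flop program/preview behaviour described in the Operation section, the Python sketch below models a mixer in which the cut and auto-transition buttons exchange the two buses. The class name, method names and timing values are invented for the example and do not correspond to any real switcher's control protocol.

```python
class FlipFlopMixer:
    """Toy model of the program/preview ("flip-flop") buses of a vision mixer."""

    def __init__(self, sources):
        self.sources = list(sources)      # available inputs, e.g. cameras and VTRs
        self.program = self.sources[0]    # the "hot" bus: what is on air now
        self.preview = self.sources[1]    # the preset bus: what goes on air next

    def select_preview(self, source):
        """Press a button on the preview bus row."""
        if source not in self.sources:
            raise ValueError(f"unknown source: {source}")
        self.preview = source

    def cut(self):
        """The cut/take button: exchange preview and program instantly."""
        self.program, self.preview = self.preview, self.program

    def auto_transition(self, duration_frames=25):
        """The auto/mix button: the same exchange, performed over a timed
        dissolve or wipe (only reported here, not rendered)."""
        print(f"transitioning over {duration_frames} frames")
        self.cut()


mixer = FlipFlopMixer(["CAM 1", "CAM 2", "VTR"])
mixer.select_preview("VTR")
mixer.cut()
print(mixer.program, mixer.preview)   # VTR CAM 1: the two buses have been exchanged
```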
Vision mixer
[ "Technology" ]
1,597
[ "Information and communications technology", "Television technology" ]
1,548,703
https://en.wikipedia.org/wiki/Nature%20worship
Nature worship, also called naturism or physiolatry, is any of a variety of religious, spiritual and devotional practices that focus on the worship of a nature deity, considered to be behind the natural phenomena visible throughout nature. A nature deity can be in charge of nature, a place, a biotope, the biosphere, the cosmos, or the universe. Nature worship is often considered the primitive source of modern religious beliefs and can be found in animism, pantheism, panentheism, polytheism, deism, totemism, shamanism, Taoism, Hinduism, some theism and paganism including Wicca. Common to most forms of nature worship is a spiritual focus on the individual's connection and influence on some aspects of the natural world and reverence towards it. Due to their admiration of nature, the works of Edmund Spenser, Anthony Ashley-Cooper and Carl Linnaeus were viewed as nature worship. In the Western World Paganism in Europe In ancient European paganism, the deification of natural forces was central to religious life. The Celts and Germanic tribes believed that gods and spirits resided in natural elements such as trees, rivers, and mountains. For example, Thor was associated with thunder, and his hammer, Mjolnir, was believed to control storms and lightning. Similarly, the goddess Nerthus was linked to fertility and the earth, with rituals involving plowing sacred fields to ensure a bountiful harvest. The reverence for these deified natural forces was expressed through various rituals, including food offerings, sacrifices, and festivals. Sacred groves were considered the dwelling places of these deities, and entering such spaces was often restricted to priests or those performing rituals. Ancient Greece In ancient Greece, many natural forces were personified and worshipped as gods and goddesses. For example, Poseidon was the god of the sea, controlling storms, earthquakes, and horses. Demeter, the goddess of agriculture, was believed to be responsible for the fertility of the earth and the changing seasons. Rituals dedicated to these deities often included offerings, sacrifices, and festivals like the Eleusinian Mysteries, which celebrated the cyclical nature of life, death, and rebirth in alignment with the agricultural calendar. The deification of natural forces in Greek religion reflects the deep connection between humans and the environment, where natural phenomena were seen as manifestations of divine power that needed to be respected and honored through ritual practices. Native American Traditions Among Native American tribes, natural forces were often deified and revered as powerful spiritual beings. The Great Spirit, a central figure in many Native American belief systems, was considered the creator and sustainer of all life, with control over the natural world. Specific tribes also worshipped particular natural forces, such as the Iroquois' reverence for Thunder Beings, who were believed to bring rain and fertility to the land. Rituals to honor these deities included dances, songs, and offerings. The Sun Dance, practiced by several Plains tribes, was a key ritual that involved fasting, dancing, and other ceremonies to seek the favor of the sun, considered a powerful life-giving force. In the Eastern World Hinduism In Hinduism, the deification of natural forces is evident in the worship of gods and goddesses associated with various elements of nature. 
Agni, the god of fire, is one of the most ancient and revered deities, representing the vital force of life and the medium through which offerings are made to other gods. Indra, the god of rain and thunderstorms, is another example of a natural force personified as a deity, with rituals performed to invoke his blessings for rainfall and agricultural prosperity. The concept of Prakriti, or nature, in Hindu philosophy further emphasizes the divine nature of the natural world. Rituals often involve offerings to rivers, trees, and mountains, which are seen as embodiments of the divine feminine energy, or Shakti. Shintoism in Japan Shinto, the indigenous religion of Japan, is fundamentally a form of nature worship where natural forces are deified as kami (spirits). The sun goddess Amaterasu is the most revered kami in Shinto, symbolizing life, growth, and the continuity of the Japanese nation. Mountains like Mount Fuji are also considered sacred, believed to be the dwelling places of powerful kami. Shinto rituals often involve purification rites, offerings of food and sake, and festivals like Matsuri that celebrate the natural forces and ensure their continued favor. Buddhism and Taoism In Mahayana Buddhism, nature worship is reflected in the reverence for sacred mountains and trees, such as the Bodhi tree, under which the Buddha attained enlightenment. Taoism, with its focus on harmony with the Tao (the natural way), venerates natural landscapes and elements as expressions of the divine. Laozi, the founder of Taoism, taught that the natural world and its forces should be revered as manifestations of the Tao, leading to the deification of mountains, rivers, and other natural elements. Criticism of "Nature Worship" English historian, Ronald Hutton, has been critical of the antiquity of Nature Worship since at least 1998 until the present. He has argued that the gods of Ancient Mediterranean were not Nature Deities of any sort; rather, they were gods of "civilization and human activity," meanwhile the "Earth-Mother goddesses" are characterized by him as mere literary figures as opposed to deities, because he believes they lack any temples dedicated to them or a priesthood to serve them. He strongly juxtaposes this view by differentiating ancient pagans from Neopagans and Wiccans who profess to be nature worshippers as an essential component of their faith, which he believes is unlike any other in recorded history. Despite having been charged by New Zealand Wiccan, Ben Whitmore, with having disenfranchised those Neopagans "who feel kinship and connection" with the gods and pagans of the Ancient World, Prof. Hutton has reprised these views, virtually verbatim, in the second edition of his book, Triumph of the Moon. Forms and aspects of nature worship See also Goddess worship (disambiguation) References Spirituality
Nature worship
[ "Biology" ]
1,259
[ "Behavior", "Human behavior", "Spirituality" ]
1,548,726
https://en.wikipedia.org/wiki/AES47
AES47 is a standard which describes a method for transporting AES3 professional digital audio streams over Asynchronous Transfer Mode (ATM) networks. The Audio Engineering Society (AES) published AES47 in 2002. The method described by AES47 is also published by the International Electrotechnical Commission as IEC 62365. Introduction Many professional audio systems are now combined with telecommunication and IT technologies to provide new functionality, flexibility and connectivity over both local and wide area networks. AES47 was developed to provide a standardised method of transporting standard AES3 digital audio over telecommunications networks with the quality of service required by many professional low-latency live audio uses. AES47 may be used directly between specialist audio devices or in combination with telecommunication and computer equipment with suitable network interfaces. In both cases, AES47 uses the same physical structured cabling used as standard by telecommunications networks. Common network protocols like Ethernet use large packet sizes, which produce a larger minimum latency; Asynchronous Transfer Mode divides data into cells with 48-byte payloads, which provide lower latency. History The original work was carried out at the British Broadcasting Corporation's R&D department and published as "White Paper 074", which established that this approach provides the necessary performance for professional media production. AES47 was originally published in 2002 and was republished with minor revisions in February 2006. Amendment 1 to AES47 was published in February 2009, adding code points in the ATM Adaptation Layer Parameters Information Element to signal that the time to which each audio sample relates can be identified as specified in AES53. The change in thinking from traditional ATM network design is not necessarily to use ATM to pass IP traffic (apart from management traffic) but to use AES47 in parallel with standard Ethernet structures to deal with extremely high-performance, secure media streams. AES47 has been developed to allow the simultaneous transport and switched distribution of a large number of AES3 linear audio streams at different sample frequencies. AES47 can support any of the standard AES3 sample rates and word sizes. AES11 Annex D (the November 2005 printing of AES11-2003) shows an example method of providing isochronous timing relationships for distributed AES3 structures over asynchronous networks such as AES47, where reference signals may be locked to common timing sources such as GPS. AES53 specifies how timing markers within AES47 can be used to associate an absolute time stamp with individual audio samples, as described in AES47 Amendment 1. An additional standard has been published by the Audio Engineering Society to allow AES3 digital audio carried as AES47 streams to be transported over standard physical Ethernet hardware; this additional standard is known as AES51-2006. AES47 details For minimum latency, AES47 uses "raw" ATM cells, ATM adaptation layer 0. Each ATM virtual circuit negotiates the parameters of a stream at connection time. In addition to the sample rate and number of channels (which may be more than the 2 supported by AES3), the negotiation covers the number of bits per sample and the presence of an optional data byte. The total must be 1, 2, 3, 4 or 6 bytes per sample, so that it evenly divides the 48-byte ATM cell payload. 
AES3 uses 4 bytes per sample (24 bits of sample plus the optional data byte), but AES47 supports additional formats. The optional data byte contains four "ancillary" bits corresponding to the AES3 VUCP bits. However, the P (parity) bit is replaced by a B bit which is set on the first sample of each audio block, and clear at all other times. This serves the same function as the B (or Z) synchronization preamble. The other half of the data byte contains three "data protection" bits for error control and a sequencing bit. The concatenation of the sequencing bits from all samples in a cell (combined little-endian) form a sequencing word of 8, 12, 16, or 24 bits. Only the first 12 bits are defined. The first four bits of the sequencing word are a sequencing number, used to detect dropped cells. This increments by 1 for each cell transmitted. The second four bits are for error detection, with bit 7 being an even parity bit for the first byte. The third four bits, if present, are a second sequencing number which can be used to align multiple virtual circuits. AES53 AES53 is a standard first published in October 2006 by the Audio Engineering Society that specifies how the timing markers specified in AES47 may be used to associate an absolute time-stamp with individual audio samples. A recommendation is made to refer these timestamps to the SMPTE epoch which in turn provides a reference to UTC and GPS time. It thus provides a way of aligning streams from disparate sources, including synchronizing audio to video, and also allows the total delay across a network to be controlled when the transit time of individual cells is unknown. This is most effective in systems where the audio is aligned with an absolute time reference such as GPS, but can also be used with a local reference. This standard may be studied by downloading a copy of the latest version from the AES standards web site as AES53-2018. See also AES51 Voice over ATM Audio over Ethernet References Audio engineering Networking standards Broadcast engineering Audio network protocols IEC standards Audio Engineering Society standards Asynchronous Transfer Mode
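The packing arithmetic described above can be sketched in a few lines of Python. The byte counts follow the text; the exact position of each bit within the optional data byte is an assumption made for illustration and is not the layout defined by the standard.

```python
CELL_PAYLOAD_BYTES = 48   # payload of a "raw" (AAL0) ATM cell

def samples_per_cell(bytes_per_sample):
    """AES47 allows 1, 2, 3, 4 or 6 bytes per sample precisely so that
    samples pack evenly into the 48-byte cell payload."""
    assert bytes_per_sample in (1, 2, 3, 4, 6)
    return CELL_PAYLOAD_BYTES // bytes_per_sample

def pack_data_byte(v, u, c, b, protection, seq_bit):
    """Assemble the optional per-sample data byte: four ancillary bits
    V, U, C and B (B marks the first sample of each audio block), three
    data-protection bits and one sequencing bit.  The bit positions chosen
    here are arbitrary and for illustration only."""
    assert all(bit in (0, 1) for bit in (v, u, c, b, seq_bit))
    assert 0 <= protection <= 7
    return (v << 7) | (u << 6) | (c << 5) | (b << 4) | (protection << 1) | seq_bit

print(samples_per_cell(4))     # 12 samples per cell for 24-bit audio plus a data byte
print(format(pack_data_byte(0, 0, 1, 1, 0b101, 1), "08b"))
```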
AES47
[ "Technology", "Engineering" ]
1,133
[ "Broadcast engineering", "Asynchronous Transfer Mode", "Computer standards", "Computer networks engineering", "IEC standards", "Audio Engineering Society standards", "Electronic engineering", "Networking standards", "Electrical engineering", "Audio engineering" ]
1,548,925
https://en.wikipedia.org/wiki/SAX%20J1808.4%E2%88%923658
The first accreting millisecond pulsar, SAX J1808.4−3658, was discovered in 1998 by the Italian-Dutch BeppoSAX satellite and revealed X-ray pulsations at the 401 Hz neutron star spin frequency when it was observed during a subsequent outburst in 1998 by NASA's RXTE satellite. The neutron star is orbited every 2.01 hours by a brown dwarf binary companion with a likely mass of 0.05 solar masses. X-ray burst oscillations and quasi-periodic oscillations, in addition to coherent X-ray pulsations, have been seen from SAX J1808.4−3658, making it a Rosetta stone for the interpretation of the timing behavior of low-mass X-ray binaries. These accreting millisecond X-ray pulsars are thought to be the evolutionary progenitors of recycled radio millisecond pulsars. A total of thirteen accreting millisecond X-ray pulsars had been discovered as of January 2011. Three of them are intermittent millisecond X-ray pulsars (HETE J1900.1-2455, Aql X-1 and SAX J1748.9-2021), i.e. they emit pulsations only sporadically during outbursts. On 21 August 2019 (UTC; 20 August in the US), the Neutron Star Interior Composition Explorer (NICER) spotted the brightest X-ray burst so far observed; it came from SAX J1808.4−3658. References Accreting millisecond pulsars Sagittarius (constellation) Sagittarii, V4580
SAX J1808.4−3658
[ "Astronomy" ]
358
[ "Sagittarius (constellation)", "Constellations" ]
1,549,329
https://en.wikipedia.org/wiki/Cuisenaire%20rods
Cuisenaire rods are mathematics learning aids for pupils that provide an interactive, hands-on way to explore mathematics and learn mathematical concepts, such as the four basic arithmetical operations, working with fractions and finding divisors. In the early 1950s, Caleb Gattegno popularised this set of coloured number rods created by Georges Cuisenaire (1891–1975), a Belgian primary school teacher, who called the rods réglettes. According to Gattegno, "Georges Cuisenaire showed in the early 1950s that pupils who had been taught traditionally, and were rated 'weak', took huge strides when they shifted to using the material. They became 'very good' at traditional arithmetic when they were allowed to manipulate the rods." History The educationalists Maria Montessori and Friedrich Fröbel had used rods to represent numbers, but it was Georges Cuisenaire who introduced the rods that were to be used across the world from the 1950s onwards. In 1952, he published Les nombres en couleurs, Numbers in Color, which outlined their use. Cuisenaire, a violin player, taught music as well as arithmetic in the primary school in Thuin. He wondered why children found it easy and enjoyable to pick up a tune and yet found mathematics neither easy nor enjoyable. These comparisons with music and its representation led Cuisenaire to experiment in 1931 with a set of ten rods sawn out of wood, with lengths from to . He painted each length of rod a different colour and began to use these in his teaching of arithmetic. The invention remained almost unknown outside the village of Thuin for about 23 years until, in April 1953, British mathematician and mathematics education specialist Caleb Gattegno was invited to see pupils using the rods in Thuin. At this point he had already founded the International Commission for the Study and Improvement of Mathematics Education (CIEAEM) and the Association of Teachers of Mathematics, but this marked a turning point in his understanding: Then, Cuisenaire took us to a table in one corner of the room where pupils were standing round a pile of colored sticks and doing sums which seemed to me to be unusually hard for children of that age. At this sight, all other impressions of the surrounding vanished, to be replaced by a growing excitement. After listening to Cuisenaire asking his first and second grade pupils questions and hearing their answers immediately and with complete self-assurance and accuracy, the excitement then turned into irrepressible enthusiasm and a sense of illumination. Gattegno named the rods "Cuisenaire rods" and began trialing and popularizing them. Seeing that the rods allowed pupils "to expand on their latent mathematical abilities in a creative and enjoyable fashion", Gattegno's pedagogy shifted radically as he began to stand back and allow pupils to take a leading role: Cuisenaire's gift of the rods led me to teach by non-interference making it necessary to watch and listen for the signs of truth that are made, but rarely recognized. While the material has found an important place in myriad teacher-centered lessons, Gattegno's student-centered practice also inspired a number of educators. The French-Canadian educator Madeleine Goutard in her 1963 Mathematics and Children, wrote: The teacher is not the person who teaches him what he does not know. He is the one who reveals the child to himself by making him more conscious of, and more creative with his own mind. 
The parents of a little girl of six who was using the Cuisenaire rods at school marveled at her knowledge and asked her: "Tell us how the teacher teaches you all this", to which the little girl replied: "The teacher teaches us nothing. We find everything out for ourselves." John Holt, in his 1964 How Children Fail, wrote: This work has changed most of my ideas about the way to use Cuisenaire rods and other materials. It seemed to me at first that we could use them as devices for packing in recipes much faster than before, and many teachers seem to be using them this way. But this is a great mistake. What we ought to do is use these materials to enable children to make for themselves, out of their own experience and discoveries, a solid and growing understanding of the ways in which numbers and the operations of arithmetic work. Our aim must be to build soundly, and if this means that we must build more slowly, so be it. Some things we will be able to do much earlier than we used to, fractions for example. Gattegno formed the Cuisenaire Company in Reading, England, in 1954, and by the end of the 1950s, Cuisenaire rods had been adopted by teachers in 10,000 schools in more than a hundred countries. The rods received wide use in the 1960s and 1970s. In 2000, the United States–based company Educational Teaching Aids (ETA) acquired the US Cuisenaire Company and formed ETA/Cuisenaire to sell Cuisenaire rods-related material. In 2004, Cuisenaire rods were featured in an exhibition of paintings and sculptures by New Zealand artist Michael Parekowhai. Rods Another arrangement, common in Eastern Europe, extended by two large (> 10 cm or 4 in) sizes of rods, is the following: Use in mathematics teaching The rods are used in teaching a variety of mathematical concepts, and with a wide age range of learners. Topics they are used for include: counting, sequences, patterns and algebraic reasoning; addition and subtraction (additive reasoning); multiplication and division (multiplicative reasoning); fractions, ratio and proportion; modular arithmetic leading to group theory. The Silent Way Though primarily used for mathematics, they have also become popular in language-teaching classrooms, particularly The Silent Way. They can be used: to demonstrate most grammatical structures such as prepositions of place, comparatives and superlatives, determiners, tenses, adverbs of time, manner, etc.; to show sentence and word stress, rising and falling intonation and word groupings; to create a visual model of constructs, for example the English verb tense system; to represent physical objects: clocks, floor-plans, maps, people, animals, fruit, tools, etc., which can and has led to the creation of stories. Other coloured rods In her first school, and in schools since then, Maria Montessori used coloured rods in the classroom to teach concepts of both mathematics and length. This is possibly the first instance of coloured rods being used in the classroom for this purpose. Catherine Stern also devised a set of coloured rods produced by staining wood with aesthetically pleasing colours, and published books on their use at around the same time as Cuisenaire and Gattegno. Her rods were different colours to Cuisenaire's, and also larger, with a unit cube rather than . She produced various resources to complement the rods, such as trays to arrange the rods in, and tracks to arrange them on. 
Tony Wing, in producing resources for Numicon, built on many of Stern's ideas, also making trays and tracks available for use with Cuisenaire rods. In 1961, Seton Pollock produced the Colour Factor system, consisting of rods from lengths . Based on the work of Cuisenaire and Gattegno, he had invented a unified system for logically assigning a color to any number. After white (1), the primary colors red, blue and yellow are assigned to the first three primes (2, 3 and 5). Higher primes (7, 11 etc.) are associated with darkening shades of grey. The colors of non-prime numbers are obtained by mixing the colors associated with their factors – this is the key concept. A patent is registered in Pollock's name for an "Apparatus for teaching or studying mathematics". The aesthetic and numerically comprehensive Color Factor system was marketed for some years by Seton Pollock's family, before being conveyed to the educational publishing house Edward Arnold. The colors of Pollock's system were named distinctively using, for example, "scarlet" instead of "red", and "amber" instead of "orange". They are listed below. See also Number line References Further reading Cuisenaire rods in the language classroom – article by John Mullen Maths with Rods - 40 exercise tabs to play with parents – downloadable book with Creative Commons License Learn Fractions with Cuisenaire Rods. Introduction External links A 1961 film from the National Film Board of Canada. Caleb Gattegno conducting a demonstration lesson with Cuisenaire rods: In 3 parts on YouTube Online Cuisenaire rods (NumBlox Freeplay) Online interactive Cuisenaire rods The Cuisenaire Company – registered UK trademark holder, with background to Cuisenaire and Gattegno. La méthode Cuisenaire – Les nombres en Couleurs – site officiel (in French) History of the number rods from 1806 to 2020 (in French). Belgian inventions Language education materials Mathematical manipulatives Cuisenaire
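The colour-assignment rule at the heart of Pollock's Colour Factor system, in which a number's colour is composed from the colours of its prime factors, can be sketched as follows. The RGB values, the flat grey used for higher primes, and the channel-averaging used for "mixing" are illustrative assumptions, not Pollock's actual palette.

```python
# Illustrative sketch of the Colour Factor idea: a number's colour is built
# from the colours of its prime factors.  Colour values are assumptions.

BASE_COLOURS = {1: (255, 255, 255),   # white
                2: (255, 0, 0),       # red
                3: (0, 0, 255),       # blue
                5: (255, 255, 0)}     # yellow

def prime_factors(n):
    """Return the prime factors of n, with multiplicity."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def colour_of(n):
    if n == 1:
        return BASE_COLOURS[1]
    # higher primes (7, 11, ...) darken towards grey in Pollock's scheme;
    # a flat mid-grey stands in for them here
    parts = [BASE_COLOURS.get(p, (128, 128, 128)) for p in prime_factors(n)]
    # mix by averaging each RGB channel over the factors
    return tuple(sum(channel) // len(parts) for channel in zip(*parts))

print(colour_of(6))    # mix of red (2) and blue (3)
print(colour_of(12))   # factors 2, 2, 3
```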
Cuisenaire rods
[ "Mathematics" ]
1,846
[ "Recreational mathematics", "Mathematical manipulatives" ]
1,549,339
https://en.wikipedia.org/wiki/NOTAR
NOTAR ("no tail rotor") is a helicopter system which avoids the use of a tail rotor. It was developed by McDonnell Douglas Helicopter Systems (through their acquisition of Hughes Helicopters). The system uses a fan inside the tail boom to build a high volume of low-pressure air, which exits through two slots and creates a boundary layer flow of air along the tailboom utilizing the Coandă effect. The boundary layer changes the direction of airflow around the tailboom, creating thrust opposite the motion imparted to the fuselage by the torque effect of the main rotor. Directional yaw control is gained through a vented, rotating drum at the end of the tailboom, called the direct jet thruster. Advocates of NOTAR assert that the system offers quieter and safer operation than a traditional tail rotor. Development The use of directed air to provide anti-torque control had been tested as early as 1945 in the British Cierva W.9. During 1957, a Spanish prototype designed and built by Aerotecnica flew using exhaust gases from the turbine instead of a tail rotor. This model was designated as Aerotecnica AC-14. The Fiat 7005 used a pusher propeller that blew against a cascade of tail vanes at the rear of its fuselage. Development of the NOTAR system dates back to 1975, when engineers at Hughes Helicopters began concept development work. On December 17, 1981, Hughes flew an OH-6A fitted with NOTAR for the first time. The OH-6A helicopter (serial number 65-12917) was supplied by the U.S. Army for Hughes to develop the NOTAR technology and was the second OH-6 built by Hughes for the U.S. Army. A more heavily modified version of the prototype demonstrator first flew in March 1986 (by which time McDonnell Douglas had acquired Hughes Helicopters). The original prototype last flew in June 1986 and is now at the U.S. Army Aviation Museum in Fort Novosel, Alabama. A production model NOTAR 520N (N520NT) was later produced and first flew on May 1, 1990. It collided with an Apache AH-64D and crashed on September 27, 1994 while flying as a chase aircraft for the Apache. Concept Although the concept took over three years to refine, the NOTAR system is simple in theory and works to provide some directional control using the Coandă effect. A variable pitch fan is enclosed in the aft fuselage section immediately forward of the tail boom and driven by the main rotor transmission. This fan forces low pressure air through two slots on the right side of the tailboom, causing the downwash from the main rotor to hug the tail boom, producing lift, and thus a measure of directional control. This is augmented by a direct jet thruster and vertical stabilisers. Benefits of the NOTAR system include increased safety (the tail rotor being vulnerable), and greatly reduced external noise as tail rotors on helicopters produce much of the aircraft's sound. NOTAR-equipped helicopters are among the quietest helicopters certified by FAA. Applications There are several production helicopters that utilize the NOTAR system, which are produced by MD Helicopters: MD 520N: a NOTAR variant of the Hughes/MD500 series helicopter MD 600N: a larger version of the MD 520N MD Explorer: a twin-engine, eight-seat light helicopter See also Coaxial rotors Fenestron Intermeshing rotors (synchropter) Tandem rotors Transverse rotors Multirotors Quadrotors Tiltrotors Tip jet rotor Youngcopter Neo Notes References Aerospace engineering Helicopter components
NOTAR
[ "Engineering" ]
738
[ "Aerospace engineering" ]
1,549,595
https://en.wikipedia.org/wiki/Index%20calculus%20algorithm
In computational number theory, the index calculus algorithm is a probabilistic algorithm for computing discrete logarithms. Dedicated to the discrete logarithm in where is a prime, index calculus leads to a family of algorithms adapted to finite fields and to some families of elliptic curves. The algorithm collects relations among the discrete logarithms of small primes, computes them by a linear algebra procedure and finally expresses the desired discrete logarithm with respect to the discrete logarithms of small primes. Description Roughly speaking, the discrete log problem asks us to find an x such that , where g, h, and the modulus n are given. The algorithm (described in detail below) applies to the group where q is prime. It requires a factor base as input. This factor base is usually chosen to be the number −1 and the first r primes starting with 2. From the point of view of efficiency, we want this factor base to be small, but in order to solve the discrete log for a large group we require the factor base to be (relatively) large. In practical implementations of the algorithm, those conflicting objectives are compromised one way or another. The algorithm is performed in three stages. The first two stages depend only on the generator g and prime modulus q, and find the discrete logarithms of a factor base of r small primes. The third stage finds the discrete log of the desired number h in terms of the discrete logs of the factor base. The first stage consists of searching for a set of r linearly independent relations between the factor base and power of the generator g. Each relation contributes one equation to a system of linear equations in r unknowns, namely the discrete logarithms of the r primes in the factor base. This stage is embarrassingly parallel and easy to divide among many computers. The second stage solves the system of linear equations to compute the discrete logs of the factor base. A system of hundreds of thousands or millions of equations is a significant computation requiring large amounts of memory, and it is not embarrassingly parallel, so a supercomputer is typically used. This was considered a minor step compared to the others for smaller discrete log computations. However, larger discrete logarithm records were made possible only by shifting the work away from the linear algebra and onto the sieve (i.e., increasing the number of equations while reducing the number of variables). The third stage searches for a power s of the generator g which, when multiplied by the argument h, may be factored in terms of the factor base gsh = (−1)f0 2f1 3f2···prfr. Finally, in an operation too simple to really be called a fourth stage, the results of the second and third stages can be rearranged by simple algebraic manipulation to work out the desired discrete logarithm x = f0logg(−1) + f1logg2 + f2logg3 + ··· + frloggpr − s. The first and third stages are both embarrassingly parallel, and in fact the third stage does not depend on the results of the first two stages, so it may be done in parallel with them. The choice of the factor base size r is critical, and the details are too intricate to explain here. The larger the factor base, the easier it is to find relations in stage 1, and the easier it is to complete stage 3, but the more relations you need before you can proceed to stage 2, and the more difficult stage 2 is. The relative availability of computers suitable for the different types of computation required for stages 1 and 2 is also important. 
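For concreteness, the stages can be run end to end on a toy example: find x with g^x ≡ h (mod q) for a small prime q. In the sketch below the factor-base logarithms are simply brute-forced, standing in for the relation-collection and linear-algebra stages (which a real implementation solves modulo q − 1); only the third stage, the search for a power s such that g^s·h is smooth over the factor base, is shown as described. The specific values of q, g and h are arbitrary choices for the demonstration.

```python
# Toy end-to-end index calculus over a small prime field (illustrative only).

def trial_factor(n, base):
    """Try to write n as a product of primes from `base`.
    Returns the exponent vector, or None if n is not smooth over the base."""
    exps = []
    for p in base:
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        exps.append(e)
    return exps if n == 1 else None

q = 1019                      # prime modulus
g = 2                         # a generator of the multiplicative group mod q
h = 661                       # we want x with g**x = h (mod q)
factor_base = [2, 3, 5, 7, 11]

# Stand-in for stages 1 and 2: discrete logs of the factor-base primes,
# found here by brute force instead of relations plus linear algebra.
dlog = {p: next(k for k in range(q - 1) if pow(g, k, q) == p)
        for p in factor_base}

# Stage 3: find s such that (g**s * h) mod q factors over the factor base.
for s in range(1, q - 1):
    exps = trial_factor(pow(g, s, q) * h % q, factor_base)
    if exps is not None:
        # Assemble the answer: x = sum f_i * log(p_i) - s  (mod q - 1)
        x = (sum(e * dlog[p] for e, p in zip(exps, factor_base)) - s) % (q - 1)
        break

print(x, pow(g, x, q) == h)   # x solves g**x = h (mod q)
```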
Applications in other groups The lack of the notion of prime elements in the group of points on elliptic curves makes it impossible to find an efficient factor base to run index calculus method as presented here in these groups. Therefore this algorithm is incapable of solving discrete logarithms efficiently in elliptic curve groups. However: For special kinds of curves (so called supersingular elliptic curves) there are specialized algorithms for solving the problem faster than with generic methods. While the use of these special curves can easily be avoided, in 2009 it has been proven that for certain fields the discrete logarithm problem in the group of points on general elliptic curves over these fields can be solved faster than with generic methods. The algorithms are indeed adaptations of the index calculus method. The algorithm Input: Discrete logarithm generator , modulus and argument . Factor base , of length . Output: such that . relations ← empty_list for Using an integer factorization algorithm optimized for smooth numbers, try to factor (Euclidean residue) using the factor base, i.e. find 's such that Each time a factorization is found: Store and the computed 's as a vector (this is a called a relation) If this relation is linearly independent to the other relations: Add it to the list of relations If there are at least relations, exit loop Form a matrix whose rows are the relations Obtain the reduced echelon form of the matrix The first element in the last column is the discrete log of and the second element is the discrete log of and so on for Try to factor over the factor base When a factorization is found: Output Complexity Assuming an optimal selection of the factor base, the expected running time (using L-notation) of the index-calculus algorithm can be stated as . History The basic idea of the algorithm is due to Western and Miller (1968), which ultimately relies on ideas from Kraitchik (1922). The first practical implementations followed the 1976 introduction of the Diffie-Hellman cryptosystem which relies on the discrete logarithm. Merkle's Stanford University dissertation (1979) was credited by Pohlig (1977) and Hellman and Reyneri (1983), who also made improvements to the implementation. Adleman optimized the algorithm and presented it in the present form. The Index Calculus family Index Calculus inspired a large family of algorithms. In finite fields with for some prime , the state-of-art algorithms are the Number Field Sieve for Discrete Logarithms, , when is large compared to , the function field sieve, , and Joux, for , when is small compared to and the Number Field Sieve in High Degree, for when is middle-sided. Discrete logarithm in some families of elliptic curves can be solved in time for , but the general case remains exponential. External links Discrete logarithms in finite fields and their cryptographic significance, by Andrew Odlyzko Discrete Logarithm Problem, by Chris Studholme, including the June 21, 2002 paper "The Discrete Log Problem". Notes Group theory
Index calculus algorithm
[ "Mathematics" ]
1,392
[ "Group theory", "Fields of abstract algebra" ]
1,549,715
https://en.wikipedia.org/wiki/Polysulfide
Polysulfides are a class of chemical compounds derived from anionic chains of sulfur atoms. There are two main classes of polysulfides: inorganic and organic. The inorganic polysulfides have the general formula . These anions are the conjugate bases of polysulfanes . Organic polysulfides generally have the formulae , where R is an alkyl or aryl group. Polysulfide salts and complexes The alkali metal polysulfides arise by treatment of a solution of the sulfide with elemental sulfur, e.g. sodium sulfide to sodium polysulfide: In some cases, these anions have been obtained as organic salts, which are soluble in organic solvents. The energy released in the reaction of sodium and elemental sulfur is the basis of battery technology. The sodium–sulfur battery and the lithium–sulfur battery require high temperatures to maintain liquid polysulfide and -conductive membranes that are unreactive toward sodium, sulfur, and sodium sulfide. Polysulfides are ligands in coordination chemistry. Examples of transition metal polysulfido complexes include , , and . Main group elements also form polysulfides. Organic polysulfides In commerce, the term "polysulfide" usually refers to a class of polymers with alternating chains of several sulfur atoms and hydrocarbons. They have the formula . In this formula n indicates the number of sulfur atoms (or "rank"). Polysulfide polymers can be synthesized by condensation polymerization reactions between organic dihalides and alkali metal salts of polysulfide anions: Dihalides used in this condensation polymerization are dichloroalkanes such as 1,2-dichloroethane, bis(2-chloroethoxy)methane (), and 1,3-dichloropropane. The polymers are called thiokols. In some cases, polysulfide polymers can be formed by ring-opening polymerization reactions. Polysulfide polymers are also prepared by the addition of polysulfanes to alkenes. An idealized equation is: In reality, homogeneous samples of are difficult to prepare. Polysulfide polymers are insoluble in water, oils, and many other organic solvents. Because of their solvent resistance, these materials find use as sealants to fill the joints in pavement, automotive window glass, and aircraft structures. Polymers containing one or two sulfur atoms separated by hydrocarbon sequences are usually not classified polysulfides, e.g. poly(p-phenylene) sulfide . Polysulfides in vulcanized rubber Many commercial elastomers contain polysulfides as crosslinks. These crosslinks interconnect neighboring polymer chains, thereby conferring rigidity. The degree of rigidity is related to the number of crosslinks. Elastomers, therefore, have a characteristic ability to return to their original shape after being stretched or compressed. Because of this memory for their original cured shape, elastomers are commonly referred to as rubbers. The process of crosslinking the polymer chains in these polymers with sulfur is called vulcanization. The sulfur chains attach themselves to the allylic carbon atoms, which are adjacent to C=C linkages. Vulcanization is a step in the processing of several classes of rubbers, including polychloroprene (Neoprene), styrene-butadiene, and polyisoprene, which is chemically similar to natural rubber. Charles Goodyear's discovery of vulcanization, involving the heating of polyisoprene with sulfur, was revolutionary because it converted a sticky and almost useless material into an elastomer that could be fabricated into useful products. 
Occurrence in gas giants In addition to water and ammonia, the clouds in the atmospheres of the gas giant planets contain ammonium sulfides. The reddish-brownish clouds are attributed to polysulfides, arising from the exposure of the ammonium sulfides to light. Properties Polysulfides, like sulfides, can induce stress corrosion cracking in carbon steel and stainless steel. See also References Sulfur compounds Anions Inorganic polymers Corrosion Polysulfides
Polysulfide
[ "Physics", "Chemistry", "Materials_science" ]
872
[ "Matter", "Inorganic compounds", "Anions", "Metallurgy", "Inorganic polymers", "Corrosion", "Electrochemistry", "Materials degradation", "Ions" ]
1,549,922
https://en.wikipedia.org/wiki/136%20%28number%29
136 (one hundred [and] thirty-six) is the natural number following 135 and preceding 137. In mathematics 136 is a refactorable number and a composite number. External links 136 cats (video) References Integers
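A refactorable number is one that is divisible by the number of its own divisors, a property that can be checked directly with a short snippet:

```python
def is_refactorable(n):
    """A refactorable (tau) number is divisible by its count of divisors."""
    divisor_count = sum(1 for d in range(1, n + 1) if n % d == 0)
    return n % divisor_count == 0

print(is_refactorable(136))   # True: 136 has 8 divisors and 136 = 8 * 17
```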
136 (number)
[ "Mathematics" ]
47
[ "Elementary mathematics", "Integers", "Mathematical objects", "Numbers" ]
1,549,929
https://en.wikipedia.org/wiki/173%20%28number%29
173 (one hundred [and] seventy-three) is the natural number following 172 and preceding 174. In mathematics 173 is: an odd number. a deficient number. an odious number. a balanced prime. an Eisenstein prime with no imaginary part. a Sophie Germain prime. a Pythagorean prime. a Higgs prime. an isolated prime. a regular prime. a sexy prime. a truncatable prime. an inconsummate number. the sum of 2 squares: 2² + 13². the sum of three consecutive prime numbers: 53 + 59 + 61. a palindromic number in base 3 (20102₃) and base 9 (212₉). the 40th prime number, following 167 and preceding 179. External links Number Facts and Trivia: 173 Prime curiosities: 173 Number Gossip: 173 References Integers
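Several of the listed properties can be verified with a few lines of Python (an illustrative check; the trial-division primality test is adequate only for numbers this small):

```python
def is_prime(n):
    """Trial-division primality test, fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

n = 173
print(is_prime(n))                          # True
print(is_prime(2 * n + 1))                  # True: 173 is a Sophie Germain prime (347 is prime)
print(is_prime(n + 6))                      # True: 173 and 179 form a sexy prime pair
print(2**2 + 13**2 == n)                    # True: sum of two squares
print(53 + 59 + 61 == n)                    # True: sum of three consecutive primes

def digits(m, base):
    """Digits of m in the given base, most significant first."""
    out = []
    while m:
        out.append(m % base)
        m //= base
    return out[::-1]

print(digits(n, 3), digits(n, 3) == digits(n, 3)[::-1])   # [2, 0, 1, 0, 2], palindromic
print(digits(n, 9), digits(n, 9) == digits(n, 9)[::-1])   # [2, 1, 2], palindromic
```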
173 (number)
[ "Mathematics" ]
174
[ "Elementary mathematics", "Integers", "Mathematical objects", "Numbers" ]
1,549,946
https://en.wikipedia.org/wiki/Ethyl%20tert-butyl%20ether
Ethyl tertiary-butyl ether (ETBE), also known as ethyl tert-butyl ether, is commonly used as an oxygenate gasoline additive in the production of gasoline from crude oil. ETBE offers equal or greater air quality benefits than ethanol, while being technically and logistically less challenging. Unlike ethanol, ETBE does not induce evaporation of gasoline, which is one of the causes of smog, and does not absorb moisture from the atmosphere. Production Ethyl tert-butyl ether is manufactured industrially by the acidic etherification of isobutylene with ethanol at a temperature of 30–110 °C and a pressure of 0.8–1.3 MPa. The reaction is carried out with an acidic ion-exchange resin as a catalyst. Suitable reactors are fixed-bed reactors, such as tube-bundle or circulation reactors, in which the reflux can optionally be cooled. Ethanol, produced by fermentation and distillation, is more expensive than methanol, which is derived from natural gas. Therefore, MTBE, made from methanol, is cheaper than ETBE, made from ethanol. See also Methyl tert-butyl ether (MTBE) tert-Amyl methyl ether (TAME) Tetraethyllead (TEL) List of gasoline additives References External links EC Joint Research Centre ETBE risk assessment report Directive 98/70/EC of the European Parliament and of the Council of 13 October 1998 relating to the quality of petrol and diesel fuels and amending Council Directive 93/12/EEC An assessment of the impact of ethanol-blended petrol on the total NMVOC emission from road transport in selected countries Commodity chemicals Dialkyl ethers Ether solvents Oxygenates Pollutants Tert-butyl compounds
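The etherification combines one molecule of isobutylene (C4H8) with one of ethanol (C2H5OH) to give one of ETBE (C6H14O). Using standard molar masses, which are general chemistry figures rather than values taken from this article, a rough mass balance at full conversion looks like this:

```python
# Approximate molar masses in g/mol from standard atomic weights.
M_ISOBUTYLENE = 4 * 12.011 + 8 * 1.008               # C4H8   ~ 56.1
M_ETHANOL     = 2 * 12.011 + 6 * 1.008 + 15.999      # C2H6O  ~ 46.1
M_ETBE        = 6 * 12.011 + 14 * 1.008 + 15.999     # C6H14O ~ 102.2

# 1:1 addition with no by-product, so at 100 % conversion every mole of
# isobutylene yields one mole of ETBE.
tonnes_isobutylene = 1.0
tonnes_etbe = tonnes_isobutylene * M_ETBE / M_ISOBUTYLENE
tonnes_ethanol_needed = tonnes_isobutylene * M_ETHANOL / M_ISOBUTYLENE

print(f"{tonnes_etbe:.2f} t ETBE from 1 t isobutylene")     # ~1.82 t
print(f"{tonnes_ethanol_needed:.2f} t ethanol consumed")    # ~0.82 t
```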
Ethyl tert-butyl ether
[ "Chemistry" ]
364
[ "Commodity chemicals", "Products of chemical industry" ]
1,550,259
https://en.wikipedia.org/wiki/Minimum%20viable%20population
Minimum viable population (MVP) is a lower bound on the population of a species, such that it can survive in the wild. This term is commonly used in the fields of biology, ecology, and conservation biology. MVP refers to the smallest possible size at which a biological population can exist without facing extinction from natural disasters or demographic, environmental, or genetic stochasticity. The term "population" is defined as a group of interbreeding individuals in similar geographic area that undergo negligible gene flow with other groups of the species. Typically, MVP is used to refer to a wild population, but can also be used for ex situ conservation (Zoo populations). Estimation There is no unique definition of what constitutes a sufficient population for the continuation of a species, because whether a species survives will depend to some extent on random events. Thus, any calculation of a minimum viable population (MVP) will depend on the population projection model used. A set of random (stochastic) projections might be used to estimate the initial population size needed (based on the assumptions in the model) for there to be, (for example) a 95% or 99% probability of survival 1,000 years into the future. Some models use generations as a unit of time rather than years in order to maintain consistency between taxa. These projections (population viability analyses, or PVA) use computer simulations to model populations using demographic and environmental information to project future population dynamics. The probability assigned to a PVA is arrived at after repeating the environmental simulation thousands of times. Extinction Small populations are at a greater risk of extinction than larger populations due to small populations having less capacity to recover from adverse stochastic (i.e. random) events. Such events may be divided into four sources: Demographic stochasticity Demographic stochasticity is often only a driving force toward extinction in populations with fewer than 50 individuals. Random events influence the fecundity and survival of individuals in a population, and in larger populations, these events tend to stabilize toward a steady growth rate. However, in small populations there is much more relative variance, which can in turn cause extinction. Environmental stochasticity Small, random changes in the abiotic and biotic components of the ecosystem that a population inhabits fall under environmental stochasticity. Examples are changes in climate over time and the arrival of another species that competes for resources. Unlike demographic and genetic stochasticity, environmental stochasticity tends to affect populations of all sizes. Natural catastrophes An extension of environmental stochasticity, natural disasters are random, large scale events such as blizzards, droughts, storms, or fires that directly reduce a population within a short period of time. Natural catastrophes are the hardest events to predict, and MVP models often have difficulty factoring them in. Genetic stochasticity Small populations are vulnerable to genetic stochasticity, the random change in allele frequencies over time, also known as genetic drift. Genetic drift can cause alleles to disappear from a population, and this lowers genetic diversity. In small populations, low genetic diversity can increase rates of inbreeding, which can result in inbreeding depression, in which a population made up of genetically similar individuals loses fitness. 
Inbreeding in a population reduces fitness by causing deleterious recessive alleles to become more common in the population, and also by reducing adaptive potential. The so-called "50/500 rule", where a population needs 50 individuals to prevent inbreeding depression, and 500 individuals to guard against genetic drift at-large, is an oft-used benchmark for an MVP, but a recent study suggests that this guideline is not applicable across a wide diversity of taxa. Application MVP does not take external intervention into account. Thus, it is useful for conservation managers and environmentalists; a population may be increased above the MVP using a captive breeding program or by bringing other members of the species in from other reserves. There is naturally some debate on the accuracy of PVAs, since a wide variety of assumptions are generally required for forecasting; however, the important consideration is not absolute accuracy but the promulgation of the concept that each species indeed has an MVP, which at least can be approximated for the sake of conservation biology and Biodiversity Action Plans. There is a marked trend for insularity, surviving genetic bottlenecks, and r-strategy to allow far lower MVPs than average. Conversely, taxa easily affected by inbreeding depression –having high MVPs – are often decidedly K-strategists, with low population densities occurring over a wide range. An MVP of 500 to 1,000 has often been given as an average for terrestrial vertebrates when inbreeding or genetic variability is ignored. When inbreeding effects are included, estimates of MVP for many species are in the thousands. Based on a meta-analysis of reported values in the literature for many species, Traill et al. reported concerning vertebrates "a cross-species frequency distribution of MVP with a median of 4169 individuals (95% CI = 3577–5129)." See also Effective population size Inbreeding depression Human population Metapopulation Rescue effect References Ecological metrics Biostatistics Environmental terminology Habitat
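A population viability analysis of the kind described above repeats a stochastic projection many times and reports the fraction of runs in which the population survives the chosen horizon. The toy model below illustrates the idea only; its growth rate, variance, catastrophe probability and time horizon are invented parameters, not values from any published PVA.

```python
import numpy as np

rng = np.random.default_rng(1)

def survives(n0, years=100, mean_growth=1.02, sd_growth=0.15, catastrophe_p=0.01):
    """One stochastic projection of a population starting at n0 individuals.
    Environmental stochasticity: a random multiplier on growth each year.
    Demographic stochasticity: a Poisson draw around the expected size.
    Natural catastrophes: rare crashes that cut the year's growth sharply."""
    n = n0
    for _ in range(years):
        growth = rng.normal(mean_growth, sd_growth)
        if rng.random() < catastrophe_p:
            growth *= 0.3
        n = rng.poisson(max(n * growth, 0.0))
        if n == 0:
            return False          # extinction
    return True

def survival_probability(n0, runs=2000, **kwargs):
    """Fraction of simulated projections that survive the horizon."""
    return sum(survives(n0, **kwargs) for _ in range(runs)) / runs

for n0 in (10, 50, 200, 1000):
    print(n0, survival_probability(n0))   # survival rises with initial size
```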
Minimum viable population
[ "Mathematics" ]
1,074
[ "Ecological metrics", "Quantity", "Metrics" ]
1,550,261
https://en.wikipedia.org/wiki/API%20gravity
The American Petroleum Institute gravity, or API gravity, is a measure of how heavy or light a petroleum liquid is compared to water: if its API gravity is greater than 10, it is lighter and floats on water; if less than 10, it is heavier and sinks. API gravity is thus an inverse measure of a petroleum liquid's density relative to that of water (also known as specific gravity). It is used to compare densities of petroleum liquids. For example, if one petroleum liquid is less dense than another, it has a greater API gravity. Although API gravity is mathematically a dimensionless quantity (see the formula below), it is referred to as being in 'degrees'. API gravity is graduated in degrees on a hydrometer instrument. API gravity values of most petroleum liquids fall between 10 and 70 degrees. In 1916, the U.S. National Bureau of Standards accepted the Baumé scale, which had been developed in France in 1768, as the U.S. standard for measuring the specific gravity of liquids less dense than water. Investigation by the U.S. National Academy of Sciences found major errors in salinity and temperature controls that had caused serious variations in published values. Hydrometers in the U.S. had been manufactured and distributed widely with a modulus of 141.5 instead of the Baumé scale modulus of 140. The scale was so firmly established that, by 1921, the remedy implemented by the American Petroleum Institute was to create the API gravity scale, recognizing the scale that was actually being used. API gravity formulas The formula to calculate API gravity from specific gravity (SG) is: Conversely, the specific gravity of petroleum liquids can be derived from their API gravity value as Thus, a heavy oil with a specific gravity of 1.0 (i.e., with the same density as pure water at 60 °F) has an API gravity of: Using API gravity to calculate barrels of crude oil per metric ton In the oil industry, quantities of crude oil are often measured in metric tons. One can calculate the approximate number of barrels per metric ton for a given crude oil based on its API gravity: For example, a metric ton of West Texas Intermediate (39.6° API) has a volume of about 7.6 barrels. Measurement of API gravity from its specific gravity To derive the API gravity, the specific gravity (i.e., density relative to water) is first measured using either the hydrometer, detailed in ASTM D1298 or with the oscillating U-tube method detailed in ASTM D4052. Density adjustments at different temperatures, corrections for soda-lime glass expansion and contraction and meniscus corrections for opaque oils are detailed in the Petroleum Measurement Tables, details of usage specified in ASTM D1250. The specific gravity is defined by the formula below. With the formula presented in the previous section, the API gravity can be readily calculated. When converting oil density to specific gravity using the above definition, it is important to use the correct density of water, according to the standard conditions used when the measurement was made. The official density of water at 60 °F according to the 2008 edition of ASTM D1250 is 999.016 kg/m3. The 1980 value is 999.012 kg/m3. In some cases the standard conditions may be 15 °C (59 °F) and not 60 °F (15.56 °C), in which case a different value for the water density would be appropriate (see standard conditions for temperature and pressure). Direct measurement of API gravity (hydrometer method) There are advantages to field testing and on-board conversion of measured volumes to volume correction. 
This method is detailed in ASTM D287. The hydrometer method is a standard technique for directly measuring API gravity of petroleum and petroleum products. This method is based on the principle of buoyancy and utilizes a specially calibrated hydrometer to determine the API gravity of a liquid sample. The procedure typically involves the following steps: Sample preparation: The petroleum sample is brought to a standard temperature, usually 60°F (15.6°C), to ensure consistency in measurements across different samples and conditions. Hydrometer selection: An appropriate API gravity hydrometer is chosen based on the expected range of the sample. These hydrometers are typically calibrated to read API gravity directly. Measurement: The hydrometer is gently lowered into the sample contained in a cylindrical vessel. It is allowed to float freely until it reaches equilibrium. Reading: The API gravity is read at the point where the surface of the liquid intersects the hydrometer scale. For maximum accuracy, the reading is taken at the bottom of the meniscus formed by the liquid on the hydrometer stem. Temperature correction: If the measurement is not performed at the standard temperature, a correction factor is applied to adjust the reading to the equivalent value at 60°F. The hydrometer method is widely used due to its simplicity and low cost. However, it requires a relatively large sample volume and may not be suitable for highly viscous or opaque fluids. Proper cleaning and handling of the hydrometer are crucial to maintain accuracy, and for volatile liquids, special precautions may be necessary to prevent evaporation during measurement. Classifications or grades Generally speaking, oil with an API gravity between 40 and 45° commands the highest prices. Above 45°, the molecular chains become shorter and less valuable to refineries. Crude oil is classified as light, medium, or heavy according to its measured API gravity. Light crude oil has an API gravity higher than 31.1° (i.e., less than 870 kg/m3) Medium oil has an API gravity between 22.3 and 31.1° (i.e., 870 to 920 kg/m3) Heavy crude oil has an API gravity below 22.3° (i.e., 920 to 1000 kg/m3) Extra heavy oil has an API gravity below 10.0° (i.e., greater than 1000 kg/m3) However, not all parties use the same grading. The United States Geological Survey uses slightly different ranges. Crude oil with API gravity less than 10° is referred to as extra heavy oil or bitumen. Bitumen derived from oil sands deposits in Alberta, Canada, has an API gravity of around 8°. It can be diluted with lighter hydrocarbons to produce diluted bitumen, which has an API gravity of less than 22.3°, or further "upgraded" to an API gravity of 31 to 33° as synthetic crude. References External links Comments on API gravity adjustment scale Instructions for using a glass hydrometer measured in API gravity Units of density Physical quantities Petroleum geology Petroleum production Gravity
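The conversions described above can be collected into a short script. This is an illustrative sketch only: the barrel volume and the 2008 ASTM D1250 water density are the commonly quoted constants, and the barrels-per-metric-ton figure is an approximation rather than a custody-transfer calculation.

```python
BARREL_M3 = 0.158987          # volume of one oil barrel in cubic metres
WATER_DENSITY_60F = 999.016   # kg/m3 at 60 degF, per ASTM D1250 (2008 edition)

def api_from_sg(sg):
    """API gravity in degrees from specific gravity at 60 degF."""
    return 141.5 / sg - 131.5

def sg_from_api(api):
    """Specific gravity at 60 degF from API gravity in degrees."""
    return 141.5 / (api + 131.5)

def barrels_per_metric_ton(api):
    """Approximate number of barrels in one metric ton of crude of the given API gravity."""
    density_kg_m3 = sg_from_api(api) * WATER_DENSITY_60F
    volume_m3 = 1000.0 / density_kg_m3
    return volume_m3 / BARREL_M3

if __name__ == "__main__":
    print(api_from_sg(1.0))                         # heavy oil as dense as water -> 10.0 degrees API
    print(round(barrels_per_metric_ton(39.6), 1))   # West Texas Intermediate -> about 7.6 barrels
```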
API gravity
[ "Physics", "Chemistry", "Mathematics" ]
1,363
[ "Physical phenomena", "Units of density", "Physical quantities", "Quantity", "Petroleum", "Density", "Petroleum geology", "Physical properties", "Units of measurement" ]
1,550,377
https://en.wikipedia.org/wiki/Count%20of%20the%20Saxon%20Shore
The Count of the Saxon Shore for Britain (Latin: comes litoris Saxonici per Britanniam) was the head of the Saxon Shore military command of the later Roman Empire. The post was possibly created during the reign of Constantine I, and was probably in existence by AD 367 when Nectaridus is elliptically referred to as such a leader by Ammianus Marcellinus. The Count's remit covered the southern and eastern coasts of Roman Britain during a period of increasing maritime raids from barbarian tribes outside the empire. The Count was one of three commands covering Britain at the time, along with the northern Dux Britanniarum and central Comes Britanniarum. Originally, the command may have covered both sides of the English Channel as well as Britain's western coast, as Carausius's position had, but by the end of the 4th century the role had been diminished and Gaul had its own dux tractus Armoricani and dux Belgicae Secundae. In 367, a series of invasions from Picts, Franks, Saxons, Scots and Attacotti appears to have defeated the army of Britain and resulted in the death of Nectaridus. Under Count Theodosius's reforms, the command was reorganised slightly. Although Ammianus speaks of a 'conspiracy of the savages,' he states that the Saxons and Franks attacked the Gallic (French) regions, while in Britain, the savages in question were only Picts, Scots and Attacotti. Eutropius had already spoken of the channel being cleared by Carausius, since the Armorican and Belgian coasts had been 'infested' with Franks and Saxons. The 5th-century Notitia Dignitatum lists the names of the Saxon Shore forts, from Norfolk to Hampshire, that were under the Count's command. Further stations up the North Sea coast were probably also his responsibility. The forces he controlled were classified as limitanei, or frontier troops. In 401 many of his soldiers appear to have been withdrawn for the defence of Italy, rendering Britain much more vulnerable to attack. According to the Anglo-Saxon Chronicle, the eighth fort, 'Anderida', was stormed by Saxons in 491, and the British garrison and inhabitants exterminated. Notes External links Fields, Nic. Rome's Saxon Shore: Coastal Defences of Roman Britain, AD 250–500, Osprey Publishing, 2006 Saxon Shore Late Roman military ranks
Count of the Saxon Shore
[ "Engineering" ]
492
[ "Fortification lines", "Saxon Shore" ]
1,550,674
https://en.wikipedia.org/wiki/Radial%20stress
Radial stress is stress toward or away from the central axis of a component. Pressure vessels The walls of pressure vessels generally undergo triaxial loading. For cylindrical pressure vessels, the normal loads on a wall element are longitudinal stress, circumferential (hoop) stress and radial stress. The radial stress for a thick-walled cylinder is equal and opposite to the gauge pressure on the inside surface, and zero on the outside surface. The circumferential stress and longitudinal stresses are usually much larger for pressure vessels, and so for thin-walled instances, radial stress is usually neglected. Formula The radial stress for a thick-walled pipe at a point r from the central axis is given by \sigma_r = \frac{p_i r_i^2 - p_o r_o^2}{r_o^2 - r_i^2} - \frac{(p_i - p_o)\, r_i^2 r_o^2}{r^2 \left(r_o^2 - r_i^2\right)} where r_i is the inner radius, r_o is the outer radius, p_i is the inner absolute pressure and p_o is the outer absolute pressure. Maximum radial stress occurs when r = r_i (at the inside surface) and is equal in magnitude to the gauge pressure on that surface. References Solid mechanics
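The Lamé expression above is straightforward to evaluate numerically. In the sketch below the bore, outer radius and pressure are hypothetical, the outer surface is taken to be at zero gauge pressure, and the usual sign convention applies (negative values are compressive).

```python
def radial_stress(r, r_i, r_o, p_i, p_o=0.0):
    """Radial stress in a thick-walled cylinder (Lame solution).

    r_i, r_o : inner and outer radii (same length unit)
    p_i, p_o : inner and outer pressures (same pressure unit)
    Returns the radial stress at radius r; negative means compressive.
    """
    if not (r_i <= r <= r_o):
        raise ValueError("r must lie within the wall, r_i <= r <= r_o")
    a = (p_i * r_i**2 - p_o * r_o**2) / (r_o**2 - r_i**2)
    b = (p_i - p_o) * r_i**2 * r_o**2 / (r_o**2 - r_i**2)
    return a - b / r**2

if __name__ == "__main__":
    # Hypothetical cylinder: 50 mm inner radius, 70 mm outer radius, 20 MPa inside
    print(radial_stress(0.050, 0.050, 0.070, 20e6))   # -20e6 Pa at the inner surface
    print(radial_stress(0.070, 0.050, 0.070, 20e6))   # ~0 Pa at the outer surface
```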
Radial stress
[ "Physics" ]
186
[ "Solid mechanics", "Mechanics" ]
1,550,677
https://en.wikipedia.org/wiki/Cylinder%20stress
In mechanics, a cylinder stress is a stress distribution with rotational symmetry; that is, which remains unchanged if the stressed object is rotated about some fixed axis. Cylinder stress patterns include: circumferential stress, or hoop stress, a normal stress in the tangential (azimuth) direction. axial stress, a normal stress parallel to the axis of cylindrical symmetry. radial stress, a normal stress in directions coplanar with but perpendicular to the symmetry axis. These three principal stresses- hoop, longitudinal, and radial can be calculated analytically using a mutually perpendicular tri-axial stress system. The classical example (and namesake) of hoop stress is the tension applied to the iron bands, or hoops, of a wooden barrel. In a straight, closed pipe, any force applied to the cylindrical pipe wall by a pressure differential will ultimately give rise to hoop stresses. Similarly, if this pipe has flat end caps, any force applied to them by static pressure will induce a perpendicular axial stress on the same pipe wall. Thin sections often have negligibly small radial stress, but accurate models of thicker-walled cylindrical shells require such stresses to be considered. In thick-walled pressure vessels, construction techniques allowing for favorable initial stress patterns can be utilized. These compressive stresses at the inner surface reduce the overall hoop stress in pressurized cylinders. Cylindrical vessels of this nature are generally constructed from concentric cylinders shrunk over (or expanded into) one another, i.e., built-up shrink-fit cylinders, but can also be performed to singular cylinders though autofrettage of thick cylinders. Definitions Hoop stress The hoop stress is the force over area exerted circumferentially (perpendicular to the axis and the radius of the object) in both directions on every particle in the cylinder wall. It can be described as: where: F is the force exerted circumferentially on an area of the cylinder wall that has the following two lengths as sides: t is the radial thickness of the cylinder l is the axial length of the cylinder. An alternative to hoop stress in describing circumferential stress is wall stress or wall tension (T), which usually is defined as the total circumferential force exerted along the entire radial thickness: Along with axial stress and radial stress, circumferential stress is a component of the stress tensor in cylindrical coordinates. It is usually useful to decompose any force applied to an object with rotational symmetry into components parallel to the cylindrical coordinates r, z, and θ. These components of force induce corresponding stresses: radial stress, axial stress, and hoop stress, respectively. Relation to internal pressure Thin-walled assumption For the thin-walled assumption to be valid, the vessel must have a wall thickness of no more than about one-tenth (often cited as Diameter / t > 20) of its radius. This allows for treating the wall as a surface, and subsequently using the Young–Laplace equation for estimating the hoop stress created by an internal pressure on a thin-walled cylindrical pressure vessel: (for a cylinder) (for a sphere) where P is the internal pressure t is the wall thickness r is the mean radius of the cylinder is the hoop stress. The hoop stress equation for thin shells is also approximately valid for spherical vessels, including plant cells and bacteria in which the internal turgor pressure may reach several atmospheres. 
In practical engineering applications for cylinders (pipes and tubes), hoop stress is often re-arranged for pressure, and is called Barlow's formula. Inch-pound-second system (IPS) units for P are pounds-force per square inch (psi). Units for t, and d are inches (in). SI units for P are pascals (Pa), while t and d=2r are in meters (m). When the vessel has closed ends, the internal pressure acts on them to develop a force along the axis of the cylinder. This is known as the axial stress and is usually less than the hoop stress. Though this may be approximated to There is also a radial stress that is developed perpendicular to the surface and may be estimated in thin walled cylinders as: In the thin-walled assumption the ratio is large, so in most cases this component is considered negligible compared to the hoop and axial stresses. Thick-walled vessels When the cylinder to be studied has a ratio of less than 10 (often cited as ) the thin-walled cylinder equations no longer hold since stresses vary significantly between inside and outside surfaces and shear stress through the cross section can no longer be neglected. These stresses and strains can be calculated using the Lamé equations, a set of equations developed by French mathematician Gabriel Lamé. where: and are constants of integration, which may be found from the boundary conditions, is the radius at the point of interest (e.g., at the inside or outside walls). For cylinder with boundary conditions: (i.e. internal pressure at inner surface), (i.e. external pressure at outer surface), the following constants are obtained: , . Using these constants, the following equation for radial stress and hoop stress are obtained, respectively: , . Note that when the results of these stresses are positive, it indicates tension, and negative values, compression. For a solid cylinder: then and a solid cylinder cannot have an internal pressure so . Being that for thick-walled cylinders, the ratio is less than 10, the radial stress, in proportion to the other stresses, becomes non-negligible (i.e. P is no longer much, much less than Pr/t and Pr/2t), and so the thickness of the wall becomes a major consideration for design (Harvey, 1974, pp. 57). In pressure vessel theory, any given element of the wall is evaluated in a tri-axial stress system, with the three principal stresses being hoop, longitudinal, and radial. Therefore, by definition, there exist no shear stresses on the transverse, tangential, or radial planes. In thick-walled cylinders, the maximum shear stress at any point is given by half of the algebraic difference between the maximum and minimum stresses, which is, therefore, equal to half the difference between the hoop and radial stresses. The shearing stress reaches a maximum at the inner surface, which is significant because it serves as a criterion for failure since it correlates well with actual rupture tests of thick cylinders (Harvey, 1974, p. 57). Practical effects Engineering Fracture is governed by the hoop stress in the absence of other external loads since it is the largest principal stress. Note that a hoop experiences the greatest stress at its inside (the outside and inside experience the same total strain, which is distributed over different circumferences); hence cracks in pipes should theoretically start from inside the pipe. This is why pipe inspections after earthquakes usually involve sending a camera inside a pipe to inspect for cracks. 
Yielding is governed by an equivalent stress that includes hoop stress and the longitudinal or radial stress when absent. Medicine In the pathology of vascular or gastrointestinal walls, the wall tension represents the muscular tension on the wall of the vessel. As a result of the Law of Laplace, if an aneurysm forms in a blood vessel wall, the radius of the vessel has increased. This means that the inward force on the vessel decreases, and therefore the aneurysm will continue to expand until it ruptures. A similar logic applies to the formation of diverticuli in the gut. Theory development The first theoretical analysis of the stress in cylinders was developed by the mid-19th century engineer William Fairbairn, assisted by his mathematical analyst Eaton Hodgkinson. Their first interest was in studying the design and failures of steam boilers. Fairbairn realized that the hoop stress was twice the longitudinal stress, an important factor in the assembly of boiler shells from rolled sheets joined by riveting. Later work was applied to bridge-building and the invention of the box girder. In the Chepstow Railway Bridge, the cast iron pillars are strengthened by external bands of wrought iron. The vertical, longitudinal force is a compressive force, which cast iron is well able to resist. The hoop stress is tensile, and so wrought iron, a material with better tensile strength than cast iron, is added. See also Can be caused by cylinder stress: Boston Molasses Disaster Boiler explosion Boiling liquid expanding vapor explosion Related engineering topics: Stress concentration Hydrostatic test Buckling Blood pressure#Relation_to_wall_tension Piping#Stress_analysis Designs very affected by this stress: Pressure vessel Rocket engine Flywheel The dome of Florence Cathedral References Mechanics
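A brief sketch of the thin-wall relations and the Barlow rearrangement mentioned above. The dimensions and the design stress are hypothetical, and the thin-wall formulas are only appropriate when the radius-to-thickness ratio is large.

```python
def hoop_stress_thin(p, r, t):
    """Thin-wall hoop stress for a cylinder: internal gauge pressure p, mean radius r, wall thickness t."""
    return p * r / t

def axial_stress_thin(p, r, t):
    """Thin-wall axial stress for a closed-end cylinder (half the hoop stress)."""
    return p * r / (2.0 * t)

def barlow_pressure(allowable_stress, t, d):
    """Barlow's formula rearranged for pressure, P = 2*S*t/d, with d the outside diameter."""
    return 2.0 * allowable_stress * t / d

if __name__ == "__main__":
    # Hypothetical pipe: 0.5 m mean radius, 10 mm wall, 2 MPa internal gauge pressure
    print(hoop_stress_thin(2e6, 0.5, 0.010) / 1e6)    # hoop stress  ~ 100 MPa
    print(axial_stress_thin(2e6, 0.5, 0.010) / 1e6)   # axial stress ~  50 MPa
    # Allowable pressure for the same pipe at a design stress of 120 MPa
    print(barlow_pressure(120e6, 0.010, 1.01) / 1e6)  # ~ 2.4 MPa
```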
Cylinder stress
[ "Physics", "Engineering" ]
1,774
[ "Mechanics", "Mechanical engineering" ]
1,550,685
https://en.wikipedia.org/wiki/Strong%20antichain
In order theory, a subset A of a partially ordered set P is a strong downwards antichain if it is an antichain in which no two distinct elements have a common lower bound in P, that is, there do not exist distinct x, y ∈ A and z ∈ P with z ≤ x and z ≤ y. In the case where P is ordered by inclusion, and closed under subsets, but does not contain the empty set, this is simply a family of pairwise disjoint sets. A strong upwards antichain B is a subset of P in which no two distinct elements have a common upper bound in P. Authors will often omit the "upwards" and "downwards" term and merely refer to strong antichains. Unfortunately, there is no common convention as to which version is called a strong antichain. In the context of forcing, authors will sometimes also omit the "strong" term and merely refer to antichains. To resolve ambiguities in this case, the weaker type of antichain is called a weak antichain. If (P, ≤) is a partial order and there exist distinct x, y ∈ P such that {x, y} is a strong antichain, then (P, ≤) cannot be a lattice (or even a meet semilattice), since by definition, every two elements in a lattice (or meet semilattice) must have a common lower bound. Thus lattices have only trivial strong antichains (i.e., strong antichains of cardinality at most 1). References Order theory
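For a finite poset the definition can be checked directly by brute force. The sketch below is purely illustrative; the function and variable names are invented for this example.

```python
from itertools import combinations

def is_strong_downward_antichain(candidates, poset, leq):
    """Return True if no two distinct candidates have a common lower bound in the poset.

    candidates : elements to test
    poset      : all elements of the partially ordered set
    leq        : leq(a, b) -> True iff a <= b in the partial order
    """
    elements = list(poset)
    for x, y in combinations(list(candidates), 2):
        if any(leq(z, x) and leq(z, y) for z in elements):
            return False
    return True

if __name__ == "__main__":
    # Non-empty subsets of {1, 2, 3}, ordered by inclusion: pairwise disjoint
    # sets form a strong downwards antichain, overlapping sets do not.
    poset = [frozenset(s) for s in ({1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3})]
    subset = lambda a, b: a <= b
    print(is_strong_downward_antichain([frozenset({1}), frozenset({2})], poset, subset))        # True
    print(is_strong_downward_antichain([frozenset({1, 2}), frozenset({2, 3})], poset, subset))  # False ({2} is below both)
```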
Strong antichain
[ "Mathematics" ]
312
[ "Order theory" ]
1,550,771
https://en.wikipedia.org/wiki/Countable%20chain%20condition
In order theory, a partially ordered set X is said to satisfy the countable chain condition, or to be ccc, if every strong antichain in X is countable. Overview There are really two conditions: the upwards and downwards countable chain conditions. These are not equivalent. The countable chain condition means the downwards countable chain condition, in other words no two distinct elements have a common lower bound. This is called the "countable chain condition" rather than the more logical term "countable antichain condition" for historical reasons related to certain chains of open sets in topological spaces and chains in complete Boolean algebras, where chain conditions sometimes happen to be equivalent to antichain conditions. For example, if κ is a cardinal, then in a complete Boolean algebra every antichain has size less than κ if and only if there is no descending κ-sequence of elements, so chain conditions are equivalent to antichain conditions. Partial orders and spaces satisfying the ccc are used in the statement of Martin's axiom. In the theory of forcing, ccc partial orders are used because forcing with any generic set over such an order preserves cardinals and cofinalities. Furthermore, the ccc property is preserved by finite support iterations (see iterated forcing). For more information on ccc in the context of forcing, see the references below. More generally, if κ is a cardinal then a poset is said to satisfy the κ-chain condition, also written as κ-c.c., if every antichain has size less than κ. The countable chain condition is the ℵ1-chain condition. Examples and properties in topology A topological space X is said to satisfy the countable chain condition, or Suslin's Condition, if the partially ordered set of non-empty open subsets of X satisfies the countable chain condition, i.e. every pairwise disjoint collection of non-empty open subsets of X is countable. The name originates from Suslin's Problem. Every separable topological space has ccc. Furthermore, a product of arbitrarily many separable spaces has ccc. A metric space has ccc if and only if it is separable. In general, a topological space with ccc need not be separable. For example, the Cantor cube {0, 1}^κ with the product topology has ccc for any cardinal κ, though it is not separable for κ > 2^ℵ0. Paracompact ccc spaces are Lindelöf. An example of a topological space with ccc is the real line. References Products of Separable Spaces, K. A. Ross, and A. H. Stone. The American Mathematical Monthly 71(4):pp. 398–403 (1964) Kunen, Kenneth. Set Theory: An Introduction to Independence Proofs. Order theory Forcing (mathematics)
Countable chain condition
[ "Mathematics" ]
581
[ "Forcing (mathematics)", "Mathematical logic", "Order theory" ]
1,550,783
https://en.wikipedia.org/wiki/Martin%27s%20axiom
In the mathematical field of set theory, Martin's axiom, introduced by Donald A. Martin and Robert M. Solovay, is a statement that is independent of the usual axioms of ZFC set theory. It is implied by the continuum hypothesis, but it is consistent with ZFC and the negation of the continuum hypothesis. Informally, it says that all cardinals less than the cardinality of the continuum, 𝔠, behave roughly like ℵ0. The intuition behind this can be understood by studying the proof of the Rasiowa–Sikorski lemma. It is a principle that is used to control certain forcing arguments. Statement For a cardinal number κ, define the following statement: MA(κ) For any partial order P satisfying the countable chain condition (hereafter ccc) and any set D = {Di}i∈I of dense subsets of P such that |D| ≤ κ, there is a filter F on P such that F ∩ Di is non-empty for every Di ∈ D. In this context, a set D is called dense if every element of P has a lower bound in D. For application of ccc, an antichain is a subset A of P such that any two distinct members of A are incompatible (two elements are said to be compatible if there exists a common element below both of them in the partial order). This differs from, for example, the notion of antichain in the context of trees. MA(ℵ0) is provable in ZFC and known as the Rasiowa–Sikorski lemma. MA(2ℵ0) is false: [0, 1] is a separable compact Hausdorff space, and so (P, the poset of open subsets under inclusion, is) ccc. But now consider the following two 𝔠-size sets of dense sets in P: no x ∈ [0, 1] is isolated, and so each x defines the dense subset { S | x ∉ S }. And each r ∈ (0, 1], defines the dense subset { S | diam(S) < r }. The two sets combined are also of size 𝔠, and a filter meeting both must simultaneously avoid all points of [0, 1] while containing sets of arbitrarily small diameter. But a filter F containing sets of arbitrarily small diameter must contain a point in ⋂F by compactness. (See also .) Martin's axiom is then that MA(κ) holds for every κ for which it could: Martin's axiom (MA) MA(κ) holds for every κ < 𝔠. Equivalent forms of MA(κ) The following statements are equivalent to MA(κ): If X is a compact Hausdorff topological space that satisfies the ccc then X is not the union of κ or fewer nowhere dense subsets. If P is a non-empty upwards ccc poset and Y is a set of cofinal subsets of P with |Y| ≤ κ then there is an upwards-directed set A such that A meets every element of Y. Let A be a non-zero ccc Boolean algebra and F a set of subsets of A with |F| ≤ κ. Then there is a Boolean homomorphism φ: A → Z/2Z such that for every X ∈ F, there is either an a ∈ X with φ(a) = 1 or there is an upper bound b ∈ X with φ(b) = 0. Consequences Martin's axiom has a number of other interesting combinatorial, analytic and topological consequences: The union of κ or fewer null sets in an atomless σ-finite Borel measure on a Polish space is null. In particular, the union of κ or fewer subsets of R of Lebesgue measure 0 also has Lebesgue measure 0. A compact Hausdorff space X with |X| < 2κ is sequentially compact, i.e., every sequence has a convergent subsequence. No non-principal ultrafilter on N has a base of cardinality less than κ. Equivalently for any x ∈ βN\N we have 𝜒(x) ≥ κ, where 𝜒 is the character of x, and so 𝜒(βN) ≥ κ. MA(ℵ1) implies that a product of ccc topological spaces is ccc (this in turn implies there are no Suslin lines). 
MA + ¬CH implies that there exists a Whitehead group that is not free; Shelah used this to show that the Whitehead problem is independent of ZFC. Further development Martin's axiom has generalizations called the proper forcing axiom and Martin's maximum. Sheldon W. Davis has suggested in his book that Martin's axiom is motivated by the Baire category theorem. References Further reading Jech, Thomas, 2003. Set Theory: The Third Millennium Edition, Revised and Expanded. Springer. . Kunen, Kenneth, 1980. Set Theory: An Introduction to Independence Proofs. Elsevier. . Axioms of set theory Independence results Set theory
Martin's axiom
[ "Mathematics" ]
1,050
[ "Independence results", "Set theory", "Mathematical logic", "Mathematical axioms", "Axioms of set theory" ]
1,550,845
https://en.wikipedia.org/wiki/Richard%20Ferber
Richard Ferber is a physician and the director of The Center for Pediatric Sleep Disorders at Children's Hospital Boston. He has been researching sleep and sleep disorders in children for over 30 years. He is best known for his methods, popularly called Ferberization, which purport to teach infants to fall asleep on their own and which are described in his book Solve Your Child's Sleep Problems (first edition 1985). He graduated from Harvard College and Harvard Medical School. References American pediatricians Living people Sleep Year of birth missing (living people) Place of birth missing (living people) Harvard College alumni Harvard Medical School alumni
Richard Ferber
[ "Biology" ]
132
[ "Behavior", "Sleep" ]
1,550,886
https://en.wikipedia.org/wiki/Dux%20Britanniarum
Dux Britanniarum was a military post in Roman Britain, probably created by Emperor Diocletian or Constantine I during the late third or early fourth century. The Dux (literally, "(military) leader" was a senior officer in the late Roman army of the West in Britain. It is listed in the Notitia Dignitatum as being one of the three commands in Britain, along with the Comes Britanniarum and Count of the Saxon Shore. His responsibilities covered the area along Hadrian's Wall, including the surrounding areas to the Humber estuary in the southeast of today's Yorkshire, Cumbria and Northumberland to the mountains of the Southern Pennines. The headquarters were in the city of Eboracum (York). The purpose of this buffer zone was to preserve the economically important and prosperous southeast of the island from attacks by the Picts (tribes of what are now the Scottish lowlands) and against the Scots (Irish raiders). History The Dux Britanniarum was commander of the troops of the Northern Region, primarily along Hadrian's Wall. The position carried the rank of viri spectabiles, but was below that of the Comes Britanniarum. His responsibilities would have included protection of the frontier, maintenance of fortifications, and recruitment. Provisioning the troops would have played a significant part in the economy of the area. The Dux would have had considerable influence within his geographical jurisdiction, and exercised significant autonomy due in part to the distance from headquarters of his superiors. The Notitia Dignitatum lists the garrison along Hadrian's Wall (along with several sites on the coast of Cumbria) under the command of the Dux Britanniarum. Archaeological evidence shows that other units must have been stationed here, which are not, however, mentioned in the Notita. Most of them were established during the 3rd Century. Castles and units His troops were limitanei or frontier guards and not the comitatenses or field army commanded by the Comes Britanniarum. Fourteen units in north Britain are listed in the Notitia as being under his command, stationed in either modern Yorkshire, Cumbria or Northumberland. Archaeological evidence indicates there were other posts occupied at the time which are not listed. His forces included three cavalry vexillationes with the rest being infantry. They were newly raised units rather than being third century creations. In addition to these fort garrisons, the dux commanded the troops at Hadrian's Wall: the Notitia lists their stations from east to west, as well as additional forts on the Cumbrian coast. These troops appear to have been third century regiments, although the reliability of the Notitia makes it difficult to infer any solid information from it. 
From Chapter XL: sub dispositione viri spectabilis Ducis Britanniarum (literally "made available to the most honorable military commander of the British provinces") ...in addition to the administrative staff (Officium) lists 14 prefects and their units with their deployment locations under the command of this Dux: Praefectus Legionis sextae Praefectus Numeri directorum, Verteris Praefectus Numeri exploratorum, Lavatrae Praefectus Equitum Dalmatarum, Praesidio Praefectus Equitum Crispianorum, Dano Praefectus Numeri defensorum, Barboniaco Praefectus Equitum, catafractariorum, Morbio Praefectus Numeri Solensium, Maglone Praefectus Numeri barcariorum Tigrisiensium, Arbeia Praefectus Numeri Pacensium, Magis Praefectus Numeri Nerviorum Dictensium, Dicti Praefectus Numeri Longovicanorum, Longovicium Praefectus Numeri vigilum, Concangis Praefectus Numeri supervenientium Petueriensium, Deruentione (Derventio?) Then follow the garrisons along Hadrian's Wall (per item lineam Valli): Cohortis quaternary Lingonum, Segedunum Tribune Alae Petrianae, Petriana Praefectus cohortis primae Cornoviorum, Pons Aelius Tribune Alae primae Asturum, Cilurnum or Cilurvum Praefectus Numeri Maurorum Aurelianorum, Aballaba Praefectus cohortis primae Frixagorum, Vindobala Tribune cohortis secundae Lingonum, Segedunum Tribune Alae Sabinianae, Hunnum or Onnum Praefectus cohortis primae Hispanorum, Uxelodunum or Petriana Tribune Alae secundae Asturum, Aesica Praefectus cohortis secundae Thracum, Gabrosenti Tribune cohortis primae Batavorum, Procolita Tribune cohortis primae Aeliae Classicae, Tunnocelo Tribune cohortis primae Tungrorum Classicae, Vercovicium Tribune cohortis primae Morinorum, Glannoventa Tribune cohortis quaternary Gallorum, Vindolanda Tribune cohortis tertiae Nerviorum, Alione (Alauna?) Tribune cohortis primae Asturum, Aesica Cuneus Sarmatarum, Bremetenraco (Bremenium?)(no officer stated) Cohortis secundae Dalmatarum, Magnis Tribune Alae primae Herculeae, Olenaco Praefectus cohortis primae Aeliae Dacorum, Camboglanna or Banna Tribune cohortis sextae Nerviorum, Virosido and an unknown unit in the fort Luguvalium The Dux Britanniarum held command over thirty-eight regimental commanders. Infantry units were concentrated along the Wall. A Sarmatian unit of heavy cavalry (Cuneus Sarmatarum), was stationed near the crossroads at Ribchester. As their name suggests the Praefectus Numeri exploratorum were used for reconnaissance. The Equites Crispianorum was located at Doncaster, and a naval unit at the mouth of the Tyne. Collins estimates troop counts from a low of 7,000 to as much as 15,000, with the average approximating 12,500. Origin The Legio sexta is an ancient tribal legion of Britain, the Legio VI Eburacum (York). They seem to have had in late antiquity no fixed posting. One might expect that this legion (full name: Legio VI Victrix Pia Fidelis Britannica) at this time still to be stationed in Eburacum: this absence may indicate that the unit had been moved to another site when the list of the Dux Britanniarum was compiled in the Notita Dignitatum. ("Possibly is the VI."?) but also in connection with the non-historically tangible primani iuniores in the army of the Comes Britanniarum. The men under the Praefectus Numbers Solensium could (per Arnold Hughes Martin Jones, 1986) be the descendants of another British unit, the Legio XX Valeria Victrix. This is the only legion no longer listed in the Notitia Dignitatum. 
The last epigraphic evidence of their presence in Britain is a mention on coins of the usurper Carausius, a century before the Notitia Dignitatum was compiled. See also Fullofaudes Dulcitius Notes Sources Alexander Demandt: Geschichte der Spätantike: Das Römische Reich von Diocletian bis Justinian 284-565 n. Chr. München 1998 (Beck Historische Bibliothek). Nick Fields: Rome's Saxon Shore: Coastal Defences of Roman Britain AD 250–500. Osprey Books, 2006 (Fortress 56). Arnold Hugh Martin Jones: The Later Roman Empire, 284–602. A Social, Economic and Administrative Survey. 2 vols. Johns Hopkins University Press, Baltimore 1986. Simon MacDowall: Late Roman Infantryman, 236-565 AD. Weapons, Armour, Tactics. Osprey Books, 1994 (Warrior 9). Ralf Scharf: Der Dux Mogontiacensis und die Notitia Dignitatum. de Gruyter, Berlin 2005. Fran & Geoff Doel, Terry Lloyd: König Artus und seine Welt. Translated from the English by Christof Köhler. Sutton, Erfurt 2000. Guy de la Bedoyere: Hadrians Wall, History and Guide. Tempus, Stroud 1998. Roman Britain Saxon Shore Military history of Roman Britain Late Roman military ranks
Dux Britanniarum
[ "Engineering" ]
1,847
[ "Fortification lines", "Saxon Shore" ]
1,550,892
https://en.wikipedia.org/wiki/Ren%C3%A9-Louis%20Baire
René-Louis Baire (; 21 January 1874 – 5 July 1932) was a French mathematician most famous for his Baire category theorem, which helped to generalize and prove future theorems. His theory was published originally in his dissertation Sur les fonctions de variables réelles ("On the Functions of Real Variables") in 1899. Education and career The son of a tailor, Baire was one of three children from a poor working-class family in Paris. He started his studies when he entered the Lycée Lakanal through the use of a scholarship. In 1890, Baire completed his advanced classes and entered the special mathematics section of the Lycée Henri IV. While there, he prepared for and passed the entrance examination for the École Normale Supérieure and the École Polytechnique. He decided to attend the École Normale Supérieure in 1891. After receiving his three-year degree, Baire proceeded toward his agrégation. He did better than all the other students on the writing portion of the test but he did not pass the oral examination due to a lack of explanation and clarity in his lesson. After retaking the agrégation and passing, he was assigned to teach at the secondary school (lycée) in Bar-le-Duc. While there, Baire researched the concept of limits and discontinuity for his doctorate. He presented his thesis on March 24, 1899 and was awarded his doctorate. He continued to teach in secondary schools around France but was not happy teaching lower level mathematics. In 1901 Baire was appointed to the University of Montpellier as a "Maître de conférences". In 1904 he was awarded a Peccot Foundation Fellowship to spend a semester in a university and develop his skills as a professor. Baire chose to attend the Collège de France where he lectured on the subject of analysis. He was appointed to a university post in 1905 when he joined the Faculty of Science at the University of Dijon. In 1907 he was promoted to Professor of Analysis at Dijon where he continued his research in analysis. Illness Since he was young, Baire always had "delicate" health. He had developed problems with his esophagus before he attended school and he would occasionally experience severe attacks of agoraphobia. From time to time, his health would prevent him from working or studying. The bad spells became more frequent, immobilizing him for long periods of time. Over time, he had developed a kind of psychological disorder that made him unable to undertake work that required long periods of concentration. At times this would make his ability to research mathematics impossible. Between 1909 and 1914 this problem continually plagued him and his teaching duties became more and more difficult. In 1914 he was given a leave of absence from the University of Dijon due to his poor health, after which he spent the rest of his life in Lausanne, Switzerland, and around Lake Geneva. He retired from Dijon in 1925 and spent his last years living in multiple hotels that he could afford with his meager pension. He committed suicide in 1932. Contributions to mathematics Baire's skill in mathematical analysis led him to study with other major names in analysis such as Vito Volterra and Henri Lebesgue. In his dissertation Sur les fonctions de variable réelles ("On the Functions of Real Variables"), Baire studied a combination of set theory and analysis topics to arrive at the Baire category theorem and the definition of a nowhere dense set. He then used these topics to prove the theorems of those he studied with and further the understanding of continuity. 
Among Baire's other most important works are Théorie des nombres irrationnels, des limites et de la continuité (Theory of Irrational Numbers, Limits, and Continuity) published in 1905 and both volumes of Leçons sur les théories générales de l’analyse (Lessons on the General Theory of Analysis) published in 1907–08. See also Baire space (set theory) References External links 1874 births 1932 deaths Scientists from Paris 19th-century French mathematicians 20th-century French mathematicians Members of the French Academy of Sciences Topologists École Normale Supérieure alumni Lycée Henri-IV alumni
René-Louis Baire
[ "Mathematics" ]
862
[ "Topologists", "Topology" ]
1,550,921
https://en.wikipedia.org/wiki/Comparison%20of%20wiki%20software
The following tables compare general and technical information for many wiki software packages. General information Systems listed on a light purple background are no longer in active development. Target audience Features 1 Features 2 Installation See also Comparison of wiki farms notetaking software text editors HTML editors word processors wiki hosting services List of wikis wiki software personal information managers text editors outliners for desktops mobile devices web-based Footnotes Comparison Wiki software Text editor comparisons Wiki software
Comparison of wiki software
[ "Technology" ]
94
[ "Software comparisons", "Computing comparisons" ]
1,550,927
https://en.wikipedia.org/wiki/Appel%20reaction
The Appel reaction is an organic reaction that converts an alcohol into an alkyl chloride using triphenylphosphine and carbon tetrachloride. The use of carbon tetrabromide or bromine as a halide source will yield alkyl bromides, whereas using carbon tetraiodide, methyl iodide or iodine gives alkyl iodides. The reaction is credited to and named after Rolf Appel, although it had been described earlier. The use of this reaction is becoming less common, due to carbon tetrachloride being restricted under the Montreal Protocol. Drawbacks to the reaction are the use of toxic halogenating agents and the coproduction of an organophosphorus product that must be separated from the organic product. The phosphorus reagent can be used in catalytic quantities. The corresponding alkyl bromide can also be synthesised by addition of lithium bromide as a source of bromide ions. A more sustainable version of the Appel reaction has been reported, which uses a catalytic amount of phosphine that is regenerated with oxalyl chloride. Mechanism The Appel reaction begins with the formation of the phosphonium salt 3, which is thought to exist as a tight ion pair with 4 and therefore is unable to undergo an alpha-elimination to give dichlorocarbene. Deprotonation of the alcohol, forming chloroform, yields an alkoxide 5. The nucleophilic displacement of the chloride by the alkoxide yields intermediate 7. With primary and secondary alcohols, the halide reacts in an SN2 process, forming the alkyl halide 8 and triphenylphosphine oxide. Tertiary alcohols form the products 6 and 7 via an SN1 mechanism. The driving force behind this and similar reactions is the formation of the strong P=O double bond. The reaction is somewhat similar to the Mitsunobu reaction, where the combination of an organophosphine as an oxide acceptor, an azo compound as a hydrogen acceptor and a nucleophile is used to convert alcohols to esters, among other applications. An illustrative use of the Appel reaction is the chlorination of geraniol to geranyl chloride. Modifications The Appel reaction is also effective on carboxylic acids; this has been used to convert them to oxazolines, oxazines and thiazolines. See also Atherton–Todd reaction Corey–Fuchs reaction Mitsunobu reaction References Substitution reactions Name reactions
Appel reaction
[ "Chemistry" ]
539
[ "Name reactions" ]
1,551,036
https://en.wikipedia.org/wiki/Food%20bank
A food bank or food pantry is a non-profit, charitable organization that distributes food to those who have difficulty purchasing enough to avoid hunger, usually through intermediaries like food pantries and soup kitchens. Some food banks distribute food directly with their food pantries. St. Mary's Food Bank was the world's first food bank, established in the US in 1967. Since then, many thousands have been set up all over the world. In Europe, their numbers grew rapidly after the global increase in the price of food which began in late 2006, and especially after the financial crisis of 2007–2008 began to worsen economic conditions for those on low incomes. Likewise, the inflation and economic crisis of the 2020s has exponentially driven low and even some middle income class consumers to at least partially get their food. The growth of food banks has been welcomed by commentators who see them as examples of active, caring citizenship. Other academics and commentators have expressed concern that the rise of food banks may erode political support for welfare provision. Researchers have reported that in some cases food banks can be inefficient compared with state-run welfare. Operational models With thousands of food banks operating around the world, there are many different models. A major distinction between food banks is whether or not they operate on the "front line" model, giving out food directly to the hungry, or whether they operate with the "warehouse" model, supplying food to intermediaries like food pantries, soup kitchens and other front-line organizations. In the US, Australia and to an extent in Canada, the standard model is for food banks to act as warehouses rather than as suppliers to the end user, though there are exceptions. In other countries, food banks usually hand out food parcels direct to hungry people, providing the service that in the US is offered by food pantries. Another distinction is between the charity model and the labor union model. At least in Canada and the US, food banks run by charities often place relatively more weight on the salvaging of food that would otherwise go to waste, and on encouraging voluntarism, whereas those run by unions can place greater emphasis on feeding the hungry by any means available, on providing work for the unemployed, and on education, especially on explaining to users their civil rights. In the US, cities will often have a single food bank that acts as a centralized warehouse and will serve several hundred front-line agencies. Like a blood bank, that warehouse serves as a single collection and distribution point for food donations. A food bank operates a lot like a for-profit food distributor, but in this case, it distributes food to charities, not to food retailers. There is often no charge to the charities, but some food banks do charge a small "shared maintenance" fee to help defray the cost of storage and distribution. For many US food banks, most of their donated food comes from food left over from the normal processes of for-profit companies. It can come from any part of the food chain, e.g. from growers who have produced too much or whose food is not sufficiently visually appealing; from manufacturers who overproduced; or from retailers who over-ordered. Often the product is approaching or past its "sell by" date. In such cases, the food bank liaises with the food industry and with regulators to make sure the food is safe and legal to distribute and eat. 
Other sources of food include the general public, sometimes in the form of "food drives", and government programs that buy and distribute excess farm products mostly to help support higher commodity prices. Food banks can also buy food either at market prices or from wholesalers and retailers at discounted prices, often at a cost. Sometimes farmers will allow food banks to send gleaners to salvage leftover crops for free once their primary harvest is complete. A few food banks have even taken over their farms, though such initiatives have not always been successful. Many food banks do not accept fresh produce, preferring canned or packaged food due to health and safety concerns, though some have tried to change this as part of a growing worldwide awareness of the importance of nutrition. As an example, in 2012, London Food Bank (Canada) started accepting perishable food, reporting that as well as the obvious health benefits, there were noticeable emotional benefits to recipients when they were given fresh food. Summer can be a challenging time for food banks, particularly in regions where school children are usually given regular free meals during term time. Spikes in demand can coincide with periods where donations fall due to folk being on holiday. United States History The world's first food bank was St. Mary's Food Bank in Phoenix, Arizona, founded by John van Hengel in 1967. According to sociology professor Janet Poppendieck, the hunger within the US was widely considered to be a solved problem until the mid-1960s. By the mid-sixties, several states had ended the free distribution of federal food surpluses, instead providing an early form of food stamps which had the benefit of allowing recipients to choose food of their liking, rather than having to accept whatever happened to be in surplus at the time. However, there was a minimum charge and some people could not afford the stamps, leading to severe hunger. One response from American society to the rediscovery of hunger was to step up the support provided by soup kitchens and similar civil society food relief agencies – some of these dated back to the Great Depression and earlier. In 1965, while volunteering for a community dining room, van Hengel learned that grocery stores often had to throw away food that had damaged packaging or was near expiration. He started collecting that food for the dining room but soon had too much for that one program. He thought of creating a central location from which any agency can receive donations. Described as a classic case of "if you build it they will come", the first food bank was created with the help of St. Mary's Basilica, which became the namesake of the organization. Food banks spread across the United States, and Canada. By 1976, van Hengel had established the organization known today as Feeding America. As of the early 21st century, their network of over 200 food banks provides support for 90,000 projects. Other large networks exist such as AmpleHarvest.org, created by CNN Hero and World Food Prize nominee Gary Oppenheimer which lists nearly 9,000 food pantries (1 out of every 4 in America) across all 50 states that are eager to receive surplus locally grown garden produce from any of America's 62 million home or community gardeners. In the 1980s, U.S. food banks began to grow rapidly. A second response to the "rediscovery" of hunger in the mid-sixties had been extensive lobbying of politicians to improve welfare. Until the 1980s, this approach had a greater impact. In the 1970s, U.S. 
Federal expenditure on hunger relief grew by about 500%, with food stamps distributed free of charge to those in greatest need. According to Poppendieck, welfare was widely considered preferable to grassroots efforts, as the latter could be unreliable and did not give recipients consumer-style choice in the same way as did food stamps. It also risked recipients feeling humiliated by having to turn to charity. In the early 1980s, Ronald Reagan's administration scaled back welfare provision, leading to a rapid rise in activity from grassroots hunger relief agencies. According to a comprehensive government survey completed in 2002, over 90% of food banks were established in the US after 1981. Poppendieck says that for the first few years after the change, there was vigorous opposition from the left, who argued that state welfare was much more suitable for meeting recipients needs. But in the decades that followed, food banks have become an accepted part of America's response to hunger. Demand for the services of US food banks increased further in the late 1990s, after the "end of welfare as we know it" with Bill Clinton's Personal Responsibility and Work Opportunity Act. In Canada, foodbanks underwent a period of rapid growth after the cutbacks in welfare that took place in the mid-1990s. As early as the 1980s, food banks had also begun to spread from the United States to the rest of the world. The first European food bank was founded in France in 1984. In the 1990s and early 2000s, food banks were established in South America, Africa, and Asia, in several cases with van Hengel acting as a consultant. In 2007, The Global Food Banking Network was formed. Food aid for pets Some U.S. cities have organizations that provide dog and cat food for pets whose owners qualify for food assistance. For example, Daffy's Pet Soup Kitchen in Lawrenceville, Georgia is considered the largest pet food aid agency in Georgia, distributing over 800,000 pounds of dog and cat food in 2012. Daffy's Pet Soup Kitchen was started in 1997 by Tom Wargo, a repairman who was working in an elderly woman's home when he noticed her sharing her Meals On Wheels lunch with her pet cat because she could not afford cat food. Daffy's was one of seven non-profit organizations recognized by Barefoot Wine in 2013 through a $10,000 donation and by being featured on labels of the vintner's Impression Red Blend wines. Pet Buddies Food Pantry in Atlanta, Georgia is another example of an establishment that provides food aid for pets. The St. Augustine Humane Society in St. Augustine, Florida, distributes over 1,600 pounds of pet food each month to families who are experiencing economic hardship and cannot afford to feed their pets. Food pantries for students The college and University Food Bank Alliance, which was formed in 2012, has 570 campus food pantries nationwide. On-campus food pantries were available at 70% of State University of New York locations by 2019. After the 2007 financial crisis Following the financial crisis of 2007–08, and the lasting inflation in the price of food that began in late 2006, there has been a further increase in the number of individuals requesting help from American and Canadian food banks. By 2012, according to Food Banks Canada, over 850,000 Canadians needed help from a food bank each month. 
For the United States, Gleaners Indiana Food bank reported in 2012 that there were then 50  million Americans struggling with food insecurity (about 1 in 6 of the population), with the number of individuals seeking help from food banks having increased by 46% since 2005. According to a 2012 UCLA Center for Health Policy Research study, there has been a 40% increase in demand for Californian food banks since 2008, with married couples who both work sometimes requiring the aid of food banks. Dave Krepcho, Director of the Second Harvest Food Bank in Orlando, has said that college-educated professional couples have begun to turn to food pantries. By mid-2012, US food banks had expressed concerns about the expected difficulty in feeding the hungry over the coming months. Rapidly rising demand has been coinciding with higher food prices and with a decrease in donations, partly as the food industry is becoming more efficient and so has less mislabelled and other slightly defective food to give away. Also, there has been less surplus federal food on offer. Additionally, there have been recent decreases in government funding, and Congress has been debating possible further cuts, including potentially billions of dollars from the Supplemental Nutrition Assistance Program (food stamp program). In September 2012, Feeding America launched Hunger Action Month, with events planned all over the nation. Food banks and other agencies involved hoped to raise awareness that about one in six Americans are struggling with hunger and to get more Americans involved in helping out. Food banks and COVID-19 The COVID-19 outbreak impacted European food banks since value chains were notably disrupted and food banks lacked the support of volunteers. Compared to 2019, the amount of food distributed increased in 2020. Possibly through an increase in people in need. At the same time, the deliveries of shelf-stable food decreased by 20% due to panic buying, especially at the beginning of the crisis. Europe The first European food bank was opened in France in 1984. The first food bank in Italy was established in 1989. Similar to the UK's experience, food banks have become much more common across continental Europe since the crisis that began in 2008. In Spain, food banks can operate on the warehouse model, supplying a network of surrounding soup kitchens and other food relief agencies. The helped to feed about 800,000 people during 2008–11, according to the Carrefour Foundation. By October 2014, Spain had 55 food banks in total, with the number who depend on them having increased to 1.5  million. In Belgium, food banks helped about 121,000 people in 2012. That was an increase of about 4,500 compared with 2011, the biggest increase since the start of the 2008 crisis. Belgian food banks account for about 65% of all food aid given out within the country. The number of food banks has increased rapidly in Germany, a country that weathered the crisis relatively well, and did not implement severe austerity measures. In 2012, professor Sabine Pfeiffer of Munich University of Applied Sciences said there has been an "explosion" of food bank usage. European Union programs While many European food banks have long been run by civil society with no government assistance, an EU-funded project, the Most deprived persons program (MDP), had specialized in supplying food to marginalized people who are not covered by the benefits system and who were in some cases reluctant to approach the more formal food banks. 
The program involved the EU buying surplus agricultural products, which were then distributed to the poor largely by Catholic churches. The MDP was wound down in late 2013 and was replaced by the Fund for European Aid to the Most Deprived (FEAD), which is set to run until at least 2020. The FEAD program has a wider scope than the MDP, helping deprived people not just with food aid, but with social inclusion projects and housing. The actual methods employed by FEAD tend to vary from country to country, but in several EU states, such as Poland, its activities include helping to fund local food bank networks. United Kingdom In 2022 there were over 2,572 food banks in the UK. Professor Jon May, of Queen Mary University of London and the Independent Food Aid Network, said statistics showed a rapid rise in the number of food banks during the previous five years. Though food banks were rarely seen in the UK in the second half of the twentieth century, their use began to grow in the 2000s and has since dramatically expanded. The increase in dependency on food banks has been blamed by some, such as Guardian columnist George Monbiot, on the 2008 recession and the Conservative government's austerity policies. These policies included cuts to the welfare state and caps on the total amount of welfare support that a family can claim. The OECD found that the number of people who answered yes to the question 'Have there been times in the past 12 months when you did not have enough money to buy food that you or your family needed?' decreased from 9.8% in 2007 to 8.1% in 2012, with Spectator editor Toby Young speculating in 2015 that the initial rise in food bank use was due to both more awareness of food banks and Jobcentres referring people to food banks when they were hungry. Rachel Loopstra, a lecturer in nutrition at King's College London and an expert on food insecurity, has noted that those who are short of food are frequently also short of other essential products, like shampoo and basic hygiene products (e.g. soap, toilet rolls and sanitary products). Some people must choose between buying food and buying basic toiletries. As of January 2014, the largest group co-ordinating UK food banks was The Trussell Trust, a Christian charity based in Salisbury. About 43% of the UK's food banks were run by Trussell, about 20% by smaller church networks such as Besom and Basic, about 31% were independent, and about 4% were run by secular food bank networks such as FareShare and FoodCycle. Before the 2008 credit crunch, food banks were "almost unheard of" in the UK. In 2004, Trussell ran only two food banks. In 2011, about one new food bank was being opened per week. In 2012, the Trussell Trust reported that the rate of new openings had increased to three per week. In August of that year, the rate of new openings spiked to four per week, with three new food banks being opened in that month for Nottingham alone. In 2022 the number of food banks run by Trussell had risen to over 1,400. Most UK food banks are hosted by churches in partnership with the wider community. They operate on the "frontline" model, giving out food directly to the hungry. Over 90% of the food given out is donated by the public, including schools, churches, businesses and individuals. The Trussell Trust has aimed to provide short-term support for people whose needs have not yet been addressed by official state welfare provision; those who have been "falling into the cracks in the system". 
The Trussell franchise has procedures which aim to prevent long-term dependency on its services and to ensure that those in need are referred to qualified outside agencies. The charity suggests that the credit crunch caused an upsurge in the number of people needing emergency food. Since 2010, demand for food banks has continued to increase, and at a more rapid rate, partly as austerity began to take effect, and partly as those on low incomes began to draw down savings and run out of friends whom they were willing to ask for help. Unlike soup kitchens, most (but not all) UK food banks are unable to help people who come in off the street without a referral – instead, they operate with a referral system. Vouchers are handed out to those in need by various sorts of frontline care professionals, such as social workers, health visitors, Citizens Advice Bureaux, Jobcentres and housing officials. The voucher can typically be exchanged at the food bank for a package of food sufficient to last three days. The year to April 2013 saw close to 350,000 referrals to Trussell Trust food banks, more than double the number from the previous year. Several food banks have been set up outside of the Trussell system, some faith-based, others secular, in part because they do not like having to turn away people without referrals, although Trussell Trust food banks do help clients who arrive without vouchers to obtain one as quickly as possible. There is also FareShare, a London-based charity which operates some nineteen depots on the American-style warehouse model. Rather than giving out food directly to individuals, FareShare distributes food to over 700 smaller agencies, mainly smaller independent operations like soup kitchens and breakfast clubs. Great emphasis is placed on reducing food waste as well as relieving food poverty. FareShare operates on a business basis, employing several managers to oversee operations alongside its army of volunteers. Employee costs constituted over 50% of its expenditure in both 2011 and 2012. People who turn to food banks are typically grateful both for the food and for the warmth and kindness they receive from the volunteers. However, sometimes food banks have run out of supplies by the time they arrive. Some find it humiliating to have to ask for food, and the packages they receive do not always seem nutritious. Some food banks have tried to respond with innovative programs; London Street Food bank, for example, began asking donors to send in supermarket vouchers so that those they serve will be able to choose food that best meets their nutritional needs. The Trussell Trust revealed a 47% increase in the number of three-day emergency supplies provided by their food banks in December 2016 compared to the monthly average for the 2016–17 financial year. Public donations in December 2016 meant food banks met the increased need in that month, but donations in January, February and March 2017 all fell below the monthly average of 931 tonnes for the 2016–17 financial year. Although reverse advent calendars had been run for a few years by various small charities around the world, 2017 saw a significant increase in media coverage and take-up of the idea. The UK Money Bloggers campaign, encouraging the public to give something to a food bank every day for 25 days, was covered by The Mirror, The Guardian and others. Emma Revie of the Trussell Trust said, "for too many people, staying above water is a daily struggle". 
Food bank use has increased since Universal Credit was implemented as part of the Welfare Reform Act 2012. Delays in providing the first payment force claimants to use food banks, and Universal Credit does not provide enough to cover basic living expenses. Claiming Universal Credit is complex and the system is hard to navigate, as many claimants cannot afford internet access and cannot access online help with claiming. According to a report by the Trussell Trust, UK food banks appealed for volunteers and supplies, fearing an increase in demand for food as Universal Credit was rolled out further. UK food bank users According to a May 2013 report by Oxfam and Church Action on Poverty, about half a million Britons had used food banks. The Trussell Trust reports that its food banks alone helped feed 346,992 people in 2012–13. Numbers using food banks more than doubled during the period 2012–13. "Foodbanks help prevent crime, housing loss, family breakdown and mental health problems." Reasons why people have difficulty getting enough to eat include redundancy, sickness, delays over receiving benefits, domestic violence, family breakdown, debt, and additional fuel costs in winter. Some clients of food banks are in work but cannot afford everything they need due to low pay. Close to half of those needing to use food banks have had issues with their benefit payments. Benefit sanctions were the single most frequent reason for food bank referrals, and there has been criticism over sanctions being imposed for allegedly spurious reasons. A joint report from the Trussell Trust, the Church of England, and the charities Oxfam and Child Poverty Action Group found that food bank users were more likely to live in rented accommodation, be single adults or lone parents, be unemployed, and have experienced a "sanction", where their unemployment benefits were cut for at least one month. Delays in payment of housing benefit, disability benefit and other benefits, and general bureaucratic issues with benefits, can force people to use food banks. Many other people who need food banks have low-income jobs but struggle to afford food after making debt repayments and meeting all their other expenses. Low-paid workers, part-time workers and those on zero-hours contracts are particularly vulnerable to financial crisis and sometimes need the assistance of food banks. As had been predicted, demand for food banks further increased after cuts to welfare came into effect in April 2013, which included the abolition of Crisis Loans. In April 2014, Trussell reported that it had handed out 913,000 food parcels in the previous year, up from 347,000 the year before. Several councils have begun looking at funding food banks to increase their capability, as cuts to their budgets mean they will be less able to help vulnerable people directly. Sabine Goodwin, an Independent Food Aid Network researcher, said most food bank workers reported increasing demand for food aid. UK government According to an all-party parliamentary report released in December 2014, key reasons for the increased demand for UK food banks are delays in paying benefits, welfare sanctions, and the recent reversal of the post-WWII trend for poor people's incomes to rise above or in line with increased costs for housing, utility bills and food. In 2013, the UK Government blocked a £22,000,000 European Union fund to help finance food banks in the UK. This disappointed Labour MEP Richard Howitt, who had assisted in negotiating the fund. 
Haroon Siddiqui said that the rise in food bank use coincided with the imposition of austerity and felt that the government was reluctant to admit the obvious link. Siddiqui said that during the 2017 general election campaign, the Conservative Prime Minister, Theresa May, was asked about even nurses (then subject to a 1% annual pay freeze) using food banks, and May merely replied, "There are many complex reasons why people go to food banks." Siddiqui wrote further, "(...) the reasons people turn to food banks are quite plain (and there have been studies that support them)." The Trussell Trust, the UK's biggest food bank network, has said that it helps people with "nowhere else to turn". Earlier [in 2018] it said that food banks in areas where the full Universal Credit service had been in place for 12 months or more were four times as busy. Then-UK Prime Minister David Cameron said in the House of Commons in 2012 that he welcomed the efforts of food banks. Caroline Spelman, his Secretary of State for Environment, Food and Rural Affairs, described food banks as an "excellent example" of active citizenship. Labour MP Kate Green has a different view, feeling that the rise of food banks reflects people being let down by the state welfare system, saying: "I feel a real burning anger about them ... People are very distressed at having to ask for food; it's humiliating and distressing." Cookery writer and poverty campaigner Jack Monroe wrote that those referred to food banks or given vouchers were "the lucky ones with a good doctor or health visitor who knows us well enough to recognize that something has gone seriously wrong" and expressed concern for those who lack this support. Food banks need extra donations during the summer holidays because school children do not receive free school meals during that time. Germany As of 2013, there were over 900 food banks in Germany, up from just one in 1993. In 2014, 1.5 million people a week used food banks in Germany. France In total, around 3.5 million people rely on food banks in France. One provider, the Banque Alimentaire, has over 100 branches in France, serving 200 million meals a year to 1.85 million people. Asia Several Asian countries and territories have begun to use food banks; these include Nepal, South Korea, Japan, Taiwan and Singapore. Hong Kong The first food bank in Hong Kong is Feeding Hong Kong, which was founded in 2009. Food Angel and the Foodlink Foundation are also food banks in Hong Kong. Japan According to the Ministry of Agriculture, Forestry and Fisheries in Japan, the number of food bank organizations stood at 178 in the 2022 fiscal year through March, marking a significant increase from the 120 seen two years earlier. As of 2022, there is at least one food bank organization in every prefecture in Japan. The importance of food banks has become more recognized during the COVID-19 pandemic. Singapore Founded in 2012, The Food Bank Singapore is a registered charity and part of the Global Food Banking Network, which reaches over 50 countries. Food from the Heart and Jamiyah FoodBank are two other food banks in the food-insecure nation of Singapore. Africa The Egyptian Food Bank was established in Cairo in 2006, and less than ten years later, food banks run on similar principles spread to other Arab countries in North Africa and the Middle East. In Sub-Saharan Africa, there are charity-run food banks that operate on a semi-commercial system that differs from both the more common "warehouse" and "frontline" models. 
In some rural least-developed countries such as Malawi, food is often relatively cheap and plentiful for the first few months after the harvest but then becomes more and more expensive. Food banks in those areas can buy large amounts of food shortly after the harvest, and then as food prices start to rise, they sell it back to local people throughout the year at well below market prices. Such food banks will sometimes also act as centres to provide smallholders and subsistence farmers with various forms of support. Formed in 2009, Food Bank South Africa (Food Bank SA) is South Africa's national food banking network and a member of the Global Food Banking Network. Worldwide Since the 1980s, food banking has spread around the world. There are over 40 countries and regions with active food bank groups under the umbrella of the Global Food Banking Network. Countries and regions in the international network include Australia, Israel, Turkey, Russia, India, Taiwan, Colombia, Brazil, Argentina, Chile, Guatemala, South Africa, Hong Kong, Singapore, South Korea and the UK. There are also several countries with food banks which have not yet joined the network, either because they do not yet meet the required criteria or because they have not applied. Climate change Food banking and related models have been proposed as a key solution for reducing greenhouse gas emissions. Around 8% of total emissions are due to food loss and waste. Through food rescue programs, food banks help reduce emissions by ensuring the productive use of the energy involved in the production of food and by diverting food away from landfills, where it would have spoiled and generated methane and other greenhouse gases. One estimate puts the greenhouse gas avoidance from food banks at more than 1.7 million tons in 2021. Reactions and concerns The rise of food banks has been "broadly welcomed". It is said that "not only do they provide a solution to the problem of hunger that does not require resources from the state", but they can be viewed "as evidence of increasing community spirit and of active, caring citizenship". In the UK, for example, Patrick Butler, society editor for The Guardian, has said: "Many politicians and campaigners are fascinated by the possibilities of food banks. After the initial shock that "things have come to this" there is, on the left of the political spectrum, a nervous excitement about the potential for community self-help. On the right, there is outright enthusiasm for what is seen as "big society" welfare in its purest form." There has also been concern expressed about food banks by some researchers and politicians. Drawing on the United States' experience after the rapid rise of food banks in the 1980s, American sociology professor Janet Poppendieck warned that the rise of food banks can contribute to the long-term erosion of human rights and support for welfare systems. Once food banks become well established, it can be politically impossible to return responsibility for meeting the needs of hungry people to the state. Poppendieck says that the logistics of running food banks can be so demanding that they prevent kind-hearted people from having time to participate in public policy advocacy; yet she also says that if they can be encouraged to lobby politicians for long-term changes, that would help those on a low income, as food banks often have considerable credibility with legislators. 
As of 2012, senior US food bank staff members have "expressed a preference" to remain politically neutral/refused to take a stand, which political activists have suggested may relate to their sources of funding/political pressure. The emergence of "Little Free Food Pantries" and "Blessing Boxes", modelled on the "Little Free Libraries" boxes, has been criticized as "feel-good local philanthropy" which is too small to make a significant impact on hunger, for its lack of access to fresh foods, for food safety concerns, and as a public relations effort by Tyson Foods, which seeks to cut federal SNAP food assistance in the US. Rachel Loopstra from University of Toronto has said food banks are often inefficient, unreliable and unable to supply nutritional food. She said a survey in Toronto found that only 1 in 5 families suffering from food insecurity would turn to food banks, in part because there is a stigma associated with having to do so. Elizabeth Dowler, Professor of Food & Social Policy at Warwick University, said that most British people prefer the state to take responsibility for helping the hungry. Hannah Lambie-Mumford, from Sheffield University, echoed the view that some users of food banks find having to ask for food humiliating, and also that food bank volunteers should be encouraged to advocate for long-term solutions to the underlying causes of poverty and hunger. Olivier De Schutter, a senior United Nations official charged with ensuring governments honour their obligation to safeguard their citizens' right to food, has expressed alarm at the rise of food banks. He has reminded the governments of the advanced economies in Europe and Canada that they have a "duty to protect" their citizens from hunger, and suggested that leaving such an obligation to food banks may be an abuse of human rights. Other criticism expresses alarm at "transnational corporate food banking which construct[s] domestic hunger as a matter for charity, thereby allowing indifferent and austerity-minded governments to ignore increasing poverty and food insecurity and their moral, legal and political obligations, under international law, to realize the right to food." See also Ag Against Hunger Canstruction Emerson Good Samaritan Food Donation Act FoodCloud (Ireland) Food Not Bombs Food security Gleaners Good Shepherd Food Bank Hopelink List of food banks National Association of Letter Carriers' Stamp Out Hunger Food Drive Northwest Harvest Olio (app) Poverty Notes References Further reading Canice Prendergast. 2017. "How Food Banks Use Markets to Feed the Poor." Journal of Economic Perspectives 31(4): 145–162. Canice Prendergast. 2022. "The Allocation of Food to Food Banks". Journal of Political Economy. External links The Global Foodbank network - includes resources to find food banks throughout the world. Charity Food waste Private aid programs Sharing economy
Food bank
[ "Biology" ]
6,850
[ "Behavior", "Altruism", "Private aid programs" ]
1,551,095
https://en.wikipedia.org/wiki/Leif%20Elggren
Leif Elggren (born 1950, Linköping, Sweden), is a Swedish artist who lives and works in Stockholm. Active since the late 1970s, Leif Elggren has become one of the most constantly surprising conceptual artists to work in the combined worlds of audio and visual. A writer, visual artist, stage performer and composer, he has many albums to his credits, solo and with the Sons of God, on labels such as Ash International, Touch, Radium and his own Firework Edition. His music, often conceived as the soundtrack to a visual installation or experimental stage performance, usually presents carefully selected sound sources over a long stretch of time and can range from quiet electronics to harsh noise. His wide-ranging and prolific body of art often involves dreams and subtle absurdities, social hierarchies turned upside-down, hidden actions and events taking on the quality of icons. Together with artist Carl Michael von Hausswolff, he is a founder of the Kingdoms of Elgaland-Vargaland (KREV) where he enjoys the title of king. History Elggren spent five years at the Academy of Fine Arts in Stockholm, specializing in drawing, design and bookprinting. In the late ‘70s he began to associate with performance groups, meeting people like Hausswolff and Thomas Liljenberg. With the latter he formed Firework in 1978, a duo that put up exhibitions and performances. Around the same time he purchased a press and started to publish art books. In 1982 he founded Firework Edition, a small publishing company, together with Liljenberg. In 1988 he formed the duo Guds Söner (The Sons of God) with Kent Tankred, whom he had met four years earlier. The duo excels in creating long, puzzling stage performances that give equal roles to physical action (or inaction) and soundtrack (live or taped) with themes such as violence, love, the quotidian, food and royalty. Elggren released his first 7" records in 1982 and 1984 on Hausswolff's label Radium. A first solo LP, Flown Over by an Old King, came out in 1988. The inception of Firework Edition in 1996 allowed Elggren to release more of his music and the growing popularity of installation art in avant-garde music circles (thanks to its ties with experimental electronica) has given his work more international exposure since the late ‘90s. Other key solo works include Talking to a Dead Queen (1996) and Pluralis Majestatis (2000). Together with Hausswolff, Elggren curated the Nordic Pavilion at the Venice Biennale in 2001. In 2007 he appeared (with John Duncan) at the Netmage festival in Bologna organized by xing and executed "Something Like Seeing in the Dark". 
Selected discography Flown Over by an Old King (Radium 226.05, 1988) Talking to a Dead Queen (Fylkingen, 1996) Pluralis Majestatis (Firework Edition, 2000) 9.11 (Desperation Is the Mother of Laughter) (Firework Edition, 2000) with Thomas Liljenberg UGN/MAT (2000) with Per Jonsson and Kent Tankred Two Thin Eating One Fat (Firework Edition, 2000) with Thomas Liljenberg Triangles (Moikai, 2001) with Kevin Drumm Extraction (2002) DEG (Firework Edition, 2002) with Mats Gustafsson and Kevin Drumm The Cobblestone Is the Weapon of the Proletariat (Firework Edition, 2004) Gottesdienst (iDEAL, 2006) Das Baank (Fragment Factory/Rekem Records, 2016) MOTOR for an Unknown Vehicle (Opening of the Grave) (Fragment Factory, 2019) Compilation appearances Emre (Dark Matter) (2000) suRRism-Phonoethics sPE_0100: Peak the Source Vol.3 (2011), Surrism-Phonoethics 30/4 (2013), Fragment Factory Interviews interview from Bananafish #16 interview from Dusted (2004) interview from Perfect Sound Forever (2005) External links Leif Elggren The Sons of God Firework Edition Records The Kingdoms of Elgaland-Vargaland [ AllMusic entry] Discography at discogs.com "At Venice Biennale, Artists Plant Flag for Their State (of Mind)" New York Times (June 9, 2007) 1950 births Living people Swedish artists Multimedia artists Micronational leaders
Leif Elggren
[ "Technology" ]
917
[ "Multimedia", "Multimedia artists" ]
1,551,118
https://en.wikipedia.org/wiki/Centaurus%20X-3
Centaurus X-3 (4U 1118–60) is an X-ray pulsar with a period of 4.84 seconds. It was the first X-ray pulsar to be discovered, and the third X-ray source to be discovered in the constellation Centaurus. The system consists of a neutron star orbiting a massive, O-type supergiant star dubbed Krzemiński's star after its discoverer, Wojciech Krzemiński. Matter is being accreted from the star onto the neutron star, resulting in X-ray emission. History Centaurus X-3 was first detected during observations of cosmic X-ray sources made on May 18, 1967. These initial X-ray spectrum and location measurements were performed using a sounding rocket. In 1971, further observations were performed with the Uhuru satellite, in the form of twenty-seven 100-second duration sightings. These sightings were found to pulsate with an average period of 4.84 seconds, with a variation in the period of 0.02 seconds. Later, it became clear that the period variations followed a 2.09-day sinusoidal curve around the 4.84-second period. These variations in the arrival time of the pulses were attributed to the Doppler effect caused by the orbital motion of the source, and were therefore evidence for the binary nature of Centaurus X-3. Despite detailed data from the Uhuru satellite as to the orbital period of the binary, the pulsation period in the X-ray band and the minimum mass of the occulting star, the optical component remained undiscovered for three years. This was partly because Cen X-3 lies in the plane of the Galaxy in the direction of the Carina Spiral Arm, and so observations were forced to differentiate among dozens of faint objects. Centaurus X-3 was finally identified with a faint, heavily reddened variable star lying just outside the error box predicted by Uhuru observations. The visible star was later named after its discoverer, the Polish astronomer Wojciech Krzemiński. Centaurus X-3 was the first source observed by the Russian X-ray telescope ART-XC. An image was released with the title "First Light Image of the Spektr-RG Observatory", showing the source imaged by the individual telescopes of ART-XC, as well as the light curve of Centaurus X-3 folded at its pulse period of 4.8 s. System Centaurus X-3 is located in the galactic plane about 5.7 kiloparsecs away, towards the direction of the Carina–Sagittarius Arm, and is a member of an occulting spectroscopic binary system. The visible component is Krzemiński's Star, a supergiant; the X-ray component is a rotating, magnetized neutron star. X-ray component The X-ray emission is fueled by the accretion of matter from the distended atmosphere of the blue giant falling through the inner Lagrangian point, L1. The overflowing gas probably forms an accretion disc and ultimately spirals inwards and falls onto the neutron star, releasing gravitational potential energy. The magnetic field of the neutron star channels the inflowing gas onto localized hot spots on the neutron star surface, where the X-ray emission occurs. The neutron star is regularly eclipsed by its giant companion every 2.1 days; these regular X-ray eclipses last approximately 1/4 of the orbital period. There are also sporadic X-ray "off" states. The spin period history of Centaurus X-3 shows a spin-up trend that is very prominent in the long-term decrease in its pulse period. This spin-up was first noted in Centaurus X-3 and Hercules X-1 and has since been noted in other X-ray pulsars. The most feasible explanation for the origin of this effect is a torque exerted on the neutron star by the accreting material. 
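The Doppler-shift argument used in the History section above can be illustrated with a short numerical sketch. Only the 4.84-second pulse period and the 2.09-day orbital period below come from the text; the line-of-sight velocity amplitude is an assumed, order-of-magnitude value chosen purely for illustration, and the function name is ours.

import math

# From the text: intrinsic pulse period and orbital period.
P_SPIN = 4.84              # seconds
P_ORB = 2.09 * 86400       # seconds
# Assumed illustrative amplitude of the line-of-sight orbital velocity (not from the text).
V_AMP = 4.0e5              # m/s
C = 2.998e8                # speed of light, m/s

def observed_period(t):
    """Apparent pulse period at time t, using the first-order Doppler shift."""
    v_los = V_AMP * math.sin(2 * math.pi * t / P_ORB)
    return P_SPIN * (1 + v_los / C)

for frac in (0.0, 0.25, 0.5, 0.75):
    print(f"orbital phase {frac:.2f}: apparent period {observed_period(frac * P_ORB):.5f} s")

With a velocity amplitude of this size the apparent period swings by a few milliseconds around 4.84 s over each 2.09-day orbit, the kind of sinusoidal modulation that revealed the binary nature of the source.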
Krzemiński's Star Krzemiński's Star is a slightly evolved, hot massive star of about 20.5 solar masses, with spectral type O6-7 II-III. There is little doubt as to the correctness of the optical candidate, since it is in apparent agreement with the period and phase of Cen X-3, and its double-wave light curve is similar in shape and amplitude to those seen in other known massive binary systems. The double-wave ellipsoidal light variations are produced by a tidally deformed giant that nearly fills its Roche lobe. The visible component corresponds to an OB II class star, comparable with the mass derived from X-ray data and consistent with the minimum radius that has been fixed by the X-ray eclipse duration. See also X-ray pulsar List of X-ray pulsars References External links Spin frequency history of Cen X-3 Cen+X-3 X-ray pulsars Centaurus X-ray binaries Neutron stars O-type bright giants Centauri, V779 O-type giants
Centaurus X-3
[ "Astronomy" ]
1,060
[ "Centaurus", "Constellations" ]
1,551,135
https://en.wikipedia.org/wiki/Absorption%20band
In spectroscopy, an absorption band is a range of wavelengths, frequencies or energies in the electromagnetic spectrum that is characteristic of a particular transition from an initial to a final state in a substance. According to quantum mechanics, atoms and molecules can only hold certain defined quantities of energy, or exist in specific states. When a quantum of electromagnetic radiation is emitted or absorbed by an atom or molecule, the energy of the radiation changes the state of the atom or molecule from an initial state to a final state. Overview When electromagnetic radiation is absorbed by an atom or molecule, the energy of the radiation changes the state of the atom or molecule from an initial state to a final state. The number of states in a specific energy range is discrete for gaseous or dilute systems, which have discrete energy levels. Condensed systems, like liquids or solids, have a continuous density-of-states distribution and often possess continuous energy bands. In order for a substance to change its energy, it must do so in a series of "steps", each involving the absorption of a photon. This absorption process can move a particle, like an electron, from an occupied state to an empty or unoccupied state. It can also move a whole vibrating or rotating system, like a molecule, from one vibrational or rotational state to another, or it can create a quasiparticle like a phonon or a plasmon in a solid. Electromagnetic transitions When a photon is absorbed, the electromagnetic field of the photon disappears as it initiates a change in the state of the system that absorbs the photon. Energy, momentum, angular momentum, magnetic dipole moment and electric dipole moment are transported from the photon to the system. Because these conservation laws have to be satisfied, the transition has to meet a series of constraints; this results in a series of selection rules. Not every transition that lies within the observed energy or frequency range is therefore possible. The strength of an electromagnetic absorption process is mainly determined by two factors. First, transitions that only change the magnetic dipole moment of the system are much weaker than transitions that change the electric dipole moment, and transitions to higher-order moments, like quadrupole transitions, are weaker than dipole transitions. Second, not all transitions have the same transition matrix element, absorption coefficient or oscillator strength. For some types of bands or spectroscopic disciplines, temperature and statistical mechanics play an important role. For the (far) infrared, microwave and radio-frequency ranges, the temperature-dependent occupation numbers of states and the difference between Bose–Einstein and Fermi–Dirac statistics determine the intensity of observed absorptions. For other energy ranges, thermal-motion effects like Doppler broadening may determine the linewidth. Band and line shape A wide variety of absorption band and line shapes exist, and the analysis of the band or line shape can be used to determine information about the system that causes it. In many cases it is convenient to assume that a narrow spectral line is a Lorentzian or a Gaussian, depending respectively on the decay mechanism or on temperature effects like Doppler broadening. Analysis of the spectral density and of the intensities, widths and shapes of spectral lines can sometimes yield a great deal of information about the observed system, as is done with Mössbauer spectra. 
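As a small illustration of the two idealized line shapes just mentioned, the sketch below evaluates area-normalized Gaussian and Lorentzian profiles of equal width; the centre and width are arbitrary placeholder numbers rather than values from the text, and the function names are ours.

import math

def gaussian(x, x0, fwhm):
    """Area-normalized Gaussian profile, typical of Doppler (thermal) broadening."""
    sigma = fwhm / (2 * math.sqrt(2 * math.log(2)))
    return math.exp(-0.5 * ((x - x0) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def lorentzian(x, x0, fwhm):
    """Area-normalized Lorentzian profile, typical of lifetime (decay) broadening."""
    gamma = fwhm / 2  # half width at half maximum
    return (gamma / math.pi) / ((x - x0) ** 2 + gamma ** 2)

x0, fwhm = 500.0, 2.0  # arbitrary line centre and width (e.g. in nanometres)
for dx in (0.0, 1.0, 3.0, 10.0):
    g = gaussian(x0 + dx, x0, fwhm)
    l = lorentzian(x0 + dx, x0, fwhm)
    print(f"offset {dx:5.1f}: Gaussian {g:.4e}   Lorentzian {l:.4e}")
# Far from the centre the Lorentzian wings dominate, which is one way the
# dominant broadening mechanism can be read off a measured band shape.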
In systems with a very large number of states, like macromolecules and large conjugated systems, the separate energy levels cannot always be distinguished in an absorption spectrum. If the line-broadening mechanism is known and the shape of the spectral density is clearly visible in the spectrum, it is possible to get the desired data. Sometimes it is enough to know the lower or upper limits of the band, or its position, for an analysis. For condensed matter and solids, the shape of absorption bands is often determined by transitions between states in their continuous density-of-states distributions. For crystals, the electronic band structure determines the density of states. In fluids, glasses and amorphous solids, there is no long-range correlation and the dispersion relations are isotropic. For charge-transfer complexes and conjugated systems, the band width is complicated by a variety of factors, compared to condensed matter. Types Electronic transitions Electromagnetic transitions in atoms, molecules and condensed matter mainly take place at energies corresponding to the UV and visible part of the spectrum. Core electrons in atoms, and many other phenomena, are observed with different types of XAS in the X-ray energy range. Electromagnetic transitions in atomic nuclei, as observed in Mössbauer spectroscopy, take place in the gamma-ray part of the spectrum. The main factors that cause broadening of the spectral line into an absorption band of a molecular solid are the distributions of vibrational and rotational energies of the molecules in the sample (and also those of their excited states). In solid crystals, the shape of absorption bands is determined by the density of states of the initial and final electronic states, or of lattice vibrations (phonons), in the crystal structure. In gas-phase spectroscopy, the fine structure afforded by these factors can be discerned, but in solution-state spectroscopy, the differences in molecular micro-environments further broaden the structure to give smooth bands. Electronic transition bands of molecules may be from tens to several hundred nanometers in breadth. Vibrational transitions Vibrational transitions and optical phonon transitions take place in the infrared part of the spectrum, at wavelengths of around 1–30 micrometres. Rotational transitions Rotational transitions take place in the far infrared and microwave regions. Other transitions Absorption bands in the radio-frequency range are found in NMR spectroscopy. The frequency ranges and intensities are determined by the magnetic moment of the nuclei that are observed, the applied magnetic field and the temperature-dependent occupation-number differences of the magnetic states. Applications Materials with broad absorption bands are applied in pigments, dyes and optical filters. Titanium dioxide, zinc oxide and chromophores are applied as UV absorbers and reflectors in sunscreen. Absorption bands of interest to the atmospheric physicist In oxygen: the Hopfield bands, very strong, between about 67 and 100 nanometres in the ultraviolet (named after John J. 
Hopfield); a diffuse system between 101.9 and 130 nanometres; the Schumann–Runge continuum, very strong, between 135 and 176 nanometres; the Schumann–Runge bands between 176 and 192.6 nanometres (named for Victor Schumann and Carl Runge); the Herzberg bands between 240 and 260 nanometres (named after Gerhard Herzberg); the atmospheric bands between 538 and 771 nanometres in the visible spectrum; including the oxygen δ (~580 nm), γ (~629 nm), B (~688 nm), and A-band (~759-771 nm) a system in the infrared at about 1000 nanometres. In ozone: the Hartley bands between 200 and 300 nanometres in the ultraviolet, with a very intense maximum absorption at 255 nanometres (named after Walter Noel Hartley); the Huggins bands, weak absorption between 320 and 360 nanometres (named after Sir William Huggins); the Chappuis bands (sometimes misspelled "Chappius"), a weak diffuse system between 375 and 650 nanometres in the visible spectrum (named after J. Chappuis); and the Wulf bands in the infrared beyond 700 nm, centered at 4,700, 9,600 and 14,100 nanometres, the latter being the most intense (named after Oliver R. Wulf). In nitrogen: The Lyman–Birge–Hopfield bands, sometimes known as the Birge–Hopfield bands, in the far ultraviolet: 140– 170 nm (named after Theodore Lyman, Raymond T. Birge, and John J. Hopfield) See also Franck–Condon principle Spectroscopy Spectral line References Spectroscopy
Absorption band
[ "Physics", "Chemistry" ]
1,601
[ "Instrumental analysis", "Molecular physics", "Spectroscopy", "Spectrum (physical sciences)" ]
1,551,187
https://en.wikipedia.org/wiki/Pledge%20drive
A pledge drive is an extended period of fundraising activities, generally used by public broadcasting stations to increase contributions. The term "pledge" originates from the promise that a contributor makes to send in funding at regular intervals for a certain amount of time. During a pledge drive, regular and special programming is followed by on-air appeals for pledges by station employees, who ask the audience to make their contributions, usually by phone or the Internet, during this break. Pledge drives are typically held two to four times annually, at calendar periods which vary depending on the scheduling designated by the local public broadcasting station. Background Pledge drives are especially common among U.S. stations. Public broadcasting organizations like National Public Radio (NPR) and the Public Broadcasting Service (PBS) are largely dependent on program fees paid by their member stations. The federal government of the United States provides some money for them, primarily through the Corporation for Public Broadcasting (CPB), and corporate underwriting. American public broadcasting services hold pledge drives about two to three times each year, each one usually lasting one to two weeks. Some religious broadcasting organizations, including Educational Media Foundation (which operates the K-Love and Air1 radio networks), also rely heavily on such program fees. These stations require funding in turn from listeners and viewers (as well as, if necessary, local corporate sponsors) for not only these fees, but also other daily operating costs, and stage regular pledge drives in an attempt to persuade their audiences to contribute donations. Originally, such programming consisted of arts presentations such as classical music, drama, and documentaries. However, the audience for supposedly "high-brow" fare began declining steadily during the 1980s and 1990s, due to the attrition of the generations to whom such programming mainly appealed. Younger people were less interested in the higher arts, for a variety of reasons having to do with the eclipse of "high culture" in American society. In order to appeal to such a largely Euro-American, middle-aged and affluent demographic (the so-called "Baby Boomers" and "Generation X"), PBS has resorted to specials such as self-help programs with speakers such as Suze Orman, nostalgic popular music concerts (including T. J. Lubinsky's My Music concert series, produced specifically for pledge drive airings), and special versions of PBS' traditionally popular "how-to" programs. This approach was largely pioneered by the Oklahoma Educational Television Authority (OETA), which introduced a number of popular music specials as part of its 1987 pledge drive. A retrospective on The Lawrence Welk Show was originally introduced as pledge drive material in 1987; its popularity prompted the OETA to acquire rerun rights to the series and distribute it through PBS. A hallmark of pledge breaks is the "pledge room", where the speakers deliver their message as volunteering individuals answer ringing telephones in the background, though in some cases, it may actually be a fictionalized part of the program (noticeable if the pledge room is drastically different from program to program and is neutralized, featuring none of the member station's logos within the set dressing), with the volunteers actually paid actors feigning telephone calls and the hosts having been filmed months before. 
Small prizes such as mugs, tote bags, various DVD sets, and books (known as "thank-you" gifts or, euphemistically, as "premiums"), as well as entries into drawings for larger awards such as trips and vehicles donated by local businesses, are also offered by many stations in return for pledging certain amounts of money. The pledges can be done by either paying per month or a one-time contribution, e.g. $15 a month or $180. Controversy Pledge drives have been controversial for most of their existence. While pledge drives are an effective method of raising money for stations, they usually annoy viewers and listeners, who find the regular interruption of what is ordinarily commercial-free content and the station's regular programming being suspended for lifestyle and music specials to be a nuisance. Audience numbers often decline during pledge drives; to compensate, most television stations air special television shows during these fundraising periods. This practice began in earnest in the mid-1970s due to CPB funding cutbacks that were the result of political pressures and the recessions of the time, as well as increasing inflation. As the proportions of government funding in stations' budgets continued to decline over time, such programs became more elaborate in order to sway people who would otherwise watch public television only sporadically (or not at all) to tune in, and possibly donate money in response to appeals during program breaks. There has also been criticism of the format depending on controversial self-help writers or lecturers not usually a part of any regular PBS member station's schedule, or if the presented program is targeted to appeal only to a wealthy and/or older demographic (as seen with Doo Wop 50) while completely ignoring the viewing needs of other audiences. Stations also have had to reckon with balancing out or dispensing with pledge drives entirely during PBS Kids children's programming, as due to their very nature, the disruption of a routine, for a matter children are unable to understand or contribute to, could drive or push those young viewers towards commercial children's programming on other networks or Internet streaming. Generally speaking, the phenomenon is less pronounced on American public radio stations, primarily because of the high popularity of the news and talk programs on that medium and the routine-based patterns of radio listeners that are much more easily disrupted than those of television, along with stricter underwriting guidelines and less tolerance for the television formats and hosts on radio. Much of the focus is placed upon the "drive time" NPR news programs Morning Edition and All Things Considered, which have the highest ratings of all public broadcasting in the U.S. This is in contrast to PBS member stations sometimes holding their drives during prime time daily and on weekend afternoons, and not during the daytime on weekdays or weekend mornings, when children's programming is typically scheduled. However, in light of intense competition public broadcasting faces from a greatly expanded media environment, other stations, especially radio, have aimed to eliminate pledge drives altogether, or significantly reduce their length, by asking for contributions throughout the year during regular station identification breaks. On radio, such programs as ATC may have one of their planned stories deleted simply to extend the length of the fund-raising "pitches". 
In a more recent trend, some stations also advertise that pledge drives will be shortened by one day for every day's worth of contributions donated in the weeks leading up to a drive. Additionally, some radio stations have started using prospect screening during their pledge drive to identify potential major donors for later fundraising activities. Another service which has cut down pledge drives is the introduction of PBS's Passport streaming service, which provides a tangible and continuing item (full streaming access to several years of PBS's programs) with a monthly or yearly contribution, rather than a one-time premium. See also Telethon Underwriting spot "The Pledge Drive", an episode of Seinfeld about a WNET pledge drive "The One Where Phoebe Hates PBS", an episode of Friends also featuring a WNET pledge drive and guest-starring Gary Collins as the drive's host References External links Philanthropy Publicly funded broadcasters Telethons
Pledge drive
[ "Biology" ]
1,499
[ "Philanthropy", "Behavior", "Altruism" ]
1,551,195
https://en.wikipedia.org/wiki/Lead%20chamber%20process
The lead chamber process was an industrial method used to produce sulfuric acid in large quantities. It has been largely supplanted by the contact process. In 1746 in Birmingham, England, John Roebuck began producing sulfuric acid in lead-lined chambers, which were stronger and less expensive and could be made much larger than the glass containers that had been used previously. This allowed the effective industrialization of sulfuric acid production, and with several refinements, this process remained the standard method of production for almost two centuries. The process was so robust that as late as 1946, the chamber process still accounted for 25% of sulfuric acid manufactured. History Sulfur dioxide is introduced with steam and nitrogen dioxide into large chambers lined with sheet lead where the gases are sprayed down with water and chamber acid (62–70% sulfuric acid). The sulfur dioxide and nitrogen dioxide dissolve, and over a period of approximately 30 minutes the sulfur dioxide is oxidized to sulfuric acid. The presence of nitrogen dioxide is necessary for the reaction to proceed at a reasonable rate. The process is highly exothermic, and a major consideration of the design of the chambers was to provide a way to dissipate the heat formed in the reactions. Early plants used very large lead-lined wooden rectangular chambers (Faulding box chambers) that were cooled by ambient air. The internal lead sheathing served to contain the corrosive sulfuric acid and to render the wooden chambers waterproof. In the 1820s-1830s, French chemist Joseph Louis Gay-Lussac (simultaneously and likely in collaboration with William Gossage) realized that it is not the bulk of liquid determining the speed of reaction but the internal area of the chamber, so he redesigned the chambers as stoneware packed masonry cylinders, which was an early example of the packed bed. In the 20th century, plants using Mills-Packard chambers supplanted the earlier designs. These chambers were tall tapered cylinders that were externally cooled by water flowing down the outside surface of the chamber. Sulfur dioxide for the process was provided by burning elemental sulfur or by the roasting of sulfur-containing metal ores in a stream of air in a furnace. During the early period of manufacture, nitrogen oxides were produced by the decomposition of niter at high temperature in the presence of acid, but this process was gradually supplanted by the air oxidation of ammonia to nitric oxide in the presence of a catalyst. The recovery and reuse of oxides of nitrogen was an important economic consideration in the operation of a chamber process plant. In the reaction chambers, nitric oxide reacts with oxygen to produce nitrogen dioxide. Liquid from the bottom of the chambers is diluted and pumped to the top of the chamber, and sprayed downward in a fine mist. Sulfur dioxide and nitrogen dioxide are absorbed in the liquid, and react to form sulfuric acid and nitric oxide. The liberated nitric oxide is sparingly soluble in water, and returns to the gas in the chamber where it reacts with oxygen in the air to reform nitrogen dioxide. Some percentage of the nitrogen oxides is sequestered in the reaction liquor as nitrosylsulfuric acid and as nitric acid, so fresh nitric oxide must be added as the process proceeds. Later versions of chamber plants included a high-temperature Glover tower to recover the nitrogen oxides from the chamber liquor, while concentrating the chamber acid to as much as 78% H2SO4. 
Exhaust gases from the chambers are scrubbed by passing them into a tower, through which some of the Glover acid flows over broken tile. Nitrogen oxides are absorbed to form nitrosylsulfuric acid, which is then returned to the Glover tower to reclaim the oxides of nitrogen. Sulfuric acid produced in the reaction chambers is limited to about 35% concentration. At higher concentrations, nitrosylsulfuric acid precipitates upon the lead walls in the form of 'chamber crystals', and is no longer able to catalyze the oxidation reactions. Chemistry Sulfur dioxide is generated by burning elemental sulfur or by roasting pyritic ore in a current of air:
S8 + 8 O2 → 8 SO2
4 FeS2 + 11 O2 → 2 Fe2O3 + 8 SO2
Nitrogen oxides are produced by decomposition of niter in the presence of sulfuric acid, or by hydrolysis of nitrosylsulfuric acid:
2 NaNO3 + H2SO4 → Na2SO4 + H2O + NO + NO2 + O2
2 NOHSO4 + H2O → 2 H2SO4 + NO + NO2
In the reaction chambers, sulfur dioxide and nitrogen dioxide dissolve in the reaction liquor. Nitrogen dioxide is hydrated to produce nitrous acid, which then oxidizes the sulfur dioxide to sulfuric acid and nitric oxide. The reactions are not well characterized, but it is known that nitrosylsulfuric acid is an intermediate in at least one pathway. The major overall reactions are:
2 NO2 + H2O → HNO2 + HNO3
SO2 (aq) + HNO3 → NOHSO4
NOHSO4 + HNO2 → H2SO4 + NO2 + NO
SO2 (aq) + 2 HNO2 → H2SO4 + 2 NO
Nitric oxide escapes from the reaction liquor and is subsequently reoxidized by molecular oxygen to nitrogen dioxide. This is the overall rate determining step in the process:
2 NO + O2 → 2 NO2
Nitrogen oxides are absorbed and regenerated in the process, and thus serve as a catalyst for the overall reaction:
2 SO2 + 2 H2O + O2 → 2 H2SO4
References Further reading External links Process flow sheet of sulphuric acid manufacturing by lead chamber process Industrial processes Lead Sulfur Catalysis
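Because the overall reaction above turns one mole of SO2, and hence one mole of burned sulfur, into one mole of sulfuric acid, the theoretical yield follows directly from molar masses. The short sketch below works this out; it is an idealized illustration assuming complete conversion, not plant data, and the function name is ours.

M_S = 32.06                              # molar mass of sulfur, g/mol
M_H2SO4 = 2 * 1.008 + 32.06 + 4 * 16.00  # molar mass of H2SO4, about 98.08 g/mol

def h2so4_from_sulfur(kg_sulfur):
    """Theoretical mass of H2SO4 (kg, 100% basis) from burning kg_sulfur of elemental
    sulfur, assuming every sulfur atom ends up in one H2SO4 molecule (S -> SO2 -> H2SO4)."""
    return kg_sulfur * M_H2SO4 / M_S

print(f"1 tonne of sulfur gives about {h2so4_from_sulfur(1000):.0f} kg of H2SO4 (100% basis)")
# Chamber acid is only 62-70% H2SO4, so the corresponding mass of product liquor is larger still.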
Lead chamber process
[ "Chemistry" ]
1,203
[ "Catalysis", "Chemical kinetics" ]
1,551,283
https://en.wikipedia.org/wiki/MS-DOS%20Editor
MS-DOS Editor, commonly just called edit or edit.com, is a TUI text editor that comes with MS-DOS 5.0 and later, as well as all 32-bit x86 versions of Windows, until Windows 10. It supersedes edlin, the standard editor in earlier versions of MS-DOS. In MS-DOS, it was a stub for QBasic running in editor mode. Starting with Windows 95, MS-DOS Editor became a standalone program because QBasic no longer shipped with Windows. The Editor may be used as a substitute for Windows Notepad on Windows 9x, although both are limited to small files only. MS-DOS versions are limited to a maximum file size that depends on how much conventional memory is free. The Editor can edit files that are up to 65,279 lines and up to approximately 5 MB in size. Versions The Editor version 1.0 appeared in MS-DOS 5.00, PC DOS 5.0, OS/2, and Windows NT 4.0. These editors rely on QBasic 1.0. This version can only open one file, up to the limit of DOS memory. It can also open the quick help file in a split window. The Editor version 1.1 appeared in MS-DOS 6.0. It uses QBasic 1.1, but no new features were added to the Editor. PC DOS 6 does not include the edit command. Instead, it has the DOS E Editor. This was upgraded to support a mouse and menus in version 7.0. The Editor version 2.0 appeared with Windows 95, as a standalone app that no longer requires QBasic. This version has been included with all 32-bit x86 versions of Windows, until Windows 10. Being a 16-bit DOS app, it does not directly run on x64, IA-64, or ARM64 versions of Windows. The FreeDOS version was developed by Shaun Raven and is licensed under the GPL. Features MS-DOS Editor uses a text user interface, and its color scheme can be adjusted. It has a multiple-document interface in which version 2.0 (as included in DOS 7 or Windows 9x) can open up to 9 files at a time, while earlier versions (included in DOS 5 and 6) are limited to only one file. The screen can be split vertically into two panes, which can be used to view two files simultaneously or different parts of the same file. It can also open files in binary mode, where a fixed number of characters are displayed per line, with newlines treated like any other character. This mode shows characters as hexadecimal characters (0-9 and A-F). The Editor converts Unix newlines to DOS newlines and has mouse support. Some of these features were added only in version 2.0. References Further reading External links "edit" on Microsoft Docs DOS software Windows components DOS text editors Console applications 1991 software
MS-DOS Editor
[ "Technology" ]
599
[ "Windows commands", "Computing commands" ]
4,664
https://en.wikipedia.org/wiki/B%C3%A9zier%20curve
A Bézier curve is a parametric curve used in computer graphics and related fields. A set of discrete "control points" defines a smooth, continuous curve by means of a formula. Usually the curve is intended to approximate a real-world shape that otherwise has no mathematical representation or whose representation is unknown or too complicated. The Bézier curve is named after French engineer Pierre Bézier (1910–1999), who used it in the 1960s for designing curves for the bodywork of Renault cars. Other uses include the design of computer fonts and animation. Bézier curves can be combined to form a Bézier spline, or generalized to higher dimensions to form Bézier surfaces. The Bézier triangle is a special case of the latter. In vector graphics, Bézier curves are used to model smooth curves that can be scaled indefinitely. "Paths", as they are commonly referred to in image manipulation programs, are combinations of linked Bézier curves. Paths are not bound by the limits of rasterized images and are intuitive to modify. Bézier curves are also used in the time domain, particularly in animation, user interface design and smoothing cursor trajectory in eye gaze controlled interfaces. For example, a Bézier curve can be used to specify the velocity over time of an object such as an icon moving from A to B, rather than simply moving at a fixed number of pixels per step. When animators or interface designers talk about the "physics" or "feel" of an operation, they may be referring to the particular Bézier curve used to control the velocity over time of the move in question. This also applies to robotics where the motion of a welding arm, for example, should be smooth to avoid unnecessary wear. Invention The mathematical basis for Bézier curves—the Bernstein polynomials—was established in 1912, but the polynomials were not applied to graphics until some 50 years later when mathematician Paul de Casteljau in 1959 developed de Casteljau's algorithm, a numerically stable method for evaluating the curves, and became the first to apply them to computer-aided design at French automaker Citroën. De Casteljau's method was patented in France but not published until the 1980s while the Bézier polynomials were widely publicised in the 1960s by the French engineer Pierre Bézier, who discovered them independently and used them to design automobile bodies at Renault. Specific cases A Bézier curve is defined by a set of control points P0 through Pn, where n is called the order of the curve (n = 1 for linear, 2 for quadratic, 3 for cubic, etc.). The first and last control points are always the endpoints of the curve; however, the intermediate control points generally do not lie on the curve. The sums in the following sections are to be understood as affine combinations – that is, the coefficients sum to 1. Linear Bézier curves Given distinct points P0 and P1, a linear Bézier curve is simply a line between those two points. The curve is given by B(t) = P0 + t(P1 − P0) = (1 − t)P0 + tP1, for 0 ≤ t ≤ 1. This is the simplest Bézier curve and is equivalent to linear interpolation. The quantity P1 − P0 represents the displacement vector from the start point to the end point. Quadratic Bézier curves A quadratic Bézier curve is the path traced by the function B(t), given points P0, P1, and P2, B(t) = (1 − t)²P0 + 2(1 − t)t P1 + t²P2, for 0 ≤ t ≤ 1, which can be interpreted as the linear interpolant of corresponding points on the linear Bézier curves from P0 to P1 and from P1 to P2 respectively. 
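The linear and quadratic cases translate directly into code. The short Python sketch below (function and variable names are ours, chosen for illustration) evaluates the quadratic curve both from its explicit form and as the interpolation of two linear curves, and checks that the two agree.

def lerp(p, q, t):
    """Linear Bézier curve: interpolate between points p and q at parameter t."""
    return tuple((1 - t) * a + t * b for a, b in zip(p, q))

def quadratic_bezier(p0, p1, p2, t):
    """Quadratic Bézier curve in explicit form: (1-t)^2 P0 + 2(1-t)t P1 + t^2 P2."""
    return tuple((1 - t) ** 2 * a + 2 * (1 - t) * t * b + t ** 2 * c
                 for a, b, c in zip(p0, p1, p2))

p0, p1, p2 = (0.0, 0.0), (1.0, 2.0), (3.0, 0.0)
for t in (0.0, 0.25, 0.5, 1.0):
    direct = quadratic_bezier(p0, p1, p2, t)
    nested = lerp(lerp(p0, p1, t), lerp(p1, p2, t), t)  # interpolant of the two linear curves
    assert all(abs(a - b) < 1e-12 for a, b in zip(direct, nested))
    print(t, direct)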
Rearranging the preceding equation yields: This can be written in a way that highlights the symmetry with respect to P1: Which immediately gives the derivative of the Bézier curve with respect to t: from which it can be concluded that the tangents to the curve at P0 and P2 intersect at P1. As t increases from 0 to 1, the curve departs from P0 in the direction of P1, then bends to arrive at P2 from the direction of P1. The second derivative of the Bézier curve with respect to t is Cubic Bézier curves Four points P0, P1, P2 and P3 in the plane or in higher-dimensional space define a cubic Bézier curve. The curve starts at P0 going toward P1 and arrives at P3 coming from the direction of P2. Usually, it will not pass through P1 or P2; these points are only there to provide directional information. The distance between P1 and P2 determines "how far" and "how fast" the curve moves towards P1 before turning towards P2. Writing BPi,Pj,Pk(t) for the quadratic Bézier curve defined by points Pi, Pj, and Pk, the cubic Bézier curve can be defined as an affine combination of two quadratic Bézier curves: The explicit form of the curve is: For some choices of P1 and P2 the curve may intersect itself, or contain a cusp. Any series of 4 distinct points can be converted to a cubic Bézier curve that goes through all 4 points in order. Given the starting and ending point of some cubic Bézier curve, and the points along the curve corresponding to t = 1/3 and t = 2/3, the control points for the original Bézier curve can be recovered. The derivative of the cubic Bézier curve with respect to t is The second derivative of the Bézier curve with respect to t is General definition Bézier curves can be defined for any degree n. Recursive definition A recursive definition for the Bézier curve of degree n expresses it as a point-to-point linear combination (linear interpolation) of a pair of corresponding points in two Bézier curves of degree n − 1. Let denote the Bézier curve determined by any selection of points P0, P1, ..., Pk. Then to start, This recursion is elucidated in the animations below. Explicit definition The formula can be expressed explicitly as follows (where t0 and (1-t)0 are extended continuously to be 1 throughout [0,1]): where are the binomial coefficients. For example, when n = 5: Terminology Some terminology is associated with these parametric curves. We have where the polynomials are known as Bernstein basis polynomials of degree n. t0 = 1, (1 − t)0 = 1, and the binomial coefficient, , is: The points Pi are called control points for the Bézier curve. The polygon formed by connecting the Bézier points with lines, starting with P0 and finishing with Pn, is called the Bézier polygon (or control polygon). The convex hull of the Bézier polygon contains the Bézier curve. Polynomial form Sometimes it is desirable to express the Bézier curve as a polynomial instead of a sum of less straightforward Bernstein polynomials. Application of the binomial theorem to the definition of the curve followed by some rearrangement will yield where This could be practical if can be computed prior to many evaluations of ; however one should use caution as high order curves may lack numeric stability (de Casteljau's algorithm should be used if this occurs). Note that the empty product is 1. Properties The curve begins at and ends at ; this is the so-called endpoint interpolation property. The curve is a line if and only if all the control points are collinear. 
The start and end of the curve is tangent to the first and last section of the Bézier polygon, respectively. A curve can be split at any point into two subcurves, or into arbitrarily many subcurves, each of which is also a Bézier curve. Some curves that seem simple, such as the circle, cannot be described exactly by a Bézier or piecewise Bézier curve; though a four-piece cubic Bézier curve can approximate a circle (see composite Bézier curve), with a maximum radial error of less than one part in a thousand, when each inner control point (or offline point) is the distance horizontally or vertically from an outer control point on a unit circle. More generally, an n-piece cubic Bézier curve can approximate a circle, when each inner control point is the distance from an outer control point on a unit circle, where (i.e. ), and . Every quadratic Bézier curve is also a cubic Bézier curve, and more generally, every degree n Bézier curve is also a degree m curve for any m > n. In detail, a degree n curve with control points is equivalent (including the parametrization) to the degree n + 1 curve with control points , where , and define , . Bézier curves have the variation diminishing property. What this means in intuitive terms is that a Bézier curve does not "undulate" more than the polygon of its control points, and may actually "undulate" less than that. There is no local control in degree n Bézier curves—meaning that any change to a control point requires recalculation of and thus affects the aspect of the entire curve, "although the further that one is from the control point that was changed, the smaller is the change in the curve". A Bézier curve of order higher than two may intersect itself or have a cusp for certain choices of the control points. Second-order curve is a parabolic segment A quadratic Bézier curve is also a segment of a parabola. As a parabola is a conic section, some sources refer to quadratic Béziers as "conic arcs". With reference to the figure on the right, the important features of the parabola can be derived as follows: Tangents to the parabola at the endpoints of the curve (A and B) intersect at its control point (C). If D is the midpoint of AB, the tangent to the curve which is perpendicular to CD (dashed cyan line) defines its vertex (V). Its axis of symmetry (dash-dot cyan) passes through V and is perpendicular to the tangent. E is either point on the curve with a tangent at 45° to CD (dashed green). If G is the intersection of this tangent and the axis, the line passing through G and perpendicular to CD is the directrix (solid green). The focus (F) is at the intersection of the axis and a line passing through E and perpendicular to CD (dotted yellow). The latus rectum is the line segment within the curve (solid yellow). Derivative The derivative for a curve of order n is Constructing Bézier curves Linear curves Let t denote the fraction of progress (from 0 to 1) the point B(t) has made along its traversal from P0 to P1. For example, when t=0.25, B(t) is one quarter of the way from point P0 to P1. As t varies from 0 to 1, B(t) draws a line from P0 to P1. Quadratic curves For quadratic Bézier curves one can construct intermediate points Q0 and Q1 such that as t varies from 0 to 1: Point Q0(t) varies from P0 to P1 and describes a linear Bézier curve. Point Q1(t) varies from P1 to P2 and describes a linear Bézier curve. Point B(t) is interpolated linearly between Q0(t) to Q1(t) and describes a quadratic Bézier curve. 
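The same nested interpolation extends to any number of control points; the resulting procedure is de Casteljau's algorithm, discussed further below. A minimal Python sketch, with helper names chosen purely for illustration:

```python
def de_casteljau(points, t):
    """Evaluate a Bezier curve of arbitrary degree at parameter t by repeatedly
    taking linear interpolants of adjacent control points until one point is left."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# A cubic example with four illustrative control points.
print(de_casteljau([(0, 0), (1, 2), (3, 2), (4, 0)], 0.5))
```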
Higher-order curves For higher-order curves one needs correspondingly more intermediate points. For cubic curves one can construct intermediate points Q0, Q1, and Q2 that describe linear Bézier curves, and points R0 and R1 that describe quadratic Bézier curves: For fourth-order curves one can construct intermediate points Q0, Q1, Q2 and Q3 that describe linear Bézier curves, points R0, R1 and R2 that describe quadratic Bézier curves, and points S0 and S1 that describe cubic Bézier curves: For fifth-order curves, one can construct similar intermediate points. These representations rest on the process used in De Casteljau's algorithm to calculate Bézier curves. Offsets (or stroking) of Bézier curves The curve at a fixed offset from a given Bézier curve, called an offset or parallel curve in mathematics (lying "parallel" to the original curve, like the offset between rails in a railroad track), cannot be exactly formed by a Bézier curve (except in some trivial cases). In general, the two-sided offset curve of a cubic Bézier is a 10th-order algebraic curve and more generally for a Bézier of degree n the two-sided offset curve is an algebraic curve of degree 4n − 2. However, there are heuristic methods that usually give an adequate approximation for practical purposes. In the field of vector graphics, painting two symmetrically distanced offset curves is called stroking (the Bézier curve or in general a path of several Bézier segments). The conversion from offset curves to filled Bézier contours is of practical importance in converting fonts defined in Metafont, which require stroking of Bézier curves, to the more widely used PostScript type 1 fonts, which only require (for efficiency purposes) the mathematically simpler operation of filling a contour defined by (non-self-intersecting) Bézier curves. Degree elevation A Bézier curve of degree n can be converted into a Bézier curve of degree n + 1 with the same shape. This is useful if software supports Bézier curves only of specific degree. For example, systems that can only work with cubic Bézier curves can implicitly work with quadratic curves by using their equivalent cubic representation. To do degree elevation, we use the equality Each component is multiplied by (1 − t) and t, thus increasing a degree by one, without changing the value. Here is the example of increasing degree from 2 to 3. In other words, the original start and end points are unchanged. The new control points are and . For arbitrary n we use equalities Therefore: introducing arbitrary and . Therefore, new control points are Repeated degree elevation The concept of degree elevation can be repeated on a control polygon R to get a sequence of control polygons R, R1, R2, and so on. After r degree elevations, the polygon Rr has the vertices P0,r, P1,r, P2,r, ..., Pn+r,r given by It can also be shown that for the underlying Bézier curve B, Degree reduction Degree reduction can only be done exactly when the curve in question is originally elevated from a lower degree. A number of approximation algorithms have been proposed and used in practice. Rational Bézier curves The rational Bézier curve adds adjustable weights to provide closer approximations to arbitrary shapes. The numerator is a weighted Bernstein-form Bézier curve and the denominator is a weighted sum of Bernstein polynomials. Rational Bézier curves can, among other uses, be used to represent segments of conic sections exactly, including circular arcs. 
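Returning briefly to the degree-elevation rule described above: each new control point is a convex combination of two consecutive original control points, and the endpoints are left unchanged. A hedged Python sketch of that rule (the names are illustrative, not a library API):

```python
def elevate_degree(points):
    """Convert a degree-n Bezier curve into the equivalent degree-(n+1) curve.
    The i-th new control point is a convex combination of old points i-1 and i."""
    n = len(points) - 1
    new_points = [points[0]]                      # the start point is unchanged
    for i in range(1, n + 1):
        w = i / (n + 1)
        new_points.append(tuple(w * a + (1 - w) * b
                                for a, b in zip(points[i - 1], points[i])))
    new_points.append(points[-1])                 # the end point is unchanged
    return new_points

# A quadratic curve re-expressed as a cubic with the same shape.
cubic = elevate_degree([(0, 0), (1, 2), (2, 0)])
```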
Given n + 1 control points P0, ..., Pn, the rational Bézier curve can be described by or simply The expression can be extended by using number systems besides reals for the weights. In the complex plane the points {1}, {-1}, and {1} with weights {}, {1}, and {} generate a full circle with radius one. For curves with points and weights on a circle, the weights can be scaled without changing the curve's shape. Scaling the central weight of the above curve by 1.35508 gives a more uniform parameterization. Applications Computer graphics Bézier curves are widely used in computer graphics to model smooth curves. As the curve is completely contained in the convex hull of its control points, the points can be graphically displayed and used to manipulate the curve intuitively. Affine transformations such as translation and rotation can be applied on the curve by applying the respective transform on the control points of the curve. Quadratic and cubic Bézier curves are most common. Higher degree curves are more computationally expensive to evaluate. When more complex shapes are needed, low order Bézier curves are patched together, producing a composite Bézier curve. A composite Bézier curve is commonly referred to as a "path" in vector graphics languages (like PostScript), vector graphics standards (like SVG) and vector graphics programs (like Artline, Timeworks Publisher, Adobe Illustrator, CorelDraw, Inkscape, and Allegro). In order to join Bézier curves into a composite Bézier curve without kinks, a property called G1 continuity suffices to force the control point at which two constituent Bézier curves meet to lie on the line defined by the two control points on either side. The simplest method for scan converting (rasterizing) a Bézier curve is to evaluate it at many closely spaced points and scan convert the approximating sequence of line segments. However, this does not guarantee that the rasterized output looks sufficiently smooth, because the points may be spaced too far apart. Conversely it may generate too many points in areas where the curve is close to linear. A common adaptive method is recursive subdivision, in which a curve's control points are checked to see if the curve approximates a line to within a small tolerance. If not, the curve is subdivided parametrically into two segments, 0 ≤ t ≤ 0.5 and 0.5 ≤ t ≤ 1, and the same procedure is applied recursively to each half. There are also forward differencing methods, but great care must be taken to analyse error propagation. Analytical methods where a Bézier is intersected with each scan line involve finding roots of cubic polynomials (for cubic Béziers) and dealing with multiple roots, so they are not often used in practice. The rasterisation algorithm used in Metafont is based on discretising the curve, so that it is approximated by a sequence of "rook moves" that are purely vertical or purely horizontal, along the pixel boundaries. To that end, the plane is first split into eight 45° sectors (by the coordinate axes and the two lines ), then the curve is decomposed into smaller segments such that the direction of a curve segment stays within one sector; since the curve velocity is a second degree polynomial, finding the values where it is parallel to one of these lines can be done by solving quadratic equations. 
Within each segment, either horizontal or vertical movement dominates, and the total number of steps in either direction can be read off from the endpoint coordinates; in the 0–45° sector, for example, horizontal movement to the right dominates, so it only remains to decide between which steps to the right the curve should make a step up. There is also a modified curve form of Bresenham's line drawing algorithm by Zingl that performs this rasterization by subdividing the curve into rational pieces and calculating the error at each pixel location such that it either travels at a 45° angle or straight depending on compounding error as it iterates through the curve. This reduces the next step calculation to a series of integer additions and subtractions. Animation In animation applications, such as Adobe Flash and Synfig, Bézier curves are used to outline, for example, movement. Users outline the desired path with Bézier curves, and the application creates the needed frames for the object to move along the path. In 3D animation, Bézier curves are often used to define 3D paths as well as 2D curves for keyframe interpolation. Bézier curves are now very frequently used to control the animation easing in CSS, JavaScript, JavaFX and Flutter SDK. Fonts TrueType fonts use composite Bézier curves composed of quadratic Bézier curves. Other languages and imaging tools (such as PostScript, Asymptote, Metafont, and SVG) use composite Béziers composed of cubic Bézier curves for drawing curved shapes. OpenType fonts can use either kind of curve, depending on which font technology underlies the OpenType wrapper. Font engines, like FreeType, draw the font's curves (and lines) on a pixellated surface using a process known as font rasterization. Typically font engines and vector graphics engines render Bézier curves by splitting them recursively up to the point where the curve is flat enough to be drawn as a series of linear or circular segments. The exact splitting algorithm is implementation dependent; only the flatness criteria must be respected to reach the necessary precision and to avoid non-monotonic local changes of curvature. The "smooth curve" feature of charts in Microsoft Excel also uses this algorithm. Because arcs of circles and ellipses cannot be exactly represented by Bézier curves, they are first approximated by Bézier curves, which are in turn approximated by arcs of circles. This is inefficient, as there are also approximations of all Bézier curves using arcs of circles or ellipses, which can be rendered incrementally with arbitrary precision. Another approach, used by modern hardware graphics adapters with accelerated geometry, can convert exactly all Bézier and conic curves (or surfaces) into NURBS, which can be rendered incrementally without first splitting the curve recursively to reach the necessary flatness condition. This approach also preserves the curve definition under all linear or perspective 2D and 3D transforms and projections. Robotics Because the control polygon makes it possible to tell whether or not the path collides with any obstacles, Bézier curves are used in producing trajectories of end effectors. Furthermore, joint space trajectories can be accurately differentiated using Bézier curves. Consequently, the derivatives of joint space trajectories are used in the calculation of the dynamics and control effort (torque profiles) of the robotic manipulator. 
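As an illustration of the recursive-subdivision rendering strategy described earlier under computer graphics, the following Python sketch flattens a cubic Bézier curve into a polyline by splitting it at t = 0.5 (using the de Casteljau construction) until the inner control points lie within a tolerance of the chord. The flatness test and the names used here are simplifying assumptions, not a standard implementation:

```python
def split_cubic(p, t=0.5):
    """Split a cubic Bezier (4 control points) into two cubics at parameter t."""
    lerp = lambda a, b: tuple((1 - t) * x + t * y for x, y in zip(a, b))
    p01, p12, p23 = lerp(p[0], p[1]), lerp(p[1], p[2]), lerp(p[2], p[3])
    p012, p123 = lerp(p01, p12), lerp(p12, p23)
    mid = lerp(p012, p123)
    return (p[0], p01, p012, mid), (mid, p123, p23, p[3])

def flat_enough(p, tol):
    """Crude flatness test: both inner control points lie close to the chord P0-P3."""
    (x0, y0), (x3, y3) = p[0], p[3]
    dx, dy = x3 - x0, y3 - y0
    length = (dx * dx + dy * dy) ** 0.5 or 1.0
    dist = lambda q: abs((q[0] - x0) * dy - (q[1] - y0) * dx) / length
    return max(dist(p[1]), dist(p[2])) <= tol

def flatten(p, tol=0.01):
    """Approximate a cubic Bezier by a polyline within the given tolerance."""
    if flat_enough(p, tol):
        return [p[0], p[3]]
    left, right = split_cubic(p)
    return flatten(left, tol)[:-1] + flatten(right, tol)

polyline = flatten(((0, 0), (1, 2), (3, 2), (4, 0)))
```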
See also Bézier surface B-spline GEM/4 and GEM/5 Hermite curve NURBS String art – Bézier curves are also formed by many common forms of string art, where strings are looped across a frame of nails. Variation diminishing property of Bézier curves Notes References Citations Sources Excellent discussion of implementation details; available for free as part of the TeX distribution. Further reading A Primer on Bézier Curves an open source online book explaining Bézier curves and associated graphics algorithms, with interactive graphics Cubic Bezier Curves – Under the Hood (video) video showing how computers render a cubic Bézier curve, by Peter Nowell From Bézier to Bernstein Feature Column from American Mathematical Society This book is out of print and freely available from the author. (60 pages) Hovey, Chad (2022). Formulation and Python Implementation of Bézier and B-Spline Geometry. SAND2022-7702C. (153 pages) External links Computer code TinySpline: Open source C-library for NURBS, B-splines and Bézier curves with bindings for various languages C++ library to generate Bézier functions at compile time Simple Bézier curve implementation via recursive method in Python Graphic design Interpolation Curves Design French inventions
Bézier curve
[ "Engineering" ]
4,793
[ "Design" ]
4,668
https://en.wikipedia.org/wiki/Binomial%20coefficient
In mathematics, the binomial coefficients are the positive integers that occur as coefficients in the binomial theorem. Commonly, a binomial coefficient is indexed by a pair of integers and is written It is the coefficient of the term in the polynomial expansion of the binomial power ; this coefficient can be computed by the multiplicative formula which using factorial notation can be compactly expressed as For example, the fourth power of is and the binomial coefficient is the coefficient of the term. Arranging the numbers in successive rows for gives a triangular array called Pascal's triangle, satisfying the recurrence relation The binomial coefficients occur in many areas of mathematics, and especially in combinatorics. In combinatorics the symbol is usually read as " choose " because there are ways to choose an (unordered) subset of elements from a fixed set of elements. For example, there are ways to choose elements from , namely , , , , and . The first form of the binomial coefficients can be generalized to for any complex number and integer , and many of their properties continue to hold in this more general form. History and notation Andreas von Ettingshausen introduced the notation in 1826, although the numbers were known centuries earlier (see Pascal's triangle). In about 1150, the Indian mathematician Bhaskaracharya gave an exposition of binomial coefficients in his book Līlāvatī. Alternative notations include , , , , , and , in all of which the stands for combinations or choices; the notation means the number of ways to choose k out of n objects. Many calculators use variants of the because they can represent it on a single-line display. In this form the binomial coefficients are easily compared to the numbers of -permutations of , written as , etc. Definition and interpretations For natural numbers (taken to include 0) and , the binomial coefficient can be defined as the coefficient of the monomial in the expansion of . The same coefficient also occurs (if ) in the binomial formula (valid for any elements , of a commutative ring), which explains the name "binomial coefficient". Another occurrence of this number is in combinatorics, where it gives the number of ways, disregarding order, that objects can be chosen from among objects; more formally, the number of -element subsets (or -combinations) of an -element set. This number can be seen as equal to the one of the first definition, independently of any of the formulas below to compute it: if in each of the factors of the power one temporarily labels the term with an index (running from to ), then each subset of indices gives after expansion a contribution , and the coefficient of that monomial in the result will be the number of such subsets. This shows in particular that is a natural number for any natural numbers and . There are many other combinatorial interpretations of binomial coefficients (counting problems for which the answer is given by a binomial coefficient expression), for instance the number of words formed of bits (digits 0 or 1) whose sum is is given by , while the number of ways to write where every is a nonnegative integer is given by . Most of these interpretations can be shown to be equivalent to counting -combinations. Computing the value of binomial coefficients Several methods exist to compute the value of without actually expanding a binomial power or counting -combinations. 
Recursive formula One method uses the recursive, purely additive formula for all integers such that with boundary values for all integers . The formula follows from considering the set and counting separately (a) the -element groupings that include a particular set element, say "", in every group (since "" is already chosen to fill one spot in every group, we need only choose from the remaining ) and (b) all the k-groupings that don't include ""; this enumerates all the possible -combinations of elements. It also follows from tracing the contributions to Xk in . As there is zero or in , one might extend the definition beyond the above boundaries to include when either or . This recursive formula then allows the construction of Pascal's triangle, surrounded by white spaces where the zeros, or the trivial coefficients, would be. Multiplicative formula A more efficient method to compute individual binomial coefficients is given by the formula where the numerator of the first fraction, , is a falling factorial. This formula is easiest to understand for the combinatorial interpretation of binomial coefficients. The numerator gives the number of ways to select a sequence of distinct objects, retaining the order of selection, from a set of objects. The denominator counts the number of distinct sequences that define the same -combination when order is disregarded. This formula can also be stated in a recursive form. Using the "C" notation from above, , where . It is readily derived by evaluating and can intuitively be understood as starting at the leftmost coefficient of the -th row of Pascal's triangle, whose value is always , and recursively computing the next coefficient to its right until the -th one is reached. Due to the symmetry of the binomial coefficients with regard to and , calculation of the above product, as well as the recursive relation, may be optimised by setting its upper limit to the smaller of and . Factorial formula Finally, though computationally unsuitable, there is the compact form, often used in proofs and derivations, which makes repeated use of the familiar factorial function: where denotes the factorial of . This formula follows from the multiplicative formula above by multiplying numerator and denominator by ; as a consequence it involves many factors common to numerator and denominator. It is less practical for explicit computation (in the case that is small and is large) unless common factors are first cancelled (in particular since factorial values grow very rapidly). The formula does exhibit a symmetry that is less evident from the multiplicative formula (though it is from the definitions) which leads to a more efficient multiplicative computational routine. Using the falling factorial notation, Generalization and connection to the binomial series The multiplicative formula allows the definition of binomial coefficients to be extended by replacing n by an arbitrary number α (negative, real, complex) or even an element of any commutative ring in which all positive integers are invertible: With this definition one has a generalization of the binomial formula (with one of the variables set to 1), which justifies still calling the binomial coefficients: This formula is valid for all complex numbers α and X with |X| < 1. 
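A small Python sketch of the multiplicative formula above; because the product involves the upper argument only through the factors alpha, alpha - 1, ..., it also covers the generalization to an arbitrary rational upper argument just described. For nonnegative integer arguments, Python's built-in math.comb computes the same value; the function below is illustrative rather than a library routine.

```python
from fractions import Fraction

def binomial(alpha, k):
    """Binomial coefficient by the multiplicative formula:
    alpha * (alpha - 1) * ... * (alpha - k + 1) / k!  for a nonnegative integer k.
    alpha may be any integer or Fraction, as in the generalized definition."""
    result = Fraction(1)
    for i in range(k):
        result = result * (Fraction(alpha) - i) / (i + 1)
    return result

print(binomial(10, 3))               # 120
print(binomial(Fraction(1, 2), 3))   # 1/16, a generalized binomial coefficient
# For nonnegative integer arguments, math.comb(n, k) gives the same value.
```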
It can also be interpreted as an identity of formal power series in X, where it actually can serve as definition of arbitrary powers of power series with constant coefficient equal to 1; the point is that with this definition all identities hold that one expects for exponentiation, notably If α is a nonnegative integer n, then all terms with are zero, and the infinite series becomes a finite sum, thereby recovering the binomial formula. However, for other values of α, including negative integers and rational numbers, the series is really infinite. Pascal's triangle Pascal's rule is the important recurrence relation which can be used to prove by mathematical induction that is a natural number for all integer n ≥ 0 and all integer k, a fact that is not immediately obvious from formula (1). To the left and right of Pascal's triangle, the entries (shown as blanks) are all zero. Pascal's rule also gives rise to Pascal's triangle: Row number contains the numbers for . It is constructed by first placing 1s in the outermost positions, and then filling each inner position with the sum of the two numbers directly above. This method allows the quick calculation of binomial coefficients without the need for fractions or multiplications. For instance, by looking at row number 5 of the triangle, one can quickly read off that Combinatorics and statistics Binomial coefficients are of importance in combinatorics because they provide ready formulas for certain frequent counting problems: There are ways to choose k elements from a set of n elements. See Combination. There are ways to choose k elements from a set of n elements if repetitions are allowed. See Multiset. There are strings containing k ones and n zeros. There are strings consisting of k ones and n zeros such that no two ones are adjacent. The Catalan numbers are The binomial distribution in statistics is Binomial coefficients as polynomials For any nonnegative integer k, the expression can be written as a polynomial with denominator : this presents a polynomial in t with rational coefficients. As such, it can be evaluated at any real or complex number t to define binomial coefficients with such first arguments. These "generalized binomial coefficients" appear in Newton's generalized binomial theorem. For each k, the polynomial can be characterized as the unique degree k polynomial satisfying and . Its coefficients are expressible in terms of Stirling numbers of the first kind: The derivative of can be calculated by logarithmic differentiation: This can cause a problem when evaluated at integers from to , but using identities below we can compute the derivative as: Binomial coefficients as a basis for the space of polynomials Over any field of characteristic 0 (that is, any field that contains the rational numbers), each polynomial p(t) of degree at most d is uniquely expressible as a linear combination of binomial coefficients, because the binomial coefficients consist of one polynomial of each degree. The coefficient ak is the kth difference of the sequence p(0), p(1), ..., p(k). Explicitly, Integer-valued polynomials Each polynomial is integer-valued: it has an integer value at all integer inputs . (One way to prove this is by induction on k using Pascal's identity.) Therefore, any integer linear combination of binomial coefficient polynomials is integer-valued too. Conversely, () shows that any integer-valued polynomial is an integer linear combination of these binomial coefficient polynomials. 
More generally, for any subring R of a characteristic 0 field K, a polynomial in K[t] takes values in R at all integers if and only if it is an R-linear combination of binomial coefficient polynomials. Example The integer-valued polynomial can be rewritten as Identities involving binomial coefficients The factorial formula facilitates relating nearby binomial coefficients. For instance, if k is a positive integer and n is arbitrary, then and, with a little more work, We can also get Moreover, the following may be useful: For constant n, we have the following recurrence: To sum up, we have Sums of the binomial coefficients The formula says that the elements in the th row of Pascal's triangle always add up to 2 raised to the th power. This is obtained from the binomial theorem () by setting and . The formula also has a natural combinatorial interpretation: the left side sums the number of subsets of {1, ..., n} of sizes k = 0, 1, ..., n, giving the total number of subsets. (That is, the left side counts the power set of {1, ..., n}.) However, these subsets can also be generated by successively choosing or excluding each element 1, ..., n; the n independent binary choices (bit-strings) allow a total of choices. The left and right sides are two ways to count the same collection of subsets, so they are equal. The formulas and follow from the binomial theorem after differentiating with respect to (twice for the latter) and then substituting . The Chu–Vandermonde identity, which holds for any complex values m and n and any non-negative integer k, is and can be found by examination of the coefficient of in the expansion of using equation (). When , equation () reduces to equation (). In the special case , using (), the expansion () becomes (as seen in Pascal's triangle at right) where the term on the right side is a central binomial coefficient. Another form of the Chu–Vandermonde identity, which applies for any integers j, k, and n satisfying , is The proof is similar, but uses the binomial series expansion () with negative integer exponents. When , equation () gives the hockey-stick identity and its relative Let F(n) denote the n-th Fibonacci number. Then This can be proved by induction using () or by Zeckendorf's representation. A combinatorial proof is given below. Multisections of sums For integers s and t such that series multisection gives the following identity for the sum of binomial coefficients: For small , these series have particularly nice forms; for example, Partial sums Although there is no closed formula for partial sums of binomial coefficients, one can again use () and induction to show that for , with special case for . This latter result is also a special case of the result from the theory of finite differences that for any polynomial P(x) of degree less than n, Differentiating () k times and setting x = −1 yields this for , when 0 ≤ k < n, and the general case follows by taking linear combinations of these. When P(x) is of degree less than or equal to n, where is the coefficient of degree n in P(x). More generally for (), where m and d are complex numbers. This follows immediately applying () to the polynomial instead of , and observing that still has degree less than or equal to n, and that its coefficient of degree n is dnan. The series is convergent for k ≥ 2. This formula is used in the analysis of the German tank problem. It follows from which is proved by induction on M. 
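The row-sum formula and the hockey-stick identity discussed above are easy to check numerically; a brief sketch using Python's math.comb, where the chosen values of n and r are arbitrary examples:

```python
from math import comb

n = 8
# Row sum: the entries of row n of Pascal's triangle add up to 2**n.
assert sum(comb(n, k) for k in range(n + 1)) == 2 ** n

# Hockey-stick identity: summing down a diagonal of Pascal's triangle,
# sum_{i=r}^{n} C(i, r) == C(n + 1, r + 1).
r = 3
assert sum(comb(i, r) for i in range(r, n + 1)) == comb(n + 1, r + 1)
```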
Identities with combinatorial proofs Many identities involving binomial coefficients can be proved by combinatorial means. For example, for nonnegative integers , the identity (which reduces to () when q = 1) can be given a double counting proof, as follows. The left side counts the number of ways of selecting a subset of [n] = {1, 2, ..., n} with at least q elements, and marking q elements among those selected. The right side counts the same thing, because there are ways of choosing a set of q elements to mark, and to choose which of the remaining elements of [n] also belong to the subset. In Pascal's identity both sides count the number of k-element subsets of [n]: the two terms on the right side group them into those that contain element n and those that do not. The identity () also has a combinatorial proof. The identity reads Suppose you have empty squares arranged in a row and you want to mark (select) n of them. There are ways to do this. On the other hand, you may select your n squares by selecting k squares from among the first n and squares from the remaining n squares; any k from 0 to n will work. This gives Now apply () to get the result. If one denotes by the sequence of Fibonacci numbers, indexed so that , then the identity has the following combinatorial proof. One may show by induction that counts the number of ways that a strip of squares may be covered by and tiles. On the other hand, if such a tiling uses exactly of the tiles, then it uses of the tiles, and so uses tiles total. There are ways to order these tiles, and so summing this coefficient over all possible values of gives the identity. Sum of coefficients row The number of k-combinations for all k, , is the sum of the nth row (counting from 0) of the binomial coefficients. These combinations are enumerated by the 1 digits of the set of base 2 numbers counting from 0 to , where each digit position is an item from the set of n. Dixon's identity Dixon's identity is or, more generally, where a, b, and c are non-negative integers. Continuous identities Certain trigonometric integrals have values expressible in terms of binomial coefficients: For any These can be proved by using Euler's formula to convert trigonometric functions to complex exponentials, expanding using the binomial theorem, and integrating term by term. Congruences If n is prime, then for every k with More generally, this remains true if n is any number and k is such that all the numbers between 1 and k are coprime to n. Indeed, we have Generating functions Ordinary generating functions For a fixed , the ordinary generating function of the sequence is For a fixed , the ordinary generating function of the sequence is The bivariate generating function of the binomial coefficients is A symmetric bivariate generating function of the binomial coefficients is which is the same as the previous generating function after the substitution . Exponential generating function A symmetric exponential bivariate generating function of the binomial coefficients is: Divisibility properties In 1852, Kummer proved that if m and n are nonnegative integers and p is a prime number, then the largest power of p dividing equals pc, where c is the number of carries when m and n are added in base p. Equivalently, the exponent of a prime p in equals the number of nonnegative integers j such that the fractional part of k/pj is greater than the fractional part of n/pj. It can be deduced from this that is divisible by n/gcd(n,k). 
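Kummer's divisibility result stated above can be verified directly for particular values: counting the carries when k and n - k are added in base p should give the exponent of p in the binomial coefficient. A hedged Python sketch, with illustrative helper names:

```python
from math import comb

def carries_in_base(a, b, p):
    """Number of carries when adding a and b in base p."""
    carries, carry = 0, 0
    while a or b or carry:
        s = a % p + b % p + carry
        carry = 1 if s >= p else 0
        carries += carry
        a //= p
        b //= p
    return carries

def p_adic_valuation(m, p):
    """Largest e such that p**e divides m (for m > 0)."""
    e = 0
    while m % p == 0:
        m //= p
        e += 1
    return e

n, k, p = 100, 37, 3
assert carries_in_base(k, n - k, p) == p_adic_valuation(comb(n, k), p)
```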
In particular, therefore, it follows that p divides for all positive integers r and s such that . However this is not true of higher powers of p: for example 9 does not divide . A somewhat surprising result by David Singmaster (1974) is that any integer divides almost all binomial coefficients. More precisely, fix an integer d and let f(N) denote the number of binomial coefficients with n < N such that d divides . Then Since the number of binomial coefficients with n < N is N(N + 1) / 2, this implies that the density of binomial coefficients divisible by d goes to 1. Binomial coefficients have divisibility properties related to least common multiples of consecutive integers. For example: divides . is a multiple of . Another fact: An integer is prime if and only if all the intermediate binomial coefficients are divisible by n. Proof: When p is prime, p divides for all because is a natural number and p divides the numerator but not the denominator. When n is composite, let p be the smallest prime factor of n and let . Then and otherwise the numerator has to be divisible by , this can only be the case when is divisible by p. But n is divisible by p, so p does not divide and because p is prime, we know that p does not divide and so the numerator cannot be divisible by n. Bounds and asymptotic formulas The following bounds for hold for all values of n and k such that : The first inequality follows from the fact that and each of these terms in this product is . A similar argument can be made to show the second inequality. The final strict inequality is equivalent to , which is clear since the RHS is a term of the exponential series . From the divisibility properties we can infer that where both equalities can be achieved. The following bounds are useful in information theory: where is the binary entropy function. It can be further tightened to for all . Both n and k large Stirling's approximation yields the following approximation, valid when both tend to infinity: Because the inequality forms of Stirling's formula also bound the factorials, slight variants on the above asymptotic approximation give exact bounds. In particular, when is sufficiently large, one has and . More generally, for and (again, by applying Stirling's formula to the factorials in the binomial coefficient), If n is large and k is linear in n, various precise asymptotic estimates exist for the binomial coefficient . For example, if then where d = n − 2k. n much larger than k If is large and is (that is, if ), then where again is the little o notation. Sums of binomial coefficients A simple and rough upper bound for the sum of binomial coefficients can be obtained using the binomial theorem: More precise bounds are given by valid for all integers with . Generalized binomial coefficients The infinite product formula for the gamma function also gives an expression for binomial coefficients which yields the asymptotic formulas as . This asymptotic behaviour is contained in the approximation as well. (Here is the k-th harmonic number and is the Euler–Mascheroni constant.) Further, the asymptotic formula holds true, whenever and for some complex number . 
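Standard bounds of the kind referred to above, such as (n/k)^k <= C(n, k) <= (e*n/k)^k and the entropy bound C(n, k) <= 2^(n*H(k/n)), can be checked numerically. A Python sketch under that assumption (these particular inequalities are well-known bounds, not necessarily the exact ones originally displayed here):

```python
from math import comb, e, log2

def binary_entropy(x):
    """Binary entropy function H(x) in bits."""
    return 0.0 if x in (0.0, 1.0) else -x * log2(x) - (1 - x) * log2(1 - x)

n, k = 60, 17
c = comb(n, k)

# Elementary bounds: (n/k)**k <= C(n, k) <= (e*n/k)**k
assert (n / k) ** k <= c <= (e * n / k) ** k

# Entropy bound used in information theory: C(n, k) <= 2**(n * H(k/n))
assert c <= 2 ** (n * binary_entropy(k / n))
```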
Generalizations Generalization to multinomials Binomial coefficients can be generalized to multinomial coefficients defined to be the number: where While the binomial coefficients represent the coefficients of , the multinomial coefficients represent the coefficients of the polynomial The case r = 2 gives binomial coefficients: The combinatorial interpretation of multinomial coefficients is the distribution of n distinguishable elements over r (distinguishable) containers, each containing exactly ki elements, where i is the index of the container. Multinomial coefficients have many properties similar to those of binomial coefficients, for example the recurrence relation: and symmetry: where is a permutation of (1, 2, ..., r). Taylor series Using Stirling numbers of the first kind the series expansion around any arbitrarily chosen point is Binomial coefficient with The definition of the binomial coefficients can be extended to the case where is real and is integer. In particular, the following identity holds for any non-negative integer : This shows up when expanding into a power series using the Newton binomial series: Products of binomial coefficients One can express the product of two binomial coefficients as a linear combination of binomial coefficients: where the connection coefficients are multinomial coefficients. In terms of labelled combinatorial objects, the connection coefficients represent the number of ways to assign labels to a pair of labelled combinatorial objects—of weight m and n respectively—that have had their first k labels identified, or glued together to get a new labelled combinatorial object of weight . (That is, to separate the labels into three portions to apply to the glued part, the unglued part of the first object, and the unglued part of the second object.) In this regard, binomial coefficients are to exponential generating series what falling factorials are to ordinary generating series. The product of all binomial coefficients in the nth row of the Pascal triangle is given by the formula: Partial fraction decomposition The partial fraction decomposition of the reciprocal is given by Newton's binomial series Newton's binomial series, named after Sir Isaac Newton, is a generalization of the binomial theorem to infinite series: The identity can be obtained by showing that both sides satisfy the differential equation (1 + z) f′(z) = α f(z). The radius of convergence of this series is 1. An alternative expression is where the identity is applied. Multiset (rising) binomial coefficient Binomial coefficients count subsets of prescribed size from a given set. A related combinatorial problem is to count multisets of prescribed size with elements drawn from a given set, that is, to count the number of ways to select a certain number of elements from a given set with the possibility of selecting the same element repeatedly. The resulting numbers are called multiset coefficients; the number of ways to "multichoose" (i.e., choose with replacement) k items from an n element set is denoted . To avoid ambiguity and confusion with n's main denotation in this article, let and . 
Multiset coefficients may be expressed in terms of binomial coefficients by the rule One possible alternative characterization of this identity is as follows: We may define the falling factorial as and the corresponding rising factorial as so, for example, Then the binomial coefficients may be written as while the corresponding multiset coefficient is defined by replacing the falling with the rising factorial: Generalization to negative integers n For any n, In particular, binomial coefficients evaluated at negative integers n are given by signed multiset coefficients. In the special case , this reduces to For example, if n = −4 and k = 7, then r = 4 and f = 10: Two real or complex valued arguments The binomial coefficient is generalized to two real or complex valued arguments using the gamma function or beta function via This definition inherits the following additional properties from : moreover, The resulting function has been little-studied, apparently first being graphed in . Notably, many binomial identities fail: but for n positive (so negative). The behavior is quite complex, and markedly different in various octants (that is, with respect to the x and y axes and the line ), with the behavior for negative x having singularities at negative integer values and a checkerboard of positive and negative regions: in the octant it is a smoothly interpolated form of the usual binomial, with a ridge ("Pascal's ridge"). in the octant and in the quadrant the function is close to zero. in the quadrant the function is alternatingly very large positive and negative on the parallelograms with vertices in the octant the behavior is again alternatingly very large positive and negative, but on a square grid. in the octant it is close to zero, except for near the singularities. Generalization to q-series The binomial coefficient has a q-analog generalization known as the Gaussian binomial coefficient. Generalization to infinite cardinals The definition of the binomial coefficient can be generalized to infinite cardinals by defining: where is some set with cardinality . One can show that the generalized binomial coefficient is well-defined, in the sense that no matter what set we choose to represent the cardinal number , will remain the same. For finite cardinals, this definition coincides with the standard definition of the binomial coefficient. Assuming the Axiom of Choice, one can show that for any infinite cardinal . See also Binomial transform Delannoy number Eulerian number Hypergeometric function List of factorial and binomial topics Macaulay representation of an integer Motzkin number Multiplicities of entries in Pascal's triangle Narayana number Star of David theorem Sun's curious identity Table of Newtonian series Trinomial expansion Notes References External links Combinatorics Factorial and binomial topics Integer sequences Triangles of numbers Operations on numbers Articles with example Python (programming language) code Articles with example Scheme (programming language) code Articles with example C code
Binomial coefficient
[ "Mathematics" ]
5,674
[ "Sequences and series", "Discrete mathematics", "Factorial and binomial topics", "Integer sequences", "Mathematical structures", "Recreational mathematics", "Mathematical objects", "Combinatorics", "Arithmetic", "Triangles of numbers", "Operations on numbers", "Numbers", "Number theory" ]
4,677
https://en.wikipedia.org/wiki/Binomial%20theorem
In elementary algebra, the binomial theorem (or binomial expansion) describes the algebraic expansion of powers of a binomial. According to the theorem, the power expands into a polynomial with terms of the form , where the exponents and are nonnegative integers satisfying and the coefficient of each term is a specific positive integer depending on and . For example, for , The coefficient in each term is known as the binomial coefficient or (the two have the same value). These coefficients for varying and can be arranged to form Pascal's triangle. These numbers also occur in combinatorics, where gives the number of different combinations (i.e. subsets) of elements that can be chosen from an -element set. Therefore is usually pronounced as " choose ". Statement According to the theorem, the expansion of any nonnegative integer power of the binomial is a sum of the form where each is a positive integer known as a binomial coefficient, defined as This formula is also referred to as the binomial formula or the binomial identity. Using summation notation, it can be written more concisely as The final expression follows from the previous one by the symmetry of and in the first expression, and by comparison it follows that the sequence of binomial coefficients in the formula is symmetrical, A simple variant of the binomial formula is obtained by substituting for , so that it involves only a single variable. In this form, the formula reads Examples The first few cases of the binomial theorem are: In general, for the expansion of on the right side in the th row (numbered so that the top row is the 0th row): the exponents of in the terms are (the last term implicitly contains ); the exponents of in the terms are (the first term implicitly contains ); the coefficients form the th row of Pascal's triangle; before combining like terms, there are terms in the expansion (not shown); after combining like terms, there are terms, and their coefficients sum to . An example illustrating the last two points: with . A simple example with a specific positive value of : A simple example with a specific negative value of : Geometric explanation For positive values of and , the binomial theorem with is the geometrically evident fact that a square of side can be cut into a square of side , a square of side , and two rectangles with sides and . With , the theorem states that a cube of side can be cut into a cube of side , a cube of side , three rectangular boxes, and three rectangular boxes. In calculus, this picture also gives a geometric proof of the derivative if one sets and interpreting as an infinitesimal change in , then this picture shows the infinitesimal change in the volume of an -dimensional hypercube, where the coefficient of the linear term (in ) is the area of the faces, each of dimension : Substituting this into the definition of the derivative via a difference quotient and taking limits means that the higher order terms, and higher, become negligible, and yields the formula interpreted as "the infinitesimal rate of change in volume of an -cube as side length varies is the area of of its -dimensional faces". If one integrates this picture, which corresponds to applying the fundamental theorem of calculus, one obtains Cavalieri's quadrature formula, the integral – see proof of Cavalieri's quadrature formula for details. Binomial coefficients The coefficients that appear in the binomial expansion are called binomial coefficients. These are usually written and pronounced " choose ". 
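The statement can be checked numerically for particular values: summing the terms C(n, k) x^(n-k) y^k reproduces (x + y)^n exactly. A short Python sketch using the standard-library function math.comb, with illustrative variable names:

```python
from math import comb

def binomial_expand(x, y, n):
    """Sum of the terms C(n, k) * x**(n-k) * y**k given by the binomial theorem."""
    return sum(comb(n, k) * x ** (n - k) * y ** k for k in range(n + 1))

# The expansion agrees with computing the power directly.
x, y, n = 3, 5, 7
assert binomial_expand(x, y, n) == (x + y) ** n

# The coefficients of (x + y)**4 are 1, 4, 6, 4, 1, which is row 4 of Pascal's triangle.
print([comb(4, k) for k in range(5)])
```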
Formulas The coefficient of is given by the formula which is defined in terms of the factorial function . Equivalently, this formula can be written with factors in both the numerator and denominator of the fraction. Although this formula involves a fraction, the binomial coefficient is actually an integer. Combinatorial interpretation The binomial coefficient can be interpreted as the number of ways to choose elements from an -element set (a combination). This is related to binomials for the following reason: if we write as a product then, according to the distributive law, there will be one term in the expansion for each choice of either or from each of the binomials of the product. For example, there will only be one term , corresponding to choosing from each binomial. However, there will be several terms of the form , one for each way of choosing exactly two binomials to contribute a . Therefore, after combining like terms, the coefficient of will be equal to the number of ways to choose exactly elements from an -element set. Proofs Combinatorial proof Expanding yields the sum of the products of the form where each is or . Rearranging factors shows that each product equals for some between and . For a given , the following are proved equal in succession: the number of terms equal to in the expansion the number of -character strings having in exactly positions the number of -element subsets of either by definition, or by a short combinatorial argument if one is defining as This proves the binomial theorem. Example The coefficient of in equals because there are three strings of length 3 with exactly two 's, namely, corresponding to the three 2-element subsets of , namely, where each subset specifies the positions of the in a corresponding string. Inductive proof Induction yields another proof of the binomial theorem. When , both sides equal , since and Now suppose that the equality holds for a given ; we will prove it for . For , let denote the coefficient of in the polynomial . By the inductive hypothesis, is a polynomial in and such that is if , and otherwise. The identity shows that is also a polynomial in and , and since if , then and . Now, the right hand side is by Pascal's identity. On the other hand, if , then and , so we get . Thus which is the inductive hypothesis with substituted for and so completes the inductive step. Generalizations Newton's generalized binomial theorem Around 1665, Isaac Newton generalized the binomial theorem to allow real exponents other than nonnegative integers. (The same generalization also applies to complex exponents.) In this generalization, the finite sum is replaced by an infinite series. In order to do this, one needs to give meaning to binomial coefficients with an arbitrary upper index, which cannot be done using the usual formula with factorials. However, for an arbitrary number , one can define where is the Pochhammer symbol, here standing for a falling factorial. This agrees with the usual definitions when is a nonnegative integer. Then, if and are real numbers with , and is any complex number, one has When is a nonnegative integer, the binomial coefficients for are zero, so this equation reduces to the usual binomial theorem, and there are at most nonzero terms. For other values of , the series typically has infinitely many nonzero terms. 
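A brief Python sketch of the generalized expansion: partial sums of the series approximate (1 + x)^α for |x| < 1, illustrated here with the square root (α = 1/2). The helper names are illustrative assumptions:

```python
def gen_binomial(alpha, k):
    """Generalized binomial coefficient alpha*(alpha-1)*...*(alpha-k+1) / k!."""
    c = 1.0
    for i in range(k):
        c *= (alpha - i) / (i + 1)
    return c

def binomial_series(alpha, x, terms):
    """Partial sum of the generalized binomial series for (1 + x)**alpha, |x| < 1."""
    return sum(gen_binomial(alpha, k) * x ** k for k in range(terms))

# Approximating a square root: (1 + 0.21)**0.5 = 1.1
print(binomial_series(0.5, 0.21, 12), 1.21 ** 0.5)
```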
For example, gives the following series for the square root: Taking , the generalized binomial series gives the geometric series formula, valid for : More generally, with , we have for : So, for instance, when , Replacing with yields: So, for instance, when , we have for : Further generalizations The generalized binomial theorem can be extended to the case where and are complex numbers. For this version, one should again assume and define the powers of and using a holomorphic branch of log defined on an open disk of radius centered at . The generalized binomial theorem is valid also for elements and of a Banach algebra as long as , and is invertible, and . A version of the binomial theorem is valid for the following Pochhammer symbol-like family of polynomials: for a given real constant , define and for Then The case recovers the usual binomial theorem. More generally, a sequence of polynomials is said to be of binomial type if for all , , and for all , , and . An operator on the space of polynomials is said to be the basis operator of the sequence if and for all . A sequence is binomial if and only if its basis operator is a Delta operator. Writing for the shift by operator, the Delta operators corresponding to the above "Pochhammer" families of polynomials are the backward difference for , the ordinary derivative for , and the forward difference for . Multinomial theorem The binomial theorem can be generalized to include powers of sums with more than two terms. The general version is where the summation is taken over all sequences of nonnegative integer indices through such that the sum of all is . (For each term in the expansion, the exponents must add up to ). The coefficients are known as multinomial coefficients, and can be computed by the formula Combinatorially, the multinomial coefficient counts the number of different ways to partition an -element set into disjoint subsets of sizes . Multi-binomial theorem When working in more dimensions, it is often useful to deal with products of binomial expressions. By the binomial theorem this is equal to This may be written more concisely, by multi-index notation, as General Leibniz rule The general Leibniz rule gives the th derivative of a product of two functions in a form similar to that of the binomial theorem: Here, the superscript indicates the th derivative of a function, . If one sets and , cancelling the common factor of from each term gives the ordinary binomial theorem. History Special cases of the binomial theorem were known since at least the 4th century BC when Greek mathematician Euclid mentioned the special case of the binomial theorem for exponent . Greek mathematician Diophantus cubed various binomials, including . Indian mathematician Aryabhata's method for finding cube roots, from around 510 AD, suggests that he knew the binomial formula for exponent . Binomial coefficients, as combinatorial quantities expressing the number of ways of selecting objects out of without replacement (combinations), were of interest to ancient Indian mathematicians. The Jain Bhagavati Sutra (c. 300 BC) describes the number of combinations of philosophical categories, senses, or other things, with correct results up through (probably obtained by listing all possibilities and counting them) and a suggestion that higher combinations could likewise be found. 
The Chandaḥśāstra by the Indian lyricist Piṅgala (3rd or 2nd century BC) somewhat cryptically describes a method of arranging two types of syllables to form metres of various lengths and counting them; as interpreted and elaborated by Piṅgala's 10th-century commentator Halāyudha, his "method of pyramidal expansion" (meru-prastāra) for counting metres is equivalent to Pascal's triangle. Varāhamihira (6th century AD) describes another method for computing combination counts by adding numbers in columns. By the 9th century at the latest, Indian mathematicians learned to express this as a product of fractions, and clear statements of this rule can be found in Śrīdhara's Pāṭīgaṇita (8th–9th century), Mahāvīra's Gaṇita-sāra-saṅgraha (c. 850), and Bhāskara II's Līlāvatī (12th century). The Persian mathematician al-Karajī (953–1029) wrote a now-lost book containing the binomial theorem and a table of binomial coefficients, often credited as their first appearance. An explicit statement of the binomial theorem appears in al-Samawʾal's al-Bāhir (12th century), there credited to al-Karajī. Al-Samawʾal algebraically expanded the square, cube, and fourth power of a binomial, each in terms of the previous power, and noted that similar proofs could be provided for higher powers, an early form of mathematical induction. He then provided al-Karajī's table of binomial coefficients (Pascal's triangle turned on its side) up to and a rule for generating them equivalent to the recurrence relation . The Persian poet and mathematician Omar Khayyam was probably familiar with the formula to higher orders, although many of his mathematical works are lost. The binomial expansions of small degrees were known in the 13th-century mathematical works of Yang Hui and also Chu Shih-Chieh. Yang Hui attributes the method to a much earlier 11th-century text of Jia Xian, although those writings are now also lost. In Europe, descriptions of the construction of Pascal's triangle can be found as early as Jordanus de Nemore's De arithmetica (13th century). In 1544, Michael Stifel introduced the term "binomial coefficient" and showed how to use them to express in terms of , via "Pascal's triangle". Other 16th-century mathematicians including Niccolò Fontana Tartaglia and Simon Stevin also knew of it. 17th-century mathematician Blaise Pascal studied the eponymous triangle comprehensively in his Traité du triangle arithmétique. By the early 17th century, some specific cases of the generalized binomial theorem, such as for , can be found in the work of Henry Briggs' Arithmetica Logarithmica (1624). Isaac Newton is generally credited with discovering the generalized binomial theorem, valid for any real exponent, in 1665, inspired by the work of John Wallis's Arithmetica Infinitorum and his method of interpolation. A logarithmic version of the theorem for fractional exponents was discovered independently by James Gregory, who wrote down his formula in 1670. Applications Multiple-angle identities For the complex numbers the binomial theorem can be combined with de Moivre's formula to yield multiple-angle formulas for the sine and cosine. According to De Moivre's formula, Using the binomial theorem, the expression on the right can be expanded, and then the real and imaginary parts can be taken to yield formulas for and . For example, since But De Moivre's formula identifies the left side with , so which are the usual double-angle identities. 
Similarly, since De Moivre's formula yields In general, and There are also similar formulas using Chebyshev polynomials. Series for e The number is often defined by the formula Applying the binomial theorem to this expression yields the usual infinite series for . In particular: The th term of this sum is As , the rational expression on the right approaches , and therefore This indicates that can be written as a series: Indeed, since each term of the binomial expansion is an increasing function of , it follows from the monotone convergence theorem for series that the sum of this infinite series is equal to . Probability The binomial theorem is closely related to the probability mass function of the negative binomial distribution. The probability of a (countable) collection of independent Bernoulli trials with probability of success all not happening is An upper bound for this quantity is In abstract algebra The binomial theorem is valid more generally for two elements and in a ring, or even a semiring, provided that . For example, it holds for two matrices, provided that those matrices commute; this is useful in computing powers of a matrix. The binomial theorem can be stated by saying that the polynomial sequence is of binomial type. See also Binomial approximation Binomial distribution Binomial inverse theorem Binomial coefficient Stirling's approximation Tannery's theorem Polynomials calculating sums of powers of arithmetic progressions q-binomial theorem Notes References Further reading External links Binomial Theorem by Stephen Wolfram, and "Binomial Theorem (Step-by-Step)" by Bruce Colletti and Jeff Bryant, Wolfram Demonstrations Project, 2007. Factorial and binomial topics Theorems about polynomials Articles containing proofs
Binomial theorem
[ "Mathematics" ]
3,297
[ "Factorial and binomial topics", "Theorems in algebra", "Combinatorics", "Theorems about polynomials", "Articles containing proofs" ]
4,699
https://en.wikipedia.org/wiki/Blissymbols
Blissymbols or Blissymbolics is a constructed language conceived as an ideographic writing system called Semantography consisting of several hundred basic symbols, each representing a concept, which can be composed together to generate new symbols that represent new concepts. Blissymbols differ from most of the world's major writing systems in that the characters do not correspond at all to the sounds of any spoken language. Semantography was published by Charles K. Bliss in 1949 and found use in the education of people with communication difficulties. History Semantography was invented by Charles K. Bliss (1897–1985), born Karl Kasiel Blitz to a Jewish family in Chernivtsi (then Czernowitz, Austria-Hungary), which had a mixture of different nationalities that "hated each other, mainly because they spoke and thought in different languages." Bliss graduated as a chemical engineer at the Vienna University of Technology, and joined an electronics company. After the Nazi annexation of Austria in 1938, Bliss was sent to concentration camps but his German wife Claire managed to get him released, and they finally became exiles in Shanghai, where Bliss had a cousin. Bliss devised the symbols while a refugee at the Shanghai Ghetto and Sydney, from 1942 to 1949. He wanted to create an easy-to-learn international auxiliary language to allow communication between different linguistic communities. He was inspired by Chinese characters, with which he became familiar at Shanghai. Bliss published his system in Semantography (1949, exp. 2nd ed. 1965, 3rd ed. 1978.) It had several names: As the "tourist explosion" took place in the 1960s, a number of researchers were looking for new standard symbols to be used at roads, stations, airports, etc. Bliss then adopted the name Blissymbolics in order that no researcher could plagiarize his system of symbols. Since the 1960s/1970s, Blissymbols have become popular as a method to teach disabled people to communicate. In 1971, Shirley McNaughton started a pioneer program at the Ontario Crippled Children's Centre (OCCC), aimed at children with cerebral palsy, from the approach of augmentative and alternative communication (AAC). According to Arika Okrent, Bliss used to complain about the way the teachers at the OCCC were using the symbols, in relation with the proportions of the symbols and other questions: for example, they used "fancy" terms like "nouns" and "verbs", to describe what Bliss called "things" and "actions". (2009, p. 173-4). The ultimate objective of the OCCC program was to use Blissymbols as a practical way to teach the children to express themselves in their mother tongue, since the Blissymbols provided visual keys to understand the meaning of the English words, especially the abstract words. In Semantography, Bliss had not provided a systematic set of definitions for his symbols (there was a provisional vocabulary index instead (1965, pp. 827–67)), so McNaughton's team might often interpret a certain symbol in a way that Bliss would later criticize as a "misinterpretation". For example, they might interpret a tomato as a vegetable —according to the English definition of tomato— even though the ideal Blissymbol of vegetable was restricted by Bliss to just vegetables growing underground. Eventually the OCCC staff modified and adapted Bliss's system in order to make it serve as a bridge to English. (2009, p. 
189) Bliss' complaints about his symbols "being abused" by the OCCC became so intense that the director of the OCCC told Bliss, on his 1974 visit, never to come back. In spite of this, in 1975, Bliss granted an exclusive world license, for use with disabled children, to the new Blissymbolics Communication Foundation directed by Shirley McNaughton (later called Blissymbolics Communication International, BCI). Nevertheless, in 1977, Bliss claimed that this agreement was violated so that he was deprived of effective control of his symbol system. According to Okrent (2009, p. 190), there was a final period of conflict, as Bliss would level continual criticisms at McNaughton, often followed by apologies. Bliss finally brought his lawyers back to the OCCC, reaching a settlement: Blissymbolic Communication International now claims an exclusive license from Bliss, for the use and publication of Blissymbols for persons with communication, language, and learning difficulties. The Blissymbol method has been used in Canada, Sweden, and a few other countries. Practitioners of Blissymbolics (that is, speech and language therapists and users) maintain that some users who have learned to communicate with Blissymbolics find it easier to learn to read and write traditional orthography in the local spoken language than do users who did not know Blissymbolics. The speech question Unlike similar constructed languages such as aUI, Blissymbolics was conceived as a written language with no phonology, on the premise that "interlinguistic communication is mainly carried on by reading and writing". Nevertheless, Bliss suggested that a set of international words could be adopted, so that "a kind of spoken language could be established – as a travelling aid only". (1965, pp. 89–90). Whether Blissymbolics constitutes an unspoken language is a controversial question, whatever its practical utility may be. Some linguists, such as John DeFrancis and J. Marshall Unger, have argued that genuine ideographic writing systems with the same capacities as natural languages do not exist. Semantics Bliss' concern about semantics finds an early referent in John Locke, whose Essay Concerning Human Understanding warned people against those "vague and insignificant forms of speech" that may give the impression of being deep learning. Another vital referent is Gottfried Wilhelm Leibniz's project of an ideographic language "characteristica universalis", based on the principles of Chinese characters. It would contain small figures representing "visible things by their lines, and the invisible, by the visible which accompany them", adding "certain additional marks, suitable to make understood the flexions and the particles." Bliss stated that his own work was an attempt to take up the thread of Leibniz's project. Finally, there is a strong influence from The Meaning of Meaning (1923) by C. K. Ogden and I. A. Richards, which was considered a standard work on semantics. Bliss found especially useful their "triangle of reference": the physical thing or "referent" that we perceive would be represented at the right vertex; the meaning that we know by experience (our implicit definition of the thing), at the top vertex; and the physical word that we speak or symbol we write, at the left vertex. The reversed process would happen when we read or listen to words: from the words, we recall meanings, related to referents which may be real things or unreal "fictions". 
Bliss was particularly concerned with political propaganda, whose discourses would tend to contain words that correspond to unreal or ambiguous referents. Grammar The grammar of Blissymbols is based on a certain interpretation of nature, dividing it into matter (material things), energy (actions), and human values (mental evaluations). In a natural language, these would give place respectively to nouns, verbs, and adjectives. In Blissymbols, they are marked respectively by a small square symbol, a small cone symbol, and a small V or inverted cone. These symbols may be placed above any other symbol, turning it respectively into a "thing", an "action", and an "evaluation": When a symbol is not marked by any of the three grammar symbols (square, cone, inverted cone), it may refer to a non-material thing, a grammatical particle, etc. Examples The symbol represents the expression "world language", which was a first tentative name for Blissymbols. It combines the symbol for "writing tool" or "pen" (a line inclined, as a pen being used) with the symbol for "world", which in its turn combines "ground" or "earth" (a horizontal line below) and its counterpart derivate "sky" (a horizontal line above). Thus the world would be seen as "what is among the ground and the sky", and "Blissymbols" would be seen as "the writing tool to express the world". This is clearly distinct from the symbol of "language", which is a combination of "mouth" and "ear". Thus natural languages are mainly oral, while Blissymbols is just a writing system dealing with semantics, not phonetics. The 900 individual symbols of the system are called "Bliss-characters"; these may be "ideographic" – representing abstract concepts, "pictographic" – a direct representation of objects, or "composite" – in which two or more existing Bliss-characters are superimposed to represent a new meaning. Size, orientation and relation to the "skyline" and "earthline" affects the meaning of each symbol. A single concept is called a "Bliss-word", which can consist of one or more Bliss-characters. In multiple-character Bliss-words, the main character is called the "classifier" which "indicates the semantic or grammatical category to which the Bliss-word belongs". To this can be added Bliss-characters as prefixes or suffixes called "modifiers" which amend the meaning of the first symbol. A further symbol called an "indicator" can be added above one of the characters in the Bliss-word (typically the classifier); these are used as "grammatical and/or semantic markers." Sentence on the right means "I want to go to the cinema.", showing several features of Blissymbolics: The pronoun "I" is formed of the Bliss-character for "person" and the number 1 (the first person). Using the number 2 would give the symbol for singular "You"; adding the plural indicator (a small cross at the top) would produce the pronouns "We" and plural "You". The Bliss-word for "to want" contains the heart which symbolizes "feeling" (the classifier), plus the serpentine line which symbolizes "fire" (the modifier), and the verb (called "action") indicator at the top. The Bliss-word for "to go" is composed of the Bliss-character for "leg" and the verb indicator. The Bliss-word for "cinema" is composed of the Bliss-character for "house" (the classifier), and "film" (the modifier); "film" is a composite character composed of "camera" and the arrow indicating movement. 
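As a rough illustration of the compositional structure just described (this is a toy sketch, not BCI's official data model; the field and value names are invented), the example sentence can be modelled in a few lines of Python:

    from dataclasses import dataclass, field

    @dataclass
    class BlissWord:
        # toy model: a classifier character, optional modifier characters,
        # and an optional grammatical indicator placed above the word
        classifier: str
        modifiers: list = field(default_factory=list)
        indicator: str = ""          # e.g. "action", "thing", "evaluation", "plural"

        def describe(self):
            head = "+".join([self.classifier, *self.modifiers])
            return f"{head} [{self.indicator}]" if self.indicator else head

    # "I want to go to the cinema." as the four Bliss-words described above
    sentence = [
        BlissWord("person", ["1"]),                          # I = person + number 1
        BlissWord("feeling", ["fire"], indicator="action"),  # to want
        BlissWord("leg", indicator="action"),                # to go
        BlissWord("house", ["film"]),                        # cinema = house + film
    ]
    print("  ".join(w.describe() for w in sentence))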
Towards the international standardization of the script Blissymbolics was used in 1971 to help children at the Ontario Crippled Children's Centre (OCCC, now the Holland Bloorview Kids Rehabilitation Hospital) in Toronto, Ontario, Canada. Since it was important that the children see consistent pictures, OCCC had a draftsman named Jim Grice draw the symbols. Both Charles K. Bliss and Margrit Beesley at the OCCC worked with Grice to ensure consistency. In 1975, a new organization named Blissymbolics Communication Foundation directed by Shirley McNaughton led this effort. Over the years, this organization changed its name to Blissymbolics Communication Institute, Easter Seal Communication Institute, and ultimately to Blissymbolics Communication International (BCI). BCI is an international group of people who act as an authority regarding the standardization of the Blissymbolics language. It has taken responsibility for any extensions of the Blissymbolics language as well as any maintenance needed for the language. BCI has coordinated usage of the language since 1971 for augmentative and alternative communication. BCI received a licence and copyright through legal agreements with Charles K. Bliss in 1975 and 1982. Limiting the count of Bliss-characters (there are currently about 900) is very useful in order to help the user community. It also helps when implementing Blissymbolics using technology such as computers. In 1991, BCI published a reference guide containing 2300 vocabulary items and detailed rules for the graphic design of additional characters, so they settled a first set of approved Bliss-words for general use. The Standards Council of Canada then sponsored, on January 21, 1993, the registration of an encoded character set for use in ISO/IEC 2022, in the ISO-IR international registry of coded character sets. After many years of requests, the Blissymbolic language was finally approved as an encoded language, with code , into the ISO 639-2 and ISO 639-3 standards. A proposal was posted by Michael Everson for the Blissymbolics script to be included in the Universal Character Set (UCS) and encoded for use with the ISO/IEC 10646 and Unicode standards. BCI would cooperate with the Unicode Technical Committee (UTC) and the ISO Working Group. The proposed encoding does not use the lexical encoding model used in the existing ISO-IR/169 registered character set, but instead applies the Unicode and ISO character-glyph model to the Bliss-character model already adopted by BCI, since this would significantly reduce the number of needed characters. Bliss-characters can now be used in a creative way to create many new arbitrary concepts, by surrounding the invented words with special Bliss indicators (similar to punctuation), something which was not possible in the ISO-IR/169 encoding. However, by the end of 2009, the Blissymbolic script was not encoded in the UCS. Some questions are still unanswered, such as the inclusion in the BCI repertoire of some characters (currently about 24) that are already encoded in the UCS (like digits, punctuation signs, spaces and some markers), but whose unification may cause problems due to the very strict graphical layouts required by the published Bliss reference guides. In addition, the character metrics use a specific layout where the usual baseline is not used, and the ideographic em-square is not relevant for Bliss character designs that use additional "earth line" and "sky line" to define the composition square. 
Some fonts supporting the BCI repertoire are available and usable with texts encoded with private-use assignments (PUA) within the UCS. But only the private BCI encoding based on ISO-IR/169 registration is available for text interchange. See also Egyptian hieroglyphs Esperanto iConji Isotype Kanji sitelen pona LoCoS (language) References External links Blissymbol Communication UK An Introduction to Blissymbols (PDF file) Standard two-byte encoded character set for Blissymbols , from the ISO-IR international registry of character sets, registration number 169 (1993-01-21). Michael Everson's First proposed encoding into Unicode and ISO/IEC 10646 of Blissymbolics characters, based on the decomposition of the ISO-IR/169 repertoire. Preliminary proposal for encoding Blissymbols (WG2 N5228) Radiolab program about Charles Bliss – Broadcast December 2012 – the item about Charles Bliss starts after 5 minutes and is approx 30 mins long. Engineered languages Auxiliary and educational artificial scripts International auxiliary languages Pictograms Augmentative and alternative communication Writing systems introduced in 1949 Constructed languages Constructed languages introduced in the 1940s
Blissymbols
[ "Mathematics" ]
3,200
[ "Symbols", "Pictograms" ]
4,715
https://en.wikipedia.org/wiki/Boolean%20satisfiability%20problem
In logic and computer science, the Boolean satisfiability problem (sometimes called propositional satisfiability problem and abbreviated SATISFIABILITY, SAT or B-SAT) asks whether there exists an interpretation that satisfies a given Boolean formula. In other words, it asks whether the formula's variables can be consistently replaced by the values TRUE or FALSE to make the formula evaluate to TRUE. If this is the case, the formula is called satisfiable, else unsatisfiable. For example, the formula "a AND NOT b" is satisfiable because one can find the values a = TRUE and b = FALSE, which make (a AND NOT b) = TRUE. In contrast, "a AND NOT a" is unsatisfiable. SAT is the first problem that was proven to be NP-complete—this is the Cook–Levin theorem. This means that all problems in the complexity class NP, which includes a wide range of natural decision and optimization problems, are at most as difficult to solve as SAT. There is no known algorithm that efficiently solves each SAT problem, and it is generally believed that no such algorithm exists, but this belief has not been proven mathematically, and resolving the question of whether SAT has a polynomial-time algorithm is equivalent to the P versus NP problem, which is a famous open problem in the theory of computing. Nevertheless, as of 2007, heuristic SAT-algorithms are able to solve problem instances involving tens of thousands of variables and formulas consisting of millions of symbols, which is sufficient for many practical SAT problems from, e.g., artificial intelligence, circuit design, and automatic theorem proving. Definitions A propositional logic formula, also called Boolean expression, is built from variables, operators AND (conjunction, also denoted by ∧), OR (disjunction, ∨), NOT (negation, ¬), and parentheses. A formula is said to be satisfiable if it can be made TRUE by assigning appropriate logical values (i.e. TRUE, FALSE) to its variables. The Boolean satisfiability problem (SAT) is, given a formula, to check whether it is satisfiable. This decision problem is of central importance in many areas of computer science, including theoretical computer science, complexity theory, algorithmics, cryptography and artificial intelligence. Conjunctive normal form A literal is either a variable (in which case it is called a positive literal) or the negation of a variable (called a negative literal). A clause is a disjunction of literals (or a single literal). A clause is called a Horn clause if it contains at most one positive literal. A formula is in conjunctive normal form (CNF) if it is a conjunction of clauses (or a single clause). For example, is a positive literal, is a negative literal, and is a clause. The formula is in conjunctive normal form; its first and third clauses are Horn clauses, but its second clause is not. The formula is satisfiable, by choosing x1 = FALSE, x2 = FALSE, and x3 arbitrarily, since (FALSE ∨ ¬FALSE) ∧ (¬FALSE ∨ FALSE ∨ x3) ∧ ¬FALSE evaluates to (FALSE ∨ TRUE) ∧ (TRUE ∨ FALSE ∨ x3) ∧ TRUE, and in turn to TRUE ∧ TRUE ∧ TRUE (i.e. to TRUE). In contrast, the CNF formula a ∧ ¬a, consisting of two clauses of one literal, is unsatisfiable, since for a=TRUE or a=FALSE it evaluates to TRUE ∧ ¬TRUE (i.e., FALSE) or FALSE ∧ ¬FALSE (i.e., again FALSE), respectively. For some versions of the SAT problem, it is useful to define the notion of a generalized conjunctive normal form formula, viz. 
as a conjunction of arbitrarily many generalized clauses, the latter being of the form for some Boolean function R and (ordinary) literals . Different sets of allowed Boolean functions lead to different problem versions. As an example, R(¬x,a,b) is a generalized clause, and R(¬x,a,b) ∧ R(b,y,c) ∧ R(c,d,¬z) is a generalized conjunctive normal form. This formula is used below, with R being the ternary operator that is TRUE just when exactly one of its arguments is. Using the laws of Boolean algebra, every propositional logic formula can be transformed into an equivalent conjunctive normal form, which may, however, be exponentially longer. For example, transforming the formula (x1∧y1) ∨ (x2∧y2) ∨ ... ∨ (xn∧yn) into conjunctive normal form yields ; while the former is a disjunction of n conjunctions of 2 variables, the latter consists of 2n clauses of n variables. However, with use of the Tseytin transformation, we may find an equisatisfiable conjunctive normal form formula with length linear in the size of the original propositional logic formula. Complexity SAT was the first problem known to be NP-complete, as proved by Stephen Cook at the University of Toronto in 1971 and independently by Leonid Levin at the Russian Academy of Sciences in 1973. Until that time, the concept of an NP-complete problem did not even exist. The proof shows how every decision problem in the complexity class NP can be reduced to the SAT problem for CNF formulas, sometimes called CNFSAT. A useful property of Cook's reduction is that it preserves the number of accepting answers. For example, deciding whether a given graph has a 3-coloring is another problem in NP; if a graph has 17 valid 3-colorings, then the SAT formula produced by the Cook–Levin reduction will have 17 satisfying assignments. NP-completeness only refers to the run-time of the worst case instances. Many of the instances that occur in practical applications can be solved much more quickly. See §Algorithms for solving SAT below. 3-satisfiability Like the satisfiability problem for arbitrary formulas, determining the satisfiability of a formula in conjunctive normal form where each clause is limited to at most three literals is NP-complete also; this problem is called 3-SAT, 3CNFSAT, or 3-satisfiability. To reduce the unrestricted SAT problem to 3-SAT, transform each clause to a conjunction of clauses where are fresh variables not occurring elsewhere. Although the two formulas are not logically equivalent, they are equisatisfiable. The formula resulting from transforming all clauses is at most 3 times as long as its original; that is, the length growth is polynomial. 3-SAT is one of Karp's 21 NP-complete problems, and it is used as a starting point for proving that other problems are also NP-hard. This is done by polynomial-time reduction from 3-SAT to the other problem. An example of a problem where this method has been used is the clique problem: given a CNF formula consisting of c clauses, the corresponding graph consists of a vertex for each literal, and an edge between each two non-contradicting literals from different clauses; see the picture. The graph has a c-clique if and only if the formula is satisfiable. There is a simple randomized algorithm due to Schöning (1999) that runs in time (4/3)n where n is the number of variables in the 3-SAT proposition, and succeeds with high probability to correctly decide 3-SAT. 
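The random-walk idea behind Schöning's algorithm can be sketched in a few lines of Python (a simplified, un-tuned illustration rather than a real solver; clauses use the common signed-integer convention where k means the variable x_k and -k its negation, and the restart count is arbitrary):

    import random

    def schoening_3sat(clauses, n_vars, restarts=200):
        # Repeated random restarts; each walk runs for 3*n steps and flips one
        # variable chosen from a currently unsatisfied clause.  Returns a
        # satisfying assignment, or None, which only means "probably unsatisfiable".
        def value(lit, a):
            return a[abs(lit)] if lit > 0 else not a[abs(lit)]

        for _ in range(restarts):
            a = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
            for _ in range(3 * n_vars):
                unsatisfied = [c for c in clauses if not any(value(l, a) for l in c)]
                if not unsatisfied:
                    return a
                lit = random.choice(random.choice(unsatisfied))   # a literal of a failing clause
                a[abs(lit)] = not a[abs(lit)]                     # flip that variable
        return None

    # (x1 v x2 v -x3) ^ (-x1 v x3 v x4) ^ (x2 v -x4 v x1) ^ (-x2 v -x3 v -x4)
    print(schoening_3sat([[1, 2, -3], [-1, 3, 4], [2, -4, 1], [-2, -3, -4]], 4))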
The exponential time hypothesis asserts that no algorithm can solve 3-SAT (or indeed k-SAT for any ) in time (that is, fundamentally faster than exponential in n). Selman, Mitchell, and Levesque (1996) give empirical data on the difficulty of randomly generated 3-SAT formulas, depending on their size parameters. Difficulty is measured in number recursive calls made by a DPLL algorithm. They identified a phase transition region from almost-certainly-satisfiable to almost-certainly-unsatisfiable formulas at the clauses-to-variables ratio at about 4.26. 3-satisfiability can be generalized to k-satisfiability (k-SAT, also k-CNF-SAT), when formulas in CNF are considered with each clause containing up to k literals. However, since for any k ≥ 3, this problem can neither be easier than 3-SAT nor harder than SAT, and the latter two are NP-complete, so must be k-SAT. Some authors restrict k-SAT to CNF formulas with exactly k literals. This does not lead to a different complexity class either, as each clause with j < k literals can be padded with fixed dummy variables to . After padding all clauses, 2k–1 extra clauses must be appended to ensure that only can lead to a satisfying assignment. Since k does not depend on the formula length, the extra clauses lead to a constant increase in length. For the same reason, it does not matter whether duplicate literals are allowed in clauses, as in . Special cases of SAT Conjunctive normal form Conjunctive normal form (in particular with 3 literals per clause) is often considered the canonical representation for SAT formulas. As shown above, the general SAT problem reduces to 3-SAT, the problem of determining satisfiability for formulas in this form. Disjunctive normal form SAT is trivial if the formulas are restricted to those in disjunctive normal form, that is, they are a disjunction of conjunctions of literals. Such a formula is indeed satisfiable if and only if at least one of its conjunctions is satisfiable, and a conjunction is satisfiable if and only if it does not contain both x and NOT x for some variable x. This can be checked in linear time. Furthermore, if they are restricted to being in full disjunctive normal form, in which every variable appears exactly once in every conjunction, they can be checked in constant time (each conjunction represents one satisfying assignment). But it can take exponential time and space to convert a general SAT problem to disjunctive normal form; to obtain an example, exchange "∧" and "∨" in the above exponential blow-up example for conjunctive normal forms. Exactly-1 3-satisfiability A variant of the 3-satisfiability problem is the one-in-three 3-SAT (also known variously as 1-in-3-SAT and exactly-1 3-SAT). Given a conjunctive normal form with three literals per clause, the problem is to determine whether there exists a truth assignment to the variables so that each clause has exactly one TRUE literal (and thus exactly two FALSE literals). In contrast, ordinary 3-SAT requires that every clause has at least one TRUE literal. Formally, a one-in-three 3-SAT problem is given as a generalized conjunctive normal form with all generalized clauses using a ternary operator R that is TRUE just if exactly one of its arguments is. When all literals of a one-in-three 3-SAT formula are positive, the satisfiability problem is called one-in-three positive 3-SAT. 
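The difference between ordinary 3-SAT ("at least one true literal per clause") and one-in-three 3-SAT ("exactly one") is easy to state in code. The brute-force checker below is purely illustrative and exponential in the number of variables; the names and the sample formula are arbitrary, and clauses again use the signed-integer convention:

    from itertools import product

    def check(clauses, n_vars, per_clause):
        # clauses: lists of signed ints (k means x_k, -k means NOT x_k);
        # per_clause decides whether an individual clause counts as satisfied
        for bits in product([False, True], repeat=n_vars):
            value = lambda lit: bits[abs(lit) - 1] ^ (lit < 0)
            if all(per_clause([value(l) for l in c]) for c in clauses):
                return bits
        return None

    ordinary_3sat = lambda vals: any(vals)        # at least one TRUE literal
    one_in_three  = lambda vals: sum(vals) == 1   # exactly one TRUE literal

    clauses = [[1, 2, 3], [-1, -2, 3], [1, -3, -2]]
    print(check(clauses, 3, ordinary_3sat))   # a satisfying assignment, if any
    print(check(clauses, 3, one_in_three))    # may differ from the above, or be None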
One-in-three 3-SAT, together with its positive case, is listed as NP-complete problem "LO4" in the standard reference Computers and Intractability: A Guide to the Theory of NP-Completeness by Michael R. Garey and David S. Johnson. One-in-three 3-SAT was proved to be NP-complete by Thomas Jerome Schaefer as a special case of Schaefer's dichotomy theorem, which asserts that any problem generalizing Boolean satisfiability in a certain way is either in the class P or is NP-complete. Schaefer gives a construction allowing an easy polynomial-time reduction from 3-SAT to one-in-three 3-SAT. Let "(x or y or z)" be a clause in a 3CNF formula. Add six fresh Boolean variables a, b, c, d, e, and f, to be used to simulate this clause and no other. Then the formula R(x,a,d) ∧ R(y,b,d) ∧ R(a,b,e) ∧ R(c,d,f) ∧ R(z,c,FALSE) is satisfiable by some setting of the fresh variables if and only if at least one of x, y, or z is TRUE, see picture (left). Thus any 3-SAT instance with m clauses and n variables may be converted into an equisatisfiable one-in-three 3-SAT instance with 5m clauses and n + 6m variables. Another reduction involves only four fresh variables and three clauses: R(¬x,a,b) ∧ R(b,y,c) ∧ R(c,d,¬z), see picture (right). Not-all-equal 3-satisfiability Another variant is the not-all-equal 3-satisfiability problem (also called NAE3SAT). Given a conjunctive normal form with three literals per clause, the problem is to determine if an assignment to the variables exists such that in no clause all three literals have the same truth value. This problem is NP-complete, too, even if no negation symbols are admitted, by Schaefer's dichotomy theorem. Linear SAT A 3-SAT formula is Linear SAT (LSAT) if each clause (viewed as a set of literals) intersects at most one other clause, and, moreover, if two clauses intersect, then they have exactly one literal in common. An LSAT formula can be depicted as a set of disjoint semi-closed intervals on a line. Deciding whether an LSAT formula is satisfiable is NP-complete. 2-satisfiability SAT is easier if the number of literals in a clause is limited to at most 2, in which case the problem is called 2-SAT. This problem can be solved in polynomial time, and in fact is complete for the complexity class NL. If additionally all OR operations in literals are changed to XOR operations, then the result is called exclusive-or 2-satisfiability, which is a problem complete for the complexity class SL = L. Horn-satisfiability The problem of deciding the satisfiability of a given conjunction of Horn clauses is called Horn-satisfiability, or HORN-SAT. It can be solved in polynomial time by a single step of the unit propagation algorithm, which produces the single minimal model of the set of Horn clauses (w.r.t. the set of literals assigned to TRUE). Horn-satisfiability is P-complete. It can be seen as P's version of the Boolean satisfiability problem. Also, deciding the truth of quantified Horn formulas can be done in polynomial time. Horn clauses are of interest because they are able to express implication of one variable from a set of other variables. Indeed, one such clause ¬x1 ∨ ... ∨ ¬xn ∨ y can be rewritten as x1 ∧ ... ∧ xn → y; that is, if x1,...,xn are all TRUE, then y must be TRUE as well. A generalization of the class of Horn formulas is that of renameable-Horn formulae, which is the set of formulas that can be placed in Horn form by replacing some variables with their respective negation. 
For example, (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2 ∨ x3) ∧ ¬x1 is not a Horn formula, but can be renamed to the Horn formula (x1 ∨ ¬x2) ∧ (¬x1 ∨ x2 ∨ ¬y3) ∧ ¬x1 by introducing y3 as negation of x3. In contrast, no renaming of (x1 ∨ ¬x2 ∨ ¬x3) ∧ (¬x1 ∨ x2 ∨ x3) ∧ ¬x1 leads to a Horn formula. Checking the existence of such a replacement can be done in linear time; therefore, the satisfiability of such formulae is in P as it can be solved by first performing this replacement and then checking the satisfiability of the resulting Horn formula. XOR-satisfiability Another special case is the class of problems where each clause contains XOR (i.e. exclusive or) rather than (plain) OR operators. This is in P, since an XOR-SAT formula can also be viewed as a system of linear equations mod 2, and can be solved in cubic time by Gaussian elimination; see the box for an example. This recast is based on the kinship between Boolean algebras and Boolean rings, and the fact that arithmetic modulo two forms a finite field. Since a XOR b XOR c evaluates to TRUE if and only if exactly 1 or 3 members of {a,b,c} are TRUE, each solution of the 1-in-3-SAT problem for a given CNF formula is also a solution of the XOR-3-SAT problem, and in turn each solution of XOR-3-SAT is a solution of 3-SAT; see the picture. As a consequence, for each CNF formula, it is possible to solve the XOR-3-SAT problem defined by the formula, and based on the result infer either that the 3-SAT problem is solvable or that the 1-in-3-SAT problem is unsolvable. Provided that the complexity classes P and NP are not equal, neither 2-, nor Horn-, nor XOR-satisfiability is NP-complete, unlike SAT. Schaefer's dichotomy theorem The restrictions above (CNF, 2CNF, 3CNF, Horn, XOR-SAT) bound the considered formulae to be conjunctions of subformulas; each restriction states a specific form for all subformulas: for example, only binary clauses can be subformulas in 2CNF. Schaefer's dichotomy theorem states that, for any restriction to Boolean functions that can be used to form these subformulas, the corresponding satisfiability problem is in P or NP-complete. The membership in P of the satisfiability of 2CNF, Horn, and XOR-SAT formulae are special cases of this theorem. The following table summarizes some common variants of SAT. Extensions of SAT An extension that has gained significant popularity since 2003 is satisfiability modulo theories (SMT) that can enrich CNF formulas with linear constraints, arrays, all-different constraints, uninterpreted functions, etc. Such extensions typically remain NP-complete, but very efficient solvers are now available that can handle many such kinds of constraints. The satisfiability problem becomes more difficult if both "for all" (∀) and "there exists" (∃) quantifiers are allowed to bind the Boolean variables. An example of such an expression would be ; it is valid, since for all values of x and y, an appropriate value of z can be found, viz. z=TRUE if both x and y are FALSE, and z=FALSE else. SAT itself (tacitly) uses only ∃ quantifiers. If only ∀ quantifiers are allowed instead, the so-called tautology problem is obtained, which is co-NP-complete. If any number of both quantifiers are allowed, the problem is called the quantified Boolean formula problem (QBF), which can be shown to be PSPACE-complete. It is widely believed that PSPACE-complete problems are strictly harder than any problem in NP, although this has not yet been proved. Using highly parallel P systems, QBF-SAT problems can be solved in linear time. 
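The reduction of XOR-satisfiability to linear algebra over GF(2) described above can be sketched as follows (an illustrative, unoptimized elimination; the representation of equations as sets of variable indices plus a right-hand-side bit is a choice made here for brevity):

    def xor_sat(equations, n_vars):
        # each equation (vars, rhs) encodes  x_i1 XOR ... XOR x_ik = rhs;
        # rows are kept as (bitmask, rhs) and eliminated column by column mod 2
        rows = [(sum(1 << (v - 1) for v in vars_), rhs) for vars_, rhs in equations]
        pivot_row = 0
        for col in range(n_vars):
            r = next((i for i in range(pivot_row, len(rows)) if rows[i][0] >> col & 1), None)
            if r is None:
                continue
            rows[pivot_row], rows[r] = rows[r], rows[pivot_row]
            mask, rhs = rows[pivot_row]
            for i in range(len(rows)):
                if i != pivot_row and rows[i][0] >> col & 1:
                    rows[i] = (rows[i][0] ^ mask, rows[i][1] ^ rhs)
            pivot_row += 1
        # the system is unsatisfiable exactly when some row reduces to 0 = 1
        return all(not (mask == 0 and rhs == 1) for mask, rhs in rows)

    # (a XOR b XOR c = 1) and (a XOR b = 0) and (a XOR c = 0): satisfiable (a = b = c = 1)
    print(xor_sat([({1, 2, 3}, 1), ({1, 2}, 0), ({1, 3}, 0)], 3))             # True
    # adding the equation (a = 0) makes the system inconsistent
    print(xor_sat([({1, 2, 3}, 1), ({1, 2}, 0), ({1, 3}, 0), ({1}, 0)], 3))   # False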
Ordinary SAT asks if there is at least one variable assignment that makes the formula true. A variety of variants deal with the number of such assignments: MAJ-SAT asks if at least half of all assignments make the formula TRUE. It is known to be complete for PP, a probabilistic class. Surprisingly, MAJ-kSAT is demonstrated to be in P for every finite integer k. #SAT, the problem of counting how many variable assignments satisfy a formula, is a counting problem, not a decision problem, and is #P-complete. UNIQUE SAT is the problem of determining whether a formula has exactly one assignment. It is complete for US, the complexity class describing problems solvable by a non-deterministic polynomial time Turing machine that accepts when there is exactly one nondeterministic accepting path and rejects otherwise. UNAMBIGUOUS-SAT is the name given to the satisfiability problem when the input is restricted to formulas having at most one satisfying assignment. The problem is also called USAT. A solving algorithm for UNAMBIGUOUS-SAT is allowed to exhibit any behavior, including endless looping, on a formula having several satisfying assignments. Although this problem seems easier, Valiant and Vazirani have shown that if there is a practical (i.e. randomized polynomial-time) algorithm to solve it, then all problems in NP can be solved just as easily. MAX-SAT, the maximum satisfiability problem, is an FNP generalization of SAT. It asks for the maximum number of clauses which can be satisfied by any assignment. It has efficient approximation algorithms, but is NP-hard to solve exactly. Worse still, it is APX-complete, meaning there is no polynomial-time approximation scheme (PTAS) for this problem unless P=NP. WMSAT is the problem of finding an assignment of minimum weight that satisfy a monotone Boolean formula (i.e. a formula without any negation). Weights of propositional variables are given in the input of the problem. The weight of an assignment is the sum of weights of true variables. That problem is NP-complete (see Th. 1 of ). Other generalizations include satisfiability for first- and second-order logic, constraint satisfaction problems, 0-1 integer programming. Finding a satisfying assignment While SAT is a decision problem, the search problem of finding a satisfying assignment reduces to SAT. That is, each algorithm which correctly answers whether an instance of SAT is solvable can be used to find a satisfying assignment. First, the question is asked on the given formula Φ. If the answer is "no", the formula is unsatisfiable. Otherwise, the question is asked on the partly instantiated formula Φ{x1=TRUE}, that is, Φ with the first variable x1 replaced by TRUE, and simplified accordingly. If the answer is "yes", then x1=TRUE, otherwise x1=FALSE. Values of other variables can be found subsequently in the same way. In total, n+1 runs of the algorithm are required, where n is the number of distinct variables in Φ. This property is used in several theorems in complexity theory: NP ⊆ P/poly ⇒ PH = Σ2   (Karp–Lipton theorem) NP ⊆ BPP ⇒ NP = RP P = NP ⇒ FP = FNP Algorithms for solving SAT Since the SAT problem is NP-complete, only algorithms with exponential worst-case complexity are known for it. In spite of this, efficient and scalable algorithms for SAT were developed during the 2000s and have contributed to dramatic advances in the ability to automatically solve problem instances involving tens of thousands of variables and millions of constraints (i.e. clauses). 
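The search-to-decision reduction described above ("Finding a satisfying assignment") can be sketched with a stand-in oracle; here the oracle is plain brute force, but any SAT decision procedure could be substituted, and the driver makes the n+1 oracle calls the text mentions (function names are invented for this sketch):

    from itertools import product

    def sat_oracle(clauses, n_vars, fixed=()):
        # decision oracle: is the CNF satisfiable when x1..xk are pinned to `fixed`?
        # brute force stands in for a real solver here
        for tail in product([False, True], repeat=n_vars - len(fixed)):
            bits = tuple(fixed) + tail
            value = lambda lit: bits[abs(lit) - 1] ^ (lit < 0)
            if all(any(value(l) for l in c) for c in clauses):
                return True
        return False

    def find_assignment(clauses, n_vars):
        # n+1 calls to the decision oracle, fixing one variable at a time
        if not sat_oracle(clauses, n_vars):
            return None
        fixed = []
        for _ in range(n_vars):
            fixed.append(True)
            if not sat_oracle(clauses, n_vars, fixed):
                fixed[-1] = False      # True failed, so False must still be extendable
        return fixed

    # (x1 v -x2) ^ (-x1 v x2 v x3) ^ (-x1)
    print(find_assignment([[1, -2], [-1, 2, 3], [-1]], 3))   # [False, False, True]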
Examples of such problems in electronic design automation (EDA) include formal equivalence checking, model checking, formal verification of pipelined microprocessors, automatic test pattern generation, routing of FPGAs, planning, and scheduling problems, and so on. A SAT-solving engine is also considered to be an essential component in the electronic design automation toolbox. Major techniques used by modern SAT solvers include the Davis–Putnam–Logemann–Loveland algorithm (or DPLL), conflict-driven clause learning (CDCL), and stochastic local search algorithms such as WalkSAT. Almost all SAT solvers include time-outs, so they will terminate in reasonable time even if they cannot find a solution. Different SAT solvers will find different instances easy or hard, and some excel at proving unsatisfiability, and others at finding solutions. Recent attempts have been made to learn an instance's satisfiability using deep learning techniques. SAT solvers are developed and compared in SAT-solving contests. Modern SAT solvers are also having significant impact on the fields of software verification, constraint solving in artificial intelligence, and operations research, among others. See also Unsatisfiable core Satisfiability modulo theories Counting SAT Planar SAT Karloff–Zwick algorithm Circuit satisfiability Notes External links SAT Game: try solving a Boolean satisfiability problem yourself The international SAT competition website International Conference on Theory and Applications of Satisfiability Testing Journal on Satisfiability, Boolean Modeling and Computation SAT Live, an aggregate website for research on the satisfiability problem Yearly evaluation of MaxSAT solvers References Sources This article includes material from https://web.archive.org/web/20070708233347/http://www.sigda.org/newsletter/2006/eNews_061201.html by Prof. Karem A. Sakallah. Further reading (by date of publication) Boolean algebra Electronic design automation Formal methods Logic in computer science NP-complete problems Satisfiability problems
Boolean satisfiability problem
[ "Mathematics", "Engineering" ]
5,437
[ "Boolean algebra", "Logic in computer science", "Automated theorem proving", "Mathematical logic", "Computational problems", "Software engineering", "Fields of abstract algebra", "Mathematical problems", "Formal methods", "NP-complete problems", "Satisfiability problems" ]
4,733
https://en.wikipedia.org/wiki/Bidirectional%20text
A bidirectional text contains two text directionalities, right-to-left (RTL) and left-to-right (LTR). It generally involves text containing different types of alphabets, but may also refer to boustrophedon, which is changing text direction in each row. An example is the RTL Hebrew name Sarah: , spelled sin (ש) on the right, resh (ר) in the middle, and heh (ה) on the left. Many computer programs failed to display this correctly, because they were designed to display text in one direction only. Some so-called right-to-left scripts such as the Persian script and Arabic are mostly, but not exclusively, right-to-left—mathematical expressions, numeric dates and numbers bearing units are embedded from left to right. That also happens if text from a left-to-right language such as English is embedded in them; or vice versa, if Arabic is embedded in a left-to-right script such as English. Bidirectional script support Bidirectional script support is the capability of a computer system to correctly display bidirectional text. The term is often shortened to "BiDi" or "bidi". Early computer installations were designed only to support a single writing system, typically for left-to-right scripts based on the Latin alphabet only. Adding new character sets and character encodings enabled a number of other left-to-right scripts to be supported, but did not easily support right-to-left scripts such as Arabic or Hebrew, and mixing the two was not practical. Right-to-left scripts were introduced through encodings like ISO/IEC 8859-6 and ISO/IEC 8859-8, storing the letters (usually) in writing and reading order. It is possible to simply flip the left-to-right display order to a right-to-left display order, but doing this sacrifices the ability to correctly display left-to-right scripts. With bidirectional script support, it is possible to mix characters from different scripts on the same page, regardless of writing direction. In particular, the Unicode standard provides foundations for complete BiDi support, with detailed rules as to how mixtures of left-to-right and right-to-left scripts are to be encoded and displayed. Unicode bidi support The Unicode standard calls for characters to be ordered 'logically', i.e. in the sequence in which they are intended to be interpreted, as opposed to 'visually', the sequence in which they appear. This distinction is relevant for bidi support because at any bidi transition, the visual presentation ceases to be the 'logical' one. Thus, in order to offer bidi support, Unicode prescribes an algorithm for how to convert the logical sequence of characters into the correct visual presentation. For this purpose, the Unicode encoding standard divides all its characters into one of four types: 'strong', 'weak', 'neutral', and 'explicit formatting'. Strong characters Strong characters are those with a definite direction. Examples of this type of character include most alphabetic characters, syllabic characters, Han ideographs, non-European or non-Arabic digits, and punctuation characters that are specific to only those scripts. Weak characters Weak characters are those with vague direction. Examples of this type of character include European digits, Eastern Arabic-Indic digits, arithmetic symbols, and currency symbols. Neutral characters Neutral characters have direction indeterminable without context. Examples include paragraph separators, tabs, and most other whitespace characters. 
Punctuation symbols that are common to many scripts, such as the colon, comma, full-stop, and the no-break-space also fall within this category. Explicit formatting Explicit formatting characters, also referred to as "directional formatting characters", are special Unicode sequences that direct the algorithm to modify its default behavior. These characters are subdivided into "marks", "embeddings", "isolates", and "overrides". Their effects continue until the occurrence of either a paragraph separator, or a "pop" character. Marks If a "weak" character is followed by another "weak" character, the algorithm will look at the first neighbouring "strong" character. Sometimes this leads to unintentional display errors. These errors are corrected or prevented with "pseudo-strong" characters. Such Unicode control characters are called marks. The mark ( or ) is to be inserted into a location to make an enclosed weak character inherit its writing direction. For example, to correctly display the for an English name brand (LTR) in an Arabic (RTL) passage, an LRM mark is inserted after the trademark symbol if the symbol is not followed by LTR text (e.g. ""). If the LRM mark is not added, the weak character ™ will be neighbored by a strong LTR character and a strong RTL character. Hence, in an RTL context, it will be considered to be RTL, and displayed in an incorrect order (e.g. ""). Embeddings The "embedding" directional formatting characters are the classical Unicode method of explicit formatting, and as of Unicode 6.3, are being discouraged in favor of "isolates". An "embedding" signals that a piece of text is to be treated as directionally distinct. The text within the scope of the embedding formatting characters is not independent of the surrounding text. Also, characters within an embedding can affect the ordering of characters outside. Unicode 6.3 recognized that directional embeddings usually have too strong an effect on their surroundings and are thus unnecessarily difficult to use. Isolates The "isolate" directional formatting characters signal that a piece of text is to be treated as directionally isolated from its surroundings. As of Unicode 6.3, these are the formatting characters that are being encouraged in new documents – once target platforms are known to support them. These formatting characters were introduced after it became apparent that directional embeddings usually have too strong an effect on their surroundings and are thus unnecessarily difficult to use. Unlike the legacy 'embedding' directional formatting characters, 'isolate' characters have no effect on the ordering of the text outside their scope. Isolates can be nested, and may be placed within embeddings and overrides. Overrides The "override" directional formatting characters allow for special cases, such as for part numbers (e.g. to force a part number made of mixed English, digits and Hebrew letters to be written from right to left), and are recommended to be avoided wherever possible. As is true of the other directional formatting characters, "overrides" can be nested one inside another, and in embeddings and isolates. Using Unicode to override Using will switch the text direction from left-to-right to right-to-left. Similarly, using will switch the text direction from right-to-left to left-to-right. Refer to the Unicode Bidirectional Algorithm. Pops The "pop" directional formatting character, encoded at , terminates the scope of the most recent "embedding", "override", or "isolate". 
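As a hedged illustration (the brand string and the Arabic filler word are placeholders, and the actual visual ordering depends on the rendering engine), the directional formatting characters above are ordinary Unicode code points that can simply be placed in strings, and Python's standard unicodedata module reports the bidirectional class the Unicode Character Database assigns to each character:

    import unicodedata

    # code points of the directional formatting characters discussed above
    LRM, RLM = "\u200E", "\u200F"                                  # implicit marks
    LRE, RLE, PDF = "\u202A", "\u202B", "\u202C"                   # legacy embeddings and their pop
    LRO, RLO = "\u202D", "\u202E"                                  # overrides
    LRI, RLI, FSI, PDI = "\u2066", "\u2067", "\u2068", "\u2069"    # isolates and their pop

    brand = "ACME™"           # hypothetical LTR brand name
    rtl_word = "مرحبا"         # placeholder RTL (Arabic) word

    with_mark = rtl_word + " " + brand + LRM + " " + rtl_word           # LRM keeps ™ with the brand
    with_isolate = rtl_word + " " + FSI + brand + PDI + " " + rtl_word  # brand ordered in isolation

    for label, s in [("mark", with_mark), ("isolate", with_isolate)]:
        print(label, [f"U+{ord(c):04X}" for c in s if 0x2000 <= ord(c) <= 0x206F])

    # bidirectional classes of a few sample characters
    for ch in ["A", "א", "ش", "7", " ", LRM, RLI]:
        print(f"U+{ord(ch):04X}  {unicodedata.bidirectional(ch):>3}")
    # expected classes: L, R, AL, EN, WS, then L for the mark (it acts as an
    # invisible strong character) and RLI for the isolate initiator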
Runs In the algorithm, each sequence of concatenated strong characters is called a "run". A "weak" character that is located between two "strong" characters with the same orientation will inherit their orientation. A "weak" character that is located between two "strong" characters with a different writing direction will inherit the main context's writing direction (in an LTR document the character will become LTR, in an RTL document, it will become RTL). Table of possible BiDi character types Security Unicode bidirectional characters are used in the Trojan Source vulnerability. Visual Studio Code highlights BiDi control characters since version 1.62 released in October 2021. Visual Studio highlights BiDi control characters since version 17.0.3 released on December 14, 2021. Scripts using bidirectional text Egyptian hieroglyphs Egyptian hieroglyphs were written bidirectionally, where the signs that had a distinct "head" or "tail" faced the beginning of the line. Chinese characters and other CJK scripts Chinese characters can be written in either direction as well as vertically (top to bottom then right to left), especially in signs (such as plaques), but the orientation of the individual characters does not change. This can often be seen on tour buses in China, where the company name customarily runs from the front of the vehicle to its rear — that is, from right to left on the right side of the bus, and from left to right on the left side of the bus. English texts on the right side of the vehicle are also quite commonly written in reverse order. (See pictures of tour bus and post vehicle below.) Likewise, other CJK scripts made up of the same square characters, such as the Japanese writing system and Korean writing system, can also be written in any direction, although horizontally left-to-right, top-to-bottom and vertically top-to-bottom right-to-left are the two most common forms. Boustrophedon Boustrophedon is a writing style found in ancient Greek inscriptions, in Old Sabaic (an Old South Arabian language) and in Hungarian runes. This method of writing alternates direction, and usually reverses the individual characters, on each successive line. Moon type Moon type is an embossed adaptation of the Latin alphabet invented as a tactile alphabet for the blind. Initially the text changed direction (but not character orientation) at the end of the lines. Special embossed lines connected the end of a line and the beginning of the next. Around 1990, it changed to a left-to-right orientation. See also Internationalization and localization Horizontal and vertical writing in East Asian scripts Cyrillic numerals Right-to-left mark Transformation of text Boustrophedon References External links Unicode Standards Annex #9 The Bidirectional Algorithm W3C guidelines on authoring techniques for bi-directional text - includes examples and good explanations ICU International Components for Unicode contains an implementation of the bi-directional algorithm — along with other internationalization services Character encoding Unicode algorithms Internationalization and localization Writing direction
Bidirectional text
[ "Technology" ]
2,159
[ "Natural language and computing", "Internationalization and localization", "Character encoding" ]