id: int64 (values 580 – 79M)
url: string (lengths 31 – 175)
text: string (lengths 9 – 245k)
source: string (lengths 1 – 109)
categories: string (160 classes)
token_count: int64 (values 3 – 51.8k)
40,454,460
https://en.wikipedia.org/wiki/Phomopsis%20longicolla
Phomopsis longicolla is a species of ascomycete fungus in the family Diaporthaceae. It is a plant pathogen and mainly responsible for a soybean disease called Phomopsis seed decay (PSD). In other plant species, P. longicolla can also live as an endophyte, such as in the mangrove plant Sonneratia caseolaris. P. longicolla has been found to produce a number of cytotoxic and antimicrobial secondary metabolites, especially members of the class of phomoxanthones. P. longicolla was first described in 1985 by Thomas W. Hobbs et al. at the Department of Plant Pathology at Ohio State University. References Fungal plant pathogens and diseases longicolla Soybean diseases Fungus species Fungi described in 1985
Phomopsis longicolla
Biology
174
458,675
https://en.wikipedia.org/wiki/Brinell%20scale
The Brinell scale measures the indentation hardness of materials. It determines hardness through the scale of penetration of an indenter, loaded on a material test-piece. It is one of several definitions of hardness in materials science. The hardness scale is expressed as the Brinell Hardness Number (BHN or HB) and was named for Johan August Brinell, who developed the method in the early 20th century. History Proposed by Swedish engineer Johan August Brinell in 1900, it was the first widely used and standardised hardness test in engineering and metallurgy. The large size of the indentation and possible damage to the test-piece limit its usefulness. However, it also had the useful feature that the hardness value divided by two gave the approximate UTS in ksi for steels. This feature contributed to its early adoption over competing hardness tests. Test details The typical test uses a 10 mm diameter steel ball as an indenter with a 3,000 kgf (about 29.4 kN) force. For softer materials, a smaller force is used; for harder materials, a tungsten carbide ball is substituted for the steel ball. The indentation is measured and hardness calculated as BHN = 2P / (πD(D − √(D² − d²))), where: BHN = Brinell Hardness Number (kgf/mm²) P = applied load in kilogram-force (kgf) D = diameter of indenter (mm) d = diameter of indentation (mm) Brinell hardness is sometimes quoted in megapascals; the Brinell hardness number is multiplied by the acceleration due to gravity, 9.80665 m/s², to convert it to megapascals. The Brinell hardness number can be correlated with the ultimate tensile strength (UTS), although the relationship is dependent on the material, and therefore determined empirically. The relationship is based on Meyer's index (n) from Meyer's law. If Meyer's index is less than 2.2 then the ratio of UTS to BHN is 0.36. If Meyer's index is greater than 2.2, then the ratio increases. The Brinell hardness is designated by the most commonly used test standards (ASTM E10-14 and ISO 6506-1:2005) as HBW (H from hardness, B from Brinell and W from the material of the indenter, tungsten (wolfram) carbide). In former standards HB or HBS were used to refer to measurements made with steel indenters. HBW is calculated in both standards using SI units as HBW = 0.102 × 2F / (πD(D − √(D² − d²))), where: F = applied load (newtons) D = diameter of indenter (mm) d = diameter of indentation (mm) The factor 0.102 ≈ 1/9.80665 converts newtons to kilograms-force, so HBW agrees numerically with the kgf-based formula above. Common values When quoting a Brinell hardness number (BHN or more commonly HB), the conditions of the test used to obtain the number must be specified. The standard format for specifying tests can be seen in the example "HBW 10/3000". "HBW" means that a tungsten carbide (from the chemical symbol for tungsten or from the Spanish/Swedish/German name for tungsten, "Wolfram") ball indenter was used, as opposed to "HBS", which means a hardened steel ball. The "10" is the ball diameter in millimeters. The "3000" is the force in kilograms force. The hardness may also be shown as XXX HB YYD2. The XXX is the force to apply (in kgf) on a material of type YY (5 for aluminum alloys, 10 for copper alloys, 30 for steels). Thus a typical steel hardness could be written: 250 HB 30D2. It could be a maximum or a minimum.
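As a quick illustration of the formulas above, the sketch below computes a Brinell hardness number for a hypothetical measurement; the load, ball diameter and indentation diameter are made-up example values, not data from any standard:

```python
import math

def brinell_hardness(P_kgf: float, D_mm: float, d_mm: float) -> float:
    """BHN = 2P / (pi * D * (D - sqrt(D^2 - d^2))), with P in kgf and D, d in mm."""
    return (2 * P_kgf) / (math.pi * D_mm * (D_mm - math.sqrt(D_mm**2 - d_mm**2)))

# Hypothetical HBW 10/3000 measurement: 10 mm ball, 3,000 kgf load, 4.2 mm indentation
bhn = brinell_hardness(3000, 10, 4.2)
print(f"BHN ~ {bhn:.0f} kgf/mm^2")           # about 207 HB for these example numbers
print(f"~ {bhn * 9.80665:.0f} MPa")           # conversion to megapascals noted above
print(f"estimated UTS ~ {bhn / 2:.0f} ksi")   # rule of thumb for steels mentioned above
```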
Standards International (ISO) and European (CEN) Standard US standard (ASTM International) See also Brinelling Hardness comparison Knoop hardness test Leeb rebound hardness test Rockwell scale Vickers hardness test References External links Brinell Hardness Test – Methods, advantages, disadvantages, applications Rockwell to Brinell conversion chart (Brinell, Rockwell A,B,C) Struers hardness conversion table (Vickers, Brinell, Rockwell B,C,D) Brinell Hardness HB conversion chart (MPa, Brinell, Vickers, Rockwell C) Hardness tests Dimensionless numbers Scales de:Härte#Härteprüfung nach Brinell
Brinell scale
Materials_science,Mathematics
904
52,845,426
https://en.wikipedia.org/wiki/Aspergillus%20monodii
Aspergillus monodii is a coprophilic species of fungus in the genus Aspergillus which has been isolated from an arid zone in Africa. References Further reading monodii Fungi described in 2011 Fungus species
Aspergillus monodii
Biology
48
65,223,668
https://en.wikipedia.org/wiki/Ernest%20Harold%20Baynes
Ernest Harold Baynes (1868–1925) was an American naturalist and writer. He was instrumental in bringing to public attention the near demise of songbirds and of the bison. He founded the American Bison Society, of which President Teddy Roosevelt was honorary chairman. He was "the closest thing New England, and the world for that matter, will ever get to a real-life Doctor Dolittle; all sorts of New England birds and animals–foxes, wolves, chickadees, bears and bison were known to roam around and in and out of his house." Origins He was born on 1 May 1868 at Calcutta, West Bengal in India, a son of John Baynes, a British inventor, by his wife Helen Augusta Nowill Baynes. In the 1870s, after his father had failed at running a textiles company in Calcutta, the family moved to New York, where John set up the Baynes Tracery and Mosaic Co., which produced etched memorial tablets, among other products. He patented manufacturing processes with the tastemaker Lockwood de Forest, and Baynes tablets survive at Grace Church in Newark, the Battell Chapel and Norfolk Library in Norfolk, Connecticut, and the Cleveland Soldiers' and Sailors' Monument. John claimed (without proof) to have invented "photo-modeling", a technique for using light to carve sculpture. Ernest's siblings included Lillian Baynes Griffin, a British-born American journalist and photographer, and John R. Baynes, a metal etcher and photographer. Career He received his early education in England and aged 11 moved with his parents to Bronx Park, New York. He graduated as valedictorian of his high school class and subsequently attended the College of the City of New York. In the 1890s he started publishing articles on nature and wildlife in various newspapers. "Without the constraints of scholarly publishing, he became a wildlife showman through his articles and appearances." Bison conservation In 1904 he was appointed conservator of the Corbin Park buffalo reserve on the edge of the Blue Mountain Forest in New Hampshire, by Austin Corbin Jr. (d.1938), whose father, the banker and railroad entrepreneur Austin Corbin (1827–1896), had established it. Known as the "Blue Mountain Forest Association", it was a limited-membership proprietary hunting club whose park spanned the towns of Cornish, Croydon, Grantham, Newport and Plainfield. Corbin Sr. imported bison from Oklahoma, Montana, Wyoming, Manitoba and Texas and donated bison to other American zoos and preserves. He also imported exotic species from Europe and Canada, including wild boar from the Black Forest of Germany. Having been purchased by a syndicate of hunters in 1944, the park survives in 2020, surrounded by a chain-link fence, as a non-profit organization with a membership of about 30 wealthy game hunters, and is referred to as the "millionaires hunt club", said to be "the most exclusive game preserve in the United States". The herd of bison, however, was destroyed in the 1940s following an outbreak of brucellosis, and the main species preserved and hunted are elk and boar. From a natural level of 60 million in America, the bison population had been reduced by human activity to just 1,000 by the 1890s, and in 1904, 160 of these animals lived within Corbin Park. In about 1906 Baynes conducted a survey into surviving numbers of American bison, and found that 2,039 existed: 325 in the wild (25 in the USA, 300 in Canada) and 1,714 in captivity (1,109 in the USA, 175 in Canada, 130 in Europe and 300 elsewhere). 
After 15 years of work and campaigning by Baynes, the national bison herd had increased to 20,000. He was famous for his tame bison and for driving around the park in a carriage pulled by a pair of bison, War Whoop and Tomahawk, trained by him in an effort to promote the usefulness of the breed as draught animals. Baynes commented, "Of all the works of the late Mr. Austin Corbin, the preservation of that herd of bison was the one that would earn his country's deepest gratitude. His experiment led to the founding of the American Bison Society and was connected, directly or otherwise, with the formation of some of our national parks." Bird conservation He campaigned against wild birds being killed for their plumage. In 1913 he established one of the earliest bird sanctuaries (the Meriden Bird Club) at his home at Meriden, New Hampshire, an occasion marked by a play performed there in 1914, written by the poet Percy MacKaye and called Sanctuary: A Bird Masque, with actors dressed in bird costumes, including Baynes himself in the role of "Shy, the Naturalist". Amongst the audience was President Woodrow Wilson. Baynes' activity is believed to have maintained the political appetite to ban the importation of bird feathers, a measure included within the Underwood Tariff bill then being debated in Congress. The play was performed across the country and helped to fuel the bird-protection movement developing in the 1910s. Vivisection Baynes investigated vivisection and the claims of anti-vivisectionists. He visited laboratories where experiments were carried out and came to the unexpected conclusion that little pain had been inflicted on the animals, and that this pain was insignificant in comparison to the relief from pain the research had given humans. He authored the article "The Truth about Vivisection" for the Woman's Home Companion in July 1921. In this article, Baynes supported vivisection and critiqued the arguments of anti-vivisectionists. Baynes publicly declared himself a supporter of vivisection, which caused great controversy. He was attacked by anti-vivisection organizations as a fake humanitarian and a supporter of animal cruelty. Baynes received much abusive mail of a threatening nature. Walter Hadwen, on behalf of the American Anti-Vivisection Society, wrote a rebuttal to Baynes' article, stating it was filled with misinformation. However, Baynes received support from W. W. Keen, Henry Cantwell Wallace, Frederic Augustus Lucas and many other academics and doctors. Baynes defended vivisection for developing methods of disease prevention. In 1923, he authored a pamphlet, Vivisection and Modern Miracles. Death He died aged 56 on January 21, 1925, at his home "Sunset Ridge", Meriden, Sullivan County, New Hampshire, US. His ashes were scattered on Croydon Mountain near his home, which event is commemorated on a local monument inscribed: Here were scattered the ashes of Ernest Harold Baynes, lover of animals and men, and loved of them. May 1, 1868, January 21, 1925. In popular culture Ernest Harold Baynes appears in Annie Hartnett's novel Unlikely Animals. Also going by Harold, he appears as a ghost to the protagonist Emma Starling's father, Clive. His writings depicting relationships with the animals in his home also appear frequently, symbolizing a new section of the novel. 
Selected publications Wild Bird Guests: How to Entertain Them (1915) The Truth About Vivisection (1921) Polaris, the Story of an Eskimo Dog (1922) Vivisection and Modern Miracles (1923) The Sprite: The Story of a Red Fox (1924) The Book of Dogs: An Intimate Study of Mankind's Best Friend (with Louis Agassiz Fuertes) Animal Heroes of the Great War (1925) Three Young Crows, and Other Bird Stories (1927) Jimmie: The Story of a Black Bear Cub (1929) War Whoop and Tomahawk: The Story of Two Buffalo Calves (1929) Wild Life in the Blue Mountain Forest, revised and edited by Raymond Gorges, foreword by Austin Corbin, with illustrations from photographs by the author and Louise Birt Baynes (1931). References Further reading 1868 births 1925 deaths 19th-century American naturalists 20th-century American naturalists American conservationists American nature writers American male non-fiction writers Vivisection activists British people in colonial India British emigrants to the United States
Ernest Harold Baynes
Chemistry
1,637
11,286,568
https://en.wikipedia.org/wiki/World%20Electric%20Vehicle%20Association
The World Electric Vehicle Association (WEVA) is an organization that promotes electric vehicles. Member associations It is composed of: The Electric Drive Transportation Association (EDTA) The Electric Vehicle Association of Asia Pacific (EVAAP) The European Association for Battery, Hybrid and Fuel Cell Electric Vehicles (AVERE) EDTA The Electric Drive Transportation Association (EDTA), established in 1989 and based in Washington, D.C., is the American branch, an industry association focused on promoting electric drive technologies. EDTA's activities encompass supporting the sustainable commercialization of electric drive transportation technologies. It achieves this through various means, including providing comprehensive information and education, facilitating industry networking, engaging in public policy advocacy, and organizing international conferences and exhibitions. EVAAP The Electric Vehicle Association of Asia Pacific (EVAAP) is an international organization that promotes the use of electric vehicles in Asia and the Pacific; it is also the regional representative to the World Electric Vehicle Association (WEVA), organizing the International Electric Vehicle Symposium (EVS) in rotation with AVERE and EDTA. AVERE The European Association for Battery, Hybrid and Fuel Cell Electric Vehicles (AVERE) was founded in 1978 and is based in Brussels. It is a European network of users, NGOs, associations, interest groups, etc. Its main objective is promoting the use of battery, hybrid and fuel cell electric vehicles (individually and in fleets) for priority uses in order to achieve greener mobility for cities and countries. Structure Activities World Electric Vehicle Journal (WEVJ) WEVA publishes the World Electric Vehicle Journal, a peer-reviewed international scientific journal that covers studies related to battery, hybrid and fuel cell electric vehicles. It publishes selected contributions from the EVS Symposia after an additional review process. See also CalCars Electric vehicle Japan Automobile Research Institute (JARI) Repower America References External links WEVA official site International climate change organizations Non-profit organizations based in California Palo Alto, California Plug-in hybrid vehicles Environmental organizations based in California Electric vehicle organizations Organizations established in 1990
World Electric Vehicle Association
Engineering
437
288,291
https://en.wikipedia.org/wiki/Ricci%20flow
In the mathematical fields of differential geometry and geometric analysis, the Ricci flow, sometimes also referred to as Hamilton's Ricci flow, is a certain partial differential equation for a Riemannian metric. It is often said to be analogous to the diffusion of heat and the heat equation, due to formal similarities in the mathematical structure of the equation. However, it is nonlinear and exhibits many phenomena not present in the study of the heat equation. The Ricci flow, so named for the presence of the Ricci tensor in its definition, was introduced by Richard Hamilton, who used it through the 1980s to prove striking new results in Riemannian geometry. Later extensions of Hamilton's methods by various authors resulted in new applications to geometry, including the resolution of the differentiable sphere conjecture by Simon Brendle and Richard Schoen. Following the possibility that the singularities of solutions of the Ricci flow could identify the topological data predicted by William Thurston's geometrization conjecture, Hamilton produced a number of results in the 1990s which were directed towards the conjecture's resolution. In 2002 and 2003, Grigori Perelman presented a number of fundamental new results about the Ricci flow, including a novel variant of some technical aspects of Hamilton's program. Perelman's work is now widely regarded as forming the proof of the Thurston conjecture and of the Poincaré conjecture, which is a special case of the former. The Poincaré conjecture had been a well-known open problem in the field of geometric topology since 1904. These results by Hamilton and Perelman are considered a milestone in the fields of geometry and topology. Mathematical definition On a smooth manifold M, a smooth Riemannian metric g automatically determines the Ricci tensor Ric(g). For each point p of M, by definition g_p is a positive-definite inner product on the tangent space T_pM at p. If given a one-parameter family of Riemannian metrics g(t), one may then consider the derivative ∂g(t)/∂t, which assigns to each particular value of t and p a symmetric bilinear form on T_pM. Since the Ricci tensor of a Riemannian metric also assigns to each p a symmetric bilinear form on T_pM, the following definition is meaningful. Given a smooth manifold M and an open real interval (a, b), a Ricci flow assigns, to each t in the interval, a Riemannian metric g(t) on M such that ∂g(t)/∂t = −2 Ric(g(t)). The Ricci tensor is often thought of as an average value of the sectional curvatures, or as an algebraic trace of the Riemann curvature tensor. However, for the analysis of existence and uniqueness of Ricci flows, it is extremely significant that the Ricci tensor can be defined, in local coordinates, by a formula involving the first and second derivatives of the metric tensor. This makes the Ricci flow into a geometrically-defined partial differential equation. The analysis of the ellipticity of the local coordinate formula provides the foundation for the existence of Ricci flows; see the following section for the corresponding result. Let α be a nonzero number. Given a Ricci flow g(t) on an interval (a, b), consider the reparametrized family G(t) = g(αt), defined on the correspondingly rescaled interval. Then ∂G(t)/∂t = −2α Ric(G(t)). So, with this very trivial change of parameters, the number −2 appearing in the definition of the Ricci flow could be replaced by any other nonzero number. For this reason, the use of −2 can be regarded as an arbitrary convention, albeit one which essentially every paper and exposition on Ricci flow follows. 
The only significant difference is that if −2 were replaced by a positive number, then the existence theorem discussed in the following section would become a theorem which produces a Ricci flow that moves backwards (rather than forwards) in parameter values from initial data. The parameter is usually called , although this is only as part of standard informal terminology in the mathematical field of partial differential equations. It is not physically meaningful terminology. In fact, in the standard quantum field theoretic interpretation of the Ricci flow in terms of the renormalization group, the parameter corresponds to length or energy, rather than time. Normalized Ricci flow Suppose that is a compact smooth manifold, and let be a Ricci flow for in the interval . Define so that each of the Riemannian metrics has volume 1; this is possible since is compact. (More generally, it would be possible if each Riemannian metric had finite volume.) Then define to be the antiderivative of which vanishes at . Since is positive-valued, is a bijection onto its image . Now the Riemannian metrics , defined for parameters , satisfy Here denotes scalar curvature. This is called the normalized Ricci flow equation. Thus, with an explicitly defined change of scale and a reparametrization of the parameter values, a Ricci flow can be converted into a normalized Ricci flow. The converse also holds, by reversing the above calculations. The primary reason for considering the normalized Ricci flow is that it allows a convenient statement of the major convergence theorems for Ricci flow. However, it is not essential to do so, and for virtually all purposes it suffices to consider Ricci flow in its standard form. Moreover, the normalized Ricci flow is not generally meaningful on noncompact manifolds. Existence and uniqueness Let be a smooth closed manifold, and let be any smooth Riemannian metric on . Making use of the Nash–Moser implicit function theorem, showed the following existence theorem: There exists a positive number and a Ricci flow parametrized by such that converges to in the topology as decreases to 0. He showed the following uniqueness theorem: If and are two Ricci flows as in the above existence theorem, then for all The existence theorem provides a one-parameter family of smooth Riemannian metrics. In fact, any such one-parameter family also depends smoothly on the parameter. Precisely, this says that relative to any smooth coordinate chart on , the function is smooth for any . Dennis DeTurck subsequently gave a proof of the above results which uses the Banach implicit function theorem instead. His work is essentially a simpler Riemannian version of Yvonne Choquet-Bruhat's well-known proof and interpretation of well-posedness for the Einstein equations in Lorentzian geometry. As a consequence of Hamilton's existence and uniqueness theorem, when given the data , one may speak unambiguously of the Ricci flow on with initial data , and one may select to take on its maximal possible value, which could be infinite. The principle behind virtually all major applications of Ricci flow, in particular in the proof of the Poincaré conjecture and geometrization conjecture, is that, as approaches this maximal value, the behavior of the metrics can reveal and reflect deep information about . Convergence theorems Complete expositions of the following convergence theorems are given in and . The three-dimensional result is due to . 
Hamilton's proof, inspired by and loosely modeled upon James Eells and Joseph Sampson's epochal 1964 paper on convergence of the harmonic map heat flow, included many novel features, such as an extension of the maximum principle to the setting of symmetric 2-tensors. His paper (together with that of Eells−Sampson) is among the most widely cited in the field of differential geometry. There is an exposition of his result in . In terms of the proof, the two-dimensional case is properly viewed as a collection of three different results, one for each of the cases in which the Euler characteristic of is positive, zero, or negative. As demonstrated by , the negative case is handled by the maximum principle, while the zero case is handled by integral estimates; the positive case is more subtle, and Hamilton dealt with the subcase in which has positive curvature by combining a straightforward adaptation of Peter Li and Shing-Tung Yau's gradient estimate to the Ricci flow together with an innovative "entropy estimate". The full positive case was demonstrated by Bennett , in an extension of Hamilton's techniques. Since any Ricci flow on a two-dimensional manifold is confined to a single conformal class, it can be recast as a partial differential equation for a scalar function on the fixed Riemannian manifold . As such, the Ricci flow in this setting can also be studied by purely analytic methods; correspondingly, there are alternative non-geometric proofs of the two-dimensional convergence theorem. The higher-dimensional case has a longer history. Soon after Hamilton's breakthrough result, Gerhard Huisken extended his methods to higher dimensions, showing that if almost has constant positive curvature (in the sense of smallness of certain components of the Ricci decomposition), then the normalized Ricci flow converges smoothly to constant curvature. found a novel formulation of the maximum principle in terms of trapping by convex sets, which led to a general criterion relating convergence of the Ricci flow of positively curved metrics to the existence of "pinching sets" for a certain multidimensional ordinary differential equation. As a consequence, he was able to settle the case in which is four-dimensional and has positive curvature operator. Twenty years later, Christoph Böhm and Burkhard Wilking found a new algebraic method of constructing "pinching sets", thereby removing the assumption of four-dimensionality from Hamilton's result (). Simon Brendle and Richard Schoen showed that positivity of the isotropic curvature is preserved by the Ricci flow on a closed manifold; by applying Böhm and Wilking's method, they were able to derive a new Ricci flow convergence theorem (). Their convergence theorem included as a special case the resolution of the differentiable sphere theorem, which at the time had been a long-standing conjecture. The convergence theorem given above is due to , which subsumes the earlier higher-dimensional convergence results of Huisken, Hamilton, Böhm & Wilking, and Brendle & Schoen. Corollaries The results in dimensions three and higher show that any smooth closed manifold which admits a metric of the given type must be a space form of positive curvature. Since these space forms are largely understood by work of Élie Cartan and others, one may draw corollaries such as Suppose that is a smooth closed 3-dimensional manifold which admits a smooth Riemannian metric of positive Ricci curvature. If is simply-connected then it must be diffeomorphic to the 3-sphere. 
So if one could show directly that any smooth closed simply-connected 3-dimensional manifold admits a smooth Riemannian metric of positive Ricci curvature, then the Poincaré conjecture would immediately follow. However, as matters are understood at present, this result is only known as a (trivial) corollary of the Poincaré conjecture, rather than vice versa. Possible extensions Given any larger than two, there exist many closed -dimensional smooth manifolds which do not have any smooth Riemannian metrics of constant curvature. So one cannot hope to be able to simply drop the curvature conditions from the above convergence theorems. It could be possible to replace the curvature conditions by some alternatives, but the existence of compact manifolds such as complex projective space, which has a metric of nonnegative curvature operator (the Fubini-Study metric) but no metric of constant curvature, makes it unclear how much these conditions could be pushed. Likewise, the possibility of formulating analogous convergence results for negatively curved Riemannian metrics is complicated by the existence of closed Riemannian manifolds whose curvature is arbitrarily close to constant and yet admit no metrics of constant curvature. Li–Yau inequalities Making use of a technique pioneered by Peter Li and Shing-Tung Yau for parabolic differential equations on Riemannian manifolds, proved the following "Li–Yau inequality". Let be a smooth manifold, and let be a solution of the Ricci flow with such that each is complete with bounded curvature. Furthermore, suppose that each has nonnegative curvature operator. Then, for any curve with , one has showed the following alternative Li–Yau inequality. Let be a smooth closed -manifold, and let be a solution of the Ricci flow. Consider the backwards heat equation for -forms, i.e. ; given and , consider the particular solution which, upon integration, converges weakly to the Dirac delta measure as increases to . Then, for any curve with , one has where . Both of these remarkable inequalities are of profound importance for the proof of the Poincaré conjecture and geometrization conjecture. The terms on the right hand side of Perelman's Li–Yau inequality motivates the definition of his "reduced length" functional, the analysis of which leads to his "noncollapsing theorem". The noncollapsing theorem allows application of Hamilton's compactness theorem (Hamilton 1995) to construct "singularity models", which are Ricci flows on new three-dimensional manifolds. Owing to the Hamilton–Ivey estimate, these new Ricci flows have nonnegative curvature. Hamilton's Li–Yau inequality can then be applied to see that the scalar curvature is, at each point, a nondecreasing (nonnegative) function of time. This is a powerful result that allows many further arguments to go through. In the end, Perelman shows that any of his singularity models is asymptotically like a complete gradient shrinking Ricci soliton, which are completely classified; see the previous section. See for details on Hamilton's Li–Yau inequality; the books and contain expositions of both inequalities above. Examples Constant-curvature and Einstein metrics Let be a Riemannian manifold which is Einstein, meaning that there is a number such that . Then is a Ricci flow with , since then If is closed, then according to Hamilton's uniqueness theorem above, this is the only Ricci flow with initial data . 
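Written out, with g_0 denoting the initial Einstein metric and λ the Einstein constant (a standard computation restating the example above; the notation is ours):

```latex
% Assumption: the initial metric is Einstein, i.e. Ric(g_0) = \lambda g_0 for a constant \lambda.
% The corresponding Ricci flow is the family of rescaled metrics
%   g(t) = (1 - 2\lambda t)\, g_0 ,
% which satisfies the flow equation because the Ricci tensor is unchanged by constant rescaling:
\frac{\partial}{\partial t} g(t) \;=\; -2\lambda\, g_0 \;=\; -2\,\operatorname{Ric}(g_0) \;=\; -2\,\operatorname{Ric}\bigl(g(t)\bigr).
```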
One sees, in particular, that: if λ is positive, then the Ricci flow "contracts" g_0, since the scale factor is less than 1 for positive t; furthermore, one sees that t can only be less than 1/(2λ), in order that g(t) is a Riemannian metric. This is the simplest example of a "finite-time singularity". if λ is zero, which is synonymous with g_0 being Ricci-flat, then g(t) is independent of time, and so the maximal interval of existence is the entire real line. if λ is negative, then the Ricci flow "expands" g_0, since the scale factor is greater than 1 for all positive t; furthermore one sees that t can be taken arbitrarily large. One says that the Ricci flow, for this initial metric, is "immortal". In each case, since the Riemannian metrics assigned to different values of t differ only by a constant scale factor, one can see that the normalized Ricci flow exists for all time and is constant in t; in particular, it converges smoothly (to its constant value) as t → ∞. The Einstein condition has as a special case that of constant curvature; hence the particular examples of the sphere (with its standard metric) and hyperbolic space appear as special cases of the above. Ricci solitons Ricci solitons are Ricci flows that may change their size but not their shape up to diffeomorphisms. Cylinders S^k × R^l (for k ≥ 2) shrink self-similarly under the Ricci flow up to diffeomorphisms. A significant 2-dimensional example is the cigar soliton, which is given by the metric (dx² + dy²)/(e^{4t} + x² + y²) on the Euclidean plane. Although this metric shrinks under the Ricci flow, its geometry remains the same. Such solutions are called steady Ricci solitons. An example of a 3-dimensional steady Ricci soliton is the Bryant soliton, which is rotationally symmetric, has positive curvature, and is obtained by solving a system of ordinary differential equations. A similar construction works in arbitrary dimension. There exist numerous families of Kähler manifolds, invariant under a U(n) action and birational to C^n, which are Ricci solitons. These examples were constructed by Cao and Feldman–Ilmanen–Knopf (Chow & Knopf 2004). A 4-dimensional example exhibiting only torus symmetry was recently discovered by Bamler–Cifarelli–Conlon–Deruelle. A gradient shrinking Ricci soliton consists of a smooth Riemannian manifold (M, g) and f ∈ C∞(M) such that Ric(g) + Hess(f) = λg for some positive constant λ (often normalized to λ = 1/2). One of the major achievements of Perelman was to show that, if M is a closed three-dimensional smooth manifold, then finite-time singularities of the Ricci flow on M are modeled on complete gradient shrinking Ricci solitons (possibly on underlying manifolds distinct from M). In 2008, Huai-Dong Cao, Bing-Long Chen, and Xi-Ping Zhu completed the classification of these solitons, showing: Suppose (M, g, f) is a complete gradient shrinking Ricci soliton with dim(M) = 3. If M is simply-connected then the Riemannian manifold (M, g) is isometric to R³, S³, or S² × R, each with their standard Riemannian metrics. This was originally shown by Perelman with some extra conditional assumptions. Note that if M is not simply-connected, then one may consider the universal cover, and the above theorem applies to the lifted soliton structure there. There is not yet a good understanding of gradient shrinking Ricci solitons in any higher dimensions. Relationship to uniformization and geometrization Hamilton's first work on Ricci flow was published at the same time as William Thurston's geometrization conjecture, which concerns the topological classification of three-dimensional smooth manifolds. 
Hamilton's idea was to define a kind of nonlinear diffusion equation which would tend to smooth out irregularities in the metric. Suitable canonical forms had already been identified by Thurston; the possibilities, called Thurston model geometries, include the three-sphere S3, three-dimensional Euclidean space E3, three-dimensional hyperbolic space H3, which are homogeneous and isotropic, and five slightly more exotic Riemannian manifolds, which are homogeneous but not isotropic. (This list is closely related to, but not identical with, the Bianchi classification of the three-dimensional real Lie algebras into nine classes.) Hamilton succeeded in proving that any smooth closed three-manifold which admits a metric of positive Ricci curvature also admits a unique Thurston geometry, namely a spherical metric, which does indeed act like an attracting fixed point under the Ricci flow, renormalized to preserve volume. (Under the unrenormalized Ricci flow, the manifold collapses to a point in finite time.) However, this doesn't prove the full geometrization conjecture, because of the restrictive assumption on curvature. Indeed, a triumph of nineteenth-century geometry was the proof of the uniformization theorem, the analogous topological classification of smooth two-manifolds, where Hamilton showed that the Ricci flow does indeed evolve a negatively curved two-manifold into a two-dimensional multi-holed torus which is locally isometric to the hyperbolic plane. This topic is closely related to important topics in analysis, number theory, dynamical systems, mathematical physics, and even cosmology. Note that the term "uniformization" suggests a kind of smoothing away of irregularities in the geometry, while the term "geometrization" suggests placing a geometry on a smooth manifold. Geometry is being used here in a precise manner akin to Klein's notion of geometry (see Geometrization conjecture for further details). In particular, the result of geometrization may be a geometry that is not isotropic. In most cases including the cases of constant curvature, the geometry is unique. An important theme in this area is the interplay between real and complex formulations. In particular, many discussions of uniformization speak of complex curves rather than real two-manifolds. Singularities Hamilton showed that a compact Riemannian manifold always admits a short-time Ricci flow solution. Later Shi generalized the short-time existence result to complete manifolds of bounded curvature. In general, however, due to the highly non-linear nature of the Ricci flow equation, singularities form in finite time. These singularities are curvature singularities, which means that as one approaches the singular time the norm of the curvature tensor blows up to infinity in the region of the singularity. A fundamental problem in Ricci flow is to understand all the possible geometries of singularities. When successful, this can lead to insights into the topology of manifolds. For instance, analyzing the geometry of singular regions that may develop in 3d Ricci flow, is the crucial ingredient in Perelman's proof of the Poincare and Geometrization Conjectures. Blow-up limits of singularities To study the formation of singularities it is useful, as in the study of other non-linear differential equations, to consider blow-ups limits. Intuitively speaking, one zooms into the singular region of the Ricci flow by rescaling time and space. 
Under certain assumptions, the zoomed in flow tends to a limiting Ricci flow , called a singularity model. Singularity models are ancient Ricci flows, i.e. they can be extended infinitely into the past. Understanding the possible singularity models in Ricci flow is an active research endeavor. Below, we sketch the blow-up procedure in more detail: Let be a Ricci flow that develops a singularity as . Let be a sequence of points in spacetime such that as . Then one considers the parabolically rescaled metrics Due to the symmetry of the Ricci flow equation under parabolic dilations, the metrics are also solutions to the Ricci flow equation. In the case that i.e. up to time the maximum of the curvature is attained at , then the pointed sequence of Ricci flows subsequentially converges smoothly to a limiting ancient Ricci flow . Note that in general is not diffeomorphic to . Type I and Type II singularities Hamilton distinguishes between Type I and Type II singularities in Ricci flow. In particular, one says a Ricci flow , encountering a singularity a time is of Type I if . Otherwise the singularity is of Type II. It is known that the blow-up limits of Type I singularities are gradient shrinking Ricci solitons. In the Type II case it is an open question whether the singularity model must be a steady Ricci soliton—so far all known examples are. Singularities in 3d Ricci flow In 3d the possible blow-up limits of Ricci flow singularities are well-understood. From the work of Hamilton, Perelman and Brendle, blowing up at points of maximum curvature leads to one of the following three singularity models: The shrinking round spherical space form The shrinking round cylinder The Bryant soliton The first two singularity models arise from Type I singularities, whereas the last one arises from a Type II singularity. Singularities in 4d Ricci flow In four dimensions very little is known about the possible singularities, other than that the possibilities are far more numerous than in three dimensions. To date the following singularity models are known The 4d Bryant soliton Compact Einstein manifold of positive scalar curvature Compact gradient Kahler–Ricci shrinking soliton The FIK shrinker (discovered by M. Feldman, T. Ilmanen, D. Knopf) The BCCD shrinker (discovered by Richard Bamler, Charles Cifarelli, Ronan Conlon, and Alix Deruelle) Note that the first three examples are generalizations of 3d singularity models. The FIK shrinker models the collapse of an embedded sphere with self-intersection number −1. Relation to diffusion To see why the evolution equation defining the Ricci flow is indeed a kind of nonlinear diffusion equation, we can consider the special case of (real) two-manifolds in more detail. Any metric tensor on a two-manifold can be written with respect to an exponential isothermal coordinate chart in the form (These coordinates provide an example of a conformal coordinate chart, because angles, but not distances, are correctly represented.) The easiest way to compute the Ricci tensor and Laplace-Beltrami operator for our Riemannian two-manifold is to use the differential forms method of Élie Cartan. Take the coframe field so that metric tensor becomes Next, given an arbitrary smooth function , compute the exterior derivative Take the Hodge dual Take another exterior derivative (where we used the anti-commutative property of the exterior product). 
That is, Taking another Hodge dual gives which gives the desired expression for the Laplace/Beltrami operator To compute the curvature tensor, we take the exterior derivative of the covector fields making up our coframe: From these expressions, we can read off the only independent spin connection one-form where we have taken advantage of the anti-symmetric property of the connection (). Take another exterior derivative This gives the curvature two-form from which we can read off the only linearly independent component of the Riemann tensor using Namely from which the only nonzero components of the Ricci tensor are From this, we find components with respect to the coordinate cobasis, namely But the metric tensor is also diagonal, with and after some elementary manipulation, we obtain an elegant expression for the Ricci flow: This is manifestly analogous to the best known of all diffusion equations, the heat equation where now is the usual Laplacian on the Euclidean plane. The reader may object that the heat equation is of course a linear partial differential equation—where is the promised nonlinearity in the p.d.e. defining the Ricci flow? The answer is that nonlinearity enters because the Laplace-Beltrami operator depends upon the same function p which we used to define the metric. But notice that the flat Euclidean plane is given by taking . So if is small in magnitude, we can consider it to define small deviations from the geometry of a flat plane, and if we retain only first order terms in computing the exponential, the Ricci flow on our two-dimensional almost flat Riemannian manifold becomes the usual two dimensional heat equation. This computation suggests that, just as (according to the heat equation) an irregular temperature distribution in a hot plate tends to become more homogeneous over time, so too (according to the Ricci flow) an almost flat Riemannian manifold will tend to flatten out the same way that heat can be carried off "to infinity" in an infinite flat plate. But if our hot plate is finite in size, and has no boundary where heat can be carried off, we can expect to homogenize the temperature, but clearly we cannot expect to reduce it to zero. In the same way, we expect that the Ricci flow, applied to a distorted round sphere, will tend to round out the geometry over time, but not to turn it into a flat Euclidean geometry. Recent developments The Ricci flow has been intensively studied since 1981. Some recent work has focused on the question of precisely how higher-dimensional Riemannian manifolds evolve under the Ricci flow, and in particular, what types of parametric singularities may form. For instance, a certain class of solutions to the Ricci flow demonstrates that neckpinch singularities will form on an evolving -dimensional metric Riemannian manifold having a certain topological property (positive Euler characteristic), as the flow approaches some characteristic time . In certain cases, such neckpinches will produce manifolds called Ricci solitons. For a 3-dimensional manifold, Perelman showed how to continue past the singularities using surgery on the manifold. Kähler metrics remain Kähler under Ricci flow, and so Ricci flow has also been studied in this setting, where it is called Kähler–Ricci flow. Notes References Articles for a popular mathematical audience. Research articles. Erratum. Revised version: Textbooks External links 1981 introductions 3-manifolds Geometric flow Partial differential equations Riemannian geometry Riemannian manifolds
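For reference, the two-dimensional conformal computation sketched in the "Relation to diffusion" section above can be summarized as follows (standard formulas; Δ_0 denotes the flat Laplacian and Δ_g the Laplace–Beltrami operator of g):

```latex
% Conformal (isothermal) form of a surface metric: g = e^{2p(x,y,t)}\,(dx^2 + dy^2).
% Its Gauss curvature is K = -e^{-2p}\,\Delta_0 p, and in two dimensions Ric(g) = K g,
% so the Ricci flow \partial_t g = -2\,\operatorname{Ric}(g) reduces to the scalar equation
\frac{\partial p}{\partial t} \;=\; e^{-2p}\,\Delta_0 p \;=\; \Delta_g\, p ,
% a heat equation in which the Laplace--Beltrami operator itself depends on p; this is the
% nonlinearity discussed above. For small p it is approximately the ordinary heat equation
% \partial_t p = \Delta_0 p.
```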
Ricci flow
Mathematics
5,807
15,355,278
https://en.wikipedia.org/wiki/ZNF41
Zinc finger protein 41 is a protein that in humans is encoded by the ZNF41 gene. This gene product is a likely zinc finger family transcription factor. It contains KRAB-A and KRAB-B domains, which act as transcriptional repressors in related proteins, and multiple zinc finger DNA-binding motifs and finger-linking regions characteristic of the Krüppel family. This gene is part of a gene cluster on chromosome Xp11.23. Several alternatively spliced transcript variants have been described; however, the full-length nature of only some of them is known. References Further reading External links Transcription factors
ZNF41
Chemistry,Biology
127
16,549,386
https://en.wikipedia.org/wiki/Trust%20anchor
In cryptographic systems with hierarchical structure, a trust anchor is an authoritative entity for which trust is assumed and not derived. In the X.509 architecture, a root certificate would be the trust anchor from which the whole chain of trust is derived. The trust anchor must be in the possession of the trusting party beforehand to make any further certificate path validation possible. Most operating systems provide a built-in list of self-signed root certificates to act as trust anchors for applications. The Firefox web browser also provides its own list of trust anchors. The end-user of an operating system or web browser is implicitly trusting in the correct operation of that software, and the software manufacturer in turn is delegating trust for certain cryptographic operations to the certificate authorities responsible for the root certificates. See also Web of trust References Key management
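As a concrete illustration of the last point, the snippet below is a minimal sketch using Python's standard ssl module: it lists the root certificates that the operating system exposes to applications as trust anchors. The exact set returned depends on the platform's certificate store.

```python
import ssl

# create_default_context() loads the platform's default CA bundle, i.e. the
# self-signed root certificates the OS vendor ships as trust anchors.
ctx = ssl.create_default_context()

for ca in ctx.get_ca_certs():               # one dict per loaded root certificate
    subject = dict(x[0] for x in ca["subject"])
    print(subject.get("organizationName"), "/", subject.get("commonName"))
```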
Trust anchor
Technology
169
2,584,965
https://en.wikipedia.org/wiki/Japanese%20theorem%20for%20cyclic%20polygons
In geometry, the Japanese theorem states that no matter how one triangulates a cyclic polygon, the sum of inradii of triangles is constant. Conversely, if the sum of inradii is independent of the triangulation, then the polygon is cyclic. The Japanese theorem follows from Carnot's theorem; it is a Sangaku problem. Proof This theorem can be proven by first proving a special case: no matter how one triangulates a cyclic quadrilateral, the sum of inradii of triangles is constant. After proving the quadrilateral case, the general case of the cyclic polygon theorem is an immediate corollary. The quadrilateral rule can be applied to quadrilateral components of a general partition of a cyclic polygon, and repeated application of the rule, which "flips" one diagonal, will generate all the possible partitions from any given partition, with each "flip" preserving the sum of the inradii. The quadrilateral case follows from a simple extension of the Japanese theorem for cyclic quadrilaterals, which shows that a rectangle is formed by the two pairs of incenters corresponding to the two possible triangulations of the quadrilateral. The steps of this theorem require nothing beyond basic constructive Euclidean geometry. With the additional construction of a parallelogram having sides parallel to the diagonals, and tangent to the corners of the rectangle of incenters, the quadrilateral case of the cyclic polygon theorem can be proved in a few steps. The equality of the sums of the radii of the two pairs is equivalent to the condition that the constructed parallelogram be a rhombus, and this is easily shown in the construction. Another proof of the quadrilateral case is available due to Wilfred Reyes (2002). In the proof, both the Japanese theorem for cyclic quadrilaterals and the quadrilateral case of the cyclic polygon theorem are proven as a consequence of Thébault's problem III. See also Carnot's theorem, which is used in a proof of the theorem above Equal incircles theorem Tangent lines to circles Notes References Claudi Alsina, Roger B. Nelsen: Icons of Mathematics: An Exploration of Twenty Key Images. MAA, 2011, , pp. 121-125 Wilfred Reyes: An Application of Thebault’s Theorem . Forum Geometricorum, Volume 2, 2002, pp. 183–185 External links Mangho Ahuja, Wataru Uegaki, Kayo Matsushita: In Search of the Japanese Theorem Japanese theorem at Mathworld Japanese Theorem interactive demonstration at the C.a.R. website Wataru Uegaki: "Japanese Theoremの起源と歴史" (On the Origin and History of the Japanese Theorem) http://hdl.handle.net/10076/4917 Euclidean plane geometry Japanese mathematics Theorems about triangles and circles
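A quick numerical check of the quadrilateral case is easy to script. The sketch below (illustrative only; the four vertex angles are arbitrary) places a cyclic quadrilateral on the unit circle and compares the inradius sums for its two triangulations:

```python
import math

def inradius(p, q, r):
    # inradius = area / s, where s is the semi-perimeter (Heron's formula for the area)
    a, b, c = math.dist(q, r), math.dist(p, r), math.dist(p, q)
    s = (a + b + c) / 2
    return math.sqrt(s * (s - a) * (s - b) * (s - c)) / s

# four points on the unit circle, in order: a cyclic quadrilateral ABCD
A, B, C, D = [(math.cos(t), math.sin(t)) for t in (0.3, 1.4, 2.9, 5.0)]

# the two triangulations differ by "flipping" the diagonal
sum_ac = inradius(A, B, C) + inradius(A, C, D)   # diagonal AC
sum_bd = inradius(A, B, D) + inradius(B, C, D)   # diagonal BD
print(sum_ac, sum_bd)   # the two sums agree up to floating-point error
```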
Japanese theorem for cyclic polygons
Mathematics
614
46,408,285
https://en.wikipedia.org/wiki/Abell%202067
Abell 2067 is a galaxy cluster in the constellation of Corona Borealis. On a larger scale, Abell 2067, along with Abell 2061, Abell 2065, Abell 2079, Abell 2089, and Abell 2092, make up the Corona Borealis Supercluster. Abell 2061 lies 1.8 megaparsecs south of it and the two are likely interacting. References Corona Borealis 2067 Galaxy clusters
Abell 2067
Astronomy
97
143,532
https://en.wikipedia.org/wiki/Pardaxin
Pardaxin is a peptide produced by the Red Sea sole (P4, P5) and the Pacific Peacock sole (P1, P2, P3) that is used as a shark repellent. It causes lysis of mammalian and bacterial cells, similar to melittin. Synthesis In the lab, pardaxin is synthesized using an automated peptide synthesizer. Alternatively, the secretions of the Red Sea sole can be collected and purified. Functions Antibacterial peptide Pardaxin has a helix-hinge-helix structure. This structure is common in peptides that act selectively on bacterial membranes and cytotoxic peptides that lyse mammalian and bacterial cells. Pardaxin shows a significantly lower hemolytic activity towards human red blood cells compared to melittin. The C-terminal tail of pardaxin is responsible for this non-selective activity against the erythrocytes and bacteria. The amphiphilic C-terminal helix is the ion-channel lining segment of the peptide. The N-terminal α-helix is important for the insertion of the peptide to the lipid bilayer of the cell. The mechanism of pardaxin is dependent on the membrane composition. Pardaxin significantly disrupts lipid bilayers composed of zwitterionic lipids, especially those composed of 1-palmitoyl-2-oleoyl-phosphatidylcholine (POPC). This suggests a carpet mechanism for cell lysis. The carpet mechanism is when a high density of peptides accumulates on the target membrane surface. The phospholipid displacement changes in fluidity, and the cellular contents leak out. The presence of anionic lipids or cholesterol was found to reduce the peptide's ability to disrupt bilayers. Shark repellent P. marmoratas and P. pavoninus release pardaxin when threatened by sharks. Pardaxin targets the gills and pharyngeal cavity of the sharks. It results in severe struggling, mouth paralysis, and temporary increase of urea leakage in the gills. This distress is caused by the attack of the cellular membrane of the gills, which causes a large influx of salt ions. Research into creating a commercial shark repellent using pardaxin was discontinued because it dilutes in the water too quickly. It is only effective if sprayed almost directly into a shark's mouth. Cancer treatment Pardaxin inhibits proliferation and induces apoptosis of human cancer cell lines. Its 33-amino acid structure contains many cationic and amphipathic amino acids. This makes it easier for it to interact with anionic membranes, such as those in tumor cells, which are inherently more acidic because of the acidic environment created by more glycolysis. Pardaxin initiates caspase-dependent and caspase-independent apoptosis in human cervical carcinoma cells. Pardaxin triggers reactive oxygen species (ROS). ROS production disrupts protein folding and induces the unfolded protein response (UPR). This causes stress on the endoplasmic reticulum, which releases calcium. This leads to an increase in mitochondrial calcium, dropping its membrane potential. The pore permeability changes, and Cytochrome c (Cyt c) is released. Cyt c activates the caspase chain that leads to apoptosis. ROS also activates the JNK pathway. JNK is phosphorylated, which leads to the phosphorylation of AP-1 (transcription factor consisting of cFOS and Cjun). This results in the activation of caspases as well. ROS also causes a caspase independent pathway that results in apoptosis. When the mitochondrial membrane potential changes, apoptosis-inducing factors (AIFs) are also released. These trigger apoptosis when they enter the nucleus, not needing to involve caspases. 
References Protein families Antimicrobial peptides
Pardaxin
Biology
827
175,075
https://en.wikipedia.org/wiki/Yuga
A yuga, in Hinduism, is generally used to indicate an age of time. In the Rigveda, a yuga refers to generations, a period of time (whether long or short), or a yoke (joining of two things). In the Mahabharata, the words yuga and kalpa (a day of Brahma) are used interchangeably to describe the cycle of creation and destruction. In post-Vedic texts, the words "yuga" and "age" commonly denote a chatur-yuga, a cycle of four world ages, for example in the Surya Siddhanta and Bhagavad Gita (part of the Mahabharata), unless expressly limited by the name of one of its minor ages: Krita (Satya) Yuga, Treta Yuga, Dvapara Yuga, or Kali Yuga. Etymology Yuga means "a yoke" (joining of two things), "generations", or "a period of time" such as an age; its archaic spelling is yug, with other forms such as yugam and yuge. It derives from yuj ("to join or unite"), believed to come from a Proto-Indo-European root of the same meaning. Meanings The term "yuga" has multiple meanings, including representing the number 4 and various periods of time. In early Indian astronomy, it referred to a five-year cycle starting with the conjunction of the sun and moon in the autumnal equinox. More commonly, "yuga" is used in the context of kalpas, composed of four yugas. According to the Manusmriti, a kalpa starts with a Satya Yuga (4,000 years), followed by a Treta Yuga (3,000 years), a Dvapara Yuga (2,000 years), and ends with a Kali Yuga (1,000 years). According to the Vishnu Purana, each Mahayuga comprises a Satya Yuga (1,728,000 human years), a Treta Yuga (1,296,000 years), a Dvapara Yuga (864,000 years), and a Kali Yuga (432,000 years). Virtues According to the Manusmriti, the virtue (dharma) of human beings varies across the four yugas (ages). The text states: In the Krita Yuga, the virtue is austerity (tapas); in the Treta Yuga, it is knowledge (jnana); in the Dvapara Yuga, it is sacrifice (yajna); and in the Kali Yuga, it is charity (dāna). See also Hindu units of time Kalpa (day of Brahma) Manvantara (age of Manu) Pralaya (period of dissolution) Yuga Cycle (four yuga ages): Satya (Krita), Treta, Dvapara, and Kali List of numbers in Hindu scriptures Explanatory notes References External links Vedic Time System: Yuga Four Yugas Hindu astronomy Hindu philosophical concepts Time in Hinduism Units of time
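The Vishnu Purana figures quoted above follow a simple 4:3:2:1 pattern, which the short sketch below makes explicit (the 432,000-year Kali Yuga is the base unit; summing the four ages gives the 4,320,000-year length of one full cycle):

```python
# Yuga lengths in human years, as given in the Vishnu Purana figures quoted above
yugas = {"Satya": 1_728_000, "Treta": 1_296_000, "Dvapara": 864_000, "Kali": 432_000}

kali = yugas["Kali"]
print({name: years // kali for name, years in yugas.items()})  # {'Satya': 4, 'Treta': 3, 'Dvapara': 2, 'Kali': 1}
print(sum(yugas.values()))                                     # 4320000 years in one four-yuga cycle
```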
Yuga
Physics,Mathematics
666
36,122,626
https://en.wikipedia.org/wiki/Juchart
A Juchart (also Jucharte or Juchard, in French Pose, in Italian Pertica) was a unit of area measurement used in rural Switzerland until the early 20th century. In other German speaking regions it was known as a Joch, Jochart, Jauchart, Jauch, Juck or Juckert. The Juchart was a measurement of the amount of farm land that a man could plow in one day. It is similar to the northern German traditional measurement of a Morgen, which was approximately the amount of land tillable by one man behind an ox in the morning hours of a day. In the French speaking Canton of Vaud a related unit of acreage, the Pose was used. Size As with most units of this type, the size of a Juchart varied widely. It depended on the productivity and shape of the land. Notes References External links History of Switzerland Obsolete units of measurement Units of area
Juchart
Mathematics
196
3,480,841
https://en.wikipedia.org/wiki/Polarization-division%20multiple%20access
Polarization-division multiple access (PDMA) is a channel access method used in some cellular networks and broadcast satellite services. Separate antennas are used in this type, each with different polarization and followed by separate receivers, allowing simultaneous regional access of satellites. Each corresponding ground station antenna needs to be polarized in the same way as its counterpart in the satellite. This is generally accomplished by providing each participating ground station with an antenna that has dual polarization. The frequency band allocated to each antenna beam can be identical because the uplink signals are orthogonal in polarization. This technique allows frequency reuse. See also Frequency-division multiple access Code-division multiple access Time-division multiple access Channel access methods Polarization (waves)
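The frequency-reuse argument can be seen in a toy model: if the two beams use perfectly orthogonal polarizations, a receive antenna matched to one polarization recovers its own signal with no contribution from the other, even though both occupy the same band. Below is a minimal numpy sketch of this idealized case; real systems achieve finite cross-polarization isolation rather than exactly zero leakage:

```python
import numpy as np

# Jones vectors for two orthogonally polarized carriers (idealized, no depolarization)
h = np.array([1.0, 0.0])   # horizontal polarization, carries symbol a
v = np.array([0.0, 1.0])   # vertical polarization, carries symbol b

a, b = 0.7, -1.3           # example baseband symbols on the two channels
received = a * h + b * v   # both signals share the same frequency band

# a receive antenna matched to one polarization projects out only "its" signal
print(np.dot(received, h))  # recovers a; the orthogonal channel contributes nothing
print(np.dot(received, v))  # recovers b
```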
Polarization-division multiple access
Physics
147
298,804
https://en.wikipedia.org/wiki/Stored-value%20card
A stored-value card (SVC) or cash card is a payment card with a monetary value stored on the card itself, not in an external account maintained by a financial institution. This means no network access is required by the payment collection terminals as funds can be withdrawn and deposited straight from the card. Like cash, payment cards can be used anonymously as the person holding the card can use the funds. They are an electronic development of token coins and are typically used in low-value payment systems or where network access is difficult or expensive to implement, such as parking machines, public transport systems, and closed payment systems in locations such as ships. Stored-value cards differ from debit cards, where money is on deposit with the issuer, and credit cards which are subject to credit limits set by the issuer and are connected to accounts at financial institutions. Another difference between stored-value cards and debit and credit cards is that debit and credit cards are usually issued in the name of individual account holders, while stored-value cards may be anonymous, as in the case of gift cards. Stored-value cards are prepaid money cards and may be disposed when the value is used, or the card value may be topped up, as in the case of telephone calling cards or when used as a fare card. The term closed-loop means the funds and/or data are physically stored on the token or card in the form of binary-coded data. This is unlike payment cards where data is maintained on the card issuer's computers. Like payment cards, value can be accessed using a magnetic stripe, chip or radio-frequency identification (RFID) embedded in the card; or by entering a code number, printed on the card, into a telephone or other numeric keypad. Names There is no common name for stored-value cards, which are country or company specific. Names for stored-value cards include APPH in US, Mondex in Canada, Chipknip in the Netherlands, Geldkarte in Germany, Quick in Austria, Moneo in France, Proton in Belgium, Carta prepagata ("Prepaid card") in Italy, FeliCa-cards such as Suica in Japan, China T-Union in mainland China, EZ-Link and NETS (CashCard and FlashPay) in Singapore, Papara Card in Turkey, Octopus card in Hong Kong, SUBE card in Argentina, T-Cash in the Philippines and Touch 'n Go and MyRapid Card in Malaysia. The U.S. Department of the Treasury manages three stored-value card programs: EZpay, EagleCash, and Navy Cash. Non-government stored-value cards include Aramark GuestExpress, Compass Zipthru, and Freedompay FreetoGo. Uses Stored-value cards are most commonly used for low-value transactions, such as transit system farecards, telephone prepaid calling cards, cafeterias, or for micropayments in shops or vending machines. They also have an advantage over most other payment cards in that when making, say, a purchase, telecommunication facilities are not needed, which may be important in situations where the availability or reliability of these facilities are uncertain or costly, especially for low-value transactions. A benefit to the merchant is that bank transaction fees are not incurred as the transaction is processed offline and there need not be a reference to the bank for processing. A limitation is that these cards cannot be used for online, telephone, mail order and other "card not present transactions". The German Geldkarte and the Austrian Quick card can also be used to validate a customer's age at cigarette vending machines. 
Typical applications of organization specific or industry specific prepaid card include payroll cards, rebate cards, gift cards, cafeteria cards and travel cards and U.S. based health schemes such as HSA cards. The EZpay, EagleCash, and Navy Cash cards are used by the U.S. military as electronic alternatives to cash in areas characterized by difficult access and limited banking or telecommunications infrastructure. Stored-value cards can save organizations a considerable amount of money if customers add a large sum of funds at one time to the card and then pay a lower transaction fee for each use of the card on smaller purchases. Prepaid cards Closed system prepaid cards Closed system prepaid cards are cards issued by a merchant and may only be redeemed for purchases from the merchant. They are typically of fixed amounts and are commonly known as merchant gift cards or store cards. These cards are typically purchased to be used as gifts, and are increasingly replacing the traditional paper gift certificate. Generally, few if any laws govern these types of cards. Card issuers or sellers are not required to obtain a license. Closed system prepaid cards are not subject to the USA PATRIOT Act, as they generally cannot identify a customer. As debts owed to consumers who purchased the card, these purchases remain on the books of a merchant as a liability rather than an asset. Consequently, gift certificates and merchant gift cards have fallen under state escheat or abandoned property laws (APL). However, the emergence of closed system prepaid cards has blurred the applicability of APL. North Carolina and Illinois have excluded these types of cards from APL provided the card has no expiration date or a service fee. Maine and Virginia require the issuer to pay the state when the cards are abandoned. In Connecticut an issuer is required to identify the residence of the gift card owner. Since most merchant gift cards are anonymous, the residence of the card's owner is deemed to be the state's treasurer's office. Presently, no law requires a merchant to provide refunds for lost or stolen cards. Whether a refund is possible is specified in an issuer's cardholder agreement. In addition, most closed system cards cannot be redeemed for cash. When a cardholder redeems all but an insignificant portion of the card on merchandise, that amount is generally lost and is a windfall gain for the issuing merchant. The merchant also obtains a windfall gain if a card has an expiry date and the cardholder fails to use the full value by that date. Furthermore, the merchant has an interest-free use of the value until it is redeemed. Semi-closed system prepaid cards Semi-closed system prepaid cards are similar to closed system prepaid cards. However, cardholders are permitted to redeem the cards at multiple merchants within a geographic area. These types of cards are issued by a third party, rather than the retailer who accepts the card. Examples include university cards and mall gift cards. The laws governing these types of cards are unsettled. Depending on the state, the issuer may or may not be required to have a money transmitter license or other similar license. In addition to the District of Columbia, the states in the US that require a license include Connecticut, Florida, Illinois, Iowa, Louisiana, Maryland, Minnesota, Mississippi, North Carolina, Oregon, Texas, Vermont, Virginia, West Virginia, Washington, and Wyoming. Note, these states explicitly require licensing for card issuers. 
Other states may have more subtle licensing laws. Under 18 USC section 1960, it is a crime for an issuer to conduct a money transmitting business without a license. Cardholders generally suffer from the same problems that closed system card holders suffer. It is unclear whether or not Chapters 7 and 11 of the Bankruptcy code are applicable to these types of cards. Money laundering It is common for countries to place limits on how much currency may be taken out of or brought into a country. However, these limits generally do not apply to money leaving a country in non-cash forms such as on stored-value cards. There is concern that stored-value cards can be used for money laundering, that is, moving offshore funds derived from criminal activities such as drug trafficking. There are reports of these cards being used by Mexican drug cartels to transfer money across borders. For example, in the United States, it is legal for anyone to enter or leave the country with money that is stored on cards, and (unlike cash in high amounts) does not have to be reported to customs or any other authority. Some members of the U.S. Congress are considering creating laws that would require travelers crossing, entering, or leaving the country to report these cards. The Financial Crimes Enforcement Network of the U.S. Department of the Treasury has published a notice of proposed rulemaking on stored-value cards in the June 28, 2010 edition of the Federal Register. The proposed rules would require sellers of prepaid cards to register with the government and keep records on transactions and customers. See also Card (disambiguation) Prepaid credit card Scrip Gift card Telephone card Electronic money Decoupled debit card References Sources Payment cards Radio-frequency identification sv:Kontantkort
Stored-value card
Engineering
1,821
8,245,866
https://en.wikipedia.org/wiki/Alpha-particle%20spectroscopy
Alpha spectrometry (also known as alpha(-particle) spectroscopy) is the quantitative study of the energy of alpha particles emitted by a radioactive nuclide that is an alpha emitter. As emitted alpha particles are mono-energetic (i.e. not emitted with a spectrum of energies, as in beta decay) and have energies that are often characteristic of a particular decay, they can be used to identify the radionuclide from which they originated. Experimental methods Counting with a source deposited onto a metal disk It is common to place a drop of the test solution on a metal disk which is then dried out to give a uniform coating on the disk. This is then used as the test sample. If the layer formed on the disk is too thick, the lines of the spectrum are broadened to lower energies. This is because some of the energy of the alpha particles is lost during their movement through the layer of active material. Liquid scintillation An alternative method is to use liquid scintillation counting (LSC), where the sample is directly mixed with a scintillation cocktail. When the individual light emission events are counted, the LSC instrument records the amount of light energy per radioactive decay event. The alpha spectra obtained by liquid scintillation counting are broadened because of two main intrinsic limitations of the LSC method: (1) random quenching reduces the number of photons emitted per radioactive decay, and (2) the emitted photons can be absorbed by cloudy or coloured samples (Lambert-Beer law). The liquid scintillation spectra are subject to Gaussian broadening, rather than to the distortion caused by the absorption of alpha particles by the sample when the layer of active material deposited onto a disk is too thick. Alpha spectra In a typical spectrum of a mixed alpha source, from left to right the peaks are due to 209Po, 239Pu, 210Po and 241Am. The fact that isotopes such as 239Pu and 241Am have more than one alpha line indicates that the (daughter) nucleus can be in different discrete energy levels. Calibration: The multichannel analyser (MCA) does not record energy directly, it records voltage. To relate energy to voltage, the detection system must be calibrated; different alpha-emitting sources of known energy are placed under the detector and the full-energy peak of each is recorded. Measurement of thickness of thin foils: The energies of alpha particles from radioactive sources are measured before and after passing through the thin films. By measuring the difference and using SRIM, the thickness of the foil can be determined. Kinematics of alpha decay The decay energy, Q (also called the Q-value of the reaction), corresponds to a disappearance of mass. For the alpha decay nuclear reaction: ^{A}_{Z}P -> ^{(A-4)}_{(Z-2)}D + \alpha, (where P is the parent nuclide and D the daughter), Q = (mP − mD − mα)c2, or, to put it in the more commonly used units: Q (MeV) = -931.5 ΔM (Da), (where ΔM = ΣMproducts - ΣMreactants). When the daughter nuclide and alpha particle formed are in their ground states (common for alpha decay), the total decay energy is divided between the two as kinetic energy (T): Q = Tα + TD. The share each receives depends on the ratio of the masses of the products and, due to the conservation of momentum (the parent's momentum = 0 at the moment of decay), can be calculated as Tα = (mD / mP) Qα and TD = (mα / mP) Qα. The alpha particle, or 4He nucleus, is an especially strongly bound particle.
This combined with the fact that the binding energy per nucleon has a maximum value near A=56 and systematically decreases for heavier nuclei, creates the situation that nuclei with A>150 have positive Qα-values for the emission of alpha particles. For example, one of the heaviest naturally occurring isotopes, ^238U -> ^234Th + ^4He (ignoring charges): Qα = -931.5 (234.043 601 + 4.002 603 254 13 - 238.050 788 2) = 4.2699 MeV Note that the decay energy will be divided between the alpha-particle and the heavy recoiling daughter so that the kinetic energy of the alpha particle (Tα) will be slightly less: Tα = (234.043 601 / 238.050 788 2) 4.2699 = 4.198 MeV, (note this is for the 238gU to 234gTh reaction, which in this case has the branching ratio of 79%). The kinetic energy of the recoiling 234Th daughter nucleus is TD = (mα / mP) Qα = (4.002 603 254 13 / 238.050 788 2) 4.2699 = 0.0718 MeV or 71.8 keV, which whilst much smaller is still substantially bigger than that of chemical bonds (<10 eV) meaning the daughter nuclide will break away from whatever chemical environment the parent had been in. The recoil energy is also the reason that alpha spectrometers, whilst run under reduced pressure, are not operated at too low a pressure so that the air helps stop the recoiling daughter from moving completely out of the original alpha-source and cause serious contamination problems if the daughters are themselves radioactive. The Qα-values generally increase with increasing atomic number but the variation in the mass surface due to shell effects can overwhelm the systematic increase. The sharp peaks near A = 214 are due to the effects of the N = 126 shell. References Spectroscopy
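The following short Python sketch is not part of the article; it simply reproduces the 238U worked example above, using the atomic masses quoted in the text and the kinematic relations Tα = (mD/mP)Qα and TD = (mα/mP)Qα, as a check on the arithmetic.

```python
# Minimal sketch (not from the article): reproduces the 238U -> 234Th + alpha
# worked example using the atomic masses quoted in the text (in Da).
m_parent = 238.050_788_2      # 238U
m_daughter = 234.043_601      # 234Th
m_alpha = 4.002_603_254_13    # 4He

# Q-value in MeV: Q = -931.5 * (sum of product masses - parent mass)
Q = -931.5 * (m_daughter + m_alpha - m_parent)

# Conservation of momentum splits Q between the two fragments.
T_alpha = (m_daughter / m_parent) * Q   # kinetic energy of the alpha particle
T_daughter = (m_alpha / m_parent) * Q   # kinetic energy of the recoiling daughter

print(f"Q        = {Q:.4f} MeV")                 # ~4.27 MeV
print(f"T_alpha  = {T_alpha:.3f} MeV")           # ~4.20 MeV
print(f"T_recoil = {T_daughter * 1000:.1f} keV") # ~72 keV
```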
Alpha-particle spectroscopy
Physics,Chemistry
1,155
40,366,664
https://en.wikipedia.org/wiki/Quarter%208-cubic%20honeycomb
In eight-dimensional Euclidean geometry, the quarter 8-cubic honeycomb is a uniform space-filling tessellation (or honeycomb). It has half the vertices of the 8-demicubic honeycomb, and a quarter of the vertices of an 8-cube honeycomb. Its facets are 8-demicubes h{4,3^6}, pentic 8-cubes h6{4,3^6}, and {3,3}×{3^{2,1,1}} and {3^{1,1,1}}×{3^{1,1,1}} duoprisms. See also Regular and uniform honeycombs in 8-space: 8-cube honeycomb 8-demicube honeycomb 8-simplex honeycomb Truncated 8-simplex honeycomb Omnitruncated 8-simplex honeycomb Notes References Kaleidoscopes: Selected Writings of H. S. M. Coxeter, edited by F. Arthur Sherk, Peter McMullen, Anthony C. Thompson, Asia Ivic Weiss, Wiley-Interscience Publication, 1995, (Paper 24) H.S.M. Coxeter, Regular and Semi-Regular Polytopes III, [Math. Zeit. 200 (1988) 3-45] See p318 Honeycombs (geometry) 9-polytopes
Quarter 8-cubic honeycomb
Physics,Chemistry,Materials_science
276
2,903,479
https://en.wikipedia.org/wiki/Omicron%20Bo%C3%B6tis
Omicron Boötis (ο Boötis) is a yellow-hued star in the northern constellation of Boötes. With an apparent visual magnitude of +4.60, it is a fifth magnitude star that is visible to the naked eye. Based upon an annual parallax shift of 13.42 mas as seen from the Earth, it is located about 243 light years from the Sun. The star is moving closer to the Sun with a radial velocity of −9 km/s. At the age of 2.72 billion years, this is an evolved G-type giant star with a stellar classification of G8.5 III. It belongs to the so-called "red clump", which indicates it is generating energy through helium fusion at its core. Although it displays a higher abundance of barium than is normal for a star of its type, Williams (1975) considers its status as a barium star to be "very doubtful". The star has double the mass of the Sun and has expanded to 11 times the Sun's radius. It is radiating 85 times the Sun's luminosity from its enlarged photosphere at an effective temperature of 4,864 K. References External links G-type giants Horizontal-branch stars Bootis, Omicron Boötes BD+17 2780 Bootis, 35 129972 072125 5502
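A brief Python sketch, not part of the article, showing how the quoted distance follows from the quoted parallax via the standard relation d (parsecs) = 1 / p (arcseconds); the conversion factor of about 3.26 light years per parsec is a standard constant, not a figure from the article.

```python
# Minimal sketch (not from the article): parallax-to-distance conversion
# applied to the annual parallax quoted above.
parallax_mas = 13.42                   # parallax in milliarcseconds
parallax_arcsec = parallax_mas / 1000.0

distance_pc = 1.0 / parallax_arcsec    # ~74.5 parsecs
distance_ly = distance_pc * 3.26156    # 1 parsec is about 3.26 light years

print(f"{distance_pc:.1f} pc  ~  {distance_ly:.0f} light years")  # ~243 ly
```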
Omicron Boötis
Astronomy
283
864,168
https://en.wikipedia.org/wiki/Mains%20electricity%20by%20country
Mains electricity by country includes a list of countries and territories, with the plugs, voltages and frequencies they commonly use for providing electrical power to low voltage appliances, equipment, and lighting typically found in homes and offices. (For industrial machinery, see industrial and multiphase power plugs and sockets.) Some countries have more than one voltage available. For example, in North America, a unique split-phase system is used to supply to most premises that works by center tapping a 240 volt transformer. This system is able to concurrently provide 240 volts and 120 volts. Consequently, this allows homeowners to wire up both 240 V and 120 V circuits as they wish (as regulated by local building codes). Most sockets are connected to 120 V for the use of small appliances and electronic devices, while larger appliances such as dryers, electric ovens, ranges and EV chargers use dedicated 240 V sockets. Different sockets are mandated for different voltage or maximum current levels. Voltage, frequency, and plug type vary, but large regions may use common standards. Physical compatibility of receptacles may not ensure compatibility of voltage, frequency, or connection to earth (ground), including plugs and cords. In some areas, older standards may still exist. Foreign enclaves, extraterritorial government installations, or buildings frequented by tourists may support plugs not otherwise used in a country, for the convenience of travellers. Main reference sourceIEC World Plugs The International Electrotechnical Commission (IEC) publishes a web microsite World Plugs which provides the main source for this page, except where other sources are indicated. World Plugs includes some history, a description of plug types, and a list of countries giving the type(s) used and the mains voltage and frequency. Although useful for quick reference, especially for travellers, IEC World Plugs may not be regarded as totally accurate, as illustrated by the examples in the plugs section below, and errors may exist. Voltages Voltages in this article are the nominal single-phase supply voltages, or split-phase supply voltages. Three-phase and industrial loads may have other voltages. All voltages are root mean square (RMS) voltage; the peak AC voltage is greater by a factor of , and the peak-to-peak voltage greater by a factor of Plugs The system of plug types using a single letter (from A to N) used here is from World Plugs, which defines the plug type letters in terms of a general description, without making reference to specific standards. Where a plug does not have a specific letter code assigned to it, then it may be defined by the style sheet number listed in IEC TR 60083. Not all plugs are included in the letter system; for example, there is no designation for the plugs defined by the Thai National Standard TIS 116-2549, though some web sites refer to the three-pin plug described in that standard as "Type O". Identification guide Table of mains voltages, frequencies, and plugs Notes See also Delta-wye transformer Electrical wiring Electric power transmission Electrification Electrical grid List of railway electrification systems Mains electricity References External links Electricity Electric power Electrical-engineering-related lists Electrical standards Electrical wiring Energy-related lists by country Mains power connectors fi:Verkkovirta ur:مینز برق بلحاظ ملک
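A minimal Python sketch, not part of the article, illustrating the RMS-to-peak relation stated in the Voltages section; the example nominal voltages (120 V, 230 V, 240 V) are chosen only for illustration.

```python
import math

# Minimal sketch (not from the article): relation between the nominal RMS
# mains voltage and the peak / peak-to-peak values of the sine wave.
def peak_voltages(v_rms):
    v_peak = v_rms * math.sqrt(2)    # peak = RMS * sqrt(2)
    v_peak_to_peak = 2 * v_peak      # peak-to-peak = RMS * 2*sqrt(2)
    return v_peak, v_peak_to_peak

for v_rms in (120, 230, 240):        # illustrative nominal voltages
    vp, vpp = peak_voltages(v_rms)
    print(f"{v_rms:3d} V RMS -> peak {vp:6.1f} V, peak-to-peak {vpp:6.1f} V")
```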
Mains electricity by country
Physics,Engineering
699
56,717,276
https://en.wikipedia.org/wiki/Sara%20Negri
Sara Negri (born January 21, 1967) is a mathematical logician who studies proof theory. She is Italian, worked in Finland for several years, where she was a professor of theoretical philosophy in the University of Helsinki, and currently holds a position as professor of mathematical logic at the University of Genoa. Education and career Negri was born in Padua, and studied at the University of Padua. She earned a master's degree there in 1991 and a Ph.D. in 1996, both in mathematics. Her dissertation, Dalla Topologia Formale all'Analisi, was supervised by Giovanni Sambin. She went to Helsinki as a docent in 1998, and became a full professor there in 2015. She has also taken several visiting positions, including a Humboldt Fellowship in 2004–2005 at the Ludwig Maximilian University of Munich. She became full professor of mathematical logic at the University of Genoa, in Italy, in 2019. Recognition Negri was elected to the Academia Europaea in 2018. Books Negri is the co-author, with Jan von Plato, of two books: Structural Proof Theory (Cambridge University Press, 2001) Proof Analysis: A Contribution to Hilbert's Last Problem (Cambridge University Press, 2011) References External links 1967 births Living people Italian mathematicians 20th-century Italian philosophers 21st-century Italian philosophers Finnish mathematicians Finnish philosophers Women mathematicians Italian women philosophers Mathematical logicians Women logicians University of Padua alumni Academic staff of the University of Helsinki Italian expatriates in Finland Members of Academia Europaea
Sara Negri
Mathematics
309
4,379,834
https://en.wikipedia.org/wiki/British%20Rail%20flying%20saucer
The British Rail flying saucer, officially known simply as space vehicle, was a proposed interplanetary spacecraft designed by Charles Osmond Frederick. Although the proposed craft required controlled thermonuclear fusion and other futuristic technologies, a patent application was filed on behalf of British Rail in December 1970 and granted on 21 March 1973. Purpose The flying saucer originally started as a proposal for a lifting platform. However, the project was revised and edited, and by the time the patent was filed had become a large passenger craft for interplanetary travel. Design The craft was to be powered by nuclear fusion, using laser beams to produce pulses of nuclear energy in a generator in the centre of the craft, at a rate of over 1000 Hz to prevent resonance, which could damage the vehicle. The pulses of energy would then have been transferred out of a nozzle into a series of radial electrodes running along the underside of the craft, which would have converted the energy into electricity that would then pass into a ring of powerful electromagnets (the patent describes using superconductors if possible). These magnets would accelerate subatomic particles emitted by the fusion reaction, providing lift and thrust. This general design was used in several fusion rocket studies. A layer of thick metal running above the fusion reactor would have acted as a shield to protect the passengers above from the radiation emitted from the core of the reactor. The entire vehicle would be piloted in such a way that the acceleration and deceleration of the craft would have simulated gravity in zero gravity conditions. A patent application was filed by Jensen and Son on behalf of British Rail on 11 December 1970 and granted on 21 March 1973. The patent lapsed in 1976 due to non-payment of renewal fees. Media attention The patent first came to the attention of the media in an article in The Guardian on 31 May 1978 by Adrian Hope of the New Scientist magazine. There was a further mention in The Daily Telegraph on 11 July 1982, during the silly season. The Railway Magazine mentioned it in its May 1996 issue, saying that the passengers would have been "fried" anyway. When the patent was rediscovered in 2006, it gained widespread publicity in the British press. A group of nuclear scientists examined the designs and declared them to be unworkable, expensive and very inefficient. Michel van Baal of the European Space Agency claimed "I have had a look at the plans, and they don't look very serious to me at all", adding that many of the technologies used in the craft, such as nuclear fusion and high temperature superconductors, had not yet been discovered, while Colin Pillinger, the scientist in charge of the Beagle 2 probe, was quoted as saying "If I hadn't seen the documents I wouldn't have believed it". References British Rail research and development Flying saucers Hypothetical spacecraft Nuclear spacecraft propulsion
British Rail flying saucer
Astronomy,Technology
585
32,694,134
https://en.wikipedia.org/wiki/Ya%20cai
Ya cai () is a pickled vegetable originating from the Sichuan province, China. It is made from the upper stems of a variety of mustard green. Ya Cai is more pungent than the similar zha cai. See also Tianjin preserved vegetable Zha cai Suan cai Pao cai Meigan cai References External links fuchsiadunlop.com Sohbet Zha Cai Preserved Mustard Tuber: Ya Cai's Sichuan Sister Chinese pickles Sichuan cuisine Fermented foods
Ya cai
Biology
100
22,342,102
https://en.wikipedia.org/wiki/NGC%207049
NGC 7049 is a lenticular galaxy that spans about 150,000 light-years and lies about 100 million light-years away from Earth in the inconspicuous southern constellation of Indus. NGC 7049's unusual appearance is largely due to a prominent rope-like dust ring which stands out against the starlight behind it. These dust lanes are usually seen in young galaxies with active star-forming regions. NGC 7049 shows the features of both an elliptical galaxy and a spiral galaxy, and has relatively few globular clusters, indicative of its status as a lenticular type. NGC 7049 is the brightest member (the brightest cluster galaxy, BCG) of the Indus triplet of galaxies (NGC 7029, NGC 7041, NGC 7049), and its structure might have arisen from several recent galaxy collisions. Typical BCGs are some of the oldest and most massive galaxies. References External links Simbad database 7049 Indus (constellation) Lenticular galaxies
NGC 7049
Astronomy
192
2,422,023
https://en.wikipedia.org/wiki/Reflection%20group
In group theory and geometry, a reflection group is a discrete group which is generated by a set of reflections of a finite-dimensional Euclidean space. The symmetry group of a regular polytope or of a tiling of the Euclidean space by congruent copies of a regular polytope is necessarily a reflection group. Reflection groups also include Weyl groups and crystallographic Coxeter groups. While the orthogonal group is generated by reflections (by the Cartan–Dieudonné theorem), it is a continuous group (indeed, Lie group), not a discrete group, and is generally considered separately. Definition Let E be a finite-dimensional Euclidean space. A finite reflection group is a subgroup of the general linear group of E which is generated by a set of orthogonal reflections across hyperplanes passing through the origin. An affine reflection group is a discrete subgroup of the affine group of E that is generated by a set of affine reflections of E (without the requirement that the reflection hyperplanes pass through the origin). The corresponding notions can be defined over other fields, leading to complex reflection groups and analogues of reflection groups over a finite field. Examples Two dimensions In two dimensions, the finite reflection groups are the dihedral groups, which are generated by reflections in two lines that form an angle of π/n and correspond to the Coxeter diagram I2(n). Conversely, the cyclic point groups in two dimensions are not generated by reflections, nor contain any – they are subgroups of index 2 of a dihedral group. Infinite reflection groups include the frieze groups ∗∞∞ and ∗22∞, and the wallpaper groups ∗2222, ∗333, ∗442 and ∗632. If the angle between two lines is an irrational multiple of pi, the group generated by reflections in these lines is infinite and non-discrete, hence, it is not a reflection group. Three dimensions Finite reflection groups are the point groups Cnv, Dnh, and the symmetry groups of the five Platonic solids. Dual regular polyhedra (cube and octahedron, as well as dodecahedron and icosahedron) give rise to isomorphic symmetry groups. The classification of finite reflection groups of R3 is an instance of the ADE classification. Relation with Coxeter groups A reflection group W admits a presentation of a special kind discovered and studied by H. S. M. Coxeter. The reflections in the faces of a fixed fundamental "chamber" are generators ri of W of order 2. All relations between them formally follow from the relations expressing the fact that the product of the reflections ri and rj in two hyperplanes Hi and Hj meeting at an angle π/mij is a rotation by the angle 2π/mij fixing the subspace Hi ∩ Hj of codimension 2. Thus, viewed as an abstract group, every reflection group is a Coxeter group. Finite fields When working over finite fields, one defines a "reflection" as a map that fixes a hyperplane. Geometrically, this amounts to including shears in a hyperplane. Reflection groups over finite fields of characteristic not 2 have also been classified. Generalizations Discrete isometry groups of more general Riemannian manifolds generated by reflections have also been considered. The most important class arises from Riemannian symmetric spaces of rank 1: the n-sphere Sn, corresponding to finite reflection groups, the Euclidean space Rn, corresponding to affine reflection groups, and the hyperbolic space Hn, where the corresponding groups are called hyperbolic reflection groups. In two dimensions, triangle groups include reflection groups of all three kinds.
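As an illustration of the two-dimensional case described above, the following Python sketch (not part of the article, and with all function names chosen only for this example) builds the reflections in two lines through the origin meeting at an angle π/n and closes them under multiplication, recovering the dihedral group of order 2n.

```python
import math
import itertools

def reflection(theta):
    """2x2 matrix of the reflection across the line at angle theta to the x-axis."""
    c, s = math.cos(2 * theta), math.sin(2 * theta)
    return ((c, s), (s, -c))

def mat_mul(a, b):
    return tuple(tuple(sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

def key(m, digits=9):
    # Round entries so that numerically equal matrices compare equal.
    return tuple(round(x, digits) for row in m for x in row)

def generated_group(gens):
    """Close a set of matrices under multiplication (fine for small finite groups)."""
    elems = {key(g): g for g in gens}
    frontier = list(gens)
    while frontier:
        new = []
        for a, b in itertools.product(frontier, list(elems.values())):
            for m in (mat_mul(a, b), mat_mul(b, a)):
                if key(m) not in elems:
                    elems[key(m)] = m
                    new.append(m)
        frontier = new
    return list(elems.values())

n = 5
r1 = reflection(0.0)
r2 = reflection(math.pi / n)
print(len(generated_group([r1, r2])))  # 2n = 10 elements: the dihedral group D5
```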
See also Hyperplane arrangement Chevalley–Shephard–Todd theorem Reflection groups are related to kaleidoscopes. Parabolic subgroup of a reflection group References Notes Bibliography Textbooks External links
Reflection group
Physics
744
29,015,125
https://en.wikipedia.org/wiki/Ammonium%20sulfite
Ammonium sulfite is the ammonium salt of sulfurous acid with the chemical formula (NH4)2SO3. Preparation Ammonium sulfite can be prepared by the reaction of ammonia with sulfur dioxide in aqueous solution: 2 NH3 + SO2 + H2O → (NH4)2SO3. Ammonium sulfite is produced in gas scrubbers, now obsolete, that use ammonium hydroxide to remove sulfur dioxide from emissions from power plants. The conversion is the basis of the Walther Process. The resulting ammonium sulfite can be air oxidized to give ammonium sulfate. Uses Ammonium sulfite is the precursor to ammonium thiosulfate, by reaction with elemental sulfur. Niche For cosmetics, ammonium sulfite is used as a hair straightening agent and a hair waving agent. Ammonium-based hair products have been made to replace sodium hydroxide-based products due to the destructive nature of sodium hydroxide on hair. The most common food product with ammonium sulfite is caramel coloring E150d. According to the FDA, caramel coloring contains ammonium, potassium, or sodium sulfite. Ammonium sulfite is used as a preservative for fixers in photography. When film photographs are being developed, ammonium sulfite can be one of the reducing agents used to preserve the hypo (sodium thiosulfate or ammonium thiosulfate). Ammonium sulfite can also be used in the making of bricks. The bricks made using ammonium sulfite are mainly used for blast furnace linings. Ammonium sulfite can be included in lubricants for cold metal working. The lubricants are intended to reduce friction to keep heat production down and keep impurities out of the metals. Chemical properties Ammonium sulfite is a reducing agent. It emits sulfur dioxide and oxides of nitrogen upon heating to decomposition. The specific gravity of ammonium sulfite is 1.41. The refractive index of ammonium sulfite is 1.515. References Ammonium compounds Sulfites
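A small Python sketch, not part of the article, checking the mass balance of the preparation reaction written above; the atomic weights are standard values and the per-gram figures are illustrative only.

```python
# Minimal sketch (not from the article): mass balance for
# 2 NH3 + SO2 + H2O -> (NH4)2SO3, using standard atomic weights (g/mol).
H, N, O, S = 1.008, 14.007, 15.999, 32.06

M_NH3  = N + 3 * H                     # ~17.03
M_SO2  = S + 2 * O                     # ~64.06
M_H2O  = 2 * H + O                     # ~18.02
M_salt = 2 * (N + 4 * H) + S + 3 * O   # (NH4)2SO3, ~116.14

reactants = 2 * M_NH3 + M_SO2 + M_H2O
print(f"reactants {reactants:.2f} g/mol -> product {M_salt:.2f} g/mol")  # equal if balanced
print(f"NH3 consumed per gram of SO2 scrubbed: {2 * M_NH3 / M_SO2:.2f} g")  # ~0.53 g
```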
Ammonium sulfite
Chemistry
423
71,809,134
https://en.wikipedia.org/wiki/Veerse%20Gatdam
The Veerse Gatdam is a man-made barrier across the former Eastern Scheldt estuary branch known as the Veerse Gat, between the islands of Walcheren and Noord-Beveland in Zeeland, Netherlands. The barrier was completed on 27 April 1961. The completion of this barrier, together with the completion of the Zandkreekdam a year earlier at the eastern end of the waterway, created the Veerse Meer. The Veerse Gatdam is the third structure constructed as part of the Delta Works water management system. The N57 motorway runs along the top of the barrier. Overview The barrier is long and connects the island of Walcheren with Noord-Beveland. The barrier was partly built as an asphalt-coated dike on the Plaat van Onrust, a former sandbar. For the remaining parts, sinkable sluice caissons were used. For the construction of these caissons, a construction dock was built between Veere and Vrouwenpolder. The caissons contain openings, so that the tide could continue to flow while the barrier was being built. This prevented the current from growing ever stronger as construction progressed. Only at the end of construction were the gates lowered simultaneously, closing the barrier in a single operation. With the building of the Veerse Gatdam, the town of Veere was no longer connected to the open sea. Veere's fishing fleet had to be relocated to the nearby village of Colijnsplaat before the barrier was completed. The Veerse Meer is currently a popular water-sports location, particularly for windsurfing. On the north side of the dam, a large recreational beach has been created. The closing of the Veerse Gat was adapted into a film, Deltafase 1, by Bert Haanstra. References External links Veerse Dam, Encyclopedie van Zeeland (in Dutch) Delta Works Dams completed in 1961 Dams in Zeeland Noord-Beveland Walcheren Veere
Veerse Gatdam
Physics
434
59,110,697
https://en.wikipedia.org/wiki/Marco%20Fraaije
Marco Wilhelmus Fraaije (born 7 December 1968) is a Dutch scientist whose research concerns enzymology of redox enzymes, enzyme discovery & engineering and biocatalysis at the Groningen Biomolecular Sciences and Biotechnology Institute (GBB) at the University of Groningen. Education Marco Fraaije graduated in 1993 with a Master of Science degree in Molecular Sciences at Wageningen University. Subsequently, he became a doctoral student at Wageningen University under supervision of Willem van Berkel focusing on the mechanism and structure of flavoenzymes and was awarded his PhD in biochemistry in 1998. Following his PhD, he worked as a postdoctoral researcher as EMBO fellow in the protein crystallography research group at the University of Pavia. In 1999, he was made assistant professor at GBB at the University of Groningen, and in 2007 he was appointed as associate professor. In 2012, he was made full professor in molecular enzymology. Research Fraaije is active in the fields of enzyme engineering and biocatalysis. His research mainly deals with discovery, engineering and exploration of novel oxidative enzymes, with special emphasis on flavin-containing enzymes. Besides exploring the biocatalytic potential of these biocatalysts, he also aims at elucidating the molecular functioning of oxidative flavoenzymes. He also has interest in evolutionary aspects of enzymology and in line with this he is board member of the geological museum Oertijdmuseum in Boxtel. Marco Fraaije has a significant number of publications and four patents. He has coordinated EU-funded projects including OXYGREEN (2008-2013), ROBOX (2015-2019), and OXYTRAIN (2017-2020). Awards In 2018, Fraaije received the BIOCAT science award from the Biocat Society at the International Congress on Biocatalysis for his scientific achievement in the field of biocatalysis. Other research prizes include the Unilever research prize, 1993; EMBO long-term fellowship, 1998; and the VICI-NWO research grant, 2016. In 2005, he became a member of the Biomolecular Chemistry division of the Netherlands Organization for Scientific Research and currently chairs the Applied Biocatalysis division of the Dutch Biotechnology Society. Selected publications Aalbers FS, Fraaije MW (2017) Coupled reactions by coupled enzymes: alcohol to lactone cascade with alcohol dehydrogenase-cyclohexanone monooxygenase fusions. Appl Microbiol Biotechnol. 101, 7557-7565. Beyer N, Kulig JK, Bartsch A, Hayes MA, Janssen DB, Fraaije MW (2017) P450BM3 fused to phosphite dehydrogenase allows phosphite-driven selective oxidations. Appl Microbiol Biotechnol. 101, 2319-2331. Romero E, Castellanos JR, Mattevi A, Fraaije MW (2016) Characterization and crystal structure of a robust cyclohexanone monooxygenase. Angew Chem Int Ed Engl. 55, 15852-15855. Selected as Biocatalysis Hot Paper. Brondani PB, Dudek HM, Martinoli C, Mattevi A, Fraaije MW (2014) Finding the switch: turning a Baeyer-Villiger monooxygenase into a NADPH oxidase. J. Am. Chem. Soc. 136, 16966-16969 References 21st-century Dutch scientists Dutch biologists Biotechnologists Wageningen University and Research alumni Academic staff of the University of Groningen Living people 1968 births
Marco Fraaije
Biology
772
231,102
https://en.wikipedia.org/wiki/Air-ground%20radiotelephone%20service
Air-ground radiotelephone service is a system that allows voice calls and other communication services to be made from an aircraft to either a satellite or land-based network. The service operates via a transceiver mounted in the aircraft on designated frequencies. In the US these frequencies have been allocated by the Federal Communications Commission. The system is used in both commercial and general aviation services. Licensees may offer a wide range of telecommunications services to passengers and others on aircraft. Design A U.S. air-ground radiotelephone transmits a radio signal in the 849 to 851 megahertz range; this signal is sent to either a receiving ground station or a communications satellite depending on the design of the particular system. "Commercial aviation air-ground radiotelephone service licensees operate in the 800 MHz band and can provide communication services to all aviation markets, including commercial, governmental, and private aircraft." If it is a call from a commercial airline passenger radiotelephone, the call is then forwarded to a verification center to process credit card or calling card information. The verification center will then route the call to the public switched telephone network, which completes the call. For the return signal, ground stations and satellites use a radio signal in the 894 to 896 megahertz range. Frequencies Two separate frequency bands have been allocated by the FCC for air-ground telephone service. One at 454/459 MHz, was originally reserved for "general" aviation use (non-airliners) and the 800 MHz range, primarily used for airliner telephone service, which has shown limited acceptance by passengers. AT&T Corporation abandoned its 800 MHz air-ground offering in 2005, and Verizon AIRFONE (formerly GTE Airfone) is scheduled for decommissioning in late 2008, although the FCC has re-auctioned Verizon's spectrum (see below). Skytel, (now defunct) which had the third nationwide 800 MHz license, elected not to build it, but continued to operate in the 450 MHz AGRAS system. Its AGRAS license and operating network was sold to Bell Industries in April, 2007. The 450 MHz General Aviation network is administered by Mid-America Computer Corporation in Blair, Nebraska, which has called the service AGRAS, and requires the use of instruments manufactured by Terra and Chelton Aviation/Wulfsberg Electronics, and marketed as the Flitephone VI Series. "General aviation air-ground radiotelephone service licensees operate in the 450 MHz band and can provide a variety of telecommunications services to private aircraft such as small single engine planes and corporate jets." In the 800 MHz band, the FCC defined 10 blocks of paired uplink/downlink narrowband ranges (6 kHz) and six control ranges (3.2 kHz). Six carriers were licensed to offer in-flight telephony, each being granted non-exclusive use of the 10 blocks and exclusive use of a control block. Of the six, only three commenced operations, and only one persisted into the 1990s, now known as Verizon Airfone. History An air-to-ground radiotelephone technology demonstration occurred during 1923 Toulouse Air Show at Francazal and Montaudran airports, in France. The first recorded air-to-ground radiotelephone service on a scheduled flight was in 1937 on the Chicago-Seattle route by Northwest Airlines. AirFone commenced its service in the early 1980s starting with first-class under experimental licenses; the FCC's formal allocation was in 1990. 
AirFone handsets were gradually extended to include one unit in each row of seats in economy. The service was always priced extremely high--$3.99 per call and $4.99 per minute in 2006—and has seen less and less use as the ready availability of cellular telephones has increased. In an FCC filing in 2005, the agency noted that 4,500 aircraft have AirFone service, and quoted Verizon AirFone's president stating in an article in The New York Times that only two to three people per flight make a call. Verizon added stock tickers and limited information services, but those had little use. In 2003, Verizon partnered with Tenzing Communications to offer very low-speed email using an on-board proxy server and limited live instant messaging at rates of 64 to 128 kbit/s on United Airlines and two other carriers. This service lasted about a year. (Tenzing was merged into a new entity called OnAir along with investment from Airbus and SITA, an airline-owned systems integrator. OnAir will launch satellite-based broadband service in 2006.) On May 10, 2006, the FCC began Auction 65, which sold off the 4 MHz of spectrum over which radiotelephone calls were made, and required AirFone to revise its equipment within two years of the auction's conclusion on June 2, 2006. Instead of the narrowband approach, with dedicated uplink and downlink for each call, Verizon is required to move its operations to a 1 MHz slice which is expected to provide substantially higher call volume and quality. AirFone received a non-renewable license to share that 1 MHz until 2010 using vertical polarization with the winner of License D in Auction 65, LiveTV, a division of the airline JetBlue, which had not announced its plans at the end of the auction. A more broadband-oriented 3 MHz license (License C) was won by AC BidCo, LLC, a sister company of Aircell. Aircell will deploy in-flight broadband using this license. (License C includes 849.0-850.5 MHz and 894.0-895.5 MHz; License D includes 850.5-851.0 MHz and 895.5-896.0 MHz.) An interim approach by Aircell was to utilize the existing ground-based cellular network, with highly directional antennas beamed upward. Although initially successful, the widespread conversion to GSM and spread-spectrum by carriers (not all carriers participated) made obsolete the early generation Aircell instruments. Some units were exchanged for satellite-based Iridium equipment, but Aircell's recent acquisition of 3 MHz of the 800 MHz spectrum at auction at the FCC, will undoubtedly lead to a new generation of products. Only the 450 MHz AGRAS network continues to operate in its original configuration. See also Aircraft emergency frequency Radiotelephone References External links FCC Order for terms of changing 800 MHz Air-Ground Radiotelephone Service (2005) FCC Auction 65 details for 800 MHz Air-Ground Radiotelephone Service in May 2006 Aircell home page Verizon AirFone's president's estimate of calls per flight FLITEFONE-VI AIR-GROUND TELEPHONE NETWORK Verizon Notice re: 800 MHz Magnastar service temporary extension (2007) Avionics Mobile radio telephone systems
Air-ground radiotelephone service
Technology
1,424
681,481
https://en.wikipedia.org/wiki/Immirzi%20parameter
The Immirzi parameter (also known as the Barbero–Immirzi parameter) is a numerical coefficient appearing in loop quantum gravity (LQG), a nonperturbative theory of quantum gravity. The Immirzi parameter measures the size of the quantum of area in Planck units. As a result, its value is currently fixed by matching the semiclassical black hole entropy, as calculated by Stephen Hawking, and the counting of microstates in loop quantum gravity. The reality conditions The Immirzi parameter arises in the process of expressing a Lorentz connection with noncompact group SO(3,1) in terms of a complex connection with values in a compact group of rotations, either SO(3) or its double cover SU(2). Although named after Giorgio Immirzi, the possibility of including this parameter was first pointed out by Fernando Barbero. The significance of this parameter remained obscure until the spectrum of the area operator in LQG was calculated. It turns out that the area spectrum is proportional to the Immirzi parameter. Black hole thermodynamics In the 1970s Stephen Hawking, motivated by the analogy between the law of increasing area of black hole event horizons and the second law of thermodynamics, performed a semiclassical calculation showing that black holes are in equilibrium with thermal radiation outside them, and that black hole entropy (that is, the entropy of the black hole itself, not the entropy of the radiation in equilibrium with the black hole, which is infinite) equals (in Planck units) S = A/4, where A is the area of the event horizon. In 1997, Ashtekar, Baez, Corichi and Krasnov quantized the classical phase space of the exterior of a black hole in vacuum General Relativity. They showed that the geometry of spacetime outside a black hole is described by spin networks, some of whose edges puncture the event horizon, contributing area to it, and that the quantum geometry of the horizon can be described by a U(1) Chern–Simons theory. The appearance of the group U(1) is explained by the fact that two-dimensional geometry is described in terms of the rotation group SO(2), which is isomorphic to U(1). The relationship between area and rotations is explained by Girard's theorem relating the area of a spherical triangle to its angular excess. By counting the number of spin-network states corresponding to an event horizon of area A, the entropy of black holes is seen to be S = (γ0/γ) A/4. Here γ is the Immirzi parameter and γ0 is either ln 2/(π√3) or ln 3/(2π√2), depending on the gauge group used in loop quantum gravity. So, by choosing the Immirzi parameter to be equal to γ0, one recovers the Bekenstein–Hawking formula. This computation appears independent of the kind of black hole, since the given Immirzi parameter is always the same. However, Krzysztof Meissner and Marcin Domagala with Jerzy Lewandowski have corrected the assumption that only the minimal values of the spin contribute. Their result involves the logarithm of a transcendental number instead of the logarithms of integers mentioned above. The Immirzi parameter appears in the denominator because the entropy counts the number of edges puncturing the event horizon and the Immirzi parameter is proportional to the area contributed by each puncture. Immirzi parameter in spin foam theory In late 2006, independently of the definition of isolated horizon theory, Ansari reported that in loop quantum gravity the eigenvalues of the area operator are symmetric under the ladder symmetry. Corresponding to each eigenvalue there are a finite number of degenerate states.
One application could be if the classical null character of a horizon is disregarded in the quantum sector, in the lack of energy condition and presence of gravitational propagation the Immirzi parameter tunes to: by the use of Olaf Dreyer's conjecture for identifying the evaporation of minimal area cell with the corresponding area of the highly damping quanta. This proposes a kinematical picture for defining a quantum horizon via spin foam models, however the dynamics of such a model has not yet been studied. Scale-invariant theory For scale-invariant dilatonic theories of gravity with standard model-type matter couplings, Charles Wang and co-workers show that their loop quantization lead to a conformal class of Ashtekar–Barbero connection variables using the Immirzi parameter as a conformal gauge parameter without a preferred value. Accordingly, a different choice of the value for the Immirzi parameter for such a theory merely singles out a conformal frame without changing the physical descriptions. Interpretation The parameter may be viewed as a renormalization of Newton's constant. Various speculative proposals to explain this parameter have been suggested: for example, an argument due to Olaf Dreyer based on quasinormal modes. Another more recent interpretation is that it is the measure of the value of parity violation in quantum gravity, analogous to the theta parameter of QCD, and its positive real value is necessary for the Kodama state of loop quantum gravity. As of today (2004), no alternative calculation of this constant exists. If a second match with experiment or theory (for example, the value of Newton's force at long distance) were found requiring a different value of the Immirzi parameter, it would constitute evidence that loop quantum gravity cannot reproduce the physics of general relativity at long distances. On the other hand, the Immirzi parameter seems to be the only free parameter of vacuum LQG, and once it is fixed by matching one calculation to an "experimental" result, it could in principle be used to predict other experimental results. Unfortunately, no such alternative calculations have been made so far. References External links "Quantum Geometry of Isolated Horizons and Black Hole Entropy", a calculation incorporating matter and the theory of isolated horizons from General Relativity. "Area, Ladder Symmetry, and Degeneracy in Loop Quantum Gravity", a brief review on the quantum of area ladder symmetry and area degeneracy in loop quantum gravity and the application of these two in the calculation incorporating the modifications of black hole radiation. Black holes Loop quantum gravity
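A small numerical sketch in Python, not part of the article: it evaluates the two γ0 expressions quoted in the black hole thermodynamics section above (these are the standard state-counting values and are an assumption to the extent the article's own figures differ) and checks that setting γ = γ0 in S = (γ0/γ) A/4 reproduces the Bekenstein–Hawking value A/4. The horizon area used is an arbitrary example.

```python
import math

# Minimal sketch (not from the article): recovering S = A/4 when gamma = gamma0.
gamma0_su2 = math.log(2) / (math.pi * math.sqrt(3))      # ~0.1274
gamma0_so3 = math.log(3) / (2 * math.pi * math.sqrt(2))  # ~0.1236

def entropy(area_planck_units, gamma, gamma0):
    # S = (gamma0 / gamma) * A / 4, in Planck units
    return (gamma0 / gamma) * area_planck_units / 4.0

A = 100.0  # horizon area in Planck units (arbitrary example value)
for g0 in (gamma0_su2, gamma0_so3):
    print(f"gamma0 = {g0:.4f} -> S(gamma=gamma0) = {entropy(A, g0, g0):.1f}, A/4 = {A/4:.1f}")
```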
Immirzi parameter
Physics,Astronomy
1,258
46,693,835
https://en.wikipedia.org/wiki/Blastocladiaceae
The Blastocladiaceae are a family of fungi in the division Blastocladiomycota. It contains the following genera: Allomyces Blastocladiella Blastocladia Blastocladiopsis Microallomyces The family was circumscribed by Henning Eiler Petersen in 1909. References External links Blastocladiomycota Aquatic fungi Fungus families
Blastocladiaceae
Biology
83
37,488,173
https://en.wikipedia.org/wiki/International%20Journal%20of%20Nanomedicine
The International Journal of Nanomedicine is a peer-reviewed medical journal covering research on the application of nanotechnology in diagnostics, therapeutics, and drug delivery systems throughout the biomedical field. The journal was established in 2006 and is published by Dove Medical Press. External links Dove Medical Press academic journals English-language journals Nanomedicine journals Open access journals Academic journals established in 2006
International Journal of Nanomedicine
Materials_science
80
28,798,783
https://en.wikipedia.org/wiki/OpenSonATA
Open SonATA stands for Open SETI on the Allen Telescope Array and is the open-source version of the software used for signal detection by the SETI Institute on the Allen Telescope Array (ATA). The software currently runs on Linux and macOS operating systems and is intended to be ported to multiple platforms. The Allen Telescope Array uses the OpenSUSE operating system on the SonATA computers. Before releasing the code to the public, setiQuest had to find every instance that conflicted with the GPL license under which it intended to release the software. With the release of Open SonATA 2.1, setiQuest released the source code to the public under the GPL License. setiQuest has included "ways to help" in their documentation of the software. The source code can be found in setiQuest's GitHub repository. Open SonATA is closely related to the setiQuest project. References Search for extraterrestrial intelligence
OpenSonATA
Astronomy
190
21,723,146
https://en.wikipedia.org/wiki/Monoi%20oil
Monoï oil is an infused perfume-oil made from soaking the petals of Tahitian gardenias (best known as Tiaré flowers) in coconut oil. Monoï (pronounced mah-noy) is a Tahitian word meaning "scented oil" in the Tahitian. Monoï is widely used among French Polynesians as a skin and hair softener. Authentic Tahitian monoi oil follows a strict manufacturing code that oversees the entire process from handpicking the tiare flowers to storage and shipping of the final product. This process has been validated and protected by an Appellation of Origin which was awarded to Monoi de Tahiti on 1 April 1992. History The date when monoi was first created is unknown; however, its origins can be traced back 2000 years to the Maohi people, the indigenous Polynesians. Early European explorers who travelled to the Polynesian islands, including James Cook, documented the natives’ use of monoi for medicinal, cosmetic and religious purposes. Monoi featured prominently in the lives of these ancient people, from birth until death. It was applied to the bodies of newborns to keep them from dehydrating in hot weather, and from getting chilled in cooler temperatures. When a person died, their body was embalmed and perfumed with manoi to help facilitate their journey into the afterlife. Monoi was also used in ancient Polynesian religious rites. During ceremonies which took place in their temples (marae), priests (tahuʻa) used monoi to anoint sacred objects and purify offerings to their deities. Māori navigators used manoi to protect their bodies from cold, harsh winds and salt water during long canoe expeditions at sea – even today, many divers rub monoi all over their bodies prior to diving for the same purpose. In 1942, monoi began to be manufactured commercially. Ingredients Tiare flower The tiare flower (Gardenia taitensis), from the family Rubiaceae, is Tahiti's national flower. The small white, star-shaped flower grows on high bushes throughout French Polynesia, which features soil of coral origin, and blossoms all year long. Other names for this flower include Tiare Tahiti and Tiare Maohi. Beyond their contribution to Monoi Tiare Tahiti, tiare flowers are deeply rooted in everyday Polynesian life. In traditional medicine, the flower is prepared in a variety of concoctions to alleviate a range of common maladies including the common cold, headaches and sunburn. Many Polynesians enjoy placing a few tiare flowers on a small, water-filled saucer to release the fragrance throughout their "fares" (Polynesian houses). The flower necklaces that are offered to tourists as a welcome gesture are created with tiare flowers, and vahine (Polynesian women) customarily wear them behind one ear. The tiare flowers that are used in Monoi de Tahiti are hand-picked at a very particular stage of their growth, specifically when they are still unopened. The flowers are immediately taken to the manufacturing plant and stripped of their pistils. The flower portion is placed in refined coconut oil for a minimum of 15 days. This is known as "enfleurage" (flower soaking), a French term used to designate a specific extraction step. According to specific maceration standards set by the decree of Appellation d'Origine, which each manufacturer must scrupulously follow, a minimum of 15 tiare flowers must be used in every liter of refined coconut oil. Coconut oil Coconut palm trees remain the most utilized Polynesian island tree and cover approximately of land. 
Under favourable conditions, the coconut palm tree grows its first fruits during its 6th year and produces approximately 60 coconuts per year, from its 10th to its 70th year. As the nut begins to form it is completely empty and contains no nutrients. When its size increases, the shell hardens and becomes filled with a transparent liquid that will turn into oil upon full maturity. When the coconuts fall from the trees, they are gathered to undergo the ancient process of extracting the coconut kernels. The husk is cracked open with an ax. The two coconut halves are left for several hours in the sun, until the almonds have shrunk enough to be removed and broken into small pieces. The fragments are then taken to special flat wooden barracks covered with sliding metal roofs which are popularly known on the Polynesian islands as "coprah dryers". The sliding roofs are only used at night and during the rainy season. The coprah is left to dry for more than a week until the coconut meat has lost over 90% of its moisture. Placed into special natural fiber bags, the coconut fragments are shipped to the unique oil mill located on the island of Tahiti where they will be thrown into special machines and ground to a fine coco flour. The flour is then heated up to 125 degrees and finally pressed into raw coconut oil. After that step, the oil will undergo more refining to remove all impurities and obtain the highest possible quality. Infusion Once the refining process is completed the coconut oil is placed into special storage tanks until it is purchased by one of only a handful of Monoi manufacturers. These manufacturers will proceed individually to the final maceration step which is to infuse the oil with Tiare flowers. Monoi de Tahiti must be stored in drums with a food-suitable liner or material. Drums must be lead-sealed when they leave Tahiti and kept away from humidity, light and heat. Previously, these infused oils were stored in dried shells of wax gourd fruit. Common uses Recent manufacturer tests verify that monoi oil is rich in methyl salicylate which is a skin-soothing agent. It is a naturally concentrated emollient which penetrates the skin, re-hydrates the layers of the epidermis and shields skin against external damages including sun and wind. Monoi oil is used: After a shower or bath to rehydrate skin. Before or after a swim, it provides protection against the effects of sun, wind, sea or pool water As a pre-shampoo hot oil treatment, it helps repair and deep condition the hair to a healthy shine. As a hair treatment after shampooing, once hair is dry. It adds shine, smooths frizz, and conditions the hair. During a bath. A few drops in the water reportedly encourages relaxation while keeping skin soft and subtly fragrant. As a dark tanning oil After being warmed in the palms of the hands, it is suited for massaging sore parts of the body or for warming up a weak body. As a pain reliever for sunburn. References External links Monoï Institute Guide to Using Monoi Oil on Hair, body, tanning and more Essential oils Tahiti Polynesia
Monoi oil
Chemistry
1,396
3,005,092
https://en.wikipedia.org/wiki/Ribosome%20shunting
Ribosome shunting is a mechanism of translation initiation in which ribosomes bypass, or "shunt over", parts of the 5' untranslated region to reach the start codon. However, a benefit of ribosomal shunting is that it can translate backwards, allowing more information to be stored than usual in an mRNA molecule. Some viral RNAs have been shown to use ribosome shunting as a more efficient form of translation during certain stages of the viral life cycle or when translation initiation factors are scarce (e.g. after cleavage by viral proteases). Some viruses known to use this mechanism include adenovirus, Sendai virus, human papillomavirus, duck hepatitis B pararetrovirus, rice tungro bacilliform viruses, and cauliflower mosaic virus. In these viruses the ribosome is directly translocated from the upstream initiation complex to the start codon (AUG) without the need to unwind RNA secondary structures. Ribosome shunting in Cauliflower mosaic virus Translation of Cauliflower mosaic virus (CaMV) 35S RNA is initiated by a ribosome shunt. The 35S RNA of CaMV contains a ~600 nucleotide leader sequence which contains 7-9 short open reading frames (sORFs) depending on the strain. This long leader sequence has the potential to form an extensive, complex stem-loop structure, which is an inhibitory element for expression of the downstream ORFs. However, translation of ORFs downstream of the CaMV 35S RNA leader has been commonly observed. The ribosome shunting model indicates that, with the collaboration of initiation factors, the ribosome starts scanning from the capped 5’ end and scans for a short distance until it hits the first sORF. The hairpin structure formed by the leader brings the first long ORF into the close spatial vicinity of a 5’-proximal sORF. After reading through sORF A, the 80S scanning ribosome disassembles at the stop codon, which is the shunt take-off site. The 40S ribosomal subunit remains associated with the RNA, bypasses the strong stem-loop structural element, lands at the shunt acceptor site, resumes scanning and reinitiates at the first long ORF. The 5’-proximal sORF A and the stem-loop structure itself are two essential elements for CaMV shunting [5]. sORFs of 2-15 codons, with 5-10 nucleotides between the sORF stop codon and the base of the stem structure, are optimal for ribosome shunting, while a minimal (start-stop) ORF does not promote shunting. Ribosome shunting in Rice tungro bacilliform pararetrovirus The ribosome shunting process was first discovered in CaMV in 1993 and then reported in Rice tungro bacilliform virus (RTBV) in 1996. The mechanism of ribosome shunting in RTBV resembles that in CaMV: it also requires the first short ORF as well as a following strong secondary structure. Swapping the conserved shunt elements between CaMV and RTBV revealed the importance of the nucleotide composition of the landing sequence for efficient shunting, indicating that the mechanism of ribosome shunting is evolutionarily conserved in plant pararetroviruses. Ribosome shunting in Sendai virus Sendai virus Y proteins are initiated by ribosome shunting. Among the 8 primary translation products of the Sendai virus P/C mRNA, leaky scanning accounts for translation of the C’, P, and C proteins, while expression of the Y1 and Y2 proteins is initiated via a ribosomal shunt (discontinuous scanning). The scanning complex enters at the 5’ cap, scans ~50 nucleotides of the 5’ UTR, and is then transferred to an acceptor site at or close to the Y initiation codons. In the case of Sendai virus, no specific donor site sequences are required.
Ribosome shunting in Adenovirus Ribosome shunting is observed during expression of late adenovirus mRNAs. Late adenovirus mRNAs contain a 5’ tripartite leader, a highly conserved 200-nucleotide NTR with a 25- to 44-nucleotide unstructured 5’ segment followed by a complex group of stable hairpin structures, which confers preferential translation by reducing the requirement for eIF-4F (the cap-binding protein complex), which is inactivated by adenovirus to interfere with cellular protein translation. When eIF4E is abundant, the subunit binds to the 5' cap on mRNAs, forming an eIF4 complex leading to shunting; however, when eIF4E is altered or deactivated during late adenovirus infection or heat shock, the tripartite leader exclusively and efficiently directs initiation by shunting. Adenovirus can thus proceed without the relevant kinase activity: by disrupting the cap-initiation complex, it shifts initiation on tripartite-leader mRNAs to ribosome shunting, a process linked to changes in tyrosine phosphorylation. Two key sites mediate binding of the ribosome, allowing translation of viral mRNA while translation of capped cellular mRNAs is suppressed during the shunting process. In the case of adenovirus late mRNAs and hsp70 mRNA, instead of recognition of the stop codon of a first short ORF, pausing of the scanning ribosome is caused by three conserved sequences that are complementary to the 3’ hairpin of 18S ribosomal RNA. The proposed mechanism for the ribosome shunt involves the large subunit binding upstream of the start codon; the ribosome is then able to leapfrog, using protein binding and a power stroke, to bypass sequences upstream of the start codon on the coding mRNA, with the tripartite leader providing a new binding site for further rounds of initiation. References Protein biosynthesis
Ribosome shunting
Chemistry
1,231
1,659,199
https://en.wikipedia.org/wiki/Narrabri%20Stellar%20Intensity%20Interferometer
The Narrabri Stellar Intensity Interferometer (NSII) was the first astronomical instrument to measure the diameters of a large number of stars at visible wavelengths. It was designed by (amongst others) Robert Hanbury Brown, who received the Hughes Medal in 1971 for this work. It was built by the University of Sydney School of Physics and was located near the town of Narrabri in north-central New South Wales, Australia. Many of the components were constructed in the UK. The design was based on an earlier optical intensity interferometer built by Hanbury Brown and Richard Q. Twiss at Jodrell Bank in the UK. Whilst the original device had a maximum baseline of 10 m, the NSII consisted of a large circular track that allowed the detector separation to be varied from 10 to 188 m. The NSII operated from 1963 until 1974, and was used to measure the angular diameters of 32 stars. See also Lists of telescopes References External links The angular diameters of 32 stars, Mon. Not. R. Astron. Soc., 167, 121–136 (1974) Hanbury Brown R, The intensity interferometer – its application to astronomy, Taylor & Francis, 1974 Telescopes Interferometric telescopes Science and technology in New South Wales Astronomical observatories in New South Wales
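The angular diameters measured by the NSII can be put in context with the usual order-of-magnitude estimate for an interferometer's resolution, θ ≈ λ/B, where λ is the observing wavelength and B the detector separation. The short Python sketch below applies this estimate to the 10 m and 188 m baselines quoted above; the 440 nm wavelength is an illustrative blue-light value assumed for the calculation, not a figure taken from the article.

```python
import math

def resolvable_diameter_mas(wavelength_m, baseline_m):
    """Order-of-magnitude smallest resolvable angular diameter, lambda / B, in milliarcseconds."""
    theta_rad = wavelength_m / baseline_m
    return math.degrees(theta_rad) * 3600.0 * 1000.0  # radians -> degrees -> arcseconds -> mas

WAVELENGTH_M = 440e-9  # assumed blue observing wavelength; illustrative only

for baseline_m in (10.0, 188.0):  # minimum and maximum NSII detector separations
    print(f"B = {baseline_m:6.1f} m  ->  ~{resolvable_diameter_mas(WAVELENGTH_M, baseline_m):.2f} mas")
```

On these assumptions the longest baseline corresponds to roughly half a milliarcsecond, which indicates why the instrument was suited to measuring the diameters of bright, hot stars.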
Narrabri Stellar Intensity Interferometer
Astronomy
267
39,207,906
https://en.wikipedia.org/wiki/Bupirimate
Bupirimate (systematic name 5-butyl-2-ethylamino-6-methylpyrimidin-4-yl dimethylsulphamate; brand names Nimrod and Roseclear 2) is an active ingredient of plant protection products (pesticides) that acts as a fungicide. It belongs to the chemical family of pyrimidine sulfamates. Bupirimate has translaminar mobility and systemic translocation in the xylem. It acts mainly by inhibiting sporulation and is used for control of powdery mildew of apples, pears, stone fruit, cucurbits, roses and other ornamentals, strawberries, gooseberries, currants, raspberries, hops, beets and other crops. Bupirimate is not an insecticide. It is of low mammalian toxicity and is non-toxic to bees. However, it is used in many products which also contain insecticides. History A research programme at ICI's Jealott's Hill site during the 1960s had the objective of discovering fungicides which could penetrate into and move within plants and hence could cure established infections. The outcome of the research was three related compounds: dimethirimol, ethirimol and bupirimate, which were first marketed in 1968, 1970 and 1975 respectively. The key targets for these fungicides are the mildews, but each compound differs in its effect on individual mildew species. In particular, bupirimate is effective on apple powdery mildew caused by the fungus Podosphaera leucotricha, against which the earlier materials were not effective. Regulation In terms of the regulation of plant protection products in the European Union, this active substance is under review for inclusion in Annex I of Directive 91/414/EEC. In France, the active substance is permitted in preparations that hold a marketing authorization. References External links Fungicides Aminopyrimidines Sulfamate esters
Bupirimate
Biology
423
50,375,026
https://en.wikipedia.org/wiki/Power-to-X
Power-to-X (also P2X and P2Y) refers to electricity conversion, energy storage, and reconversion pathways that use surplus renewable energy. Power-to-X conversion technologies allow for the decoupling of power from the electricity sector for use in other sectors (such as transport or chemicals), possibly using power that has been provided by additional investments in generation. The term is widely used in Germany and may have originated there. The X in the terminology can refer to one of the following: power-to-ammonia, power-to-chemicals, power-to-fuel, power-to-gas (power-to-hydrogen, power-to-methane), power-to-liquid (synthetic fuel), power-to-food, power-to-heat. Electric vehicle charging, space heating and cooling, and water heating can be shifted in time to match generation, forms of demand response that can be called power-to-mobility and power-to-heat. Collectively, power-to-X schemes which use surplus power fall under the heading of flexibility measures and are particularly useful in energy systems with high shares of renewable generation and/or with strong decarbonization targets. A large number of pathways and technologies are encompassed by the term. In 2016 the German government funded a €30 million first-phase research project into power-to-X options. Power-to-fuel Surplus electric power can be converted to gas fuel energy for storage and reconversion. Direct current electrolysis of water (efficiency 80–85% at best) can be used to produce hydrogen, which can, in turn, be converted to methane (CH4) via methanation. Another possibility is converting the hydrogen, along with CO2, to methanol. Both these fuels can be stored and used to produce electricity again, hours to months later. Storage and reconversion of power-to-fuel Hydrogen and methane can be used as downstream fuels, fed into the natural gas grid, or used to make synthetic fuel. Alternatively, they can be used as a chemical feedstock, as can ammonia (NH3). Reconversion technologies include gas turbines, combined cycle plants, reciprocating engines and fuel cells. Power-to-power refers to the round-trip reconversion efficiency. For hydrogen storage, the round-trip efficiency remains limited at 35–50%. Electrolysis is expensive and power-to-gas processes need substantial full-load hours to be economic. However, while the round-trip conversion efficiency of power-to-power is lower than with batteries and electrolysis can be expensive, storage of the fuels themselves is quite inexpensive. This means that large amounts of energy can be stored for long periods of time with power-to-power, which is ideal for seasonal storage. This could be particularly useful for systems with high variable renewable energy penetration, since many areas have significant seasonal variability of solar, wind, and run-of-the-river hydroelectric generation. Batteries Despite it also being based fundamentally on electrolytic chemical reactions, battery storage is not normally considered a power-to-fuel concept. Power-to-heat The purpose of power-to-heat systems is to utilize excess electricity generated by renewable energy sources which would otherwise be wasted. Depending on the context, the power can either be stored as heat or delivered as heat to meet a need. Heating systems In contrast to simple electric heating systems such as night storage heating, which cover the complete heating requirement, power-to-heat systems are hybrid systems that additionally have traditional heating systems using chemical fuels like wood or natural gas. 
When there is excess energy, heat is produced from electric energy; otherwise, the traditional heating system is used. In order to increase flexibility, power-to-heat systems are often coupled with heat accumulators. The heat is supplied for the most part via local and district heating networks. Power-to-heat systems are also able to supply buildings or industrial systems with heat. Power-to-heat involves contributing to the heat sector, either by resistance heating or via a heat pump. Resistance heaters have unity efficiency, and the corresponding coefficient of performance (COP) of heat pumps is 2–5. Back-up immersion heating of both domestic hot water and district heating offers a cheap way of using surplus renewable energy and will often displace carbon-intensive fossil fuels for the task. Large-scale heat pumps in district heating systems with thermal energy storage are an especially attractive option for power-to-heat: they offer exceptionally high efficiency for balancing excess wind and solar power, and they can be profitable investments. Heat storage systems Other forms of power-to-X Power-to-mobility refers to the charging of battery electric vehicles (BEV). Given the expected uptake of EVs, dedicated dispatch will be required. As vehicles are idle for most of the time, shifting the charging time can offer considerable flexibility: the charging window is a relatively long 8–12 hours, whereas the charging duration is around 90 minutes. The EV batteries can also be discharged to the grid to make them work as electricity storage devices, but this causes additional wear to the battery. Impact According to the German concept of sector coupling, interconnecting all the energy-using sectors will require the digitalisation and automation of numerous processes to synchronise supply and demand. A 2023 study examined the role that power-to-X could play in a highly renewable future energy system for Japan. The P2X technologies considered include water electrolysis, methanation, Fischer–Tropsch synthesis, and Haber–Bosch synthesis, and the study used linear programming to determine least-cost system structure and operation. Results indicate that these various P2X technologies can effectively shift electricity loads and reduce curtailment by 80% or more. See also Grid energy storage Flywheel References Energy policy Energy policy of Germany Energy storage Power engineering
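The efficiency figures quoted above (electrolysis at 80–85% at best, a 35–50% power-to-power round trip for hydrogen, and heat-pump COPs of 2–5) lend themselves to a back-of-the-envelope comparison. The Python sketch below runs 1 MWh of surplus electricity through the two pathways; the individual component efficiencies are illustrative values chosen to fall inside the quoted ranges, not measured data for any particular plant.

```python
# Back-of-the-envelope accounting for 1 MWh of surplus electricity,
# using illustrative efficiencies within the ranges quoted in the article.

SURPLUS_MWH = 1.0

# Power-to-gas-to-power: electrolysis, cheap fuel storage, then reconversion.
eta_electrolysis = 0.80   # article quotes 80-85% at best
eta_reconversion = 0.50   # chosen so the round trip lands in the quoted 35-50% range
power_to_power_mwh = SURPLUS_MWH * eta_electrolysis * eta_reconversion

# Power-to-heat: resistance heating (unity efficiency) vs. a heat pump (COP 2-5).
heat_resistance_mwh = SURPLUS_MWH * 1.0
heat_pump_mwh = SURPLUS_MWH * 3.0  # illustrative COP of 3

print(f"Electricity recovered via power-to-gas-to-power: {power_to_power_mwh:.2f} MWh")
print(f"Heat delivered via resistance heating:           {heat_resistance_mwh:.2f} MWh_th")
print(f"Heat delivered via heat pump (COP 3):             {heat_pump_mwh:.2f} MWh_th")
```

The asymmetry in these numbers is the usual argument for using heat pumps where the end use is heat, while reserving power-to-fuel for long-duration or seasonal storage where cheap storage of the fuel outweighs the round-trip losses.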
Power-to-X
Engineering,Environmental_science
1,207
23,321,374
https://en.wikipedia.org/wiki/Tetraphenylporphyrin
Tetraphenylporphyrin, abbreviated TPP or H2TPP, is a synthetic heterocyclic compound that resembles naturally occurring porphyrins. Porphyrins are dyes and cofactors found in hemoglobin and cytochromes and are related to chlorophyll and vitamin B12. The study of naturally occurring porphyrins is complicated by their low symmetry and the presence of polar substituents. Tetraphenylporphyrin is hydrophobic, symmetrically substituted, and easily synthesized. The compound is a dark purple solid that dissolves in nonpolar organic solvents such as chloroform and benzene. Synthesis and structure Tetraphenylporphyrin was first synthesized in 1935 by Rothemund, who caused benzaldehyde and pyrrole to react in a sealed bomb at 150 °C for 24 h. Adler and Longo modified the Rothemund method by allowing benzaldehyde and pyrrole to react for 30 min in refluxing propionic acid (141 °C) open to the air: 8 C4H4NH + 8 C6H5CHO + 3 O2 → 2 (C6H5C)4(C4H2N)2(C4H2NH)2 + 14 H2O Despite its modest yields, the synthesis of H2TPP is a common experiment in university teaching labs. Highly efficient routes to H2TPP and many analogues involve the air-free condensation of the pyrrole and aldehyde to give the porphyrinogen. In this so-called Lindsey synthesis of meso-substituted porphyrins, the porphyrinogen is subsequently oxidized to deliver the porphyrin. The conjugate base of the porphyrin, TPP2−, belongs to the symmetry group D4h while its hydrogenated counterpart H2(TPP) is D2h. Unlike natural porphyrins, H2TPP is substituted at the oxidatively sensitive "meso" carbon positions, and hence the compound is sometimes called meso-tetraphenylporphyrin. Another synthetic porphyrin, octaethylporphyrin (H2OEP) does have a substitution pattern that is biomimetic. Many derivatives of TPP and OEP are known, including those prepared from substituted benzaldehydes. One of the first functional analogues of myoglobin was the ferrous derivative of the "picket fence porphyrin," which is structurally related to Fe(TPP), being derived via the condensation of 2-nitrobenzaldehyde and pyrrole. Sulfonated derivatives of TPP are also well known to give water-soluble derivatives, e.g. tetraphenylporphine sulfonate: 4 SO3 + (C6H5C)4(C4H2N)2(C4H2NH)2 → (HO3SC6H4C)4(C4H2N)2(C4H2NH)2 + 4 H2O Complexes Complexation can be thought of as proceeding via the conversion of H2TPP to TPP2−, with 4-fold symmetry. The metal insertion process proceeds via several steps, not via the dianion. Representative complexes: Cu(TPP) Zn(TPP)Lx VO(TPP) Fe(TPP)Cl Optical properties Tetraphenylporphyrin has a strong absorption band with maximum at 419 nm (so called Soret band) and four weak bands with maxima at 515, 550, 593 and 649 nm (so called Q-bands). It shows red fluorescence with maxima at 649 and 717 nm. The quantum yield is 11%. Soret red shifts for Zn(TTP)-Donor systems relative to the Soret band at 416.2 nm for Zn(TTP) in cyclohexane have been measured. Applications H2TPP is a photosensitizer for the production of singlet oxygen. Its molecules have potential applications in single-molecule electronics, as they show diode-like behavior that can be altered for each individual molecule. References Chelating agents Tetrapyrroles Macrocycles Phenyl compounds
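Because the article gives the balanced Adler–Longo equation (four pyrrole and four benzaldehyde units per porphyrin ring), the "modest yields" seen in teaching labs are easy to quantify with a short theoretical-yield calculation. The Python sketch below illustrates the arithmetic; the 2.0 g of pyrrole and 0.9 g of isolated product are hypothetical teaching-lab quantities, not values from the article.

```python
# Theoretical yield of H2TPP from the balanced equation in the article:
# 8 pyrrole + 8 benzaldehyde + 3 O2 -> 2 H2TPP + 14 H2O  (i.e. 4 pyrrole per porphyrin ring).

M_PYRROLE = 67.09   # g/mol, C4H5N
M_TPP = 614.74      # g/mol, C44H30N4

grams_pyrrole = 2.0            # hypothetical starting amount; benzaldehyde assumed equimolar
mol_pyrrole = grams_pyrrole / M_PYRROLE
mol_tpp_theoretical = mol_pyrrole / 4           # 4 pyrrole units consumed per porphyrin
theoretical_yield_g = mol_tpp_theoretical * M_TPP

isolated_g = 0.9               # hypothetical isolated mass after workup
percent_yield = 100.0 * isolated_g / theoretical_yield_g

print(f"Theoretical yield: {theoretical_yield_g:.2f} g H2TPP")
print(f"Percent yield:     {percent_yield:.0f} %")
```

With these assumed amounts the percent yield comes out around 20%, which gives a feel for what a "modest" yield means in practice.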
Tetraphenylporphyrin
Chemistry
924
5,652,499
https://en.wikipedia.org/wiki/Mother%20Camels
The protecting Mother Camels (Arabic العوائذ alʽawaʼid) is an asterism in the constellation of Draco described by ancient Arabic nomadic tribes. The asterism was interpreted as a ring of mother camels – Beta Draconis (Rastaban), Gamma Draconis (Eltanin), Nu Draconis (Kuma) and Xi Draconis (Grumium) – surrounding a foal (the faint star Alruba), with another mother camel, Mu Draconis (Alrakis) running to join them. The Arabs did not see the constellation Draco as it is now. The Mother Camels were protecting the foal from the attack of two wolves or jackals – Zeta Draconis (Aldhibah) and Eta Draconis (Athebyne). The faint pair Omega Draconis and 27 Draconis was known as the "wolf's claws" (الأظفار الذئب al-ʼaẓfār al-dhiʼb). References Draco (constellation)
Mother Camels
Astronomy
231
8,619,670
https://en.wikipedia.org/wiki/Polytetrahedron
Polytetrahedron is a term used for three distinct types of objects, all based on the tetrahedron: A uniform convex 4-polytope made up of 600 tetrahedral cells. It is more commonly known as a 600-cell or hexacosichoron. Other derivative 4-polytopes are identified as polytetrahedra, where a qualifying prefix such as rectified or truncated is used. A connected set of regular tetrahedra, the 3-dimensional analogue of a polyiamond. Polytetrahedra and polyiamonds are related as polycubes are related to polyominoes. In origami, a polypolyhedron is "a compound of multiple linked polyhedral skeletons with uniform nonintersecting edges". There exist two topologically distinct polytetrahedra, each made up of four intersecting triangles. See also Compound of five tetrahedra Compound of ten tetrahedra References 4-polytopes Polyhedra Paper folding
Polytetrahedron
Mathematics
213
26,021,893
https://en.wikipedia.org/wiki/List%20of%20email%20subject%20abbreviations
This is a list of commonly and uncommonly used abbreviations that are used in the subject box of an English-language email header. Standard prefixes These prefixes are usually automatically inserted by the email client. Re: or RE: followed by the subject line of a previous message indicates a reply to that message. "Re" in a narrower sense though is, as 3.6.5. explicitly states, an abbreviation of "in re"—"re" being the ablative singular of rēs ("thing", "circumstance")—, loosely meaning "about", "concerning", "regarding". As such, regarding is a fitting English translation with the same two initial letters as in reply. It is expressly stated in 3.6.5. as somewhat structuring the otherwise free-form subject field. If used, exactly one character string Re: (disregarding letter case) ought to appear at the very front of the subject line. Fw:, FW: or FWD: signals a forwarded message: the recipient is informed that the email was originally sent to someone else who has in turn sent a copy of the email to them. Non-standard infixes and suffixes These words are inserted in the middle of or at the end of the subject, usually by the author. Was:, WAS: or was: indicates the subject was changed since the previous email. Not an abbreviation, but the English word "was" (past tense of "to be"). Denoting a subject change prevents confusion on the part of the recipient and avoids accusations of threadjacking in email-based discussion threads. Original subject may furthermore get parenthesised. Example: Subject: Do you know a good babysitter? (WAS: What should we do this weekend?); real-world occurrence: lore.kernel.org OT: off topic. Used within an email thread to indicate that this particular reply is about a different topic than the rest of the thread, in order to avoid accusations of threadjacking. EOM, Eom or eom – end of message. Used at the end of the subject when the entire content of the email is contained in the subject and the body remains empty. This saves the recipient's time because they then do not have to open the message. 1L – One Liner. Used at the beginning of the subject when the subject of the email is the only text contained in the email. This prefix indicates to the reader that it is not necessary to open the email. E.g., "1L: WFH today" WFH – work from home. Used in the subject line or body of the email. NONB – Non-business. Used at the beginning of the subject when the subject of the email is not related to business. This prefix indicates to the reader that the email is not about a work related or endorsed topic. Software development The following prefixes are often used in software development: [ANNOUNCE], [ANN] – announcement. A new version of the software has been released. [BUG] – bug report. A description of an error in the software. [PATCH] – software patch. New code is attached to or included in the body of the message. Other English abbreviations This is a list of abbreviations which are less commonly used in the subject of an English email header: AEAP, meaning As Early As Possible. ASAP, meaning As Soon As Possible. AB, meaning Action By. Used with a time indicator to inform the recipient that the sender needs a task to be completed within a certain deadline, e.g. AB+2 meaning Action By 2 days. AR, meaning Action Required. The recipient is informed that they are being given a task. CFI, meaning Copied For Information COB, meaning Close Of Business (end of work day). Implying that something should happen by the end of the typical work shift. 
COP or EOP, meaning Close Of Play / End Of Play. British sporting term referring to an overnight, intra-game break during a cricket match that is scheduled to take place over multiple days. Also used in a similar context at The Championships. CTA, meaning Call to Action. Instruction to the receiver designed to provoke an immediate response. CWC, meaning Change in Working Conditions. EOD, meaning End Of Day. FYA, meaning For Your Action. The recipient is informed that they are being given a task. Can also mean For Your Attention, For Your Approval, For Your Assistance, For Your Awareness, For Your Authorization, or For Your Acknowledgement. FAO, meaning "For the Attention Of", especially in email or written correspondence. This can be used to direct an email towards an individual when an email is being sent to a team email address or to a specific department in a company, e.g. FAO: Jo Smith, Finance Department. FYI or Fyi:, meaning "For Your Information". The recipient is informed that they do not have to reply to this email. FYSA, meaning For Your Situational Awareness. The recipient is informed that this information may be important context for other communications but requires no action. Similar to FYI but used heavily in U.S. government and military email correspondence. (Not to be confused with FISA.) FYFG, meaning For Your Future Guidance. Also written as Fyfg. Used at the beginning of the subject, typically in corporate emails in which management wants to inform personnel about a new procedure they should follow. FYG, meaning For Your Guidance. Also written as Fyg. Used at the beginning of the subject, typically in corporate emails in which management wants to inform personnel about a new procedure they should follow. FYR, meaning For Your Reference. This is typically used in email subjects to send follow-up information about something the recipients already know. I, meaning Information. Used at the beginning of the subject. The recipient is informed that they do not have to reply to this email. May be more commonly used in Europe than in North America, where FYI may be preferred. LET, meaning Leaving Early Today. Used in corporate emails to indicate that the sender will be leaving the office early that day. LF, meaning Looking For something. Used in corporate emails to indicate that the sender is looking for that particular thing. LSFW, meaning Less Safe For Work. Used in corporate emails to indicate that the content may be sexually explicit or profane, helping the recipient to avoid potentially objectionable material. MIA, meaning Missing In Action. Used when the original email has been lost in the work process. NIM, meaning No Internal Message. Used when the entire content of the email is contained in the subject and the body remains empty. This saves the recipient's time because they then do not have to open the email. NLS, meaning Not Life-Safe. Used to indicate that the content may be shocking or grotesque, helping the recipient to avoid potentially objectionable material. NM, meaning No Message. Also written as N/M, n/m, or *n/m*. Used when the entire content of the email is contained in the subject and the body remains empty. This saves the recipient's time because they then do not have to open the email. NB, meaning Note Well. Abbreviation of Latin nota bene. Used before a piece of important information to make readers notice it. NMP, meaning Not My Problem. Used in a reply to indicate that the previous email has been ignored. NMS, meaning Not Mind-Safe. 
Used to indicate that the content may be shocking or grotesque, helping the recipient to avoid potentially objectionable material. NNTO, meaning No Need To Open. The recipient is informed that they do not need to open the email; necessary information is in the Subject line. NNTR, meaning No Need To Respond. The recipient is informed that they do not have to reply to this email. NRN, meaning No Reply Necessary or No Reply Needed. The recipient is informed that they do not have to reply to this email. NRR, meaning No Reply Requested or No Reply Required. The recipient is informed that they do not have to reply to this email. NSFW, meaning Not Safe For Work or Not Suitable For Work. Used in corporate emails to indicate that the content may be sexually explicit or profane, helping the recipient to avoid potentially objectionable material. NSS, meaning Not School-Safe or Not School-Suitable. Used in school network emails to indicate that the content may be sexually explicit or profane, helping the recipient to avoid potentially objectionable material. NT, meaning No Text. Also written as N/T or n/t. Used when the entire content of the email is contained in the subject and the body remains empty. This saves the recipient's time because she then does not have to open the email. NWR, meaning Not Work Related. Used in corporate emails to indicate that the content is not related to business and therefore that the recipient can ignore it if desired. NWS, meaning Not Work-Safe or Not Work-Suitable. Used in corporate emails to indicate that the content may be sexually explicit or profane, helping the recipient to avoid potentially objectionable material. NYR, meaning Need Your Response. Meaning requires a response. NYRT, meaning Need Your Response Today. Meaning requires a response this working day. NYRQ, meaning Need Your Response Quick. Meaning requires an immediate response. NYR-NBD, meaning Need Your Response - Next Business Day. Meaning requires a response before the end of the next working day. OoO, meaning Out of Office. Used in corporate emails to indicate that the sender will not be at work. PFA, meaning Please Find Attached / Attachment. Used in corporate emails to indicate that a document or set of documents is attached for the reference. PNFO, meaning Probably Not For the Office. Used in corporate emails to indicate that the content may be sexually explicit or profane, helping the recipient to avoid potentially objectionable material. PNSFW, meaning Probably Not Safe For Work or Possibly Not Safe For Work. Used in corporate emails to indicate that the content may be sexually explicit or profane, helping the recipient to avoid potentially objectionable material. PYR, meaning Per Your Request. The recipient is informed that the sender is replying to a previous email in which they were given a task. QUE, meaning Question. The recipient is informed that the sender wants an answer to this e-mail. RB, meaning Reply By. Used with a time indicator to inform the recipient that the sender needs a reply within a certain deadline, e.g. RB+7 meaning Reply By one week (7 days). RLB, meaning Read later. Used when sending personal or informational email to a business email address. Immediate response not required. RR, meaning Reply Requested or Reply Required. The recipient is informed that they should reply to this email. RSVP, meaning Reply Requested, please, from the French Répondez s'il vous plaît. The recipient is informed that they should reply to this email. 
Often used for replies (accept/decline) to invitations. SFW, meaning Safe For Work. Used in corporate emails to indicate that although the subject or content may look as if it is sexually explicit or profane, it is in fact not. SIM, meaning Subject Is Message. Used when the entire content of the email is contained in the subject and the body remains empty. This saves the recipient's time because they then do not have to open the email. SSIA, meaning Subject Says It All. Used when the entire content of the email is contained in the subject and the body remains empty. This saves the recipient's time because they then do not have to open the email. A [1] at the start of the subject line, meaning "one-liner", means the same. Also EOM, above. TLTR, meaning Too Long To Read. Used in some corporate emails to request that the email sender rewrite the email body to be shorter. TBF, meaning (1) To Be Forwarded. Used in some corporate emails to request that the email receiver forward the mail to someone else. It also has the more common meaning (2) To Be Frank/Fair. Usually only used in the email body. TSFW, meaning Technically Safe For Work or Totally Safe For Work. Used in corporate emails to indicate that although the subject or content may look as if it is sexually explicit or profane, it is in fact not. Y/N, meaning Yes/No. The recipient is informed that they should reply to this email with a simple yes or no answer, increasing the likelihood for the sender of getting a quick response. VSRE, meaning Very Short Reply Expected. UDA, meaning Urgent Document Attached. Abbreviations in other languages The email client will typically check for an existing "Re:" when deciding whether or not to add one in front of the subject. However, clients may use different abbreviations if the computer is set up for a non-English language, e.g. "AW:" for German, and this can mean that a conversation between two participants can build up convoluted subject lines like "Re: AW: Re: AW: ...". Whereas "Re:" stands for "re" in Latin (see Standard prefixes), it is often taken to mean "regarding", "reply" or "response" in English, and in most other languages, similarly, the abbreviation corresponds to the word for "response" or "reply". To avoid the issue of convoluted subject lines mentioned earlier, email clients may have an option to force the use of the standard (RE) and English (FW) abbreviations even when all other features are presented in another language, or to recognize other forms. See also Internet slang (list) Internet slang Emoticon Threaded discussion PFA Full Form In Mail References Internet slang Email subject abbreviations Email subject Email
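The build-up of mixed prefixes described above ("Re: AW: Re: AW: ...") is exactly the situation that a client-side normalization step is meant to avoid. The Python sketch below shows one way such a routine might work; the set of recognized prefixes is a small illustrative subset of the abbreviations in this list, not an exhaustive or standardized one.

```python
import re

# Illustrative (non-exhaustive) set of reply/forward prefixes from this list,
# including a localized one such as the German "AW:".
PREFIX_RE = re.compile(r"^\s*(re|fw|fwd|aw)\s*:\s*", re.IGNORECASE)

def normalize_subject(subject: str) -> str:
    """Strip stacked reply/forward prefixes and prepend a single standard 'Re: '."""
    stripped = subject
    while (m := PREFIX_RE.match(stripped)):
        stripped = stripped[m.end():]
    return f"Re: {stripped}" if stripped != subject else subject

print(normalize_subject("Re: AW: Re: AW: Quarterly report"))  # -> "Re: Quarterly report"
print(normalize_subject("Lunch on Friday?"))                  # -> unchanged
```

A real client would also need to recognize the reply abbreviations of other locales, but the idea of collapsing repeated prefixes into one is the same.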
List of email subject abbreviations
Technology
2,889
22,262,277
https://en.wikipedia.org/wiki/Silver%20bromate
Silver bromate (AgBrO3), is a toxic, light and heat-sensitive, white powder. Uses Silver bromate can be used as an oxidant for the transformation of tetrahydropyranyl ethers to carbonyl compounds. References External links Silver bromate solubility Chemical data Bromates Silver compounds Oxidizing agents Reagents for organic chemistry
Silver bromate
Chemistry
79
13,940,388
https://en.wikipedia.org/wiki/Francesco%20Sannino
Francesco Sannino (born 9 February 1968) is an Italian theoretical physicist and a professor at the University of Southern Denmark. He conducts research on effective field theories and their applications to strongly coupled theories such as quantum chromodynamics. He also researches beyond-Standard-Model physics and quantum field theory. After completing his studies at the University of Naples Federico II in 1992, he enrolled in PhD programmes at Syracuse University and the University of Naples, obtaining his doctorate in 1997. In 1997, he obtained a research fellowship from Yale University and in 2000 he moved to NORDITA. In 2004 he became associate professor at the Niels Bohr Institute in Denmark. In 2007 he became a paid associate at CERN and a full professor at the University of Southern Denmark. In 2009 the research centre CP3-Origins was formed at the University of Southern Denmark under his leadership, with support from the Danish Research Foundation. In 2010 he was awarded the EliteForsk Prize for researchers by the Danish Ministry of Science. References External links Scientific publications of Francesco Sannino on INSPIRE-HEP 21st-century Italian physicists 21st-century Danish physicists Quantum physicists Syracuse University alumni Academic staff of the University of Southern Denmark Living people 1968 births Italian theoretical physicists People associated with CERN
Francesco Sannino
Physics
256
34,404,486
https://en.wikipedia.org/wiki/Inglenook
An inglenook or chimney corner is a recess that adjoins a fireplace. The word comes from "ingle", an old Scots word for a domestic fire (derived from the Gaelic aingeal), and "nook". The inglenook originated as a partially enclosed hearth area, appended to a larger room. The hearth was used for cooking, and its enclosing alcove became a natural place for people seeking warmth to gather. With changes in building design, kitchens became separate rooms, while inglenooks were retained in the living space as intimate warming places, subsidiary spaces within larger rooms. Inglenooks were prominent features of shingle style architecture and characteristic of Arts and Crafts architecture but began to disappear with the advent of central heating. Prominent American architects who employed the feature included Greene and Greene, Henry Hobson Richardson, and Frank Lloyd Wright. British architect Richard Norman Shaw significantly influenced Richardson. References Architectural elements Rooms
Inglenook
Technology,Engineering
195
15,872,404
https://en.wikipedia.org/wiki/Mormon%20views%20on%20evolution
The Church of Jesus Christ of Latter-day Saints (LDS Church) takes no official position on whether or not biological evolution has occurred, nor on the validity of the modern evolutionary synthesis as a scientific theory. In the twentieth century, the First Presidency of the LDS Church published doctrinal statements on the origin of man and creation. In addition, individual leaders of the church have expressed a variety of personal opinions on evolution, many of which have affected the beliefs and perceptions of Latter-day Saints. There have been three public statements from the First Presidency (1909, 1910, 1925) and one private statement from the First Presidency (1931) about the LDS Church's view on evolution. The 1909 statement was a delayed response to the publication of On the Origin of Species by Charles Darwin. In the statement, the First Presidency affirmed their doctrine that Adam is the direct, divine offspring of God. The statement declares evolution as "the theories of men", but does not directly qualify it as untrue or evil. In response to the 1911 Brigham Young University modernism controversy, the First Presidency issued an official statement in its 1910 Christmas message that the church members should be kind to everyone regardless of differences in opinion about evolution and that proven science is accepted by the church with joy. In 1925, in response to the Scopes Trial, the First Presidency published a statement, similar in content to the 1909 statement, but with "anti-science" language removed. A private memo written in 1931 by the First Presidency to church general authorities confirmed a neutral stance on the existence of pre-Adamites and "death before the fall." It further asserted that geology, biology, and other sciences were best left to scientists (and implicitly, not theologians), and were not central to the Gospel. There are a variety of LDS Church publications that address evolution, often with neutral or opposing viewpoints. In order to address students' questions about the church's position on evolution in biology and related classes, Brigham Young University (BYU) released a library packet on evolution in 1992. This packet contains the first three official First Presidency statement as well as the "Evolution" section in the Encyclopedia of Mormonism to supplement normal course material. Statements from church presidents are mixed with some vehemently against evolution and the theories of Charles Darwin, and some willing to admit that the circumstances of earth's creation are unknown and that evolution could explain some aspects of creation. In the 1930s, church leaders Joseph Fielding Smith, B. H. Roberts, and James E. Talmage debated about the existence of pre-Adamites, eliciting a memo from the First Presidency in 1931 claiming a neutral stance on pre-Adamites. Since the publication of On the Origin of Species, some Latter-day Saint scientists have published essays or speeches to try and reconcile science and Mormon doctrine. Many of these scientists subscribe to the idea that evolution is the natural process God used to create the Earth and its inhabitants and that there are commonalities between Mormon doctrine and foundations of evolutionary biology. Debate and questioning among members of the LDS Church continues concerning evolution, religion, and the reconciliation between the two. 
Although articles from publications like BYU Studies often represent neutral or pro-evolutionary stances, LDS-sponsored publications such as the Ensign tend to publish articles with anti-evolutionary views. Studies published since 2014 have found that the majority of Latter-day Saints do not believe humans evolved over time. A 2018 study in the Journal of Contemporary Religion found that very liberal or moderate members of the LDS Church were more likely to accept evolution as their education level increased, whereas very conservative members were less likely to accept evolution as their education level increased. Another 2018 study found that over time, Latter-day Saint undergraduate attitudes towards evolution have changed from antagonistic to accepting. The researchers attributed this attitude change to more primary school exposure to evolution and a reduction in the number of anti-evolution statements from the First Presidency. Official doctrine The LDS Church has no official position on the theory of evolution or the details of "what happened on earth before Adam and Eve, including how their bodies were created." Even so, some church general authorities have made statements suggesting that, in their opinion, evolution is opposed to scriptural teaching. Apostles Joseph Fielding Smith and Bruce R. McConkie were among the most well-known advocates of this position. Other church authorities and members have made statements suggesting that, in their opinion, evolution is not in opposition to scriptural doctrine. Examples of this position have come from B. H. Roberts, James E. Talmage, and John A. Widtsoe. While maintaining its "no position" stance, the LDS Church has produced a number of official publications that have included discussion and personal statements from these various church leaders on evolution and the "origin of man." These statements generally adopt the position, as a church-approved encyclopedia entry states, "[t]he scriptures tell why man was created, but they do not tell how, though the Lord has promised that he will tell that when he comes again." First Presidency statements There have been three authoritative public statements (1909, 1910, and 1925) and one private statement (1931) given from the LDS Church's highest authority, the First Presidency, which represents the church's doctrinal position on the origin of mankind. The 1909 and 1925 statements of the First Presidency have been subsequently endorsed by church leaders such as apostle Boyd K. Packer in 1988. In February 2002, the entire 1909 First Presidency message was reprinted in the church's Ensign magazine. 1909 statement "The Origin of Man" Historically, Latter-day Saints were isolated in the western plains when The Origin of Species was published by Charles Darwin in 1859. Consequently, there was little discussion about evolution among Mormon communities. The Latter-day Saints were trying to survive and build settlements in Utah and evolution was not a prominent concern for them. George Q. Cannon of the Quorum of the Twelve responded to Darwin in 1861, stated that revelation is superior to science, but considered the possibility of evolution among animals and plants. This was and is not considered doctrine. The building of the transcontinental railroad in 1869 allowed for the Saints to gain access to outside ideas and influences. Because of this new knowledge, Mormon schools sought to combat scientific theories such as evolution with faith. 
Publications helped reaffirm church doctrine; however, views on evolution were mixed. Some believed a belief in evolution was equivalent to atheism, whereas some sought to find common ground between evolution and faith. Due to the many differing opinions that emerged, in the early 1900s the LDS Church began to officially respond to the theories that had already been discussed for nearly fifty years. The first official statement from the First Presidency on the issue of evolution was in 1909, the centennial of Darwin's birth and the 50th anniversary of the publication of On the Origin of Species. Church president Joseph F. Smith appointed a committee headed by Orson F. Whitney, a member of Quorum of the Twelve, to prepare an official statement, "basing its belief on divine revelation, ancient and modern, proclaim[ing] man to be the direct and lineal offspring of Deity." This teaching regarding the origin of man differs from traditional Christianity's doctrine of creation, referred to by some as "creationism", which consists of belief in a fiat creation. In addition, the statement declares human evolution as one of the "theories of men", but falls short of explicitly declaring it untrue or evil. It states that, "man began life as a human being, in the likeness of our heavenly father". Moreover, it states that although man begins life as a germ or embryo, it does not mean that, "[Adam] began life as anything less than a man, or less than the human germ or embryo that becomes a man" Supported by signatures from the First Presidency, the statement was published in November 1909. The statement did not define the origins of animals other than humans, nor did it venture into any more specifics regarding the origin of man. 1910 statement "Words in Season from the First Presidency" In response to continual questions from church members regarding evolution, as well as problems preceding the 1911 Brigham Young University modernism controversy, in its 1910 Christmas message, the First Presidency made reference to the church's position on science. It stated that the church is not hostile to science and that "diversity of opinion does not necessitate intolerance of spirit". The message continues by stating that proven science is accepted with joy, but theories, speculation, or anything contrary to revelation or common sense are not accepted. 1925 statement "Mormon View of Evolution" In 1925, in the midst of the Scopes Trial in Tennessee, a new First Presidency issued an official statement which reaffirmed the doctrine that Adam was the first man upon the earth and that he was created in the image of God. There is a short article in the Encyclopedia of Mormonism which is largely composed of quotes from the 1909 and 1925 statements. It states that men and women are created in the image of the "universal Father and Mother", and Adam, like Christ was a pre-existing spirit who took a body to become a "living soul". It continues by stating that because man is "endowed with divine attributes", he "is capable, by experience through ages and aeons, of evolving into a God." The official statement was initially published in Deseret News on July 18, 1925 and later published in the Improvement Era in September 1925. The 1925 statement is shorter than the 1909 statement, containing selected excerpts from the 1909 statement. "Anti-science" language was removed and the title was altered from "The Origin of Man" to "Mormon View of Evolution". 
The comment which concluded that theories of evolution are "theories of men" in the 1909 official statement was no longer included in the 1925 official statement. The First Presidency has not publicly issued an official statement on evolution since 1925. 1931 statement "First Presidency Minutes" In April 1931, the First Presidency sent out a lengthy memo to all church general authorities in response to the debate between B. H. Roberts of the Presidency of the Seventy and Joseph Fielding Smith of the Quorum of the Twelve on the existence of pre-Adamites. The memo stated the church's neutral stance on the existence of pre-Adamites. Official church publications The subject of evolution has been addressed in several official publications of the church. General conference speeches The LDS Church has published several general conference talks mentioning evolution. In the October 1984 conference, apostle Boyd K. Packer stated that "no one with reverence for God could believe that His children evolved from slime or from reptiles" as well as affirming that "those who accept the theory of evolution don't show much enthusiasm for genealogical research." In the April 2012 conference, apostle Russell M. Nelson discussed the human body stating "some people erroneously think that these marvelous physical attributes happened by chance or resulted from a big bang somewhere". He then compared this to an "explosion in a printing shop produc[ing] a dictionary". Instruction manuals Old Testament Student Seminary Manual The Old Testament Student Manual, published by the Church Educational System, contains several quotes by general authorities as well as academics from a variety of backgrounds (both members of the church and non-members) related to organic evolution and the origins of the earth. The 2003 edition states that there is no official stance on the age of the earth but that evidence for a longer process is substantial and very few people believe the earth was actually created in the space of one week. However, it also includes a quote from Joseph Fielding Smith indicating his interpretation of church doctrine as it pertains to the theory of organic evolution. He asserts that organic evolution is incompatible and inconsistent with revelations from God and that to accept it is to reject the plan of salvation. Doctrine and Covenants and Church History Seminary Teacher Manual Doctrine and Covenants mentions "the seven thousand years of [the earth's] continuance, or its temporal existence", which has been interpreted by Joseph Fielding Smith and Bruce R. McConkie as a statement suggesting that the earth is no more than about six thousand years old (the seventh thousand-year period being the future millennium). Speciation generally occurs over very large spans of time. However, in relation to this verse, the manual for seminary teachers explains: "It may be helpful to explain that the 7,000 years refers to the time since the Fall of Adam and Eve. It is not referring to the actual age of the earth including the periods of creation." BYU Library packet on evolution Since 1992 at the LDS-owned universities, a packet of authoritative statements approved by the BYU Board of Trustees (composed of the First Presidency, other general authorities, and general organizational leaders) has been provided to students in classes when discussing the topic of organic evolution. 
The packet was assembled due to the large number of questions students had about evolution and the origins of man and is intended to be distributed along with other course material. The packet includes the first three official First Presidency statements on the origin of man as well as the "Evolution" section in the Encyclopedia of Mormonism, which includes elements from the 1909 and 1925 statements as well as the 1931 "First Presidency Minutes". Official magazines Ensign In 1982, the Ensign, an official periodical of the church, published an article entitled "Christ and the Creation" by Bruce R. McConkie, which stated that "[m]ortality and procreation and death all had their beginnings with the Fall." In an earlier edition of the Ensign published in 1980, McConkie stated that "the greatest heresy in the sectarian world ... is that God is a spirit nothingness which fills the immensity of space, and that creation came through evolutionary processes." New Era A July 2016 article for young adults in the New Era acknowledged questions about how the age of the earth, dinosaurs, and evolution fit with church teachings, stating "it does all fit together, but there are still a lot of questions." The article offered no further explanation as to how science and LDS teachings fit together, stated "nothing that science reveals can disprove your faith", and told youth "not to get worried in the meantime." A few months later in the same magazine, the church published an anonymously authored article stating that "the Church has no official position on the theory of evolution." The article continues by stating that the theory of organic evolution should be left for scientific study and that no details about what happened before Adam and Eve or how their bodies were created have been revealed, but that the origin of man is clear from the teaching of the church. A much earlier anonymously authored article from 2004 did not attempt to reconcile church teachings and scientific views of evolution, but stated that not having the answers does not discredit the existence of God, and that God will not reveal more unto us until we prove our faith. An example was provided of how the author avoided a classroom debate on evolution by stating that they knew God existed and created us. The article also quoted past church president Gordon B. Hinckley giving his own example of how he chose to drop the question and not let it bother him. Subsequent letters from youth stated that the youth viewed themselves as against evolution and supportive of intelligent design. A previous article in the New Era also showed youth viewing evolution as an idea antagonistic to their faith and becoming upset when it was taught, and another featured a church seventy using scientific arguments in an attempt to disprove evolutionary natural selection and adaptation. Improvement Era The Improvement Era was an official periodical of the church between 1897 and 1970. The April 1910 edition, in the "Priesthood Quorum's Table" section of that periodical, cites Genesis as well as other scriptures from Genesis and the Pearl of Great Price. The article states that it is unclear whether the mortal bodies of man evolved through natural processes, whether Adam and Eve were transplanted to Earth from another place, or whether they were born on Earth in mortality. The article states that those questions are not fully answered in the church's current revelation and scripture. 
The article attributes this answer to the church's First Presidency. Canonized scriptures Some verses in the standard works raise questions about the compatibility of scriptural teachings and scientists' current understanding of organic evolution. One such verse, in the Doctrine and Covenants, describes the "temporal existence" of the earth as 7,000 years old. Other scriptural verses suggest that no organisms died before the fall of Adam. In the Book of Mormon, the prophet Lehi teaches: "If Adam had not transgressed he would not have fallen, but he would have remained in the garden of Eden. And all things which were created must have remained in the same state in which they were after they were created; and they must have remained forever, and had no end". In Moses in the Pearl of Great Price, the prophet Enoch states: "Because that Adam fell, we are; and by his fall came death; and we are made partakers of misery and woe." Bible Dictionary In the Bible Dictionary of the LDS Church, the entry for "Fall of Adam" previously included the following statement: "Before the fall, Adam and Eve had physical bodies but no blood. There was no sin, no death, and no children among any of the earthly creations." Under the entry "Flesh", it is written: "Since flesh often means mortality, Adam is spoken of as the 'first flesh' upon the earth, meaning he was the first mortal on the earth, all things being created in a non-mortal condition, and becoming mortal through the fall of Adam." As noted above, the Bible Dictionary is published by the LDS Church, and its preface states: "It [the Bible Dictionary] is not intended as an official or revealed endorsement by the church of the doctrinal, historical, cultural, and other matters set forth." Statements from church presidents Not every statement by an LDS Church president constitutes official church doctrine, but a statement from him is generally regarded by church membership as authoritative and usually represents doctrine. Official church doctrine is, however, presented and taught unitedly by the entire First Presidency, usually released in an official letter or other authorized publication. Brigham Young Brigham Young, the church's second president, stated that the LDS Church differs from other Christian churches because it does not seek to set its ideas against scientific theory. He continued that whether God began with an empty Earth, whether he created out of nothing, or whether he made it in six days or millions of years will remain a mystery unless God reveals something about it. Two years later, Young deplored the injustice that the theories of scientists are taught in school, but not the principles of the gospel. He wrote that for this purpose he created Brigham Young Academy, so that God's revelation could be taught in schools with books written by members of the LDS Church. Young also stated that he was "resolutely and uncompromisingly opposed" to "the theories...of Darwin." John Taylor John Taylor was the second church president to comment directly on Darwinian theory. In his 1882 book Mediation and Atonement, Taylor stated that nature and creation are governed by the laws of man and that organisms have existed in the same form since creation, in contradiction to the ideas of evolutionists. Taylor continued that man did not originate from a chaos of matter, but from "the faculties and powers of a God". Joseph F. Smith Soon after the First Presidency's 1909 statement, Joseph F. 
Smith professed in an editorial that "the Church itself has no philosophy about the modus operandi employed by the Lord in His creation of the world." However, in the very same month (and in the wake of the evolution controversy that had recently ensued at Brigham Young University), Smith published and signed a statement wherein he explained some of the conflicts between revealed religion and the theories of evolution. He cited the 1911 Brigham Young University modernism controversy, stating that evolution is in conflict with scriptures and modern revelation. He continues that the church holds that "divine revelation" must be the "standard" and is "truth". Smith mentions that "science has changed from age to age", and "philosophic theories of life" have their place, but do not belong in LDS Church school classes and anywhere else when they contradict the word of God. A 1910 editorial in a church magazine that enumerates various possibilities for creation is usually attributed to Smith or to the First Presidency. Included in the listed possibilities were the ideas that Adam and Eve: (1) "evolved in natural processes to present perfection"; (2) were "transplanted [to earth] from another sphere"; or (3) were "born here ... as other mortals have been." Smith authored an editorial the next year in the church magazine discouraging the discussion of evolution in church school stating that members of the church believe the theory of evolution was "more or less a fallacy." David O. McKay In a 1952 speech to students at BYU, McKay used the theory of evolution as an example while suggesting that science can "leave [a student] with his soul unanchored." He stated that a professor that denies "divine agency in creation" imposes on the student that life was created by chance. McKay insisted that students should be led to a "counterbalancing thought" that "God is the Creator of the earth", "the Father of our souls and spirits", and "the purpose of creation is theirs (God and Jesus Christ)." In the April 1968 general conference, McKay's son, David, read a message on his father's behalf that was an edited version of the 1952 speech, including the omission of the word "beautiful" when describing the theory of evolution. In 1954, McKay quoted the Old Testament while affirming to members of the BYU faculty that living things only reproduce "after their kind". He quoted Genesis which states, "Let the earth bring forth the living creatures after his kind, cattle and creeping things, and the beast of the earth after his kind." Spencer W. Kimball At a 1975 church women's conference, church president Spencer W. Kimball quoted, "And, I God created man in mine own image, and in the image of mine Only Begotten created I him; male and female created I them." (Kimball added that "the story of the rib, of course, is figurative.") Kimball continued, "we don't know exactly how [Adam and Eve's] coming into this world happened, and when we're able to understand it the Lord will tell us." Ezra Taft Benson Prior to becoming president of the LDS Church, Ezra Taft Benson gave an April 1981 general conference address in which he stated that "the theory of man’s development from lower forms of life" is a "false idea". In 1988, after becoming president of the church, Benson published a book counseling members of the church to use the Book of Mormon to counter the theories of evolution. He wrote that "we have not been using the Book of Mormon as we should. 
Our homes are not as strong unless we are using it to bring our children to Christ. Our families may be corrupted by worldly trends and teachings unless we know how to use the book to expose and combat the falsehoods in socialism, organic evolution, rationalism, humanism, etc." In 1988, Benson published another book that included his earlier warnings about the "deceptions" of Charles Darwin. He wrote that educational institutions serve to mislead youth, which explains—he noted—why the church advises that youth attend church institutions, allowing parents to closely observe the education of their children and clear up "the deceptions of men like . . . Charles Darwin. Gordon B. Hinckley In a 1997 speech at an Institute of Religion in Ogden, Utah, church president Gordon B. Hinckley said: "People ask me every now and again if I believe in evolution. I tell them I am not concerned with organic evolution. I do not worry about it. I passed through that argument long ago." wherein he contrasts "organic evolution" with the evolution and improvement of individuals: In the late 1990s, Hinckley recalled his university studies of anthropology and geology to reporter Larry A. Witham: "'Studied all about it. Didn't worry me then. Doesn't worry me now'", insisting that the church only requires the belief that Adam was the first man of '"what we would call the human race."' In 2004, an official church magazine printed a quote from Hinckley from a 1983 speech where he expressed a similar sentiment. Statements from apostles In the early 1900s, many general authorities, specifically those with science backgrounds, subscribed to the idea of an old earth, yet most of them rejected Darwinism. Joseph Fielding Smith and other general authorities were against the old earth theory as well as Darwin's theory of evolution. Individual leaders of the church have expressed a variety of personal opinions on biological evolution and as such these do not necessarily constitute official church doctrine. Statements from the 1930s Roberts–Smith–Talmage dispute In 1930, B. H. Roberts, the presiding member of the First Council of the Seventy, was assigned by the First Presidency to create a study manual for the Melchizedek priesthood holders of the church. Entitled The Truth, The Way, The Life, the draft of the manual that was submitted to the First Presidency and the Quorum of the Twelve Apostles for approval stated that death had been occurring on Earth for millions of years prior to the fall of Adam and that human-like pre-Adamites had lived on the Earth. On 5 April 1930, Joseph Fielding Smith, a junior member of the Quorum of the Twelve Apostles and the son of a late church president, "vigorously promulgated [the] opposite point of view" in a speech that was published in a church magazine. In his widely read speech, Smith taught as doctrine that there had been no death on earth until after the fall of Adam and that there were no "pre-Adamites". In 1931, both Roberts and Smith were permitted to present their views to the First Presidency and the Quorum of the Twelve. After hearing both sides, the First Presidency issued a memo to the general authorities of the church which stated while they agree with the idea that "Adam is the primal parent of our race", there is no advantage to continuing the discussion and that church members should focus on "[bearing] the message of the restored gospel to the people of the world" and that those sciences do not have anything to do with, "the salvation of the souls of mankind". 
They stated that continuation of the discussion would only lead to "confusion, division, and misunderstanding if continued further." Another of the apostles, geologist James E. Talmage, pointed out that Smith's views could be misinterpreted as the church's official position, since Smith's views were widely circulated in a church magazine but Roberts's views were limited to an internal church document. As a result, the First Presidency gave permission to Talmage to give a speech promoting views that were contrary to Smith's. In his speech on August 9, 1931, in the Salt Lake Tabernacle, Talmage taught the same principles that Roberts had originally outlined in his draft manual. Over Smith's objections, the First Presidency authorized a church publication of Talmage's speech in pamphlet form. In 1965, Talmage's speech was reprinted again by the church in an official church magazine. As Talmage points out in the article, "The outstanding point of difference ... is the point of time which man in some state has lived on this planet." With regards to evolution in general, Talmage challenged many of its aspects in the same speech. He said that he does not believe Adam descended from cavemen or lower forms of men, but is divinely created. He did, however, state that were it true that Adam evolved from lower form, it only seems likely that men will continue to evolve into something higher as a part of eternal progression. He continued by stating that, "evolution is true so far as it means development, and progress, and advancement in all the works of God", and that the scriptures, "should not be discredited by theories of men; they cannot be discredited by fact and truth." Talmage considered the possibility of pre-Adamites; however, he denied speciation and evolution. Roberts died in 1933 and The Truth, The Way, The Life remained unpublished until 1994, when it was published by an independent publisher. Although it is apparent that Roberts and Smith may have had differing views on whether there was death before the fall of Adam, it is evident that they may have had similar views against organic evolution as the explanation for the origin of man. For example, Roberts wrote that "the theory of evolution as advocated by many modern scientists lies stranded upon the shore of idle speculation. There is one other objection to be urged against the theory of evolution before leaving it; it is contrary to the revelations of God." Roberts further criticized the theories of evolution by stating that Darwin's claims of evolution are contrary to the experience and knowledge of man, because the law of nature requires that every organism reproduces of its own kind, and while variation may occur, changes usually revert due to extinction, chromosomal infertility, or by reversion to original species. Joseph Fielding Smith In 1954, when he was President of the Quorum of the Twelve Apostles, Smith wrote at length about his personal views on evolution in his book Man, His Origin and Destiny stating that it was a destructive and contaminating influence and that "If the Bible does not kill Evolution, Evolution will kill the Bible." He further stated that "There is not and cannot be, any compromise between the Gospel of Jesus Christ and the theories of evolution" and that "It is not possible for a logical mind to hold both Bible teaching and evolutionary teaching at the same time" since "If you accept [the scriptures] you cannot accept organic evolution." 
In response to an inquiry about the book from the head of the University of Utah Geology Department, church president David O. McKay affirmed that "the Church has officially taken no position" on evolution, Smith's book "is not approved by the Church", and that the book is entirely Smith's "views for which he alone is responsible". Smith also produced personal statements on evolution in his Doctrines of Salvation including that "If evolution is true, the church is false" since "If life began on Earth as advocated by Darwin ... then the doctrines of the church are false". Smith stated about his views on evolution, "No Adam, no fall; no fall, no atonement; no atonement, no savior." Smith also asserted that "There was no death of any living creature before the fall of Adam! Adam’s mission was to bring to pass the fall and it came upon the earth and living things throughout all nature. Anything contrary to this doctrine is diametrically opposed to the doctrines revealed to the Church! If there was any creature increasing by propagation before the fall, then throw away the Book of Mormon, deny your faith, the Book of Abraham and the revelations in the Doctrine and Covenants! Our scriptures most emphatically tell us that death came through the fall, and has passed upon all creatures including the earth itself. For this earth of ours was pronounced good when the Lord finished it. It became fallen and subject to death as did all things upon its face, through the transgression of Adam." Bruce R. McConkie Bruce R. McConkie was an influential church leader and author on the topic of evolution, having been published several times speaking strongly on the topic. He stated his view in 1982 at BYU that there was no death in the world for Adam or for any form of life before the fall, and that trying to reconcile religion and organic evolution was a false and devilish heresy among church members. In 1984, McConkie disparaged the "evolutionary fantasies of biologists" and stated that yet to be revealed "doctrines will completely destroy the whole theory of organic evolution" and stated that any religion that assumes humans are a product of evolution cannot offer salvation since true believers know humans were made in a state in which there was no procreation or death. In his popular and controversial reference book Mormon Doctrine, McConkie devoted ten pages to his entry on evolution. After canvassing statements of past church leaders, the standard works, and the 1909 First Presidency statement, McConkie concluded that "[t]here is no harmony between the truths of revealed religion and the theories of organic evolution." The evolution entry in Mormon Doctrine quotes extensively from Smith's Man, His Origin and Destiny. McConkie characterized the intellect of those Latter-day Saints who believe in evolution while simultaneously having knowledge of church doctrines on life and creation as "scrubby and grovelling". McConkie included a disclaimer in Mormon Doctrine stating that he alone was responsible for the doctrinal and scriptural interpretations. The 1958 edition falsely stated that the "official doctrine of the Church" asserted a "falsity of the theory of organic evolution." McConkie also wrote that "there were no pre-Adamites," that Adam was not the "end-product of evolution," and that there "was no death in the world, either for man or for any form of life until after the Fall of Adam." Russell M. Nelson Prior to becoming president of the LDS Church, Russell M. 
Nelson stated in a 2007 interview with the Pew Research Center that "to think that man evolved from one species to another is, to me, incomprehensible. Man has always been man. Dogs have always been dogs. Monkeys have always been monkeys. It's just the way genetics works." He also stated in 1987 in a church magazine article that he found the theory of evolution unbelievable. Academic The earliest instance in which science and evolution were used to support LDS doctrine occurred in a series of six published articles in 1895, "Theosophy and Mormonism" by Nels L. Nelson. These articles were published in 1904 in Scientific Aspects of Mormonism. Nelson used the ideas of evolution to consider the spiritual and physical development of God and humans. Nelson's view of evolution is spiritual with deliberate use of scientific processes by God rather than as a random, accidental process. Mormon philosopher William Henry Chamberlin's Essay on Nature (1915) and Frederick J. Pack's Science and Belief in God (1924) defended the theory of evolution; both attempted to reconcile religion and evolution. In a work, Pack states, "no warfare exists between 'Mormonism' and true science." In 1978, dean of the College of Biology and Agriculture at BYU, A. Lester Allen, tried to present an approach to evolution from the perspective of an LDS biologist. Allen established seven doctrinal landmarks that are fundamental beliefs of the LDS Church, but considered that human's limited perspective and limited perception of reality means that humans may not very well understand the circumstances surrounding the creation of Adam and Eve and the existence of the Garden of Eden using only their mortal senses. Allen also stated that besides core doctrine of the LDS Church relating to the existence of Adam, Eve, and the Garden of Eden, all hypotheses are fair game for "responsible scientists" to consider and investigate. In 2018, BYU professor and evolutionary biologist Steven L. Peck at a Mormon studies conference at Utah Valley University explained that Mormons believe in "eternal progression" and that the universe was organized from pre-existing matter, which are ideas also held by evolutionary biologists. Views in the early 2000s There is an ongoing discussion and questioning among members of the LDS Church concerning the religion, evolution, and the reconciliation between the two. There are a number of current Mormon-related publications with articles on evolution. According to scholar Michael R. Ash, a great number of church members read the Ensign, which generally publishes articles with unfavorable views on evolution. Other publications like BYU Studies, FARMS Review of Books, Dialogue, and Sunstone have published pro-evolution or neutral articles. The official stance of the church on evolution is neutral. Though scholar Joseph Baker argues that the church's position is rather "skeptically neutral", because the church continues to endorse their 1910 statement. There are many church members, including scientists, who accept evolution as a legitimate scientific theory. In a 2014 U.S. Religious Landscape Study, researchers found that 52% of Mormons believe that humans always existed in their present form while 42% believe that humans evolved over time. More specifically, 29% of Mormons believe that evolution is guided by a supreme being, while 11% believe the evolution occurred due to natural processes. A 2017 study, the Next Mormons Survey, professor Benjamin Knoll surveyed Mormons about their beliefs in evolution. 
Of those surveyed, 74% responded that they were confident or had faith that God created Adam and Eve in the last 10,000 years and that Adam and Eve did not evolve from other forms of life. When asked whether evolution is the best explanation for how God brought about life on Earth, 33% of Mormons were confident or had faith that this was not true. After analyzing the results Knoll suggested that 37% of Mormons completely reject God-guided evolution. Another 37% accept God-guided evolution for life on Earth, but feel that Adam and Eve were an exception and were physically created by God. The other 26% were split between the belief that Adam and Eve may have been created through the process of evolution and the disbelief in God-guided evolution and the existence of a physical Adam and Eve. Moreover, unlike other studies conducted which have found a correlation between education level and belief in evolution, Next Mormons Survey found no correlation between education level and belief in evolution among Mormons. In contrast, a 2018 study of American Mormons in the Journal of Contemporary Religion found that education was a defining factor of evolution acceptance. This is, however, only true when accounting for political ideology as well. The study determined that among those with moderate or liberal political ideology, the probability of accepting evolution increases with increasing education level. The correlation between evolution acceptance and education level was even higher among liberals. The probability of accepting evolution among very liberal Mormons with an 8th grade or less education was 9%, while the probability of accepting evolution among very liberal Mormons with a post-graduate degree increases to 82%. The findings were different from conservative Mormons who showed a decrease in probability of accepting evolution as their education level increased. A very conservative Mormon with an 8th grade education or less had a 35% probability of accepting evolution, whereas a very conservative Mormon with a post-graduate degree was 20% likely to accept evolution. Baker suggests that low rates of acceptance of evolution of Mormons may be related to the high rates of political conservatism among Mormons. A 2018 study in PLOS One researched the attitudes toward evolution of Latter-day Saint undergraduates. The study revealed that there has been a recent shift of attitude towards evolution among LDS undergraduates. These attitudes have shifted from antagonistic to accepting. The researchers cited examples of more acceptance of fossil and geological records, as well as an acceptance of the old age of the earth. The researchers attributed this attitude change to several factors including primary-school exposure to evolution and a reduction in the number of anti-evolution statements from the First Presidency. See also William Henry Chamberlin (philosopher) Ralph Vary Chamberlin Relationship between religion and science Ahmadiyya views on evolution Evolution and the Roman Catholic Church Jainism and non-creationism Jewish views on evolution Hindu views on evolution Issues in Science and Religion Notes References Further reading . Christian creationism Christianity and evolution Evolution Evolution Evolution Evolution Religion and science Creationism Evolution and religion
Mormon views on evolution
Biology
8,191
7,262,157
https://en.wikipedia.org/wiki/Sintz%20Gas%20Engine%20Company
The Sintz Gas Engine Company was formed in about 1885 by Clark Sintz and others in Springfield, Ohio. It was a pioneering marine engine manufacturing business that expanded into other fields. After its sale in 1902 to the Michigan Yacht and Power Company, Sintz ceased to exist in 1903 as an entity. Background Clark Sintz had been undertaking pioneering engine work both on his own and with John F Endter. John Foos held the patent. In 1885 the company demonstrated a small 2-cycle engine in a small boat. The engine was based on a Dugald Clerk design. Clerk was a Scottish engineer who had patented the engine in the 1870s. Foos formed his own company, Foos Gas Engine Company, in 1889 using his own improved version of Clark Sintz's engine. In 1894 Elwood Haynes used a Sintz engine in his first car, as did Milton Reeves in 1896. In 1894 Sintz sold his interest in the company and, together with his son, Claude formed the Wolverine Motor Works. Wolverine Motor Works The Wolverine Motor Works initially was formed to make motor cars but instead began making marine engines for pleasure boats and in 1901 moved its marine engine manufacturing to Holland, Michigan. That same year Sintz sold the business to Charles Snyder. Sintz had been engaged by Snyder to design a small gauge railway for his banana plantation in Panama. Claude Sintz went on to make marine engines under his name from 1904 to 1907 and then founded The Sintz-Wallin Company of Grand Rapids. His early engines were two strokes with the brand name Leader. In 1913 Sintz-Wallin merged with the Midland Tractor Company and formed the Leader Gas Engine Company. In 1915 the Leader's moved to Quincy, Illinois, where they consolidated along with Dayton Foundry and Machine Company and Hayton Pump Company into Dayton-Dick Company. Dayton-Dick became Dayton-Dowd in 1919 and ceased making tractors in 1924. The pump manufacturing business continued until 1945 when it was acquired by the Peerless Pump Company. Peerless is now owned by Grundfos. Cars From 1899 to 1903 the Sintz company produced cars of numerous styles. It also produced rail cars and light trams. All were powered by an own-make two-stroke engine. Michigan Yacht and Power Company In about 1890 O J Mulford, W A Pungs, and a Mr Seymour formed the Michigan Yacht and Power Company in Detroit. They made small power boats and were distributors of the Sintz marine engines. In 1901 or 1902, Michigan Yacht and Power Company purchased the Sintz company and moved it to Detroit. In late 1903 Sintz ceased to exist as an entity. The new company was named the Pungs-Finch Auto and Gas Engine Company in 1904. Pungs bought out his partner O. J. Mulford, who departed and established the Gray Marine Motor Company in 1905. Gray Marine Motor Company renamed again in 1911 as Gray Motor Company, reformed in 1924 as Gray Marine Motor Company, and eventually acquired by Continental in 1944. References David Burgess Wise, The New Illustrated Encyclopedia of Automobiles. Defunct motor vehicle manufacturers of the United States Defunct manufacturing companies based in Michigan Defunct manufacturing companies based in Ohio Motor vehicle manufacturers based in Michigan Springfield, Ohio Marine engine manufacturers Engine manufacturers of the United States Vehicle manufacturing companies established in 1899 Vehicle manufacturing companies disestablished in 1903 Automotive pioneers Automotive engineers Vintage vehicles Cars introduced in 1899 1890s cars 1900s cars
Sintz Gas Engine Company
Engineering
683
8,553,751
https://en.wikipedia.org/wiki/Biological%20organisation
Biological organisation is the organisation of complex biological structures and systems that define life using a reductionistic approach. The traditional hierarchy, as detailed below, extends from atoms to biospheres. The higher levels of this scheme are often referred to as an ecological organisation concept, or as the field, hierarchical ecology. Each level in the hierarchy represents an increase in organisational complexity, with each "object" being primarily composed of the previous level's basic unit. The basic principle behind the organisation is the concept of emergence—the properties and functions found at a hierarchical level are not present at, and are irrelevant to, the lower levels. The biological organisation of life is a fundamental premise for numerous areas of scientific research, particularly in the medical sciences. Without this necessary degree of organisation, it would be much more difficult—and likely impossible—to apply the study of the effects of various physical and chemical phenomena to diseases and physiology (body function). For example, fields such as cognitive and behavioral neuroscience could not exist if the brain was not composed of specific types of cells, and the basic concepts of pharmacology could not exist if it was not known that a change at the cellular level can affect an entire organism. These applications extend into the ecological levels as well. For example, DDT's direct insecticidal effect occurs at the subcellular level, but affects higher levels up to and including multiple ecosystems. Theoretically, a change in one atom could change the entire biosphere. Levels The simple standard biological organisation scheme, from the lowest level to the highest level, runs from the atom through the molecule, organelle, cell, tissue, organ, organ system and organism, and then up through the population, community and ecosystem to the biosphere. More complex schemes incorporate many more levels. For example, a molecule can be viewed as a grouping of elements, and an atom can be further divided into subatomic particles (these levels are outside the scope of biological organisation). Each level can also be broken down into its own hierarchy, and specific types of these biological objects can have their own hierarchical scheme. For example, genomes can be further subdivided into a hierarchy of genes. Each level in the hierarchy can be described by its lower levels. For example, the organism may be described at any of its component levels, including the atomic, molecular, cellular, histological (tissue), organ and organ system levels. Furthermore, at every level of the hierarchy, new functions necessary for the control of life appear. These new roles are not functions that the lower level components are capable of and are thus referred to as emergent properties. Every organism is organised, though not necessarily to the same degree. An organism cannot be organised at the histological (tissue) level if it is not composed of tissues in the first place. Emergence of biological organisation Biological organisation is thought to have emerged in the early RNA world when RNA chains began to express the basic conditions necessary for natural selection to operate as conceived by Darwin (heritability, variation of type, and competition for limited resources). Fitness of an RNA replicator (its per capita rate of increase) would likely have been a function of adaptive capacities that were intrinsic (in the sense that they were determined by the nucleotide sequence) and the availability of resources.
The three primary adaptive capacities may have been: (1) the capacity to replicate with moderate fidelity (giving rise to both heritability and variation of type); (2) the capacity to avoid decay; and (3) the capacity to acquire and process resources. These capacities would have been determined initially by the folded configurations of the RNA replicators (see "Ribozyme") that, in turn, would be encoded in their individual nucleotide sequences. Competitive success among different RNA replicators would have depended on the relative values of these adaptive capacities. Subsequently, among more recent organisms, competitive success at successive levels of biological organisation presumably continued to depend, in a broad sense, on the relative values of these adaptive capacities. Fundamentals Empirically, a large proportion of the (complex) biological systems we observe in nature exhibit hierarchical structure. On theoretical grounds we could expect complex systems to be hierarchies in a world in which complexity had to evolve from simplicity. System hierarchies analysis performed in the 1950s laid the empirical foundations for a field that would become, from the 1980s, hierarchical ecology. The theoretical foundations are summarized by thermodynamics. When biological systems are modeled as physical systems, in their most general abstraction they are thermodynamic open systems that exhibit self-organised behavior, and the set/subset relations between dissipative structures can be characterized in a hierarchy. A simpler and more direct way to explain the fundamentals of the "hierarchical organisation of life" was introduced in Ecology by Odum and others as "Simon's hierarchical principle"; Simon emphasized that hierarchy "emerges almost inevitably through a wide variety of evolutionary processes, for the simple reason that hierarchical structures are stable". To motivate this deep idea, he offered his "parable" about imaginary watchmakers. Parable of the Watchmakers: There once were two watchmakers, named Hora and Tempus, who made very fine watches. The phones in their workshops rang frequently; new customers were constantly calling them. However, Hora prospered while Tempus became poorer and poorer. In the end, Tempus lost his shop. What was the reason behind this? The watches consisted of about 1000 parts each. The watches that Tempus made were designed such that, when he had to put down a partly assembled watch (for instance, to answer the phone), it immediately fell into pieces and had to be reassembled from the basic elements. Hora had designed his watches so that he could put together subassemblies of about ten components each. Ten of these subassemblies could be put together to make a larger sub-assembly. Finally, ten of the larger subassemblies constituted the whole watch. Each subassembly could be put down without falling apart. See also Abiogenesis Cell theory Cellular differentiation Composition of the human body Evolution of biological complexity Evolutionary biology Gaia hypothesis Hierarchy theory Holon (philosophy) Human ecology Level of analysis Living systems Self-organization Spontaneous order Structuralism (biology) Timeline of the evolutionary history of life Notes References External links 2011's theoretical/mathematical discussion. Life Articles containing video clips Hierarchy Emergence Levels of organization (Biology)
Biological organisation
Biology
1,295
52,983,512
https://en.wikipedia.org/wiki/Pi%20Fornacis
π Fornacis (Latinised as Pi Fornacis) is the Bayer designation for a binary star system in the southern constellation of Fornax. It has an apparent visual magnitude of 5.360, which is bright enough to be seen with the naked eye on a dark night. With an annual parallax shift of 11.08 mas, it is estimated to lie around 294 light years from the Sun. At that distance, the visual magnitude is diminished by an interstellar absorption factor of 0.10 due to dust. This system is a member of the thin disk population of the Milky Way galaxy. The primary, component A, is an evolved G-type giant star with a stellar classification of G8 III. It has an estimated mass slightly higher than the Sun's, but it has expanded to more than nine times the Sun's radius. The star is roughly five billion years old and is spinning slowly with a projected rotational velocity of 0.9 km/s. Pi Fornacis A radiates 57.5 times the solar luminosity from its outer atmosphere at an effective temperature of 5,048 K. A companion, component B, was discovered in 2008 using the AMBER instrument of the Very Large Telescope facility. At the time of discovery, the companion's angular separation from the primary and its position angle were estimated. The preliminary orbital period for the pair is 11.4 years, and the semimajor axis is at least 70 mas. The orbit is highly inclined to the line of sight from the Earth. References G-type giants Binary stars Fornax Fornacis, Pi CD-30 703 012438 09440 594
Pi Fornacis
Astronomy
343
10,649,582
https://en.wikipedia.org/wiki/Leader%20election
In distributed computing, leader election is the process of designating a single process as the organizer of some task distributed among several computers (nodes). Before the task has begun, all network nodes are either unaware which node will serve as the "leader" (or coordinator) of the task, or unable to communicate with the current coordinator. After a leader election algorithm has been run, however, each node throughout the network recognizes a particular, unique node as the task leader. The network nodes communicate among themselves in order to decide which of them will get into the "leader" state. For that, they need some method in order to break the symmetry among them. For example, if each node has unique and comparable identities, then the nodes can compare their identities, and decide that the node with the highest identity is the leader. The definition of this problem is often attributed to LeLann, who formalized it as a method to create a new token in a token ring network in which the token has been lost. Leader election algorithms are designed to be economical in terms of total bytes transmitted, and time. The algorithm suggested by Gallager, Humblet, and Spira for general undirected graphs has had a strong impact on the design of distributed algorithms in general, and won the Dijkstra Prize for an influential paper in distributed computing. Many other algorithms have been suggested for different kinds of network graphs, such as undirected rings, unidirectional rings, complete graphs, grids, directed Euler graphs, and others. A general method that decouples the issue of the graph family from the design of the leader election algorithm was suggested by Korach, Kutten, and Moran. Definition The problem of leader election is for each processor eventually to decide whether it is a leader or not, subject to the constraint that exactly one processor decides that it is the leader. An algorithm solves the leader election problem if: States of processors are divided into elected and not-elected states. Once elected, it remains as elected (similarly if not elected). In every execution, exactly one processor becomes elected and the rest determine that they are not elected. A valid leader election algorithm must meet the following conditions: Termination: the algorithm should finish within a finite time once the leader is selected. In randomized approaches this condition is sometimes weakened (for example, requiring termination with probability 1). Uniqueness: there is exactly one processor that considers itself as leader. Agreement: all other processors know who the leader is. An algorithm for leader election may vary in the following aspects: Communication mechanism: the processors are either synchronous in which processes are synchronized by a clock signal or asynchronous where processes run at arbitrary speeds. Process names: whether processes have a unique identity or are indistinguishable (anonymous). Network topology: for instance, ring, acyclic graph or complete graph. Size of the network: the algorithm may or may not use knowledge of the number of processes in the system. Algorithms Leader election in rings A ring network is a connected-graph topology in which each node is exactly connected to two other nodes, i.e., for a graph with n nodes, there are exactly n edges connecting the nodes. 
A ring can be unidirectional, which means processors only communicate in one direction (a node could only send messages to the left or only send messages to the right), or bidirectional, meaning processors may transmit and receive messages in both directions (a node could send messages to the left and right). Anonymous rings A ring is said to be anonymous if every processor is identical. More formally, the system has the same state machine for every processor. There is no deterministic algorithm to elect a leader in anonymous rings, even when the size of the network is known to the processes. This is due to the fact that there is no possibility of breaking symmetry in an anonymous ring if all processes run at the same speed. The state of a processor after some steps depends only on the initial states of the neighbouring nodes. So, because their states are identical and they execute the same procedures, in every round the same messages are sent by each processor. Therefore, each processor's state also changes identically and, as a result, if one processor is elected as a leader, so are all the others. For simplicity, here is a proof for anonymous synchronous rings. It is a proof by contradiction. Consider an anonymous ring R with size n > 1, and assume there exists an algorithm "A" that solves leader election in this anonymous ring R. Lemma: after round k of the admissible execution of A in R, all the processes have the same state. Proof: by induction on k. Base case, k = 0: all the processes are in the initial state, so all the processes are identical. Induction hypothesis: assume the lemma is true for the first k - 1 rounds. Inductive step: in round k, every process sends the same message m_r to the right and the same message m_l to the left. Since all the processes are in the same state after round k - 1, in round k every process will receive the message m_r on its left edge and the message m_l on its right edge. Since all processes receive the same messages in round k, they are in the same state after round k. The above lemma contradicts the fact that, after some finite number of rounds in an execution of A, one process enters the elected state while the other processes enter the non-elected state. Randomized (probabilistic) leader election A common approach to solve the problem of leader election in anonymous rings is the use of probabilistic algorithms. In such approaches, processors generally assume some identities based on a probabilistic function and communicate them to the rest of the network. At the end, through the application of an algorithm, a leader is selected (with high probability). Asynchronous ring Since there is no deterministic algorithm for anonymous rings (as proved above), asynchronous rings are here considered to be asynchronous non-anonymous rings: each process has a unique id, and the processes do not know the size of the ring. Leader election in asynchronous rings can be solved with algorithms that use either O(n²) messages or O(n log n) messages. In the O(n²) algorithm, every process sends a message with its id to its left edge and then waits for messages from its right edge. If the id in a received message is greater than its own id, the process forwards the message to the left edge; otherwise it ignores the message and does nothing. If the id in a received message is equal to its own id, the process sends a message to the left announcing that it has been elected. The other processes forward the announcement to the left and mark themselves as non-elected. It is clear that O(n²) messages is an upper bound for this algorithm.
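As a rough illustration of the O(n²) algorithm just described, the following Python sketch simulates the id-forwarding rule on a unidirectional ring, with messages advancing one hop per simulated round. The process class, the message tags "ELECT" and "LEADER", and the helper names are invented for this example; it is a toy simulation of the idea, not an implementation taken from the literature.

```python
import random
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Process:
    uid: int                                   # unique identifier
    inbox: List[Tuple[str, int]] = field(default_factory=list)
    leader: Optional[int] = None               # set once the election has finished

def elect_max_uid(ring_size: int) -> List[Process]:
    """Simulate the O(n^2)-message election on a unidirectional ring:
    forward a received uid only if it is larger than your own; a process
    that sees its own uid come back announces itself as the leader."""
    ring = [Process(uid) for uid in random.sample(range(10 * ring_size), ring_size)]
    for i, p in enumerate(ring):               # round 0: everyone sends its own uid "left"
        ring[(i + 1) % ring_size].inbox.append(("ELECT", p.uid))
    while any(p.leader is None for p in ring):
        next_inboxes: List[List[Tuple[str, int]]] = [[] for _ in ring]
        for i, p in enumerate(ring):
            left = (i + 1) % ring_size         # "left" is just a fixed direction here
            for kind, value in p.inbox:
                if kind == "ELECT" and value > p.uid:
                    next_inboxes[left].append(("ELECT", value))   # forward larger uid
                elif kind == "ELECT" and value == p.uid:
                    p.leader = p.uid                              # own uid returned: elected
                    next_inboxes[left].append(("LEADER", p.uid))
                elif kind == "LEADER" and p.leader is None:
                    p.leader = value                              # learn the result, pass it on
                    next_inboxes[left].append(("LEADER", value))
                # smaller uids, and a LEADER message returning to the leader, are dropped
        for p, box in zip(ring, next_inboxes):
            p.inbox = box
    return ring

if __name__ == "__main__":
    processes = elect_max_uid(8)
    assert len({p.leader for p in processes}) == 1   # agreement: everyone names one leader
    print("elected uid:", processes[0].leader)
```

With ids arranged in decreasing order along the direction of travel, each id is forwarded roughly as many hops as its rank before a larger id swallows it, which is where the quadratic worst-case message count comes from.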
In the O(n log n) algorithm, the election runs in phases. In the k-th phase, a process determines whether it is the winner within a neighbourhood on its left side and right side; only a winner goes on to the next phase. In phase 0, each process determines whether it is a winner by sending a message with its id to its left and right neighbours (the neighbours do not forward this message). A neighbour replies with an acknowledgement only if the id in the message is larger than its own id; otherwise it replies with a rejection. If a process receives two acknowledgements, one from the left and one from the right, it is a winner of phase 0. In phase k, the winners of phase k - 1 send a message with their id towards the left and the right. Each neighbour on the path forwards the message to the next neighbour if the id in the message is larger than its own id, and otherwise replies with a rejection. If the 2^k-th neighbour receives an id larger than its own, it sends back an acknowledgement; otherwise it replies with a rejection. A process that receives two acknowledgements is a winner of phase k. In the last phase, the final winner receives its own id in a message that has travelled all the way around the ring, then terminates and sends a termination message to the other processes. In the worst case there are at most n/(2^(k-1) + 1) winners in phase k, so there are O(log n) phases in total, and each winner causes on the order of 2^k messages to be sent in its phase. The message complexity is therefore O(n log n). Synchronous ring In Attiya and Welch's Distributed Computing book, they describe a non-uniform algorithm that uses O(n) messages in a synchronous ring when the ring size n is known. The algorithm operates in phases; each phase consists of n rounds, and each round is one time unit. In phase i, if there is a process with id i, that process sends a termination message to the other processes (circulating the termination message costs n rounds); otherwise the algorithm simply moves on to the next phase. The algorithm thus checks, phase by phase, whether the phase number equals the id of some process, and when it does, that process carries out the steps above. At the end of the execution, the process with the minimal id is elected as the leader. The algorithm uses only n messages, but the number of rounds it needs is proportional to n times the smallest id in the ring. Itai and Rodeh introduced an algorithm for a unidirectional ring with synchronized processes. They assume the size of the ring (number of nodes) is known to the processes. For a ring of size n, a ≤ n processors are active. Each active processor decides with probability a^(-1) whether to become a candidate. At the end of each phase, each processor calculates the number of candidates c, and if c equals 1, the sole candidate becomes the leader. To determine the value of c, each candidate sends a token (pebble) at the start of the phase which is passed around the ring, returning after exactly n time units to its sender. Every processor determines c by counting the number of pebbles which passed through. This algorithm achieves leader election with an expected message complexity of O(n log n). A similar approach is also used in which a time-out mechanism is employed to detect deadlocks in the system. There are also algorithms for rings of special sizes such as prime size and odd size. Uniform algorithm In typical approaches to leader election, the size of the ring is assumed to be known to the processes. In the case of anonymous rings, without using an external entity, it is not possible to elect a leader. Even assuming an algorithm exists, a leader could not estimate the size of the ring: in any anonymous ring, there is a positive probability that an algorithm computes a wrong ring size. To overcome this problem, Fisher and Jiang used a so-called leader oracle Ω? that each processor can ask whether there is a unique leader. They show that from some point upward, it is guaranteed to return the same answer to all processes.
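To make the phase structure of the Itai and Rodeh style algorithm above concrete, here is a small Python sketch. It is deliberately simplified: candidates are counted centrally instead of by circulating pebbles for n time units, so it captures only the probabilistic phase logic, not the message passing, and the function name and return values are invented for the example.

```python
import random
from typing import Optional, Tuple

def probabilistic_ring_election(n: int, seed: Optional[int] = None) -> Tuple[int, int]:
    """Toy model of the phase logic: each still-active processor becomes a
    candidate with probability 1/a (a = number of active processors); a phase
    succeeds when exactly one candidate remains.  Returns (leader_index, phases)."""
    rng = random.Random(seed)
    active = list(range(n))       # indices are bookkeeping only; the processors are anonymous
    phases = 0
    while True:
        phases += 1
        a = len(active)
        candidates = [p for p in active if rng.random() < 1.0 / a]
        c = len(candidates)       # in the real algorithm, c is learned by counting pebbles
        if c == 1:
            return candidates[0], phases
        if c > 1:
            active = candidates   # only candidates stay in the running for the next phase
        # if c == 0, the same active set simply tries again

if __name__ == "__main__":
    leader, phases = probabilistic_ring_election(16, seed=1)
    print(f"leader index {leader} elected after {phases} phase(s)")
```

In the message-passing version, each phase costs O(n) messages for the circulating pebbles; combined with the expected number of phases this yields the O(n log n) expected message complexity quoted above.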
Rings with unique IDs In one of the early works, Chang and Roberts proposed a uniform algorithm in which the processor with the highest ID is selected as the leader. Each processor sends its ID in a clockwise direction. A processor that receives a message compares the ID in it with its own: if the received ID is bigger, the processor passes the message on; otherwise it discards the message. The authors show that this algorithm uses O(n²) messages in the worst case and O(n log n) messages in the average case. Hirschberg and Sinclair improved this algorithm to O(n log n) message complexity by introducing a bidirectional message-passing scheme. Leader election in a mesh The mesh is another popular form of network topology, especially in parallel systems, redundant memory systems and interconnection networks. In a mesh structure, nodes are either corner (only two neighbours), border (only three neighbours) or interior (with four neighbours). The number of edges in a mesh of size a x b is m = 2ab - a - b. Unoriented mesh A typical algorithm to solve the leader election in an unoriented mesh is to elect only one of the four corner nodes as the leader. Since the corner nodes might not be aware of the state of other processes, the algorithm should first wake up the corner nodes. A leader can be elected as follows. Wake-up process: in which some nodes initiate the election process. Each initiator sends a wake-up message to all its neighbouring nodes. If a node is not an initiator, it simply forwards the messages to the other nodes. In this stage at most O(n) messages are sent. Election process: the election in the outer ring takes at most two stages. Termination: the leader sends a terminating message to all nodes. This requires at most 2n messages. The overall message complexity is thus O(n). Oriented mesh An oriented mesh is a special case where port numbers are compass labels, i.e. north, south, east and west. Leader election in an oriented mesh is trivial: we only need to nominate one corner, e.g. the corner whose only neighbours lie to the "north" and the "east", and make sure that node knows it is the leader. Torus A special case of mesh architecture is a torus, which is a mesh with "wrap-around". In this structure, every node has exactly 4 connecting edges. One approach to elect a leader in such a structure is known as electoral stages. Similar to procedures in ring structures, this method in each stage eliminates potential candidates until eventually one candidate node is left. This node becomes the leader and then notifies all other processes of termination. This approach can be used to achieve a complexity of O(n). There are also more practical approaches for dealing with the presence of faulty links in the network. Election in hypercubes A hypercube is a network consisting of 2^k nodes, each with degree k, and with k·2^(k-1) edges in total. Electoral stages similar to those above can be used to solve the problem of leader election: in each stage two nodes (called duelists) compete and the winner is promoted to the next stage. This means that in each stage only half of the duelists enter the next stage. This procedure continues until only one duelist is left, and it becomes the leader. Once selected, it notifies all other processes. This algorithm requires O(n) messages. In the case of unoriented hypercubes, a similar approach can be used, but with a somewhat higher message complexity.
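The "electoral stages" used for the torus and the hypercube can be pictured as a knockout tournament among duelists. The Python sketch below abstracts away the routing and message passing, which is exactly where the published algorithms for each topology differ, and keeps only the stage-by-stage halving of candidates; the random pairing is a placeholder, not the neighbour-selection rule of any specific algorithm.

```python
import random
from typing import List, Tuple

def electoral_stages(ids: List[int]) -> Tuple[int, int]:
    """Knockout abstraction of electoral stages: in every stage the surviving
    duelists are paired, each duel is decided by comparing ids, and only the
    winners advance.  Returns (leader_id, number_of_stages)."""
    candidates = list(ids)
    stages = 0
    while len(candidates) > 1:
        stages += 1
        random.shuffle(candidates)            # placeholder for topology-dependent pairing
        winners = [max(candidates[i], candidates[i + 1])
                   for i in range(0, len(candidates) - 1, 2)]
        if len(candidates) % 2 == 1:          # an unpaired duelist advances unopposed
            winners.append(candidates[-1])
        candidates = winners
    return candidates[0], stages

if __name__ == "__main__":
    node_ids = random.sample(range(10_000), 64)
    leader, stages = electoral_stages(node_ids)
    assert leader == max(node_ids)            # the largest id wins every duel it takes part in
    print(f"leader {leader} elected after {stages} stages")   # about log2(64) = 6 stages
```

Roughly half of the duelists are eliminated per stage, so about log₂ n stages suffice.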
Election in complete networks Complete networks are structures in which all processes are connected to one another, i.e., the degree of each node is n - 1, with n being the size of the network. An optimal solution with O(n) message and space complexity is known. In this algorithm, processes have the following states: Dummy: nodes that do not participate in the leader election algorithm. Passive: the initial state of processes before the start. Candidate: the status of nodes after waking up; the candidate nodes are the ones considered to become the leader. Although a node is not assumed to know the total set of nodes in the system, in this arrangement every node is required to know the identifier of its single successor, which is called its neighbor, and every node is known by some other node. All processors initially start in a passive state until they are woken up. Once the nodes are awake, they are candidates to become the leader. Based on a priority scheme, candidate nodes collaborate in the virtual ring. At some point, candidates become aware of the identities of the candidates that precede them in the ring. The higher-priority candidates ask the lower ones about their predecessors. The candidates with lower priority become dummies after replying to the candidates with higher priority. Based on this scheme, the highest-priority candidate eventually knows that all nodes in the system are dummies except itself, at which point it knows it is the leader. (As presented here, this description is simplified and needs further refinement to yield a fully correct algorithm.) Universal leader election techniques As the name implies, these algorithms are designed to be used in any process network without prior knowledge of the network's topology or properties (such as size). Shout The Shout protocol builds a spanning tree on a generic graph and elects its root as leader. The algorithm has a total cost linear in the number of edges. Mega-Merger This technique is similar to finding a minimum spanning tree (MST), in which the root of the tree becomes the leader. The idea is that individual nodes "merge" with each other to form bigger structures. The result of this algorithm is a tree (a graph with no cycles) whose root is the leader of the entire system. The cost of the mega-merger method is O(m + n log n), where m is the number of edges and n is the number of nodes. Yo-yo Yo-yo (algorithm) is a minimum-finding algorithm consisting of two parts: a preprocessing phase and a series of iterations. In the first phase, or setup, each node exchanges its id with all its neighbours, and based on the values it orients its incident edges: for instance, if node x has a smaller id than y, the edge between them is oriented from x towards y. If a node has a smaller id than all its neighbours, it becomes a source. In contrast, a node with all inward edges (i.e., with an id larger than all of its neighbours) is a sink. All other nodes are internal nodes. Once all the edges are oriented, the iteration phase starts. Each iteration is an electoral stage in which some candidates are removed, and each iteration has two phases: YO- and -YO. In the YO- phase, the sources start a process that propagates to each sink the smallest value among the sources connected to that sink. YO-: A source (a local minimum) transmits its value to all its out-neighbours. An internal node waits to receive a value from all its in-neighbours, calculates the minimum and sends it to its out-neighbours. A sink (a node with no outgoing edge) receives all the values and computes their minimum. -YO: A sink sends YES to the neighbours from which it saw the smallest value and NO to the others. An internal node sends YES to all in-neighbours from which it received the smallest value and NO to the others; if it receives even one NO, it sends NO to all of them.
A source waits until it receives all votes. If all YES, it survives and if not, it is no longer a candidate. When a node x sends NO to an in-neighbour y, the logical direction of that edge is reversed. When a node y receives NO from an out-neighbour, it flips the direction of that link. After the final stage, any source who receives a NO is no longer a source and becomes a sink. An additional stage, pruning, also is introduced to remove the nodes that are useless, i.e. their existence has no impact on the next iterations. If a sink is leaf, then it is useless and therefore is removed. If, in the YO- phase the same value is received by a node from more than one in-neighbour, it will ask all but one to remove the link connecting them. This method has a total cost of O(mlogn) messages. Its real message complexity including pruning is an open research problem and is unknown. Applications Radio networks In radio network protocols, leader election is often used as a first step to approach more advanced communication primitives, such as message gathering or broadcasts. The very nature of wireless networks induces collisions when adjacent nodes transmit at the same time; electing a leader allows to better coordinate this process. While the diameter D of a network is a natural lower bound for the time needed to elect a leader, upper and lower bounds for the leader election problem depend on the specific radio model studied. Models and runtime In radio networks, the n nodes may in every round choose to either transmit or receive a message. If no collision detection is available, then a node cannot distinguish between silence or receiving more than one message at a time. Should collision detection be available, then a node may detect more than one incoming message at the same time, even though the messages itself cannot be decoded in that case. In the beeping model, nodes can only distinguish between silence or at least one message via carrier sensing. Known runtimes for single-hop networks range from a constant (expected with collision detection) to O(n log n) rounds (deterministic and no collision detection). In multi-hop networks, known runtimes differ from roughly O((D+ log n)(log2 log n)) rounds (with high probability in the beeping model), O(D log n) (deterministic in the beeping model), O(n) (deterministic with collision detection) to O(n log3/2 n (log log n)0.5) rounds (deterministic and no collision detection). See also Bully algorithm Chang and Roberts algorithm HS algorithm Voting system References Distributed computing problems
Leader election
Mathematics
4,186
597,564
https://en.wikipedia.org/wiki/Just-noticeable%20difference
In the branch of experimental psychology focused on sense, sensation, and perception, which is called psychophysics, a just-noticeable difference or JND is the amount something must be changed in order for a difference to be noticeable, detectable at least half the time. This limen is also known as the difference limen, difference threshold, or least perceptible difference. Quantification For many sensory modalities, over a wide range of stimulus magnitudes sufficiently far from the upper and lower limits of perception, the 'JND' is a fixed proportion of the reference sensory level, and so the ratio of the JND to the reference is roughly constant (that is, the JND is a constant proportion or percentage of the reference level). Measured in physical units, we have: ΔI / I = k, where I is the original intensity of the particular stimulation, ΔI is the addition to it required for the change to be perceived (the JND), and k is a constant. This rule was first discovered by Ernst Heinrich Weber (1795–1878), an anatomist and physiologist, in experiments on the thresholds of perception of lifted weights. A theoretical rationale (not universally accepted) was subsequently provided by Gustav Fechner, so the rule is known either as the Weber Law or as the Weber–Fechner law; the constant k is called the Weber constant. It is true, at least to a good approximation, of many but not all sensory dimensions, for example the brightness of lights, and the intensity and the pitch of sounds. It is not true, however, for the wavelength of light. Stanley Smith Stevens argued that it would hold only for what he called prothetic sensory continua, where change of input takes the form of increase in intensity or something obviously analogous; it would not hold for metathetic continua, where change of input produces a qualitative rather than a quantitative change of the percept. Stevens developed his own law, called Stevens' Power Law, that raises the stimulus to a constant power while, like Weber, also multiplying it by a constant factor in order to achieve the perceived stimulus. The JND is a statistical, rather than an exact quantity: from trial to trial, the difference that a given person notices will vary somewhat, and it is therefore necessary to conduct many trials in order to determine the threshold. The JND usually reported is the difference that a person notices on 50% of trials. If a different proportion is used, this should be included in the description—for example one might report the value of the "75% JND". Modern approaches to psychophysics, for example signal detection theory, imply that the observed JND, even in this statistical sense, is not an absolute quantity, but will depend on situational and motivational as well as perceptual factors. For example, when a researcher flashes a very dim light, a participant may report seeing it on some trials but not on others. The JND formula has an objective interpretation (implied at the start of this entry) as the disparity between levels of the presented stimulus that is detected on 50% of occasions by a particular observed response, rather than what is subjectively "noticed" or as a difference in magnitudes of consciously experienced 'sensations'. This 50%-discriminated disparity can be used as a universal unit of measurement of the psychological distance of the level of a feature in an object or situation and an internal standard of comparison in memory, such as the 'template' for a category or the 'norm' of recognition.
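The Weber relation ΔI/I = k above translates directly into a few lines of code. The following Python sketch is only a transcription of that formula; the 0.6% Weber fraction used in the example is the approximate pitch-discrimination value quoted in the music section of this article, and the function names are invented for the illustration.

```python
def weber_jnd(reference: float, k: float) -> float:
    """Smallest detectable change predicted by Weber's law: delta_I = k * I."""
    return k * reference

def is_noticeable(reference: float, increment: float, k: float) -> bool:
    """True if the increment reaches the predicted 50%-detection threshold."""
    return increment >= weber_jnd(reference, k)

if __name__ == "__main__":
    k_pitch = 0.006                    # ~0.6% JND for sine-wave pitch above 1 kHz (see below)
    f0 = 2000.0                        # reference tone in Hz
    print(weber_jnd(f0, k_pitch))      # -> 12.0 Hz predicted JND at 2 kHz
    print(is_noticeable(f0, 5.0, k_pitch))   # -> False: a 5 Hz step at 2 kHz goes unnoticed
```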
The JND-scaled distances from norm can be combined among observed and inferred psychophysical functions to generate diagnostics among hypothesised information-transforming (mental) processes mediating observed quantitative judgments. Music production applications In music production, a single change in a property of sound which is below the JND does not affect perception of the sound. For amplitude, the JND for humans is around 1 dB. The JND for tone is dependent on the tone's frequency content. Below 500 Hz, the JND is about 3 Hz for sine waves; above 1000 Hz, the JND for sine waves is about 0.6% (about 10 cents). The JND is typically tested by playing two tones in quick succession with the listener asked if there was a difference in their pitches. The JND becomes smaller if the two tones are played simultaneously as the listener is then able to discern beat frequencies. The total number of perceptible pitch steps in the range of human hearing is about 1,400; the total number of notes in the equal-tempered scale, from 16 to 16,000 Hz, is 120. In speech perception JND analysis is frequently occurring in both music and speech, the two being related and overlapping in the analysis of speech prosody (i.e. speech melody). Although JND varies as a function of the frequency band being tested, it has been shown that JND for the best performers at around 1 kHz is well below 1 Hz, (i.e. less than a tenth of a percent). It is, however, important to be aware of the role played by critical bandwidth when performing this kind of analysis. When analysing speech melody, rather than musical tones, accuracy decreases. This is not surprising given that speech does not stay at fixed intervals in the way that tones in music do. Johan 't Hart (1981) found that JND for speech averaged between 1 and 2 STs but concluded that "only differences of more than 3 semitones play a part in communicative situations". Note that, given the logarithmic characteristics of Hz, for both music and speech perception results should not be reported in Hz but either as percentages or in STs (5 Hz between 20 and 25 Hz is very different from 5 Hz between 2000 and 2005 Hz, but an ~18.9% or 3 semitone increase is perceptually the same size difference, regardless of whether one starts at 20Hz or at 2000Hz). Marketing applications Weber's law has important applications in marketing. Manufacturers and marketers endeavor to determine the relevant JND for their products for two very different reasons: so that negative changes (e.g. reductions in product size or quality, or increase in product price) are not discernible to the public (i.e. remain below JND) and so that product improvements (e.g. improved or updated packaging, larger size or lower price) are very apparent to consumers without being wastefully extravagant (i.e. they are at or just above the JND). When it comes to product improvements, marketers very much want to meet or exceed the consumer's differential threshold; that is, they want consumers to readily perceive any improvements made in the original products. Marketers use the JND to determine the amount of improvement they should make in their products. Less than the JND is wasted effort because the improvement will not be perceived; more than the JND is again wasteful because it reduces the level of repeat sales. On the other hand, when it comes to price increases, less than the JND is desirable because consumers are unlikely to notice it. Haptics applications Weber's law is used in haptic devices and robotic applications. 
Exerting the proper amount of force on a human operator is a critical aspect of human-robot interaction and teleoperation scenarios, and it can greatly improve the user's performance in accomplishing a task. See also Absolute threshold ABX test Color difference Limen Minimal clinically important difference Mutatis mutandis Psychometric function Sensor resolution Visual perception Weber–Fechner law References Citations Sources Perception Psychophysics
Just-noticeable difference
Physics
1,579
51,395,548
https://en.wikipedia.org/wiki/Object%20point
Object points are an approach used in software development effort estimation under some models such as COCOMO II. Object points are a way of estimating effort size, similar to Source Lines Of Code (SLOC) or Function Points. They are not necessarily related to objects in Object-oriented programming; the objects referred to include screens, reports, and modules of the language. The number of raw objects and the complexity of each are estimated, and a weighted total Object-Point count is then computed and used as the basis for estimates of the effort needed. See also COCOMO (Constructive Cost Model) Comparison of development estimation software Function point Software development effort estimation Software Sizing Source lines of code Use Case Points References Software development
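As a rough sketch of how a weighted Object-Point count can be computed from raw object counts and their estimated complexity, the following Python fragment tallies screens, reports, and 3GL modules. The weight table is illustrative, written in the spirit of COCOMO II's Application Composition model; the actual weights and object types should be taken from the model's documentation rather than from this sketch.

```python
# Illustrative complexity weights only; verify against the COCOMO II
# Application Composition model before using them for real estimates.
WEIGHTS = {
    "screen": {"simple": 1, "medium": 2, "difficult": 3},
    "report": {"simple": 2, "medium": 5, "difficult": 8},
    "3gl_module": {"simple": 10, "medium": 10, "difficult": 10},
}

def object_points(counts):
    """counts: mapping of (object type, complexity level) -> number of raw objects."""
    return sum(WEIGHTS[kind][level] * n for (kind, level), n in counts.items())

example = {
    ("screen", "simple"): 5,
    ("screen", "difficult"): 2,
    ("report", "medium"): 3,
    ("3gl_module", "simple"): 1,
}
print(object_points(example))  # 5*1 + 2*3 + 3*5 + 1*10 = 36
```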
Object point
Technology,Engineering
140
319,252
https://en.wikipedia.org/wiki/Strict%20function
In computer science and computer programming, a function f is said to be strict if, when applied to a non-terminating expression, it also fails to terminate. A strict function in the denotational semantics of programming languages is a function f where f(⊥) = ⊥. The entity ⊥, called bottom, denotes an expression that does not return a normal value, either because it loops endlessly or because it aborts due to an error such as division by zero. A function that is not strict is called non-strict. A strict programming language is one in which user-defined functions are always strict. Intuitively, non-strict functions correspond to control structures. Operationally, a strict function is one that always evaluates its argument; a non-strict function is one that might not evaluate some of its arguments. Functions having more than one parameter can be strict or non-strict in each parameter independently, as well as jointly strict in several parameters simultaneously. As an example, the if-then-else expression of many programming languages, called ?: in languages inspired by C, may be thought of as a function of three parameters. This function is strict in its first parameter, since the function must know whether its first argument evaluates to true or to false before it can return; but it is non-strict in its second parameter, because (for example) if(false,⊥,1) = 1, as well as non-strict in its third parameter, because (for example) if(true,2,⊥) = 2. However, it is jointly strict in its second and third parameters, since if(true,⊥,⊥) = ⊥ and if(false,⊥,⊥) = ⊥. In a non-strict functional programming language, strictness analysis refers to any algorithm used to prove the strictness of a function with respect to one or more of its arguments. Such functions can be compiled to a more efficient calling convention, such as call by value, without changing the meaning of the enclosing program. See also Eager evaluation Lazy evaluation Short-circuit evaluation References Formal methods Denotational semantics Evaluation strategy
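A small illustration of strict versus non-strict argument handling can be given in Python, which evaluates function arguments eagerly: an ordinary user-defined conditional is therefore strict in all of its parameters, while passing zero-argument thunks recovers non-strict behaviour. The exception-raising bottom below is only a stand-in for ⊥, since Python has no true non-terminating-value semantics.

```python
def bottom():
    """Stand-in for the bottom value ⊥: never produces a normal result."""
    raise RuntimeError("no value")

# Python evaluates arguments before the call, so this user-defined conditional
# is strict in all three parameters: if_then_else(True, 2, bottom()) raises.
def if_then_else(cond, then_val, else_val):
    return then_val if cond else else_val

# Passing thunks (zero-argument callables) simulates non-strictness:
# the untaken branch is never evaluated.
def if_then_else_lazy(cond, then_thunk, else_thunk):
    return then_thunk() if cond else else_thunk()

print(if_then_else_lazy(True, lambda: 2, bottom))   # 2, third argument never called
print(if_then_else_lazy(False, bottom, lambda: 1))  # 1, second argument never called

try:
    if_then_else(True, 2, bottom())   # strict: bottom() is evaluated before the call
except RuntimeError as e:
    print("strict version hit bottom:", e)
```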
Strict function
Engineering
423
11,940,462
https://en.wikipedia.org/wiki/Abitare
Abitare (which translates to "live" or "dwell"), published monthly in Milan, Italy, is a design magazine. It was first published in 1961. History and profile Abitare was launched in Milan in 1961 by Piera Peroni. It was devoted to architecture, interior design, furniture, product design and graphic arts and was published both in Italian and English. In 1976, the magazine was sold to the Segesta publishing group. Later it became part of the RCS Group and began to be published by RCS MediaGroup. Shortly after the founding of the magazine, the postwar architect Eugenio Gentili Tedeschi joined Peroni. In addition to writing for the magazine, he later served as de facto editor-in-chief with Franca Santi. Stefano Boeri, Chiara Maranzana, Mario Piazza and Maria Giulia Zunino were among the editors-in-chief of the magazine. The magazine temporarily ceased print publication in March 2014; however, its online version continued to publish content. The magazine was relaunched in October 2014 with a new format and new graphics under the direction of Silvia Botti. See also List of magazines published in Italy References External links Official website 1961 establishments in Italy Architecture magazines Design magazines English-language magazines Italian-language magazines Magazines established in 1961 Magazines published in Milan Monthly magazines published in Italy
Abitare
Engineering
278
2,450,817
https://en.wikipedia.org/wiki/Lunar%20theory
Lunar theory attempts to account for the motions of the Moon. There are many small variations (or perturbations) in the Moon's motion, and many attempts have been made to account for them. After centuries of being problematic, lunar motion can now be modeled to a very high degree of accuracy (see section Modern developments). Lunar theory includes: the background of general theory; including mathematical techniques used to analyze the Moon's motion and to generate formulae and algorithms for predicting its movements; and also quantitative formulae, algorithms, and geometrical diagrams that may be used to compute the Moon's position for a given time; often by the help of tables based on the algorithms. Lunar theory has a history of over 2000 years of investigation. Its more modern developments have been used over the last three centuries for fundamental scientific and technological purposes, and are still being used in that way. Applications Applications of lunar theory have included the following: In the eighteenth century, comparison between lunar theory and observation was used to test Newton's law of universal gravitation by the motion of the lunar apogee. In the eighteenth and nineteenth centuries, navigational tables based on lunar theory, initially in the Nautical Almanac, were much used for the determination of longitude at sea by the method of lunar distances. In the very early twentieth century, comparison between lunar theory and observation was used in another test of gravitational theory, to test (and rule out) Simon Newcomb's suggestion that a well-known discrepancy in the motion of the perihelion of Mercury might be explained by a fractional adjustment of the power -2 in Newton's inverse square law of gravitation (the discrepancy was later successfully explained by the general theory of relativity). In the mid-twentieth century, before the development of atomic clocks, lunar theory and observation were used in combination to implement an astronomical time scale (ephemeris time) free of the irregularities of mean solar time. In the late twentieth and early twenty-first centuries, modern developments of lunar theory are being used in the Jet Propulsion Laboratory Development Ephemeris series of models of the Solar System, in conjunction with high-precision observations, to test the exactness of physical relationships associated with the general theory of relativity, including the strong equivalence principle, relativistic gravitation, geodetic precession, and the constancy of the gravitational constant. History The Moon has been observed for millennia. Over these ages, various levels of care and precision have been possible, according to the techniques of observation available at any time. There is a correspondingly long history of lunar theories: it stretches from the times of the Babylonian and Greek astronomers, down to modern lunar laser ranging. 
Among notable astronomers and mathematicians down the ages, whose names are associated with lunar theories, are: Babylonian/Chaldean Naburimannu Kidinnu Soudines Greek/Hellenistic Hipparchus Ptolemy Arab Ibn al-Shatir European, 16th to early 20th centuries Tycho Brahe Johannes Kepler Jeremiah Horrocks Ismaël Bullialdus John Flamsteed Isaac Newton Edmond Halley Leonhard Euler Alexis Clairaut Jean d'Alembert Tobias Mayer Johann Tobias Bürg Pierre-Simon Laplace Philippe le Doulcet Johann Karl Burckhardt Peter Andreas Hansen Charles-Eugène Delaunay John Couch Adams North American, 19th to early 20th centuries Simon Newcomb George William Hill Ernest William Brown Wallace John Eckert Other notable mathematicians and mathematical astronomers also made significant contributions. The history can be considered to fall into three parts: from ancient times to Newton; the period of classical (Newtonian) physics; and modern developments. Ancient times to Newton Babylon Of Babylonian astronomy, practically nothing was known to historians of science before the 1880s. Surviving ancient writings of Pliny had made bare mention of three astronomical schools in Mesopotamia – at Babylon, Uruk, and 'Hipparenum' (possibly 'Sippar'). But definite modern knowledge of any details only began when Joseph Epping deciphered cuneiform texts on clay tablets from a Babylonian archive: In these texts he identified an ephemeris of positions of the Moon. Since then, knowledge of the subject, still fragmentary, has had to be built up by painstaking analysis of deciphered texts, mainly in numerical form, on tablets from Babylon and Uruk (no trace has yet been found of anything from the third school mentioned by Pliny). To the Babylonian astronomer Kidinnu (in Greek or Latin, Kidenas or Cidenas) has been attributed the invention (5th or 4th century BC) of what is now called "System  B" for predicting the position of the moon, taking account that the moon continually changes its speed along its path relative to the background of fixed stars. This system involved calculating daily stepwise changes of lunar speed, up or down, with a minimum and a maximum approximately each month. The basis of these systems appears to have been arithmetical rather than geometrical, but they did approximately account for the main lunar inequality now known as the equation of the center. The Babylonians kept very accurate records for hundreds of years of new moons and eclipses. Some time between the years 500 BC and 400 BC they identified and began to use the 19 year cyclic relation between lunar months and solar years now known as the Metonic cycle. This helped them build a numerical theory of the main irregularities in the Moon's motion, reaching remarkably good estimates for the (different) periods of the three most prominent features of the Moon's motion: The synodic month, i.e. the mean period for the phases of the Moon. Now called "System B", it reckons the synodic month as 29 days and (sexagesimally) 3,11;0,50 "time degrees", where each time degree is one degree of the apparent motion of the stars, or 4 minutes of time, and the sexagesimal values after the semicolon are fractions of a time degree. This converts to 29.530594 days = 29d 12h 44m 3.33s, to compare with a modern value (as at 1900 Jan 0) of 29.530589 days, or 29d 12h 44m 2.9s. This same value was used by Hipparchos and Ptolemy, was used throughout the Middle Ages, and still forms the basis of the Hebrew calendar. 
The mean lunar velocity relative to the stars they estimated at 13° 10′ 35″ per day, giving a corresponding month of 27.321598 days, to compare with modern values of 13° 10′ 35.0275″ and 27.321582 days. The anomalistic month, i.e. the mean period for the Moon's approximately monthly accelerations and decelerations in its rate of movement against the stars, had a Babylonian estimate of 27.5545833 days, to compare with a modern value 27.554551 days. The draconitic month, i.e. the mean period with which the path of the Moon against the stars deviates first north and then south in ecliptic latitude by comparison with the ecliptic path of the Sun, was indicated by a number of different parameters leading to various estimates, e.g. of 27.212204 days, to compare with a modern value of 27.212221, but the Babylonians also had a numerical relationship that 5458 synodic months were equal to 5923 draconitic months, which when compared with their accurate value for the synodic month leads to practically exactly the modern figure for the draconitic month. The Babylonian estimate for the synodic month was adopted for the greater part of two millennia by Hipparchus, Ptolemy, and medieval writers (and it is still in use as part of the basis for the calculated Hebrew (Jewish) calendar). Greece and Hellenistic Egypt Thereafter, from Hipparchus and Ptolemy in the Bithynian and Ptolemaic epochs down to the time of Newton's work in the seventeenth century, lunar theories were composed mainly with the help of geometrical ideas, inspired more or less directly by long series of positional observations of the moon. Prominent in these geometrical lunar theories were combinations of circular motions – applications of the theory of epicycles. Hipparchus Hipparchus, whose works are mostly lost and known mainly from quotations by other authors, assumed that the Moon moved in a circle inclined at 5° to the ecliptic, rotating in a retrograde direction (i.e. opposite to the direction of annual and monthly apparent movements of the Sun and Moon relative to the fixed stars) once in 18 years. The circle acted as a deferent, carrying an epicycle along which the Moon was assumed to move in a retrograde direction. The center of the epicycle moved at a rate corresponding to the mean change in Moon's longitude, while the period of the Moon around the epicycle was an anomalistic month. This epicycle approximately provided for what was later recognized as the elliptical inequality, the equation of the center, and its size approximated to an equation of the center of about 5° 1'. This figure is much smaller than the modern value: but it is close to the difference between the modern coefficients of the equation of the center (1st term) and that of the evection: the difference is accounted for by the fact that the ancient measurements were taken at times of eclipses, and the effect of the evection (which subtracts under those conditions from the equation of the center) was at that time unknown and overlooked. For further information see also separate article Evection. Ptolemy Ptolemy's work the Almagest had wide and long-lasting acceptance and influence for over a millennium. He gave a geometrical lunar theory that improved on that of Hipparchus by providing for a second inequality of the Moon's motion, using a device that made the apparent apogee oscillate a little – prosneusis of the epicycle. 
This second inequality or second anomaly accounted rather approximately, not only for the equation of the center, but also for what became known (much later) as the evection. But this theory, applied to its logical conclusion, would make the distance (and apparent diameter) of the Moon appear to vary by a factor of about 2, which is clearly not seen in reality. (The apparent angular diameter of the Moon does vary monthly, but only over a much narrower range of about 0.49°–0.55°.) This defect of the Ptolemaic theory led to proposed replacements by Ibn al-Shatir in the 14th century and by Copernicus in the 16th century. Ibn al-Shatir and Copernicus Significant advances in lunar theory were made by the Arab astronomer, Ibn al-Shatir (1304–1375). Drawing on the observation that the distance to the Moon did not change as drastically as required by Ptolemy's lunar model, he produced a new lunar model that replaced Ptolemy's crank mechanism with a double epicycle model that reduced the computed range of distances of the Moon from the Earth. A similar lunar theory, developed some 150 years later by the Renaissance astronomer Nicolaus Copernicus, had the same advantage concerning the lunar distances. Tycho Brahe, Johannes Kepler, and Jeremiah Horrocks Tycho Brahe and Johannes Kepler refined the Ptolemaic lunar theory, but did not overcome its central defect of giving a poor account of the (mainly monthly) variations in the Moon's distance, apparent diameter and parallax. Their work added to the lunar theory three substantial further discoveries. The nodes and the inclination of the lunar orbital plane both appear to librate, with a monthly (according to Tycho) or semi-annual period (according to Kepler). The lunar longitude has a twice-monthly Variation, by which the Moon moves faster than expected at new and full moon, and slower than expected at the quarters. There is also an annual effect, by which the lunar motion slows down a little in January and speeds up a little in July: the annual equation. The refinements of Brahe and Kepler were recognized by their immediate successors as improvements, but their seventeenth-century successors tried numerous alternative geometrical configurations for the lunar motions to improve matters further. A notable success was achieved by Jeremiah Horrocks, who proposed a scheme involving an approximate 6 monthly libration in the position of the lunar apogee and also in the size of the elliptical eccentricity. This scheme had the great merit of giving a more realistic description of the changes in distance, diameter and parallax of the Moon. Newton A first gravitational period for lunar theory started with the work of Newton. He was the first to define the problem of the perturbed motion of the Moon in recognisably modern terms. His groundbreaking work is shown for example in the Principia in all versions including the first edition published in 1687. Newton's biographer, David Brewster, reported that the complexity of Lunar Theory impacted Newton's health: "[H]e was deprived of his appetite and sleep" during his work on the problem in 1692–3, and told the astronomer John Machin that "his head never ached but when he was studying the subject". According to Brewster, Edmund Halley also told John Conduitt that when pressed to complete his analysis Newton "always replied that it made his head ache, and kept him awake so often, that he would think of it no more" [Emphasis in original]. 
Solar perturbation of lunar motion Newton identified how to evaluate the perturbing effect on the relative motion of the Earth and Moon, arising from their gravity towards the Sun, in Book 1, Proposition 66, and in Book 3, Proposition 25. The starting-point for this approach is Corollary VI to the laws of motion. This shows that if the external accelerative forces from some massive body happen to act equally and in parallel on the various other bodies considered, then those bodies would be affected equally, and in that case their motions (relative to each other) would continue as if there were no such external accelerative forces at all. It is only in the case that the external forces (e.g. in Book 1, Prop. 66, and Book 3, Prop. 25, the gravitational attractions towards the Sun) are different in size or in direction in their accelerative effects on the different bodies considered (e.g. on the Earth and Moon), that consequent effects are appreciable on the relative motions of the latter bodies. (Newton referred to accelerative forces or accelerative gravity due to some external massive attractor such as the Sun. The measure he used was the acceleration that the force tends to produce (in modern terms, force per unit mass), rather than what we would now call the force itself.) Thus Newton concluded that it is only the difference between the Sun's accelerative attraction on the Moon and the Sun's attraction on the Earth that perturbs the motion of the Moon relative to the Earth. Newton then in effect used vector decomposition of forces to carry out this analysis. In Book 1, Proposition 66 and in Book 3, Proposition 25, he showed by a geometrical construction, starting from the total gravitational attraction of the Sun on the Earth, and of the Sun on the Moon, the difference that represents the perturbing effect on the motion of the Moon relative to the Earth. In summary, line LS in Newton's diagram as shown below represents the size and direction of the perturbing acceleration acting on the Moon in the Moon's current position P (line LS does not pass through point P, but the text shows that this is not intended to be significant; it is a result of the scale factors and the way the diagram has been built up). Shown here is Newton's diagram from the first (1687) Latin edition of the Principia (Book 3, Proposition 25, p. 434). Here he introduced his analysis of perturbing accelerations on the Moon in the Sun-Earth-Moon system. Q represents the Sun, S the Earth, and P the Moon. Parts of this diagram represent distances, other parts gravitational accelerations (attractive forces per unit mass). In a dual significance, SQ represents the Earth-Sun distance, and then it also represents the size and direction of the Earth-Sun gravitational acceleration. Other distances in the diagram are then in proportion to distance SQ. Other attractions are in proportion to attraction SQ. The Sun's attractions are SQ (on the Earth) and LQ (on the Moon). The size of LQ is drawn so that the ratio of attractions LQ:SQ is the inverse square of the ratio of distances PQ:SQ. (Newton constructs KQ=SQ, giving an easier view of the proportions.) The Earth's attraction on the Moon acts along direction PS. (But line PS signifies only distance and direction so far; nothing has been defined about the scale factor between solar and terrestrial attractions.) After showing solar attractions LQ on the Moon and SQ on the Earth, on the same scale, Newton then makes a vector decomposition of LQ into components LM and MQ. 
Then he identifies the perturbing acceleration on the Moon as the difference of this from SQ. SQ and MQ are parallel to each other, so SQ can be directly subtracted from MQ, leaving MS. The resulting difference, after subtracting SQ from LQ, is therefore the vector sum of LM and MS: these add up to a perturbing acceleration LS. Later Newton identified another resolution of the perturbing acceleration LM+MS = LS, into orthogonal components: a transverse component parallel to LE, and a radial component, effectively ES. Newton's diagrammatic scheme, since his time, has been re-presented in other and perhaps visually clearer ways. Shown here is a vector presentation indicating, for two different positions, P1 and P2, of the Moon in its orbit around the Earth, the respective vectors LS1 and LS2 for the perturbing acceleration due to the Sun. The Moon's position at P1 is fairly close to what it was at P in Newton's diagram; corresponding perturbation LS1 is like Newton's LS in size and direction. At another position P2, the Moon is farther away from the Sun than the Earth is, the Sun's attraction LQ2 on the Moon is weaker than the Sun's attraction SQ=SQ2 on the Earth, and then the resulting perturbation LS2 points obliquely away from the Sun. Constructions like those in Newton's diagram can be repeated for many different positions of the Moon in its orbit. For each position, the result is a perturbation vector like LS1 or LS2 in the second diagram. Shown here is an often-presented form of the diagram that summarises sizes and directions of the perturbation vectors for many different positions of the Moon in its orbit. Each small arrow is a perturbation vector like LS, applicable to the Moon in the particular position around the orbit from which the arrow begins. The perturbations on the Moon when it is nearly in line along the Earth-Sun axis, i.e. near new or full moon, point outwards, away from the Earth. When the Moon-Earth line is 90° from the Earth-Sun axis they point inwards, towards the Earth, with a size that is only half the maximum size of the axial (outwards) perturbations. (Newton gave a rather good quantitative estimate for the size of the solar perturbing force: at quadrature where it adds to the Earth's attraction he put it at of the mean terrestrial attraction, and twice as much as that at the new and full moons where it opposes and diminishes the Earth's attraction.) Newton also showed that the same pattern of perturbation applies, not only to the Moon, in its relation to the Earth as disturbed by the Sun, but also to other particles more generally in their relation to the solid Earth as disturbed by the Sun (or by the Moon); for example different portions of the tidal waters at the Earth's surface. The study of the common pattern of these perturbing accelerations grew out of Newton's initial study of the perturbations of the Moon, which he also applied to the forces moving tidal waters. Nowadays this common pattern itself has become often known as a tidal force whether it is being applied to the disturbances of the motions of the Moon, or of the Earth's tidal waters – or of the motions of any other object that suffers perturbations of analogous pattern. After introducing his diagram 'to find the force of the Sun to perturb the Moon' in Book 3, Proposition 25, Newton developed a first approximation to the solar perturbing force, showing in further detail how its components vary as the Moon follows its monthly path around the Earth. 
He also took the first steps in investigating how the perturbing force shows its effects by producing irregularities in the lunar motions. For a selected few of the lunar inequalities, Newton showed in some quantitative detail how they arise from the solar perturbing force. Much of this lunar work of Newton's was done in the 1680s, and the extent and accuracy of his first steps in the gravitational analysis was limited by several factors, including his own choice to develop and present the work in what was, on the whole, a difficult geometrical way, and by the limited accuracy and uncertainty of many astronomical measurements in his time. Classical gravitational period after Newton The main aim of Newton's successors, from Leonhard Euler, Alexis Clairaut and Jean d'Alembert in the mid-eighteenth century, down to Ernest William Brown in the late nineteenth and early twentieth century, was to account completely and much more precisely for the moon's motions on the basis of Newton's laws, i.e. the laws of motion and of universal gravitation by attractions inversely proportional to the squares of the distances between the attracting bodies. They also wished to put the inverse-square law of gravitation to the test, and for a time in the 1740s it was seriously doubted, on account of what was then thought to be a large discrepancy between the Newton-theoretical and the observed rates in the motion of the lunar apogee. However Clairaut showed shortly afterwards (1749–50) that at least the major cause of the discrepancy lay not in the lunar theory based on Newton's laws, but in excessive approximations that he and others had relied on to evaluate it. Most of the improvements in theory after Newton were made in algebraic form: they involved voluminous and highly laborious amounts of infinitesimal calculus and trigonometry. It also remained necessary, for completing the theories of this period, to refer to observational measurements. Results of the theories The lunar theorists used (and invented) many different mathematical approaches to analyse the gravitational problem. Not surprisingly, their results tended to converge. From the time of the earliest gravitational analysts among Newton's successors, Euler, Clairaut and d'Alembert, it was recognized that nearly all of the main lunar perturbations could be expressed in terms of just a few angular arguments and coefficients. These can be represented by: the mean motions or positions of the Moon and the Sun, together with three coefficients and three angular positions, which together define the shape and location of their apparent orbits: the two eccentricities (, about 0.0549, and , about 0.01675) of the ellipses that approximate to the apparent orbits of the Moon and the Sun; the angular direction of the perigees ( and ) (or their opposite points the apogees) of the two orbits; and the angle of inclination (, mean value about 18523") between the planes of the two orbits, together with the direction () of the line of nodes in which those two planes intersect. The ascending node () is the node passed by the Moon when it is tending northwards relative to the ecliptic. From these basic parameters, just four basic differential angular arguments are enough to express, in their different combinations, nearly all of the most significant perturbations of the lunar motions. 
They are given here with their conventional symbols due to Delaunay; they are sometimes known as the Delaunay arguments: the Moon's mean anomaly (angular distance of the mean longitude of the Moon from the mean longitude of its perigee ); the Sun's mean anomaly (angular distance of the mean longitude of the Sun from the mean longitude of its perigee ); the Moon's mean argument of latitude (angular distance of the mean longitude of the Moon from the mean longitude of its ascending (northward-bound) node ); the Moon's mean (solar) elongation (angular distance of the mean longitude of the Moon from the mean longitude of the Sun). This work culminated into Brown's lunar theory (1897–1908) and Tables of the Motion of the Moon (1919). These were used in the American Ephemeris and Nautical Almanac until 1968, and in a modified form until 1984. Largest or named lunar inequalities Several of the largest lunar perturbations in longitude (contributions to the difference in its true ecliptic longitude relative to its mean longitude) have been named. In terms of the differential arguments, they can be expressed in the following way, with coefficients rounded to the nearest second of arc ("): Equation of the center The Moon's equation of the center, or elliptic inequality, was known at least in approximation, to the ancients from the Babylonians and Hipparchus onwards. Knowledge of more recent date is that it corresponds to the approximate application of Kepler's law of equal areas in an elliptical orbit, and represents the speeding-up of the Moon as its distance from the Earth decreases while it moves towards its perigee, and then its slowing down as its distance from the Earth increases while it moves towards its apogee. The effect on the Moon's longitude can be approximated by a series of terms, of which the first three are . Evection The evection (or its approximation) was known to Ptolemy, but its name and knowledge of its cause dates from the 17th century. Its effect on the Moon's longitude has an odd-appearing period of about 31.8 days. This can be represented in a number of ways, for example as the result of an approximate 6-monthly libration in the position of perigee, with an accompanying 6-monthly pulsation in the size of the Moon's orbital eccentricity. Its principal term is . Variation The Variation, discovered by Tycho Brahe, is a speeding-up of the Moon as it approaches new-moon and full-moon, and a slowing-down as it approaches first and last quarter. Its gravitational explanation with a quantitative estimate was first given by Newton. Its principal term is . Annual equation The annual equation, also discovered by Brahe, was qualitatively explained by Newton in terms that the Moon's orbit becomes slightly expanded in size, and longer in period, when the Earth is at perihelion closest to the Sun at the beginning of January, and the Sun's perturbing effect is strongest, and then slightly contracted in size and shorter in period when the Sun is most distant in early July, so that its perturbing effect is weaker: the modern value for the principal term due to this effect is . Parallactic inequality The parallactic inequality, first found by Newton, makes Brahe's Variation a little asymmetric as a result of the finite distance and non-zero parallax of the Sun. Its effect is that the Moon is a little behind at first quarter, and a little ahead at last quarter. Its principal term is . 
Reduction to the ecliptic The reduction to the ecliptic represents the geometric effect of expressing the Moon's motion in terms of a longitude in the plane of the ecliptic, although its motion is really taking place in a plane that is inclined by about 5 degrees. Its principal term is . The analysts of the mid-18th century expressed the perturbations of the Moon's position in longitude using about 25-30 trigonometrical terms. However, work in the nineteenth and twentieth century led to very different formulations of the theory so these terms are no longer current. The number of terms needed to express the Moon's position with the accuracy sought at the beginning of the twentieth century was over 1400; and the number of terms needed to emulate the accuracy of modern numerical integrations based on laser-ranging observations is in the tens of thousands: there is no limit to the increase in number of terms needed as requirements of accuracy increase. Modern developments Digital computers and lunar laser ranging Since the Second World War and especially since the 1960s, lunar theory has been further developed in a somewhat different way. This has been stimulated in two ways: on the one hand, by the use of automatic digital computation, and on the other hand, by modern observational data-types, with greatly increased accuracy and precision. Wallace John Eckert, a student of Ernest William Brown and employee at IBM, used the experimental digital computers developed there after the Second World War for computation of astronomical ephemerides. One of the projects was to put Brown's lunar theory into the machine and evaluate the expressions directly. Another project was something entirely new: a numerical integration of the equations of motion for the Sun and the four major planets. This became feasible only after electronic digital computers became available. Eventually this led to the Jet Propulsion Laboratory Development Ephemeris series. In the meantime, Brown's theory was improved with better constants and the introduction of Ephemeris Time and the removal of some empirical corrections associated with this. This led to the Improved Lunar Ephemeris (ILE), which, with some minor successive improvements, was used in the astronomical almanacs from 1960 through 1983 and enabled lunar landing missions. The most significant improvement of position observations of the Moon have been the Lunar Laser Ranging measurements, obtained using Earth-bound lasers and special retroreflectors placed on the surface of the Moon. The time-of-flight of a pulse of laser light to one of the retroreflectors and back gives a measure of the Moon's distance at that time. The first of five retroreflectors that are operational today was taken to the Moon in the Apollo 11 spacecraft in July 1969 and placed in a suitable position on the Moon's surface by Buzz Aldrin. Range precision has been extended further by the Apache Point Observatory Lunar Laser-ranging Operation, established in 2005. Numerical integrations, relativity, tides, librations The lunar theory, as developed numerically to fine precision using these modern measures, is based on a larger range of considerations than the classical theories: It takes account not only of gravitational forces (with relativistic corrections) but also of many tidal and geophysical effects and a greatly extended theory of lunar libration. Like many other scientific fields this one has now developed so as to be based on the work of large teams and institutions. 
An institution notably taking one of the leading parts in these developments has been the Jet Propulsion Laboratory (JPL) at California Institute of Technology; and names particularly associated with the transition, from the early 1970s onwards, from classical lunar theories and ephemerides towards the modern state of the science include those of J. Derral Mulholland and J.G. Williams, and for the linked development of solar system (planetary) ephemerides E. Myles Standish. Since the 1970s, JPL has produced a series of numerically integrated Development Ephemerides (numbered DExxx), incorporating Lunar Ephemerides (LExxx). Planetary and lunar ephemerides DE200/LE200 were used in the official Astronomical Almanac ephemerides for 1984–2002, and ephemerides DE405/LE405, of further improved accuracy and precision, have been in use as from the issue for 2003. The current ephemeris is DE440. Analytical developments In parallel with these developments, a new class of analytical lunar theory has also been developed in recent years, notably the Ephemeride Lunaire Parisienne by Jean Chapront and Michelle Chapront-Touzé from the Bureau des Longitudes. Using computer-assisted algebra, the analytical developments have been taken further than previously could be done by the classical analysts working manually. Also, some of these new analytical theories (like ELP) have been fitted to the numerical ephemerides previously developed at JPL as mentioned above. The main aims of these recent analytical theories, in contrast to the aims of the classical theories of past centuries, have not been to generate improved positional data for current dates; rather, their aims have included the study of further aspects of the motion, such as long-term properties, which may not so easily be apparent from the modern numerical theories themselves. Notes References Bibliography 'AE 1871': "Nautical Almanac & Astronomical Ephemeris" for 1871, (London, 1867). E W Brown (1896). An Introductory Treatise on the Lunar Theory, Cambridge University Press. E W Brown. "Theory of the Motion of the Moon", Memoirs of the Royal Astronomical Society, 53 (1897), 39–116. E W Brown. "Theory of the Motion of the Moon", Memoirs of the Royal Astronomical Society, 53 (1899), 163–202. E W Brown. "Theory of the Motion of the Moon", Memoirs of the Royal Astronomical Society, 54 (1900), 1–63. E W Brown. "On the verification of the Newtonian law", Monthly Notes of the Royal Astronomical Society 63 (1903), 396–397. E W Brown. "Theory of the Motion of the Moon", Memoirs of the Royal Astronomical Society, 57 (1905), 51–145. E W Brown. "Theory of the Motion of the Moon", Memoirs of the Royal Astronomical Society, 59 (1908), 1–103. E W Brown (1919). Tables of the Motion of the Moon, New Haven. M Chapront-Touzé & J Chapront. "The lunar ephemeris ELP-2000", Astronomy & Astrophysics 124 (1983), 50–62. M Chapront-Touzé & J Chapront: "ELP2000-85: a semi-analytical lunar ephemeris adequate for historical times", Astronomy & Astrophysics 190 (1988), 342–352. M Chapront-Touzé & J Chapront, Analytical Ephemerides of the Moon in the 20th Century (Observatoire de Paris, 2002). J Chapront; M Chapront-Touzé; G Francou. "A new determination of lunar orbital parameters, precession constant and tidal acceleration from LLR measurements", Astronomy & Astrophysics 387 (2002), 700–709. J Chapront & G Francou. "The lunar theory ELP revisited. Introduction of new planetary perturbations", Astronomy & Astrophysics 404 (2003), 735–742. I B Cohen and Anne Whitman (1999). 
Isaac Newton: 'The Principia', a new translation, University of California Press. (For bibliographic details but no text, see external link.) J O Dickey; P L Bender; J E Faller; and others. "Lunar Laser Ranging: A Continuing Legacy of the Apollo Program", Science 265 (1994), pp. 482–490. J L E Dreyer (1906). A History of Astronomy from Thales to Kepler, Cambridge University Press, (later republished under the modified title "History of the Planetary Systems from Thales to Kepler"). W J Eckert et al. Improved Lunar Ephemeris 1952–1959: A Joint Supplement to the American Ephemeris and the (British) Nautical Almanac, (US Government Printing Office, 1954). J Epping & J N Strassmaier. "Zur Entzifferung der astronomischen Tafeln der Chaldaer" ("On the Deciphering of the Astronomical Tables of the Chaldaeans"), Stimmen aus Maria Laach, vol. 21 (1881), pp. 277–292. 'ESAE 1961': Explanatory Supplement to the Astronomical Ephemeris and the American Ephemeris and Nautical Almanac ('prepared jointly by the Nautical Almanac Offices of the United Kingdom and the United States of America'), London (HMSO), 1961. K Garthwaite; D B Holdridge & J D Mulholland. "A preliminary special perturbation theory for the lunar motion", Astronomical Journal 75 (1970), 1133. H Godfray (1885). Elementary Treatise on the Lunar Theory, London, (4th ed.). Andrew Motte (1729a) (translator). "The Mathematical Principles of Natural Philosophy, by Sir Isaac Newton, translated into English", Volume I, containing Book 1. Andrew Motte (1729b) (translator). "The Mathematical Principles of Natural Philosophy, by Sir Isaac Newton, translated into English", Volume II, containing Books 2 and 3 (with Index, Appendix containing additional (Newtonian) proofs, and "The Laws of the Moon's Motion according to Gravity", by John Machin). J D Mulholland & P J Shelus. "Improvement of the numerical lunar ephemeris with laser ranging data", Moon 8 (1973), 532. O Neugebauer (1975). A History of Ancient Mathematical Astronomy, (in 3 volumes), New York (Springer). X X Newhall; E M Standish; J G Williams. "DE102: A numerically integrated ephemeris of the Moon and planets spanning forty-four centuries", Astronomy and Astrophysics 125 (1983), 150. U S Naval Observatory (2009). "History of the Astronomical Almanac" . J G Williams et al. "Making solutions from lunar laser ranging data", Bulletin of the American Astronomical Society (1972), 4Q, 267. J.G. Williams; S.G. Turyshev; & D.H. Boggs. "Progress in Lunar Laser Ranging Tests of Relativistic Gravity", Physical Review Letters, 93 (2004), 261101. External links Gravity Effects of gravity Orbit of the Moon Time in astronomy
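The perturbing-acceleration construction described above in the section on Newton's solar perturbation of lunar motion can be sketched numerically: the perturbation is the difference between the Sun's gravitational acceleration at the Moon's position and at the Earth's position. The following Python sketch uses rounded modern values for the Sun's gravitational parameter and the Earth–Sun and Earth–Moon distances; the numbers and the two-dimensional setup are simplifying assumptions for illustration, not Newton's own computation.

```python
import math

# Approximate round values in SI units, assumed here for illustration.
GM_SUN = 1.327e20        # Sun's gravitational parameter, m^3 s^-2
AU = 1.496e11            # mean Earth-Sun distance, m
R_MOON = 3.844e8         # mean Earth-Moon distance, m

def solar_accel(pos):
    """Sun's gravitational acceleration vector at position pos (Sun at the origin)."""
    r = math.hypot(*pos)
    return tuple(-GM_SUN * x / r**3 for x in pos)

earth = (AU, 0.0)
a_earth = solar_accel(earth)

# Perturbing acceleration = Sun's pull on the Moon minus Sun's pull on the Earth,
# evaluated at a few lunar phases.
for name, moon in [("new moon", (AU - R_MOON, 0.0)),
                   ("full moon", (AU + R_MOON, 0.0)),
                   ("quadrature", (AU, R_MOON))]:
    a_moon = solar_accel(moon)
    pert = tuple(am - ae for am, ae in zip(a_moon, a_earth))
    print(name, [f"{c:+.2e}" for c in pert])

# At new and full moon the perturbation points away from the Earth along the
# Earth-Sun axis; at quadrature it points toward the Earth with roughly half
# that magnitude, matching the pattern described in the article.
```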
Lunar theory
Astronomy
8,069
72,219,089
https://en.wikipedia.org/wiki/Hemangada%20Thakura
Hemangada Thakura was the King of Mithila from 1571 AD to 1590 AD. He was also an Indian astronomer of the 16th century. He was famous for his astronomical treatise Grahan Mala. The book gave the dates of the eclipses for 1088 years, from 1620 AD to 2708 AD. The dates of lunar and solar eclipses that Hemangada Thakura fixed on the basis of his unique calculations have proved accurate to date. Early life Hemangada Thakura was born into a Maithil Brahmin family in the Mithila region of the present Bihar state in India. He was born in 1530 AD. He was the grandson of Mahamahopadhyay Mahesha Thakura and the son of Gopal Thakur. Mahesha Thakura was also the King of Mithila in the Khandwala Dynasty. History After the abdication of his father Gopal Thakur, he was handed the throne of Mithila in 1571 AD, but he was not interested in governance. In 1572 AD, he was arrested, taken to Delhi, and imprisoned for not paying taxes on time to the Mughal Empire. It is said that in prison he started writing mathematical calculations on the floor of the jail; the jailer then asked him about the mathematical figures drawn on the floor. Hemangada Thakura replied that he was trying to understand the motion of the Moon. The jailer then spread the news that Hemangada Thakura had gone mad. After hearing the news, the Mughal Emperor himself went to see Hemangada Thakura and asked about the mathematical calculations and figures drawn on the floor. Hemangada Thakura replied that he had calculated the dates of the eclipses for the next 500 years. After hearing the reply, the emperor immediately granted him a copperplate and pen for writing the calculations and told him that if his calculations proved true, he would be released from prison. There in prison, he composed his famous book Grahan Mala, which explained the eclipses for 1088 years. He predicted the date and time of the next lunar eclipse and informed the emperor. The prediction came true on the same date and time that he had calculated. On the composition of this book, the Mughal emperor not only released him but also returned the kingdom of Mithila to him, free of tax. Discovery in Astronomy Hemangada Thakura calculated the dates of the eclipses for 1088 years, from 1620 AD to 2708 AD, on the basis of his unique calculations. The eclipse dates have proved accurate to date. He composed an astronomical treatise known as Grahan Mala which explains the dates of the eclipses. In making the Panchang, scholars and pandits make use of this book. The manuscript of the book was preserved at Kameshwar Singh Darbhanga Sanskrit University but was stolen a few years ago. The university had, however, published the book in 1983, and copies are present in various libraries. The Indian National Science Academy started a research project through its national commission (2014–2022), conducted by Vanaja V, on the astronomical treatise Grahan Mala. The research project is known as "A Critical Study of Hemangada Thakkura's Grahaṇamala". References Astronomers Mithila Indian royalty 16th century in Asia 16th-century births
Hemangada Thakura
Astronomy
684
15,409,174
https://en.wikipedia.org/wiki/Piezoelectric%20accelerometer
A piezoelectric accelerometer is an accelerometer that employs the piezoelectric effect of certain materials to measure dynamic changes in mechanical variables (e.g., acceleration, vibration, and mechanical shock). As with all transducers, piezoelectrics convert one form of energy into another and provide an electrical signal in response to a quantity, property, or condition that is being measured. Using the general sensing method upon which all accelerometers are based, acceleration acts upon a seismic mass that is restrained by a spring or suspended on a cantilever beam, and converts a physical force into an electrical signal. Before the acceleration can be converted into an electrical quantity it must first be converted into either a force or a displacement. This conversion is done via the mass-spring system shown in the figure to the right. Introduction The word piezoelectric finds its roots in the Greek word piezein, which means to squeeze or press. When a physical force is exerted on the accelerometer, the seismic mass loads the piezoelectric element according to Newton's second law of motion (F = ma). The force exerted on the piezoelectric material can be observed in the change in the electrostatic force or voltage generated by the piezoelectric material. This differs from the piezoresistive effect in that piezoresistive materials experience a change in the resistance of the material rather than a change in charge or voltage. Physical force exerted on the piezoelectric can be classified as one of two types: bending or compression. Stress of the compression type can be understood as a force exerted on one side of the piezoelectric while the opposing side rests against a fixed surface, while bending involves a force being exerted on the piezoelectric from both sides. Piezoelectric materials used for the purpose of accelerometers fall into two categories: single-crystal and ceramic materials. The first and more widely used are single-crystal materials (usually quartz). Though these materials do offer a long life span in terms of sensitivity, their disadvantage is that they are generally less sensitive than some piezoelectric ceramics. The other category, ceramic materials, has a higher piezoelectric constant (sensitivity) than single-crystal materials and is less expensive to produce. Ceramics use barium titanate, lead-zirconate-lead-titanate, lead metaniobate, and other materials whose composition is considered proprietary by the company responsible for their development. The disadvantage of piezoelectric ceramics, however, is that their sensitivity degrades with time, making the longevity of the device less than that of single-crystal materials. In applications where low-sensitivity piezoelectrics are used, two or more crystals can be connected together for output multiplication. The proper material can be chosen for particular applications based on the sensitivity, frequency response, bulk resistivity, and thermal response. Due to the low output signal and high output impedance that piezoelectric accelerometers possess, there is a need for amplification and impedance conversion of the signal produced. In the past this problem was solved using a separate (external) amplifier/impedance converter. This method, however, is generally impractical due to the noise that is introduced as well as the physical and environmental constraints posed on the system as a result. 
Today IC amplifiers/impedance converters are commercially available and are generally packaged within the case of the accelerometer itself. History Behind the mystery of the operation of the piezoelectric accelerometer lie some very fundamental concepts governing the behavior of crystallographic structures. In 1880, Pierre and Jacques Curie published an experimental demonstration connecting mechanical stress and surface charge on a crystal. This phenomenon became known as the piezoelectric effect. Closely related to this phenomenon is the Curie point, named for the physicist Pierre Curie, which is the temperature above which piezoelectric material loses spontaneous polarization of its atoms. The development of the commercial piezoelectric accelerometer came about through a number of attempts to find the most effective method to measure the vibration on large structures such as bridges and on vehicles in motion such as aircraft. One attempt involved using the resistance strain gage as a device to build an accelerometer. Incidentally, it was Hans J. Meier who, through his work at MIT, is given credit as the first to construct a commercial strain gage accelerometer (circa 1938). However, the strain gage accelerometers were fragile and could only produce low resonant frequencies and they also exhibited a low frequency response. These limitations in dynamic range made it unsuitable for testing naval aircraft structures. On the other hand, the piezoelectric sensor was proven to be a much better choice over the strain gage in designing an accelerometer. The high modulus of elasticity of piezoelectric materials makes the piezoelectric sensor a more viable solution to the problems identified with the strain gage accelerometer. Simply stated, the inherent properties of the piezoelectric accelerometers made it a much better alternative to the strain gage types because of its high frequency response, and its ability to generate high resonant frequencies. The piezoelectric accelerometer allowed for a reduction in its physical size at the manufacturing level and it also provided for a higher g (standard gravity) capability relative to the strain gage type. By comparison, the strain gage type exhibited a flat frequency response up to 200 Hz while the piezoelectric type provided a flat response up to 10,000 Hz. These improvements made it possible for measuring the high frequency vibrations associated with the quick movements and short duration shocks of aircraft which before was not possible with the strain gage types. Before long, the technological benefits of the piezoelectric accelerometer became apparent and in the late 1940s, large scale production of piezoelectric accelerometers began. Today, piezoelectric accelerometers are used for instrumentation in the fields of engineering, health and medicine, aeronautics and many other different industries. Manufacturing There are two common methods used to manufacture accelerometers. One is based upon the principles of piezoresistance and the other is based on the principles of piezoelectricity. Both methods ensure that unwanted orthogonal acceleration vectors are excluded from detection. Manufacturing an accelerometer that uses piezoresistance first starts with a semiconductor layer that is attached to a handle wafer by a thick oxide layer. The semiconductor layer is then patterned to the accelerometer's geometry. This semiconductor layer has one or more apertures so that the underlying mass will have the corresponding apertures. 
Next the semiconductor layer is used as a mask to etch out a cavity in the underlying thick oxide. A mass in the cavity is supported in cantilever fashion by the piezoresistant arms of the semiconductor layer. Directly below the accelerometer's geometry is a flex cavity that allows the mass in the cavity to flex or move in direction that is orthogonal to the surface of the accelerometer. Accelerometers based upon piezoelectricity are constructed with two piezoelectric transducers. The unit consists of a hollow tube that is sealed by a piezoelectric transducer on each end. The transducers are oppositely polarized and are selected to have a specific series capacitance. The tube is then partially filled with a heavy liquid and the accelerometer is excited. While excited the total output voltage is continuously measured and the volume of the heavy liquid is microadjusted until the desired output voltage is obtained. Finally the outputs of the individual transducers are measured, the residual voltage difference is tabulated, and the dominant transducer is identified. In 1943 the Danish company Brüel & Kjær launched Type 4301 - the world's first charge accelerometer. Applications of piezoelectric accelerometers Piezoelectric accelerometers are used in many different industries, environments, and applications - all typically requiring measurement of short duration impulses. Piezoelectric measuring devices are widely used today in the laboratory, on the production floor, and as original equipment for measuring and recording dynamic changes in mechanical variables including shock and vibration. Some accelerometers have built-in electronics to amplify the signal before transmitting it to the recording device. This work was pioneered by PCB Piezotronics, released in 1967 as ICP® Integrated circuit piezoelectric, later evolving to be the IEPE standard (see Integrated Electronics Piezo-Electric). Other related, brand specific descriptors of IEPE are: CCLD, IsoTron or DeltaTron. Accelerometers also have had the addition of onboard memory to contain serial number and calibration data, typically referred to as TEDS Transducer Electronic Data Sheet per the IEEE 1451 standard. References Norton, Harry N.(1989). Handbook of Transducers. Prentice Hall PTR. 'PDF Link' External links 'Piezoelectric Tranducers' 'Piezoelectric Sensors' 'Piezoelectric Accelerometers - Theory and Application' 'Access to Accels' - Tutorial about PE accelerometers Piezoelectric materials Transducers Accelerometers
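A minimal sketch of the seismic-mass sensing chain described above (force from F = ma, charge from a piezoelectric charge constant, voltage across the element's capacitance) is given below in Python. All numerical values (mass, charge constant, capacitance, excitation level) are hypothetical, chosen only to illustrate the order of operations, and do not describe any particular device.

```python
# Hypothetical values for illustration only.
mass = 5e-3              # seismic mass, kg
d33 = 300e-12            # piezoelectric charge constant, C/N (ceramic-like order of magnitude)
capacitance = 1e-9       # element plus cable capacitance, F

def charge_output(accel):
    """Charge generated when the seismic mass loads the element with F = m*a."""
    force = mass * accel          # N
    return d33 * force            # C

def voltage_output(accel):
    """Open-circuit voltage across the assumed capacitance, before any amplifier."""
    return charge_output(accel) / capacitance  # V

# Peak of a 10 g sinusoidal vibration:
a_peak = 10 * 9.80665
print(f"peak charge  : {charge_output(a_peak) * 1e12:.1f} pC")   # ~147 pC
print(f"peak voltage : {voltage_output(a_peak) * 1e3:.1f} mV")   # ~147 mV
```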
Piezoelectric accelerometer
Physics,Technology,Engineering
1,962
41,613,298
https://en.wikipedia.org/wiki/Entomolithus
Entomolithus (petrified insect) is an obsolete scientific name for several trilobites, first published by Linnaeus in 1753, before the starting point of zoological nomenclature in a list under the heading "Paradoxus: 3. Entomolithus Monoculi". This is why this first name has no formal status. After the starting point of the zoological nomenclature, the name was published again in 1759, but with a different description. Because scholars incorrectly considered Entomolithus Linnaeus, 1759 a junior homonym, it was later replaced by Entomostracites Wahlenberg, 1818. Although the name as published in 1759 was in fact valid, the International Commission on Zoological Nomenclature decided to suppress Entomolithus Linnaeus, 1759, because this name had gone out of use for a very long time. Species originally assigned to Entomolithus have been renamed. E. paradoxus = Paradoxides paradoxissimus E. paradoxus α expansus = Asaphus expansus References Disused trilobite generic names
Entomolithus
Biology
212
35,334,744
https://en.wikipedia.org/wiki/Comparison%20of%20online%20music%20lockers
This is a comparison of online music storage services (Cloud Music Services), Internet services that allow uploads of personally owned or licensed music to the cloud for listening on multiple devices. Previously, there were three large services—Amazon Music, Apple's iTunes Match, and YouTube Music—each incorporating an online music store (see comparison), with purchased songs from the associated music store not counting toward storage limits. Other than additional storage space, the main additional feature provided with an annual fee by Apple (and formerly Amazon.com) was "scan-and-match", which examined music files on a computer and added a copy of matched tracks to the user's music locker without having to upload the files. Google provided both a large amount of storage space and the scan-and-match feature at no cost. Amazon was the first of the initially significant players to launch its cloud music locker service, in late March 2011, and the first to discontinue it, on 30 April 2018. Amazon Music launched without obtaining any new music streaming licenses, which upset the major record labels. Amazon eventually negotiated licenses before launching scan-and-match. Google launched its service less than a month and a half after Amazon, also without obtaining any new licenses. Like Amazon, Google eventually negotiated licenses before launching scan-and-match. In 2018, Google announced a transition from Google Play Music to YouTube Music, and in May 2020 Google created a transfer tool to migrate added albums, uploads, history, and playlists. On October 22, 2020, Google Play Music was discontinued. Apple was the last of the first three services to launch, which it did on October 12, 2011. However, Apple had negotiated ahead of time with the major record labels for new licenses. Apple's product is the only one of the three to remain in operation today (see iTunes Match, below). For streaming services where a person is unable to upload their own music, but is limited to music provided by the service, such as Pandora Radio and Spotify, see Comparison of on-demand streaming music services. See that article also for information on subscription streaming services provided by four of the companies below (Google Play Music All Access, Apple's Apple Music, Amazon's Prime Music, and Microsoft's Groove Music Pass). Comparison Former or defunct services Amazon Music storage, started in March 2009, offered storage space for 250 uploaded tracks (MP3 or AAC up to 100 MB each) in the free version or 250,000 tracks in the premium version, as well as web players for major operating systems, Fire TV, Roku, and Sonos sound systems. Amazon did not allow podcasts, ringtones, or audiobooks to be uploaded. Amazon started phasing out cloud storage from December 2017. Best Buy Music Cloud debuted in June 2011 to unfavourable reviews. Google Play Music Music locker, store, and streaming service that debuted in May 2011 and shut down in October 2020. Google has replaced Play Music with YouTube Music. Groove Music by Microsoft debuted in 2015, linking Microsoft's Groove music player to OneDrive cloud storage. It allowed storing up to 5 GB of music in AAC, MP3 and WMA formats. Playback was possible on devices running Windows, iOS or Android as well as Xbox game consoles. Lala started in 2006, was purchased by Apple, and shut down on May 31, 2010. Mougg started in 2010 and was renamed Mashup in 2012; the domain ceased to function in December 2012. In April 2013, the service returned to its original name. 
MP3tunes started in late 2005, fought major record labels in Capitol Records, Inc. v. MP3Tunes, LLC, and closed in 2012 after filing for Chapter 7 bankruptcy. mSpot Music started in May 2010, was purchased by Samsung, and shut down on October 15, 2012. My.MP3.com started in January 2000, fought major record labels in UMG v. MP3.com, and the service was discontinued by a new owner. Samsung Music Hub was only available for a few Samsung devices and was retired on 1 July 2014. Style Jukebox, debuted in September 2012, offered up to 2 TB of music storage (10 GB in the trial period) and music players for the common operating systems, and supported all major file formats incl. high-resolution audio. The service was discontinued in December 2017. Ubuntu One only included music features (web and mobile app playback, 20 GB storage) with the paid plan. The service was shut down on 1 June 2014. See also Comparison of digital music stores Comparison of music streaming services List of music software List of Internet radio stations List of online music databases References Online services comparisons Cloud storage ITunes
Comparison of online music lockers
Technology
966
38,000,278
https://en.wikipedia.org/wiki/A.M.%20Bohnert%20Rice%20Plantation%20Pump
The A.M. Bohnert Rice Plantation Pump, located on Route 165 and Post Bayou Lane, near Gillett, Arkansas, in Arkansas County, is a rare surviving example of an early 20th-century pump engine built by the engine manufacturer Fairbanks, Morse & Company. The pumping engine played an important role in productive rice farming in the area, supplying water to flood the fields. The pump was listed on the National Register of Historic Places in 2010. See also L.A. Black Rice Milling Association Inc. Office, also in Arkansas County, Arkansas Tichnor Rice Dryer and Storage Building, also in Arkansas County, Arkansas National Register of Historic Places listings in Arkansas County, Arkansas References Agriculture in Arkansas Pumps Rice production in the United States Agricultural buildings and structures on the National Register of Historic Places in Arkansas National Register of Historic Places in Arkansas County, Arkansas Fairbanks-Morse
A.M. Bohnert Rice Plantation Pump
Physics,Chemistry
175
47,239,058
https://en.wikipedia.org/wiki/Edward%20Goodrich%20Acheson%20Award
The Edward Goodrich Acheson Award was established by The Electrochemical Society (ECS) in 1928 to honor the memory of Edward Goodrich Acheson, a charter member of ECS. The award is presented every 2 years for "conspicuous contribution to the advancement of the objectives, purposes, and activities of the society (ECS)". Recipients of the award receive a gold medal, wall plaque, and cash prize, ECS Life membership, and a complimentary meeting registration. History The Edward Goodrich Acheson Award is the first and most prestigious award of The Electrochemical Society. The award was established by a gift of $25,000 from past president (and namesake of the award) Edward Goodrich Acheson. Originally, recipients were presented with a prize of $1,000, a gold medal, and a bronze replica, with the intention that the gold medal would "find its way to the safe deposit box," while the replica was reserved for "everyday use". The Acheson family later agreed to have the medal be electroplated gold in order to keep the award fund in balance. Thanks to continuous donations from the Acheson family between 1942 and 1991, the endowment fund has allowed the monetary prize to be increased 3 times since its establishment. Recipients of the award As listed by ECS:
2018 Tetsuya Osaka
2016 Barry Miller
2014 Ralph J. Brodd
2012 Dennis W. Hess
2010 John S. Newman
2008 Robert P. Frankenthal
2006 Vittorio de Nora
2004 Wayne L. Worrell
2002 Bruce Deal
2000 Larry R. Faulkner
1998 Jerry M. Woodall
1996 Richard C. Alkire
1994 J. Bruce Wagner, Jr.
1992 Dennis R. Turner
1990 Theodore R. Beck
1988 Herbert H. Uhlig
1986 Eric M. Pell
1984 Norman Hackerman
1982 Henry C. Gatos
1980 Ernest B. Yeager
1978 Dan A. Vermilyea
1976 N. Bruce Hannay
1974 Cecil V. King
1972 Charles W. Tobias
1970 Samuel Ruben
1968 Francis L. LaQue
1966 Warren C. Vosburgh
1964 Earl A. Gulbransen
1962 Charles L. Faust
1960 Henry B. Linford
1958 William J. Kroll
1956 Robert M. Burns
1954 George W. Heise
1952 John W. Marden
1950 George W. Vinal
1948 Duncan A. MacInnes
1946 H. Jermain Creighton
1944 William Blum
1942 Charles F. Burgess
1939 Francis C. Frary
1937 Frederick M. Becket
1935 Frank J. Tone
1933 Colin G. Fink
1931 Edwin Fitch Northrup
1929 Edward Goodrich Acheson
See also List of chemistry awards References External links Edward Goodrich Acheson Award Recipients American science and technology awards Chemistry awards Awards established in 1928
Edward Goodrich Acheson Award
Technology
550
60,290,781
https://en.wikipedia.org/wiki/Karel%20Wiesner
Karel František Wiesner (November 25, 1919 – November 28, 1986) was a Canadian chemist of Czech origin known for his contributions to the chemistry of natural products, notably aconitum alkaloids and digitalis glycosides. Early life and career He was born in Prague, Czechoslovakia, into a family of some wealth and notability. His undergraduate education began in 1938 when he enrolled to study natural sciences at Charles University. His studies were interrupted the following year when universities were shuttered under the German occupation. Working under the supervision of at Bulovka Hospital, and in a rudimentary laboratory in the basement of his parental home, he discovered a polarographic method of measuring fast chemical reactions. He was awarded a doctorate for this research when Charles University reopened in 1945. In 1943, he joined a research group at the Fragner pharmaceutical company near Prague that was working to develop a penicillin variant. Despite working in secrecy and isolation under onerous wartime restrictions, the group managed to first separate and then test an antimicrobial drug. Wiesner's role included ensuring an adequate supply of the antibiotic by extracting and purifying the substance from the test subject's urine following treatment. From 1946 until 1948 he conducted postgraduate research in organic chemistry under Vladimir Prelog at ETH, Zürich, funded by a Rockefeller fellowship. Wiesner immigrated to Canada in 1948 to take up a position at the University of New Brunswick, Fredericton. Apart from a two-year spell with the pharmaceutical company Ayerst in Montreal, he remained at UNB for the remainder of his career. In 1981, Wiesner became a founding member of the World Cultural Council. He died of lymphoma in 1986. Scientific achievements Wiesner made remarkable contributions to the structural and synthetic chemistry of complex polysubstituted polycyclic natural products. In the 1950s, prior to the development of nuclear magnetic resonance spectroscopy, he determined the structure of several diterpene alkaloids including veatchine, atisine, annotinine, delphinine, aconitine, and songorine. After returning to New Brunswick from Ayerst in 1964, he began a successful program to synthesize these compounds, culminating in the total synthesis of chasmanine and napelline. Towards the end of the 1970s Wiesner turned his attention to digitalis derivatives, with the goal of finding cardiac glycosides with safer therapeutic ratios. In the last decade of his career he succeeded in demonstrating the separation of the inotropic and toxic properties of this group of compounds, elucidated the underlying chemical mechanism, and finally achieved the total synthesis of digitoxin and other cardioactive steroids. Honors and awards Wiesner received a Guggenheim Fellowship in 1952, the Chemical Institute of Canada's Palladium Medal in 1963, the Royal Society of Chemistry's Centenary Prize in 1976, the American Chemical Society's Ernest Guenther Award in 1983, and the Izaak Walton Killam Memorial Prize in 1986. He was elected to the Royal Society of Canada in 1957, to the Royal Society in 1969, and admitted to the Pontifical Academy of Sciences in 1978. He was awarded the Order of Canada on June 25, 1975. He also received the Marin Drinov Medal of the Bulgarian Academy of Sciences. 
References External links Canadian fellows of the Royal Society Officers of the Order of Canada 1919 births 1986 deaths 20th-century Canadian scientists Canadian chemists Organic chemists Czechoslovak emigrants to Canada Canadian people of Czech descent Academic staff of the University of New Brunswick Charles University alumni Czech chemists Fellows of the Royal Society of Canada Founding members of the World Cultural Council Members of the Pontifical Academy of Sciences Scientists from Prague
Karel Wiesner
Chemistry
765
3,330,144
https://en.wikipedia.org/wiki/Spin%287%29-manifold
In mathematics, a Spin(7)-manifold is an eight-dimensional Riemannian manifold whose holonomy group is contained in Spin(7). Spin(7)-manifolds are Ricci-flat and admit a parallel spinor. They also admit a parallel 4-form, known as the Cayley form, which is a calibrating form for a special class of submanifolds called Cayley cycles. History The fact that Spin(7) might possibly arise as the holonomy group of certain Riemannian 8-manifolds was first suggested by the 1955 classification theorem of Marcel Berger, and this possibility remained consistent with the simplified proof of Berger's theorem given by Jim Simons in 1962. Although not a single example of such a manifold had yet been discovered, Edmond Bonan then showed in 1966 that, if such a manifold did in fact exist, it would carry a parallel 4-form, and that it would necessarily be Ricci-flat. The first local examples of 8-manifolds with holonomy Spin(7) were finally constructed around 1984 by Robert Bryant, and his full proof of their existence appeared in Annals of Mathematics in 1987. Next, complete (but still noncompact) 8-manifolds with holonomy Spin(7) were explicitly constructed by Bryant and Salamon in 1989. The first examples of compact Spin(7)-manifolds were then constructed by Dominic Joyce in 1996. See also G2 manifold Calabi–Yau manifold References . . . Riemannian manifolds
Spin(7)-manifold
Mathematics
321
69,535,436
https://en.wikipedia.org/wiki/DeepRoute.ai
DeepRoute.ai is a Chinese robotaxi startup based in Shenzhen, Guangdong, China. DeepRoute.ai has partnered with Caocao Mobility, Dongfeng Motors, and Dongfeng Commercial Vehicle to test self-driving vehicles. The company began self-driving robotaxi service in Wuhan in April 2021, and the company publicly launched robotaxi service in Shenzhen in July 2021. In addition to robotaxi technology (DeepRoute-INJOY), DeepRoute.ai has also developed a self-driving solution for medium-duty trucks (DeepRoute-LINK). Its L4 Full Stack Self-Driving System, DeepRoute-Sense, was named a CES 2020 Innovation Awards Honoree in the category of Vehicle Intelligence & Transportation. It includes a lightweight set-top box and sensor-fusion calibration service, consisting of GNSS, eight vehicle cameras, three lidars and a series of other sensors to help correspondence and data synchronization between the controllers. In December 2021, DeepRoute.ai announced DeepRoute-Driver 2.0, a production-ready Level 4 system comprising five solid-state lidar sensors, eight cameras, a proprietary computing system and an optional millimeter-wave radar. DeepRoute.ai secured $50 million in a Series Pre-A led by Fosun RZ Capital, the venture capital arm of Chinese conglomerate Fosun International in September 2019. The company also raised a Series B funding round of $300 million in September 2021, which included Alibaba, Jeneration Capital, Yunqi Partners and Geely as investors. DeepRoute.ai’s CEO is Maxwell Zhou, who led autonomous driving projects at Baidu, Texas Instruments and DJI. History DeepRoute.ai was founded in Shenzhen in February 2019 by Maxwell Zhou who has a Doctorate degree in Artificial Intelligence. In August of 2020, DeepRoute.ai partnered with CaoCao Mobility to start Robotaxi service in Hangzhou. A few months later, in October, the company joined a $90M Autonomous Driving Pilot Program led by Dongfeng Motor, aiming to bring more than 200 Robotaxis to Wuhan by the end of 2022. As its primary partner, DeepRoute.ai is working with Dongfeng Motor to build the largest Robotaxi fleet in Wuhan’s Central Business District and development area, making it the most extensive fleet in China. By January 2021, DeepRoute.ai had accumulated over one million kilometers of road testing. The company began self-driving robotaxi service in Wuhan in April 2021, and the company publicly launched robotaxi service in Shenzhen in July 2021. In September 2021, DeepRoute.ai announced a $300 million Series B funding round led by Alibaba Group. In December 2021, DeepRoute.ai announced DeepRoute-Driver 2.0, a production-ready Level 4 system comprising five solid-state lidar sensors, eight cameras, a proprietary computing system and an optional millimeter-wave radar. In March 2023, DeepRoute.ai announced its Driver 3.0 solution, the latest advance to achieving full autonomous driving. DeepRoute.ai is among the first to successfully complete HD map-free self-driving public road tests thus breaking limitations created by geo-fencing. Partnerships In June 2022, DeepRoute.ai partnered with Deppon Logistics Co., Ltd. to provide autonomous driving medium-duty trucks for logistics transfer. This marked the first use of self-driving mid-size trucks in commercial service in China. In August 2020, DeepRoute.ai announced its partnership with Cao Cao Mobility, a Geely-backed ride-hailing company, to test Robotaxis in Hangzhou for daily operations, planning to provide Robotaxis during the 2022 Asian Games. 
References Self-driving car companies Automotive technologies Robotics
DeepRoute.ai
Engineering
800
52,293
https://en.wikipedia.org/wiki/Origami
Origami (折り紙) is the Japanese art of paper folding. In modern usage, the word "origami" is often used as an inclusive term for all folding practices, regardless of their culture of origin. The goal is to transform a flat square sheet of paper into a finished sculpture through folding and sculpting techniques. Modern origami practitioners generally discourage the use of cuts, glue, or markings on the paper. Origami folders often use the Japanese word kirigami to refer to designs which use cuts. In the detailed Japanese classification, origami is divided into stylized ceremonial origami (儀礼折り紙, girei origami) and recreational origami (遊戯折り紙, yūgi origami), and only recreational origami is generally recognized as origami. In Japan, ceremonial origami is generally called "origata" (:ja:折形) to distinguish it from recreational origami. The term "origata" is one of the old terms for origami. The small number of basic origami folds can be combined in a variety of ways to make intricate designs. The best-known origami model is the Japanese paper crane. In general, these designs begin with a square sheet of paper whose sides may be of different colors, prints, or patterns. Traditional Japanese origami, which has been practiced since the Edo period (1603–1868), has often been less strict about these conventions, sometimes cutting the paper or using nonsquare shapes to start with. The principles of origami are also used in stents, packaging, and other engineering applications. Etymology The word "origami" is a compound of two smaller words: "ori" (root verb "oru"), meaning to fold, and "kami", meaning paper. Until recently, not all forms of paper folding were grouped under the word origami. Before that, paper folding for play was known by a variety of names, including "orikata" or "origata" (折形), "orisue" (折据), "orimono" (折物), "tatamigami" (畳紙) and others. History Distinct paperfolding traditions arose in Europe, China, and Japan which have been well-documented by historians. These seem to have been mostly separate traditions, until the 20th century. Ceremonial origami (origata) By the 7th century, paper had been introduced to Japan from China via the Korean Peninsula, and the Japanese developed washi by improving the method of making paper in the Heian period. The papermaking technique developed in Japan around 805 to 809 was called nagashi-suki (流し漉き), a method of adding mucilage to the process of the conventional tame-suki (溜め漉き) technique to form a stronger layer of paper fibers. With the development of Japanese papermaking technology and the widespread use of paper, folded paper began to be used for decorations and tools for religious ceremonies such as gohei, ōnusa (:ja:大麻 (神道)) and shide at Shinto shrines. Religious decorations made of paper and the way gifts were wrapped in folded paper gradually became stylized and established as ceremonial origami. During the Heian period, the Imperial court established a code of etiquette for wrapping money and goods used in ceremonies with folded paper, and a code of etiquette for wrapping gifts. In the Muromachi period from the 1300s to the 1400s, various forms of decorum were developed by the Ogasawara clan and Ise clans (:ja:伊勢氏), completing the prototype of Japanese folded-paper decorum that continues to this day.
The Ise clan presided over the decorum of the inside of the palace of the Ashikaga Shogunate, and in particular, Ise Sadachika (:ja:伊勢貞親) during the reign of the eighth Shogun, Ashikaga Yoshimasa (足利義政), greatly influenced the development of the decorum of the daimyo and samurai classes, leading to the development of various stylized forms of ceremonial origami. The shapes of ceremonial origami created in this period were geometric, and the shapes of noshi to be attached to gifts at feasts and weddings, and origami that imitated butterflies to be displayed on sake vessels, were quite different from those of later generations of recreational origami whose shapes captured the characteristics of real objects and living things. The "noshi" wrapping, and the folding of female and male butterflies, which are still used for weddings and celebrations, are a continuation and development of a tradition that began in the Muromachi period. A reference in a poem by Ihara Saikaku from 1680 describes the origami butterflies used during Shinto weddings to represent the bride and groom. Recreational origami 1500s-1800s It is not certain when play-made paper models, now commonly known as origami, began in Japan. However, the kozuka of a Japanese sword made by Gotō Eijō (後藤栄乗) between the end of the 1500s and the beginning of the 1600s was decorated with a picture of a crane made of origami, and it is believed that origami for play existed by the Sengoku period or the early Edo period. In 1747, during the Edo period, a book titled Ranma zushiki (欄間図式) was published, which contained various designs of the ranma (:ja:欄間), a decoration of Japanese architecture. This included origami of various designs, including paper models of cranes, which are still well known today. It is thought that by this time, many people were familiar with origami for play, which modern people recognize as origami. During this period, origami was commonly called orikata (折形) or orisue (折据) and was often used as a pattern on kimonos and decorations. Hiden senbazuru orikata (:ja:秘傳千羽鶴折形), published in 1797, is the oldest known technical book on origami for play. The book contains 49 origami pieces created by a Buddhist monk named Gidō (:ja:義道) in Ise Province, whose works were named and accompanied by kyōka (狂歌, comic tanka) by author Akisato Ritō (秋里籬島). These pieces were far more technically advanced than their predecessors, suggesting that origami culture had become more sophisticated. Gido continued to produce origami after the publication of his book, leaving at least 158 highly skilled masterpieces for posterity. In 1976, Kuwana City in Mie Prefecture, Gido's hometown, designated 49 of the methods described in the Hiden senbazuru orikata as Intangible Cultural Properties of Kuwana City. Kuwana City has also certified qualified persons who are able to correctly produce these works and have in-depth knowledge of the art. Kuwana City has published some of the origami production methods on YouTube. From the late Edo period to the Bakumatu period, origami that imitated the six legendary Japanese poets, rokkasen (六歌仙) listed in the Kokin Wakashū (古今和歌集) compiled in the 900s and the characters in Chūshingura became popular, but today they are rarely used as subjects for origami. In Europe, there was a well-developed genre of napkin folding, which flourished during the 17th and 18th centuries. 
After this period, this genre declined and was mostly forgotten; historian Joan Sallas attributes this to the introduction of porcelain, which replaced complex napkin folds as a dinner-table status symbol among nobility. However, some of the techniques and bases associated with this tradition continued to be a part of European culture; folding was a significant part of Friedrich Fröbel's "Kindergarten" method, and the designs published in connection with his curriculum are stylistically similar to the napkin fold repertoire. Another example of early origami in Europe is the "pajarita," a stylized bird whose origins date from at least the nineteenth century. Since 1800s When Japan opened its borders in the 1860s, as part of a modernization strategy, they imported Fröbel's Kindergarten system—and with it, German ideas about paperfolding. This included the ban on cuts, and the starting shape of a bicolored square. These ideas, and some of the European folding repertoire, were integrated into the Japanese tradition. Before this, traditional Japanese sources use a variety of starting shapes, often had cuts, and if they had color or markings, these were added after the model was folded. In Japan, the first kindergarten was established in 1875, and origami was promoted as part of early childhood education. The kindergarten's 1877 regulations listed 25 activities, including origami subjects. Shōkokumin (小国民), a magazine for boys, frequently published articles on origami. Origami Zusetsu (折紙図説), published in 1908, clearly distinguished ceremonial origami from recreational origami. These books and magazines carried both the traditional Japanese style of origami and the style inspired by Fröbel. In the early 1900s, Akira Yoshizawa, Kosho Uchiyama, and others began creating and recording original origami works. Akira Yoshizawa in particular was responsible for a number of innovations, such as wet-folding and the Yoshizawa–Randlett diagramming system, and his work inspired a renaissance of the art form. In 1974, origami was offered in the USSR as an additional activity for elementary school children. During the 1980s a number of folders started systematically studying the mathematical properties of folded forms, which led to a rapid increase in the complexity of origami models. Starting in the late 20th century, there has been a renewed interest in understanding the behavior of folding matter, both artistically and scientifically. The "new origami," which distinguishes it from old craft practices, has had a rapid evolution due to the contribution of computational mathematics and the development of techniques such as box-pleating, tessellations and wet-folding. Artists like Robert J. Lang, Erik Demaine, Sipho Mabona, Giang Dinh, Paul Jackson, and others, are frequently cited for advancing new applications of the art. The computational facet and the interchanges through social networks, where new techniques and designs are introduced, have raised the profile of origami in the 21st century. Techniques and materials Techniques Many origami books begin with a description of basic origami techniques which are used to construct the models. This includes simple diagrams of basic folds like valley and mountain folds, pleats, reverse folds, squash folds, and sinks. There are also standard named bases which are used in a wide variety of models, for instance the bird base is an intermediate stage in the construction of the flapping bird. 
Additional bases are the preliminary base (square base), fish base, waterbomb base, and the frog base. Origami paper Almost any laminar (flat) material can be used for folding; the only requirement is that it should hold a crease. Origami paper, often referred to as "kami" (Japanese for paper), is sold in prepackaged squares of various sizes ranging from 2.5 cm (1 in) to 25 cm (10 in) or more. It is commonly colored on one side and white on the other; however, dual coloured and patterned versions exist and can be used effectively for color-changed models. Origami paper weighs slightly less than copy paper, making it suitable for a wider range of models. Normal copy paper with weights of 70–90 g/m2 (19–24 lb) can be used for simple folds, such as the crane and waterbomb. Heavier weight papers of 100 g/m2 (approx. 25 lb) or more can be wet-folded. This technique allows for a more rounded sculpting of the model, which becomes rigid and sturdy when it is dry. Foil-backed paper, as its name implies, is a sheet of thin foil glued to a sheet of thin paper. Related to this is tissue foil, which is made by gluing a thin piece of tissue paper to kitchen aluminium foil. A second piece of tissue can be glued onto the reverse side to produce a tissue/foil/tissue sandwich. Foil-backed paper is available commercially, but not tissue foil; it must be handmade. Both types of foil materials are suitable for complex models. is the traditional origami paper used in Japan. Washi is generally tougher than ordinary paper made from wood pulp, and is used in many traditional arts. Washi is commonly made using fibres from the bark of the gampi tree, the mitsumata shrub (Edgeworthia papyrifera), or the paper mulberry but can also be made using bamboo, hemp, rice, and wheat. Artisan papers such as unryu, lokta, hanji, gampi, kozo, saa, and abaca have long fibers and are often extremely strong. As these papers are floppy to start with, they are often backcoated or resized with methylcellulose or wheat paste before folding. Also, these papers are extremely thin and compressible, allowing for thin, narrowed limbs as in the case of insect models. Paper money from various countries is also popular to create origami with; this is known variously as Dollar Origami, Orikane, and Money Origami. Tools It is common to fold using a flat surface, but some folders like doing it in the air with no tools, especially when displaying the folding. Some folders believe that no tool should be used when folding. However a couple of tools can help especially with the more complex models. For instance a bone folder allows sharp creases to be made in the paper easily, paper clips can act as extra pairs of fingers, and tweezers can be used to make small folds. When making complex models from origami crease patterns, it can help to use a ruler and ballpoint embosser to score the creases. Completed models can be sprayed so that they keep their shape better, and a spray is needed when wet folding. Types Action origami In addition to the more common still-life origami, there are also moving object designs; origami can move. Action origami includes origami that flies, requires inflation to complete, or, when complete, uses the kinetic energy of a person's hands, applied at a certain region on the model, to move another flap or limb. Some argue that, strictly speaking, only the latter is really "recognized" as action origami. Action origami, first appearing with the traditional Japanese flapping bird, is quite common. 
One example is Robert Lang's instrumentalists; when the figures' heads are pulled away from their bodies, their hands will move, resembling the playing of music. Modular origami Modular origami consists of putting a number of identical pieces together to form a complete model. Often the individual pieces are simple, but the final assembly may be more difficult. Many modular origami models are decorative folding balls such as kusudama, which differ from classical origami in that the pieces may be held together using thread or glue. Chinese paper folding, a cousin of origami, includes a similar style called golden venture folding where large numbers of pieces are put together to create elaborate models. This style is most commonly known as "3D origami". However, that name did not appear until Joie Staff published a series of books titled 3D Origami, More 3D Origami, and More and More 3D Origami. This style originated from some Chinese refugees while they were detained in America and is also called Golden Venture folding from the ship they came on. Wet-folding Wet-folding is an origami technique for producing models with gentle curves rather than geometric straight folds and flat surfaces. The paper is dampened so it can be moulded easily, and the final model keeps its shape when it dries. It can be used, for instance, to produce very natural looking animal models. Size, an adhesive that is crisp and hard when dry, but dissolves in water when wet and becoming soft and flexible, is often applied to the paper either at the pulp stage while the paper is being formed, or on the surface of a ready sheet of paper. The latter method is called external sizing and most commonly uses Methylcellulose, or MC, paste, or various plant starches. Pureland origami Pureland origami adds the restrictions that only simple mountain/valley folds may be used, and all folds must have straightforward locations. It was developed by John Smith in the 1970s to help inexperienced folders or those with limited motor skills. Some designers also like the challenge of creating within the very strict constraints. Origami tessellations Origami tessellation is a branch that has grown in popularity after 2000. A tessellation is a collection of figures filling a plane with no gaps or overlaps. In origami tessellations, pleats are used to connect molecules such as twist folds together in a repeating fashion. During the 1960s, Shuzo Fujimoto was the first to explore twist fold tessellations in any systematic way, coming up with dozens of patterns and establishing the genre in the origami mainstream. Around the same time period, Ron Resch patented some tessellation patterns as part of his explorations into kinetic sculpture and developable surfaces, although his work was not known by the origami community until the 1980s. Chris Palmer is an artist who has extensively explored tessellations after seeing the Zilij patterns in the Alhambra, and has found ways to create detailed origami tessellations out of silk. Robert Lang and Alex Bateman are two designers who use computer programs to create origami tessellations. The first international convention devoted to origami tessellations was hosted in Brasília (Brazil) in 2006, and the first instruction book on tessellation folding patterns was published by Eric Gjerde in 2008. Since then, the field has grown very quickly. 
Tessellation artists include Polly Verity (Scotland); Joel Cooper, Christine Edison, Ray Schamp and Goran Konjevod from the US; Roberto Gretter (Italy); Christiane Bettens (Switzerland); Carlos Natan López (Mexico); and Jorge C. Lucero (Brazil). Kirigami Kirigami is a Japanese term for paper cutting. Cutting was often used in traditional Japanese origami, but modern innovations in technique have made the use of cuts unnecessary. Most origami designers no longer consider models with cuts to be origami, instead using the term Kirigami to describe them. This change in attitude occurred during the 1960s and 70s, so early origami books often use cuts, but for the most part they have disappeared from the modern origami repertoire, and most modern books do not even mention cutting. Strip folding Strip folding is a combination of paper folding and paper weaving. A common example of strip folding is called the Lucky Star, also called Chinese lucky star, dream star, wishing star, or simply origami star. Another common fold is the Moravian Star which is made by strip folding in 3-dimensional design to include 16 spikes. Teabag folding Teabag folding is credited to Dutch artist Tiny van der Plas, who developed the technique in 1992 as a papercraft art for embellishing greeting cards. It uses small square pieces of paper (e.g., a tea bag wrapper) bearing symmetrical designs that are folded in such a way that they interlock and produce a three-dimensional version of the underlying design. The basic kite fold is used to produce rosettes that are a 3 dimensional version of the 2D design. The basic rosette design requires eight matching squares to be folded into the 'kite' design. Mathematics teachers find the designs very useful as a practical way of demonstrating some basic properties of symmetry. Mathematics and technical origami Mathematics and practical applications The practice and study of origami encapsulates several subjects of mathematical interest. For instance, the problem of flat-foldability (whether a crease pattern can be folded into a 2-dimensional model) has been a topic of considerable mathematical study. A number of technological advances have come from insights obtained through paper folding. For example, techniques have been developed for the deployment of car airbags and stent implants from a folded position. The problem of rigid origami ("if we replaced the paper with sheet metal and had hinges in place of the crease lines, could we still fold the model?") has great practical importance. For example, the Miura map fold is a rigid fold that has been used to deploy large solar panel arrays for space satellites. Origami can be used to construct various geometrical designs not possible with compass and straightedge constructions. For instance paper folding may be used for angle trisection and doubling the cube. Technical origami Technical origami, known in Japanese as , is an origami design approach in which the model is conceived as an engineered crease pattern, rather than developed through trial-and-error. With advances in origami mathematics, the basic structure of a new origami model can be theoretically plotted out on paper before any actual folding even occurs. This method of origami design was developed by Robert Lang, Meguro Toshiyuki and others, and allows for the creation of extremely complex multi-limbed models such as many-legged centipedes, human figures with a full complement of fingers and toes, and the like. 
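To make the flat-foldability problem mentioned above more concrete, one classical local criterion (Kawasaki's theorem, which is not discussed in the article itself) states that at every interior vertex of a flat-folded crease pattern the alternating sum of the consecutive sector angles is zero; for a single vertex this condition is also sufficient. The following minimal Python sketch checks that condition for one vertex. The function name and the example angle lists are illustrative only and are not taken from TreeMaker, Oripa, or any other software named in the article.

```python
import math

def kawasaki_flat_foldable(angles_deg, tol=1e-9):
    """Check Kawasaki's condition at a single interior vertex.

    angles_deg: consecutive sector angles (in degrees) between adjacent creases,
    listed once around the vertex; they must sum to 360.
    Returns True when the alternating sum a1 - a2 + a3 - ... is (numerically) zero,
    which is necessary for the vertex to fold flat.
    """
    if abs(sum(angles_deg) - 360.0) > tol:
        raise ValueError("angles must sum to 360 degrees around the vertex")
    alternating = sum(a if i % 2 == 0 else -a for i, a in enumerate(angles_deg))
    return math.isclose(alternating, 0.0, abs_tol=1e-6)

# A symmetric six-crease vertex folds flat:
print(kawasaki_flat_foldable([45, 45, 90, 45, 45, 90]))   # True
# A generic asymmetric vertex does not:
print(kawasaki_flat_foldable([100, 80, 100, 80]))          # False
```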
The crease pattern is a layout of the creases required to form the structure of the model. Paradoxically enough, when origami designers come up with a crease pattern for a new design, the majority of the smaller creases are relatively unimportant and added only towards the completion of the model. What is more important is the allocation of regions of the paper and how these are mapped to the structure of the object being designed. By opening up a folded model, you can observe the structures that comprise it; the study of these structures led to a number of crease-pattern-oriented design approaches The pattern of allocations is referred to as the 'circle-packing' or 'polygon-packing'. Using optimization algorithms, a circle-packing figure can be computed for any uniaxial base of arbitrary complexity. Once this figure is computed, the creases which are then used to obtain the base structure can be added. This is not a unique mathematical process, hence it is possible for two designs to have the same circle-packing, and yet different crease pattern structures. As a circle encloses the maximum amount of area for a given perimeter, circle packing allows for maximum efficiency in terms of paper usage. However, other polygonal shapes can be used to solve the packing problem as well. The use of polygonal shapes other than circles is often motivated by the desire to find easily locatable creases (such as multiples of 22.5 degrees) and hence an easier folding sequence as well. One popular offshoot of the circle packing method is box-pleating, where squares are used instead of circles. As a result, the crease pattern that arises from this method contains only 45 and 90 degree angles, which often makes for a more direct folding sequence. Origami-related computer programs A number of computer aids to origami such as TreeMaker and Oripa, have been devised. TreeMaker allows new origami bases to be designed for special purposes and Oripa tries to calculate the folded shape from the crease pattern. Ethics and copyright Copyright in origami designs and the use of models has become an increasingly important issue in the origami community, as the internet has made the sale and distribution of pirated designs very easy. It is considered good etiquette to always credit the original artist and the folder when displaying origami models. It has been claimed that all commercial rights to designs and models are typically reserved by origami artists; however, the degree to which this can be enforced has been disputed. Under such a view, a person who folds a model using a legally obtained design could publicly display the model unless such rights were specifically reserved, whereas folding a design for money or commercial use of a photo for instance would require consent. The Origami Authors and Creators group was set up to represent the copyright interests of origami artists and facilitate permissions requests. However, a court in Japan has asserted that the folding method of an origami model "comprises an idea and not a creative expression, and thus is not protected under the copyright law". Further, the court stated that "the method to folding origami is in the public domain; one cannot avoid using the same folding creases or the same arrows to show the direction in which to fold the paper". Therefore, it is legal to redraw the folding instructions of a model of another author even if the redrawn instructions share similarities to the original ones, as long as those similarities are "functional in nature". 
The redrawn instructions may be published (and even sold) without necessity of any permission from the original author. Origami in various meanings From a global perspective, the term 'origami' refers to the folding of paper to shape objects for entertainment purposes, but it has historically been used in various ways in Japan. For example, the term 'origami' also refers to the certificate of authenticity that accompanies a Japanese sword or tea utensil. The people of the Hon'ami clan, who were the authority on Japanese sword appraisal from the Muromachi period to the Edo period, responded to the requests of the shogun, daimyo and samurai by appraising Japanese swords, determining when and by which school the sword was made, whether the inscription on the nakago was genuine or not, and what the price was, and then issuing origami with the results written on it. This has led to the Japanese word 'origami tsuki' (折り紙付き) meaning 'origami is attached' meaning that the quality of the object or the ability of the person is sufficiently high. The term 'origami' also referred to a specific style of old documents in Japan. The paper folded vertically is called 'tategami' (竪紙), while the paper folded horizontally is called 'origami', and origami has a lower status than tategami. This style of letter began to be used at the end of the Heian period, and in the Kamakura period it was used as a complaint, and origami came to refer to the complaint itself. Furthermore, during the Muromachi period, origami was often used as a command document or a catalog of gifts, and it came to refer to the catalog of gifts itself. Gallery These pictures show examples of various types of origami. In popular culture In House of Cards season 1, episode 6, Claire Underwood gives a homeless man cash, and he later returns it folded into the shape of a bird. Claire then begins making origami animals, and in episode 7 she gives several to Peter Russo for his children. In Blade Runner, Gaff folds origami throughout the movie, and an origami unicorn he folds forms a major plot point. The philosophy and plot of the science fiction story "Ghostweight" by Yoon Ha Lee revolve around origami. In it, origami serves as a metaphor for history: "It is not true that the dead cannot be folded. Square becomes kite becomes swan; history becomes rumor becomes song. Even the act of remembrance creases the truth". A major element of the plot is the weaponry called jerengjen of space mercenaries, which unfold from flat shapes: "In the streets, jerengjen unfolded prettily, expanding into artillery with dragon-shaped shadows and sleek four-legged assault robots with wolf-shaped shadows. In the skies, jerengjen unfolded into bombers with kestrel-shaped shadows." The story says that the word means the art of paper folding in the mercenaries' main language. In an interview, when asked about the subject, the author tells that he became fascinated with dimensions since reading the novel Flatland. In Scooby-Doo! and the Samurai Sword, Scooby and Shaggy learn origami, which proves crucial in finding the Sword of Doom. In Kubo and the Two Strings, the main protagonist Kubo can magically manipulate origami with music from his shamisen. In Naruto Shippuden, Konan, the only female member of the Akatsuki, uses origami jutsu, in which she uses her chakra to bring origami to life and use them as weapons. The 2010 video game Heavy Rain has an antagonist known as the origami killer. 
In the BBC television program QI, it is reported that origami in the form it is commonly known, where paper is folded without being cut or glued likely originated in Germany and was imported to Japan as late as 1860 when Japan opened its borders (However, it is confirmed that paper cranes using this technique have existed in Japan since the Edo period before 1860). Paper Mario: The Origami King is a 2020 Nintendo Switch game featuring Mario series characters in an origami-themed world. Origami Yoda is a children's book series by Tom Angleberger about a group of middle school students who construct origami finger puppets resembling Star Wars characters. See also Chinese paper folding Fold-forming Furoshiki Japanese art List of origamists Origamic architecture Paper craft Paper fortune teller Paper plane Pop-up book References Further reading Kunihiko Kasahara (1988). Origami Omnibus: Paper Folding for Everybody. Tokyo: Japan Publications, Inc. A book for a more advanced origamian; this book presents many more complicated ideas and theories, as well as related topics in geometry and culture, along with model diagrams. Kunihiko Kasahara and Toshie Takahama (1987). Origami for the Connoisseur. Tokyo: Japan Publications, Inc. Satoshi Kamiya (2005). Works by Satoshi Kamiya, 1995–2003. Tokyo: Origami House An extremely complex book for the elite origamian, most models take 100+ steps to complete. Includes his famous Divine Dragon Bahamut and Ancient Dragons. Instructions are in Japanese and English. Kunihiko Kasahara (2001). Extreme Origami. Michael LaFosse. Origamido : Masterworks of Paper Folding Nick Robinson (2004). Encyclopedia of Origami. Quarto. . A book full of stimulating designs. External links Articles containing video clips Japanese inventions Japanese words and phrases Leisure activities Paper art
Origami
Mathematics
6,488
4,656,507
https://en.wikipedia.org/wiki/Bose%E2%80%93Hubbard%20model
The Bose–Hubbard model gives a description of the physics of interacting spinless bosons on a lattice. It is closely related to the Hubbard model that originated in solid-state physics as an approximate description of superconducting systems and the motion of electrons between the atoms of a crystalline solid. The model was introduced by Gersch and Knollman in 1963 in the context of granular superconductors. (The term 'Bose' in its name refers to the fact that the particles in the system are bosonic.) The model rose to prominence in the 1980s after it was found to capture the essence of the superfluid-insulator transition in a way that was much more mathematically tractable than fermionic metal-insulator models. The Bose–Hubbard model can be used to describe physical systems such as bosonic atoms in an optical lattice, as well as certain magnetic insulators. Furthermore, it can be generalized and applied to Bose–Fermi mixtures, in which case the corresponding Hamiltonian is called the Bose–Fermi–Hubbard Hamiltonian. Hamiltonian The physics of this model is given by the Bose–Hubbard Hamiltonian: Here, denotes summation over all neighboring lattice sites and , while and are bosonic creation and annihilation operators such that gives the number of particles on site . The model is parametrized by the hopping amplitude that describes boson mobility in the lattice, the on-site interaction which can be attractive () or repulsive (), and the chemical potential , which essentially sets the number of particles. If unspecified, typically the phrase 'Bose–Hubbard model' refers to the case where the on-site interaction is repulsive. This Hamiltonian has a global symmetry, which means that it is invariant (its physical properties are unchanged) by the transformation . In a superfluid phase, this symmetry is spontaneously broken. Hilbert space The dimension of the Hilbert space of the Bose–Hubbard model is given by , where is the total number of particles, while denotes the total number of lattice sites. At fixed or , the Hilbert space dimension grows polynomially, but at a fixed density of bosons per site, it grows exponentially as . Analogous Hamiltonians may be formulated to describe spinless fermions (the Fermi-Hubbard model) or mixtures of different atom species (Bose–Fermi mixtures, for example). In the case of a mixture, the Hilbert space is simply the tensor product of the Hilbert spaces of the individual species. Typically additional terms are included to model interaction between species. Phase diagram At zero temperature, the Bose–Hubbard model (in the absence of disorder) is in either a Mott insulating state at small , or in a superfluid state at large . The Mott insulating phases are characterized by integer boson densities, by the existence of an energy gap for particle-hole excitations, and by zero compressibility. The superfluid is characterized by long-range phase coherence, a spontaneous breaking of the Hamiltonian's continuous symmetry, a non-zero compressibility and superfluid susceptibility. At non-zero temperature, in certain parameter regimes a regular fluid phase appears that does not break the symmetry and does not display phase coherence. Both of these phases have been experimentally observed in ultracold atomic gases. In the presence of disorder, a third, "Bose glass" phase exists. The Bose glass is a Griffiths phase, and can be thought of as a Mott insulator containing rare 'puddles' of superfluid. 
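For reference, in the notation used above (hopping amplitude t, on-site interaction U, chemical potential μ, bosonic creation and annihilation operators on each lattice site, and number operator n_i), the Hamiltonian and the fixed-number Hilbert-space dimension can be written in their standard textbook form as follows; this is included only to make the symbols in the surrounding discussion concrete and is not a quotation from any particular source.

```latex
% Standard Bose–Hubbard Hamiltonian (on-site interaction is repulsive for U > 0)
H = -t \sum_{\langle i,j \rangle} \left( b_i^{\dagger} b_j + b_j^{\dagger} b_i \right)
    + \frac{U}{2} \sum_i n_i \left( n_i - 1 \right)
    - \mu \sum_i n_i ,
\qquad n_i = b_i^{\dagger} b_i .

% Dimension of the Hilbert space for N_b bosons on N_s sites
% ("stars and bars" counting of occupation-number configurations)
D_{N_b, N_s} = \binom{N_b + N_s - 1}{N_b}
```

At fixed density N_b/N_s this binomial coefficient grows exponentially with the number of sites, which is the growth referred to in the text.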
These superfluid pools are not interconnected, so the system remains insulating, but their presence significantly changes model thermodynamics. The Bose glass phase is characterized by finite compressibility, the absence of a gap, and by an infinite superfluid susceptibility. It is insulating despite the absence of a gap, as low tunneling prevents the generation of excitations which, although close in energy, are spatially separated. The Bose glass has a non-zero Edwards–Anderson order parameter and has been suggested (but not proven) to display replica symmetry breaking. Mean-field theory The phases of the clean Bose–Hubbard model can be described using a mean-field Hamiltonian:where is the lattice co-ordination number. This can be obtained from the full Bose–Hubbard Hamiltonian by setting where , neglecting terms quadratic in (assumedly infinitesimal) and relabelling . Because this decoupling breaks the symmetry of the initial Hamiltonian for all non-zero values of , this parameter acts as a superfluid order parameter. For simplicity, this decoupling assumes to be the same on every site, which precludes exotic phases such as supersolids or other inhomogeneous phases. (Other decouplings are possible.) The phase diagram can be determined by calculating the energy of this mean-field Hamiltonian using second-order perturbation theory and finding the condition for which . To do this, the Hamiltonian is written as a site-local piece plus a perturbation:where the bilinear terms and its conjugate are treated as the perturbation. The order parameter is assumed to be small near the phase transition. The local term is diagonal in the Fock basis, giving the zeroth-order energy contribution:where is an integer that labels the filling of the Fock state. The perturbative piece can be treated with second-order perturbation theory, which leads to:The energy can be expressed as a series expansion in even powers of the order parameter (also known as the Landau formalism):After doing so, the condition for the mean-field, second-order phase transition between the Mott insulator and the superfluid phase is given by:where the integer describes the filling of the Mott insulating lobe. Plotting the line for different integer values of generates the boundary of the different Mott lobes, as shown in the phase diagram. Implementation in optical lattices Ultracold atoms in optical lattices are considered a standard realization of the Bose–Hubbard model. The ability to tune model parameters using simple experimental techniques and the lack of the lattice dynamics that are present in solid-state electronic systems mean that ultracold atoms offer a clean, controllable realisation of the Bose–Hubbard model. The biggest downside with optical lattice technology is the trap lifetime, with atoms typically trapped for only a few tens of seconds. To see why ultracold atoms offer such a convenient realization of Bose–Hubbard physics, the Bose–Hubbard Hamiltonian can be derived starting from the second quantized Hamiltonian that describes a gas of ultracold atoms in the optical lattice potential. This Hamiltonian is given by: , where is the optical lattice potential, is the (contact) interaction amplitude, and is the chemical potential. The tight binding approximation results in the substitution , which leads to the Bose–Hubbard Hamiltonian the physics are restricted to the lowest band () and the interactions are local at the level of the discrete mode. 
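As a concrete illustration of the mean-field analysis described above, requiring the quadratic coefficient of the Landau expansion to vanish gives the boundary of the Mott lobe with integer filling n. In the standard decoupling (coordination number z, hopping t, interaction U, chemical potential μ), second-order perturbation theory yields zt_c = [(n+1)/(Un−μ) + n/(μ−U(n−1))]^(−1), which can be solved for μ as a function of zt. The short Python sketch below traces the first lobe this way; it is a sketch of that standard result, with illustrative function names, and a useful sanity check is that the tip of the n = 1 lobe sits at zt/U = 3 − 2√2 ≈ 0.17.

```python
import numpy as np

def mott_lobe_boundary(zt, U=1.0, n=1):
    """Mean-field boundary of the n-th Mott lobe of the Bose–Hubbard model.

    Returns the lower and upper critical chemical potentials mu_-(zt), mu_+(zt)
    from second-order perturbation theory in the superfluid order parameter
    (zt = coordination number times hopping).  Past the tip of the lobe the
    discriminant turns negative and NaN is returned.
    """
    disc = zt**2 - 2.0 * zt * U * (2 * n + 1) + U**2
    disc = np.where(disc >= 0.0, disc, np.nan)
    mid = 0.5 * (U * (2 * n - 1) - zt)
    half = 0.5 * np.sqrt(disc)
    return mid - half, mid + half

if __name__ == "__main__":
    zt = np.linspace(0.0, 0.2, 9)
    lo, hi = mott_lobe_boundary(zt, U=1.0, n=1)
    for x, a, b in zip(zt, lo, hi):
        print(f"zt/U = {x:.3f}:  mu/U in [{a:.3f}, {b:.3f}]")
```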
Mathematically, this can be stated as the requirement that except for case . Here, is a Wannier function for a particle in an optical lattice potential localized around site of the lattice and for the th Bloch band. Subtleties and approximations The tight-binding approximation significantly simplifies the second quantized Hamiltonian, though it introduces several limitations at the same time: For single-site states with several particles in a single state, the interactions may couple to higher Bloch bands, which contradicts base assumptions. Still, a single band model is able to address low-energy physics of such a setting but with parameters U and J becoming density-dependent. Instead of one parameter U, the interaction energy of n particles may be described by close, but not equal to U. When considering (fast) lattice dynamics, additional terms are added to the Hamiltonian so that the time-dependent Schrödinger equation is obeyed in the (time-dependent) Wannier function basis. The terms come from the Wannier functions' time dependence. Otherwise, the lattice dynamics may be incorporated by making the key parameters of the model time-dependent, varying with the instantaneous value of the optical potential. Experimental results Quantum phase transitions in the Bose–Hubbard model were experimentally observed by Greiner et al., and density dependent interaction parameters were observed by Immanuel Bloch's group. Single-atom resolution imaging of the Bose–Hubbard model has been possible since 2009 using quantum gas microscopes. Further applications The Bose–Hubbard model is of interest in the field of quantum computation and quantum information. Entanglement of ultra-cold atoms can be studied using this model. Numerical simulation In the calculation of low energy states the term proportional to means that large occupation of a single site is improbable, allowing for truncation of local Hilbert space to states containing at most particles. Then the local Hilbert space dimension is The dimension of the full Hilbert space grows exponentially with the number of lattice sites, limiting exact computer simulations of the entire Hilbert space to systems of 15-20 particles in 15-20 lattice sites. Experimental systems contain several million sites, with average filling above unity. One-dimensional lattices may be studied using density matrix renormalization group (DMRG) and related techniques such as time-evolving block decimation (TEBD). This includes calculating the ground state of the Hamiltonian for systems of thousands of particles on thousands of lattice sites, and simulating its dynamics governed by the time-dependent Schrödinger equation. Recently, two dimensional lattices have been studied using projected entangled pair states, a generalization of matrix product states in higher dimensions, both for the ground state and finite temperature. Higher dimensions are significantly more difficult due to the rapid growth of entanglement. All dimensions may be treated by quantum Monte Carlo algorithms, which provide a way to study properties of the Hamiltonian's thermal states, and in particular the ground state. Generalizations Bose–Hubbard-like Hamiltonians may be derived for different physical systems containing ultracold atom gas in the periodic potential. 
They include: systems with longer-ranged density-density interactions of the form , which may stabilise a supersolid phase for certain parameter values dimerised magnets, where spin-1/2 electrons are bound together in pairs called dimers that have bosonic excitation statistics and are described by a Bose–Hubbard model long-range dipolar interaction systems with interaction-induced tunneling terms internal spin structure of atoms, for example due to trapping an entire degenerate manifold of hyperfine spin states (for F=1 it leads to the spin-1 Bose–Hubbard model) situations where the gas experiences an additional potential—for example, in disordered systems. The disorder might be realised by a speckle pattern, or using a second, incommensurate, weaker, optical lattice. In the latter case inclusion of the disorder amounts to including extra term of the form: . See also Jaynes–Cummings–Hubbard model References Quantum lattice models
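To make the truncation and the exponential growth discussed in the numerical-simulation section above concrete, the following is a minimal brute-force exact-diagonalization sketch for a very small Bose–Hubbard chain, with each site truncated to at most n_max bosons and the chemical potential included (so the calculation is grand canonical). It is only an illustration of the truncated local Hilbert space, not one of the DMRG/TEBD or quantum Monte Carlo methods mentioned in the text, and the function names are assumptions made for this example.

```python
import numpy as np
from functools import reduce

def bose_hubbard_ground_energy(L=3, n_max=2, t=1.0, U=4.0, mu=2.0, periodic=False):
    """Ground-state energy of a small Bose–Hubbard chain by exact diagonalization.

    Each site is truncated to occupations 0..n_max, so the local dimension is
    n_max + 1 and the full Hilbert space has (n_max + 1)**L states, which is why
    this brute-force approach only works for a handful of sites.
    """
    d = n_max + 1
    b = np.diag(np.sqrt(np.arange(1, d)), k=1)   # annihilation operator
    bd = b.T                                      # creation operator
    num = bd @ b                                  # number operator
    eye = np.eye(d)

    def site_op(op, site):
        """Embed a single-site operator at position `site` of the chain."""
        ops = [eye] * L
        ops[site] = op
        return reduce(np.kron, ops)

    H = np.zeros((d**L, d**L))
    for i in range(L):
        H += 0.5 * U * site_op(num @ (num - eye), i) - mu * site_op(num, i)
    bonds = [(i, i + 1) for i in range(L - 1)] + ([(L - 1, 0)] if periodic else [])
    for i, j in bonds:
        hop = site_op(bd, i) @ site_op(b, j)
        H += -t * (hop + hop.T)
    return np.linalg.eigvalsh(H)[0]

print(bose_hubbard_ground_energy(L=3, n_max=2, t=1.0, U=4.0, mu=2.0))
```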
Bose–Hubbard model
Physics
2,285
5,066,643
https://en.wikipedia.org/wiki/HD%2079447
HD 79447 is a single star in the southern constellation of Carina. It has the Bayer designation i Carinae, while HD 79447 is the identifier from the Henry Draper catalogue. This star has a blue-white hue and is visible to the naked eye with an apparent visual magnitude of +3.96. It is located at a distance of approximately 540 light years from the Sun based on parallax, and has an absolute magnitude of −2.14. The star is drifting further away with a radial velocity of +18 km/s. It is a candidate member of the Lower Centaurus–Crux group of the Sco OB2 association. This object is a B-type main-sequence star with a stellar classification of B3V. A surface magnetic field has been detected with a strength on the order of . It has an estimated age of around 39 million years with no measured spin rate. The star has about 5.6 times the radius of the Sun and 7 times the Sun's mass. It is radiating over two thousand times the luminosity of the Sun from its photosphere at an effective temperature of 18,900 K. References B-type main-sequence stars Lower Centaurus Crux Carina (constellation) Carinae, i Durchmusterung objects 079447 045101 3663
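The absolute magnitude quoted in this entry follows from the apparent magnitude and the distance through the distance modulus m − M = 5 log10(d / 10 pc). The short Python check below uses the rounded figures given above (V = +3.96 at roughly 540 light-years), so it only approximately reproduces the quoted −2.14 and ignores interstellar extinction.

```python
import math

LY_PER_PARSEC = 3.26156  # light-years per parsec

def absolute_magnitude(m_apparent, distance_ly):
    """Absolute magnitude from apparent magnitude and distance (extinction ignored)."""
    distance_pc = distance_ly / LY_PER_PARSEC
    return m_apparent - 5.0 * math.log10(distance_pc / 10.0)

# Rounded values quoted for HD 79447:
print(round(absolute_magnitude(3.96, 540.0), 2))   # about -2.13, consistent with the quoted -2.14
```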
HD 79447
Astronomy
278
8,536,216
https://en.wikipedia.org/wiki/Generalized%20Poincar%C3%A9%20conjecture
In the mathematical area of topology, the generalized Poincaré conjecture is a statement that a manifold that is a homotopy sphere is a sphere. More precisely, one fixes a category of manifolds: topological (Top), piecewise linear (PL), or differentiable (Diff). Then the statement is: Every homotopy sphere (a closed n-manifold which is homotopy equivalent to the n-sphere) in the chosen category (i.e. topological manifolds, PL manifolds, or smooth manifolds) is isomorphic in the chosen category (i.e. homeomorphic, PL-isomorphic, or diffeomorphic) to the standard n-sphere. The name derives from the Poincaré conjecture, which was made for (topological or PL) manifolds of dimension 3, where being a homotopy sphere is equivalent to being simply connected and closed. The generalized Poincaré conjecture is known to be true or false in a number of instances, due to the work of many distinguished topologists, including the Fields medal awardees John Milnor, Steve Smale, Michael Freedman, and Grigori Perelman. Status Here is a summary of the status of the generalized Poincaré conjecture in various settings. Top: True in all dimensions. PL: True in dimensions other than 4; unknown in dimension 4, where it is equivalent to Diff. Diff: False generally, with the first known counterexample in dimension 7. True in some dimensions including 1, 2, 3, 5, 6, 12, 56 and 61. This list includes all odd dimensions for which the conjecture is true. For even dimensions, it is true only for those on the list, possibly dimension 4, and possibly some additional dimensions (though it is conjectured that there are none such). The case of dimension 4 is equivalent to PL. Thus the veracity of the Poincaré conjectures is different in each category Top, PL, and Diff. In general, the notion of isomorphism differs among the categories, but it is the same in dimension 3 and below. In dimension 4, PL and Diff agree, but Top differs. In dimensions above 6 they all differ. In dimensions 5 and 6 every PL manifold admits an infinitely differentiable structure that is so-called Whitehead compatible. History The cases n = 1 and 2 have long been known by the classification of manifolds in those dimensions. For a PL or smooth homotopy n-sphere, in 1960 Stephen Smale proved for that it was homeomorphic to the n-sphere and subsequently extended his proof to ; he received a Fields Medal for his work in 1966. Shortly after Smale's announcement of a proof, John Stallings gave a different proof for dimensions at least 7 that a PL homotopy n-sphere was homeomorphic to the n-sphere, using the notion of "engulfing". E. C. Zeeman modified Stallings's construction to work in dimensions 5 and 6. In 1962, Smale proved that a PL homotopy n-sphere is PL-isomorphic to the standard PL n-sphere for n at least 5. In 1966, M. H. A. Newman extended PL engulfing to the topological situation and proved that for a topological homotopy n-sphere is homeomorphic to the n-sphere. Michael Freedman solved the topological case in 1982 and received a Fields Medal in 1986. The initial proof consisted of a 50-page outline, with many details missing. Freedman gave a series of lectures at the time, convincing experts that the proof was correct. A project to produce a written version of the proof with background and all details filled in began in 2013, with Freedman's support.
The project's output, edited by Stefan Behrens, Boldizsar Kalmar, Min Hoon Kim, Mark Powell, and Arunima Ray, with contributions from 20 mathematicians, was published in August 2021 in the form of a 496-page book, The Disc Embedding Theorem. Grigori Perelman solved the three-dimensional case (where the topological, PL, and differentiable cases all coincide) in 2003 in a sequence of three papers. He was offered a Fields Medal in August 2006 and the Millennium Prize from the Clay Mathematics Institute in March 2010, but declined both. Exotic spheres The generalized Poincaré conjecture is true topologically, but false smoothly in some dimensions. This results from the construction of the exotic spheres, manifolds that are homeomorphic, but not diffeomorphic, to the standard sphere, which can be interpreted as non-standard smooth structures on the standard (topological) sphere. Thus the homotopy spheres that John Milnor produced are homeomorphic (Top-isomorphic, and indeed piecewise linear homeomorphic) to the standard sphere, but are not diffeomorphic (Diff-isomorphic) to it, and thus are exotic spheres. Michel Kervaire and Milnor showed that the oriented 7-sphere has 28 different smooth structures (or 15 ignoring orientations), and in higher dimensions there are usually many different smooth structures on a sphere. It is suspected that certain differentiable structures on the 4-sphere, called Gluck twists, are not isomorphic to the standard one, but at the moment there are no known topological invariants capable of distinguishing different smooth structures on a 4-sphere. PL For piecewise linear manifolds, the Poincaré conjecture is true except possibly in dimension 4, where the answer is unknown, and equivalent to the smooth case. In other words, every compact PL manifold of dimension not equal to 4 that is homotopy equivalent to a sphere is PL isomorphic to a sphere. References Geometric topology Homotopy theory Conjectures
Generalized Poincaré conjecture
Mathematics
1,182
7,069,430
https://en.wikipedia.org/wiki/Kurtosis%20risk
In statistics and decision theory, kurtosis risk is the risk that results when a statistical model assumes the normal distribution, but is applied to observations that have a tendency to occasionally be much farther (in terms of number of standard deviations) from the average than is expected for a normal distribution. Overview Kurtosis risk applies to any kurtosis-related quantitative model that assumes the normal distribution for certain of its independent variables when the latter may in fact have kurtosis much greater than does the normal distribution. Kurtosis risk is commonly referred to as "fat tail" risk. The "fat tail" metaphor explicitly describes the situation of having more observations at either extreme than the tails of the normal distribution would suggest; therefore, the tails are "fatter". Ignoring kurtosis risk will cause any model to understate the risk of variables with high kurtosis. For instance, Long-Term Capital Management, a hedge fund cofounded by Myron Scholes, ignored kurtosis risk to its detriment. After four successful years, this hedge fund had to be bailed out by major investment banks in the late 1990s because it understated the kurtosis of many financial securities underlying the fund's own trading positions. Research by Mandelbrot Benoit Mandelbrot, a French mathematician, extensively researched this issue. He felt that the extensive reliance on the normal distribution for much of the body of modern finance and investment theory is a serious flaw of any related models including the Black–Scholes option model developed by Myron Scholes and Fischer Black, and the capital asset pricing model developed by William F. Sharpe. Mandelbrot explained his views and alternative finance theory in his book: The (Mis)Behavior of Markets: A Fractal View of Risk, Ruin, and Reward published on August 3, 2004. See also Kurtosis Skewness risk Stochastic volatility Holy grail distribution Taleb distribution The Black Swan: The Impact of the Highly Improbable by Nassim Nicholas Taleb Notes References Premaratne, G., Bera, A. K. (2000). Modeling Asymmetry and Excess Kurtosis in Stock Return Data. Office of Research Working Paper Number 00-0123, University of Illinois Normal distribution Investment Risk analysis Mathematical finance
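The understatement described above can be made concrete with a short numerical sketch. The following Python snippet is an illustration added here, not part of any cited model; the sample size, degrees of freedom and the 4-sigma threshold are arbitrary choices. It compares a normal sample with a fat-tailed Student-t sample of the same variance: the t sample shows much higher excess kurtosis and many more extreme observations, which is exactly the risk a normal-distribution model fails to capture.

```python
# Illustrative sketch (not from the article): a fat-tailed Student-t sample has
# higher excess kurtosis and far more extreme observations than a normal sample
# of the same variance. Sample size, degrees of freedom and the 4-sigma
# threshold are arbitrary choices made for this example.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 1_000_000

samples = {
    "normal": rng.standard_normal(n),
    "Student-t (df=5)": rng.standard_t(df=5, size=n),
}

for name, sample in samples.items():
    z = (sample - sample.mean()) / sample.std()   # standardise to unit variance
    excess_kurtosis = stats.kurtosis(z)            # approximately 0 for a normal sample
    tail_freq = np.mean(np.abs(z) > 4)             # share of observations beyond 4 sigma
    print(f"{name:18s} excess kurtosis = {excess_kurtosis:5.2f}, "
          f"P(|z| > 4) = {tail_freq:.2e}")
```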
Kurtosis risk
Mathematics
464
16,528,934
https://en.wikipedia.org/wiki/A%20Centauri
The Bayer designations A Centauri and a Centauri represent different stars. Due to technical limitations, both designations link here. A Centauri, HD 100673, a main sequence star a Centauri, V761 Centauri, a variable star See also α Centauri, Alpha Centauri α1 Centauri, Alpha Centauri A (HD 128620) α2 Centauri, Alpha Centauri B (HD 128621) Alpha Centauri (disambiguation) 1 Centauri Centauri, a Centaurus
A Centauri
Astronomy
126
61,681,836
https://en.wikipedia.org/wiki/AnIML
The Analytical Information Markup Language (AnIML) is an open ASTM XML standard for storing and sharing any analytical chemistry and biological data. AnIML and FAIR data A main reason for using AnIML is that FAIR data (Findable, Accessible, Interoperable and Reusable) standards are automatically implemented. As AnIML's structure is human-readable, Accessibility is satisfied. Interoperability, Reusability and Findability are ensured by the AnIML Core and AnIML Technique Definitions. History AnIML was developed continuously from 2003 to 2020. The last AnIML Core version update was made in 2010. So far, neither a standardisation document nor public example files have been published. The standard exists only in pre-release form. Architecture AnIML is an XML standard that consists of two logical layers: AnIML Core AnIML Technique Definitions Additionally, AnIML Technique Definition Documents apply constraints to the AnIML Core and are specified by the AnIML Technique Definitions. The AnIML Core consists of a set of rules defining the structure of the XML document, providing a universal container for arbitrary analytical data. AnIML Technique Definitions describe how to use the AnIML Core to record experiments of a particular scientific discipline. There is a strong similarity between the mechanisms of AnIML and the AVI format. The AnIML Core defines the data container, whereas the AnIML Technique Definitions act like an AVI codec: they define how the data must be structured and labeled. Technique Definitions are XML documents, specified by the Technique Schema. References External links official website XML-based standards Cheminformatics Bioinformatics Digital container formats
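Because AnIML documents are ordinary XML, they can be inspected with general-purpose XML tooling. The sketch below is purely illustrative: it uses Python's standard library to print the element hierarchy of an arbitrary XML file. The file name is hypothetical, and no AnIML-specific element names are assumed, since, as noted above, no public example files have been published.

```python
# Minimal sketch: inspecting the element hierarchy of an XML document with the
# Python standard library. "example.animl" is a hypothetical file name; no
# specific AnIML element names are assumed here.
import xml.etree.ElementTree as ET

def print_tree(element, depth=0):
    """Recursively print each element's tag and attributes, indented by depth."""
    print("  " * depth + element.tag, dict(element.attrib))
    for child in element:
        print_tree(child, depth + 1)

tree = ET.parse("example.animl")   # any well-formed XML document
print_tree(tree.getroot())
```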
AnIML
Chemistry,Technology,Engineering,Biology
354
37,196
https://en.wikipedia.org/wiki/Causality
Causality is an influence by which one event, process, state, or object (a cause) contributes to the production of another event, process, state, or object (an effect) where the cause is at least partly responsible for the effect, and the effect is at least partly dependent on the cause. The cause of something may also be described as the reason for the event or process. In general, a process can have multiple causes, which are also said to be causal factors for it, and all lie in its past. An effect can in turn be a cause of, or causal factor for, many other effects, which all lie in its future. Some writers have held that causality is metaphysically prior to notions of time and space. Causality is an abstraction that indicates how the world progresses. As such it is a basic concept; it is more apt to be an explanation of other concepts of progression than something to be explained by other more fundamental concepts. The concept is like those of agency and efficacy. For this reason, a leap of intuition may be needed to grasp it. Accordingly, causality is implicit in the structure of ordinary language, as well as explicit in the language of scientific causal notation. In English studies of Aristotelian philosophy, the word "cause" is used as a specialized technical term, the translation of Aristotle's term αἰτία, by which Aristotle meant "explanation" or "answer to a 'why' question". Aristotle categorized the four types of answers as material, formal, efficient, and final "causes". In this case, the "cause" is the explanans for the explanandum, and failure to recognize that different kinds of "cause" are being considered can lead to futile debate. Of Aristotle's four explanatory modes, the one nearest to the concerns of the present article is the "efficient" one. David Hume, as part of his opposition to rationalism, argued that pure reason alone cannot prove the reality of efficient causality; instead, he appealed to custom and mental habit, observing that all human knowledge derives solely from experience. The topic of causality remains a staple in contemporary philosophy. Concept Metaphysics The nature of cause and effect is a concern of the subject known as metaphysics. Kant thought that time and space were notions prior to human understanding of the progress or evolution of the world, and he also recognized the priority of causality. But he did not have the understanding that came with knowledge of Minkowski geometry and the special theory of relativity, that the notion of causality can be used as a prior foundation from which to construct notions of time and space. Ontology A general metaphysical question about cause and effect is: "what kind of entity can be a cause, and what kind of entity can be an effect?" One viewpoint on this question is that cause and effect are of one and the same kind of entity, causality being an asymmetric relation between them. That is to say, it would make good sense grammatically to say either "A is the cause and B the effect" or "B is the cause and A the effect", though only one of those two can be actually true. In this view, one opinion, proposed as a metaphysical principle in process philosophy, is that every cause and every effect is respectively some process, event, becoming, or happening. An example is 'his tripping over the step was the cause, and his breaking his ankle the effect'. Another view is that causes and effects are 'states of affairs', with the exact natures of those entities being more loosely defined than in process philosophy. 
Another viewpoint on this question is the more classical one, that a cause and its effect can be of different kinds of entity. For example, in Aristotle's efficient causal explanation, an action can be a cause while an enduring object is its effect. For example, the generative actions of his parents can be regarded as the efficient cause, with Socrates being the effect, Socrates being regarded as an enduring object, in philosophical tradition called a 'substance', as distinct from an action. Epistemology Since causality is a subtle metaphysical notion, considerable intellectual effort, along with exhibition of evidence, is needed to establish knowledge of it in particular empirical circumstances. According to David Hume, the human mind is unable to perceive causal relations directly. On this ground, the scholar distinguished between the regularity view of causality and the counterfactual notion. According to the counterfactual view, X causes Y if and only if, without X, Y would not exist. Hume interpreted the latter as an ontological view, i.e., as a description of the nature of causality but, given the limitations of the human mind, advised using the former (stating, roughly, that X causes Y if and only if the two events are spatiotemporally conjoined, and X precedes Y) as an epistemic definition of causality. We need an epistemic concept of causality in order to distinguish between causal and noncausal relations. The contemporary philosophical literature on causality can be divided into five big approaches to causality. These include the (mentioned above) regularity, probabilistic, counterfactual, mechanistic, and manipulationist views. The five approaches can be shown to be reductive, i.e., define causality in terms of relations of other types. According to this reading, they define causality in terms of, respectively, empirical regularities (constant conjunctions of events), changes in conditional probabilities, counterfactual conditions, mechanisms underlying causal relations, and invariance under intervention. Geometrical significance Causality has the properties of antecedence and contiguity. These are topological, and are ingredients for space-time geometry. As developed by Alfred Robb, these properties allow the derivation of the notions of time and space. Max Jammer writes "the Einstein postulate ... opens the way to a straightforward construction of the causal topology ... of Minkowski space." Causal efficacy propagates no faster than light. Thus, the notion of causality is metaphysically prior to the notions of time and space. In practical terms, this is because use of the relation of causality is necessary for the interpretation of empirical experiments. Interpretation of experiments is needed to establish the physical and geometrical notions of time and space. Volition The deterministic world-view holds that the history of the universe can be exhaustively represented as a progression of events following one after the other as cause and effect. Incompatibilism holds that determinism is incompatible with free will, so if determinism is true, "free will" does not exist. Compatibilism, on the other hand, holds that determinism is compatible with, or even necessary for, free will. Necessary and sufficient causes Causes may sometimes be distinguished into two types: necessary and sufficient. A third type of causation, which requires neither necessity nor sufficiency, but which contributes to the effect, is called a "contributory cause". 
Necessary causes If x is a necessary cause of y, then the presence of y necessarily implies the prior occurrence of x. The presence of x, however, does not imply that y will occur. Sufficient causes If x is a sufficient cause of y, then the presence of x necessarily implies the subsequent occurrence of y. However, another cause z may alternatively cause y. Thus the presence of y does not imply the prior occurrence of x. Contributory causes For some specific effect, in a singular case, a factor that is a contributory cause is one among several co-occurrent causes. It is implicit that all of them are contributory. For the specific effect, in general, there is no implication that a contributory cause is necessary, though it may be so. In general, a factor that is a contributory cause is not sufficient, because it is by definition accompanied by other causes, which would not count as causes if it were sufficient. For the specific effect, a factor that is on some occasions a contributory cause might on some other occasions be sufficient, but on those other occasions it would not be merely contributory. J. L. Mackie argues that usual talk of "cause" in fact refers to INUS conditions (insufficient but non-redundant parts of a condition which is itself unnecessary but sufficient for the occurrence of the effect). An example is a short circuit as a cause for a house burning down. Consider the collection of events: the short circuit, the proximity of flammable material, and the absence of firefighters. Together these are unnecessary but sufficient to the house's burning down (since many other collections of events certainly could have led to the house burning down, for example shooting the house with a flamethrower in the presence of oxygen and so forth). Within this collection, the short circuit is an insufficient (since the short circuit by itself would not have caused the fire) but non-redundant (because the fire would not have happened without it, everything else being equal) part of a condition which is itself unnecessary but sufficient for the occurrence of the effect. So, the short circuit is an INUS condition for the occurrence of the house burning down. Contrasted with conditionals Conditional statements are not statements of causality. An important distinction is that statements of causality require the antecedent to precede or coincide with the consequent in time, whereas conditional statements do not require this temporal order. Confusion commonly arises since many different statements in English may be presented using "If ..., then ..." form (and, arguably, because this form is far more commonly used to make a statement of causality). The two types of statements are distinct, however. For example, all of the following statements are true when interpreting "If ..., then ..." as the material conditional: If Barack Obama is president of the United States in 2011, then Germany is in Europe. If George Washington is president of the United States in 2011, then . The first is true since both the antecedent and the consequent are true. The second is true in sentential logic and indeterminate in natural language, regardless of the consequent statement that follows, because the antecedent is false. The ordinary indicative conditional has somewhat more structure than the material conditional. For instance, although the first is the closest, neither of the preceding two statements seems true as an ordinary indicative reading. 
But the sentence: If Shakespeare of Stratford-on-Avon did not write Macbeth, then someone else did. intuitively seems to be true, even though there is no straightforward causal relation in this hypothetical situation between Shakespeare's not writing Macbeth and someone else's actually writing it. Another sort of conditional, the counterfactual conditional, has a stronger connection with causality, yet even counterfactual statements are not all examples of causality. Consider the following two statements: If A were a triangle, then A would have three sides. If switch S were thrown, then bulb B would light. In the first case, it would be incorrect to say that A's being a triangle caused it to have three sides, since the relationship between triangularity and three-sidedness is that of definition. The property of having three sides actually determines A's state as a triangle. Nonetheless, even when interpreted counterfactually, the first statement is true. An early version of Aristotle's "four cause" theory is described as recognizing "essential cause". In this version of the theory, that the closed polygon has three sides is said to be the "essential cause" of its being a triangle. This use of the word 'cause' is of course now far obsolete. Nevertheless, it is within the scope of ordinary language to say that it is essential to a triangle that it has three sides. A full grasp of the concept of conditionals is important to understanding the literature on causality. In everyday language, loose conditional statements are often enough made, and need to be interpreted carefully. Questionable cause Fallacies of questionable cause, also known as causal fallacies, non-causa pro causa (Latin for "non-cause for cause"), or false cause, are informal fallacies where a cause is incorrectly identified. Theories Counterfactual theories Counterfactual theories define causation in terms of a counterfactual relation, and can often be seen as "floating" their account of causality on top of an account of the logic of counterfactual conditionals. Counterfactual theories reduce facts about causation to facts about what would have been true under counterfactual circumstances. The idea is that causal relations can be framed in the form of "Had C not occurred, E would not have occurred." This approach can be traced back to David Hume's definition of the causal relation as that "where, if the first object had not been, the second never had existed." More full-fledged analysis of causation in terms of counterfactual conditionals only came in the 20th century after development of the possible world semantics for the evaluation of counterfactual conditionals. In his 1973 paper "Causation," David Lewis proposed the following definition of the notion of causal dependence: An event E causally depends on C if, and only if, (i) if C had occurred, then E would have occurred, and (ii) if C had not occurred, then E would not have occurred. Causation is then analyzed in terms of counterfactual dependence. That is, C causes E if and only if there exists a sequence of events C, D1, D2, ... Dk, E such that each event in the sequence counterfactually depends on the previous. This chain of causal dependence may be called a mechanism. Note that the analysis does not purport to explain how we make causal judgements or how we reason about causation, but rather to give a metaphysical account of what it is for there to be a causal relation between some pair of events. 
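Lewis's two conditions can be made concrete with a toy deterministic model of the switch-and-bulb example mentioned earlier. The following sketch is invented purely for illustration and evaluates the effect under the two hypothetical circumstances ("C occurred" and "C did not occur"); it does not implement Lewis's possible-world semantics.

```python
# Toy sketch of counterfactual dependence in the sense of Lewis's definition
# quoted above, using the switch/bulb example from the text. The deterministic
# "model" is invented for illustration only.
def bulb_lights(switch_thrown: bool, power_on: bool) -> bool:
    """Effect E: the bulb lights exactly when the switch is thrown and power is on."""
    return switch_thrown and power_on

def causally_depends(effect, cause_true_world, cause_false_world) -> bool:
    """Check Lewis's two conditions for causal dependence of E on C:
    (i) if C had occurred, E would have occurred;
    (ii) if C had not occurred, E would not have occurred."""
    return effect(**cause_true_world) and not effect(**cause_false_world)

background = {"power_on": True}
print(causally_depends(
    bulb_lights,
    cause_true_world={"switch_thrown": True, **background},
    cause_false_world={"switch_thrown": False, **background},
))  # True: with power on, the bulb's lighting counterfactually depends on the switch
```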
If correct, the analysis has the power to explain certain features of causation. Knowing that causation is a matter of counterfactual dependence, we may reflect on the nature of counterfactual dependence to account for the nature of causation. For example, in his paper "Counterfactual Dependence and Time's Arrow," Lewis sought to account for the time-directedness of counterfactual dependence in terms of the semantics of the counterfactual conditional. If correct, this theory can serve to explain a fundamental part of our experience, which is that we can causally affect the future but not the past. One challenge for the counterfactual account is overdetermination, whereby an effect has multiple causes. For instance, suppose Alice and Bob both throw bricks at a window and it breaks. If Alice hadn't thrown the brick, then it still would have broken, suggesting that Alice wasn't a cause; however, intuitively, Alice did cause the window to break. The Halpern-Pearl definitions of causality take account of examples like these. The first and third Halpern-Pearl conditions are easiest to understand: AC1 requires that Alice threw the brick and the window broke in the actual world. AC3 requires that Alice throwing the brick is a minimal cause (cf. blowing a kiss and throwing a brick). Taking the "updated" version of AC2(a), the basic idea is that we have to find a set of variables and settings thereof such that preventing Alice from throwing a brick also stops the window from breaking. One way to do this is to stop Bob from throwing the brick. Finally, for AC2(b), we have to hold things as per AC2(a) and show that Alice throwing the brick breaks the window. (The full definition is a little more involved, involving checking all subsets of variables.) Probabilistic causation Interpreting causation as a deterministic relation means that if A causes B, then A must always be followed by B. In this sense, war does not cause deaths, nor does smoking cause cancer or emphysema. As a result, many turn to a notion of probabilistic causation. Informally, A ("The person is a smoker") probabilistically causes B ("The person has now or will have cancer at some time in the future"), if the information that A occurred increases the likelihood of B's occurrence. Formally, P{B|A} ≥ P{B} where P{B|A} is the conditional probability that B will occur given the information that A occurred, and P{B} is the probability that B will occur having no knowledge whether A did or did not occur. This intuitive condition is not adequate as a definition for probabilistic causation because it is too general and thus does not meet our intuitive notion of cause and effect. For example, if A denotes the event "The person is a smoker," B denotes the event "The person now has or will have cancer at some time in the future" and C denotes the event "The person now has or will have emphysema some time in the future," then the following three relationships hold: P{B|A} ≥ P{B}, P{C|A} ≥ P{C} and P{B|C} ≥ P{B}. The last relationship states that knowing that the person has emphysema increases the likelihood that he will have cancer. The reason for this is that having the information that the person has emphysema increases the likelihood that the person is a smoker, thus indirectly increasing the likelihood that the person will have cancer. However, we would not want to conclude that having emphysema causes cancer. Thus, we need additional conditions such as temporal relationship of A to B and a rational explanation as to the mechanism of action. 
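The three inequalities can be checked on simulated data. In the sketch below (an illustration with invented probabilities, not empirical figures), smoking raises the probability of both cancer and emphysema, while emphysema has no causal effect on cancer at all; nevertheless, the conditional probability of cancer given emphysema exceeds the unconditional probability, exactly as described above.

```python
# Numerical sketch of the smoking example above. All probabilities are invented
# for illustration: smoking (A) raises the chance of both cancer (B) and
# emphysema (C); emphysema has no causal effect on cancer in this model, yet
# P(B|C) still exceeds P(B) because both share the common cause A.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

A = rng.random(n) < 0.3                          # smoker
B = rng.random(n) < np.where(A, 0.10, 0.01)      # cancer, raised only by smoking
C = rng.random(n) < np.where(A, 0.20, 0.02)      # emphysema, raised only by smoking

p = lambda event: event.mean()
p_given = lambda event, cond: event[cond].mean()

print("P(B|A) =", p_given(B, A), ">= P(B) =", p(B))
print("P(C|A) =", p_given(C, A), ">= P(C) =", p(C))
print("P(B|C) =", p_given(B, C), ">= P(B) =", p(B))   # holds despite no C -> B causation
```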
It is hard to quantify this last requirement and thus different authors prefer somewhat different definitions. Causal calculus When experimental interventions are infeasible or illegal, the derivation of a cause-and-effect relationship from observational studies must rest on some qualitative theoretical assumptions, for example, that symptoms do not cause diseases, usually expressed in the form of missing arrows in causal graphs such as Bayesian networks or path diagrams. The theory underlying these derivations relies on the distinction between conditional probabilities, as in P(cancer | smoking), and interventional probabilities, as in P(cancer | do(smoking)). The former reads: "the probability of finding cancer in a person known to smoke, having started, unforced by the experimenter, to do so at an unspecified time in the past", while the latter reads: "the probability of finding cancer in a person forced by the experimenter to smoke at a specified time in the past". The former is a statistical notion that can be estimated by observation with negligible intervention by the experimenter, while the latter is a causal notion which is estimated in an experiment with an important controlled randomized intervention. It is specifically characteristic of quantal phenomena that observations defined by incompatible variables always involve important intervention by the experimenter, as described quantitatively by the observer effect. In classical thermodynamics, processes are initiated by interventions called thermodynamic operations. In other branches of science, for example astronomy, the experimenter can often observe with negligible intervention. The theory of "causal calculus" (also known as do-calculus, Judea Pearl's Causal Calculus, Calculus of Actions) permits one to infer interventional probabilities from conditional probabilities in causal Bayesian networks with unmeasured variables. One very practical result of this theory is the characterization of confounding variables, namely, a sufficient set of variables that, if adjusted for, would yield the correct causal effect between variables of interest. It can be shown that a sufficient set for estimating the causal effect of X on Y is any set of non-descendants of X that d-separate X from Y after removing all arrows emanating from X. This criterion, called "backdoor", provides a mathematical definition of "confounding" and helps researchers identify accessible sets of variables worthy of measurement. Structure learning While derivations in causal calculus rely on the structure of the causal graph, parts of the causal structure can, under certain assumptions, be learned from statistical data. The basic idea goes back to Sewall Wright's 1921 work on path analysis. A "recovery" algorithm was developed by Rebane and Pearl (1987) which rests on Wright's distinction between the three possible types of causal substructures allowed in a directed acyclic graph (DAG): a chain X → Y → Z (type 1), a fork X ← Y → Z (type 2), and a collider X → Y ← Z (type 3). Type 1 and type 2 represent the same statistical dependencies (i.e., X and Z are independent given Y) and are, therefore, indistinguishable within purely cross-sectional data. Type 3, however, can be uniquely identified, since X and Z are marginally independent and all other pairs are dependent. Thus, while the skeletons (the graphs stripped of arrows) of these three triplets are identical, the directionality of the arrows is partially identifiable. The same distinction applies when X and Z have common ancestors, except that one must first condition on those ancestors. 
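A short simulation can display the dependence pattern that the Rebane-Pearl recovery algorithm exploits. In the sketch below (arbitrary coefficients, purely illustrative), the chain and the fork are indistinguishable from the data alone (X and Z correlated marginally, roughly independent given Y), while the collider shows the opposite signature. Conditioning on Y is approximated here by correlating the residuals from linear regressions on Y.

```python
# Illustrative simulation of the three substructures discussed above
# (chain X -> Y -> Z, fork X <- Y -> Z, collider X -> Y <- Z). Coefficients
# and noise levels are arbitrary.
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
noise = lambda: rng.standard_normal(n)

def partial_corr(x, z, y):
    """Correlation of x and z after removing the linear effect of y from each."""
    rx = x - np.polyval(np.polyfit(y, x, 1), y)
    rz = z - np.polyval(np.polyfit(y, z, 1), y)
    return np.corrcoef(rx, rz)[0, 1]

structures = {}
x = noise(); y = x + noise(); z = y + noise()
structures["chain    X->Y->Z"] = (x, y, z)
y = noise(); x = y + noise(); z = y + noise()
structures["fork     X<-Y->Z"] = (x, y, z)
x = noise(); z = noise(); y = x + z + noise()
structures["collider X->Y<-Z"] = (x, y, z)

for name, (x, y, z) in structures.items():
    print(f"{name}: corr(X,Z) = {np.corrcoef(x, z)[0, 1]:+.2f}, "
          f"corr(X,Z | Y) = {partial_corr(x, z, y):+.2f}")
```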
Algorithms have been developed to systematically determine the skeleton of the underlying graph and, then, orient all arrows whose directionality is dictated by the conditional independencies observed. Alternative methods of structure learning search through the many possible causal structures among the variables, and remove ones which are strongly incompatible with the observed correlations. In general this leaves a set of possible causal relations, which should then be tested by analyzing time series data or, preferably, designing appropriately controlled experiments. In contrast with Bayesian Networks, path analysis (and its generalization, structural equation modeling), serve better to estimate a known causal effect or to test a causal model than to generate causal hypotheses. For nonexperimental data, causal direction can often be inferred if information about time is available. This is because (according to many, though not all, theories) causes must precede their effects temporally. This can be determined by statistical time series models, for instance, or with a statistical test based on the idea of Granger causality, or by direct experimental manipulation. The use of temporal data can permit statistical tests of a pre-existing theory of causal direction. For instance, our degree of confidence in the direction and nature of causality is much greater when supported by cross-correlations, ARIMA models, or cross-spectral analysis using vector time series data than by cross-sectional data. Derivation theories Nobel laureate Herbert A. Simon and philosopher Nicholas Rescher claim that the asymmetry of the causal relation is unrelated to the asymmetry of any mode of implication that contraposes. Rather, a causal relation is not a relation between values of variables, but a function of one variable (the cause) on to another (the effect). So, given a system of equations, and a set of variables appearing in these equations, we can introduce an asymmetric relation among individual equations and variables that corresponds perfectly to our commonsense notion of a causal ordering. The system of equations must have certain properties, most importantly, if some values are chosen arbitrarily, the remaining values will be determined uniquely through a path of serial discovery that is perfectly causal. They postulate the inherent serialization of such a system of equations may correctly capture causation in all empirical fields, including physics and economics. Manipulation theories Some theorists have equated causality with manipulability. Under these theories, x causes y only in the case that one can change x in order to change y. This coincides with commonsense notions of causations, since often we ask causal questions in order to change some feature of the world. For instance, we are interested in knowing the causes of crime so that we might find ways of reducing it. These theories have been criticized on two primary grounds. First, theorists complain that these accounts are circular. Attempting to reduce causal claims to manipulation requires that manipulation is more basic than causal interaction. But describing manipulations in non-causal terms has provided a substantial difficulty. The second criticism centers around concerns of anthropocentrism. It seems to many people that causality is some existing relationship in the world that we can harness for our desires. If causality is identified with our manipulation, then this intuition is lost. 
In this sense, it makes humans overly central to interactions in the world. Some attempts to defend manipulability theories are recent accounts that do not claim to reduce causality to manipulation. These accounts use manipulation as a sign or feature in causation without claiming that manipulation is more fundamental than causation. Process theories Some theorists are interested in distinguishing between causal processes and non-causal processes (Russell 1948; Salmon 1984). These theorists often want to distinguish between a process and a pseudo-process. As an example, a ball moving through the air (a process) is contrasted with the motion of a shadow (a pseudo-process). The former is causal in nature while the latter is not. Salmon (1984) claims that causal processes can be identified by their ability to transmit an alteration over space and time. An alteration of the ball (a mark by a pen, perhaps) is carried with it as the ball goes through the air. On the other hand, an alteration of the shadow (insofar as it is possible) will not be transmitted by the shadow as it moves along. These theorists claim that the important concept for understanding causality is not causal relationships or causal interactions, but rather identifying causal processes. The former notions can then be defined in terms of causal processes. A subgroup of the process theories is the mechanistic view on causality. It states that causal relations supervene on mechanisms. While the notion of mechanism is understood differently, the definition put forward by the group of philosophers referred to as the 'New Mechanists' dominate the literature. Fields Science For the scientific investigation of efficient causality, the cause and effect are each best conceived of as temporally transient processes. Within the conceptual frame of the scientific method, an investigator sets up several distinct and contrasting temporally transient material processes that have the structure of experiments, and records candidate material responses, normally intending to determine causality in the physical world. For instance, one may want to know whether a high intake of carrots causes humans to develop the bubonic plague. The quantity of carrot intake is a process that is varied from occasion to occasion. The occurrence or non-occurrence of subsequent bubonic plague is recorded. To establish causality, the experiment must fulfill certain criteria, only one example of which is mentioned here. For example, instances of the hypothesized cause must be set up to occur at a time when the hypothesized effect is relatively unlikely in the absence of the hypothesized cause; such unlikelihood is to be established by empirical evidence. A mere observation of a correlation is not nearly adequate to establish causality. In nearly all cases, establishment of causality relies on repetition of experiments and probabilistic reasoning. Hardly ever is causality established more firmly than as more or less probable. It is most convenient for establishment of causality if the contrasting material states of affairs are precisely matched, except for only one variable factor, perhaps measured by a real number. Physics One has to be careful in the use of the word cause in physics. Properly speaking, the hypothesized cause and the hypothesized effect are each temporally transient processes. For example, force is a useful concept for the explanation of acceleration, but force is not by itself a cause. More is needed. 
For example, a temporally transient process might be characterized by a definite change of force at a definite time. Such a process can be regarded as a cause. Causality is not inherently implied in equations of motion, but postulated as an additional constraint that needs to be satisfied (i.e. a cause always precedes its effect). This constraint has mathematical implications such as the Kramers-Kronig relations. Causality is one of the most fundamental and essential notions of physics. Causal efficacy cannot 'propagate' faster than light. Otherwise, reference coordinate systems could be constructed (using the Lorentz transform of special relativity) in which an observer would see an effect precede its cause (i.e. the postulate of causality would be violated). Causal notions appear in the context of the flow of mass-energy. Any actual process has causal efficacy that can propagate no faster than light. In contrast, an abstraction has no causal efficacy. Its mathematical expression does not propagate in the ordinary sense of the word, though it may refer to virtual or nominal 'velocities' with magnitudes greater than that of light. For example, wave packets are mathematical objects that have group velocity and phase velocity. The energy of a wave packet travels at the group velocity (under normal circumstances); since energy has causal efficacy, the group velocity cannot be faster than the speed of light. The phase of a wave packet travels at the phase velocity; since phase is not causal, the phase velocity of a wave packet can be faster than light. Causal notions are important in general relativity to the extent that the existence of an arrow of time demands that the universe's semi-Riemannian manifold be orientable, so that "future" and "past" are globally definable quantities. Engineering A causal system is a system with output and internal states that depends only on the current and previous input values. A system that has some dependence on input values from the future (in addition to possible past or current input values) is termed an acausal system, and a system that depends solely on future input values is an anticausal system. Acausal filters, for example, can only exist as postprocessing filters, because these filters can extract future values from a memory buffer or a file. We have to be very careful with causality in physics and engineering. Cellier, Elmqvist, and Otter describe causality forming the basis of physics as a misconception, because physics is essentially acausal. In their article they cite a simple example: "The relationship between voltage across and current through an electrical resistor can be described by Ohm's law: V = IR, yet, whether it is the current flowing through the resistor that causes a voltage drop, or whether it is the difference between the electrical potentials on the two wires that causes current to flow is, from a physical perspective, a meaningless question". In fact, if we explain cause-effect using the law, we need two explanations to describe an electrical resistor: as a voltage-drop-causer or as a current-flow-causer. There is no physical experiment in the world that can distinguish between action and reaction. Biology, medicine and epidemiology Austin Bradford Hill built upon the work of Hume and Popper and suggested in his paper "The Environment and Disease: Association or Causation?" 
that aspects of an association such as strength, consistency, specificity, and temporality be considered in attempting to distinguish causal from noncausal associations in the epidemiological situation. (See Bradford Hill criteria.) He did not note however, that temporality is the only necessary criterion among those aspects. Directed acyclic graphs (DAGs) are increasingly used in epidemiology to help enlighten causal thinking. Psychology Psychologists take an empirical approach to causality, investigating how people and non-human animals detect or infer causation from sensory information, prior experience and innate knowledge. Attribution: Attribution theory is the theory concerning how people explain individual occurrences of causation. Attribution can be external (assigning causality to an outside agent or force—claiming that some outside thing motivated the event) or internal (assigning causality to factors within the person—taking personal responsibility or accountability for one's actions and claiming that the person was directly responsible for the event). Taking causation one step further, the type of attribution a person provides influences their future behavior. The intention behind the cause or the effect can be covered by the subject of action. See also accident; blame; intent; and responsibility. Causal powers Whereas David Hume argued that causes are inferred from non-causal observations, Immanuel Kant claimed that people have innate assumptions about causes. Within psychology, Patricia Cheng attempted to reconcile the Humean and Kantian views. According to her power PC theory, people filter observations of events through an intuition that causes have the power to generate (or prevent) their effects, thereby inferring specific cause-effect relations. Causation and salience Our view of causation depends on what we consider to be the relevant events. Another way to view the statement, "Lightning causes thunder" is to see both lightning and thunder as two perceptions of the same event, viz., an electric discharge that we perceive first visually and then aurally. Naming and causality David Sobel and Alison Gopnik from the Psychology Department of UC Berkeley designed a device known as the blicket detector which would turn on when an object was placed on it. Their research suggests that "even young children will easily and swiftly learn about a new causal power of an object and spontaneously use that information in classifying and naming the object." Perception of launching events Some researchers such as Anjan Chatterjee at the University of Pennsylvania and Jonathan Fugelsang at the University of Waterloo are using neuroscience techniques to investigate the neural and psychological underpinnings of causal launching events in which one object causes another object to move. Both temporal and spatial factors can be manipulated. See Causal Reasoning (Psychology) for more information. Statistics and economics Statistics and economics usually employ pre-existing data or experimental data to infer causality by regression methods. The body of statistical techniques involves substantial use of regression analysis. 
Typically a linear relationship such as y_i = b_1 x_{1,i} + b_2 x_{2,i} + ... + b_k x_{k,i} + e_i is postulated, in which y_i is the ith observation of the dependent variable (hypothesized to be the caused variable), x_{j,i} for j = 1, ..., k is the ith observation on the jth independent variable (hypothesized to be a causative variable), and e_i is the error term for the ith observation (containing the combined effects of all other causative variables, which must be uncorrelated with the included independent variables). If there is reason to believe that none of the x_j's is caused by y, then estimates of the coefficients b_j are obtained. If the null hypothesis that b_j = 0 is rejected, then the alternative hypothesis that b_j ≠ 0 and equivalently that x_j causes y cannot be rejected. On the other hand, if the null hypothesis that b_j = 0 cannot be rejected, then equivalently the hypothesis of no causal effect of x_j on y cannot be rejected. Here the notion of causality is one of contributory causality as discussed above: If the true value b_j ≠ 0, then a change in x_j will result in a change in y unless some other causative variable(s), either included in the regression or implicit in the error term, change in such a way as to exactly offset its effect; thus a change in x_j is not sufficient to change y. Likewise, a change in x_j is not necessary to change y, because a change in y could be caused by something implicit in the error term (or by some other causative explanatory variable included in the model). The above way of testing for causality requires belief that there is no reverse causation, in which y would cause x_j. This belief can be established in one of several ways. First, the variable x_j may be a non-economic variable: for example, if rainfall amount is hypothesized to affect the futures price y of some agricultural commodity, it is impossible that in fact the futures price affects rainfall amount (provided that cloud seeding is never attempted). Second, the instrumental variables technique may be employed to remove any reverse causation by introducing a role for other variables (instruments) that are known to be unaffected by the dependent variable. Third, the principle that effects cannot precede causes can be invoked, by including on the right side of the regression only variables that precede in time the dependent variable; this principle is invoked, for example, in testing for Granger causality and in its multivariate analog, vector autoregression, both of which control for lagged values of the dependent variable while testing for causal effects of lagged independent variables. Regression analysis controls for other relevant variables by including them as regressors (explanatory variables). This helps to avoid false inferences of causality due to the presence of a third, underlying, variable that influences both the potentially causative variable and the potentially caused variable: its effect on the potentially caused variable is captured by directly including it in the regression, so that effect will not be picked up as an indirect effect through the potentially causative variable of interest. Given the above procedures, coincidental (as opposed to causal) correlation can be probabilistically rejected if data samples are large and if regression results pass cross-validation tests showing that the correlations hold even for data that were not used in the regression. Asserting with certitude that a common-cause is absent and the regression represents the true causal structure is in principle impossible. 
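The role of included regressors in blocking a common cause can be seen in a small simulated example. The sketch below uses invented data (not from any cited study) in which a confounder z drives both x and y while x has no true effect on y; a regression omitting z wrongly suggests a causal effect of x, while including z as a regressor removes the spurious coefficient, as the passage above describes.

```python
# Illustrative sketch of the regression logic described above, with invented
# data: z is a common cause of both x and y, and x itself has no effect on y.
# Omitting z from the regression makes x appear "causal"; including z as a
# regressor removes the spurious effect.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 50_000
z = rng.standard_normal(n)             # confounder (common cause)
x = z + rng.standard_normal(n)         # x is driven by z, not by y
y = 2.0 * z + rng.standard_normal(n)   # y is driven by z only; true effect of x is 0

naive = sm.OLS(y, sm.add_constant(np.column_stack([x]))).fit()
adjusted = sm.OLS(y, sm.add_constant(np.column_stack([x, z]))).fit()

print("coefficient on x, z omitted:  %+.2f" % naive.params[1])     # near +1 (spurious)
print("coefficient on x, z included: %+.2f" % adjusted.params[1])  # near 0
```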
The problem of omitted variable bias, however, has to be balanced against the risk of inserting causal colliders, in which the addition of a new variable induces a spurious correlation between otherwise independent variables via Berkson's paradox. Apart from constructing statistical models of observational and experimental data, economists use axiomatic (mathematical) models to infer and represent causal mechanisms. Highly abstract theoretical models that isolate and idealize one mechanism dominate microeconomics. In macroeconomics, economists use broad mathematical models that are calibrated on historical data. A subgroup of calibrated models, dynamic stochastic general equilibrium (DSGE) models, are employed to represent (in a simplified way) the whole economy and simulate changes in fiscal and monetary policy. Management For quality control in manufacturing in the 1960s, Kaoru Ishikawa developed a cause and effect diagram, known as an Ishikawa diagram or fishbone diagram. The diagram categorizes causes into six main categories, which are then sub-divided. Ishikawa's method identifies "causes" in brainstorming sessions conducted among various groups involved in the manufacturing process. These groups can then be labeled as categories in the diagrams. The use of these diagrams has now spread beyond quality control, and they are used in other areas of management and in design and engineering. Ishikawa diagrams have been criticized for failing to make the distinction between necessary conditions and sufficient conditions. It seems that Ishikawa was not even aware of this distinction. Humanities History In the discussion of history, events are sometimes considered as if in some way being agents that can then bring about other historical events. Thus, the combination of poor harvests, the hardships of the peasants, high taxes, lack of representation of the people, and kingly ineptitude are among the causes of the French Revolution. This is a somewhat Platonic and Hegelian view that reifies causes as ontological entities. In Aristotelian terminology, this use approximates to the case of the efficient cause. Some philosophers of history such as Arthur Danto have claimed that "explanations in history and elsewhere" describe "not simply an event—something that happens—but a change". Like many practicing historians, they treat causes as intersecting actions and sets of actions which bring about "larger changes", in Danto's words: to decide "what are the elements which persist through a change" is "rather simple" when treating an individual's "shift in attitude", but "it is considerably more complex and metaphysically challenging when we are interested in such a change as, say, the break-up of feudalism or the emergence of nationalism". Much of the historical debate about causes has focused on the relationship between communicative and other actions, between singular and repeated ones, and between actions, structures of action or group and institutional contexts and wider sets of conditions. John Gaddis has distinguished between exceptional and general causes (following Marc Bloch) and between "routine" and "distinctive links" in causal relationships: "in accounting for what happened at Hiroshima on August 6, 1945, we attach greater importance to the fact that President Truman ordered the dropping of an atomic bomb than to the decision of the Army Air Force to carry out his orders." He has also pointed to the difference between immediate, intermediate and distant causes. 
For his part, Christopher Lloyd puts forward four "general concepts of causation" used in history: the "metaphysical idealist concept, which asserts that the phenomena of the universe are products of or emanations from an omnipotent being or such final cause"; "the empiricist (or Humean) regularity concept, which is based on the idea of causation being a matter of constant conjunctions of events"; "the functional/teleological/consequential concept", which is "goal-directed, so that goals are causes"; and the "realist, structurist and dispositional approach, which sees relational structures and internal dispositions as the causes of phenomena". Law According to law and jurisprudence, legal cause must be demonstrated to hold a defendant liable for a crime or a tort (i.e. a civil wrong such as negligence or trespass). It must be proven that causality, or a "sufficient causal link" relates the defendant's actions to the criminal event or damage in question. Causation is also an essential legal element that must be proven to qualify for remedy measures under international trade law. History Hindu philosophy Vedic period (–500 BCE) literature has karma's Eastern origins. Karma is the belief held by Sanatana Dharma and major religions that a person's actions cause certain effects in the current life and/or in future life, positively or negatively. The various philosophical schools (darshanas) provide different accounts of the subject. The doctrine of satkaryavada affirms that the effect inheres in the cause in some way. The effect is thus either a real or apparent modification of the cause. The doctrine of asatkaryavada affirms that the effect does not inhere in the cause, but is a new arising. See Nyaya for some details of the theory of causation in the Nyaya school. In Brahma Samhita, Brahma describes Krishna as the prime cause of all causes. Bhagavad-gītā 18.14 identifies five causes for any action (knowing which it can be perfected): the body, the individual soul, the senses, the efforts and the supersoul. According to Monier-Williams, in the Nyāya causation theory from Sutra I.2.I,2 in the Vaisheshika philosophy, from causal non-existence is effectual non-existence; but, not effectual non-existence from causal non-existence. A cause precedes an effect. With a threads and cloth metaphors, three causes are: Co-inherence cause: resulting from substantial contact, 'substantial causes', threads are substantial to cloth, corresponding to Aristotle's material cause. Non-substantial cause: Methods putting threads into cloth, corresponding to Aristotle's formal cause. Instrumental cause: Tools to make the cloth, corresponding to Aristotle's efficient cause. Monier-Williams also proposed that Aristotle's and the Nyaya's causality are considered conditional aggregates necessary to man's productive work. Buddhist philosophy Karma is the causality principle focusing on 1) causes, 2) actions, 3) effects, where it is the mind's phenomena that guide the actions that the actor performs. Buddhism trains the actor's actions for continued and uncontrived virtuous outcomes aimed at reducing suffering. This follows the Subject–verb–object structure. The general or universal definition of pratityasamutpada (or "dependent origination" or "dependent arising" or "interdependent co-arising") is that everything arises in dependence upon multiple causes and conditions; nothing exists as a singular, independent entity. 
A traditional example in Buddhist texts is of three sticks standing upright and leaning against each other and supporting each other. If one stick is taken away, the other two will fall to the ground. Causality in the Chittamatrin Buddhist school approach, Asanga's () mind-only Buddhist school, asserts that objects cause consciousness in the mind's image. Because causes precede effects, which must be different entities, then subject and object are different. For this school, there are no objects which are entities external to a perceiving consciousness. The Chittamatrin and the Yogachara Svatantrika schools accept that there are no objects external to the observer's causality. This largely follows the Nikayas approach. The Vaibhashika () is an early Buddhist school which favors direct object contact and accepts simultaneous cause and effects. This is based in the consciousness example which says, intentions and feelings are mutually accompanying mental factors that support each other like poles in tripod. In contrast, simultaneous cause and effect rejectors say that if the effect already exists, then it cannot effect the same way again. How past, present and future are accepted is a basis for various Buddhist school's causality viewpoints. All the classic Buddhist schools teach karma. "The law of karma is a special instance of the law of cause and effect, according to which all our actions of body, speech, and mind are causes and all our experiences are their effects." Western philosophy Aristotelian Aristotle identified four kinds of answer or explanatory mode to various "Why?" questions. He thought that, for any given topic, all four kinds of explanatory mode were important, each in its own right. As a result of traditional specialized philosophical peculiarities of language, with translations between ancient Greek, Latin, and English, the word 'cause' is nowadays in specialized philosophical writings used to label Aristotle's four kinds. In ordinary language, the word 'cause' has a variety of meanings, the most common of which refers to efficient causation, which is the topic of the present article. Material cause, the material whence a thing has come or that which persists while it changes, as for example, one's mother or the bronze of a statue (see also substance theory). Formal cause, whereby a thing's dynamic form or static shape determines the thing's properties and function, as a human differs from a statue of a human or as a statue differs from a lump of bronze. Efficient cause, which imparts the first relevant movement, as a human lifts a rock or raises a statue. This is the main topic of the present article. Final cause, the criterion of completion, or the end; it may refer to an action or to an inanimate process. Examples: Socrates takes a walk after dinner for the sake of his health; earth falls to the lowest level because that is its nature. Of Aristotle's four kinds or explanatory modes, only one, the 'efficient cause' is a cause as defined in the leading paragraph of this present article. The other three explanatory modes might be rendered material composition, structure and dynamics, and, again, criterion of completion. The word that Aristotle used was . For the present purpose, that Greek word would be better translated as "explanation" than as "cause" as those words are most often used in current English. Another translation of Aristotle is that he meant "the four Becauses" as four kinds of answer to "why" questions. 
Aristotle assumed efficient causality as referring to a basic fact of experience, not explicable by, or reducible to, anything more fundamental or basic. In some works of Aristotle, the four causes are listed as (1) the essential cause, (2) the logical ground, (3) the moving cause, and (4) the final cause. In this listing, a statement of essential cause is a demonstration that an indicated object conforms to a definition of the word that refers to it. A statement of logical ground is an argument as to why an object statement is true. These are further examples of the idea that a "cause" in general in the context of Aristotle's usage is an "explanation". The word "efficient" used here can also be translated from Aristotle as "moving" or "initiating". Efficient causation was connected with Aristotelian physics, which recognized the four elements (earth, air, fire, water), and added the fifth element (aether). Water and earth by their intrinsic property gravitas or heaviness intrinsically fall toward, whereas air and fire by their intrinsic property levitas or lightness intrinsically rise away from, Earth's center—the motionless center of the universe—in a straight line while accelerating during the substance's approach to its natural place. As air remained on Earth, however, and did not escape Earth while eventually achieving infinite speed—an absurdity—Aristotle inferred that the universe is finite in size and contains an invisible substance that holds planet Earth and its atmosphere, the sublunary sphere, centered in the universe. And since celestial bodies exhibit perpetual, unaccelerated motion orbiting planet Earth in unchanging relations, Aristotle inferred that the fifth element, aither, that fills space and composes celestial bodies intrinsically moves in perpetual circles, the only constant motion between two points. (An object traveling a straight line from point A to B and back must stop at either point before returning to the other.) Left to itself, a thing exhibits natural motion, but can—according to Aristotelian metaphysics—exhibit enforced motion imparted by an efficient cause. The form of plants endows plants with the processes nutrition and reproduction, the form of animals adds locomotion, and the form of humankind adds reason atop these. A rock normally exhibits natural motion—explained by the rock's material cause of being composed of the element earth—but a living thing can lift the rock, an enforced motion diverting the rock from its natural place and natural motion. As a further kind of explanation, Aristotle identified the final cause, specifying a purpose or criterion of completion in light of which something should be understood. Aristotle himself explained, Aristotle further discerned two modes of causation: proper (prior) causation and accidental (chance) causation. All causes, proper and accidental, can be spoken as potential or as actual, particular or generic. The same language refers to the effects of causes, so that generic effects are assigned to generic causes, particular effects to particular causes, and actual effects to operating causes. Averting infinite regress, Aristotle inferred the first mover—an unmoved mover. The first mover's motion, too, must have been caused, but, being an unmoved mover, must have moved only toward a particular goal or desire. Pyrrhonism While the plausibility of causality was accepted in Pyrrhonism, it was equally accepted that it was plausible that nothing was the cause of anything. 
Middle Ages In line with Aristotelian cosmology, Thomas Aquinas posed a hierarchy prioritizing Aristotle's four causes: "final > efficient > material > formal". Aquinas sought to identify the first efficient cause—now simply first cause—as everyone would agree, said Aquinas, to call it God. Later in the Middle Ages, many scholars conceded that the first cause was God, but explained that many earthly events occur within God's design or plan, and thereby scholars sought freedom to investigate the numerous secondary causes. After the Middle Ages For Aristotelian philosophy before Aquinas, the word cause had a broad meaning. It meant 'answer to a why question' or 'explanation', and Aristotelian scholars recognized four kinds of such answers. With the end of the Middle Ages, in many philosophical usages, the meaning of the word 'cause' narrowed. It often lost that broad meaning, and was restricted to just one of the four kinds. For authors such as Niccolò Machiavelli, in the field of political thinking, and Francis Bacon, concerning science more generally, Aristotle's moving cause was the focus of their interest. A widely used modern definition of causality in this newly narrowed sense was assumed by David Hume. He undertook an epistemological and metaphysical investigation of the notion of moving cause. He denied that we can ever perceive cause and effect, except by developing a habit or custom of mind where we come to associate two types of object or event, always contiguous and occurring one after the other. In Part III, section XV of his book A Treatise of Human Nature, Hume expanded this to a list of eight ways of judging whether two things might be cause and effect. The first three: "The cause and effect must be contiguous in space and time." "The cause must be prior to the effect." "There must be a constant union betwixt the cause and effect. 'Tis chiefly this quality, that constitutes the relation." And then additionally there are three connected criteria which come from our experience and which are "the source of most of our philosophical reasonings": And then two more: In 1949, physicist Max Born distinguished determination from causality. For him, determination meant that actual events are so linked by laws of nature that certainly reliable predictions and retrodictions can be made from sufficient present data about them. He describes two kinds of causation: nomic or generic causation and singular causation. Nomic causality means that cause and effect are linked by more or less certain or probabilistic general laws covering many possible or potential instances; this can be recognized as a probabilized version of Hume's criterion 3. An occasion of singular causation is a particular occurrence of a definite complex of events that are physically linked by antecedence and contiguity, which may be recognized as criteria 1 and 2. 
See also
General: Catch-22 (logic), Causal research, Causal inference, Causality (book), Causation (sociology), Cosmological argument, Domino effect, Sequence of events
Mathematics: Causal filter, Causal system, Causality conditions, Chaos theory
Physics: Anthropic principle, Arrow of time, Butterfly effect, Chain reaction, Delayed choice quantum eraser, Feedback, Grandfather paradox, Quantum Zeno effect, Retrocausality, Schrödinger's cat, Wheeler–Feynman absorber theory
Philosophy: Aetiology, Arche (ἀρχή), Causa sui, Chance (philosophy), Chicken or the egg, Condition of possibility, Determinism, Mill's Methods, Newcomb's paradox, Non sequitur (logic), Ontological paradox, Post hoc ergo propter hoc, Predestination paradox, Proposed proofs of universal validity (principle of causality), Proximate and ultimate causation, Quidditism, Supervenience, Philosophy of mind, Synchronicity
Statistics: Causal loop diagram, Causal Markov condition, Correlation does not imply causation, Experimental design, Granger causality, Linear regression, Randomness, Causal model (structural causal model), Rubin causal model, Validity (statistics)
Psychology and medicine: Adverse effect, Clinical trial, Force dynamics, Iatrogenesis, Nocebo, Placebo, Scientific control, Suggestibility, Suggestion
Pathology and epidemiology: Causal inference, Epidemiology, Etiology, Molecular pathology, Molecular pathological epidemiology, Pathogenesis, Pathology
Sociology and economics: Instrumental variable, Root cause analysis, Self-fulfilling prophecy, Supply and demand, Unintended consequence, Virtuous circle and vicious circle
Environmental issues: Causes of global warming, Causes of deforestation, Causes of land degradation, Causes of soil contamination, Causes of habitat fragmentation
References
Further reading
Arthur Danto (1965). Analytical Philosophy of History. Cambridge University Press.
Idem, 'Complex Events', Philosophy and Phenomenological Research, 30 (1969), 66–77.
Idem, 'On Explanations in History', Philosophy of Science, 23 (1956), 15–30.
Green, Celia (2003). The Lost Cause: Causation and the Mind-Body Problem. Oxford: Oxford Forum. Includes three chapters on causality at the microlevel in physics.
Hewitson, Mark (2014). History and Causality. Palgrave Macmillan.
Little, Daniel (1998). Microfoundations, Method and Causation: On the Philosophy of the Social Sciences. New York: Transaction.
Lloyd, Christopher (1993). The Structures of History. Oxford: Blackwell.
Idem (1986). Explanation in Social History. Oxford: Blackwell.
Maurice Mandelbaum (1977). The Anatomy of Historical Knowledge. Baltimore: Johns Hopkins Press.
Judea Pearl (2000). Causality: Models of Reasoning and Inference. Cambridge University Press; 2nd edition, 2009.
Rosenberg, M. (1968). The Logic of Survey Analysis. New York: Basic Books, Inc.
Spirtes, Peter, Clark Glymour and Richard Scheines. Causation, Prediction, and Search. MIT Press.
University of California technical reports, including Judea Pearl's articles between 1984 and 1998.
Miguel Espinoza, Théorie du déterminisme causal, L'Harmattan, Paris, 2006.
External links Causation – Internet Encyclopedia of Philosophy Metaphysics of Science – Internet Encyclopedia of Philosophy Causal Processes at the Stanford Encyclopedia of Philosophy The Art and Science of Cause and Effect – A slide show and tutorial lecture by Judea Pearl Donald Davidson: Causal Explanation of Action – The Internet Encyclopedia of Philosophy Causal inference in statistics: An overview – By Judea Pearl (September 2009) An R implementation of causal calculus TimeSleuth – A tool for discovering causality Concepts in epistemology Metaphysical properties Conditionals Time Philosophy of science Scientific method
Causality
Physics,Mathematics
12,029
76,898,727
https://en.wikipedia.org/wiki/K%C5%8Dji%20%28food%29
Kōji (ニホンコウジカビ, 日本麹黴, ‘nihon kōji kabi’) refers to various molds of the genus Aspergillus sp., which are traditionally used in East Asian cuisine for the fermentation of food. In Japanese, kōji refers to both the Aspergillus starter culture and mixtures of Aspergillus with wheat and soybean meal. It can be fried and eaten directly or processed to a sauce. Characteristics Various types of kōji are used, including yellow, black, and white. The kōji is stored for two to three days at 30 °C under high humidity to allow A. oryzae to grow. In this process, the starch from cereals such as wheat, buckwheat or barley as well as from sweet potato is split into glucose, creating a sweet taste. Due to the amino acids glutamic acid and to a lesser extent also aspartic acid split off from the proteins during fermentation, a strong umami taste is created on the human tongue when consumed. Depending on the Aspergillus used, culture substrate and culture conditions (temperature, pH value, salt content, humidity), different products are created in terms of composition, flavour and odour. Kōji can be freeze-dried and crushed to produce spores. Dried kōji-spores can be stored and transported light-protected at room temperature. Yellow kōji Yellow kōji is used, among other things, for the production of soy sauce, miso, sake, tsukemono, jiang, makgeolli, meju, tapai, kōji-amazake, rice vinegar, mirin, shio koji and natto. Typically, for the production of soy sauce (shoyu), soybeans and sometimes also wheat are swollen in water, steamed, and possibly mixed with wheat bran roasted at 160–180 °C and ground. The enrichment with kōji creates a moist mash. There are three Aspergillus species that are used as yellow kōji: Aspergillus flavus var. oryzae (キコウジキン / 黄麹菌 ‘ki kōji-kin’). The growth range of this species includes pH values from below 2 to above 8, a temperature optimum of 32 – 36 °C, a temperature minimum of 7 – 9 °C and a temperature maximum of 45 – 47 °C. The colony color is initially yellow-green, later more or less brown. Aspergillus sojae (醤油麹菌 ‘shōyu-kōji-kin’) Aspergillus tamarii A. oryzae has three α-amylase genes, which allows it to break down starch relatively quickly into glucose. In contrast, A. sojae has only one α-amylase gene under a weak promoter and the CAAT box has a gene expression attenuating mutation (CCAAA instead of CCAAT), but has a higher enzyme activity of endopolygalacturonase and glutaminase. A too rapid release of glucose from starch at the beginning of fermentation inhibits the growth of the microorganisms in the maturation phase. For the breakdown of proteins to amino acids, A. oryzae strain RIB40 has 65 endopeptidase genes and 69 exopeptidase genes, and A. sojae strain SMF134 has 83 endopeptidase genes and 67 exopeptidase genes. Similarly, starch-degrading enzymes (glucosidases) are more strongly expressed and protein-degrading enzymes (proteases) less strongly expressed in A. oryzae, and the odour profiles differ significantly. A. sojae has 10 glutaminase genes. Various mutants of A. oryzae with altered properties were generated by irradiation or by the CRISPR/CAS method. Similarly, mutants of A. sojae with altered properties were generated by a variant of the CRISPR/Cas method or chemical mutagenesis. Black & white kōji Black kōji produces citric acid during fermentation, which inhibits the growth of unwanted microorganisms. It is typically used for the production of Awamori. 
There are three Aspergillus species that are used as black kōji: Aspergillus luchuensis (synonyms Aspergillus awamori, Aspergillus inuii, Aspergillus nakazawai and Aspergillus coreanus, クロコウジキン / 黒麹菌 'kuro kōji-kin'), Aspergillus niger (synonyms Aspergillus batatae, Aspergillus aureus or Aspergillus foetidus, Aspergillus miyakoensis and Aspergillus usamii including A. usamii mut. shirousamii) and Aspergillus tubingensis (synonyms Aspergillus saitoi and A. saitoi var. kagoshimaensis). White kōji (Aspergillus kawachii) is an albino variant of Aspergillus luchuensis. It is typically used in the production of Shochu. History The process of making rice wine and fermented bean paste using moulds was first documented in the 4th century B.C. In 725 AD the Japanese book Harima no Kuni Fudoki ('Geography and Culture of the Harima Province') first mentioned kōji outside of China and described how the Japanese produced kōji with fungal spores from the air. Around the 10th century, the kōji production method underwent a change and moved from the natural sowing system in rice to the so-called tomodane. This involved cultivating kōji until spores were released and using those spores to start a new batch of production. In the Meiji era, the integration of new microbiological techniques made it possible to isolate and propagate kōji in pure cultures for the first time. These advances facilitated the improvement of the quality of the fungal cultures and the selection of desirable characteristics. It later became known that kōji comprises different species of Aspergillus. Aspergillus oryzae was first described in 1878 as Eurotium oryzae Ahlb. and in 1883 as Aspergillus oryzae (Ahlb.) Cohn. Aspergillus luchuensis was first described in 1901 by Tamaki Inui at the University of Tokyo. Genichiro Kawachi isolated a colourless mutant of A. luchuensis (black kōji) in 1918 and named it Aspergillus kawachii (white kōji). Aspergillus sojae was first described as a distinct species in kōji in 1944. Initially, Aspergillus sojae was considered a variety of Aspergillus parasiticus because, unlike the other fungi of kōji, it had never been isolated from the soil. Literature H. Kitagaki: Medical Application of Substances Derived from Non-Pathogenic Fungi and -Containing. In: Journal of Fungi, vol. 7, no. 4, March 2021, PMID 33804991. References Foods Japanese cuisine Fermentation in food processing
Kōji (food)
Chemistry
1,532
2,945,235
https://en.wikipedia.org/wiki/Infectivity
In epidemiology, infectivity is the ability of a pathogen to establish an infection. More specifically, infectivity is the extent to which the pathogen can enter, survive, and multiply in a host. It is measured by the ratio of the number of people who become infected to the total number exposed to the pathogen. Infectivity has been shown to positively correlate with virulence in plants. This means that as a pathogen's ability to infect a greater number of hosts increases, so does the level of harm it brings to the host. A pathogen's infectivity is different from its transmissibility, which refers to a pathogen's capacity to pass from one organism to another. See also Basic reproduction number (basic reproductive rate, basic reproductive ratio, R0, or r nought) References Epidemiology
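The ratio described in the article above amounts to a simple proportion. The following Python sketch illustrates the calculation; the example numbers are purely hypothetical and are not taken from any study.

```python
def infectivity(n_infected: int, n_exposed: int) -> float:
    """Proportion of exposed individuals in whom the pathogen
    established an infection (infected / exposed)."""
    if n_exposed == 0:
        raise ValueError("at least one exposed individual is required")
    return n_infected / n_exposed

# Hypothetical outbreak data: 18 infections among 60 exposed people.
print(infectivity(18, 60))  # 0.3, i.e. 30% of exposures led to infection
```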
Infectivity
Environmental_science
178
39,227,870
https://en.wikipedia.org/wiki/List%20of%20bioacoustics%20software
The following is a list of some referenced bioacoustics software. References See also Free software Open-Source Software General Public License (GPL) Bioacoustics Software
List of bioacoustics software
Technology
37
1,064,587
https://en.wikipedia.org/wiki/Bigram
A bigram or digram is a sequence of two adjacent elements from a string of tokens, which are typically letters, syllables, or words. A bigram is an n-gram for n=2. The frequency distribution of every bigram in a string is commonly used for simple statistical analysis of text in many applications, including in computational linguistics, cryptography, and speech recognition. Gappy bigrams or skipping bigrams are word pairs which allow gaps (perhaps avoiding connecting words, or allowing some simulation of dependencies, as in a dependency grammar). Applications Bigrams, along with other n-grams, are used in most successful language models for speech recognition. Bigram frequency attacks can be used in cryptography to solve cryptograms. See frequency analysis. Bigram frequency is one approach to statistical language identification. Some activities in logology or recreational linguistics involve bigrams. These include attempts to find English words beginning with every possible bigram, or words containing a string of repeated bigrams, such as logogogue. Bigram frequency in the English language The frequency of the most common letter bigrams in a large English corpus is:
th 3.56%   of 1.17%   io 0.83%
he 3.07%   ed 1.17%   le 0.83%
in 2.43%   is 1.13%   ve 0.83%
er 2.05%   it 1.12%   co 0.79%
an 1.99%   al 1.09%   me 0.79%
re 1.85%   ar 1.07%   de 0.76%
on 1.76%   st 1.05%   hi 0.76%
at 1.49%   to 1.05%   ri 0.73%
en 1.45%   nt 1.04%   ro 0.73%
nd 1.35%   ng 0.95%   ic 0.70%
ti 1.34%   se 0.93%   ne 0.69%
es 1.34%   ha 0.93%   ea 0.69%
or 1.28%   as 0.87%   ra 0.69%
te 1.20%   ou 0.87%   ce 0.65%
See also Digraph (orthography) Letter frequency Sørensen–Dice coefficient References Formal languages Classical cryptography Natural language processing
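The frequency counting described above can be illustrated with a short script. The following Python sketch tallies letter bigrams within the words of a sample sentence; the sample text and the frequencies it prints are illustrative only and are unrelated to the corpus statistics quoted in the table.

```python
import re
from collections import Counter

def letter_bigrams(text: str):
    """Yield adjacent letter pairs within each word (case-insensitive)."""
    for word in re.findall(r"[a-z]+", text.lower()):
        for first, second in zip(word, word[1:]):
            yield first + second

sample = "The theory of bigrams rests on counting adjacent letter pairs."
counts = Counter(letter_bigrams(sample))
total = sum(counts.values())

# Show the most common bigrams in the sample with their relative frequencies.
for bigram, count in counts.most_common(5):
    print(f"{bigram}: {count / total:.2%}")
```

The same counting over a large corpus, rather than a single sentence, is what produces tables like the one above.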
Bigram
Mathematics,Technology
478
71,907,438
https://en.wikipedia.org/wiki/Spin%20chain
A spin chain is a type of model in statistical physics. Spin chains were originally formulated to model magnetic systems, which typically consist of particles with magnetic spin located at fixed sites on a lattice. A prototypical example is the quantum Heisenberg model. Interactions between the sites are modelled by operators which act on two different sites, often neighboring sites. They can be seen as a quantum version of statistical lattice models, such as the Ising model, in the sense that the parameter describing the spin at each site is promoted from a variable taking values in a discrete set (typically $\{+1, -1\}$, representing 'spin up' and 'spin down') to a variable taking values in a vector space (typically the spin-1/2 or two-dimensional representation of $\mathfrak{sl}_2(\mathbb{C})$). History The prototypical example of a spin chain is the Heisenberg model, described by Werner Heisenberg in 1928. This models a one-dimensional lattice of fixed particles with spin 1/2. A simple version (the antiferromagnetic XXX model) was solved, that is, the spectrum of the Hamiltonian of the Heisenberg model was determined, by Hans Bethe using the Bethe ansatz. Now the term Bethe ansatz is used generally to refer to many ansatzes used to solve exactly solvable problems in spin chain theory such as for the other variations of the Heisenberg model (XXZ, XYZ), and even in statistical lattice theory, such as for the six-vertex model. Another spin chain with physical applications is the Hubbard model, introduced by John Hubbard in 1963. This model was shown to be exactly solvable by Elliott Lieb and Fa-Yueh Wu in 1968. Another example of (a class of) spin chains is the Gaudin model, described and solved by Michel Gaudin in 1976. Mathematical description The lattice is described by a graph $G$ with vertex set $V$ and edge set $E$. The model has an associated Lie algebra, typically $\mathfrak{sl}_2(\mathbb{C})$. More generally, this Lie algebra can be taken to be any complex, finite-dimensional semi-simple Lie algebra $\mathfrak{g}$. More generally still it can be taken to be an arbitrary Lie algebra. Each vertex $v$ has an associated representation of the Lie algebra $\mathfrak{g}$, labelled $V_v$. This is a quantum generalization of statistical lattice models, where each vertex has an associated 'spin variable'. The Hilbert space for the whole system, which could be called the configuration space, is the tensor product of the representation spaces at each vertex: $\mathcal{H} = \bigotimes_{v \in V} V_v$. A Hamiltonian is then an operator on the Hilbert space. In the theory of spin chains, there are possibly many Hamiltonians which mutually commute. This allows the operators to be simultaneously diagonalized. There is a notion of exact solvability for spin chains, often stated as determining the spectrum of the model. In precise terms, this means determining the simultaneous eigenvectors of the Hilbert space for the Hamiltonians of the system as well as the eigenvalues of each eigenvector with respect to each Hamiltonian. Examples Spin 1/2 XXX model in detail The prototypical example, and a particular example of the Heisenberg spin chain, is known as the spin 1/2 Heisenberg XXX model. The graph is the periodic 1-dimensional lattice with $N$ sites. Explicitly, this is given by $V = \{1, \dots, N\}$ and $E = \{\{n, n+1\}\}$, with the element $N+1$ identified with $1$. The associated Lie algebra is $\mathfrak{sl}_2(\mathbb{C})$. At site $n$ there is an associated Hilbert space $\mathcal{H}_n$ which is isomorphic to the two-dimensional representation of $\mathfrak{sl}_2(\mathbb{C})$ (and therefore further isomorphic to $\mathbb{C}^2$). The Hilbert space of system configurations is $\mathcal{H} = \bigotimes_{n=1}^{N} \mathcal{H}_n$, of dimension $2^N$.
Given an operator $A$ on the two-dimensional representation $\mathbb{C}^2$ of $\mathfrak{sl}_2(\mathbb{C})$, denote by $A_n$ the operator on $\mathcal{H}$ which acts as $A$ on $\mathcal{H}_n$ and as identity on the other $\mathcal{H}_m$ with $m \neq n$. Explicitly, it can be written $A_n = 1 \otimes \cdots \otimes 1 \otimes A \otimes 1 \otimes \cdots \otimes 1$, with $A$ in the $n$-th factor, where the 1 denotes identity. The Hamiltonian is essentially, up to an affine transformation, $H = \sum_{n=1}^{N} \sigma^a_n \sigma^a_{n+1}$, with implied summation over the index $a$, and where $\sigma^a$ for $a = x, y, z$ are the Pauli matrices. The Hamiltonian has symmetry under the action of the three total spin operators $S^a = \tfrac{1}{2} \sum_{n=1}^{N} \sigma^a_n$. The central problem is then to determine the spectrum (eigenvalues and eigenvectors in $\mathcal{H}$) of the Hamiltonian. This is solved by the Bethe ansatz, discovered by Hans Bethe, and by its algebraic formulation, developed further by Ludwig Faddeev and his school. List of spin chains Quantum Heisenberg model Inozemtsev model Haldane–Shastry model Quantum Gaudin model See also Lattice model (physics) Exactly solvable References External links Spin chain in nLab Spin models Quantum magnetism Quantum lattice models Magnetic ordering
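For small chains the spectrum discussed above can be computed directly by building the operators as explicit matrices. The sketch below, in Python with NumPy, assumes the periodic spin-1/2 XXX Hamiltonian written with Pauli matrices as above (ignoring the affine shift) and diagonalizes it by brute force; this is only practical for small N, since the dimension grows as 2^N.

```python
import numpy as np

# Pauli matrices and the 2x2 identity.
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def site_operator(op, n, N):
    """Embed a single-site operator `op` at site n (0-indexed) of an N-site chain,
    acting as identity on every other tensor factor."""
    factors = [op if m == n else id2 for m in range(N)]
    result = factors[0]
    for f in factors[1:]:
        result = np.kron(result, f)
    return result

def xxx_hamiltonian(N):
    """Periodic spin-1/2 XXX Hamiltonian: sum over sites of sx_n sx_{n+1} + sy_n sy_{n+1} + sz_n sz_{n+1}."""
    dim = 2 ** N
    H = np.zeros((dim, dim), dtype=complex)
    for n in range(N):            # site N-1 couples back to site 0 (periodic chain)
        m = (n + 1) % N
        for s in (sx, sy, sz):
            H += site_operator(s, n, N) @ site_operator(s, m, N)
    return H

N = 4
H = xxx_hamiltonian(N)
energies = np.linalg.eigvalsh(H)                         # H is Hermitian
Sz_total = sum(site_operator(sz, n, N) for n in range(N)) / 2

# The total spin operators commute with H, reflecting the symmetry noted above.
print("lowest energies:", np.round(energies[:4], 6))
print("[H, Sz_total] = 0:", np.allclose(H @ Sz_total - Sz_total @ H, 0))
```

Exact diagonalization of this kind is a standard sanity check against Bethe-ansatz results for short chains, even though the ansatz itself is what makes the model tractable at large N.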
Spin chain
Physics,Chemistry,Materials_science,Engineering
916
809,198
https://en.wikipedia.org/wiki/Cock%20and%20ball%20torture
Cock and ball torture (CBT) is a sexual activity involving the application of pain or constriction to the male genitals. This may involve directly painful activities, such as genital piercing, wax play, genital spanking, squeezing, ball-busting, genital flogging, urethral play, tickle torture, erotic electrostimulation, kneeing, or kicking. The recipient of such activities may receive direct physical pleasure via masochism, emotional pleasure through erotic humiliation, or knowledge that the play is pleasing to a sadistic dominant. Many of these practices carry significant health risks. Devices and practices Similar to many other sexual activities, CBT can be performed using toys and devices to make the penis and testicles more easily accessible for attack or foreplay purposes. Ball stretcher A ball stretcher is a sex toy that is used to elongate the scrotum and provide a feeling of weight pulling the testicles away from the body. This can be particularly enjoyable for the wearer as it can make an orgasm more intense, as testicles are prevented from moving up. Intended to make one's testicles hang lower than normal (temporarily or, if used regularly for extended periods of time, permanently), this sex toy can be potentially harmful to the genitals as the circulation of blood can be easily cut off if over-tightened. Most ball stretchers are leather, rubber, or stainless steel. Leather ones usually are fastened with snaps, wrapping around the scrotum with the testicles hanging below. (See "Testicle Cuffs" below.) Rubber ones stretch enough for the testicles to pass through, and can be either a tube or a ring, where multiple rings can be added to create the desired length of "stretch". The length of a stretcher can range from . Steel stretchers often use weight in addition to, or instead of, a physically restricted tube, to hold the testicles away from the torso. Steel weights need to come apart in order to fasten them securely around the scrotum. The removable segment might be held in place by friction, magnets, a set screw, or cap screws, and can range in weight from 150 grams to 1200 g or more. A more dangerous type of ball stretcher can be home-made simply by wrapping rope or string around one's scrotum until it is eventually stretched to the desired length. Ball crusher A ball crusher is a device made from either metal or often clear acrylic that squeezes the testicles slowly by turning sets of nuts or screws. How tight it is clamped depends on the pain tolerance of the person it is used on. A ball crusher is often combined with bondage, either with a partner or by oneself, or with other types of torture or impact play. Parachute A parachute is a small collar, usually made from leather, which fastens around the scrotum, and from which weights can be hung. It is conical in shape, with three or four short chains hanging beneath, to which weights can be attached. Used as part of cock and ball torture within a BDSM relationship, the parachute provides a constant drag, and a squeezing effect on the testicles. Moderate weights of 3–5 kg can be suspended, especially during bondage, though occasionally much heavier weights are used. Smaller weights can be used when the participant wearing it is free to move; the swinging effect of the weight can restrict sudden movements, as well as providing a visual stimulus for the dominant partner. Humbler A humbler is a BDSM physical restraint device, a cock-and-ball bondage toy used to restrict the movement of a submissive participant in a BDSM scene. 
It consists of a testicle cuff device, typically a ring, that clamps around the base of the scrotum while it is drawn back between the legs. This is mounted in the center of a bar or pair of rods that pass behind the thighs at the base of the buttocks. As a result, the wearer is forced to keep their legs folded forward, as any attempt to straighten them even slightly pulls hard on the scrotum, causing anything from considerable discomfort to extreme pain. In this way the wearer is prevented from standing up straight and has to stay bent over or crawl on all fours. Testicle cuffs A testicle cuff is a ring-shaped device that can be placed around the scrotum between the body and the testicles. When it is closed it prevents the testicles from passing through. A common type of testicle cuff consists of two connected cuffs, one around the scrotum and the other around the base of the penis. Testicle cuffs are one of the many devices that are used to restrain the male genitalia. A standard padlock, which cannot be removed without its key, may also be locked around the scrotum. Some passive participants enjoy the feeling of being "owned", while dominant individuals enjoy the sense of "owning" their partners. Requiring such an individual wear testicle cuffs symbolizes that their sexual organs belong to their partner. There is a level of erotic humiliation involved, through which they find sexual arousal. The cuffs may also form part of a sexual fetish of the wearer or their partner. However, these are extreme uses of testicle cuffs. More conventionally, the device pulls down the testicles and keeps them there during stimulation, which has a number of benefits: Making the penis appear longer. Pulling the testicles down and away from the base of the penis stretches the skin over the base of the penis and pubic bone, exposing the additional few centimetres of penile shaft that is normally hidden from view. Improving sexual arousal. While some participants may be aroused by the feeling of being "owned", the physical feeling of stretching the ligaments that suspend the testicles has an effect similar to the more common practice of stretching one's legs and pointing the toes. Preventing the testicles from lifting up so far that they become lodged under the skin immediately adjacent to the base of the penis, a condition which can be very uncomfortable, especially if the testicle is then squashed by the slap of skin during thrusting in sexual intercourse. Delaying or intensifying ejaculation by preventing the testicles from rising normally to the "point of no return". It is much harder to reach an orgasm. Cock harness A cock harness is a device designed to be worn around the penis and scrotum. Its function is similar to that of a cock ring. Early cock harnesses were used to prevent erections in a variety of ways; many of them caused pain in the penis, typically using sharp projections from cock rings or sometimes via electric shocks. These devices were designed for medical use, to prevent ejaculation while sleeping which was believed to cause "seminal weakness" and other physical problems. An example was the jugum penis. Modern cock harnesses are penile sex toys and their use is associated with BDSM activities. The Gates of Hell is a male chastity device made up of multiple cock rings that can be used for CBT. Kali's Teeth is a metal bracelet with interior spikes that closes around the penis and can be used for preventing or punishing erections. 
Leather penis sheaths lined with internal spikes can be used for similar purposes. Ball busting "Ball busting" is the practice of kicking or kneeing participants in the testicles. It carries several medical risks, including the danger of testicular rupture from blunt trauma. In Japan Tamakeri (玉蹴り) (lit. ball kicking) is a sexual fetish and subgenre of BDSM within which a man's testicles are abused. The genre is also referred to as ballbusting ("bb" for short). Tamakeri is the Japanese term, but it is used by many non-Japanese people to describe media where Asian people—mainly women—are participating in it. The dynamics of tamakeri consist of a masochist having their testicles hurt by a sadist. The fetish is popular among heterosexual and homosexual men and women. Denkianma (電気按摩) (lit. "electric massage") is a popular Japanese prank played between two people where one person puts their foot into the genital area of the other and shakes it in a vibrating motion. Often this is done by grabbing the other person's feet, raising them, and then placing one's own foot on their crotch and vibrating it. This is often done between school aged boys as a prank similar to kancho and could be seen by a western audience as a type of bullying. In 2006, Frito Lay released a special, Taitsukun-themed edition of Doritos chips, that referenced denki anma. Safety Loss of blood flow is one of the greatest risks in cock and ball torture and may cause irreversible damage. Bleeding is an indicator of unsafe behavior. Because numbness may result from circulation problems in the affected member, the level of pain is not an indicator of a problem and signs of danger include numbness or loss of color and edemas. Bondage in which the testicles are tied to another object is especially dangerous, increasing the risk of damaging the testicles through excessive tension or pulling. The most serious injuries are testicular rupture, testicular torsion and testicular avulsion, which are medical emergencies that require urgent medical attention. See also Breast torture Chastity cage Chastity piercing Forced orgasm Groin attack Penile injury Urethral sounding Notes References Further reading Hardy Haberman, Fetish Diva Midori. The Family Jewels: A Guide to Male Genital Play and Torment. Greenery Press, 2001. . BDSM activities Penis Testicle Sexual acts Paraphilias
Cock and ball torture
Biology
2,014
4,687,085
https://en.wikipedia.org/wiki/Streamflow
Streamflow, or channel runoff, is the flow of water in streams and other channels, and is a major element of the water cycle. It is one runoff component, the movement of water from the land to waterbodies, the other component being surface runoff. Water flowing in channels comes from surface runoff from adjacent hillslopes, from groundwater flow out of the ground, and from water discharged from pipes. The discharge of water flowing in a channel is measured using stream gauges or can be estimated by the Manning equation. The record of flow over time is called a hydrograph. Flooding occurs when the volume of water exceeds the capacity of the channel. Role in the water cycle Streams play a critical role in the hydrologic cycle that is essential for all life on Earth. A diversity of biological species, from unicellular organisms to vertebrates, depend on flowing-water systems for their habitat and food resources. Rivers are major aquatic landscapes for all manner of plants and animals. Rivers even help keep the aquifers underground full of water by discharging water downward through their streambeds. In addition to that, the oceans stay full of water because rivers and runoff continually refresh them. Streamflow is the main mechanism by which water moves from the land to the oceans or to basins of interior drainage. Sources Stream discharge is derived from four sources: channel precipitation, overland flow, interflow, and groundwater. Channel precipitation is the moisture falling directly on the water surface, and in most streams, it adds very little to discharge. Groundwater enters the streambed where the channel intersects the water table, providing a steady supply of water, termed baseflow, during both dry and rainy periods. Because of the large supply of groundwater available to the streams and the slowness of the response of groundwater to precipitation events, baseflow changes only gradually over time, and it is rarely the main cause of flooding. However, it does contribute to flooding by providing a stage onto which runoff from other sources is superimposed. Interflow is water that infiltrates the soil and then moves laterally to the stream channel in the zone above the water table. Much of this water is transmitted within the soil, some of it moving within the horizons. Next to baseflow, it is the most important source of discharge for streams in forested lands. Overland flow in heavily forested areas makes negligible contributions to streamflow. In dry regions and in cultivated and urbanized areas, overland flow or surface runoff is usually a major source of streamflow. Overland flow is stormwater runoff that begins as a thin layer of water that moves very slowly (typically less than 0.25 feet per second) over the ground. Under intensive rainfall and in the absence of barriers such as rough ground, vegetation, and absorbing soil, it can mount up, rapidly reaching stream channels in minutes and causing sudden rises in discharge. The quickest response times between rainfall and streamflow occur in urbanized areas where yard drains, street gutters, and storm sewers collect overland flow and route it to streams straightaway. Runoff velocities in storm sewer pipes can reach 10 to 15 feet per second. Mechanisms that cause changes in streamflow Rivers are always moving, which is good for the environment, as stagnant water does not stay fresh and inviting very long.
There are many factors, both natural and human-induced, that cause rivers to continuously change: Natural mechanisms Runoff from rainfall and snowmelt Evaporation from soil and surface-water bodies Transpiration by vegetation Ground-water discharge from aquifers Ground-water recharge from surface-water bodies Sedimentation of lakes and wetlands Formation or dissipation of glaciers, snowfields, and permafrost Human-induced mechanisms Surface-water withdrawals and transbasin diversions River-flow regulation for hydropower and navigation Construction, removal, and sedimentation of reservoirs and stormwater retention ponds Stream channelization and levee construction Drainage or restoration of wetlands Land use changes such as urbanization that alter rates of erosion, infiltration, overland flow, or evapotranspiration Wastewater outfalls Irrigation Measurement Streamflow is measured as an amount of water passing through a specific point over time. The units used in the United States are cubic feet per second, while in most other countries cubic meters per second are utilized. There are a variety of ways to measure the discharge of a stream or canal. A stream gauge provides continuous flow over time at one location for water resource and environmental management or other purposes. Streamflow values are better indicators than gage height of conditions along the whole river. Measurements of streamflow are made about every six weeks by United States Geological Survey (USGS) personnel. They wade into the stream to make the measurement or do so from a boat, bridge, or cableway over the stream. For each gaging station, a relation between gage height and streamflow is determined by simultaneous measurements of gage height and streamflow over the natural range of flows (from very low flows to floods). This relation provides the streamflow data from that station. For purposes that do not require a continuous measurement of stream flow over time, current meters or acoustic Doppler velocity profilers can be used. For small streams—a few meters wide or smaller—weirs may be installed. Approximation One informal method that provides an approximation of the stream flow termed the orange method or float method is: Measure a length of stream, and mark the start and finish points. The longest length without changing stream conditions is desired to obtain the most accurate measurement. Place an orange at the starting point and measure the time for it to reach the finish point with a stopwatch. Repeat this at least three times and average the measurement times. Express velocity in meters per second. If the measurements were made at midstream (maximum velocity), the mean stream velocity is approximately 0.8 of the measured velocity for rough (rocky) bottom conditions and 0.9 of the measured velocity for smooth (mud, sand, smooth bedrock) bottom conditions. Monitoring In the United States, streamflow gauges are funded primarily from state and local government funds. In fiscal year 2008, the USGS provided 35% of the funding for everyday operation and maintenance of gauges. Additionally, USGS uses hydrographs to study streamflow in rivers. A hydrograph is a chart showing, most often, river stage (height of the water above an arbitrary altitude) and streamflow (amount of water, usually in cubic feet per second). Other properties, such as rainfall and water quality parameters can also be plotted. Forecasting For most streams especially those with a small watershed, no record of discharge is available. 
In that case, it is possible to make discharge estimates using the rational method or some modified version of it. However, if chronological records of discharge are available for a stream, a short-term forecast of discharge can be made for a given rainstorm using a hydrograph. Unit hydrograph method This method involves building a graph in which the discharge generated by a rainstorm of a given size is plotted over time, usually hours or days. It is called the unit hydrograph method because it addresses only the runoff produced by a particular rainstorm in a specified period of time—the time taken for a river to rise, peak, and fall in response to a storm. Once a rainfall-runoff relationship is established, then subsequent rainfall data can be used to forecast streamflow for selected storms, called standard storms. A standard rainstorm is a high-intensity storm of some known magnitude and frequency. One method of unit hydrograph analysis involves expressing the hour by hour or day by day increase in streamflow as a percentage of total runoff. Plotted on a graph, these data form the unit hydrograph for that storm, which represents the runoff added to the pre-storm baseflow. To forecast the flows in a large drainage basin using the unit hydrograph method would be difficult because in a large basin geographic conditions may vary significantly from one part of the basin to another. This is especially so with the distribution of rainfall because an individual rainstorm rarely covers the basin evenly. As a result, the basin does not respond as a unit to a given storm, making it difficult to construct a reliable hydrograph. Magnitude and frequency method For large basins, where the unit hydrograph method might not be useful or reliable, the magnitude and frequency method is used to calculate the probability of recurrence of large flows based on records of past years' flows. In the United States, these records are maintained by the Hydrological Division of the USGS for large streams. For a basin with an area of 5,000 square miles or more, the river system is typically gauged at five to ten places. The data from each gauging station apply to the part of the basin upstream of that location. Given several decades of peak annual discharges for a river, limited projections can be made to estimate the size of some large flow that has not been experienced during the period of record. The technique involves projecting the curve (graph line) formed when peak annual discharges are plotted against their respective recurrence intervals. However, in most cases the curve bends strongly, making it difficult to plot a projection accurately. This problem can be overcome by plotting the discharge and/or recurrence interval data on logarithmic graph paper. Once the plot is straightened, a straight line can be drawn through the points. A projection can then be made by extending the line beyond the points and then reading the appropriate discharge for the recurrence interval in question. Relationship to the environment Runoff of water in channels is responsible for transport of sediment, nutrients, and pollution downstream. Without streamflow, the water in a given watershed would not be able to naturally progress to its final destination in a lake or ocean. This would disrupt the ecosystem. Streamflow is one important route of water from the land to lakes and oceans.
The other main routes are surface runoff (the flow of water from the land into nearby watercourses that occurs during precipitation and as a result of irrigation), flow of groundwater into surface waters, and the flow of water from constructed pipes and channels. Relationship to society Streamflow confers on society both benefits and hazards. Runoff downstream is a means to collect water for storage in dams for power generation or water abstraction. The flow of water assists transport downstream. A given watercourse has a maximum streamflow rate that can be accommodated by its channel, and this rate can be calculated. If the streamflow exceeds this maximum rate, as happens when an excessive amount of water is present in the watercourse, the channel cannot handle all the water, and flooding occurs. The 1993 Mississippi river flood, the largest ever recorded on the river, was a response to heavy, long-duration spring and summer rainfalls. Early rains saturated the soil over more than 300,000 square miles of the upper watershed, greatly reducing infiltration and leaving soils with little or no storage capacity. As rains continued, surface depressions, wetlands, ponds, ditches, and farm fields filled with overland flow and rainwater. With no remaining capacity to hold water, additional rainfall was forced from the land into tributary channels and thence to the Mississippi River. For more than a month, the total load of water from hundreds of tributaries exceeded the Mississippi's channel capacity, causing it to spill over its banks onto adjacent floodplains. Where the flood waters were artificially constricted by an engineered channel bordered by constructed levees and unable to spill onto large sections of floodplain, the flood levels were forced even higher. See also Hydrological modelling List of rivers by discharge Losing stream Perennial stream Runoff model (reservoir) Stream bed Water resources Open-channel flow References USGS, Atlanta, GA. "The Water Cycle: Streamflow." 2 August 2010. Hydrology
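The float (orange) timing method described in the Approximation section above lends itself to a short calculation. The Python sketch below applies the stated correction factors (0.8 for rough beds, 0.9 for smooth beds) to convert a timed surface velocity into an approximate mean velocity; the final multiplication by a measured cross-sectional area to obtain discharge is a conventional extra step not spelled out in the article, and all sample numbers are hypothetical.

```python
def mean_velocity(reach_length_m, float_times_s, bed="rough"):
    """Approximate mean stream velocity (m/s) from float timings over a measured reach.

    The float measures surface velocity at midstream; the correction factors
    (0.8 for rough/rocky beds, 0.9 for smooth beds) scale it down to a mean value.
    """
    factors = {"rough": 0.8, "smooth": 0.9}
    avg_time = sum(float_times_s) / len(float_times_s)   # average of repeated runs
    surface_velocity = reach_length_m / avg_time
    return surface_velocity * factors[bed]

# Hypothetical field data: a 20 m reach, three float runs, rough rocky bottom.
v = mean_velocity(20.0, [24.9, 26.3, 25.5], bed="rough")

# Multiplying by a measured wetted cross-sectional area gives an approximate
# discharge (a conventional step, not part of the article's description).
area_m2 = 3.2
print(f"mean velocity ~ {v:.2f} m/s, discharge ~ {v * area_m2:.2f} m^3/s")
```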
Streamflow
Chemistry,Engineering,Environmental_science
2,393
71,853,402
https://en.wikipedia.org/wiki/Animals%20in%20ancient%20Greece%20and%20Rome
Animals had a variety of roles and functions in ancient Greece and Rome. Fish and birds were served as food. Species such as donkeys and horses served as work animals. The military used elephants. It was common to keep animals such as parrots, cats, or dogs as pets. Many animals held important places in the Graeco-Roman religion or culture. For example, owls symbolized wisdom and were associated with Athena. Humans would form close relationships with their animals in antiquity. Philosophers often debated about the nature of animals and humans. Many believed that the fundamental difference was that humans were capable of reason while animals were not. Philosophers such as Porphyry advocated for veganism. Marine life Fishing For the ancient Greeks and Romans fishing served as a source of income, food, and entertainment. Fishes such as tuna, sturgeons, mackerel, jellyfish, anchovies, lobsters, sprat, red mullet, oysters, mussels, sea urchins, salted fish, squid, and octopus were popular meals in ancient Greece or Rome. Octopuses were stored in pots and they would be given as a gift. Fish was also used to make a popular Roman condiment known as garum. Species such as Bluefin Tuna were expensive delicacies in ancient Greece. In ancient Rome, many fish species were delicacies. The poor had limited access to these fish. Fishes were also used to help guide seamen and as methods of foretelling the weather. Ancient fisherman used nets, short rods, traps, and lines with hooks. Roman fishing lines would often have an artificial fly attached to the end of the line. These flies were made of small feathers and were made to imitate the insects that landed on the surface of the water. The ancient Greek word griphos referred to a type of fishing basket. Poseidon, or Neptune in Roman mythology carried a trident that is used for spearfishing. People would hunt and trap tuna through seine fishing. Traps such as the almadraba might have been used in ancient Rome. Fisherman would sell their goods in the markets of Roman cities for profit. There were thriving fish markets across the Roman world. Polybius describes fishing techniques in his works. Boats would sail towards the fishing spot, with one sailor serving as a lookout. The lookout would signal to the other sailors when they had found fish. One sailor would row the boat towards the fish, while another man stood by ready to harpoon the fish. After the fish was stabbed the harpooner would pull the weapon out of the body leaving behind the barbed spear point inside of the fish. The fisherman attached a rope to the spear point, which they would then use to pull the fish towards them. Oppian, a Greek writer who wrote the Halieutica describes fishing in his work. He wrote that fisherman would use their oars to scare the fish into running into their nets made of buoyant flax which the fish thought was shelter. In ancient Athens a bell would announce the arrival of fishing boats. The ancient Greeks and Romans practiced whaling. They hunted both sperm and killer whales. Usually, they would whale by Corsica, Sardinia, and the Peloponnese. Ancient whaling was a dangerous practice and whalers used long lines with animals skins at the end to catch the whale and prevent themselves from being dragged under by it. Stories of fishermen and fishing appear throughout Ancient Greek literature. For example, in Sophron's The Fisherman and the Clown or the comedian Plato's Phaon. Roman and Greek writers were infatuated with the idea of a weather-beaten fisherman. 
Oppian depicted the fisherman as heroic. The Greeks and Romans practiced aquaculture and would create artificial bodies of water for fish. Sometimes they would keep fish as pets. Alcaeus, a Greek lyric poet wrote: Dolphins and whales Pliny the Elder described a story of a boy who befriended a dolphin by feeding him bread. The ancient Greeks had numerous stories of people being rescued by dolphins. Arion, a Greek musician, and Dionysus, a Greek god both had such stories told about them. The Romans called dolphins porcus piscus, which translates to pig-fish. During the reign of Septimius Severus, a whale was stranded on the Tiber River. The Romans built a model of this whale, which people would walk through. This site became a popular tourist attraction. People would watch animals such as lions walk through the model. Lobsters Although there are a wide variety of images of lobsters throughout ancient Greece or Rome, very few are anatomically accurate. The ancient Romans knew that lobsters had five arms and they had detailed information about their claws and other external features. Pliny considered them bloodless animals. The Romans also knew that lobsters live in rocky terrain, their reproductive habits, and their seasonal movements. Roman authors provide accounts of how lobsters interacted with octopuses, which are their predators. Fishermen used this information to catch the lobsters using lobster traps, which became a popular metaphor in Roman theatre. Lobsters were considered a prestigious food in ancient Rome and Greece and the wealthy would hunt them to gather them. Other species Seaweed was known as alga in ancient Rome and the Greeks knew it as phycos. In ancient Rome it was considered a medicine for gout and ankle swelling. Roman poet Virgil wrote, "nothing is more vile than seaweed." Juvenal, another Roman poet, once joked about "inspectors of seaweed," who waste time on trivial matters. The Romans would wrap the roots of their crops with seaweed to preserve the freshness and humidity of the seedlings. Red algae were also used as soil fertilizer in ancient Rome. Dioscorides described the usage of algae as medicine in De Materia Medica. Several species of venus clam, or as they were called in Greek, kheme. One species was the kheme trakheia or the glykymaris. They were known for their hard shells, taste similar to the sea, and high level of nutrition. The khemea leia, or the smooth clam was known for its smoothness and taste. Another species, known as peloris, is possibly named after Cape Peloris, which is where they were found. Saltfish were used in Roman medicine. They were believed to provoke the bowels and the appetite. It was considered difficult to digest, although nutritious. Xenocrates differentiates between a variety of kinds of saltfish. One kind had hard flesh, another kind had soft flesh, another kinds had flesh somewhere in between. Some saltfish were fleshy and others were fatty. The fat saltfish were capable of floating on their stomach. Birds Birds in ancient Rome and Greece were eaten as food. Flamingo tongues were highly valuable in ancient Rome. Emperors would collect them and serve them at feasts. The Hēliou Zōön, or "creature of the sun" was an ancient Greek term for a species of bird, which was likely the Greater Flamingo or the Phoenix. Pheasants and geese were valuable delicacies in ancient Rome. They, along with guineafowl and partridges were farmed. Partridges were considered good for people with dysentery. 
Quails have been hunted since antiquity and eaten as food. Ostrich eggs were also eaten, although rarely. The Romans ate peacocks and peahens. They usually were pastured in fields, lived in temples, and were sacrificed to Juno. Through Juno, they became associated with marriage and fertility. The arrival of a Barn Swallow was believed to be a signal that spring had arrived. This belief is the origin of an ancient proverb "One Swallow doesn't make a spring". Swallow chicks are hatched blind, which may have led to an ancient misconception that if the eyes of a chick were removed, they would heal and the chick would be granted sight. Swallows also appear in Greek myths such as with Procne and Philomela. Red and Black Kites were thought of as a sign of spring and a marker that was used by farmers to determine when they should shear their sheep. The kite was also considered to be a greedy and malevolent animal that killed young children and stole from the people. The Eurasian Wryneck was connected with erotic magic in Ancient Rome and Greece. There are several explanations for this. One holds that the Wryneck was invented by Aphrodite to help Jason win Medea. Another is that Hera turned the daughter of Echo into a wryneck because of her affairs with Zeus. One bird, possibly the Ruff, was known for traveling to the grave of Memnon, where the birds would then kill each other. Tits, or as they were known in ancient Greece, Aigthalos, were said to have laid more eggs than other birds and to have attacked bees and wasps, which may have formed half their diet. This was the origin of an ancient Greek proverb, "Bolder than an Aigthalos." Sometimes two females of this species were said to have laid in the same nest. The Aigiothos or Aigithos is a bird described by Aristotle as fighting a war against donkeys. Donkeys rub their sides on thorn bushes which hide the nests and eggs of the bird, thus destroying them. The birds would peck the sores on the donkey's back. This species is described as producing many children and being lame in one foot. It may have been a White wagtail, a Western Yellow wagtail, or a Northern Lapwing. The Yellow Wagtail was valued for its usefulness to farmers. If Jackdaws screamed it was considered a sign of oncoming rain. However, if they screamed following a storm it was a sign of good weather. Ravens were also used as messengers of the weather. The name of the Black Francolin became a word for a branded runaway slave because of the bird's color and its ability to conceal itself. "Kepphos" was slang for an idiot in ancient Athens. This is because it was also the name of a bird which may be the European Storm Petrel. It was considered stupid as it fed on sea foam, and therefore hunters could catch it by throwing sea foam at it. Emperor Claudius had a pet thrush which could replicate human speech. Owls Aeiskops was the Greek name for the Scops owl. Aristotle called the Scops Owls that lived in Greece all year-long "Always-Scops Owls." These owls were inedible, while the ones that stayed in Greece for only a couple of days were considered nutritious. These were silent and fatter, while the others were loud and skinnier. The Aigolios was a bird said to be the size of a domesticated chicken. Aristotle wrote that it hunted Jays and fed at night. Therefore, it was rarely visible during the day. It may have lived in caves and rocks. This species may have been the Ural Owl. Owls were associated with Athena and wisdom.
Due to this association, the Acropolis was a safe haven for them. They were signs of victory and were believed to protect soldiers. Owls were also thought to watch over the Greek economy. The Greeks also believed that owls were capable of foretelling weather. The early Romans believed that if one nailed a dead owl to a door it would protect that house from death. Owls signified death and defeat. In ancient Rome, the Eagle Owl or Little Owl were believed to signify the imminent death of any person related to a house they landed on. Ascalaphos, an ancient Greek mythological figure was turned into an owl by Demeter as punishment for informing her that Persephone had eaten pomegranate seeds in the underworld. Chickens The Junglefowl was domesticated by the ancient Greeks. It may have arrived during the seventh century BCE from Persia. From this, it may have earned its name "The Persian bird." Chickens were used in cock fights and became known as Alektōr, which means "repeller" because of this. Cock fights were a popular sport throughout all of antiquity. Their practice of crowing around daybreak became a wake-up call. Live chickens were also used as gifts for lovers. Beginning in the Sixth Century BCE the Romans began to use chickens as farm animals. The Romans may have introduced chickens to Britain. Pliny wrote that the best hens had an upright comb, uneven claws, black feathers, and red beaks. The ancient Romans and Greeks had detailed knowledge of Chicken biology and behavior. Chickens were also relevant to classical religion. Athena had a helmet with a chicken on it and people partaking in the Eleusinian mysteries were forbidden from eating chickens. Cockfights were important to Dionysus. Marine birds Geese were domesticated by the ancient Greeks and Romans. They were kept as pets and eaten as food. Geese also appeared in mythology and folklore. The Charites had chariots driven by geese and they appeared in many of Aesop's fables. Geese also allegedly helped save Rome during the Gaul's sack of Rome with their loud noises. For this, they were valued as guards. Swans were believed to be the servants of Apollo and to release a beautiful song when near death. Mallards were domesticated by the ancient Romans. Aristotle describes a seabird known in Greece as the Aithyia or the Mergus in ancient Rome. It was believed to be native to Greece and to have reproduced by laying two or three eggs after the spring solstice in coastal rocks. Roman writers noted that this bird sometimes lived in trees or on rocks. In ancient Greece, this bird was known for diving into the sea. The Romans later wrote that it only did so to dive after oily fish such as eels. They have been identified as shearwaters, European Shags, or the Great Cormorant. Cory's Shearwater is a species of bird that may have been the Diomēdeios Ornis, or the Bird of Diomedes. Which is an ancient Greek term for a bird which was said to have been Greek warriors under the command of Diomedes that were transformed into birds after being killed by Illyrians. One species of seabird known as the Chandrios was believed to be capable of curing jaundice if the patient looked at the bird in the eyes. This species may be the Stone Curlew. Greek and Roman farmers and sailors used Cranes as markers of time before the invention of the calendar. Storks were associated with family and were believed to take good care of their family. The ancient Greeks and Romans would describe trustworthy people as behaving like storks. 
Their honesty was believed to lead the birds to be transformed into humans at the end of their lives. An ancient Roman law known as the Lex Ciconia required that people care for their elderly. Foreign birds Aelian describes a bird he calls the Agreus, which, according to him, is a black bird that sings a song to lure in prey. It may have been a species belonging to the Indian mynas; however, the name is not described as a foreign word. Other possibilities are that it is a Ring Ouzel or a Masked Shrike. He also writes about two other Indian birds. One of them, which may be the Scarlet Finch, was described as being as red as flame and flying in such large numbers that people would mistake them for clouds. Another species was described as having a beautiful song and being colored like a rainbow. It may be a species of sunbird. The Greeks wrote of a bird they called the Hēliodromos or "Sun-runner." It was believed to only live for a year and follow the sun. This species may have been the Indian Courser. Land mammals Varro spoke of three species of rabbits and hares: the Italian species, the white Gaulic species, and the Spanish species. In ancient Greece and Rome rabbits had a sexual connotation and were associated with Aphrodite or Venus, likely due to their high rates of reproduction. Women would sacrifice hares to the gods in the hopes of improving their fertility. This animal was important to the goddesses Diana and Lucina. Since meat was an expensive delicacy only available to the upper classes, animals such as rabbits were also only available as food to the wealthy. The Romans may have lumped rats and mice into the same species. They likely were a common occurrence in ancient Rome due to poor sanitation and difficulty eradicating rodents. Rats and mice likely arrived in Rome due to trade. They may have caused plague outbreaks in Rome and Greece. Bats were thought to have mythical properties. People would fasten bat heads to dovecotes to protect pigeons. Bat bodies could also serve as magic charms protecting sheepfolds and antidotes to snakebites. Dogs and cats Numerous animals were kept as pets in ancient Greece and Rome. These animals included weasels, as they were seen as the ideal rodent killers, as well as dogs and cats. Aristotle believed that female cats are "naturally lecherous." The Greeks later syncretized the goddess Artemis with the Egyptian goddess Bastet, adopting Bastet's associations with cats and ascribing them to Artemis. Dogs were associated with Hecate and were sacred to Ares and Artemis. Cerberus, Argos, and Laelaps were dogs in Greek mythology. During the Battle of Marathon, one Athenian may have been accompanied by a dog. In the ancient world, dogs may have been used as guards and messengers for the military. They were seen as protectors and/or guardians of their owners and their property. Elephants Alexander the Great was influenced by Persian war elephants to utilize the species in battle. The Macedonians would fight elephants by loosening their ranks to allow the elephants to pass through and then throwing javelins at them. Through this tactic, they would pierce the legs of the unarmored elephants, thus scaring them into fleeing back to their own army's lines. The riders would be attacked by archers and javelin throwers. After his Indian campaign, Alexander created the position of elephantarch to lead his elephant units. Polypercon, one of Alexander's generals, made the first use of war elephants in Europe during his siege of Megalopolis.
During the Punic Wars and the Pyrrhic War, the Romans and their enemies used war elephants. After the Punic Wars, the Romans brought back many elephants from Africa. The Romans used the North African elephant and the African bush elephant. Wolves The ancient Greeks associated wolves with Apollo, and the Romans associated wolves with Mars. In Roman mythology, the Capitoline Wolf nursed Romulus and Remus, sons of Mars and future founders of Rome. As a consequence, the she-wolf became a symbol of Rome and the Romans. It may have become an expression of loyalty to Rome and the emperor. The Romans possibly refrained from harming or hunting wolves. "Lupus", the Roman word for wolf, became a Roman cognomen. Plautus, a Roman comedian, used imagery of wolves to discuss the cruelty of men. An altar of Zeus was located at Mount Lykaion, a mountain in Arcadia. Lycaon, king of Arcadia, was said to have sacrificed humans at this altar. Following this sacrifice, there would be a feast at which one man would eat a portion of the sacrificed people. He would then be turned into a wolf. Lions Lions were present in the Greek peninsula until classical times; the prestige of lion hunting was shown in Heracles' first labor, the killing of the Nemean lion, and lions were depicted as prominent symbols of royalty, for example in the Lion Gate to the citadel of Mycenae. Antiquity offers examples of groups of dogs defeating the 'king of beasts,' the lion. Greek legend has Achilles' shield depicting a fight between his dog and two lions. Early figurative Greek art places a strong emphasis on lions, especially Mycenaean art. Cambyses II of the Achaemenid Empire possessed a dog that started a fight with two full-grown lions. Claudius Aelianus wrote that Indians showed Alexander the Great powerful dogs bred for lion-baiting. Reptiles and amphibians Tortoiseshells may have been luxury goods imported from other parts of the world. They were often used to display the owner's wealth or to veneer furniture. Sometimes they would be dyed to increase their value or to make them resemble more valuable shells. Tortoise shell may also have been used to make a type of instrument known as a chelys. Different areas were thought to provide different tortoise shells. The best shells were said to come from the Malay Peninsula. Tortoises inspired the testudo formation. Lizards were symbols of death and rebirth as they were believed to go into hibernation. The ancient Greeks and Romans had numerous cultural depictions of salamanders. Aristotle and Theophrastus both describe salamanders as a sign of rain. Nicander stated that the salamander could be used to make poison, while Theocritus may describe a way to use a salamander to make a love potion. Aelian and Pliny the Elder also describe the salamander. Insects and snails Butterflies were considered a symbol of the soul due to the many changes they went through in their lives. Butterfly wings were associated with magic and dreams. The goddess Psyche is usually depicted with butterfly wings. Butterflies appear in art, architecture, and furniture throughout the Greco-Roman world. There are numerous depictions of insects such as grasshoppers, ants, and scorpion flies in ancient Greece and Rome. The Romans would create fibulae in the shape of cicadas and flies. Grasshoppers were also used to decorate pottery and oil lamps. Alongside this, the Greeks and Romans were also beekeepers and had extensive knowledge of wasps and ants. 
Wasps were thought to build their nests by roads, which would attract children, who would disturb the nests. Because of this, wasps were said to attack anyone who walked near the nests. Ancient Roman beekeepers kept bee hives in recesses in walls. Other hives were organized horizontally and vertically. Only the combs with honey would be removed from the hives. Hives could be made from a variety of materials such as terracotta, ferula, wood, cork, bark, and wicker. Galen believed that bee venom could be used for pain relief. Ants were thought to be capable of foretelling the future, including the weather. Cicadas were believed to have been men who were enthralled by the music of the Muses. They would keep singing until they died and the Muses turned them into cicadas. This myth was likely linked to the belief that cicadas ate only dew, as the Muses were said to have transformed the singers into cicadas because they would then be capable of living without food or water. Zeus was believed to have transformed ants into the Myrmidons and to have transformed a group of men who stole their neighbor's fruit into ants. Snails would be bred in breeding pens known as cochlearia. Triton shells were used by centaurs and gods in Greek mythology. In culture Resources In ancient Greece and Rome, the captive breeding of livestock, particularly the rearing of cattle, was an integral part of the economy. In both the Greek and Roman economies of antiquity, cattle were seen as a determiner of wealth, and herds often served as a dowry in certain arranged-marriage scenarios, as they still do today in many African and Central Asian cultures. The works of Homer depict animal husbandry and livestock management as something practiced by the wealthy and powerful, with the larger herds generally belonging to respectable, higher-status men. Cattle were versatile animals, as they are today, valued as beasts of burden as well as sources of milk, leather and meat. Sheep and goats were also highly versatile animals. They served as sources of protein, but were mainly prized for their dairy products, including feta and goat cheese, as well as their wool. Sheep and goats were valued as natural “lawnmowers” as well, being able to digest many weeds and perennials that are toxic or unpalatable to other hoofed animals. For example, in preparation for the sowing of vegetables or other crops, sheep and/or goats would help the farmers to clear a lot or field by eating all the unwanted, overgrown plant material. In the process, the animals would also fertilize the soil (with their droppings) and aerate it with their hooves, thus preparing an area for planting. Alongside sheep and cereal, other animals such as goats and pigs were crucial parts of ancient Greek cuisine. Horses were considered luxury animals and signifiers of wealth and power. Horses, mules, oxen, camels, and elephants were all used as working animals in ancient Rome and Greece. Entertainment Venationes were some of the most popular public spectacles in ancient Rome. These performances involved the simulated “hunting” (and killing) of wild or exotic animals for public entertainment, usually within a stadium or colosseum. Outside of the colosseum setting, Roman legionaries likely played a part in capturing animals for these spectacles, with Julius Africanus recommending the task of animal capture as a form of military exercise. Some of the soldiers would earn exemption from other tasks or duties in return for successfully partaking in these hunts. 
These men became known as the venatores immunes. The hunters may likely have been assigned quotas of animals to hunt, by type, as well. Animals would be held in vivaria, which also may have served as a place of central organization for these hunts. Other groups were associated with hunting animals in ancient Rome. These included the vestigiatores and the ursarii. The vestigiatores were a group of animal trackers and the ursarii were bear-hunters. Other forms of public entertainment involving animals in the classical world included theriotropheia and leporaria, which were further examples of animal pens, as well as the piscinae (early aquariums and fish ponds) and walk-through bird aviaries. Larger animals would be displayed in cages, triumphs, or pens. Numerous dangerous and exotic species were captured for display—and likely eventual public slaughter—such as the Atlas bear, baboons, antelope (of numerous forms), the Barbary lion and leopard, wild and domestic buffalo, cheetahs, Caspian tigers, European bison (wizent), Aurochs, deer (of all types), giraffe, hippopotamus, Nile crocodiles, ostriches, black and white rhinoceroses, wild asses, zebras, in addition to various types of reptiles, including large monitors and pythons. It was a popular sport and social “custom” to hit others in the head with quails, and men would walk around with quails under their coats in case a challenge appeared. Love stories between different species are common in Classical literature. They were usually seen as comical or scandalous by the people of the classical world. Animals could be given as gifts, as they were a source of entertainment and proof of high social status. People would make apes drunk for their own amusement, sometimes with disastrous consequences. There may have once been a zoo or a similar garden, full of local and exotic animals in Alexandria, Egypt. Religious significance Animals were seen as mediators between the gods and humans. Many gods took anthropomorphic forms and had close associations with animals. For example, Zeus turned into a swan and was associated with eagles. Numerous animals also appeared in Greco-Roman mythology, such as the Hydra and the Chimera. The ancient Greeks practiced Ornithomancy and the Romans practiced Augury, which are the practices of foretelling omens through the movement of birds. Animal sacrifice was a common religious practice throughout the classical world. In philosophy Some Neopythagoreans, which was a school of Hellenistic philosophy, practiced vegetarianism and believed that animals should be protected. Plutarch and Porphyry also believed in vegetarianism, and Plutarch believed that animals were more virtuous than humans. Porphyry believed that animal sacrifice was inefficient as the gods did not want dead animals. He also argued against hurting animals or using them for labor. In Roman art, animals were typically depicted as subservient to humans. Many ancient Roman sarcophagi depict the deceased hunting animals and therefore their bravery. Philosophers debated the differences between animals and humans. According to Aristotle humans are separate from animals as they have the capacity for reason and are meant to achieve their best. Philosophers such as Plutarch placed animals in human situations to better convey the positives and negatives of human nature. Plutarch believed that animals were of higher moral virtue because they could not act against their moral nature, while humans can. 
The Stoics believed that animals naturally achieved the stoic way of life. Ancient writers had a concept of “animal envy” which is the idea that animals were envious of human skills. Augustine, a Christian theologian believed that animals were not part of the City of God as they were irrational beings. Relationships In the ancient world, people could have strong emotional connections to animals. People made personal connections with their cattle and other work animals. People would also form deep connections with their horses. For example, Alexander the Great had a close bond with his horse Bucephalus. Emperor Hadrian once said: “My horse knew me not by the thousand approximate notions of title, function, and name which complicate human friendship, but solely by my just weight as a man. He shared my every impetus; he knew perfectly, and perhaps better than I, the point where my strength faltered under my will.” Aristocrats likely had a less personal relationship with their horses. Wealthy Greeks likely would replace horses for the sake of novelty. This would demonstrate wealth as horses were expensive and it required a high level of wealth and prestige to afford to consistently replace them. People kept pets in Classical antiquity. Arrian describes his relationship with his dog Horme in his writings. Arrian also describes humans trying to keep their dogs with chronic medical conditions. According to Plato, dogs are valuable pets as they provide unconditional love and affection. Plutarch wrote that humans with difficult relationships with other people often find themselves close to dogs. People would sometimes build graves for their pets. One such burial is located behind the Stoa of Attalus in Athens. The ancient Greeks had unique animal naming conventions. Pets would sometimes be given names, but only those which could not be given to a human. Indicating that they were not seen as equals. Dogs were seen as a positive reflection of the owner’s masculinity and bravery. Birds were valuable pets in the ancient world. Talking birds were seen as useful for entertainment and attracting attention. Birds were popular pets among women and often played with children. References Bibliography Culture of ancient Rome Culture of ancient Greece Economy of ancient Rome Economy of ancient Greece Human–animal interaction
Animals in ancient Greece and Rome
Biology
6,399
45,525,318
https://en.wikipedia.org/wiki/Manufacture%20Modules%20Technologies
Manufacture Modules Technologies Sarl (MMT) is a Swiss company established in Geneva in 2015 which originally specialised in the development and commercialization of "Horological Smartwatch modules", firmware, apps and cloud. Located at Geneva's Skylab high-tech hub, it expanded into the development and manufacturing of "E-Straps" operated with a mobile application. Philippe Fraboulet is the CEO. History In June 2015, Fullpower Technologies and Union Horlogère Suisse (Swiss Watchmakers Corporation) formed MMT as a joint venture, which then launched the MotionX Horological Smartwatch Open Platform for the Swiss watch industry. The initial licensees were Frederique Constant, Alpina and Mondaine, brands owned by Union Horlogère Suisse. Fullpower created and managed the circuit design, firmware, smartphone applications (including sleep activity), as well as the cloud Infrastructure. MMT managed the Swiss watch movement development and production as well as licensing and support. In July 2016, Union Horlogere Holding and MMT were spun-out of the Frédérique Constant Group. Fullpower Technologies' 19.99% share was acquired by Union Horlogere Holding BV, giving it 100% of MMT's shares. Business The company offers firmware, a cloud, manufacturing, service and over-the-air facilities for upgrades. The company also offers its own apps, which bear the label “Swiss Made software”. References Smartwatches Luxury brands Activity trackers Ambient intelligence Internet of things Human–computer interaction Ubiquitous computing Personal digital assistants Mobile computers Watch manufacturing companies of Switzerland Privately held companies of Switzerland
Manufacture Modules Technologies
Technology,Engineering
340
145,375
https://en.wikipedia.org/wiki/Local%20field
In mathematics, a field K is called a non-Archimedean local field if it is complete with respect to a metric induced by a discrete valuation v and if its residue field k is finite. In general, a local field is a locally compact topological field with respect to a non-discrete topology. The real numbers R, and the complex numbers C (with their standard topologies) are Archimedean local fields. Given a local field, the valuation defined on it can be of either of two types, each one corresponds to one of the two basic types of local fields: those in which the valuation is Archimedean and those in which it is not. In the first case, one calls the local field an Archimedean local field, in the second case, one calls it a non-Archimedean local field. Local fields arise naturally in number theory as completions of global fields. While Archimedean local fields have been quite well known in mathematics for at least 250 years, the first examples of non-Archimedean local fields, the fields of p-adic numbers for positive prime integer p, were introduced by Kurt Hensel at the end of the 19th century. Every local field is isomorphic (as a topological field) to one of the following: Archimedean local fields (characteristic zero): the real numbers R, and the complex numbers C. Non-Archimedean local fields of characteristic zero: finite extensions of the p-adic numbers Qp (where p is any prime number). Non-Archimedean local fields of characteristic p (for p any given prime number): the field of formal Laurent series Fq((T)) over a finite field Fq, where q is a power of p. In particular, of importance in number theory, classes of local fields show up as the completions of algebraic number fields with respect to their discrete valuation corresponding to one of their maximal ideals. Research papers in modern number theory often consider a more general notion, requiring only that the residue field be perfect of positive characteristic, not necessarily finite. This article uses the former definition. Induced absolute value Given such an absolute value on a field K, the following topology can be defined on K: for a positive real number m, define the subset Bm of K by Then, the b+Bm make up a neighbourhood basis of b in K. Conversely, a topological field with a non-discrete locally compact topology has an absolute value defining its topology. It can be constructed using the Haar measure of the additive group of the field. Basic features of non-Archimedean local fields For a non-Archimedean local field F (with absolute value denoted by |·|), the following objects are important: its ring of integers which is a discrete valuation ring, is the closed unit ball of F, and is compact; the units in its ring of integers which forms a group and is the unit sphere of F; the unique non-zero prime ideal in its ring of integers which is its open unit ball ; a generator of called a uniformizer of ; its residue field which is finite (since it is compact and discrete). Every non-zero element a of F can be written as a = ϖnu with u a unit, and n a unique integer. The normalized valuation of F is the surjective function v : F → Z ∪ {∞} defined by sending a non-zero a to the unique integer n such that a = ϖnu with u a unit, and by sending 0 to ∞. 
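For concreteness, under the standard conventions the sets and factorization described above can be written out in symbols as

\[
B_m = \{\, x \in K : |x| \le m \,\}
\]

for the neighbourhood basis, and, for a non-Archimedean local field F with uniformizer ϖ,

\[
\mathcal{O}_F = \{\, a \in F : |a| \le 1 \,\}, \qquad
\mathfrak{m}_F = \{\, a \in F : |a| < 1 \,\} = (\varpi), \qquad
a = \varpi^{\,v(a)}\, u \ \ \text{with } u \in \mathcal{O}_F^{\times}.
\]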
If q is the cardinality of the residue field, the absolute value on F induced by its structure as a local field is given by: An equivalent and very important definition of a non-Archimedean local field is that it is a field that is complete with respect to a discrete valuation and whose residue field is finite. Examples The p-adic numbers: the ring of integers of Qp is the ring of p-adic integers Zp. Its prime ideal is pZp and its residue field is Z/pZ. Every non-zero element of Qp can be written as u pn where u is a unit in Zp and n is an integer, with v(u pn) = n for the normalized valuation. The formal Laurent series over a finite field: the ring of integers of Fq((T)) is the ring of formal power series Fq[[T]]. Its maximal ideal is (T) (i.e. the set of power series whose constant terms are zero) and its residue field is Fq. Its normalized valuation is related to the (lower) degree of a formal Laurent series as follows: (where a−m is non-zero). The formal Laurent series over the complex numbers is not a local field. For example, its residue field is C[[T]]/(T) = C, which is not finite. Higher unit groups The nth higher unit group of a non-Archimedean local field F is for n ≥ 1. The group U(1) is called the group of principal units, and any element of it is called a principal unit. The full unit group is denoted U(0). The higher unit groups form a decreasing filtration of the unit group whose quotients are given by for n ≥ 1. (Here "" means a non-canonical isomorphism.) Structure of the unit group The multiplicative group of non-zero elements of a non-Archimedean local field F is isomorphic to where q is the order of the residue field, and μq−1 is the group of (q−1)st roots of unity (in F). Its structure as an abelian group depends on its characteristic: If F has positive characteristic p, then where N denotes the natural numbers; If F has characteristic zero (i.e. it is a finite extension of Qp of degree d), then where a ≥ 0 is defined so that the group of p-power roots of unity in F is . Theory of local fields This theory includes the study of types of local fields, extensions of local fields using Hensel's lemma, Galois extensions of local fields, ramification groups filtrations of Galois groups of local fields, the behavior of the norm map on local fields, the local reciprocity homomorphism and existence theorem in local class field theory, local Langlands correspondence, Hodge-Tate theory (also called p-adic Hodge theory), explicit formulas for the Hilbert symbol in local class field theory, see e.g. Higher-dimensional local fields A local field is sometimes called a one-dimensional local field. A non-Archimedean local field can be viewed as the field of fractions of the completion of the local ring of a one-dimensional arithmetic scheme of rank 1 at its non-singular point. For a non-negative integer n, an n-dimensional local field is a complete discrete valuation field whose residue field is an (n − 1)-dimensional local field. Depending on the definition of local field, a zero-dimensional local field is then either a finite field (with the definition used in this article), or a perfect field of positive characteristic. From the geometric point of view, n-dimensional local fields with last finite residue field are naturally associated to a complete flag of subschemes of an n-dimensional arithmetic scheme. See also Hensel's lemma Ramification group Local class field theory Higher local field Citations References External links Field (mathematics) Algebraic number theory
Local field
Mathematics
1,570
1,376,411
https://en.wikipedia.org/wiki/Electronic%20counter-countermeasure
Electronic counter-countermeasures (ECCM) is a part of electronic warfare which includes a variety of practices that attempt to reduce or eliminate the effect of electronic countermeasures (ECM) on electronic sensors aboard vehicles, ships, and aircraft, and on weapons such as missiles. ECCM is also known as electronic protective measures (EPM), chiefly in Europe. In practice, EPM often means resistance to jamming. A more detailed description defines it as the electronic warfare operations taken by a radar to offset the enemy's countermeasures. History Ever since electronics have been used in battle in an attempt to gain superiority over the enemy, effort has been spent on techniques to reduce the effectiveness of those electronics. More recently, sensors and weapons have been modified to deal with this threat. One of the most common types of ECM is radar jamming or spoofing. This originated with the Royal Air Force's use of what they codenamed Window during World War II, which Americans referred to as chaff. It was first used during the Hamburg raid on July 24–25, 1943. Jamming also may have originated with the British during World War II, when they began jamming German radio communications. These efforts include the successful British disruption of German Luftwaffe navigational radio beams. In perhaps the first example of ECCM, the Germans increased their radio transmitter power in an attempt to 'burn through' or override the British jamming, which by necessity of the jammer being airborne or further away produced weaker signals. This is still one of the primary methods of ECCM today. For example, modern airborne jammers are able to identify incoming radar signals from other aircraft and send them back with random delays and other modifications in an attempt to confuse the opponent's radar set, making the 'blip' jump around wildly and become impossible to range. More powerful airborne radars mean that it is possible to 'burn through' the jamming at much greater ranges by overpowering the jamming energy with the actual radar returns. The Germans were never really able to overcome the chaff spoofing and had to work around it (by guiding the aircraft to the target area and then having them visually acquire the targets). Today, more powerful electronics with smarter software for operation of the radar might be able to better discriminate between a moving target like an aircraft and an almost stationary target like a chaff bundle. Modern sensors and seekers owe part of their success to the ECCM designed into them. Today, electronic warfare is composed of ECM, ECCM, and electronic reconnaissance/intelligence (ELINT) activities. Examples of electronic counter-countermeasures include the American Big Crow program, which served as a Bear bomber and a standoff jammer. It was a modified Air Force NKC-135A and was built to provide the capability and flexibility to conduct varied and precise electronic warfare experiments. Throughout its 20-year existence, the U.S. government developed and installed over 3,143 electronic counter-countermeasures on its array of weapons. There is also the BAMS Project, which was funded by the Belgian government beginning in 1982. This system, together with advanced microelectronics, also provided secure voice, data, and text communications under the most severe electronic warfare conditions. 
Specific ECCM techniques The following are some examples of EPM (other than simply increasing the fidelity of sensors through techniques such as increasing power or improving discrimination): ECM detection Sensor logic may be programmed to be able to recognize attempts at spoofing (e.g., aircraft dropping chaff during terminal homing phase) and ignore them. Even more sophisticated applications of ECCM might be to recognize the type of ECM being used, and be able to cancel out the signal. Pulse compression by "chirping", or linear frequency modulation One of the effects of the pulse compression technique is boosting the apparent signal strength as perceived by the radar receiver. The outgoing radar pulses are chirped, that is, the frequency of the carrier is varied within the pulse, much like the sound of a cricket chirping. When the pulse reflects off a target and returns to the receiver, the signal is processed to add a delay as a function of the frequency. This has the effect of "stacking" the pulse so it seems stronger, but shorter in duration, to further processors. The effect can increase the received signal strength to above that of noise jamming. Similarly, jamming pulses (used in deception jamming) will not typically have the same chirp, so will not benefit from the increase in signal strength. Frequency hopping Frequency agility ("frequency hopping") may be used to rapidly switch the frequency of the transmitted energy, and receiving only that frequency during the receiving time window. This foils jammers which cannot detect this switch in frequency quickly enough or predict the next hop frequency, and switch their own jamming frequency accordingly during the receiving time window. The most advanced jamming techniques have a very wide and fast frequency range, and might possibly jam out an antijammer. This method is also useful against barrage jamming in that it forces the jammer to spread its jamming power across multiple frequencies in the jammed system's frequency range, reducing its power in the actual frequency used by the equipment at any one time. The use of spread-spectrum techniques allow signals to be spread over a wide enough spectrum to make jamming of such a wideband signal difficult. Sidelobe blanking Radar jamming can be effective from directions other than the direction the radar antenna is currently aimed. When jamming is strong enough, the radar receiver can detect it from a relatively low gain sidelobe. The radar, however, will process signals as if they were received in the main lobe. Therefore, jamming can be seen in directions other than where the jammer is located. To combat this, an omnidirectional antenna is used for a comparison signal. By comparing the signal strength as received by both the omnidirectional and the (directional) main antenna, signals can be identified that are not from the direction of interest. These signals are then ignored. Polarization Polarization can be used to filter out unwanted signals, such as jamming. If a jammer and receiver do not have the same polarization, the jamming signal will incur a loss that reduces its effectiveness. The four basic polarizations are linear horizontal, linear vertical, right-hand circular, and left-hand circular. The signal loss inherent in a cross polarized (transmitter different from receiver) pair is 3 dB for dissimilar types, and 17 dB for opposites. 
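As a rough numerical illustration of the polarization mismatch losses quoted above, the following sketch computes the loss between the four basic polarizations from their Jones vectors. The 2% cross-polar leakage used here is a hypothetical figure chosen so that opposite polarizations lose about 17 dB; the leakage of a real antenna would depend on its design.

```python
import numpy as np

# Jones vectors for the four basic polarizations (unit power).
pols = {
    "linear H": np.array([1, 0], dtype=complex),
    "linear V": np.array([0, 1], dtype=complex),
    "RHC":      np.array([1, -1j], dtype=complex) / np.sqrt(2),
    "LHC":      np.array([1,  1j], dtype=complex) / np.sqrt(2),
}

def mismatch_loss_db(tx, rx, leakage=0.02):
    """Polarization mismatch loss in dB.

    `leakage` is a hypothetical cross-polar leakage fraction standing in for
    the fact that a real antenna never rejects the orthogonal polarization
    completely.
    """
    coupling = abs(np.vdot(rx, tx)) ** 2   # fraction of incident power accepted
    coupling = max(coupling, leakage)      # floor set by the antenna imperfection
    return -10 * np.log10(coupling)

print(mismatch_loss_db(pols["linear H"], pols["RHC"]))       # ~3 dB, dissimilar types
print(mismatch_loss_db(pols["linear H"], pols["linear V"]))  # ~17 dB, opposites
print(mismatch_loss_db(pols["RHC"], pols["LHC"]))            # ~17 dB, opposites
```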
Aside from power loss to the jammer, radar receivers can also benefit from using two or more antennas of differing polarization and comparing the signals received on each. This effect can effectively eliminate all jamming of the wrong polarization, although enough jamming may still obscure the actual signal. Radiation homing Another practice of ECCM is to program sensors or seekers to detect attempts at ECM and possibly even to take advantage of them. Specialized anti-radiation missiles have existed even before modern jammers to target radar sites and they can be repurposed to target ECM. The jamming in this case effectively becomes a beacon announcing the presence and location of the transmitter. This makes the use of such ECM a difficult decision – it may serve to obscure an exact location from non-ARMs, but in doing so it must put the jamming vehicle at risk of being targeted and hit by ARMs. Some modern fire-and-forget missiles like the Vympel R-77 and the AMRAAM use a combined approach, by using radar in the normal case, but switching to an antiradiation mode if the jamming is too powerful to allow them to find and track the target normally. This mode, called "home-on-jam", actually makes the missile's job easier, as the jammer usually puts out more power than normal radar return would. See also Jamming Electronic warfare Electronic warfare support measures Wartime reserve mode References Raytheon ECCM-capable radio set Camp Evans Engineers develop World War II counter-measures with the help of Allen B. DuMont. Electronic countermeasures Electronic warfare Military technology Military communications Missile technology Radar signal processing Electronic counter-countermeasures
Electronic counter-countermeasure
Engineering
1,706
23,270,645
https://en.wikipedia.org/wiki/Beta-decay%20stable%20isobars
Beta-decay stable isobars are the set of nuclides which cannot undergo beta decay, that is, the transformation of a neutron to a proton or a proton to a neutron within the nucleus. A subset of these nuclides are also stable with regards to double beta decay or theoretically higher simultaneous beta decay, as they have the lowest energy of all isobars with the same mass number. This set of nuclides is also known as the line of beta stability, a term already in common use in 1965. This line lies along the bottom of the nuclear valley of stability. Introduction The line of beta stability can be defined mathematically by finding the nuclide with the greatest binding energy for a given mass number, by a model such as the classical semi-empirical mass formula developed by C. F. Weizsäcker. These nuclides are local maxima in terms of binding energy for a given mass number. All odd mass numbers have only one beta decay stable nuclide. Among even mass number, five (124, 130, 136, 150, 154) have three beta-stable nuclides. None have more than three; all others have either one or two. From 2 to 34, all have only one. From 36 to 72, only eight (36, 40, 46, 50, 54, 58, 64, 70) have two, and the remaining 11 have one. From 74 to 122, three (88, 90, 118) have one, and the remaining 22 have two. From 124 to 154, only one (140) has one, five have three, and the remaining 10 have two. From 156 to 262, only eighteen have one, and the remaining 36 have two, though there may also exist some undiscovered ones. All primordial nuclides are beta decay stable, with the exception of 40K, 50V, 87Rb, 113Cd, 115In, 138La, 176Lu, and 187Re. In addition, 123Te and 180mTa have not been observed to decay, but are believed to undergo beta decay with an extremely long half-life (over 1015 years). (123Te can only undergo electron capture to 123Sb, whereas 180mTa can decay in both directions, to 180Hf or 180W.) Among non-primordial nuclides, there are some other cases of theoretically possible but never-observed beta decay, notably including 222Rn and 247Cm (the most stable isotopes of their elements considering all decay modes). Finally, 48Ca and 96Zr have not been observed to undergo beta decay (theoretically possible for both) which is extremely suppressed, but double beta decay is known for both. Similar suppression of single beta decay occurs also for 148Gd, a rather short-lived alpha emitter. All elements up to and including nobelium, except technetium, promethium, and mendelevium, are known to have at least one beta-stable isotope. It is known that technetium and promethium have no beta-stable isotopes; current measurement uncertainties are not enough to say whether mendelevium has them or not. List of known beta-decay stable isobars 346 nuclides (including Fm whose discovery is unconfirmed) have been definitively identified as beta-stable. Theoretically predicted or experimentally observed double beta decay is shown by arrows, i.e. arrows point toward the lightest-mass isobar. This is sometimes dominated by alpha decay or spontaneous fission, especially for the heavy elements. Observed decay modes are listed as α for alpha decay, SF for spontaneous fission, and n for neutron emission in the special case of He. For mass 5 there are no bound isobars at all; mass 8 has bound isobars, but the beta-stable Be is unbound. 
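As a rough quantitative sketch of the line of beta stability mentioned in the introduction, minimizing the semi-empirical mass formula over Z at fixed A (with commonly quoted coefficient values, which vary slightly between fits) places the center of the line near

\[
Z_{\mathrm{stable}}(A) \;\approx\; \frac{A}{1.98 + 0.0155\,A^{2/3}},
\]

so light nuclides are most stable near Z ≈ A/2 while heavy nuclides require a growing neutron excess; for A = 200 this gives Z ≈ 80, matching the beta-stable isobar 200Hg.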
Two beta-decay stable nuclides exist for odd neutron numbers 1 (2H and 3He), 3 (5He and 6Li – the former has an extremely short half-life), 5 (9Be and 10B), 7 (13C and 14N), 55 (97Mo and 99Ru), and 85 (145Nd and 147Sm); the first four cases involve very light nuclides where odd-odd nuclides are more stable than their surrounding even-even isobars, and the last two surround the proton numbers 43 and 61 which have no beta-stable isotopes. Also, two beta-decay stable nuclides exist for odd proton numbers 1, 3, 5, 7, 17, 19, 29, 31, 35, 47, 51, 63, 77, 81, and 95; the first four cases involve very light nuclides where odd-odd nuclides are more stable than their surrounding even-even isobars, and the other numbers surround the neutron numbers 19, 21, 35, 39, 45, 61, 71, 89, 115, 123, 147 which have no beta-stable isotopes. (For N = 21 the long-lived primordial 40K exists, and for N = 71 there is 123Te whose electron capture has not yet been observed, but neither are beta-stable.) All even proton numbers 2 ≤ Z ≤ 102 have at least two beta-decay stable nuclides, with exactly two for Z = 4 (8Be and 9Be – the former having an extremely short half-life) and 6 (12C and 13C). Also, the only even neutron numbers with only one beta-decay stable nuclide are 0 (1H) and 2 (4He); at least two beta-decay stable nuclides exist for even neutron numbers in the range 4 ≤ N ≤ 160, with exactly two for N = 4 (7Li and 8Be), 6 (11B and 12C), 8 (15N and 16O), 66 (114Cd and 116Sn, noting also primordial but not beta-stable 115In), 120 (198Pt and 200Hg), and 128 (212Po and 214Rn – both very unstable to alpha decay). Seven beta-decay stable nuclides exist for the magic N = 82 (136Xe, 138Ba, 139La, 140Ce, 141Pr, 142Nd, and 144Sm) and five for N = 20 (36S, 37Cl, 38Ar, 39K, and 40Ca), 50 (86Kr, 88Sr, 89Y, 90Zr, and 92Mo, noting also primordial but not beta-stable 87Rb), 58 (100Mo, 102Ru, 103Rh, 104Pd, and 106Cd), 74 (124Sn, 126Te, 127I, 128Xe, and 130Ba), 78 (130Te, 132Xe, 133Cs, 134Ba, and 136Ce), 88 (148Nd, 150Sm, 151Eu, 152Gd, and 154Dy – the last not primordial), and 90 (150Nd, 152Sm, 153Eu, 154Gd, and 156Dy). For A ≤ 209, the only beta-decay stable nuclides that are not primordial nuclides are 5He, 8Be, 146Sm, 150Gd, and 154Dy. (146Sm has a half-life long enough that it should barely survive as a primordial nuclide, but it has never been experimentally confirmed as such.) All beta-decay stable nuclides with A ≥ 209 are known to undergo alpha decay, though for some, spontaneous fission is the dominant decay mode. Cluster decay is sometimes also possible, but in all known cases it is a minor branch compared to alpha decay or spontaneous fission. Alpha decay is energetically possible for all beta-stable nuclides with A ≥ 165 with the single exception of 204Hg, but in most cases the Q-value is small enough that such decay has never been seen. With the exception of 262No, no nuclides with A > 260 are currently known to be beta-stable. Moreover, the known beta-stable nuclei for individual masses A = 222, A = 256, and A ≥ 258 (corresponding to proton numbers Z = 86 and Z ≥ 98, or to neutron numbers N = 136 and N ≥ 158) may not represent the complete set. The general patterns of beta-stability are expected to continue into the region of superheavy elements, though the exact location of the center of the valley of stability is model dependent. 
It is widely believed that an island of stability exists along the beta-stability line for isotopes of elements around copernicium that are stabilized by shell closures in the region; such isotopes would decay primarily through alpha decay or spontaneous fission. Beyond the island of stability, various models that correctly predict many known beta-stable isotopes also predict anomalies in the beta-stability line that are unobserved in any known nuclides, such as the existence of two beta-stable nuclides with the same odd mass number. This is a consequence of the fact that a semi-empirical mass formula must consider shell correction and nuclear deformation, which become far more pronounced for heavy nuclides. The beta-stable fully ionized nuclei (with all electrons stripped) are somewhat different. Firstly, if a proton-rich nuclide can only decay by electron capture (because the energy difference between the parent and daughter is less than 1.022 MeV, the amount of decay energy needed for positron emission), then full ionization makes decay impossible. This happens for example for 7Be. Moreover, sometimes the energy difference is such that while β− decay violates conservation of energy for a neutral atom, bound-state β− decay (in which the decay electron remains bound to the daughter in an atomic orbital) is possible for the corresponding bare nucleus. Within the range , this means that 163Dy, 193Ir, 205Tl, 215At, and 243Am among beta-stable neutral nuclides cease to be beta-stable as bare nuclides, and are replaced by their daughters 163Ho, 193Pt, 205Pb, 215Rn, and 243Cm (bound-state β− decay has been observed for 163Dy, 205Tl and is predicted for 193Ir, 215At, 243Am). Beta decay toward minimum mass Beta decay generally causes nuclides to decay toward the isobar with the lowest mass (which is often, but not always, the one with highest binding energy) with the same mass number. Those with lower atomic number and higher neutron number than the minimum-mass isobar undergo beta-minus decay, while those with higher atomic number and lower neutron number undergo beta-plus decay or electron capture. However, there are a few odd-odd nuclides between two beta-stable even-even isobars, that predominantly decay to the higher-mass of the two beta-stable isobars. For example, 40K could either undergo electron capture or positron emission to 40Ar, or undergo beta minus decay to 40Ca: both possible products are beta-stable. The former process would produce the lighter of the two beta-stable isobars, yet the latter is more common. Isotope masses from: Notes References External links Decay-Chains https://www-nds.iaea.org/relnsd/NdsEnsdf/masschain.html (Russian) Beta-decay stable nuclides up to Z = 118 (data for Z ≥ 102 are predictions) Nuclear physics
Beta-decay stable isobars
Physics
2,390
2,354,244
https://en.wikipedia.org/wiki/Droste%20effect
The Droste effect (), known in art as an example of mise en abyme, is the effect of a picture recursively appearing within itself, in a place where a similar picture would realistically be expected to appear. This produces a loop which in theory could go on forever, but in practice only continues as far as the image's resolution allows. The effect is named after Droste, a Dutch brand of cocoa, with an image designed by Jan Misset in 1904. The Droste effect has since been used in the packaging of a variety of products. Apart from advertising, the effect is also seen in the Dutch artist M. C. Escher's 1956 lithograph Print Gallery, which portrays a gallery that depicts itself. The effect has been widely used on the covers of comic books, mainly in the 1940s. Effect Origins The Droste effect is named after the image on the tins and boxes of Droste cocoa powder which displayed a nurse carrying a serving tray with a cup of hot chocolate and a box with the same image, designed by Jan Misset. This familiar image was introduced in 1904 and maintained for decades with slight variations from 1912 by artists including Adolphe Mouron. The poet and columnist Nico Scheepmaker introduced wider usage of the term in the late 1970s. Mathematics The appearance is recursive: the smaller version contains an even smaller version of the picture, and so on. Only in theory could this go on forever, as fractals do; practically, it continues only as long as the resolution of the picture allows, which is relatively short, since each iteration geometrically reduces the picture's size. Medieval art The Droste effect was anticipated by Giotto early in the 14th century, in his Stefaneschi Triptych. The altarpiece portrays in its centre panel Cardinal Giacomo Gaetani Stefaneschi offering the triptych itself to St. Peter. There are also several examples from medieval times of books featuring images containing the book itself or window panels in churches depicting miniature copies of the window panel itself. M. C. Escher The Dutch artist M. C. Escher made use of the Droste effect in his 1956 lithograph Print Gallery, which portrays a gallery containing a print which depicts the gallery, each time both reduced and rotated, but with a void at the centre of the image. The work has attracted the attention of mathematicians including Hendrik Lenstra. They devised a method of filling in the artwork's central void in an additional application of the Droste effect by successively rotating and shrinking an image of the artwork. Advertising In the 20th century, the Droste effect was used to market a variety of products. The packaging of Land O'Lakes butter featured a Native American woman holding a package of butter with a picture of herself. Morton Salt similarly made use of the effect. The cover of the 1969 vinyl album Ummagumma by Pink Floyd shows the band members sitting in various places, with a picture on the wall showing the same scene, but the order of the band members rotated. The logo of The Laughing Cow cheese spread brand pictures a cow with earrings. On closer inspection, these are seen to be images of the circular cheese spread package, each bearing the image of the mascot itself. The Droste effect is a theme in Russell Hoban's children's novel, The Mouse and His Child, appearing in the form of a label on a can of "Bonzo Dog Food" which depicts itself. Comic books The Droste effect has been a motif for the cover of comic books for many years, known as an "infinity cover". 
Such covers were especially popular during the 1940s. Examples include Batman #8 (December 1941–January 1942), Action Comics #500 (October 1979), and Bongo Comics Free For All! (2007 ed.). Little Giant Comics #1 (July 1938) is said to be the first-published example of an infinity cover. Video games The main menu screen for The Stanley Parable (and the re-release The Stanley Parable: Ultra Deluxe), known for its self-referential humor and commentary about video games, shows the protagonist's desk on which a computer monitor displays the same main menu screen. Besides having the expected Droste effect where the computer monitor renders itself recursively, this is a rare example of the Droste effect extending the other direction out of its own medium into the real world, since the player is also presumably sitting behind their desk looking at a computer monitor. See also Beyond the Infinite Two Minutes, a movie prominently incorporating the effect Chinese boxes Dream within a dream Fractal Homunculus argument Infinity mirror Infinite regress Matryoshka doll Infinity Quine Scale invariance Self-similarity Story within a story § Fractal fiction Video feedback Notes References External links Escher and the Droste effect The Math Behind the Droste Effect (article by Jos Leys summarizing the results of the Leiden study and article) Droste Effect with Mathematica Droste Effect from Wolfram Demonstrations Project Artistic techniques Recursion Symmetry
Droste effect
Physics,Mathematics
1,055
72,592,525
https://en.wikipedia.org/wiki/Floating%20cable-stayed%20bridge
A floating cable-stayed bridge is a type of cable-stayed bridge in which the towers stand on submerged, buoyant tension-leg platforms that are tethered to the seabed. No floating cable-stayed bridge has yet been built or planned, although a floating suspension bridge has been planned in Norway. A floating cable-stayed bridge could be more stable against horizontal movement across the bridge than a floating suspension bridge; the lateral forces produced by wind and by currents in the water remain a problem, which designers attempt to resolve by running the tether cables from the floating platform to the seabed at different angles. See also Cable-stayed suspension bridge Floating suspension bridge List of cable-stayed bridges in the United States List of longest cable-stayed bridge spans List of longest suspension bridge spans List of straits References Bridges by structural type Structural engineering
Floating cable-stayed bridge
Engineering
159
7,149,012
https://en.wikipedia.org/wiki/Factorization%20system
In mathematics, it can be shown that every function can be written as the composite of a surjective function followed by an injective function. Factorization systems are a generalization of this situation in category theory. Definition A factorization system (E, M) for a category C consists of two classes of morphisms E and M of C such that: E and M both contain all isomorphisms of C and are closed under composition. Every morphism f of C can be factored as f = m ∘ e for some morphisms e ∈ E and m ∈ M. The factorization is functorial: if u and v are two morphisms such that v ∘ m ∘ e = m′ ∘ e′ ∘ u for some morphisms e, e′ ∈ E and m, m′ ∈ M, then there exists a unique morphism w making the following diagram commute: Remark: (u, v) is a morphism from m ∘ e to m′ ∘ e′ in the arrow category. Orthogonality Two morphisms e and m are said to be orthogonal, denoted e ↓ m, if for every pair of morphisms u and v such that v ∘ e = m ∘ u there is a unique morphism w such that the diagram commutes. This notion can be extended to define the orthogonals of sets of morphisms by H↑ = {e | e ↓ h for every h ∈ H} and H↓ = {m | h ↓ m for every h ∈ H}. Since in a factorization system E ∩ M contains all the isomorphisms, the condition (3) of the definition is equivalent to (3′) E ⊆ M↑ and M ⊆ E↓. Proof: In the previous diagram (3), take the factorizations e = id ∘ e and m = m ∘ id (identity on the appropriate object). Equivalent definition The pair (E, M) of classes of morphisms of C is a factorization system if and only if it satisfies the following conditions: Every morphism f of C can be factored as f = m ∘ e with e ∈ E and m ∈ M. E = M↑ and M = E↓. Weak factorization systems Suppose e and m are two morphisms in a category C. Then e has the left lifting property with respect to m (respectively m has the right lifting property with respect to e) when for every pair of morphisms u and v such that ve = mu there is a morphism w such that the following diagram commutes. The difference with orthogonality is that w is not necessarily unique. A weak factorization system (E, M) for a category C consists of two classes of morphisms E and M of C such that: The class E is exactly the class of morphisms having the left lifting property with respect to each morphism in M. The class M is exactly the class of morphisms having the right lifting property with respect to each morphism in E. Every morphism f of C can be factored as f = m ∘ e for some morphisms e ∈ E and m ∈ M. This notion leads to a succinct definition of model categories: a model category is a pair consisting of a category C and classes of (so-called) weak equivalences W, fibrations F and cofibrations C so that C has all limits and colimits, (C ∩ W, F) is a weak factorization system, (C, F ∩ W) is a weak factorization system, and W satisfies the two-out-of-three property: if f and g are composable morphisms and two of f, g, g ∘ f are in W, then so is the third. A model category is a complete and cocomplete category equipped with a model structure. A map is called a trivial fibration if it belongs to F ∩ W, and it is called a trivial cofibration if it belongs to C ∩ W. An object is called fibrant if the morphism to the terminal object is a fibration, and it is called cofibrant if the morphism from the initial object is a cofibration. References External links Category theory
Factorization system
Mathematics
708
23,870,096
https://en.wikipedia.org/wiki/Negative-index%20metamaterial
Negative-index metamaterial or negative-index material (NIM) is a metamaterial whose refractive index for an electromagnetic wave has a negative value over some frequency range. NIMs are constructed of periodic basic parts called unit cells, which are usually significantly smaller than the wavelength of the externally applied electromagnetic radiation. The unit cells of the first experimentally investigated NIMs were constructed from circuit board material, or in other words, wires and dielectrics. In general, these artificially constructed cells are stacked or planar and configured in a particular repeated pattern to compose the individual NIM. For instance, the unit cells of the first NIMs were stacked horizontally and vertically, resulting in a pattern that was repeated and intended (see below images). Specifications for the response of each unit cell are predetermined prior to construction and are based on the intended response of the entire, newly constructed, material. In other words, each cell is individually tuned to respond in a certain way, based on the desired output of the NIM. The aggregate response is mainly determined by each unit cell's geometry and substantially differs from the response of its constituent materials. In other words, the way the NIM responds is that of a new material, unlike the wires or metals and dielectrics it is made from. Hence, the NIM has become an effective medium. Also, in effect, this metamaterial has become an “ordered macroscopic material, synthesized from the bottom up”, and has emergent properties beyond its components. Metamaterials that exhibit a negative value for the refractive index are often referred to by any of several terminologies: left-handed media or left-handed material (LHM), backward-wave media (BW media), media with negative refractive index, double negative (DNG) metamaterials, and other similar names. Properties and characteristics Electrodynamics of media with negative indices of refraction were first studied by Russian theoretical physicist Victor Veselago from Moscow Institute of Physics and Technology in 1967. The proposed left-handed or negative-index materials were theorized to exhibit optical properties opposite to those of glass, air, and other transparent media. Such materials were predicted to exhibit counterintuitive properties like bending or refracting light in unusual and unexpected ways. However, the first practical metamaterial was not constructed until 33 years later and it does support Veselago's concepts. Currently, negative-index metamaterials are being developed to manipulate electromagnetic radiation in new ways. For example, optical and electromagnetic properties of natural materials are often altered through chemistry. With metamaterials, optical and electromagnetic properties can be engineered by changing the geometry of its unit cells. The unit cells are materials that are ordered in geometric arrangements with dimensions that are fractions of the wavelength of the radiated electromagnetic wave. Each artificial unit responds to the radiation from the source. The collective result is the material's response to the electromagnetic wave that is broader than normal. Subsequently, transmission is altered by adjusting the shape, size, and configurations of the unit cells. This results in control over material parameters known as permittivity and magnetic permeability. These two parameters (or quantities) determine the propagation of electromagnetic waves in matter. 
Therefore, controlling the values of permittivity and permeability means that the refractive index can be negative or zero as well as conventionally positive. It all depends on the intended application or desired result. So, optical properties can be expanded beyond the capabilities of lenses, mirrors, and other conventional materials. Additionally, one of the effects most studied is the negative index of refraction. Reverse propagation When a negative index of refraction occurs, propagation of the electromagnetic wave is reversed. Resolution below the diffraction limit becomes possible. This is known as subwavelength imaging. Transmitting a beam of light via an electromagnetically flat surface is another capability. In contrast, conventional materials are usually curved, and cannot achieve resolution below the diffraction limit. Also, reversing the electromagnetic waves in a material, in conjunction with other ordinary materials (including air) could result in minimizing losses that would normally occur. The reverse of the electromagnetic wave, characterized by an antiparallel phase velocity is also an indicator of negative index of refraction. Furthermore, negative-index materials are customized composites. In other words, materials are combined with a desired result in mind. Combinations of materials can be designed to achieve optical properties not seen in nature. The properties of the composite material stem from its lattice structure constructed from components smaller than the impinging electromagnetic wavelength separated by distances that are also smaller than the impinging electromagnetic wavelength. Likewise, by fabricating such metamaterials researchers are trying to overcome fundamental limits tied to the wavelength of light. The unusual and counterintuitive properties currently have practical and commercial use manipulating electromagnetic microwaves in wireless and communication systems. Lastly, research continues in the other domains of the electromagnetic spectrum, including visible light. Materials The first actual metamaterials worked in the microwave regime, or centimeter wavelengths, of the electromagnetic spectrum (about 4.3 GHz). It was constructed of split-ring resonators and conducting straight wires (as unit cells). The unit cells were sized from 7 to 10 millimeters. The unit cells were arranged in a two-dimensional (periodic) repeating pattern which produces a crystal-like geometry. Both the unit cells and the lattice spacing were smaller than the radiated electromagnetic wave. This produced the first left-handed material when both the permittivity and permeability of the material were negative. This system relies on the resonant behavior of the unit cells. Below a group of researchers develop an idea for a left-handed metamaterial that does not rely on such resonant behavior. Research in the microwave range continues with split-ring resonators and conducting wires. Research also continues in the shorter wavelengths with this configuration of materials and the unit cell sizes are scaled down. However, at around 200 terahertz issues arise which make using the split ring resonator problematic. "Alternative materials become more suitable for the terahertz and optical regimes." At these wavelengths selection of materials and size limitations become important. For example, in 2007 a 100 nanometer mesh wire design made of silver and woven in a repeating pattern transmitted beams at the 780 nanometer wavelength, the far end of the visible spectrum. 
The researchers believe this produced a negative refraction of 0.6. Nevertheless, this operates at only a single wavelength like its predecessor metamaterials in the microwave regime. Hence, the challenges are to fabricate metamaterials so that they "refract light at ever-smaller wavelengths" and to develop broad band capabilities. Artificial transmission-line-media In the metamaterial literature, medium or media refers to transmission medium or optical medium. In 2002, a group of researchers came up with the idea that in contrast to materials that depended on resonant behavior, non-resonant phenomena could surpass narrow bandwidth constraints of the wire/split-ring resonator configuration. This idea translated into a type of medium with broader bandwidth abilities, negative refraction, backward waves, and focusing beyond the diffraction limit. They dispensed with split-ring-resonators and instead used a network of L–C loaded transmission lines. In metamaterial literature this became known as artificial transmission-line media. At that time it had the added advantage of being more compact than a unit made of wires and split ring resonators. The network was both scalable (from the megahertz to the tens of gigahertz range) and tunable. It also includes a method for focusing the wavelengths of interest. By 2007 the negative refractive index transmission line was employed as a subwavelength focusing free-space flat lens. That this is a free-space lens is a significant advance. Part of prior research efforts targeted creating a lens that did not need to be embedded in a transmission line. The optical domain Metamaterial components shrink as research explores shorter wavelengths (higher frequencies) of the electromagnetic spectrum in the infrared and visible spectrums. For example, theory and experiment have investigated smaller horseshoe shaped split ring resonators designed with lithographic techniques, as well as paired metal nanorods or nanostrips, and nanoparticles as circuits designed with lumped element models Applications The science of negative-index materials is being matched with conventional devices that broadcast, transmit, shape, or receive electromagnetic signals that travel over cables, wires, or air. The materials, devices and systems that are involved with this work could have their properties altered or heightened. Hence, this is already happening with metamaterial antennas and related devices which are commercially available. Moreover, in the wireless domain these metamaterial apparatuses continue to be researched. Other applications are also being researched. These are electromagnetic absorbers such as radar-microwave absorbers, electrically small resonators, waveguides that can go beyond the diffraction limit, phase compensators, advancements in focusing devices (e.g. microwave lens), and improved electrically small antennas. In the optical frequency regime developing the superlens may allow for imaging below the diffraction limit. Other potential applications for negative-index metamaterials are optical nanolithography, nanotechnology circuitry, as well as a near field superlens (Pendry, 2000) that could be useful for biomedical imaging and subwavelength photolithography. Manipulating permittivity and permeability To describe any electromagnetic properties of a given achiral material such as an optical lens, there are two significant parameters. 
These are permittivity, ε, and permeability, μ, which allow accurate prediction of light waves traveling within materials, and of electromagnetic phenomena that occur at the interface between two materials. For example, refraction is an electromagnetic phenomenon which occurs at the interface between two materials. Snell's law states that the relationship between the angle of incidence of a beam of electromagnetic radiation (light) and the resulting angle of refraction rests on the refractive indices, n1 and n2, of the two media (materials). The refractive index of an achiral medium is given by n = ±√(εμ). Hence, it can be seen that the refractive index is dependent on these two parameters. Therefore, if designed or arbitrarily modified values can be inputs for ε and μ, then the behavior of propagating electromagnetic waves inside the material can be manipulated at will. This ability then allows for intentional determination of the refractive index (a short numerical sketch of this sign selection and the resulting reversed refraction appears later in this section). For example, in 1967, Victor Veselago analytically determined that light will refract in the reverse direction (negatively) at the interface between a material with negative refractive index and a material exhibiting conventional positive refractive index. This extraordinary material was realized on paper with simultaneous negative values for ε and μ, and could therefore be termed a double negative material. However, in Veselago's day a material which exhibits double negative parameters simultaneously seemed impossible because no natural materials exist which can produce this effect. Therefore, his work was ignored for three decades; it was later nominated for the Nobel Prize. In general the physical properties of natural materials cause limitations. Most dielectrics only have positive permittivities, ε > 0. Metals will exhibit negative permittivity, ε < 0, at optical frequencies, and plasmas exhibit negative permittivity values in certain frequency bands. Pendry et al. demonstrated that the plasma frequency can be made to occur in the lower microwave frequencies for metals, with a material made of metal rods that replaces the bulk metal. However, in each of these cases the permeability remains always positive. At microwave frequencies it is possible for negative μ to occur in some ferromagnetic materials, but the inherent drawback is that they are difficult to find above terahertz frequencies. In any case, a natural material that can achieve negative values for permittivity and permeability simultaneously has not been found or discovered. Hence, all of this has led to constructing artificial composite materials known as metamaterials in order to achieve the desired results. Negative index of refraction due to chirality In the case of chiral materials, the refractive index depends not only on the permittivity ε and permeability μ, but also on the chirality parameter κ, resulting in distinct values for left and right circularly polarized waves, given by n± = √(εμ) ± κ. A negative index will occur for waves of one circular polarization if κ > √(εμ). In this case, it is not necessary that either or both ε and μ be negative to achieve a negative index of refraction. A negative refractive index due to chirality was predicted by Pendry and Tretyakov et al., and first observed simultaneously and independently by Plum et al. and Zhang et al. in 2009. Physical properties never before produced in nature Theoretical articles were published in 1996 and 1999 which showed that synthetic materials could be constructed to purposely exhibit a negative permittivity and permeability.
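Before turning to how these theoretical proposals were put into practice, the sign conventions introduced above can be made concrete with a minimal numerical sketch. It is not taken from the cited papers: the ε, μ, κ and angle values are illustrative assumptions, and the branch rule used here (choose the square root with a non-negative imaginary part for a passive medium) is one common convention.

```python
import cmath
import math

def refractive_index(eps_r, mu_r):
    # n = sqrt(eps_r * mu_r); for a passive medium pick the branch with Im(n) >= 0,
    # which makes Re(n) negative when both eps_r and mu_r have negative real parts.
    n = cmath.sqrt(eps_r * mu_r)
    return -n if n.imag < 0 else n

# Double-negative medium (small losses added so the branch choice is unambiguous):
n_dng = refractive_index(complex(-1.0, 0.01), complex(-1.0, 0.01))
print(f"double-negative medium: n = {n_dng.real:.3f} {n_dng.imag:+.3f}j")

# Snell's law, n1*sin(theta1) = n2*sin(theta2), across an air/double-negative interface:
theta1 = 30.0
theta2 = math.degrees(math.asin(math.sin(math.radians(theta1)) / n_dng.real))
print(f"incidence {theta1} deg -> refraction {theta2:.1f} deg (same side as the incident beam)")

# Chiral route: n_pm = sqrt(eps*mu) +/- kappa can go negative even with eps = mu = 1.
kappa = 1.5
print(f"chiral medium (eps = mu = 1, kappa = {kappa}): n+ = {1.0 + kappa}, n- = {1.0 - kappa}")
```

The negative refraction angle produced here is exactly the effect that the prism experiments described later in the article were designed to measure.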
These 1996 and 1999 papers, along with Veselago's 1967 theoretical analysis of the properties of negative-index materials, provided the background to fabricate a metamaterial with negative effective permittivity and permeability. See below. A metamaterial developed to exhibit negative-index behavior is typically formed from individual components. Each component responds differently and independently to a radiated electromagnetic wave as it travels through the material. Since these components are smaller than the radiated wavelength it is understood that a macroscopic view includes an effective value for both permittivity and permeability. Composite material In the year 2000, David R. Smith's team of UCSD researchers produced a new class of composite materials by depositing a structure onto a circuit-board substrate consisting of a series of thin copper split-rings and ordinary wire segments strung parallel to the rings. This material exhibited unusual physical properties that had never been observed in nature. These materials obey the laws of physics, but behave differently from normal materials. In essence these negative-index metamaterials were noted for having the ability to reverse many of the physical properties that govern the behavior of ordinary optical materials. One of those unusual properties is the ability to reverse, for the first time, Snell's law of refraction. Until the UCSD team's demonstration of negative refractive index for microwaves, such a material had been unavailable. Advances during the 1990s in fabrication and computation abilities allowed these first metamaterials to be constructed. Thus, the "new" metamaterial was tested for the effects described by Victor Veselago 30 years earlier. Studies of this experiment, which followed shortly thereafter, announced that other effects had occurred. With antiferromagnets and certain types of insulating ferromagnets, effective negative magnetic permeability is achievable when polariton resonance exists. To achieve a negative index of refraction, however, permittivity with negative values must occur within the same frequency range. The artificially fabricated split-ring resonator is a design that accomplishes this, along with the promise of damping high losses. With this first introduction of the metamaterial, it appears that the losses incurred were smaller than those in antiferromagnetic or ferromagnetic materials. When first demonstrated in 2000, the composite material (NIM) was limited to transmitting microwave radiation at frequencies of 4 to 7 gigahertz (4.28–7.49 cm wavelengths). This range is between the frequency of household microwave ovens (~2.45 GHz, 12.23 cm) and military radars (~10 GHz, 3 cm). At demonstrated frequencies, pulses of electromagnetic radiation moving through the material in one direction are composed of constituent waves moving in the opposite direction. The metamaterial was constructed as a periodic array of copper split ring and wire conducting elements deposited onto a circuit-board substrate. The design was such that the cells, and the lattice spacing between the cells, were much smaller than the radiated electromagnetic wavelength. Hence, it behaves as an effective medium. The material has become notable because its range of (effective) permittivity εeff and permeability μeff values has exceeded those found in any ordinary material. Furthermore, the characteristic of negative (effective) permeability evinced by this medium is particularly notable, because it has not been found in ordinary materials.
In addition, the negative values for the magnetic component are directly related to its left-handed nomenclature and properties (discussed in a section below). The split-ring resonator (SRR), based on the prior 1999 theoretical article, is the tool employed to achieve negative permeability. This first composite metamaterial is then composed of split-ring resonators and electrical conducting posts. Initially, these materials were only demonstrated at wavelengths longer than those in the visible spectrum. In addition, early NIMs were fabricated from opaque materials and usually made of non-magnetic constituents. As an illustration, however, if these materials are constructed at visible frequencies, and a flashlight is shone onto the resulting NIM slab, the material should focus the light at a point on the other side. This is not possible with a sheet of ordinary opaque material. In 2007, NIST in collaboration with the Atwater Lab at Caltech created the first NIM active at optical frequencies. More recently, layered "fishnet" NIM materials made of silicon and silver wires have been integrated into optical fibers to create active optical elements. Simultaneous negative permittivity and permeability Negative permittivity εeff < 0 had already been discovered and realized in metals for frequencies all the way up to the plasma frequency, before the first metamaterial. There are two requirements to achieve a negative value for refraction. The first is to fabricate a material which can produce negative permeability μeff < 0. Second, negative values for both permittivity and permeability must occur simultaneously over a common range of frequencies. Therefore, for the first metamaterial, the nuts and bolts are one split-ring resonator electromagnetically combined with one (electric) conducting post. These are designed to resonate at designated frequencies to achieve the desired values. Looking at the make-up of the split ring, the associated magnetic field pattern from the SRR is dipolar. This dipolar behavior is notable because it mimics nature's atom, but on a much larger scale, such as in this case at 2.5 millimeters. Atoms exist on the scale of picometers. The splits in the rings create a dynamic where the SRR unit cell can be made resonant at radiated wavelengths much larger than the diameter of the rings. If the rings were closed, a half wavelength boundary would be electromagnetically imposed as a requirement for resonance. The split in the second ring is oriented opposite to the split in the first ring. It is there to generate a large capacitance, which occurs in the small gap. This capacitance substantially decreases the resonant frequency while concentrating the electric field. The individual SRR used in this work had a resonant frequency of 4.845 GHz. The radiative losses from absorption and reflection are noted to be small, because the unit dimensions are much smaller than the free-space radiated wavelength. When these units or cells are combined into a periodic arrangement, the magnetic coupling between the resonators is strengthened. Properties unique in comparison to ordinary or conventional materials begin to emerge. For one thing, this periodic strong coupling creates a material which now has an effective magnetic permeability μeff in response to the radiated-incident magnetic field.
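The resonance behavior just described is often summarized with a Lorentz-type effective-permeability model of the kind used for split-ring-resonator arrays, μeff(f) = 1 − F·f²/(f² − f0² + iγf). The sketch below is illustrative only: the resonance frequency, filling factor F and damping γ are assumed values chosen so that the negative-μ band comes out near the 4.2–4.6 GHz gap discussed in the next section, not parameters extracted from the original experiment.

```python
def mu_eff(f_hz, f0_hz=4.2e9, F=0.17, gamma_hz=2.0e7):
    """Lorentz-type effective permeability commonly used to model SRR arrays.
    f0_hz (resonance), F (filling factor) and gamma_hz (damping) are assumed values."""
    f = complex(f_hz)
    return 1.0 - F * f**2 / (f**2 - f0_hz**2 + 1j * gamma_hz * f)

for k in range(8):
    f = 4.05e9 + k * 0.1e9              # sweep 4.05 ... 4.75 GHz
    mu = mu_eff(f)
    marker = "  <-- Re(mu_eff) < 0" if mu.real < 0 else ""
    print(f"{f/1e9:4.2f} GHz: Re(mu_eff) = {mu.real:+7.2f}{marker}")
```

Below the resonance the effective permeability is large and positive; just above it, it swings negative over a band a few hundred megahertz wide, which is the behavior exploited in the passband discussion that follows.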
Composite material passband Graphing the general dispersion curve, a region of propagation occurs from zero up to a lower band edge, followed by a gap, and then an upper passband. The presence of a 400 MHz gap between 4.2 GHz and 4.6 GHz implies a band of frequencies where μeff < 0 occurs. Furthermore, when wires are added symmetrically between the split rings, a passband occurs within the previously forbidden band of the split ring dispersion curves. That this passband occurs within a previously forbidden region indicates that the negative εeff for this region has combined with the negative μeff to allow propagation, which fits with theoretical predictions (a numerical illustration of this combination is sketched further below). Mathematically, the dispersion relation leads to a band with negative group velocity everywhere, and a bandwidth that is independent of the plasma frequency, within the stated conditions. Mathematical modeling and experiment have both shown that periodically arrayed conducting elements (non-magnetic by nature) respond predominantly to the magnetic component of incident electromagnetic fields. The result is an effective medium and negative μeff over a band of frequencies. The negative permeability was verified to occur in the region of the forbidden band, where the gap in propagation occurred, using a finite section of material. This was combined with a negative permittivity material, εeff < 0, to form a “left-handed” medium, which formed a propagation band with negative group velocity where previously there was only attenuation. This validated predictions. In addition, a later work determined that this first metamaterial had a range of frequencies over which the refractive index was predicted to be negative for one direction of propagation. Other predicted electrodynamic effects were to be investigated in other research. Describing a left-handed material From the conclusions in the above section a left-handed material (LHM) can be defined. It is a material which exhibits simultaneous negative values for permittivity, ε, and permeability, μ, in an overlapping frequency region. Since the values are derived from the effects of the composite medium system as a whole, these are defined as effective permittivity, εeff, and effective permeability, μeff. Real values are then derived to denote the value of the negative index of refraction and wave vectors. This means that in practice losses will occur for a given medium used to transmit electromagnetic radiation, such as microwave or infrared frequencies, or visible light. In this instance, real values describe either the amplitude or the intensity of a transmitted wave relative to an incident wave, while ignoring the negligible loss values. Isotropic negative index in two dimensions In the above sections, the first fabricated metamaterial was constructed with resonating elements which exhibited one direction of incidence and polarization. In other words, this structure exhibited left-handed propagation in one dimension. This was discussed in relation to Veselago's seminal work 33 years earlier (1967). He predicted that intrinsic to a material which manifests negative values of effective permittivity and permeability are several types of reversed physics phenomena. Hence, there was then a critical need for higher-dimensional LHMs to confirm Veselago's theory, as expected. The confirmation would include reversal of Snell's law (index of refraction), along with other reversed phenomena.
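The combination sketched above — wires supplying a negative εeff and split rings a negative μeff over an overlapping band — can be illustrated with the same kind of toy model. Here the wire medium is treated as a lossless Drude-type dielectric, εeff = 1 − (fp/f)², with an assumed plasma frequency fp, and the SRR permeability from the previous sketch is reused without damping; none of these numbers come from the cited experiments.

```python
import math

F_PLASMA = 12e9          # assumed wire-medium plasma frequency, Hz
F0, FILL = 4.2e9, 0.17   # assumed SRR resonance and filling factor (as in the previous sketch)

def eps_eff(f):
    return 1.0 - (F_PLASMA / f) ** 2           # Drude-type wire medium (lossless)

def mu_eff(f):
    return 1.0 - FILL * f**2 / (f**2 - F0**2)  # Lorentz-type SRR medium (lossless)

for k in range(8):
    f = 4.05e9 + k * 0.1e9                     # sweep 4.05 ... 4.75 GHz
    eps, mu = eps_eff(f), mu_eff(f)
    if eps < 0 and mu < 0:
        n = -math.sqrt(eps * mu)               # both negative: real index, left-handed branch
        status = f"propagating, n = {n:.2f}"
    else:
        status = "evanescent (only one of eps, mu is negative)"
    print(f"{f/1e9:4.2f} GHz: eps = {eps:+6.2f}, mu = {mu:+6.2f} -> {status}")
```

A band with a real (and negative) index opens only where the two negative regions overlap, which is the behavior reported for the wire-plus-split-ring composite.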
In the beginning of 2001 the existence of a higher-dimensional structure was reported. It was two-dimensional and demonstrated by both experiment and numerical confirmation. It was an LHM, a composite constructed of wire strips mounted behind the split-ring resonators (SRRs) in a periodic configuration. It was created for the express purpose of being suitable for further experiments to produce the effects predicted by Veselago. Experimental verification of a negative index of refraction A theoretical work published in 1967 by Soviet physicist Victor Veselago showed that a refractive index with negative values is possible and that this does not violate the laws of physics. As discussed previously (above), the first metamaterial had a range of frequencies over which the refractive index was predicted to be negative for one direction of propagation. It was reported in May 2000. In 2001, a team of researchers constructed a prism composed of metamaterials (negative-index metamaterials) to experimentally test for negative refractive index. The experiment used a waveguide to help transmit the proper frequency and isolate the material. This test achieved its goal because it successfully verified a negative index of refraction. The experimental demonstration of negative refractive index was followed by another demonstration, in 2003, of a reversal of Snell's law, or reversed refraction. In this experiment, however, the negative-index material was placed in free space and operated from 12.6 to 13.2 GHz. Although the radiated frequency range is about the same, a notable distinction is that this experiment was conducted in free space rather than employing waveguides. Furthering the authenticity of negative refraction, the power flow of a wave transmitted through a dispersive left-handed material was calculated and compared to a dispersive right-handed material. The transmission of an incident field, composed of many frequencies, from an isotropic nondispersive material into an isotropic dispersive medium is employed. The direction of power flow for both nondispersive and dispersive media is determined by the time-averaged Poynting vector. Negative refraction was shown to be possible for multiple frequency signals by explicit calculation of the Poynting vector in the LHM. Fundamental electromagnetic properties of the NIM In a slab of conventional material with an ordinary refractive index – a right-handed material (RHM) – the wavefront is transmitted away from the source. In a NIM the wavefront travels toward the source. However, the magnitude and direction of the flow of energy essentially remains the same in both the ordinary material and the NIM. Since the flow of energy remains the same in both materials (media), the impedance of the NIM matches the RHM. Hence, the sign of the intrinsic impedance is still positive in a NIM. Light incident on a left-handed material, or NIM, will bend to the same side as the incident beam, and for Snell's law to hold, the refraction angle should be negative. In a passive metamaterial medium this determines a negative real and imaginary part of the refractive index. Negative refractive index in left-handed materials In 1968 Victor Veselago's paper showed that the opposite directions of EM plane waves and the flow of energy were derived from the individual Maxwell curl equations.
In ordinary optical materials, the curl equation for the electric field shows a "right hand rule" for the directions of the electric field E, the magnetic induction B, and wave propagation, which goes in the direction of the wave vector k. However, the direction of energy flow formed by E × H is right-handed only when the permeability is greater than zero. This means that when the permeability is less than zero, i.e. negative, wave propagation (determined by k) is reversed and runs contrary to the direction of energy flow. Furthermore, the relations of the vectors E, H, and k form a "left-handed" system – and it was Veselago who coined the term "left-handed" (LH) material, which is in wide use today (2011). He contended that an LH material has a negative refractive index and relied on the steady-state solutions of Maxwell's equations as a center for his argument. After a 30-year void, when LH materials were finally demonstrated, it could be said that the designation of negative refractive index is unique to LH systems, even when compared to photonic crystals. Photonic crystals, like many other known systems, can exhibit unusual propagation behavior such as reversal of phase and group velocities. But negative refraction does not occur in these systems, and it has not yet been realistically demonstrated in photonic crystals. Negative refraction at optical frequencies The negative refractive index in the optical range was first demonstrated in 2005 by Shalaev et al. (at the telecom wavelength λ = 1.5 μm) and by Brueck et al. (at λ = 2 μm) at nearly the same time. In 2006, a Caltech team led by Lezec, Dionne, and Atwater achieved negative refraction in the visible spectral regime. Reversed Cherenkov radiation Besides reversed values for the index of refraction, Veselago predicted the occurrence of reversed Cherenkov radiation in a left-handed medium. Whereas ordinary Cherenkov radiation is emitted in a cone around the direction in which a charged particle is travelling through the medium, reversed Cherenkov radiation is emitted in a cone around the opposite direction. Reversed Cherenkov radiation was first experimentally demonstrated indirectly in 2009, using a phased electromagnetic dipole array to model a moving charged particle. Reversed Cherenkov radiation emitted by actual charged particles was first observed in 2017. Other optics with NIMs Theoretical work, along with numerical simulations, began in the early 2000s on the abilities of DNG slabs for subwavelength focusing. The research began with Pendry's proposed "Perfect lens." Several research investigations that followed Pendry's concluded that the "Perfect lens" was possible in theory but impractical. One direction in subwavelength focusing proceeded with the use of negative-index metamaterials, but based on the enhancements for imaging with surface plasmons. In another direction researchers explored paraxial approximations of NIM slabs. Implications of negative refractive materials The existence of negative refractive materials can require changes to electrodynamic calculations that were formulated for the case of permeability μ = 1. A change from a conventional refractive index to a negative value gives incorrect results for conventional calculations, because some properties and effects have been altered. When the permeability μ has values other than 1, this affects Snell's law, the Doppler effect, Cherenkov radiation, Fresnel's equations, and Fermat's principle (the effect on impedance and normal-incidence reflection is illustrated below). The refractive index is basic to the science of optics.
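The impedance remarks above — that the flow of energy and hence the intrinsic impedance keep their sign in a NIM, and that Fresnel's equations are affected once μ differs from 1 — can be illustrated with the normal-incidence reflection coefficient. In the sketch below the wave impedance is Z = Z0·sqrt(μr/εr); the ε = μ = −1 slab (the Veselago/Pendry lens condition) is an illustrative assumption, with a tiny loss term added only for numerical bookkeeping.

```python
import cmath

Z0 = 376.730  # wave impedance of free space, ohms

def impedance(eps_r, mu_r):
    # Z = Z0 * sqrt(mu_r / eps_r); take the branch with Re(Z) >= 0 (passive medium).
    z = Z0 * cmath.sqrt(mu_r / eps_r)
    return -z if z.real < 0 else z

def reflectance(eps_r, mu_r):
    # Normal-incidence power reflection from vacuum into the given medium.
    z = impedance(eps_r, mu_r)
    r = (z - Z0) / (z + Z0)
    return abs(r) ** 2

print(f"glass-like slab (eps=2.25, mu=1): R = {reflectance(2.25, 1.0):.4f}")
print(f"matched NIM slab (eps=-1, mu=-1): R = {reflectance(complex(-1, 1e-3), complex(-1, 1e-3)):.4f}")
```

Because the impedance stays positive and matched, the ε = μ = −1 interface reflects essentially nothing even though the refraction angle reverses — one reason the flat "perfect lens" geometry mentioned above is conceivable at all.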
Shifting the refractive index to a negative value may be a cause to revisit or reconsider the interpretation of some norms, or basic laws. US patent on left-handed composite media The first US patent for a fabricated metamaterial, titled "Left handed composite media" by David R. Smith, Sheldon Schultz, Norman Kroll and Richard A. Shelby, was issued in 2004. The invention achieves simultaneous negative permittivity and permeability over a common band of frequencies. The material can integrate media which is already composite or continuous, but which will produce negative permittivity and permeability within the same spectrum of frequencies. Different types of continuous or composite media may be deemed appropriate when combined for the desired effect. However, the inclusion of a periodic array of conducting elements is preferred. The array scatters electromagnetic radiation at wavelengths longer than the size of the element and lattice spacing. The array is then viewed as an effective medium. See also History of metamaterials Superlens Metamaterial cloaking Photonic metamaterials Metamaterial antenna Nonlinear metamaterials Photonic crystal Seismic metamaterials Split-ring resonator Acoustic metamaterials Metamaterial absorber Metamaterial Plasmonic metamaterials Terahertz metamaterials Tunable metamaterials Transformation optics Theories of cloaking Academic journals Metamaterials Metamaterials books Metamaterials Handbook Metamaterials: Physics and Engineering Explorations Notes -NIST References Further reading Also see the Preprint-author's copy. External links Manipulating the Near Field with Metamaterials Slide show, with audio available, by Dr. John Pendry, Imperial College, London List of science website news stories on Left Handed Materials Metamaterials Electromagnetism 2000 in science 21st century in science 20th century in science Articles containing video clips
Negative-index metamaterial
Physics,Materials_science,Engineering
6,551
38,369,589
https://en.wikipedia.org/wiki/Chi%20Puppis
χ Puppis, Latinised as Chi Puppis, is a single star in the southern constellation of Puppis. It has a white hue and is faintly visible to the eye at night with an apparent visual magnitude of 4.79. The star is located at a distance of approximately 1,800 light years from the Sun based on parallax, and is drifting further away with a radial velocity of +30 km/s. O. J. Eggen listed this star as a member of the Hyades Stream based on its space motion. There has been some disagreement as to the stellar classification of Chi Puppis. In 1962, W. Buscombe classified it as A2Vvar, matching a variable A-type main-sequence star. However, P. S. Conti in 1965 considered that to be a misclassification on the basis of its B-V color index. He considers it of later type A5. In their study of the nearby open cluster NGC 2483, M. P. Fitzgerald and A. F. J. Moffat used the same class, A2Vv. In 1979, Nancy Houk assigned it to class A7 III, indicating it may be an A-type giant star. Finally, R. O. Gray and associates found a class of A5 II, matching a bright giant. In his star atlas Neue Uranometrie, Friedrich Wilhelm Argelander labelled this star as χ Argo. It was probably labelled as χ by Bayer in the original Uranometria, although Bayer's chart is somewhat fanciful. Nicolas-Louis de Lacaille changed Bayer's designations in Argo Navis and applied χ to the star now called χ Carinae. References External links chi Puppis (HIP 38901) A-type bright giants Suspected variables Puppis Puppis, Chi CD-29 5236 065456 038901 3113
Chi Puppis
Astronomy
388
24,426,227
https://en.wikipedia.org/wiki/Paco%20Lagerstrom
Paco Axel Lagerstrom (February 24, 1914 – February 16, 1989) was a Swedish-American applied mathematician and aeronautical engineer. He was trained formally in mathematics, but worked for much of his career in aeronautical applications. He was known for work in applying the method of asymptotic expansion to fluid mechanics problems. Several of his works have become classics, including "Matched Asymptotic Expansions: Ideas And Techniques". Biography He was born on February 24, 1914, in Oskarshamn, Sweden. Lagerstrom earned bachelor's and master's degrees, in 1935 and 1939 respectively, at the University of Stockholm. He then came to America as a graduate student at Princeton University, earning a PhD in 1942 in mathematics under Salomon Bochner with a dissertation entitled "Measure and Integral in Partially Ordered Spaces". During this time, Lagerstrom was also a mathematics instructor. He left Princeton in 1944 to work briefly at Bell Aircraft in Niagara Falls, New York until 1945, after which he worked for a similarly brief period at Douglas Aircraft in Santa Monica. While he had already published significant results in pure mathematics, he was, by this time, firmly interested in its applications to fluid dynamic and aerodynamic problems. In 1946, Lagerstrom was recruited by Hans Liepmann to the Guggenheim Aeronautical Laboratory at Caltech. He was later promoted to Professor of Aeronautics in 1952 and Professor of Applied Mathematics in 1967, having departed only briefly to the University of Paris in 1960–1961 as visiting professor on a Guggenheim Fellowship. He died on February 16, 1989. Publications His book "Laminar flow theory", initially published in 1964 in the Theory of Laminar flows, edited by F.K.Moore, is still considered as the standard textbook for fluid mechanics. References External links 20th-century American mathematicians Fluid dynamicists 20th-century Swedish mathematicians 20th-century American engineers 1989 deaths 1914 births Swedish emigrants to the United States
Paco Lagerstrom
Chemistry
396
17,978,926
https://en.wikipedia.org/wiki/HD%20223229
HD 223229 is a suspected variable star in the northern constellation of Andromeda. It is a double star consisting of a magnitude 6.11 primary and a magnitude 8.73 companion. The pair have an angular separation of 0.80″ along a position angle of 250°, as of 2009. The primary is a B-type subgiant star with a stellar classification of B3IV. It has an estimated 6.3 times the mass of the Sun, with an effective temperature of 17,900 K. References External links Image HD 223229 Andromeda (constellation) 223229 B-type subgiants 9011 Suspected variables Durchmusterung objects 117340
HD 223229
Astronomy
144
19,042,247
https://en.wikipedia.org/wiki/Dark%20fermentation
Dark fermentation is the fermentative conversion of organic substrate to biohydrogen. It is a complex process manifested by diverse groups of bacteria, involving a series of biochemical reactions using three steps similar to anaerobic conversion. Dark fermentation differs from photofermentation in that it proceeds without the presence of light. Overview Fermentative/hydrolytic microorganisms hydrolyze complex organic polymers to monomers, which are further converted to a mixture of lower-molecular-weight organic acids and alcohols by obligate hydrogen-producing acidogenic bacteria. Utilization of wastewater as a potential substrate for biohydrogen production has been drawing considerable interest in recent years, especially in the dark fermentation process. Industrial wastewater as a fermentative substrate for H2 production addresses most of the criteria required for substrate selection, viz. availability, cost and biodegradability. Chemical wastewater (Venkata Mohan, et al., 2007a,b), cattle wastewater (Tang, et al., 2008), dairy process wastewater (Venkata Mohan, et al. 2007c, Rai et al. 2012), starch hydrolysate wastewater (Chen, et al., 2008) and designed synthetic wastewater (Venkata Mohan, et al., 2007a, 2008b) have been reported to produce biohydrogen, alongside wastewater treatment, in dark fermentation processes using selectively enriched mixed cultures under acidophilic conditions. Various wastewaters, viz. paper mill wastewater (Idania, et al., 2005), starch effluent (Zhang, et al., 2003), food processing wastewater (Shin et al., 2004, van Ginkel, et al., 2005), domestic wastewater (Shin, et al., 2004, 2008e), rice winery wastewater (Yu et al., 2002), distillery and molasses based wastewater (Ren, et al., 2007, Venkata Mohan, et al., 2008a), wheat straw wastes (Fan, et al., 2006) and palm oil mill wastewater (Vijayaraghavan and Ahmed, 2006), have been studied as fermentable substrates for H2 production along with wastewater treatment. Using wastewater as a fermentable substrate facilitates both wastewater treatment and H2 production. The efficiency of the dark fermentative H2 production process was found to depend on pre-treatment of the mixed consortia used as a biocatalyst, operating pH, and organic loading rate, in addition to the wastewater characteristics (Venkata Mohan, et al., 2007d, 2008c, d, Vijaya Bhaskar, et al., 2008d). In spite of its advantages, the main challenge observed with fermentative H2 production processes is the relatively low energy conversion efficiency from the organic source. Typical H2 yields range from 1 to 2 mol of H2/mol of glucose, which results in 80-90% of the initial COD remaining in the wastewater in the form of various volatile organic acids (VFAs) and solvents, such as acetic acid, propionic acid, butyric acid, and ethanol. Even under optimal conditions about 60-70% of the original organic matter remains in solution. Bioaugmentation with selectively enriched acidogenic consortia to enhance H2 production was also reported (Venkata Mohan, et al., 2007b). Generation and accumulation of soluble acid metabolites cause a sharp drop in the system pH and inhibit the H2 production process. Usage of unutilized carbon sources present in the acidogenic process for additional biogas production sustains the practical applicability of the process. One way to utilize/recover the remaining organic matter in a usable form is to produce additional H2 by terminal integration of photo-fermentative processes of H2 production (Venkata Mohan, et al. 2008e, Rai et al.
2012) and methane by integrating acidogenic processes to terminal methanogenic processes. See also Biogas Biohydrogen Biological hydrogen production (algae) Biomass Electrohydrogenesis Fermentation (biochemistry) Microbial fuel cell References Chen, S.-D., Lee, K.-S., Lo, Y.-C., Chen, W.-M., Wu, J.-F., Lin, C.-Y., Chang, J.-S., 2008, "Batch and continuous biohydrogen production from starch hydrolysate by Clostridium species". International Journal of Hydrogen Energy 33, 1803–12 Dabrock, B., Bahl, H., Gottschalk, G., 1992. "Parameters affecting solvent production by Clostridium pasteurianum", Appl Environ Microbiol, 58, 1233-9 Das, D., Veziroglu, T.N., 2001. "Hydrogen production by biological process: a survey of literature". International Journal of Hydrogen Energy 26, 13-28 Das, D., 2008, "International workshop on biohydrogen production technology" (IWBT 2008),7–9 February 2008, IIT Kharapgur. International Journal of Hydrogen Energy 33, 2627-8 Fan, Y.T, Zhang, Y.H., Zhang, S.F., Hou, H-W., Ren, B-Z., 2006. "Efficient conversion of wheat straw wastes into biohydrogen gas by cow dung compost". Biores Technol 97, 500-5 Ferchichi, M., Crabbe, E., Gwang-Hoon, G., Hintz, W., Almadidy, A., 2005. "Influence of initial pH on hydrogen production from cheese whey". J Biotechnol 120, 402-9 Idania, V.V., Richard, S., Derek, R., Noemi, R.S., Hector, M.P.V., 2005. "Hydrogen generation via anaerobic fermentation of paper mill wastes". Biores Technol 96, 1907–13 Kapdan, I. K., Kargi, F., 2006. "Bio-hydrogen production from waste materials", Enzyme Microb Technol 38, 569–82 Kim, J., Park, C., Kim, T-H., Lee, M., Kim, S., Kim, S., Seung-Wook., Lee, J., 2003. "Effects of various pretreatments for enhanced anaerobic digestion with waste activated sludge". J. Biosci. Bioeng 95, 271-5 Kraemer, J.T., Bagley, D.M., 2007. "Improving the yield from fermentative hydrogen production". Biotechnol Let 29, 685–95 Logan, B.E., 2004. Feature article: "Biologically extracting energy from wastewater: Biohydrogen production and microbial fuel cells". Environ Sci Technol 38, 160A-167A Logan, B.E., Oh, S.E., van Ginkel, S., Kim, I.S., 2002. "Biological hydrogen production measured in batch anaerobic respirometers". Environ Sci Technol 36, 2530-5 Rai, Pankaj K, Singh, S.P & Asthana, R.K . "Biohydrogen production from cheese whey wastewater in a two-step anaerobic process". Applied Biochemistry and Biotechnology 2012, 167 (6) 1540-9 Ren, N.Q., Chua, H., Chan, S.Y., Tsang, Y.F., Wang, Y.J., Sin, N., 2007. "Assessing optimal fermentation type for bio-hydrogen production in continuous flow acidogenic reactors", Biores Technol 98, 1774–80 Roy Chowdhury, S., Cox, D., Levandowsky, M., 1988. "Production of hydrogen by microbial fermentation". International Journal of Hydrogen Energy 13, 407-10 Shin, H.S., Youn, J.H., Kim, S.H., 2004. "Hydrogen production from food waste in anaerobic mesophilic and thermophilic acidogenesis". International Journal of Hydrogen Energy 29, 1355–63 Sparling, R., Risbey, D., Poggi-Varaldo, H.M., 1997. "Hydrogen production from inhibited anaerobic composters". International Journal of Hydrogen Energy 22, 563–6 Tang, G., Huang, J., Sun, Z., Tang, Q., Yan, C., Liu, G., 2008. "Biohydrogen production from cattle wastewater by enriched anaerobic mixed consortia: Influence of fermentation temperature and pH". J Biosci Bioengng., 106, 80-7 Valdez-Vazquez, I., Rıos-Leal, E., Munoz-Paez, K.M., Carmona-Martınez, A., Poggi-Varaldo, H.M., 2006. 
"Effect of inhibition treatment, type of Inocula, and incubation temperature on batch H2 production from organic solid waste". Biotechnol Bioeng 95, 342-9 van Ginkel, S.W., Oh, S.E., Logan. B. E., 2005. "Biohydrogen gas production from food processing and domestic wastewaters". International Journal of Hydrogen Energy 30, 1535–42 Venkata Mohan, S., Vijaya Bhaskar, Y., Sarm, P.N., 2007a. "Biohydrogen production from chemical wastewater treatment by selectively enriched anaerobic mixed consortia in biofilm configured reactor operated in periodic discontinuous batch mode". Water Res 41, 2652–64 Venkata Mohan, S., Mohanakrishna G., Veer Raghuvulu S., Sarma, P.N., 2007b. "Enhancing biohydrogen production from chemical wastewater treatment in anaerobic sequencing batch biofilm reactor (AnSBBR) by bioaugmenting with selectively enriched kanamycin resistant anaerobic mixed consortia". International Journal of Hydrogen Energy 32, 3284–92 Venkata Mohan, S., Lalit Babu, V., Sarma, P.N., 2007c. "Anaerobic biohydrogen production from dairy wastewater treatment in sequencing batch reactor (AnSBR): Effect of organic loading rate". Enzyme and Microbial Technology 41(4), 506-15 Venkata Mohan, S., Bhaskar, Y.B., Krishna, T.M., Chandrasekhara Rao N., Lalit Babu V., Sarma, P.N., 2007d. "Biohydrogen production from chemical wastewater as substrate by selectively enriched anaerobic mixed consortia: Influence of fermentation pH and substrate composition". International Journal of Hydrogen Energy, 32, 2286–95 Venkata Mohan, S., Mohanakrishna, G., Ramanaiah, S.V, Sarma, P.N., 2008a. "Simultaneous biohydrogen production and wastewater treatment in biofilm configured anaerobic periodic discontinuous batch reactor using distillery wastewater". International Journal of Hydrogen Energy 33(2), 550-8 Venkata Mohan, S., Mohanakrishna, G., Ramanaiah, S.V, Sarma, P.N., 2008b. "Integration of acidogenic and methanogenic processes for simultaneous production of biohydrogen and methane from wastewater treatment". International Journal of Hydrogen Energy 33, 2156–66 Venkata Mohan, S., Lalit Babu, V., Sarma, P.N., 2008c. "Effect of various pre-treatment methods on anaerobic mixed microflora to enhance biohydrogen production utilizing dairy wastewater as substrate". Biores Technol 99, 59-67 Venkata Mohan, S., Lalit Babu, V., Srikanth, S., Sarma, P.N., 2008d. "Bio-electrochemical behavior of fermentative hydrogen production process with the function of feeding pH". International Journal of Hydrogen Energy Venkata Mohan, S., Srikanth, S., Dinakar, P., Sarma, P.N., 2008e. "Photo-biological hydrogen production by the adopted mixed culture: Data enveloping analysis". International Journal of Hydrogen Energy 33(2), 559-69 Venkata Mohan, S., Mohanakrishna, G., Reddy, S.S., Raju, B.D., Rama Rao, K.S., Sarma, P, N., 2008f. "Self-immobilization of acidogenic mixed consortia on mesoporous material (SBA-15) and activated carbon to enhance fermentative hydrogen production". International Journal of Hydrogen Energy Vijaya Bhaskar, Y., Venkata Mohan S, Sarma, P.N., 2008. "Effect of substrate loading rate of chemical wastewater on fermentative biohydrogen production in biofilm configured sequencing batch reactor". Biores Technol 99, 6941–8 Vijayaraghavan, K., Ahmad, D., "Biohydrogen generation from palm oil mill effluent using anaerobic contact filter". International Journal of Hydrogen Energy 31, 1284–91 Yu, H., Zhu, Z., Hu, W., Zhang, H., 2002. 
"Hydrogen production from rice winery wastewater in an upflow anaerobic reactor by using mixed anaerobic cultures", International Journal of Hydrogen Energy 27, 1359–65 Zhang, T., Liu, H., Fang, H.H.P., 2003. "Biohydrogen production from starch in wastewater under thermophilic condition". J Environ Manag 69, 149-56 Zhu, H., Beland, M., 2006, "Evaluation of alternative methods of preparing hydrogen producing seeds from digested wastewater sludge". International Journal of Hydrogen Energy 31, 1980-8 External links Bio-hydrogen production from wastewater Biofuels technology Catalysis Environmental engineering Hydrogen biology Hydrogen production
Dark fermentation
Chemistry,Engineering,Biology
3,100
766,186
https://en.wikipedia.org/wiki/Paul%20Berg
Paul Berg (June 30, 1926 – February 15, 2023) was an American biochemist and professor at Stanford University. He was the recipient of the Nobel Prize in Chemistry in 1980, along with Walter Gilbert and Frederick Sanger. The award recognized their contributions to basic research involving nucleic acids, especially recombinant DNA. Berg received his undergraduate education at Penn State University, where he majored in biochemistry. He received his PhD in biochemistry from Case Western Reserve University in 1952. Berg worked as a professor at Washington University School of Medicine and Stanford University School of Medicine, in addition to serving as the director of the Beckman Center for Molecular and Genetic Medicine. In addition to the Nobel Prize, Berg was presented with the National Medal of Science in 1983 and the National Library of Medicine Medal in 1986. Berg was a member of the Board of Sponsors for the Bulletin of the Atomic Scientists. Early life and education Berg was born in Brooklyn, New York City. He was the son of a Russian Jewish immigrant couple, Sarah Brodsky, a homemaker, and Harry Berg, a clothing manufacturer. Berg graduated from Abraham Lincoln High School in 1943, received his Bachelor of Science degree in biochemistry from Penn State University in 1948 and PhD in biochemistry from Case Western Reserve University in 1952. He was a member of the Jewish fraternity, ΒΣΡ. Research and career Academic posts After completing his graduate studies, Berg spent two years (1952–1954) as a postdoctoral fellow with the American Cancer Society, working at the Institute of Cytophysiology in Copenhagen, Denmark, and the Washington University School of Medicine, and spent additional time in 1954 as a scholar in cancer research with the department of microbiology at the Washington University School of Medicine. He worked with Arthur Kornberg, while at Washington University. Berg was also tenured as a research fellow at Clare Hall, Cambridge. He was a professor at Washington University School of Medicine from 1955 until 1959. After 1959, Berg moved to Stanford University, where he taught biochemistry from 1959 until 2000 and served as director of the Beckman Center for Molecular and Genetic Medicine from 1985 until 2000. In 2000 he retired from his administrative and teaching posts, continuing to be active in research. Research interests Berg's postgraduate studies involved the use of radioisotope tracers to study intermediary metabolism. This resulted in the understanding of how foodstuffs are converted to cellular materials, through the use of isotopic carbons or heavy nitrogen atoms. Paul Berg's doctorate paper is now known as the conversion of formic acid, formaldehyde and methanol to fully reduced states of methyl groups in methionine. He was also one of the first to demonstrate that folic acid and B12 cofactors had roles in the processes mentioned. Berg is arguably most famous for his pioneering work involving gene splicing of recombinant DNA. Berg was the first scientist to create a molecule containing DNA from two different species by inserting DNA from another species into a molecule. This gene-splicing technique was a fundamental step in the development of modern genetic engineering. After developing the technique, Berg used it for his studies of viral chromosomes. Berg was a professor emeritus at Stanford. 
As of 2000, he stopped doing active research, to focus on other interests, including involvement in public policy for biomedical issues involving recombinant DNA and embryonic stem cells and publishing a book about geneticist George Beadle. Berg was a member of the Board of Sponsors of the Bulletin of the Atomic Scientists. He was also an organizer of the Asilomar conference on recombinant DNA in 1975. The previous year, Berg and other scientists had called for a voluntary moratorium on certain recombinant DNA research until they could evaluate the risks. That influential conference did evaluate the potential hazards and set guidelines for biotechnology research. It can be seen as an early application of the precautionary principle. Awards and honors Nobel Prize Berg was awarded one-half of the 1980 Nobel Prize in Chemistry, with the other half being shared by Walter Gilbert and Frederick Sanger. Berg was recognized for "his fundamental studies of the biochemistry of nucleic acids, with particular regard to recombinant DNA", while Sanger and Gilbert were honored for "their contributions concerning the determination of base sequences in nucleic acids." Other awards and honors He was elected a Fellow of the American Academy of Arts and Sciences and a member of the United States National Academy of Sciences in 1966. In 1983, Ronald Reagan presented Berg with the National Medal of Science. That same year, he was elected to the American Philosophical Society. In 1989, he received the Golden Plate Award of the American Academy of Achievement. He was elected a Foreign Member of the Royal Society (ForMemRS) in 1992. In 2005 he was awarded the Biotechnology Heritage Award by the Biotechnology Industry Organization (BIO) and the Chemical Heritage Foundation. In 2006 he received Wonderfest's Carl Sagan Prize for Science Popularization. Death Berg died on February 15, 2023, at the age of 96. See also List of Jewish Nobel laureates References External links Paul Berg narrating "Protein Synthesis: An Epic on the Cellular Level" at Google Video Paul Berg's Discussion with Larry Goldstein: "Recombinant DNA and Science Policy" and "Contemporary Science Policy Issues" Carl Sagan Prize for Science Popularization award recipient, Wonderfest 2006. The Paul Berg Papers – Profiles in Science, National Library of Medicine Paul Berg Papers, 1953–1986 (65 linear ft.) are housed in the Department of Special Collections and University Archives at Stanford University Libraries 1926 births 2023 deaths Nobel laureates in Chemistry American Nobel laureates Abraham Lincoln High School (Brooklyn) alumni American biochemists Fellows of the American Academy of Arts and Sciences Foreign members of the Royal Society History of biotechnology HIV/AIDS researchers Jewish American scientists Jewish chemists Jewish Nobel laureates Members of the European Molecular Biology Organization Members of the French Academy of Sciences Members of the United States National Academy of Sciences National Medal of Science laureates Members of the Pontifical Academy of Sciences Eberly College of Science alumni Scientists from Brooklyn Stanford University School of Medicine faculty Institute for Advanced Study visiting scholars Fellows of Clare Hall, Cambridge Alumni of Clare Hall, Cambridge American biotechnologists Recipients of the Albert Lasker Award for Basic Medical Research Members of the American Philosophical Society Members of the National Academy of Medicine Washington University School of Medicine faculty Washington University in St. 
Louis faculty
Paul Berg
Biology
1,311
75,011,465
https://en.wikipedia.org/wiki/Tremella%20anaptychiae
Tremella anaptychiae is a species of lichenicolous (lichen-dwelling) fungus in the family Tremellaceae. It was first reported in the literature in 1996 by mycologist Paul Diederich, who did not formally describe it as a new species due to the paucity of material. Additional material was collected in later years, and it was finally described in 2017 by Juan Carlos Zamora and Diederich. The fungus is known to occur in Italy, Macedonia, Spain (including the Canary Islands), and Sweden. It is confined to the host lichen Anaptychia ciliaris, which has a largely palearctic distribution. Description Tremella anaptychiae produces basidiomata that are typically more or less spherical in shape, becoming slightly tuberculate as they age. These are characterized by a waxy-gelatinous texture and can exhibit a variety of colours, ranging from cream to pinkish, brownish, or even blackish, with rare instances of greenish shades. Typically measuring 0.2–2 mm in diameter, they grow on the thallus of their host, often encompassing the and, less frequently, on the margin of . The internal context hyphae and the hyphae below the basidia are slender and thick-walled, typically ranging from 3–5.5 μm in diameter. These hyphae do not have clamps but may sometimes show small spur-like swellings. Abundant haustorial branches are present, with the mother cell being roughly spherical to broadly ellipsoid. The hymenium is well-developed, either clear or subtly brownish, and contains numerous probasidia. The basidia, when mature, are two-celled, stalked, and thick-walled, often displaying longitudinal or oblique septa. They produce basidiospores that are somewhat spherical, sometimes broadly ellipsoid, and germinate to form ballistoconidia and blastic conidia. Additionally, Tremella anaptychiae may sometimes produce asteroconidia, which have a unique four-armed structure. These asteroconidia are about 10–15 μm in diameter, with individual arms ranging from 3.5–8 μm in length. In some basidiomata, where basidia are sparse, these conidiogenous cells can be particularly numerous. References anaptychiae Fungi of Europe Fungi of the Canary Islands Fungi described in 2017 Lichenicolous fungi Taxa named by Paul Diederich Fungus species
Tremella anaptychiae
Biology
522
61,854,053
https://en.wikipedia.org/wiki/Metal%20assisted%20chemical%20etching
Metal Assisted Chemical Etching (also known as MACE) is the process of wet chemical etching of semiconductors (mainly silicon) with the use of a metal catalyst, usually deposited on the surface of a semiconductor in the form of a thin film or nanoparticles. The semiconductor, covered with the metal, is then immersed in an etching solution containing an oxidizing agent and hydrofluoric acid. The metal on the surface catalyzes the reduction of the oxidizing agent and therefore in turn also the dissolution of silicon. In the majority of the reported research, this increase in dissolution rate is also spatially confined, occurring mainly in close proximity to a metal particle at the surface. Eventually this leads to the formation of straight pores that are etched into the semiconductor (see figure to the right). This means that a pre-defined pattern of the metal on the surface can be directly transferred to a semiconductor substrate. History of development MACE is a relatively new technology in semiconductor engineering and has yet to be adopted as an industrial process. The first attempts at MACE consisted of a silicon wafer that was partially covered with aluminum and then immersed in an etching solution. This material combination led to an increased etching rate compared to bare silicon. Often this very first attempt is also called galvanic etching instead of metal assisted chemical etching. Further research showed that a thin film of a noble metal deposited on a silicon wafer's surface can also locally increase the etching rate. In particular, it was observed that noble metal particles sink down into the material when the sample is immersed in an etching solution containing an oxidizing agent and hydrofluoric acid (see image in the introduction). This method is now commonly called the metal assisted chemical etching of silicon. Other semiconductors were also successfully etched with MACE, such as silicon carbide or gallium nitride. However, the main portion of research is dedicated to MACE of silicon. It has been shown that both noble metals such as gold, platinum, palladium, and silver, and base metals such as iron, nickel, copper, and aluminium can act as catalysts in the process. Theory Some elements of MACE are commonly accepted in the scientific community, while others are still under debate. There is agreement that the reduction of the oxidizing agent is catalyzed by the noble metal particle (see figure to the left). This means that the metal particle has a surplus of positive charge which is eventually transferred to the silicon substrate. Each of the positive charges in the substrate can be identified as a hole (h+) in the valence band of the substrate, or in more chemical terms it may be interpreted as a weakened Si-Si bond due to the removal of an electron. The weakened bonds can be attacked by a nucleophilic species such as HF or H2O, which in turn leads to the dissolution of the silicon substrate in close proximity to the noble metal particle. From a thermodynamic point of view, the MACE process is possible because the redox potential of the redox couple corresponding to the oxidizing agents used (hydrogen peroxide or potassium permanganate) is below the valence band edge on the electrochemical energy scale.
Equivalently, one could say that the electrochemical potential of the electron in the etching solution (due to the presence of the oxidizing agent) is lower than the electrochemical potential of the electron in the substrate; hence electrons are removed from the silicon. In the end, this accumulation of positive charge leads to the dissolution of the substrate by hydrofluoric acid. MACE consists of multiple individual reactions. At the metal particle, the oxidizing agent is reduced. In the case of hydrogen peroxide this can be written down as follows: H2O2 + 2H+ -> 2H2O + 2h+. The created holes (h+) are then consumed during the dissolution of silicon. There are several possible reactions via which the dissolution can take place, but here just one example is given: Si + 6HF + 4h+ -> SiF6^2- + 6H+ (an illustrative mass balance based on these two half-reactions appears at the end of this section). There are still some unclear aspects of the MACE process. The model proposed above requires contact of the metal particle with the silicon substrate, which appears to conflict with the etching solution being present underneath the particle. This can be explained by a dissolution and redeposition of metal during MACE. In particular, it is proposed that some metal ions from the particle are dissolved and eventually re-deposited at the silicon surface via a redox reaction. In this case the metal particle (or even larger noble metal thin films) could partially maintain contact with the substrate while etching also takes place underneath the metal. It is also observed that, in the vicinity of the straight pores shown in the introduction, a micro-porous region forms between the pores. Generally this is attributed to holes that diffuse away from the particle and hence contribute to etching at more distant locations. This behavior is dependent on the doping type of the substrate as well as on the type of noble metal particle. Therefore, it is proposed that the formation of such a porous region beneath the straight pores depends on the type of barrier that is formed at the metal/silicon interface. In the case of an upward band bending, the electric field in the depletion layer would point towards the metal. Therefore, holes cannot diffuse further into the substrate and thus no formation of a micro-porous region is observed. In the case of downward band bending, holes could escape into the bulk of the silicon substrate and eventually lead to etching there. Experimental procedure of MACE As already stated above, MACE requires metal particles or a thin metal film on top of a silicon substrate. This can be achieved with several methods such as sputter deposition or thermal evaporation. A method to obtain particles from a continuous thin film is thermal dewetting. These deposition methods can be combined with lithography such that only desired regions are covered with metal. Since MACE is an anisotropic etching method (etching does not take place equally in all spatial directions), a pre-defined metal pattern can be directly transferred into the silicon substrate. Another method of depositing metal particles or thin films is electroless plating of noble metals on the surface of silicon. Since the redox potentials of the redox couples of noble metals are below the valence band edge of silicon, noble metal ions can (as described in the theory section) inject holes into (or extract electrons from) the substrate while they are reduced. In the end, metallic particles or films are obtained at the surface.
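Taking the two half-reactions quoted above at face value (the four-hole dissolution pathway; other pathways consume different numbers of holes), a simple mass balance shows how much oxidizing agent and HF a given amount of etched silicon requires. The pore geometry in the sketch is an arbitrary illustrative assumption, not a measured value.

```python
# Mass balance for the 4-hole MACE pathway quoted above:
#   H2O2 + 2H+      -> 2H2O + 2h+      (each H2O2 supplies 2 holes)
#   Si + 6HF + 4h+  -> SiF6^2- + 6H+   (each Si consumes 4 holes)
# => about 2 mol H2O2 and 6 mol HF per mol of dissolved Si.

RHO_SI = 2.329    # g/cm^3, density of crystalline silicon
M_SI = 28.0855    # g/mol, molar mass of silicon

# Illustrative pore: 1 um x 1 um footprint etched 10 um deep under one metal particle.
pore_volume_cm3 = (1e-4) ** 2 * 10e-4   # dimensions converted from micrometers to centimeters
mol_si = RHO_SI * pore_volume_cm3 / M_SI

print(f"Si removed per pore: {mol_si:.2e} mol")
print(f"H2O2 consumed:       {2 * mol_si:.2e} mol")
print(f"HF consumed:         {6 * mol_si:.2e} mol")
```

Estimates of this kind indicate how quickly the reagents in a small etching bath are depleted, which connects to the observation below that etching stops once the oxidizing agent and the acid are consumed.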
Finally, after the deposition of the metal on the surface of silicon, the sample is immersed in an etching solution containing hydrofluoric acid and an oxidizing agent. Etching will continue until the oxidizing agent and the acid are consumed, or until the sample is removed from the etching solution. Applications of MACE The reason why MACE is heavily researched is that it allows completely anisotropic etching of silicon substrates, which is not possible with other wet chemical etching methods (see figure to the right). Usually the silicon substrate is covered with a protective layer such as photoresist before it is immersed in an etching solution. The etching solution usually has no preferred direction of attacking the substrate; therefore, isotropic etching takes place. In semiconductor engineering, however, it is often required that the sidewalls of the etched trenches are steep. This is usually realized with methods that operate in the gas phase, such as reactive ion etching. These methods require expensive equipment compared to simple wet etching. MACE, in principle, allows the fabrication of steep trenches but is still cheap compared to gas-phase etching methods. Porous silicon Metal assisted chemical etching allows for the production of porous silicon with photoluminescence. Black silicon Black silicon is silicon with a modified surface and is a type of porous silicon. There are several works on obtaining black silicon using MACE technology. The main application of black silicon is solar energy. Black Gallium Arsenide Black gallium arsenide with light-trapping properties has also been produced by MACE. References Etching Chemistry Research lasers Semiconductors Engineering
Metal assisted chemical etching
Physics,Chemistry,Materials_science,Engineering
1,697
92,512
https://en.wikipedia.org/wiki/Lipoprotein
A lipoprotein is a biochemical assembly whose primary function is to transport hydrophobic lipid (also known as fat) molecules in water, as in blood plasma or other extracellular fluids. They consist of a triglyceride and cholesterol center, surrounded by a phospholipid outer shell, with the hydrophilic portions oriented outward toward the surrounding water and lipophilic portions oriented inward toward the lipid center. A special kind of protein, called apolipoprotein, is embedded in the outer shell, both stabilising the complex and giving it a functional identity that determines its role. Plasma lipoprotein particles are commonly divided into five main classes, based on size, lipid composition, and apolipoprotein content: HDL, LDL, IDL, VLDL and chylomicrons. Subgroups of these plasma particles are primary drivers or modulators of atherosclerosis. Many enzymes, transporters, structural proteins, antigens, adhesins, and toxins are sometimes also classified as lipoproteins, since they are formed by lipids and proteins. Scope Transmembrane lipoproteins Some transmembrane proteolipids, especially those found in bacteria, are referred to as lipoproteins; they are not related to the lipoprotein particles that this article is about. Such transmembrane proteins are difficult to isolate, as they bind tightly to the lipid membrane, often require lipids to display the proper structure, and can be water-insoluble. Detergents are usually required to isolate transmembrane lipoproteins from their associated biological membranes. Plasma lipoprotein particles Because fats are insoluble in water, they cannot be transported on their own in extracellular water, including blood plasma. Instead, they are surrounded by a hydrophilic external shell that functions as a transport vehicle. The role of lipoprotein particles is to transport fat molecules, such as triglycerides, phospholipids, and cholesterol within the extracellular water of the body to all the cells and tissues of the body. The proteins included in the external shell of these particles, called apolipoproteins, are synthesized and secreted into the extracellular water by both the small intestine and liver cells. The external shell also contains phospholipids and cholesterol. All cells use and rely on fats and cholesterol as building blocks to create the multiple membranes that cells use both to control internal water content and internal water-soluble elements and to organize their internal structure and protein enzymatic systems. The outer shell of lipoprotein particles have the hydrophilic groups of phospholipids, cholesterol, and apolipoproteins directed outward. Such characteristics make them soluble in the salt-water-based blood pool. Triglycerides and cholesteryl esters are carried internally, shielded from the water by the outer shell. The kind of apolipoproteins contained in the outer shell determines the functional identity of the lipoprotein particles. The interaction of these apolipoproteins with enzymes in the blood, with each other, or with specific proteins on the surfaces of cells, determines whether triglycerides and cholesterol will be added to or removed from the lipoprotein transport particles. Characterization in human plasma Structure Lipoproteins are complex particles that have a central hydrophobic core of non-polar lipids, primarily cholesteryl esters and triglycerides. This hydrophobic core is surrounded by a hydrophilic membrane consisting of phospholipids, free cholesterol, and apolipoproteins. 
Plasma lipoproteins, found in blood plasma, are typically divided into five main classes based on size, lipid composition, and apolipoprotein content: HDL, LDL, IDL, VLDL and chylomicrons. Functions Metabolism The handling of lipoprotein particles in the body is referred to as lipoprotein particle metabolism. It is divided into two pathways, exogenous and endogenous, depending in large part on whether the lipoprotein particles in question are composed chiefly of dietary (exogenous) lipids or whether they originated in the liver (endogenous), through de novo synthesis of triglycerides. The hepatocytes are the main platform for the handling of triglycerides and cholesterol; the liver can also store certain amounts of glycogen and triglycerides. While adipocytes are the main storage cells for triglycerides, they do not produce any lipoproteins. Exogenous pathway Bile emulsifies fats contained in the chyme, then pancreatic lipase cleaves triglyceride molecules into two fatty acids and one 2-monoacylglycerol. Enterocytes readily absorb the small molecules from the chyme. Inside the enterocytes, fatty acids and monoacylglycerides are transformed again into triglycerides. Then these lipids are assembled with apolipoprotein B-48 into nascent chylomicrons. These particles are then secreted into the lacteals in a process that depends heavily on apolipoprotein B-48. As they circulate through the lymphatic vessels, nascent chylomicrons bypass the liver circulation and are drained via the thoracic duct into the bloodstream. In the blood stream, nascent chylomicron particles interact with HDL particles, resulting in HDL donation of apolipoprotein C-II and apolipoprotein E to the nascent chylomicron. The chylomicron at this stage is then considered mature. Via apolipoprotein C-II, mature chylomicrons activate lipoprotein lipase (LPL), an enzyme on endothelial cells lining the blood vessels. LPL catalyzes the hydrolysis of triglycerides that ultimately releases glycerol and fatty acids from the chylomicrons. Glycerol and fatty acids can then be absorbed in peripheral tissues, especially adipose and muscle, for energy and storage. The hydrolyzed chylomicrons are now called chylomicron remnants. The chylomicron remnants continue circulating in the bloodstream until they interact via apolipoprotein E with chylomicron remnant receptors, found chiefly in the liver. This interaction causes the endocytosis of the chylomicron remnants, which are subsequently hydrolyzed within lysosomes. Lysosomal hydrolysis releases glycerol and fatty acids into the cell, which can be used for energy or stored for later use. Endogenous pathway The liver is the central platform for the handling of lipids: it is able to store glycogen and fats in its cells, the hepatocytes. Hepatocytes are also able to create triglycerides via de novo synthesis. They also produce bile from cholesterol. The intestines are responsible for absorbing cholesterol. They transfer it into the bloodstream. In the hepatocytes, triglycerides and cholesteryl esters are assembled with apolipoprotein B-100 to form nascent VLDL particles. Nascent VLDL particles are released into the bloodstream via a process that depends upon apolipoprotein B-100. In the blood stream, nascent VLDL particles interact with HDL particles; as a result, HDL particles donate apolipoprotein C-II and apolipoprotein E to the nascent VLDL particle. Once loaded with apolipoproteins C-II and E, the nascent VLDL particle is considered mature.
VLDL particles circulate and encounter LPL expressed on endothelial cells. Apolipoprotein C-II activates LPL, causing hydrolysis of the VLDL particle and the release of glycerol and fatty acids. These products can be absorbed from the blood by peripheral tissues, principally adipose and muscle. The hydrolyzed VLDL particles are now called VLDL remnants or intermediate-density lipoproteins (IDLs). VLDL remnants can circulate and, via an interaction between apolipoprotein E and the remnant receptor, be absorbed by the liver, or they can be further hydrolyzed by hepatic lipase. Hydrolysis by hepatic lipase releases glycerol and fatty acids, leaving behind IDL remnants, called low-density lipoproteins (LDL), which contain a relatively high cholesterol content. LDL circulates and is absorbed by the liver and peripheral cells. Binding of LDL to its target tissue occurs through an interaction between the LDL receptor and apolipoprotein B-100 on the LDL particle. Absorption occurs through endocytosis, and the internalized LDL particles are hydrolyzed within lysosomes, releasing lipids, chiefly cholesterol. Possible role in oxygen transport Plasma lipoproteins may carry oxygen gas. This property is due to the crystalline hydrophobic structure of lipids, providing a suitable environment for O2 solubility compared to an aqueous medium. Role in inflammation Inflammation, a biological system response to stimuli such as the introduction of a pathogen, has an underlying role in numerous systemic biological functions and pathologies. This is a useful response by the immune system when the body is exposed to pathogens, such as bacteria in locations that will prove harmful, but it can also have detrimental effects if left unregulated. It has been demonstrated that lipoproteins, specifically HDL, have important roles in the inflammatory process. When the body is functioning under normal, stable physiological conditions, HDL has been shown to be beneficial in several ways. LDL contains apolipoprotein B (apoB), which allows LDL to bind to different tissues, such as the artery wall if the glycocalyx has been damaged by high blood sugar levels. If oxidised, the LDL can become trapped in the proteoglycans, preventing its removal by HDL cholesterol efflux. Normally functioning HDL is able to prevent the process of oxidation of LDL and the subsequent inflammatory processes seen after oxidation. Lipopolysaccharide, or LPS, is the major pathogenic factor on the cell wall of Gram-negative bacteria. Gram-positive bacteria have a similar component, named lipoteichoic acid, or LTA. HDL has the ability to bind LPS and LTA, creating HDL-LPS complexes to neutralize the harmful effects in the body and clear the LPS from the body. HDL also has significant roles interacting with cells of the immune system to modulate the availability of cholesterol and modulate the immune response. Under certain abnormal physiological conditions such as systemic infection or sepsis, the major components of HDL become altered: the composition and quantity of lipids and apolipoproteins change as compared to normal physiological conditions, with a decrease in HDL cholesterol (HDL-C), phospholipids, and apoA-I (a major apolipoprotein in HDL that has been shown to have beneficial anti-inflammatory properties), and an increase in serum amyloid A. This altered composition of HDL is commonly referred to as acute-phase HDL in an acute-phase inflammatory response, during which time HDL can lose its ability to inhibit the oxidation of LDL.
In fact, this altered composition of HDL is associated with increased mortality and worse clinical outcomes in patients with sepsis. Classification By density Lipoproteins may be classified as five major groups, listed from larger and lower density to smaller and higher density. Lipoproteins are larger and less dense when the fat to protein ratio is increased. They are classified on the basis of electrophoresis, ultracentrifugation and nuclear magnetic resonance spectroscopy via the Vantera Analyzer. Chylomicrons carry triglycerides (fat) from the intestines to the liver, to skeletal muscle, and to adipose tissue. Very-low-density lipoproteins (VLDL) carry (newly synthesised) triglycerides from the liver to adipose tissue. Intermediate-density lipoproteins (IDL) are intermediate between VLDL and LDL. They are not usually detectable in the blood when fasting. Low-density lipoproteins (LDL) carry 3,000 to 6,000 fat molecules (phospholipids, cholesterol, triglycerides, etc.) around the body. LDL particles are sometimes referred to as "bad" lipoprotein because concentrations of two kinds of LDL (sd-LDL and LPA), correlate with atherosclerosis progression. In healthy individuals, most LDL is large and buoyant (lb LDL). large buoyant LDL (lb LDL) particles small dense LDL (sd LDL) particles Lipoprotein(a) (LPA) is a lipoprotein particle of a certain phenotype High-density lipoproteins (HDL) collect fat molecules from the body's cells/tissues and take them back to the liver. HDLs are sometimes referred to as "good" lipoprotein because higher concentrations correlate with low rates of atherosclerosis progression and/or regression. For young healthy research subjects, ~70 kg (154 lb), these data represent averages across individuals studied, percentages represent % dry weight: However, these data are not necessarily reliable for any one individual or for the general clinical population. Alpha and beta It is also possible to classify lipoproteins as "alpha" and "beta", according to the classification of proteins in serum protein electrophoresis. This terminology is sometimes used in describing lipid disorders such as abetalipoproteinemia. Subdivisions Lipoproteins, such as LDL and HDL, can be further subdivided into subspecies isolated through a variety of methods. These are subdivided by density or by the protein contents/ proteins they carry. While the research is currently ongoing, researchers are learning that different subspecies contain different apolipoproteins, proteins, and lipid contents between species which have different physiological roles. For example, within the HDL lipoprotein subspecies, a large number of proteins are involved in general lipid metabolism. However, it is being elucidated that HDL subspecies also contain proteins involved in the following functions: homeostasis, fibrinogen, clotting cascade, inflammatory and immune responses, including the complement system, proteolysis inhibitors, acute-phase response proteins, and the LPS-binding protein, heme and iron metabolism, platelet regulation, vitamin binding and general transport. Research High levels of lipoprotein(a) are a significant risk factor for atherosclerotic cardiovascular diseases via mechanisms associated with inflammation and thrombosis. The links of mechanisms between different lipoprotein isoforms and risk for cardiovascular diseases, lipoprotein synthesis, regulation, and metabolism, and related risks for genetic diseases are under active research, as of 2022. 
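The five-class scheme above can be collected into a small lookup table. The Python sketch below is purely illustrative: the class names, the size-and-density ordering, and the one-line roles are paraphrased from this article, while the data structure, field names, and helper function are hypothetical choices and not part of any clinical standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LipoproteinClass:
    name: str
    size_rank: int   # 1 = largest / least dense, 5 = smallest / most dense
    role: str        # transport role, paraphrased from the article text

# Listed from larger and less dense to smaller and denser, as in the classification above.
PLASMA_LIPOPROTEINS = [
    LipoproteinClass("chylomicron", 1, "carries dietary triglycerides from the intestines to the liver, skeletal muscle, and adipose tissue"),
    LipoproteinClass("VLDL", 2, "carries newly synthesised triglycerides from the liver to adipose tissue"),
    LipoproteinClass("IDL", 3, "intermediate between VLDL and LDL; not usually detectable in fasting blood"),
    LipoproteinClass("LDL", 4, "carries roughly 3,000-6,000 fat molecules around the body"),
    LipoproteinClass("HDL", 5, "collects fat molecules from cells and tissues and returns them to the liver"),
]

def least_to_most_dense(classes=PLASMA_LIPOPROTEINS):
    """Return class names ordered from least dense (largest) to most dense (smallest)."""
    return [c.name for c in sorted(classes, key=lambda c: c.size_rank)]

if __name__ == "__main__":
    print(least_to_most_dense())
```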
See also Lipid anchored protein Remnant cholesterol Reverse cholesterol transport Vertical Auto Profile References External links Lipids Physiology Cardiology he:כולסטרול#ליפופרוטאינים
Lipoprotein
Chemistry,Biology
3,309
7,799,280
https://en.wikipedia.org/wiki/Timelike%20simply%20connected
Suppose a Lorentzian manifold contains a closed timelike curve (CTC). No CTC can be continuously deformed as a CTC (that is, be timelike homotopic) to a point, since that point would not be causally well behaved. Therefore, any Lorentzian manifold containing a CTC is said to be timelike multiply connected. A Lorentzian manifold that does not contain a CTC is said to be timelike simply connected. Any Lorentzian manifold which is timelike multiply connected has a diffeomorphic universal covering space which is timelike simply connected. For instance, a three-sphere with a Lorentzian metric is timelike multiply connected (because any compact Lorentzian manifold contains a CTC), but it has a diffeomorphic universal covering space which contains no CTC (and is therefore not compact). By contrast, a three-sphere with the standard metric is simply connected, and is therefore its own universal cover. References Algebraic topology Homotopy theory Lorentzian manifolds
Timelike simply connected
Physics,Mathematics
215
33,829,553
https://en.wikipedia.org/wiki/Clean%20agent%20FS%2049%20C2
Clean agent FS 49 C2 is an environmentally engineered, human-safe, fast-acting clean agent fire-extinguishing gas for gaseous fire suppression, installed in a suitable fire suppression system. The gas consists of tetrafluoroethane, pentafluoroethane and carbon dioxide. FS 49 C2 maintains breathable concentrations of oxygen in the air. At extinguishing concentrations it can put out a fire with less danger to people in the room, in contrast to a pure carbon dioxide based fire suppression system, which is deadly to humans when released in extinguishing amounts. The gas was initially called Halotron II B/FS49C2. Thermal fire suppression FS 49 C2 acts similarly to an inert gas: it absorbs the heat produced by the combustion process. This mechanism is consistent with the observation that the fire heat release rate does not decrease until sufficient gas is released. The difference between inert gases and FS 49 C2 is that less FS 49 C2 is needed to suppress a fire, and therefore gas storage takes less space, depending on the storage pressure. Savings may vary between 50 and 90%. Composition It is a gaseous mixture of 60-80% tetrafluoroethane (R-134a), 10-30% pentafluoroethane (R-125) and 10-30% carbon dioxide (CO2). Its physical properties are similar to those of Halon 1301. Halon comparison FS 49 C2 is believed to cause less damage to the environment. Its main component is the most widely used replacement gas for refrigeration systems, characterized by an Ozone Depletion Potential (ODP) of zero. FS 49 C2 is suitable to replace Halon 1301 as a "drop-in" upgrade of existing Halon systems. Filling a room to a 12% concentration of FS 49 C2 is sufficient to suppress a flame-based fire. Even though FS 49 C2 itself does not leave toxic gases behind, a self-contained breathing apparatus is recommended at a fire site because, in the process of extinguishing the fire, FS 49 C2 may release potentially harmful gases. Montreal protocol UNEP banned the use of Halon gases under the Montreal Protocol treaty in 1987 because of their ozone-depleting effect. Developing countries were granted an extension to still use Halon until 2010. After 2010 UNEP recommended that those countries replace Halon with ozone-friendly alternatives. References External links Information on Brassbells product pages Information on Incosafety Corp's product pages Fire suppression agents Greenhouse gases
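The 12% room concentration mentioned above lends itself to a rough sizing estimate. The Python sketch below is a back-of-the-envelope illustration only, assuming ideal mixing at room conditions and a hypothetical 50 m3 enclosure; the function name and interface are invented here, and real installations must be sized from the manufacturer's design data and the applicable standard.

```python
def agent_volume_for_room(room_volume_m3: float, design_concentration: float = 0.12) -> float:
    """
    Back-of-the-envelope estimate of the agent volume (at room conditions) needed to
    reach a target volumetric concentration c in an enclosure of volume V, using
        c = V_agent / (V_room + V_agent)   =>   V_agent = V_room * c / (1 - c).
    This ignores temperature, altitude, leakage and discharge dynamics, so it is a
    sketch only, not an engineering sizing method.
    """
    if not 0.0 < design_concentration < 1.0:
        raise ValueError("design_concentration must be a fraction strictly between 0 and 1")
    return room_volume_m3 * design_concentration / (1.0 - design_concentration)

# Hypothetical example: a 50 m^3 room at the 12% concentration mentioned above.
print(f"Approximate agent volume: {agent_volume_for_room(50.0):.1f} m^3")
```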
Clean agent FS 49 C2
Chemistry,Environmental_science
534
38,380,773
https://en.wikipedia.org/wiki/Phallus%20minusculus
Phallus minusculus is a species of fungus in the stinkhorn family. Found in Tanzania growing on decaying wood, it was described as new to science in 2002. References External links Fungi described in 2002 Fungi of Africa Phallales Fungus species
Phallus minusculus
Biology
52
210,091
https://en.wikipedia.org/wiki/Reflexive%20space
In the area of mathematics known as functional analysis, a reflexive space is a locally convex topological vector space for which the canonical evaluation map from into its bidual (which is the strong dual of the strong dual of ) is a homeomorphism (or equivalently, a TVS isomorphism). A normed space is reflexive if and only if this canonical evaluation map is surjective, in which case this (always linear) evaluation map is an isometric isomorphism and the normed space is a Banach space. Those spaces for which the canonical evaluation map is surjective are called semi-reflexive spaces. In 1951, R. C. James discovered a Banach space, now known as James' space, that is not reflexive (meaning that the canonical evaluation map is not an isomorphism) but is nevertheless isometrically isomorphic to its bidual (any such isometric isomorphism is necessarily not the canonical evaluation map). So importantly, for a Banach space to be reflexive, it is not enough for it to be isometrically isomorphic to its bidual; it is the canonical evaluation map in particular that has to be a homeomorphism. Reflexive spaces play an important role in the general theory of locally convex TVSs and in the theory of Banach spaces in particular. Hilbert spaces are prominent examples of reflexive Banach spaces. Reflexive Banach spaces are often characterized by their geometric properties. Definition Definition of the bidual Suppose that is a topological vector space (TVS) over the field (which is either the real or complex numbers) whose continuous dual space, separates points on (that is, for any there exists some such that ). Let (some texts write ) denote the strong dual of which is the vector space of continuous linear functionals on endowed with the topology of uniform convergence on bounded subsets of ; this topology is also called the strong dual topology and it is the "default" topology placed on a continuous dual space (unless another topology is specified). If is a normed space, then the strong dual of is the continuous dual space with its usual norm topology. The bidual of denoted by is the strong dual of ; that is, it is the space If is a normed space, then is the continuous dual space of the Banach space with its usual norm topology. Definitions of the evaluation map and reflexive spaces For any let be defined by where is a linear map called the evaluation map at ; since is necessarily continuous, it follows that Since separates points on the linear map defined by is injective where this map is called the evaluation map or the canonical map. Call semi-reflexive if is bijective (or equivalently, surjective) and we call reflexive if in addition is an isomorphism of TVSs. A normable space is reflexive if and only if it is semi-reflexive or, equivalently, if and only if the evaluation map is surjective. Reflexive Banach spaces Suppose is a normed vector space over the number field or (the real numbers or the complex numbers), with a norm Consider its dual normed space that consists of all continuous linear functionals and is equipped with the dual norm defined by The dual is a normed space (a Banach space to be precise), and its dual normed space is called bidual space for The bidual consists of all continuous linear functionals and is equipped with the norm dual to Each vector generates a scalar function by the formula: and is a continuous linear functional on that is, One obtains in this way a map called the evaluation map, which is linear.
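The displayed formulas in this passage were lost in extraction, so the following LaTeX block restates them as a reconstruction from the surrounding prose: the dual norm on the continuous dual and the evaluation map of a normed space. The symbols X, X', X'', J, x and f are our own notational choices.

```latex
% Reconstruction of the stripped display formulas: dual norm and evaluation map.
\[
  \|f\|_{X'} \;=\; \sup_{\|x\|_X \le 1} |f(x)| , \qquad f \in X' ,
\]
\[
  J \colon X \to X'' , \qquad (Jx)(f) = f(x) \quad \text{for all } f \in X' ,
\]
\[
  \text{a normed space } X \text{ is reflexive} \iff J \text{ is surjective}
  \iff J \text{ is an isometric isomorphism of } X \text{ onto } X'' .
\]
```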
It follows from the Hahn–Banach theorem that is injective and preserves norms: that is, maps isometrically onto its image in Furthermore, the image is closed in but it need not be equal to A normed space is called reflexive if it satisfies the following equivalent conditions: the evaluation map is surjective, the evaluation map is an isometric isomorphism of normed spaces, the evaluation map is an isomorphism of normed spaces. A reflexive space is a Banach space, since is then isometric to the Banach space Remark A Banach space is reflexive if it is linearly isometric to its bidual under this canonical embedding James' space is an example of a non-reflexive space which is linearly isometric to its bidual. Furthermore, the image of James' space under the canonical embedding has codimension one in its bidual. A Banach space is called quasi-reflexive (of order ) if the quotient has finite dimension Examples Every finite-dimensional normed space is reflexive, simply because in this case, the space, its dual and bidual all have the same linear dimension, hence the linear injection from the definition is bijective, by the rank–nullity theorem. The Banach space of scalar sequences tending to 0 at infinity, equipped with the supremum norm, is not reflexive. It follows from the general properties below that and are not reflexive, because is isomorphic to the dual of and is isomorphic to the dual of All Hilbert spaces are reflexive, as are the Lp spaces for More generally: all uniformly convex Banach spaces are reflexive according to the Milman–Pettis theorem. The and spaces are not reflexive (unless they are finite dimensional, which happens for example when is a measure on a finite set). Likewise, the Banach space of continuous functions on is not reflexive. The spaces of operators in the Schatten class on a Hilbert space are uniformly convex, hence reflexive, when When the dimension of is infinite, then (the trace class) is not reflexive, because it contains a subspace isomorphic to and (the bounded linear operators on ) is not reflexive, because it contains a subspace isomorphic to In both cases, the subspace can be chosen to be the operators diagonal with respect to a given orthonormal basis of Properties Since every finite-dimensional normed space is a reflexive Banach space, only infinite-dimensional spaces can be non-reflexive. If a Banach space is isomorphic to a reflexive Banach space then is reflexive. Every closed linear subspace of a reflexive space is reflexive. The continuous dual of a reflexive space is reflexive. Every quotient of a reflexive space by a closed subspace is reflexive. Let be a Banach space. The following are equivalent. The space is reflexive. The continuous dual of is reflexive. The closed unit ball of is compact in the weak topology. (This is known as Kakutani's Theorem.) Every bounded sequence in has a weakly convergent subsequence. The statement of Riesz's lemma holds when the real number is exactly Explicitly, for every closed proper vector subspace of there exists some vector of unit norm such that for all Using to denote the distance between the vector and the set this can be restated in simpler language as: is reflexive if and only if for every closed proper vector subspace there is some vector on the unit sphere of that is always at least a distance of away from the subspace. 
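The distance criterion just stated can be written out symbolically. The LaTeX block below is a reconstruction from the prose, with the stripped real number taken to be 1 (the standard form of this statement); the symbols X, Y and x are our own.

```latex
% Reconstruction of the distance criterion; the stripped constant is taken to be 1.
\[
  X \text{ is reflexive} \iff
  \text{for every closed proper subspace } Y \subsetneq X
  \text{ there is } x \in X \text{ with } \|x\| = 1 \text{ and }
  d(x, Y) = \inf_{y \in Y} \|x - y\| \ge 1 .
\]
```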
For example, if the reflexive Banach space is endowed with the usual Euclidean norm and is the plane then the points satisfy the conclusion If is instead the -axis then every point belonging to the unit circle in the plane satisfies the conclusion. Every continuous linear functional on attains its supremum on the closed unit ball in (James' theorem) Since norm-closed convex subsets in a Banach space are weakly closed, it follows from the third property that closed bounded convex subsets of a reflexive space are weakly compact. Thus, for every decreasing sequence of non-empty closed bounded convex subsets of the intersection is non-empty. As a consequence, every continuous convex function on a closed convex subset of such that the set is non-empty and bounded for some real number attains its minimum value on The promised geometric property of reflexive Banach spaces is the following: if is a closed non-empty convex subset of the reflexive space then for every there exists a such that minimizes the distance between and points of This follows from the preceding result for convex functions, applied to Note that while the minimal distance between and is uniquely defined by the point is not. The closest point is unique when is uniformly convex. A reflexive Banach space is separable if and only if its continuous dual is separable. This follows from the fact that for every normed space separability of the continuous dual implies separability of Super-reflexive space Informally, a super-reflexive Banach space has the following property: given an arbitrary Banach space if all finite-dimensional subspaces of have a very similar copy sitting somewhere in then must be reflexive. By this definition, the space itself must be reflexive. As an elementary example, every Banach space whose two dimensional subspaces are isometric to subspaces of satisfies the parallelogram law, hence is a Hilbert space, therefore is reflexive. So is super-reflexive. The formal definition does not use isometries, but almost isometries. A Banach space is finitely representable in a Banach space if for every finite-dimensional subspace of and every there is a subspace of such that the multiplicative Banach–Mazur distance between and satisfies A Banach space finitely representable in is a Hilbert space. Every Banach space is finitely representable in The Lp space is finitely representable in A Banach space is super-reflexive if all Banach spaces finitely representable in are reflexive, or, in other words, if no non-reflexive space is finitely representable in The notion of ultraproduct of a family of Banach spaces allows for a concise definition: the Banach space is super-reflexive when its ultrapowers are reflexive. James proved that a space is super-reflexive if and only if its dual is super-reflexive. Finite trees in Banach spaces One of James' characterizations of super-reflexivity uses the growth of separated trees. 
The description of a vectorial binary tree begins with a rooted binary tree labeled by vectors: a tree of height in a Banach space is a family of vectors of that can be organized in successive levels, starting with level 0 that consists of a single vector the root of the tree, followed, for by a family of 2 vectors forming level that are the children of vertices of level In addition to the tree structure, it is required here that each vector that is an internal vertex of the tree be the midpoint between its two children: Given a positive real number the tree is said to be -separated if for every internal vertex, the two children are -separated in the given space norm: Theorem. The Banach space is super-reflexive if and only if for every there is a number such that every -separated tree contained in the unit ball of has height less than Uniformly convex spaces are super-reflexive. Let be uniformly convex, with modulus of convexity and let be a real number in By the properties of the modulus of convexity, a -separated tree of height contained in the unit ball must have all points of level contained in the ball of radius By induction, it follows that all points of level are contained in the ball of radius If the height was so large that then the two points of the first level could not be -separated, contrary to the assumption. This gives the required bound function of only. Using the tree-characterization, Enflo proved that super-reflexive Banach spaces admit an equivalent uniformly convex norm. Trees in a Banach space are a special instance of vector-valued martingales. Adding techniques from scalar martingale theory, Pisier improved Enflo's result by showing that a super-reflexive space admits an equivalent uniformly convex norm for which the modulus of convexity satisfies, for some constant and some real number Reflexive locally convex spaces The notion of reflexive Banach space can be generalized to topological vector spaces in the following way. Let be a topological vector space over a number field (of real numbers or complex numbers ). Consider its strong dual space which consists of all continuous linear functionals and is equipped with the strong topology that is, the topology of uniform convergence on bounded subsets in The space is a topological vector space (to be more precise, a locally convex space), so one can consider its strong dual space which is called the strong bidual space for It consists of all continuous linear functionals and is equipped with the strong topology Each vector generates a map by the following formula: This is a continuous linear functional on that is, This induces a map called the evaluation map: This map is linear. If is locally convex, from the Hahn–Banach theorem it follows that is injective and open (that is, for each neighbourhood of zero in there is a neighbourhood of zero in such that ). But it can be non-surjective and/or discontinuous. A locally convex space is called semi-reflexive if the evaluation map is surjective (hence bijective), and reflexive if the evaluation map is surjective and continuous (in this case is an isomorphism of topological vector spaces). Semireflexive spaces Characterizations If is a Hausdorff locally convex space then the following are equivalent: is semireflexive; The weak topology on has the Heine-Borel property (that is, for the weak topology every closed and bounded subset of is weakly compact).
Every linear form on that is continuous when has the strong dual topology is also continuous when has the weak topology; is barreled; with the weak topology is quasi-complete. Characterizations of reflexive spaces If is a Hausdorff locally convex space then the following are equivalent: is reflexive; is semireflexive and infrabarreled; is semireflexive and barreled; is barreled and the weak topology on has the Heine-Borel property (that is, for the weak topology every closed and bounded subset of is weakly compact); is semireflexive and quasibarrelled. If is a normed space then the following are equivalent: is reflexive; the closed unit ball is compact when has the weak topology; is a Banach space and is reflexive; every sequence with for all of nonempty closed bounded convex subsets of has nonempty intersection. Sufficient conditions Normed spaces A normed space that is semireflexive is a reflexive Banach space. A closed vector subspace of a reflexive Banach space is reflexive. Let be a Banach space and a closed vector subspace of If two of and are reflexive then they all are. This is why reflexivity is referred to as a three-space property. Topological vector spaces If a barreled locally convex Hausdorff space is semireflexive then it is reflexive. The strong dual of a reflexive space is reflexive. Every Montel space is reflexive. And the strong dual of a Montel space is a Montel space (and thus is reflexive). Properties A locally convex Hausdorff reflexive space is barrelled. If is a normed space then is an isometry onto a closed subspace of This isometry can be expressed by: Suppose that is a normed space and is its bidual equipped with the bidual norm. Then the unit ball of is dense in the unit ball of for the weak topology Examples Every finite-dimensional Hausdorff topological vector space is reflexive, because is bijective by linear algebra, and because there is a unique Hausdorff vector space topology on a finite dimensional vector space. A normed space is reflexive as a normed space if and only if it is reflexive as a locally convex space. This follows from the fact that for a normed space its dual normed space coincides as a topological vector space with the strong dual space As a corollary, the evaluation map coincides with the evaluation map and the following conditions become equivalent: is a reflexive normed space (that is, is an isomorphism of normed spaces), is a reflexive locally convex space (that is, is an isomorphism of topological vector spaces), is a semi-reflexive locally convex space (that is, is surjective). A (somewhat artificial) example of a semi-reflexive space that is not reflexive is obtained as follows: let be an infinite dimensional reflexive Banach space, and let be the topological vector space that is, the vector space equipped with the weak topology. Then the continuous dual of and are the same set of functionals, and bounded subsets of (that is, weakly bounded subsets of ) are norm-bounded, hence the Banach space is the strong dual of Since is reflexive, the continuous dual of is equal to the image of under the canonical embedding but the topology on (the weak topology of ) is not the strong topology that is equal to the norm topology of Montel spaces are reflexive locally convex topological vector spaces.
In particular, the following functional spaces frequently used in functional analysis are reflexive locally convex spaces: the space of smooth functions on an arbitrary (real) smooth manifold and its strong dual space of distributions with compact support on the space of smooth functions with compact support on an arbitrary (real) smooth manifold and its strong dual space of distributions on the space of holomorphic functions on an arbitrary complex manifold and its strong dual space of analytic functionals on the Schwartz space on and its strong dual space of tempered distributions on Counter-examples There exists a non-reflexive locally convex TVS whose strong dual is reflexive. Other types of reflexivity A stereotype space, or polar reflexive space, is defined as a topological vector space (TVS) satisfying a similar condition of reflexivity, but with the topology of uniform convergence on totally bounded subsets (instead of bounded subsets) in the definition of the dual space More precisely, a TVS is called polar reflexive or stereotype if the evaluation map into the second dual space is an isomorphism of topological vector spaces. Here the stereotype dual space is defined as the space of continuous linear functionals endowed with the topology of uniform convergence on totally bounded sets in (and the stereotype second dual space is the space dual to in the same sense). In contrast to the classical reflexive spaces, the class Ste of stereotype spaces is very wide (it contains, in particular, all Fréchet spaces and thus all Banach spaces), it forms a closed monoidal category, and it admits standard operations (defined inside of Ste) of constructing new spaces, like taking closed subspaces, quotient spaces, projective and injective limits, the space of operators, tensor products, etc. The category Ste has applications in duality theory for non-commutative groups. Similarly, one can replace the class of bounded (and totally bounded) subsets in in the definition of the dual space by other classes of subsets, for example, by the class of compact subsets in – the spaces defined by the corresponding reflexivity condition are called , and they form an even wider class than Ste, but it is not clear (as of 2012) whether this class forms a category with properties similar to those of Ste. See also A generalization which has some of the properties of reflexive spaces and includes many spaces of practical importance is the concept of Grothendieck space. References Notes Citations General references Banach spaces Duality theories
Reflexive space
Mathematics
4,087
507,648
https://en.wikipedia.org/wiki/No%20symbol
The general prohibition sign, also known informally as the no symbol, 'do not' sign, circle-backslash symbol, nay, interdictory circle, prohibited symbol, don't do it symbol, or universal no, is a red circle with a 45-degree diagonal line inside the circle from upper-left to lower-right. It is typically overlaid on a pictogram to warn that an activity is not permitted, or has accompanying text to describe what is prohibited. It is a mechanism in graphical form to assert 'drawn norms', i.e. to qualify behaviour without the use of words. Appearance According to the ISO standard (and also under a UK Statutory Instrument), the red area must take up at least 35 percent of the total area of the sign within the outer circumference of the "prohibition sign". Under the UK rules the width of a "no symbol" is 80 percent the height of the printed area. For computer display and printing, the symbol is supported in Unicode by combining elements rather than with individual code points (see below). Uses Motor vehicle traffic signage The "prohibition" symbol is used on traffic signs, so that drivers can interpret traffic laws quickly while driving. For example: No left turn or No right turn No U-turn No parking (English) or No estacionarse (Spanish) Road closed to vehicles (Japan), Road closed to vehicles (Germany, but typical in Europe) Non-motor traffic The symbol's use is not limited to informing drivers of motorized vehicles, and is commonly used for other forms of traffic: , , No horse-riding , , No bicycles , , No pedestrians , , No animal-drawn vehicles General prohibitions and warnings The symbol is used for non-traffic purposes to warn or prohibit certain activities: No smoking (with symbol of a lit cigarette). or No littering (with symbol of person littering or of litter) No swimming (with symbol of swimmer in water underneath) Packaging and products It is also used on packages sent through the mail and sealed boxes of merchandise that are sold in stores. Using a graphical symbol is useful to convey important warnings regardless of language. For example: Breakable; do not drop Keep away from magnetic fields In product documentation, the symbol may be accompanied by drawings of the product being threatened by the prohibited items: for instance, a cartoon of a floppy disk being menaced by horseshoe magnets. It is also used on clothing, linens, and other household products to indicate the care, treatment or cleaning of the item. For example: Do not iron Promotional and advertising Some companies use the "prohibition sign" when describing the services they offer, e.g. an insect deterrent spray brand symbol showing the "prohibition sign" over a mosquito. The Ghostbusters logo is a fictional example of this, although it uses a mirror image of the symbol with the slash going from upper right down to lower left. Other uses It can also be used as a symbol of opposition, to strike out the unwanted item, as in "no airport here!" with a No symbol superimposed on an aeroplane symbol. International standards The official prohibition sign design characteristics are governed by regional and international standards. The symbol's canonical definition comes from the International Organization for Standardization which published ISO 3864-1 in 2002, a revision of a standard first published in 1984. The current version was published in 2011. ISO 3864-1 sets the rules for the color, shape, and dimensions of safety signage. 
The regulations include the incorporation of text and pictograms, with reference to materials used, sign size, and viewing conditions. The introduction includes language on the need for using as few words as possible to convey information. Design specifics The 3864 standard defines the color and design for the prohibition symbol. The verbal definition reads "circle with diagonal bar" with a red safety color, a white contrast color, and black for the graphical symbol (i.e. the pictogram). Dimensions per 3864-1 The symbol is defined as a circle, with the circular band having a thickness of 10% of the outer diameter of the circle. The inner diagonal line has a thickness of 8% of the outer diameter of the circle (i.e. 80% of the circle's line width). The diagonal is centered in the circle and at a 45-degree angle going from upper left to lower right. It is recommended to have a white outside border that is 2.5% to 5% of the outer diameter of the circle. The circle and line are red, the background is white, and the pictogram or descriptive text is black. Color red per 3864-1 The standard defines the range of CIE x,y chromaticity coordinates for the color red to be used, relative to the CIE 1931 2° standard observer. It also lists equivalent colors for various common color systems such as Munsell, defining red as Munsell 7.5R 4/14. Relative to CSS colors for the web and sRGB, and assuming a white background of #fff, the red color should be no lighter than #f80000, and no darker than #a00000, with #b00 being a useful choice in terms of good contrast and color. Variations Non-conforming designs Despite the fact that the ISO standard is freely available, it is not uncommon for graphic artists to improvise on the particular color and dimensions. As a result, there is a wide variation of the symbol in common use, for instance using a lower-contrast red than specified in the standard or using the same width for the diagonal line as for the circle (the standard specifies that the diagonal line is 80% as wide as the circle's band). For example, compare (a non-conforming example) with (drawn using the ISO standard). Alternate or regional design differences Circles with red borders and no slanted or diagonal line are used under the Vienna Convention on Road Signs and Signals to indicate "No entry to vehicles with the following characteristics" (often defined on a plate beneath) such as height, width, mass, or speed. The European Vienna Convention prohibits a diagonal line in the symbol for any sign other than no turning signs. An alternative use for red bordered circles is as a Mandatory Action Symbol type B. In many jurisdictions (such as Germany), 'no entry' is indicated by a solid red disc with a white horizontal bar. See the general article on Prohibitory traffic signs. A blue filled circle with an illustration or legend means that a lane is restricted to a particular class of users (such as buses, cyclists, pedestrians) as shown, and no other traffic may use it. In contrast, a blue filled circle without a diagonal line through it is used as a Mandatory Action Symbol, indicating that the activity represented inside the circle is mandatory and must be executed. Unicode and fonts Unicode combining character The Unicode code point for the prohibition sign is U+20E0 (combining enclosing circle backslash). It is a combining character, which means that it appears on top of the character immediately before it. Example: Putting W&#x20E0; will display the letter W inside the prohibition sign: W⃠ (if the user's system handles it correctly, which is not always the case).
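As a quick way to experiment with the combining character just described, the short Python sketch below appends U+20E0 to a few sample characters; whether the circle-backslash actually renders as an overlay depends entirely on the font and terminal, as noted above. The helper name is arbitrary.

```python
# U+20E0 COMBINING ENCLOSING CIRCLE BACKSLASH attaches to the character before it.
PROHIBITION = "\u20e0"

def prohibit(text: str) -> str:
    """Append the combining 'no symbol' so it overlays the final character of text."""
    return text + PROHIBITION if text else text

for sample in ["W", "P", "$"]:
    # Whether this renders as a proper overlay depends on font and terminal support.
    print(sample, "->", prohibit(sample))
```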
Emoji and other Unicode versions There are also several prohibition sign emojis and related Unicode characters: (Japanese sign meaning "prohibited") In fonts The symbol appears in a number of different computer fonts, such as Arial Unicode MS, and in dingbat fonts such as Webdings and Wingdings 2. These are not necessarily "combining characters." In the case of Webdings and Wingdings 2, the character encoding does not match the Unicode standard, so if these fonts are not present on the user's system, the symbol may not render correctly. This is particularly an issue in webpages. If the page designer wishes to use Webdings, for instance, it is important to provide the font resource via the CSS @font-face rule. Similar-appearing symbols There are Unicode code points for other glyphs that look very similar to the 'prohibited' symbol, but which may be available in more font repertoires: , which is difficult to distinguish from the 'prohibited' symbol. Other glyphs exist but are incorrectly oriented. All of these are spacing characters, which means that they cannot readily be used in combination with a symbol for the action to be prohibited. See also Federal Highway Administration ISO 3864 List of international common standards References UK Highway Code - Signs & markings International Organization for Standardization External links UK legislation regarding health and safety signs, The Health and Safety (Safety Signs and Signals) Regulations 1996 (SI 341) UK legislation regarding road signs Working drafts for UK road signage Prohibitionism Infographics Pictograms Symbols Traffic signs
No symbol
Mathematics
1,810