47,926,198
https://en.wikipedia.org/wiki/Open%20Tree%20of%20Life
The Open Tree of Life is an online phylogenetic tree of life – a collaborative effort, funded by the National Science Foundation. The first draft, including 2.3 million species, was released in September 2015. The interactive graph allows the user to zoom in on taxonomic classifications, phylogenetic trees, and information about a node. Clicking on a species will return its source and reference taxonomy. Approach The project uses a supertree approach to generate a single phylogenetic tree (served at tree.opentreeoflife.org) from a comprehensive taxonomy and a curated set of published phylogenetic estimates. The taxonomy is a combination of several large classifications produced by other projects; it is created using a software tool called "smasher". The resulting taxonomy is called the Open Tree Taxonomy (OTT) and can be browsed online. History The project was started in June 2012 with a three-year NSF award to researchers at ten universities. In 2015, a two-year supplemental award was made to researchers at three institutions. See also Tree of Life Web Project References Biodiversity databases Biology websites Software using the BSD license 2015 software
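The published tree can also be queried programmatically through the project's public web API. The following is a minimal sketch only: the host and the v3 induced_subtree endpoint are recalled from the project's API documentation and should be verified before use, and the OTT ids in the payload are hypothetical placeholders.

```python
# Minimal sketch (assumptions: the Open Tree v3 API host and endpoint below,
# and the example OTT ids, which are hypothetical placeholders).
import json
import urllib.request

url = "https://api.opentreeoflife.org/v3/tree_of_life/induced_subtree"
payload = {"ott_ids": [770315, 913932]}  # hypothetical OTT ids for two taxa

req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    result = json.load(resp)

# The response is expected to contain a Newick string for the induced subtree.
print(result.get("newick"))
```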
Open Tree of Life
Biology,Environmental_science
224
35,215,267
https://en.wikipedia.org/wiki/Young%E2%80%93Deruyts%20development
In mathematics, the Young–Deruyts development is a method of writing invariants of an action of a group on an n-dimensional vector space V in terms of invariants depending on at most n−1 vectors. References Invariant theory
Young–Deruyts development
Physics
50
22,483,813
https://en.wikipedia.org/wiki/%C5%81ojasiewicz%20inequality
In real algebraic geometry, the Łojasiewicz inequality, named after Stanisław Łojasiewicz, gives an upper bound for the distance of a point to the nearest zero of a given real analytic function. Specifically, let $f : U \to \mathbb{R}$ be a real analytic function on an open set $U$ in $\mathbb{R}^n$, and let $Z$ be the zero locus of $f$. Assume that $Z$ is not empty. Then for any compact set $K$ in $U$, there exist positive constants $\alpha$ and $C$ such that, for all $x$ in $K$, $\operatorname{dist}(x, Z)^\alpha \le C\,|f(x)|$. Here $\alpha$ can be large. The following form of this inequality is often seen in more analytic contexts: with the same assumptions on $f$, for every $p \in U$ there is a possibly smaller open neighborhood $W$ of $p$ and constants $\theta \in (0,1)$ and $c > 0$ such that $|f(x) - f(p)|^\theta \le c\,\|\nabla f(x)\|$ for all $x \in W$. Polyak inequality A special case of the Łojasiewicz inequality, due to Polyak, is commonly used to prove linear convergence of gradient descent algorithms. Definitions $f$ is a function of type $f : \mathbb{R}^n \to \mathbb{R}$, and has a continuous derivative $\nabla f$. $X^*$ is the subset of $\mathbb{R}^n$ on which $f$ achieves its global minimum $f^*$ (if one exists). Throughout this section we assume such a global minimum value exists, unless otherwise stated. The optimization objective is to find some point in $X^*$. $L, \mu > 0$ are constants. $\nabla f$ is $L$-Lipschitz continuous iff $\|\nabla f(x) - \nabla f(y)\| \le L\,\|x - y\|$ for all $x, y$. $f$ is $\mu$-strongly convex iff $f(y) \ge f(x) + \langle \nabla f(x), y - x \rangle + \frac{\mu}{2}\|y - x\|^2$ for all $x, y$. $f$ is $\mu$-PL (where "PL" means "Polyak-Łojasiewicz") iff $\frac{1}{2}\|\nabla f(x)\|^2 \ge \mu\,(f(x) - f^*)$ for all $x$. Basic properties Every $\mu$-strongly convex function is $\mu$-PL, but not conversely: the PL condition does not require convexity. If $f$ is $\mu$-PL, then every stationary point of $f$ is a global minimizer. Gradient descent If $\nabla f$ is $L$-Lipschitz and $f$ is $\mu$-PL, then gradient descent with step size $1/L$, that is $x_{t+1} = x_t - \frac{1}{L}\nabla f(x_t)$, converges linearly: $f(x_{t+1}) - f^* \le (1 - \frac{\mu}{L})(f(x_t) - f^*)$. Coordinate descent The coordinate descent algorithm first samples a random coordinate $i$ uniformly, then performs gradient descent along that coordinate: $x_{t+1} = x_t - \gamma\,\frac{\partial f}{\partial x_i}(x_t)\,e_i$, where $e_i$ is the $i$-th standard basis vector. Stochastic gradient descent In stochastic gradient descent, we have a function to minimize, $f(x)$, but we cannot sample its gradient directly. Instead, we sample a random gradient $g(x)$ such that $\mathbb{E}[g(x)] = \nabla f(x)$. For example, in typical machine learning, $x$ are the parameters of the neural network, $f_i(x)$ is the loss incurred on the $i$-th training data point, and $f(x) = \frac{1}{N}\sum_{i=1}^N f_i(x)$ is the average loss over all training data points; sampling $i$ uniformly and taking $g = \nabla f_i$ gives an unbiased gradient estimate. The gradient update step is $x_{t+1} = x_t - \gamma_t\,g(x_t)$, where $\gamma_t$ are a sequence of learning rates (the learning rate schedule). If $\nabla f$ is $L$-Lipschitz and $f$ is $\mu$-PL, a single SGD step satisfies $\mathbb{E}[f(x_{t+1})] - f^* \le (1 - 2\mu\gamma_t)\,(\mathbb{E}[f(x_t)] - f^*) + \frac{L\gamma_t^2}{2}\,\mathbb{E}\|g(x_t)\|^2$. As it is, the proposition is difficult to use. We can make it easier to use by some further assumptions. The second moment on the right can be removed by assuming a uniform upper bound. That is, if there exists some $G$ such that during the SG process we have $\mathbb{E}\|g(x_t)\|^2 \le G^2$ for all $t$, then $\mathbb{E}[f(x_{t+1})] - f^* \le (1 - 2\mu\gamma_t)\,(\mathbb{E}[f(x_t)] - f^*) + \frac{L\gamma_t^2 G^2}{2}$. Similarly, if the gradient noise has bounded variance, $\mathbb{E}\|g(x_t) - \nabla f(x_t)\|^2 \le \sigma^2$, then $\mathbb{E}[f(x_{t+1})] - f^* \le \left(1 - 2\mu\gamma_t\left(1 - \tfrac{L\gamma_t}{2}\right)\right)(\mathbb{E}[f(x_t)] - f^*) + \frac{L\gamma_t^2 \sigma^2}{2}$. Learning rate schedules For a constant learning rate schedule, with $\gamma_t = \gamma$, we have $\mathbb{E}[f(x_{t+1})] - f^* \le (1 - 2\mu\gamma)(\mathbb{E}[f(x_t)] - f^*) + \frac{L\gamma^2 G^2}{2}$. By induction, we have $\mathbb{E}[f(x_t)] - f^* \le (1 - 2\mu\gamma)^t\,(f(x_0) - f^*) + \frac{L\gamma G^2}{4\mu}$. We see that the loss decreases in expectation first exponentially, but then stops decreasing, which is caused by the $\frac{L\gamma G^2}{4\mu}$ term. In short, because the gradient descent steps are too large, the variance in the stochastic gradient starts to dominate, and $x_t$ starts doing a random walk in the vicinity of $X^*$. For a decreasing learning rate schedule with $\gamma_t \propto 1/t$, one instead obtains $\mathbb{E}[f(x_t)] - f^* = O(1/t)$. References External links Inequalities Mathematical analysis Real algebraic geometry
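To make the gradient descent result concrete, here is a small numerical sketch (my own illustration, not from the article): a rank-deficient least-squares problem, which is not strongly convex but does satisfy the PL inequality, so the suboptimality gap still shrinks by a factor of at least (1 − μ/L) per step.

```python
# Minimal sketch: gradient descent on f(x) = 0.5*||Ax - b||^2 with a
# rank-deficient A. f is not strongly convex, but it is mu-PL with
# mu = (smallest nonzero singular value of A)^2 and L = ||A||^2, so
# f(x_t) - f* decays at least as fast as (1 - mu/L)^t.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10)) @ np.diag([1.0] * 5 + [0.0] * 5)  # rank 5
b = rng.standard_normal(20)

s = np.linalg.svd(A, compute_uv=False)
L = s[0] ** 2                     # Lipschitz constant of the gradient
mu = s[s > 1e-12][-1] ** 2        # PL constant: smallest nonzero sigma^2

f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
f_star = f(np.linalg.pinv(A) @ b)  # global minimum via the pseudoinverse

x = np.zeros(10)
for t in range(200):
    x -= (1.0 / L) * (A.T @ (A @ x - b))  # gradient step, step size 1/L

print(f"suboptimality after 200 steps: {f(x) - f_star:.3e}")
print(f"PL bound: {(1 - mu / L) ** 200 * (f(np.zeros(10)) - f_star):.3e}")
```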
Łojasiewicz inequality
Mathematics
581
33,214,426
https://en.wikipedia.org/wiki/Stuart%20Pivar
Stuart Pivar (born 1930) is an American chemist and art collector known for his unorthodox views about evolution. Trained as a scientist, Pivar has long endorsed the study of anatomy and the need for artists to acquire technical skills. He grew his fortune in the plastics industry and is also the author of several books. Pivar is one of the founders of the New York Academy of Art with, among others, Andy Warhol. Early life and education Stuart Pivar was born in 1930 in Brooklyn, New York, to a father who imported velvet ribbons and a mother known for being "intensely style-conscious". Pivar speaks Yiddish and was brought up in a Jewish family. He began collecting objects at age 7, starting with insects in Central Park, and later bottle caps on Kings Highway at age 8. He spent time at a summer camp in Kingston, New York. Pivar attended Brooklyn Technical High School before going on to earn a B.Sc. in chemistry at Hofstra University, graduating in 1951. Career In 1959, Pivar founded Chemtainer Industries, a business that specialized in bulk-storage plastic containers. As an inventor, he made a large fortune in plastics. While remaining active in the plastics industry, he became an independently wealthy investor and buyer on the art scene. New York Academy of Art Pivar endorsed the reintroduction of traditional skills into art school curricula, including the study of human and animal anatomy. With Warhol, he helped to found the New York Academy of Art in 1980, becoming one of its board members. The academy opposed abstract art and promoted traditional skills. According to Eliot Goldfinger, Pivar "strongly supported the acquisition of an anatomical collection of comparative skeletons, related artwork, anatomical models and charts, and the use of dissection as part of the curriculum." He donated over $1.2 million to the Academy during his involvement with it. Pivar resigned from the Academy in 1994 and complained that he had been "lied to and outmaneuvered" by other senior figures at the institution. A report placed most of the blame for problems at the institution on his "disruptive, angry and abusive" behavior. Pivar attempted to sue the Academy for $50 million, claiming that he had suffered "emotional and mental distress" and that he had been ostracised for pointing out falsification of financial records and employment of illegal immigrants. Pivar has stated that he resigned from the Academy board along with Caroline Newhouse. Biological theories Beginning with his book Lifecode in 2004, Pivar has published novel claims about the evolution of species. He asserts that the body forms of species are encoded not in DNA but in the patterned structure of a primordial germ plasm. However, critics have stated that Pivar's proposed developmental sequences bear no resemblance to anything actually observed during embryological development. In his book on the demarcation problem, Nonsense on Stilts, Massimo Pigliucci says that he and his graduate students received "more or less threatening" emails from Pivar complaining that his "novel theory" of form was not being taken seriously. On his blog "Pharyngula", developmental biologist PZ Myers reviewed Lifecode and concluded that it was "a description of the development and evolution of balloon animals". In 2007, Pivar attempted to sue Seed Media, whose ScienceBlogs hosted "Pharyngula", for describing him as a "classic crackpot", but the case was withdrawn after ten days. 
On July 5, 2016, a scientific paper titled "The Origin of the Vertebrate Body Plan in the Geometric Patterns in the Embryonic Blastula", identifying Stuart Pivar as the principal investigator, was published in the peer-reviewed journal Progress in Biophysics and Molecular Biology. Personal life At The Factory in the early 1970s, Pivar met Andy Warhol, who became one of his closest longtime friends. With Warhol he would go on regular shopping trips to buy "masterpieces", which could be objects bought anywhere, from a high-end auction house to a fleamarket. After the artist's death in 1987, Pivar recalled that "Andy Warhol loved to buy art. We used to go shopping together for it for a few hours practically every day in the past couple of years." Pivar was also a well-known friend of the late financier Jeffrey Epstein; however, the two had a falling-out before Epstein faced charges for sex crimes. Pivar corroborated the account of Maria Farmer, a 1995 graduate of the New York Academy of Art, who stated that she had informed him about her abuse at the hands of Epstein in 1996. According to Pivar, this was when the friendship with Epstein ended. In an interview, Pivar described Epstein accuser Virginia Giuffre as a "16-year-old trollop." Art collection Pivar is a lifelong art collector. He was a collector of 19th-century academic art at a time when it was unfashionable. A scholar of the work of the sculptor Antoine-Louis Barye, he wrote "The Barye Bronzes: A Catalogue Raisonné" (1974), a collation of critical commentary on all the sculptor's known works. Publications References Living people 1930 births American art collectors 21st-century American chemists Non-Darwinian evolution Pseudoscientific biologists Hofstra University alumni
Stuart Pivar
Biology
1,104
9,481,141
https://en.wikipedia.org/wiki/Polyvinyl%20nitrate
Polyvinyl nitrate (abbreviated PVN) is a high-energy polymer with the idealized formula [CH2CH(ONO2)]n. Polyvinyl nitrate is a long carbon chain (polymer) with nitrate groups (-O-NO2) bonded randomly along the chain. PVN is a white, fibrous solid, and is soluble in polar organic solvents such as acetone. PVN can be prepared by nitrating polyvinyl alcohol with an excess of nitric acid. Because PVN is a nitrate ester, like nitroglycerin (a common explosive), it exhibits energetic properties and is commonly used in explosives and propellants. Preparation Polyvinyl nitrate was first synthesized by submersing polyvinyl alcohol (PVA) in a solution of concentrated sulfuric and nitric acids. This causes the PVA to lose a hydrogen atom from its hydroxy group (deprotonation), and the nitric acid (HNO3) to lose an NO2+ when in sulfuric acid. The NO2+ attaches to the oxygen in the PVA and creates a nitrate group, producing polyvinyl nitrate. This method results in a low nitrogen content of 10% and an overall yield of 80%. This method is inferior, as PVA has a low solubility in sulfuric acid and a slow rate of nitration. This meant that a lot of sulfuric acid was needed relative to PVA, and the method did not produce a high-nitrogen PVN, which is desirable for its energetic properties. In an improved method, PVA is nitrated without sulfuric acid; however, when this solution is exposed to air, the PVA combusts. In this newer method, either the PVA nitration is done under an inert gas (carbon dioxide or nitrogen) or the PVA powder is clumped into larger particles and submerged beneath the nitric acid to limit the amount of air exposure. Currently, the most common method is to dissolve PVA powder in acetic anhydride at -10°C and then slowly add cooled nitric acid. This produces a high-nitrogen-content PVN within about 5-7 hours. Because acetic anhydride is used as the solvent instead of sulfuric acid, the PVA does not combust when exposed to air. Physical properties PVN is a white thermoplastic with a softening point of 40-50°C. The theoretical maximum nitrogen content of PVN is 15.73%. PVN is a polymer with an atactic configuration, meaning the nitrate groups are randomly distributed along the main chain. Fibrous PVN increases in crystallinity as the nitrogen content increases, showing that the PVN molecules organize themselves more orderly as the nitrogen percentage increases. Intramolecularly, the geometry of the polymer is a planar zigzag. Porous PVN can be gelatinized when added to acetone at room temperature. This creates a viscous slurry that loses its fibrous and porous nature; however, it retains most of its energetic properties. 
Chemical properties Combustion Polyvinyl nitrate is a high-energy polymer due to the significant presence of O-NO2 groups, similar to nitrocellulose and nitroglycerin. These nitrate groups have an activation energy of 53 kcal/mol and are the primary cause of PVN's high chemical potential energy. The complete combustion reaction of PVN, assuming full nitration, is: 2 CH2CH(ONO2) + 5/2 O2 -> 4 CO2 + N2 + 3 H2O. When burned, PVN samples with less nitrogen had a significantly higher heat of combustion, because they contain more hydrogen and so generate more heat when oxygen is present. The heat of combustion was about 3,000 cal/g at 15.71% N and 3,700 cal/g at 11.76% N. Conversely, PVN samples with a higher nitrogen content had a significantly higher heat of explosion: more O-NO2 groups supply more oxygen, leading to more complete combustion and more heat generated when burned in inert or low-oxygen environments. Stability Nitrate esters in general are unstable because of the weak N-O bond and tend to decompose at higher temperatures. Fibrous PVN is relatively stable at 80°C and becomes less stable as the nitrogen content increases. Gelatinized PVN is less stable than fibrous PVN. Activation energy The ignition temperature is the temperature at which a substance combusts spontaneously, requiring no additional energy beyond the heat itself. This temperature can be used to determine the activation energy, using the Semenov equation $D = C e^{E/(RT)}$, where D is the ignition delay (the time it takes for a substance to ignite), E is the activation energy, R is the universal gas constant, T is absolute temperature, and C is a constant dependent on the material. For samples of varying nitrogen content, the ignition temperature decreases as the nitrogen percentage increases, showing that PVN is more ignitable at higher nitrogen content. The activation energy is greater than 13 kcal/mol, reaching 16 kcal/mol at 15.71% nitrogen (near the theoretical maximum); it varies greatly between different nitrogen concentrations, with no linear relationship between activation energy and the degree of nitration. Impact sensitivity The height at which a dropped mass causes PVN to explode indicates the sensitivity of PVN to impacts. As nitrogen content increases, fibrous PVN becomes more sensitive to impacts. Gelatinous PVN is similar to fibrous PVN in impact sensitivity. Applications Because of its nitrate groups, polyvinyl nitrate is mainly used for its explosive and energetic capabilities. Structurally, PVN is similar to nitrocellulose in that it is a polymer with several nitrate groups off the main chain, the two differing only in their backbones (a carbon chain and cellulose, respectively). Because of this similarity, PVN is typically used in explosives and propellants as a binder. In explosives, a binder is used to form an explosive where the explosive materials are difficult to mold (see Polymer-bonded explosive (PBX)). A common binder polymer is hydroxyl-terminated polybutadiene (HTPB) or glycidyl azide polymer (GAP). Moreover, the binder needs a plasticizer such as dioctyl adipate (DOP) or 2-nitrodiphenylamine (2-NDPA) to make the explosive more flexible. Polyvinyl nitrate combines the traits of both a binder and a plasticizer, as this polymer binds the explosive ingredients together and is flexible at its softening point (40-50°C). Moreover, PVN adds to the explosive's overall energetic potential due to its nitrate groups. An example composition including polyvinyl nitrate is PVN, nitrocellulose and/or polyvinyl acetate, and 2-nitrodiphenylamine. This creates a moldable thermoplastic that can be combined with a powder containing nitrocellulose to create a cartridge case where the PVN composition acts as a propellant and assists as an explosive material. See also Nitrate ester Polyvinyl ester Vinyl polymer References Explosive chemicals Explosive polymers Nitrate esters Plastics Vinyl polymers
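To illustrate how the Semenov relation yields an activation energy, here is a minimal sketch with hypothetical ignition-delay data (not measurements from the article): taking logarithms of $D = Ce^{E/(RT)}$ gives a straight line in $1/T$ with slope $E/R$, so a least-squares fit of $\ln D$ against $1/T$ recovers $E$.

```python
# Minimal sketch: recover the activation energy E from ignition-delay data
# via the Semenov relation D = C * exp(E / (R*T)). The (T, D) pairs below
# are hypothetical placeholders, not measurements from the article.
import numpy as np

R = 1.987e-3  # gas constant in kcal/(mol*K)

T = np.array([450.0, 470.0, 490.0, 510.0])   # ignition temperatures, K
D = np.array([12.0, 6.5, 3.8, 2.3])          # ignition delays, s

# ln D = (E/R) * (1/T) + ln C  -> fit a line in 1/T.
slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
E = slope * R
print(f"activation energy E ~ {E:.1f} kcal/mol")
```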
Polyvinyl nitrate
Physics,Chemistry
1,549
716,048
https://en.wikipedia.org/wiki/Thorotrast
Thorotrast is a suspension containing particles of the radioactive compound thorium dioxide, ThO2; it was used as a radiocontrast agent in clinical radiography from the 1930s to the 1950s. It is no longer used clinically. Thorium compounds produce excellent images because of thorium's high opacity to X-rays (it has a high cross section for absorption). However, thorium is retained in the body, and it is radioactive, emitting harmful alpha radiation as it decays. Because the suspension offered high image quality and had virtually no immediate side-effects compared to the alternatives available at the time, Thorotrast became widely used after its introduction in 1931. António Egas Moniz contributed to its development. About 2 to 10 million patients worldwide have been treated with Thorotrast. However, it has since been shown to increase the risk of certain cancers, such as cholangiocarcinomas, angiosarcomas and hepatocellular carcinomas, and of fibrosis of the liver. Safety Even at the time of introduction, there was concern about the safety of Thorotrast. Following injection, the drug is distributed to the liver, spleen, lymph nodes, and bone, where it is absorbed. After this initial absorption, redistribution takes place at a very slow pace. Specifically, the biological half-life is estimated to be 22 years. This means that the organs of patients who have been given Thorotrast will be exposed to internal alpha radiation for the rest of their lives. The significance of this long-term exposure was not fully understood at the time of Thorotrast's introduction in 1931. Thorotrast has significant effects in the liver due to its hepatic accumulation. It is linked to the development of liver fibrosis, liver cancer and peliosis hepatis. Due to the release of alpha particles, Thorotrast was found to be extremely carcinogenic. There is a high over-incidence of various cancers in patients who have been treated with Thorotrast. The cancers occur some years (usually 20–30) after injection of Thorotrast. The risk of developing liver cancer (or bile duct cancer) in former Thorotrast patients has been measured to be well above 100 times the risk of the rest of the population. The risk of leukemia appears to be 20 times higher in Thorotrast patients. Thorotrast exposure has also been associated with the development of angiosarcoma. German patients exposed to Thorotrast had their median life expectancy shortened by 14 years in comparison to a similar non-exposed control group. Epidemiological studies from Portugal, where Thorotrast was in use between 1930 and 1955, showed that the link between it and the risk of developing leukaemia was significant, and went so far as to describe Thorotrast as the most potent leukaemogen reported. The studies also noted very high levels of haemangioendotheliomas, typically of the liver, which were very rarely seen in controls. Thorium is no longer used in clinical X-ray studies. Today, hydrophilic (water-soluble) iodinated contrast agents, which are not radioactive, are universally used intravenously in clinical X-ray procedures. The Danish director Nils Malmros's movie Facing the Truth (original Danish title: At kende sandheden) from 2002 portrays the dilemma that faced Malmros's father, Richard Malmros, when treating his patients in the 1940s. 
Richard Malmros was deeply concerned about the persistence of Thorotrast in the body but was forced to use Thorotrast, because the only available alternative (per-abrodil) had serious immediate side-effects, suffered from image quality problems and was difficult to obtain during the Second World War. The use of Thorotrast in Denmark ended in 1947 when safer alternatives became available. Current use In the decades after the cessation of its clinical use, Thorotrast has sometimes been used in laboratory research to stain neural tissue samples for examination by historadiography. References Hepatotoxins IARC Group 1 carcinogens Health disasters Thorium Withdrawn drugs Radiocontrast agents
Thorotrast
Chemistry
873
19,356,049
https://en.wikipedia.org/wiki/Emulation%20%28observational%20learning%29
Emulation learning is an observational learning mechanism (sometimes called a social learning mechanism) in which subjects learn about parts of their environment and use this knowledge to achieve their own goals. In this context, the term emulation was first coined by child psychologist David Wood in 1988. In 1990, "emulation" was taken up by Michael Tomasello to explain the findings of an earlier study on ape social learning. The meaning of the term emulation has changed gradually over time. Emulation is different from imitation because emulation focuses on the action's environmental results instead of on the model's actions themselves. The fidelity of an observational learning mechanism is expected to have profound implications for its capacity for cultural transmission. Emulation is argued by some to produce only fleeting fidelity, though this is still being discussed. History of the term In the original version, emulation referred to observers understanding objects in their potential to help them achieve desired results. They gained this understanding (or were triggered in their understanding) by seeing demonstrators achieve these very results with these objects. The actions performed by the demonstrators, however, were not copied, so it was concluded that observers learn "from the demonstration, that the tool may be used to obtain the food" (Tomasello et al., 1987). In 1996, Tomasello redefined the term: "The individual observing and learning some affordances of the behavior of another animal, and then using what it has learned in devising its own behavioral strategies, is what I have called emulation learning. ... an individual is not just attracted to the location of another but actually learns something about the environment as a result of its behavior". An even later definition further clarifies: "In emulation learning, learners see the movement of the objects involved and then come to some insight about its relevance to their own problems". Here animals are described as learning some physics or causal relations of the environment. This does not necessarily involve a very complex understanding of abstract phenomena (as to what defines a "tool as a tool"). Emulation spans a large range of cognitive complexity, from minimal to complex levels. Emulation was originally invented as a "cognitivist's alternative" to associative learning (Tomasello, 1999), spanning learning about how things function and their "affordances" put to the use of achieving one's own goals: "Emulation learning in tool-use tasks seems to require the perception and understanding of some causal relations among objects". This necessarily involves some "insight" – a cognitive domain. To further highlight this point, Call & Carpenter wrote in 2001: "it would be a harder task to teach robots to emulate than it is already to teach them to imitate". Current theory Huang & Charman (2005) have summarized the different connotations of emulation that are being discussed. These versions are: "end state emulation", "goal emulation", "object movement reenactment", and "emulation via affordance learning". In their words: in end state emulation, "the presence of an end result motivates an observer to replicate the result without explicitly encoding it in relation to the model's goal". In goal emulation, "an observer attributes a goal to the model while attempting to devise his or her own strategy to reproduce the end result". 
In object movement reenactment, "when an observer sees an object or its parts move, and that movement leads to a salient outcome, seeing the object movement might motivate the observer to reproduce the outcome". Emulation via affordance learning "refers to a process whereby an observer detects stimulus consequences, such as dynamic properties and temporal–spatial causal relations of objects, through watching the object movements". Byrne (2002) has come up with a slightly different classification, which looks more closely at learning at the object level. He distinguishes three forms: (1) learning the physical properties of objects; (2) learning the relationships among objects; and (3) understanding cause-and-effect relationships and changes of state of objects (e.g. "that a stick can be used as a rake"). Experimental approaches Emulation has been researched in a diverse range of species, including humans. The methodology most often applied is the so-called ghost condition, put forward by Cecilia Heyes and colleagues in 1994. Ghost-condition demonstrations do not involve any information on body movements. Instead, the parts of the apparatus move as if a ghost were moving them (for this purpose, very thin fishing line is often attached to the moving parts to transmit the necessary forces). While the use of this method (and subsequently the interpretation of findings) has been criticized as lacking ecological validity (it is a strange thing for inanimate objects to move of their own accord), it succeeded in showing that environmental information can be enough for observational learning to occur (work on pigeons). Thus, the general validity of the ghost condition is now established. Chimpanzees tested with this methodology have sometimes failed to copy, but copied in another study, as did dogs. Recently it was shown that in human children, emulation learning enables them to copy, in a construction task, solutions that they themselves were unable to produce on their own, an important stepping stone for cumulative culture. This study therefore showed, empirically, that imitation is not a necessary requirement for cumulative culture (contrary to some previous claims). See also Cognitive imitation Culture Modeling (psychology) Observational learning Social learning and cumulative cultural evolution References Further reading Tennie, C., Call, J. & Tomasello, M. (2009). Ratcheting up the ratchet: on the evolution of cumulative culture. Philosophical Transactions of the Royal Society, 364, 2405-2415. Whiten, A., Horner, V., Litchfield, C. A., & Marshall-Pescini, S. (2004). How do apes ape?. Learning & Behavior, 32, 36-52. Zentall, T.R. (2006). Imitation: Definitions, evidence, and mechanisms. Animal Cognition, 9, 335-353. Animal cognition Social learning theory
Emulation (observational learning)
Biology
1,266
28,073,818
https://en.wikipedia.org/wiki/Hellenic%20Geodetic%20Reference%20System%201987
The Hellenic Geodetic Reference System 1987 or HGRS87 is a geodetic system commonly used in Greece (SRID=2100). The system specifies a local geodetic datum and a projection system. In some documents it is called the Greek Geodetic Reference System 1987 or GGRS87. HGRS87 datum HGRS87 specifies a non-geocentric datum that is tied to the coordinates of the key geodetic station at the Dionysos Satellite Observatory (DSO) northeast of Athens. The central pedestal (CP) at this location has, by definition, HGRS87 coordinates 38° 4' 33.8000" N - 23° 55' 51.0000" E, N = +7 m. Although HGRS87 uses the GRS80 ellipsoid, the origin is shifted relative to the GRS80 geocenter so that the ellipsoid surface best fits Greece. The specified offsets relative to WGS84 (WGS84-HGRS87) are: Δx = -199.87 m, Δy = 74.79 m, Δz = 246.62 m. The HGRS87 datum is implemented by a first-order geodetic network, which consists of approximately 30 triangulation stations throughout Greece and is maintained by the Hellenic Military Geographical Service. The initial uncertainty was estimated as 0.1 ppm (1×10−7). However, there are considerable tectonic movements that move parts of Greece in different directions, causing incompatibilities between surveys taking place at different times. HGRS87 replaced an earlier de facto geodetic system. The datum of that system was based on the Bessel ellipsoid, with an accurate determination of the geodetic coordinates at the central premises of the National Observatory of Athens, 37° 58' 20.1" N - 23° 42' 58.5" E (current Google Earth coordinates: 37° 58' 20.20" N - 23° 43' 05.36" E), supplemented by an accurately measured azimuth from the observatory to Mount Parnes. Cartographic projections for civilian use were based on the Hatt projection system, with different projection parameters for each 1:100000 map. HGRS87 projection HGRS87 also specifies a transverse Mercator cartographic projection (TM) with m0=0.9996, covering six degrees of longitude either side of 24 degrees east (18-30 degrees east). This way all Greek territory (stretching across approximately 9° of longitude) is projected in one zone. References are in meters. Northings are counted from the equator. A false easting of 500000 m is assigned to the central meridian (24° east), so eastings are always positive. Conversion from geographical coordinates in GRS80 to a projection in HGRS87 is supported by PROJ with appropriate parameters. The conversion is performed with the following code (shell command line): proj +proj=tmerc +lat_0=0 +lon_0=24 +k=0.9996 +x_0=500000 +y_0=0 +ellps=GRS80 +towgs84=-199.87,74.79,246.62,0,0,0,0 +units=m +no_defs Migration to HTRS07 While HGRS87 is still widely used for most civilian purposes, it is partly replaced by the new Hellenic Terrestrial Reference System 2007 or HTRS07 (SRID=96758). HTRS07, which was specified for the Hellenic Positioning System (HEPOS) project, is GPS-based and is compatible with the European Terrestrial Reference System 1989 (ETRS89). HTRS07 is currently used for cadastral surveys. Converters SurvoGR online converter (updated 2023) Online conversion among WGS84 / EGSA87 / HTRS07 / HATT (THEMOS SA) Matlab & C functions converting between WGS84 / EGSA87 See also Hellenic Military Geographical Service References Further reading Συντεταγμένες ΕΓΣΑ'87 σε δέκτες GPS ("EGSA'87 coordinates on GPS receivers"): programming popular GPS receivers to display coordinates using the HGRS87 datum (in Greek). Coordinate Reference Systems GR Map Projections GR Geographic coordinate systems Geodesy
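The same conversion can be scripted as a cross-check of the PROJ parameters above. A minimal sketch, assuming the pyproj package with its bundled EPSG registry (HGRS87 / Greek Grid is registered as EPSG:2100); the sample coordinates are approximate values for the Dionysos area, used purely as an illustration.

```python
# Minimal sketch: convert WGS84 geographic coordinates to the HGRS87 /
# Greek Grid projection (EPSG:2100) using pyproj (assumed installed).
from pyproj import Transformer

# always_xy=True fixes the argument order as (longitude, latitude).
transformer = Transformer.from_crs("EPSG:4326", "EPSG:2100", always_xy=True)

# Approximate coordinates near the Dionysos Satellite Observatory,
# chosen only as an illustration.
lon, lat = 23.9308, 38.0761
easting, northing = transformer.transform(lon, lat)
print(f"E = {easting:.2f} m, N = {northing:.2f} m")
```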
Hellenic Geodetic Reference System 1987
Mathematics
960
64,321,036
https://en.wikipedia.org/wiki/Alexander%20Glazastikov
Alexander Olegovich Glazastikov is a Russian co-founder of the anonymous group Shaltai Boltai. In 2017, he applied for political asylum in Estonia. In October 2018, he was arrested in absentia in Russia. Early interactions with Anikeyev Glazastikov reportedly met Vladimir Anikeyev at a party in Russia sometime between 2003 and 2005. The two would infrequently stay in touch over the years. In 2013, Anikeyev proposed the idea of creating a political blog, as well as an accompanying Twitter account. Glazastikov agreed, and considered acting as an official press secretary for the blog. Shaltai Boltai During its first year, Shaltai Boltai published the private correspondence of an assortment of Russian public figures, including Arkady Dvorkovich, Dmitry Medvedev, Robert Schlegel, Timur Prokopenko, Igor Strelkov, and Yevgeny Prigozhin. Fallout from Anikeyev's arrest In February 2017, shortly after press reports had announced that Anikeyev had been arrested in Russia, Glazastikov told TV Rain that he would be applying for asylum in Estonia. In October 2018, Glazastikov was arrested in absentia by the Moscow City Court. References Year of birth missing (living people) Living people Hackers Russian cybercriminals Refugees in Europe Russian emigrants to Estonia
Alexander Glazastikov
Technology
301
41,533,457
https://en.wikipedia.org/wiki/11%20Persei
11 Persei is a single star in the constellation of Perseus, located about 418 light-years away from the Sun. It is visible to the naked eye as a dim, blue-white hued star with an apparent visual magnitude of 5.76. This is a chemically peculiar mercury-manganese star. Cowley (1972) found a stellar classification of , while Hube (1970) had B8 IV, and Appenzeller (1967) showed B6 V. Stellar models indicate this is a young B-type main-sequence star with an estimated age of around 51 million years. It has a low rotation rate, showing a projected rotational velocity of 4.50 km/s. The star has 3.8 times the mass of the Sun and is radiating 210 times the Sun's luminosity from its photosphere at an effective temperature of 14,550 K. References B-type main-sequence stars Mercury-manganese stars Perseus (constellation) Durchmusterung objects
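The article quotes a luminosity and an effective temperature but no radius; the Stefan-Boltzmann law $L = 4\pi R^2 \sigma T_{\mathrm{eff}}^4$ links the three. A minimal back-of-envelope sketch (my own check, not a value from the article):

```python
# Estimate the radius of 11 Persei from the quoted luminosity (210 L_sun)
# and effective temperature (14,550 K) via L = 4*pi*R^2*sigma*T^4.
# Back-of-envelope check, not a figure quoted in the article.
import math

sigma = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
L_sun = 3.828e26   # solar luminosity, W
R_sun = 6.957e8    # solar radius, m

L = 210 * L_sun
T = 14550.0

R = math.sqrt(L / (4 * math.pi * sigma * T**4))
print(f"R ~ {R / R_sun:.2f} R_sun")   # roughly 2.3 solar radii
```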
11 Persei
Astronomy
223
44,960,229
https://en.wikipedia.org/wiki/Strategic%20pluralism
Strategic pluralism (also known as the dual-mating strategy) is a theory in evolutionary psychology regarding human mating strategies. It suggests that women have evolved to evaluate men along two dimensions: whether they are reliable long-term providers, and whether they carry high-quality genes. The theory of strategic pluralism was proposed by Steven Gangestad and Jeffry Simpson, two professors of psychology at the University of New Mexico and Texas A&M University, respectively. Experiments and studies Although strategic pluralism is believed to occur in both animals and humans, the majority of experiments have been performed with humans. One experiment concluded that males and females prioritized different things in short-term versus long-term relationships. It was shown that both preferred physical attractiveness in short-term mates. However, for the long term, females preferred males with traits indicating that they could be better caretakers, whereas the males did not change their priorities. The experimenters used the following setup: subjects were given an overall 'budget' and asked to assign points to different traits. For long-term mates, women gave more points to social and kindness traits, agreeing with results found in other studies suggesting that females prefer long-term mates who would provide resources and emotional security for them, as opposed to physically attractive mates. The females also preferred males who could offer them more financial security, as this would help them raise their offspring. Females have also chosen males with more feminine appearances because of a (hypothesized) inverse relationship between a male's facial attractiveness and the effort he is willing to spend raising offspring. That is, in theory, a more attractive male would put in less work as a caretaker, while a less attractive male would put in more work. On average, there is wider variability in male characteristics than in female ones. This suggests there are enough of both males more suited for short-term relationships and males more suited for longer relationships. Criticism Bellis and Baker calculated that if the dual-mating strategy does occur, the rate of paternal discrepancy would be between 6.9 and 13.8%. Taking kin selection into account, Gaulin, McBurney, and Brakeman-Wartell hypothesised that the mother's side of a family is more certain that a child is their kin and therefore invests more. Based on this matrilateral bias they calculated the rate of cuckoldry to be roughly 13% to 20%. These estimates were refuted by Y-chromosome tracking and HLA tracking, which put the estimates at 1–2%. David Buss, a prominent evolutionary psychologist, cited this evidence as a reason to be sceptical of the dual-mating strategy hypothesis. See also Ovulatory shift hypothesis Human mating strategies Extra-pair copulation Sexual selection in humans References Evolutionary biology
Strategic pluralism
Biology
562
1,007,921
https://en.wikipedia.org/wiki/Identification%20%28information%29
For data storage, identification is the capability to find, retrieve, report, change, or delete specific data without ambiguity. This applies especially to information stored in databases. In database normalisation, the process of organizing the fields and tables of a relational database to minimize redundancy and dependency, unambiguous identification of records is a central, defining concern. See also Authentication Domain Name System Identification (disambiguation) Forensic profiling Profiling (information science) Unique identifier References Data modeling
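As a concrete illustration (a minimal sketch with a made-up table, not from the article), a primary key is what makes unambiguous identification possible: every row can be found, changed, or deleted by its key alone, even when other fields repeat.

```python
# Minimal sketch: unambiguous identification of rows via a primary key.
# The table and data are made up for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO person VALUES (?, ?)",
                 [(1, "Ada"), (2, "Grace"), (3, "Ada")])  # names may repeat

# 'name' is ambiguous (two rows called Ada); 'id' identifies exactly one row.
row = conn.execute("SELECT id, name FROM person WHERE id = ?", (3,)).fetchone()
print(row)                                    # -> (3, 'Ada')

# Update and delete are equally unambiguous when addressed by the key.
conn.execute("UPDATE person SET name = ? WHERE id = ?", ("Ada L.", 3))
conn.execute("DELETE FROM person WHERE id = ?", (2,))
```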
Identification (information)
Engineering
394
1,041,812
https://en.wikipedia.org/wiki/Superpotential
In theoretical physics, the superpotential is a function in supersymmetric quantum mechanics. Given a superpotential, two "partner potentials" are derived that can each serve as a potential in the Schrödinger equation. The partner potentials have the same spectrum, apart from a possible eigenvalue of zero, meaning that the physical systems represented by the two potentials have the same characteristic energies, apart from a possible zero-energy ground state. One-dimensional example Consider a one-dimensional, non-relativistic particle with a two-state internal degree of freedom called "spin". (This is not quite the usual notion of spin encountered in nonrelativistic quantum mechanics, because "real" spin applies only to particles in three-dimensional space.) Let b and its Hermitian adjoint b† signify operators which transform a "spin up" particle into a "spin down" particle and vice versa, respectively. Furthermore, take b and b† to be normalized such that the anticommutator {b,b†} equals 1, and take that b² equals 0. Let p represent the momentum of the particle and x represent its position with [x,p]=i, where we use natural units so that $\hbar = 1$. Let W (the superpotential) represent an arbitrary differentiable function of x and define the supersymmetric operators $Q_1 = \tfrac{1}{2}\left[(p - iW)b + (p + iW)b^\dagger\right]$ and $Q_2 = \tfrac{i}{2}\left[(p - iW)b - (p + iW)b^\dagger\right]$. The operators Q1 and Q2 are self-adjoint. Let the Hamiltonian be $H = \{Q_1, Q_1\} = \{Q_2, Q_2\} = \frac{p^2}{2} + \frac{W^2}{2} + \frac{W'}{2}\left(bb^\dagger - b^\dagger b\right)$, where W′ signifies the derivative of W. Also note that {Q1,Q2}=0. Under these circumstances, the above system is a toy model of N=2 supersymmetry. The spin down and spin up states are often referred to as the "bosonic" and "fermionic" states, respectively, in an analogy to quantum field theory. With these definitions, Q1 and Q2 map "bosonic" states into "fermionic" states and vice versa. Restricting to the bosonic or fermionic sectors gives two partner potentials determined by $V_\pm(x) = \tfrac{1}{2}\left(W^2 \pm W'\right)$. In four spacetime dimensions In supersymmetric quantum field theories with four spacetime dimensions, which might have some connection to nature, it turns out that scalar fields arise as the lowest component of a chiral superfield, which tends to automatically be complex-valued. We may identify the complex conjugate of a chiral superfield as an anti-chiral superfield. There are two possible ways to obtain an action from a set of superfields: Integrate a superfield on the whole superspace spanned by $x^\mu$, $\theta$ and $\bar\theta$, or Integrate a chiral superfield on the chiral half of a superspace, spanned by $x^\mu$ and $\theta$, not on $\bar\theta$. The second option tells us that an arbitrary holomorphic function of a set of chiral superfields can show up as a term in a Lagrangian which is invariant under supersymmetry. In this context, holomorphic means that the function can only depend on the chiral superfields, not their complex conjugates. We may call such a function W, the superpotential. The fact that W is holomorphic in the chiral superfields helps explain why supersymmetric theories are relatively tractable, as it allows one to use powerful mathematical tools from complex analysis. Indeed, it is known that W receives no perturbative corrections, a result referred to as the perturbative non-renormalization theorem. Note that non-perturbative processes may correct this, for example through contributions to the beta functions due to instantons. See also Komar superpotential References Stephen P. Martin, A Supersymmetry Primer. B. Mielnik and O. Rosas-Ortiz, "Factorization: Little or great algorithm?", J. Phys. A: Math. Gen. 37: 10007-10035, 2004 Supersymmetry Supersymmetric quantum field theory Potentials
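As a concrete one-dimensional illustration (a standard textbook example, not taken from this article), the linear superpotential $W(x) = \omega x$ produces two shifted harmonic oscillators whose spectra agree level by level except for the zero-energy ground state:

```latex
% Partner potentials for the linear superpotential W(x) = \omega x
% (standard textbook example; natural units \hbar = m = 1).
\[
  V_\pm(x) = \tfrac{1}{2}\left(W^2 \pm W'\right)
           = \tfrac{1}{2}\,\omega^2 x^2 \pm \tfrac{\omega}{2},
\]
\[
  E^-_n = n\,\omega, \qquad E^+_n = (n+1)\,\omega, \qquad n = 0, 1, 2, \dots
\]
% The two spectra coincide except for the unpaired zero-energy
% ground state E^-_0 = 0 of V_-.
```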
Superpotential
Physics
829
4,390,618
https://en.wikipedia.org/wiki/Solketal
Solketal is a protected form of glycerol with an isopropylidene acetal group joining two neighboring hydroxyl groups. Solketal contains a chiral center on the center carbon of the glycerol backbone, and so can be purchased as either the racemate or as one of the two enantiomers. Solketal has been used extensively in the synthesis of mono-, di- and triglycerides by ester bond formation. The free hydroxyl group of solketal can be esterified with a carboxylic acid to form the protected monoglyceride. The isopropylene group can then be removed using an acid catalyst in aqueous or alcoholic medium. The unprotected diol can then be esterified further to form either the di- or triglyceride. Another route to specific di- or triglycerides involves converting the solketal to glycidol (2,3-epoxy-1-propanol) and esterifying this with one fatty acid before opening the epoxy by heating in the presence of a second fatty acid and a catalyst. This second fatty acid is put on the third carbon atom, and then a third fatty acid can be added to the second carbon atom. References Ketals Primary alcohols
Solketal
Chemistry
278
19,660,624
https://en.wikipedia.org/wiki/Ursell%20function
In statistical mechanics, an Ursell function or connected correlation function is a cumulant of a random variable. It can often be obtained by summing over connected Feynman diagrams (the sum over all Feynman diagrams gives the correlation functions). The Ursell function was named after Harold Ursell, who introduced it in 1927. Definition If X is a random variable, the moments $s_n$ and cumulants (same as the Ursell functions) $u_n$ are functions of X related by the exponential formula $\sum_n s_n \frac{z^n}{n!} = \langle e^{zX} \rangle = \exp\left(\sum_n u_n \frac{z^n}{n!}\right)$ (where $\langle\cdot\rangle$ is the expectation). The Ursell functions for multivariate random variables are defined analogously to the above, and in the same way as multivariate cumulants. The Ursell functions of a single random variable X are obtained from these by setting $X_1 = \cdots = X_n = X$. The first few are given by $u_1(X) = \langle X \rangle$, $u_2(X) = \langle X^2 \rangle - \langle X \rangle^2$, $u_3(X) = \langle X^3 \rangle - 3\langle X^2 \rangle\langle X \rangle + 2\langle X \rangle^3$, $u_4(X) = \langle X^4 \rangle - 4\langle X^3 \rangle\langle X \rangle - 3\langle X^2 \rangle^2 + 12\langle X^2 \rangle\langle X \rangle^2 - 6\langle X \rangle^4$. Characterization It has been shown that the Ursell functions, considered as multilinear functions of several random variables, are uniquely determined up to a constant by the fact that they vanish whenever the variables Xi can be divided into two nonempty independent sets. See also Cumulant References Statistical mechanics Theory of probability distributions
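The moment-cumulant relation can be inverted recursively via the standard formula $u_n = s_n - \sum_{k=1}^{n-1} \binom{n-1}{k-1} u_k\, s_{n-k}$. A minimal sketch of that recursion (my own illustration, not code from the article):

```python
# Convert moments s_1..s_n into cumulants (Ursell functions) u_1..u_n via
# the standard recursion u_n = s_n - sum_{k<n} C(n-1, k-1) * u_k * s_{n-k}.
from math import comb

def cumulants(s):
    """s[i] is the (i+1)-th moment; returns the list of cumulants."""
    u = []
    for n in range(1, len(s) + 1):
        total = s[n - 1]
        for k in range(1, n):
            total -= comb(n - 1, k - 1) * u[k - 1] * s[n - k - 1]
        u.append(total)
    return u

# Moments of a standard normal are [0, 1, 0, 3]; its cumulants are
# [0, 1, 0, 0], since all cumulants beyond the variance vanish.
print(cumulants([0, 1, 0, 3]))   # -> [0, 1, 0, 0]
```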
Ursell function
Physics
226
40,419,048
https://en.wikipedia.org/wiki/Macrocarpamine
Macrocarpamine is an Alstonia alkaloid with antiplasmodial activity. References Indole alkaloids
Macrocarpamine
Chemistry
28
8,533,909
https://en.wikipedia.org/wiki/Recursive%20join
The recursive join is an operation used in relational databases, also sometimes called a "fixed-point join". It is a compound operation that involves repeating the join operation, typically accumulating more records each time, until a repetition makes no change to the results (as compared to the results of the previous iteration). For example, if a database of family relationships is to be searched, and the record for each person has "mother" and "father" fields, a recursive join would be one way to retrieve all of a person's known ancestors: first the person's direct parents' records would be retrieved, then the parents' information would be used to retrieve the grandparents' records, and so on until no new records are found. In this example, as in many real cases, the repetition involves only a single database table, and so is more specifically a "recursive self-join". Recursive joins can be very time-consuming unless optimized through indexing, the addition of extra key fields, or other techniques. Graph traversals come at a lower cost than the recursive-join approach. Recursive joins are highly characteristic of hierarchical data, and therefore become a serious issue with XML data. In XML, operations such as determining whether one element contains another are extremely common, and the recursive join is perhaps the most obvious way to implement them when the XML data is stored in a relational database. The standard way to define recursive joins in the SQL:1999 standard is by way of recursive common table expressions, as sketched below. Database management systems that support recursive CTEs include Microsoft SQL Server, Oracle, PostgreSQL and others. See also Join Hierarchical and recursive queries in SQL Database theory Relational model References
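A minimal sketch of the ancestor example using a recursive common table expression, run here through Python's sqlite3 module (the table and rows are made up for illustration):

```python
# Recursive self-join via a recursive CTE: collect all of one person's
# known ancestors. Table and rows are made up for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT,
                     mother INTEGER, father INTEGER);
INSERT INTO person VALUES
  (1, 'Alice', NULL, NULL),
  (2, 'Bob',   NULL, NULL),
  (3, 'Carol', 1, 2),
  (4, 'Dave',  NULL, NULL),
  (5, 'Erin',  3, 4);
""")

rows = conn.execute("""
WITH RECURSIVE
parent(child, par) AS (          -- flatten mother/father into edges
  SELECT id, mother FROM person WHERE mother IS NOT NULL
  UNION ALL
  SELECT id, father FROM person WHERE father IS NOT NULL
),
ancestors(id) AS (               -- repeat the join up to a fixed point
  SELECT par FROM parent WHERE child = 5
  UNION
  SELECT p.par FROM parent p JOIN ancestors a ON p.child = a.id
)
SELECT name FROM person WHERE id IN (SELECT id FROM ancestors);
""").fetchall()

print([r[0] for r in rows])      # -> ['Alice', 'Bob', 'Carol', 'Dave']
```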
Recursive join
Technology
368
14,323,631
https://en.wikipedia.org/wiki/Old%20Street%20Roundabout
Old Street Roundabout is a road junction in Central London, England. Historically a square roundabout, it is now a three-way junction. It is among the access points of the Inner Ring Road for the adjoining St Luke's, in the south of Islington, and for the City of London beyond, to the west and south respectively. It is roughly on the western limit of Hoxton in the London Borough of Hackney, which straddles both sides of the Ring Road, a road which takes up a little of the eastern part of Old Street and then veers south-east along Great Eastern Street at Apex Junction. It is sometimes known as St. Agnes Well after the shopping centre beneath it, while the moniker Silicon Roundabout owes to the local prominence of technology companies. Since October 2020 the layout has been a simple junction rather than a gyratory. Connections City Road crosses the roundabout, running south towards the City of London (particularly Moorgate and Liverpool Street stations), and north-west towards Angel, Pentonville, and the two northward railway terminus districts: King's Cross/St. Pancras and Euston. The main (north-east) side, the north-western continuation of City Road, and Great Eastern Street mark the limit of the congestion charge zone (CCZ). To the west of Old Street are Clerkenwell, Finsbury, and (further afield) the West End. To the east are Shoreditch and London's East End. St. Agnes Well The shopping complex serving the broad underpass at the centre of the roundabout is named St. Agnes Well, after an ancient well thought to have been to the east, at the junction of Old Street and Great Eastern Street. Remnants of the well can be found within Old Street station. Old Street station Old Street station is below Old Street Roundabout. It is served by the Bank branch of the London Underground Northern line and by National Rail Great Northern trains. With the increase in passenger numbers using the station, in 2014 Transport for London announced that it was to offer pop-up retail space at Old Street station as part of a drive to increase its revenue. Silicon Roundabout The term Silicon Roundabout refers to the high number of web businesses near the Old Street Roundabout and elsewhere in East London, by analogy to Silicon Valley in California. Collisions involving cyclists A number of collisions involving cyclists have occurred at Old Street roundabout. According to the London Cycling Campaign, the junction is among the top three in London for collisions involving cyclists. Within a few days in February 2011, two cyclists were severely injured in collisions involving lorries on or very close to the roundabout. In another collision involving a lorry, in 2008, a cyclist suffered severe leg injuries, which the police described as "potentially life-changing". In response to this, Transport for London proposed a massive transformation of the roundabout into a pedestrian square with segregated cycle lanes and road signals. On 25 July 2018, a cyclist was severely injured on Old Street roundabout following a collision with a lorry. Reconfiguration After extensive public consultation held in 2014–15, plans to broaden the non-motor-vehicle area began in 2018. In 2019, work began, carried out by Transport for London (TfL) in conjunction with Morgan Sindall and the Boroughs of Islington and Hackney, to create a much more pedestrian- and cycle-friendly zone. Remaining motor traffic is two-way, to speed up pedestrian crossings and allow segregated cycle lanes. The work created a well-lit pedestrianised space around the new station main entrance. 
References Junctions Roundabouts in England Streets in the London Borough of Hackney Streets in the London Borough of Islington Information technology places High-technology business districts in the United Kingdom
Old Street Roundabout
Technology
758
47,328,766
https://en.wikipedia.org/wiki/Beer%20chemistry
The chemical compounds in beer give it a distinctive taste, smell and appearance. The majority of compounds in beer come from the metabolic activities of plants and yeast and so are covered by the fields of biochemistry and organic chemistry. The main exception is that beer contains over 90% water, and the mineral ions in the water (hardness) can have a significant effect upon the taste. Four main ingredients Four main ingredients are used for making beer in the process of brewing: carbohydrates (from malt), hops, yeast, and water. Carbohydrates (from malt) The carbohydrate source is an essential part of the beer because the unicellular yeast organisms convert carbohydrates into energy to live. Yeast metabolize the carbohydrate source to form a number of compounds including ethanol. The process of brewing beer starts with malting and mashing, which break down the long carbohydrates in the barley grain into simpler sugars. This is important because yeast can only metabolize very short chains of sugars. Long carbohydrates are polymers: large, branching chains of the same molecular unit linked over and over. In the case of barley, these are mostly the polymers amylopectin and amylose, which are made of repeating linkages of glucose. On very large time-scales these polymers would break down thermodynamically on their own, with no need for the malting process. The process is normally sped up by heating the barley grain. This heating activates enzymes called amylases. The shape of these enzymes, their active site, gives them the unique and powerful ability to speed these degradation reactions by a factor of over 100,000. The reaction that takes place at the active site is called a hydrolysis reaction, a cleavage of the linkages between the sugars. Repeated hydrolysis breaks the long amylopectin polymers into simpler sugars that can be digested by the yeast. Hops Hops are the flowers of the hop plant Humulus lupulus. These flowers contain over 440 essential oils, which contribute to the aroma and non-bitter flavors of beer. However, the distinct bitterness especially characteristic of pale ales comes from a family of compounds called alpha-acids (also called humulones) and beta-acids (also called lupulones). Generally, brewers believe that α-acids give the beer a pleasant bitterness whereas β-acids are considered less pleasant. α-acids isomerize during the boiling process: the six-membered ring in the humulone isomerizes to a five-membered ring, but how this affects perceived bitterness is not commonly discussed. Yeast In beer, the metabolic waste products of yeast are a significant factor. In aerobic conditions, the yeast will metabolize through glycolysis the simple sugars obtained from the malting process, and convert pyruvate, the major organic product of glycolysis, into carbon dioxide and water via cellular respiration. Many homebrewers use this aspect of yeast metabolism to carbonate their beers. However, under industrial anaerobic conditions yeasts cannot use pyruvate, the end product of glycolysis, to generate energy in cellular respiration. Instead, they rely on a process called fermentation. Fermentation converts pyruvate into ethanol through the intermediate acetaldehyde. Water Water can often play, directly or indirectly, a very important role in the way a beer tastes, as it is the main ingredient. The ion species present in water can affect the metabolic pathways of yeast, and thus the metabolites one can taste. 
For example, calcium and iron ions are essential in small amounts for yeast to survive, because these metal ions are required cofactors for many yeast enzymes. Beer carbonation In aerobic conditions, yeast turns sugars into pyruvate, then converts pyruvate into water and carbon dioxide. This process can carbonate beers. In commercial production, the yeast works in anaerobic conditions to convert pyruvate into ethanol, and does not carbonate the beer; instead, the beer is carbonated with pressurized CO2. When beer is poured, carbon dioxide dissolved in the beer escapes and forms tiny bubbles. These bubbles grow and accelerate as they rise by feeding off nearby smaller bubbles, a phenomenon known as Ostwald ripening. These larger bubbles lead to "coarser" foam on top of poured beer. Nitro beer (CO2 replaced by N2 gas) Beers can be carbonated with CO2 or made sparkling with an inert gas such as nitrogen (N2), argon (Ar), or helium (He). Inert gases are not as soluble in water as carbon dioxide, so they form bubbles that do not grow through Ostwald ripening. This means that the beer has smaller bubbles and a more creamy and stable head. These less soluble inert gases give the beer a different, flatter texture: in beer terms, the mouthfeel is smooth, not bubbly like beers with normal carbonation. Nitro beer (short for nitrogen beer) can taste less acidic than normal beer. Aromatic compounds Beers contain many aromatic substances. To date, chemists using advanced analytical instruments, such as gas and high-performance liquid chromatographs coupled to mass spectrometers, have discovered over 7,700 different chemical compounds in beers. Foam stabilizers Beer foam stability depends, amongst other things, on the presence of transition metal ions (such as Co2+, Ni2+ and Fe2+), macromolecules such as polysaccharides, proteins, and isohumulone compounds from hops in the beer. Foam stability is an important concern for the first perception of the beer by the consumer and is therefore the object of the greatest care by the brewers and the barmen in charge of serving draft beer, or of properly pouring beer into a glass from the bottle (with good head retention and without overfoaming, or gushing when opening the bottle). Many patents for various types of beer foam stabilizers have been filed by breweries and the agro-chemical industry in the last decades. Cobalt salts added at low concentration (1 – 2 ppm) were popular in the sixties, but raised the question of cobalt toxicity in case of undetected accidental overdosage during beer production. As an alternative, organic foam stabilizers are produced by hydrolysis of recovered by-products of beer manufacture, such as spent grains or hops residues. Amongst the large spectrum of purified, or modified, natural food additives available on the market, soluble carboxymethyl hydroxyethyl cellulose, propylene glycol alginate (PGA, food additive with E number E405), pectins and gellan gum have also been investigated as foam stabilizers. Cobalt salts In 1957, two brewing chemists, Thorne and Helm, discovered that the Co2+ cation was able to stabilize beer foam and to prevent beer overfoaming and gushing. The addition of a tiny amount of cobalt ions, in the range 1 – 2 mg/L (ppm), was effective. Higher concentrations would be toxic and lower ones ineffective. 
Cobalt is a transition metal whose atomic orbitals can interact with ligands, that is, functional groups (–OH, –COOH, –NH2) attached to organic molecules naturally present in the beer, forming macromolecular coordination complexes that stabilize the beer foam. Cobalt could act as an inter- or intra-molecular bridge between different polysaccharide molecules (changing their shape or size), or cause conformational changes in various molecules present in solution, affecting their three-dimensional shape and thus the molecular structure and behavior of the foam. Thorne and Helm (1957) also hypothesized that cobalt, by complexing with certain nitrogenous constituents of the beer (e.g., amino acids from malt proteins), might produce surface-active substances that inactivate the gaseous nuclei responsible for overfoaming and gushing. Gushing is a specific problem that was studied in more detail by Rudin and Hudson (1958). These authors found that gushing is also promoted by other transition metal ions, such as those of nickel and iron, but not by cobalt ions. Isohumulone (an iso-alpha acid responsible for the bitter taste of hops) and its combinations with Ni or Fe also favor gushing, while pure Co ions, or their combination with isohumulone, do not cause gushing or overfoaming. This explains why cobalt salts, at a concentration of 1–2 mg/L, were specifically selected as an anti-gushing agent for beer. Rudin and Hudson (1958) and other authors also found that Co, Ni and Fe ions preferentially concentrate in the foam itself. In the 1960s, after approval by the US FDA, cobalt sulfate was commonly used at low concentration in the USA as an additive to stabilize beer foam and to prevent gushing after the beer had been exposed to vibrations during transport or handling. Although cobalt is an essential micronutrient needed for vitamin B12 synthesis, excess levels of cobalt in the body lead to cobalt poisoning and must be avoided. This risk triggered the development of qualitative and quantitative analytical methods to accurately assay cobalt in beer, in order to prevent accidental overdosage and cobalt poisoning. Excessive cobalt levels are known to be responsible for beer drinkers' cardiomyopathy. The first cases mentioned in the literature were reported in Canada in the mid-1960s, after an accidental overdosage at the Dow Breweries in Quebec City. In August 1965, a person presented to a hospital in Quebec City with symptoms suggestive of alcoholic cardiomyopathy. Over the next eight months, fifty more cases with similar findings appeared in the same area, twenty of them fatal. All were heavy drinkers who mostly drank beer and preferred the Dow brand; thirty of them drank more than 6 litres (12 pints) of beer per day. Epidemiological studies found that Dow Breweries had been adding cobalt sulfate to the beer for foam stability since July 1965, and that the concentration added in the Quebec City brewery was ten times that used for the same beer brewed in Montreal, where there were no reported cases.
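As a rough illustrative calculation (the 1.5 mg/L figure below is simply the midpoint of the 1–2 mg/L dosing range quoted above, not a value reported for the Dow beer), the heaviest drinkers in the Quebec cases would have been ingesting on the order of

$$6\ \mathrm{L/day} \times 1.5\ \mathrm{mg/L} \approx 9\ \mathrm{mg\ of\ cobalt\ per\ day}$$

from the beer alone, and correspondingly more at the elevated dosing used in the Quebec City brewery.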
Storage and degradation
A particular problem with beer is that, unlike wine, its quality tends to deteriorate as it ages. A cat-urine smell and flavor called ribes, named after the genus of the blackcurrant, tends to develop first and then peak. A cardboard smell then dominates, which is due to the release of 2-nonenal. In general, chemists believe that the "off-flavors" of old beers are due to reactive oxygen species. These may come in the form of oxygen free radicals, for example, which can change the chemical structures of the compounds in beer that give it its taste. Oxygen radicals can increase the concentrations of aldehydes formed by the Strecker degradation of amino acids in beer. Beer is unusual among alcoholic beverages in that it is unstable in the final package. Many variables and chemical compounds affect the flavor of beer not only during the production steps but also during storage. Beer develops off-flavors during storage because of many factors, including exposure to sunlight and the amount of oxygen in the headspace of the bottle. Besides changes in taste, beer can also change visually: it can become hazy during storage. This problem of colloidal stability (haze formation) is typically caused by the raw materials used during the brewing process. The primary reaction behind beer haze is the polymerization of polyphenols and their binding to specific proteins. This type of haze can be seen when beer is cooled below 0 degrees Celsius; when the beer is warmed back to room temperature, the haze dissolves. If a beer is stored at room temperature for too long (about six months), however, a permanent haze will form. A study by Heuberger et al. (2012) concluded that the storage temperature of beer affects its flavor stability. They found that the metabolite profiles of beer stored at room temperature and at cold temperature differed significantly from that of fresh beer. They also found evidence of significant beer oxidation after weeks of storage, which likewise affects the flavor of the beer. Off-flavors in beer, such as cardboard or green-apple tastes, are often associated with the appearance of staling aldehydes. The Strecker aldehydes responsible for the flavor change are formed during storage of the beer. Philip Wietstock et al. performed experiments to determine what causes the formation of Strecker aldehydes during storage. They found that only the amino acid concentration (specifically leucine (Leu), isoleucine (Ile), and phenylalanine (Phe)) and the dissolved oxygen concentration drove Strecker aldehyde formation; additions of carbohydrate and Fe2+ were also tested. A linear relationship was found between the Strecker aldehydes formed and the total packaged oxygen. This is important for brewers to know so that they can control the taste of their beer; Wietstock concluded that capping beers with oxygen-barrier crown corks diminishes Strecker aldehyde formation. In another study, by Vanderhaegen et al. (2003), different aging conditions were tested on a bottled beer over six months. They found that a decrease in volatile esters was responsible for a reduced fruity flavor. They also found an increase in many other compounds, including carbonyl compounds, ethyl esters, Maillard compounds, dioxolanes, and furanic ethers. The carbonyl compounds, as noted above in connection with the Wietstock experiments, give rise to Strecker aldehydes, which tend to cause a green-apple flavor. Esters are known to cause fruity flavors reminiscent of pears, roses, and bananas. Maillard compounds cause a toasty, malty flavor. A study by Charles Bamforth and Roy Parsons (1985) also confirmed that beer staling flavors are caused by various carbonyl compounds. They used thiobarbituric acid (TBA) to estimate the staling substances after an accelerated aging technique, and found that beer staling is reduced by scavengers of the hydroxyl radical (•OH), such as mannitol and ascorbic acid.
They also tested the hypothesis that soybean extracts included in the fermenting wort enhance the shelf life of beer flavor.
See also
Barm
Beer head
Beer faults
Brewing
Bright beer
Carbonation
Lightstruck beer
Mouthfeel
Isohumulone
Yeast in Beermaking
References
Citations
Sources
External links
Tapping into the Chemistry of Beer and Brewing—an online lecture by Charles Bamforth, Professor of Malting & Brewing Sciences at the University of California
Chemistry of Beer—an online course in the subject at the University of Oklahoma
Beer
Alcohol chemistry
Beer chemistry
Chemistry
3,094
14,483,033
https://en.wikipedia.org/wiki/Aurora%20%28protocol%29
The Aurora Protocol is a link layer communications protocol for use on point-to-point serial links. Developed by Xilinx, it is intended for use in high-speed (gigabits/second and more) connections internally in a computer or in an embedded system. It uses either 8b/10b encoding or 64b/66b encoding. External links Official Document (8b/10b) Official Document (64b/66b) Serial buses Link protocols
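The practical difference between the two encodings is their coding overhead: 8b/10b carries 8 data bits in 10 line bits, while 64b/66b carries 64 data bits in 66 line bits. A minimal sketch of the resulting payload rates follows; the 3.125 Gbit/s lane rate is an assumed example value, not a figure taken from the specification documents linked above.

```python
def payload_rate(line_rate_gbps: float, data_bits: int, line_bits: int) -> float:
    """Usable data rate once line-coding overhead is removed."""
    return line_rate_gbps * data_bits / line_bits

lane = 3.125  # Gbit/s serial lane rate (assumed example value)
for name, data_bits, line_bits in [("8b/10b", 8, 10), ("64b/66b", 64, 66)]:
    eff = data_bits / line_bits
    print(f"{name}: efficiency {eff:.1%}, "
          f"payload {payload_rate(lane, data_bits, line_bits):.3f} Gbit/s")
```

On these assumptions, 8b/10b leaves 80% of the line rate for payload, while 64b/66b leaves roughly 97%, which is why the heavier encoding is attractive at multi-gigabit rates.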
Aurora (protocol)
Technology
98
1,159,033
https://en.wikipedia.org/wiki/Boule%20%28crystal%29
A boule is a single-crystal ingot produced by synthetic means. A boule of silicon is the starting material for most of the integrated circuits used today. In the semiconductor industry synthetic boules can be made by a number of methods, such as the Bridgman technique and the Czochralski process, which result in a cylindrical rod of material. In the Czochralski process a seed crystal is required to create a larger crystal, or ingot. This seed crystal is dipped into the pure molten silicon and slowly extracted. The molten silicon grows on the seed crystal in a crystalline fashion. As the seed is extracted the silicon solidifies and eventually a large, cylindrical boule is produced. A semiconductor crystal boule is normally cut into circular wafers using an inside hole diamond saw or diamond wire saw, and each wafer is lapped and polished to provide substrates suitable for the fabrication of semiconductor devices on its surface. The process is also used to create sapphires, which are used for substrates in the production of blue and white LEDs, optical windows in special applications and as the protective covers for watches. References Crystals Semiconductor growth
Boule (crystal)
Chemistry,Materials_science
234
2,665,417
https://en.wikipedia.org/wiki/Gujarati%20numerals
Gujarati numerals is the numeral system of the Gujarati script of South Asia, which is a derivative of Devanagari numerals. It is the official numeral system of Gujarat, India. It is also officially recognized in India and as a minor script in Pakistan. Digits The following table shows Gujarati digits and the Gujarati word for each of them in various scripts. Larger numbers Digits are combined to represent numbers larger than 9 as per the standard positional decimal rules. See also Gujarati script Gurmukhi numerals References Numerals Gujarati culture External links 1 to 100 Gujarati Numbers and Words from English
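Because the combination rules are ordinary positional decimal notation, rendering a number with Gujarati digits is a simple digit-by-digit mapping. A minimal sketch follows; the only assumption is that the Gujarati digits occupy the contiguous Unicode range U+0AE6 (zero) through U+0AEF (nine).

```python
# Render an integer with Gujarati digits using standard positional notation.
GUJARATI_ZERO = 0x0AE6  # Unicode code point of the Gujarati digit zero

def to_gujarati(n: int) -> str:
    """Return the Gujarati-digit representation of a non-negative integer."""
    return "".join(chr(GUJARATI_ZERO + int(d)) for d in str(n))

print(to_gujarati(2024))  # prints the four Gujarati digits for 2, 0, 2, 4
```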
Gujarati numerals
Mathematics
126
68,445,549
https://en.wikipedia.org/wiki/Mizab%20al-Rahma
The Mīzāb al-Raḥma (, 'gutter of mercy'), also known as the Mīzāb al-Kaʿba ('gutter of the Kaʿba'), is a rain gutter projecting from the roof of the Kaʿba enabling rainwater to pour to the ground below. Architecture The roof of the Kaʿba is flat, but slopes gently down to the north-west corner. From this corner, the mīzāb juts out, conducting rainwater from the roof. The lip of the mīzāb has an appendage known as the "beard of the mīzāb". The ground below is paved with marble slabs and decorated with inlaid mosaic designs. The design of the mīzāb has changed over the years; the current form is golden. Its length is , which is included in the wall of the Kaaba, its cavity width is , the height of each side is , and its entry into the roof wall is . A detailed description of the mīzāb around 1183–85 CE is offered by Ibn Jubayr: The Mizab is on the top of the wall which overlooks the Hijr. It is of gilded copper and projects four cubits over the Hijr, its breadth being a span. This place under the waterspout is also considered as being a place where, by the favour of God Most High, prayers are answered. The Yemen corner is the same. The wall connecting this place with the Syrian corner is called al-Mustajar [The Place of Refuge]. Underneath the water-spout, and in the court of the Hijr near to the wall of the blessed House, is the tomb of Isma'il [Ishmael] - may God bless and preserve him. Its mark is a slab of green marble, almost oblong and in the form of a mihrab. Beside it is a round green slab of marble, and both [they are verde antico] are remarkable to look upon. Role in worship In his Kitāb Akhbār Makka, the ninth-century scholar al-Azraqī wrote with reference to the mīzāb that "anyone who performs the ṣalāt under the mat̲h̲ʿab becomes as pure as on the day when his mother bore him". Ibn Jubayr offers a vivid account of worship at the mīzāb in 1183 CE: One of the things that deserve to be confirmed and recorded for the blessings and favour of seeing and observing it is that on Friday the 19th of Jumada l-Ula, which was the 9th of September [1183], God raised from the sea a cloud which moved towards Damascus and rained heavily like an abundant fountain, according to the words of the Messenger of God--may God bless and preserve him. It came at the ending of the afternoon's prayers and with the evening of the same day, raining copiously. Men hastened to the Hijr and stood beneath the blessed water-spout, stripping off their clothes and meeting the water that flowed from it with their heads, their hands, and their mouths. They pressed round it in a throng, raising a great clamour, each one coveting for his body a share of the divine mercy. Their prayers went up, the tears of the contrite flowed, and you could hear nothing but the swell of voices in prayer and the sobs of the weeping. The women stood without the Hijr, watching with weeping eyes and humble hearts, wishing they could go to that spot. Some pilgrims listful of performing a meritorious act, and moved as well to pity, drenched their clothes in the blessed water and, going out to the women, wrung them into the hands of some of them. They took it and drank it and laved it over their faces and bodies. History The first Mīzāb that worked for the Kaʿba was that the Quraish made when building it before the Prophetic mission. Then the Mīzāb of Abd Allah ibn al-Zubayr when he built the Kaʿba in 684 AD. Then the Mīzāb of Al-Hajjaj ibn Yusuf, who rebuilt the Kaʿba in 692 AD. 
Then the Mīzāb of Sheikh Abu al-Qasim Ramesht, which his slave reached after his death in 1142 AD. Then the Mīzāb of Al-Muqtafi in 1146 AD. Then the Mīzāb of Al-Nasir in 1279 AD. Then the Mīzāb of Suleiman the Magnificent in 1551 AD. Then the Mīzāb which was made from Egypt in 1554 AD. Then the Mīzāb of The Ottoman Sultan Ahmed I Ibn Muhammad III in 1612 AD. Then the Mīzāb of The Sultan Abdulmejid I in 1856 AD. Then the Mīzāb, which was sent with Haji Rida Pasha in 1859 AD. Then the Mīzāb of the reign of King Fahd bin Abdulaziz in 1997, when he replaced the old Mīzāb for the roof of the Ka'aba with a new one, stronger with the same specifications as the old one. Further reading Caїd Ben Chérif, Aux Villes Saintes de l’Islam (Paris, 1919), p. 75. References Kaaba Stormwater management
Mizab al-Rahma
Chemistry,Environmental_science
1,101
15,355,129
https://en.wikipedia.org/wiki/WNT7B
Wnt7b is a signaling protein that plays a crucial role in many developmental processes, including placental, lung, eye, dendrite, and bone formation, along with kidney development. The primary role of Wnt7b is to establish the cortico-medullary axis of epithelial organization.
Protein
Wnt-7b is a protein that in humans is encoded by the WNT7B gene.
Function
The WNT gene family consists of structurally related genes that encode secreted signaling proteins. These proteins have been implicated in oncogenesis and in several developmental processes, including regulation of cell fate and patterning during embryogenesis. This gene is a member of the WNT gene family. It encodes a protein showing 99% and 91% amino acid identity to the mouse and Xenopus Wnt7b proteins, respectively. Among members of the human WNT family, this protein is most similar to the WNT7A protein (77.1% total amino acid identity). This gene may play important roles in the development and progression of gastric cancer, esophageal cancer, and pancreatic cancer.
Role in kidney
Wnt7b is a key paracrine signaling factor secreted by the ureteric epithelium that establishes the cortico-medullary axis of the mammalian kidney. The establishment of the cortico-medullary axis plays an essential role in the development of the medullary component of the kidney. Wnt7b regulates the orientation of cell divisions in the renal medullary collecting duct epithelium, which is the major structure driving renal medulla formation. Two interstitial regulators are explicitly linked to medullary development: Pod1, which encodes a transcription factor expressed in the renal interstitium, and p57Kip2, which encodes a cyclin-dependent kinase inhibitor linked to Beckwith-Wiedemann syndrome. Pod1, p57Kip2 and integrin α3 (ITGA3) are three factors involved in renal medullary morphogenesis. Knockout of Pod1 results in no renal medullary formation, while p57Kip2 and Itga3 knockouts result in a reduced renal medulla. Removal of Wnt7b activity leads to a failure of medullary development, while other aspects of kidney development, including ureteric branching, development of the renal cortex, and nephrogenesis, are unaffected. The absence of a renal medulla is also accompanied by an altered plane of epithelial cell division and little proliferative growth of the loop of Henle. A Wnt7b null allele is lethal because of diminished placental function, which leads to a failure to initiate organogenesis.
References
Further reading
WNT7B
Chemistry
636
8,908,895
https://en.wikipedia.org/wiki/Thioacetic%20acid
Thioacetic acid is an organosulfur compound with the molecular formula CH3COSH. It is a thioic acid: the sulfur analogue of acetic acid (CH3COOH), as implied by the thio- prefix. It is a yellow liquid with a strong thiol-like odor. It is used in organic synthesis for the introduction of thiol groups (–SH) into molecules.
Synthesis and properties
Thioacetic acid is prepared by the reaction of acetic anhydride with hydrogen sulfide:
(CH3CO)2O + H2S → CH3COSH + CH3COOH
It has also been produced by the action of phosphorus pentasulfide on glacial acetic acid, followed by distillation. Thioacetic acid is typically contaminated by acetic acid. The compound exists exclusively as the thiol tautomer, consistent with the strength of the C=O double bond. Reflecting its weaker hydrogen bonding, the boiling point (93 °C) and melting point are 20 and 75 K lower, respectively, than those of acetic acid.
Reactivity
Acidity
With a pKa near 3.4, thioacetic acid is about 15 times more acidic than acetic acid. The conjugate base is thioacetate:
CH3COSH ⇌ CH3COS− + H+
In neutral water, thioacetic acid is fully ionized.
Reactivity of thioacetate
Most of the reactivity of thioacetic acid arises from the conjugate base, thioacetate. Salts of this anion, e.g. potassium thioacetate, are used to generate thioacetate esters. Thioacetate esters undergo hydrolysis to give thiols. A typical method for preparing a thiol from an alkyl halide using thioacetic acid proceeds in four discrete steps, some of which can be conducted sequentially in the same flask (where X = Cl, Br, I):
CH3COSH + NaOH → CH3COSNa + H2O
CH3COSNa + RX → CH3COSR + NaX
CH3COSR + 2 NaOH → RSNa + CH3COONa + H2O
RSNa + HCl → RSH + NaCl
In an application that illustrates its radical behavior, thioacetic acid is used with AIBN in a free-radical-mediated addition to an exocyclic alkene, forming a thioester.
Reductive acetylation
Potassium thioacetate can be used to convert nitroarenes to aryl acetamides in one step. This is particularly useful in the preparation of pharmaceuticals, e.g., paracetamol from 4-nitrophenol or 4-nitroanisole.
References
Thiocarboxylic acids
Reagents for organic chemistry
Foul-smelling chemicals
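As a back-of-the-envelope check on the statement that thioacetic acid is essentially fully ionized in neutral water, the Henderson–Hasselbalch relation applied with the pKa of about 3.4 quoted above gives

$$\frac{[\mathrm{CH_3COS^-}]}{[\mathrm{CH_3COSH}]} = 10^{\,\mathrm{pH}-\mathrm{p}K_\mathrm{a}} = 10^{\,7-3.4} \approx 4\times10^{3},$$

i.e. roughly 99.97% of the compound is present as the thioacetate anion at pH 7. (This is an illustrative calculation, not a figure taken from the cited references.)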
Thioacetic acid
Chemistry
503
213,380
https://en.wikipedia.org/wiki/Left-%20and%20right-hand%20traffic
Left-hand traffic (LHT) and right-hand traffic (RHT) are the practices, in bidirectional traffic, of keeping to the left side and to the right side of the road, respectively. They are fundamental to traffic flow, and are sometimes called the rule of the road. The terms right- and left-hand drive refer to the position of the driver and the steering wheel in the vehicle and are, in automobiles, the reverse of the terms right- and left-hand traffic. The rule also includes where on the road a vehicle is to be driven, if there is room for more than one vehicle in one direction, and the side on which the vehicle in the rear overtakes the one in the front. For example, a driver in an LHT country would typically overtake on the right of the vehicle being overtaken. RHT is used in 165 countries and territories, mainly in the Americas, Continental Europe, most of Africa and mainland Asia (except South Asia), while 75 countries use LHT, which account for about a sixth of the world's land area, a quarter of its roads, and about a third of its population. In 1919, 104 of the world's territories were LHT and an equal number were RHT. Between 1919 and 1986, 34 of the LHT territories switched to RHT. While many of the countries using LHT were part of the British Empire, others such as Indonesia, Japan, Nepal, Bhutan, Macao, Thailand, Mozambique and Suriname were not. Sweden and Iceland, which have used RHT since September 1967 and late May 1968 respectively, previously used LHT. Most of the countries that were part of the French colonial empire adopted RHT. Historical switches of traffic handedness have often been motivated by factors such as changes in political administration, a desire for uniformity within a country or with neighboring states, or availability and affordability of vehicles. In LHT, traffic keeps left and cars usually have the steering wheel on the right (RHD: right-hand drive) and roundabouts circulate clockwise. RHT is the opposite: traffic keeps right, the driver usually sits on the left side of the car (LHD: left-hand drive), and roundabouts circulate anticlockwise. In most countries, rail traffic follows the handedness of the roads; but many of the countries that switched road traffic from LHT to RHT did not switch their trains. Boat traffic on bodies of water is RHT, regardless of location. Boats are traditionally piloted from the starboard side (and not the port side like RHT road traffic vehicles) to facilitate priority to the right. Background Historically, many places kept left, while many others kept right, often within the same country. There are many myths that attempt to explain why one or the other is preferred. About 90 percent of people are right-handed, and many explanations reference this. Horses are traditionally mounted from the left, and led from the left, with the reins in the right hand. So people walking horses might use RHT, to keep the animals separated. Also referenced is the need for pedestrians to keep their swords in the right hand and pass on the left as in LHT, for self-defence. It has been suggested that wagon-drivers whipped their horses with their right hand, and thus sat on the left-hand side of the wagon, as in RHT. Academic Chris McManus notes that writers have stated that in 1300, Pope Boniface VIII directed pilgrims to keep left; others suggest that he directed them to keep to the right, and there is no documented evidence to back either claim. 
Africa The UK introduced LHT in the East Africa Protectorate (present-day Kenya), the Protectorate of Uganda, Tanganyika (formerly part of German East Africa; present-day Tanzania), Rhodesia (present-day Zambia/Zimbabwe), Eswatini and the Cape Colony (present-day South Africa and Lesotho), as well as in British West Africa (present-day Ghana, Gambia, Sierra Leone and Nigeria); former British West Africa, however, has now switched to RHT, as all its neighbours, which are former French colonies, use RHT. South Africa, formerly the Cape Colony, introduced LHT in former German South West Africa, present-day Namibia, after the end of World War I. Sudan, formerly part of Anglo-Egyptian Sudan, switched to RHT in 1973. Most of its neighbours were RHT countries, with the exception of Uganda and Kenya, but since the independence of South Sudan in 2011, all of its neighbours drive on the right (including South Sudan, despite its land borders with two LHT countries). Although Portugal switched to RHT in 1928, its colony of Mozambique remained LHT because it has land borders with former British colonies (with LHT). France introduced RHT in French West Africa and the Maghreb, where it is still used. Countries in these areas include Mali, Mauritania, Ivory Coast, Burkina Faso, Benin, Niger, Morocco, Algeria, and Tunisia. Other French former colonies that are RHT include Cameroon, Central African Republic, Chad, Djibouti, Gabon, and the Republic of the Congo. Rwanda and Burundi are RHT but are considering switching to LHT (see "Potential future shifts" section below). Americas United States In the late 18th century, right-hand traffic started to be introduced in the United States based on teamsters' use of large freight wagons pulled by several pairs of horses and without a driver's seat; the (typically right-handed) postilion held his whip in his right hand and thus sat on the left rear horse, and therefore preferred other wagons passing on the left so that he would have a clear view of other vehicles. The first keep-right law for driving in the United States was passed in 1792 and applied to the Philadelphia and Lancaster Turnpike. Massachusetts formalized RHT in 1821. However, the National Road was LHT until 1850, "long after the rest of the country had settled on the keep-right convention". Today the United States is RHT except the United States Virgin Islands, which is LHT like many neighbouring islands. Some special-purpose vehicles in the United States, like certain postal service trucks, garbage trucks, and parking-enforcement vehicles, are built with the driver's seat on the right for safer and easier access to the curb. A common example is the Grumman LLV, which is used nationwide by the US Postal Service and by Canada Post. Other countries in the Americas In Canada, the provinces of Quebec and Ontario were always RHT because they were created out of the former French colony of New France. The province of British Columbia changed to RHT in stages from 1920 to 1923, New Brunswick, Nova Scotia, and Prince Edward Island in 1922, 1923, and 1924 respectively, and the Dominion of Newfoundland (part of Canada since 1949) in 1947, in order to allow traffic (without side switch) to or from the United States. In the West Indies, colonies and territories drive on the same side as their parent countries, except for the United States Virgin Islands. 
Many of the island nations are former British colonies and drive on the left, including Jamaica, Antigua and Barbuda, Barbados, Dominica, Grenada, Saint Kitts and Nevis, Saint Lucia, Saint Vincent and the Grenadines, Trinidad and Tobago, and The Bahamas. However, most vehicles in The Bahamas, Cayman Islands, Turks and Caicos Islands and both the British Virgin Islands, and the United States Virgin Islands are LHD due to their being imported from the United States. Brazil, a Portuguese colony until the early 19th century, had in the 19th and the early 20th century mixed rules, with some regions still on LHT, switching these remaining regions to RHT in 1928, the same year Portugal switched sides. Other Central and South American countries that later switched from LHT to RHT include Argentina, Chile, Panama, Paraguay, and Uruguay. Suriname, along with neighbouring Guyana, are the only two remaining LHT countries in South America. Asia LHT was introduced by the UK in British India (now India, Pakistan, Myanmar, and Bangladesh), British Malaya and British Borneo (now Malaysia, Brunei and Singapore), as well as British Hong Kong. These countries, except Myanmar, are still LHT, as well as neighbouring countries Bhutan and Nepal. Myanmar switched to RHT in 1970, although much of its infrastructure is still geared to LHT as its neighbours India, Bangladesh and Thailand use LHT. Most cars are used RHD vehicles imported from Japan. Afghanistan was LHT until the 1950s, in line with Pakistan (former part of British India). Although Portuguese Timor (present-day East Timor), which shares the island of Timor with Indonesia, who is LHT, switched to RHT with Portugal in 1928, it switched back to LHT in 1976 during the Indonesian occupation of East Timor. In the 1930s, parts of China such as the Shanghai International Settlement, Canton and Japanese-occupied northeast China used LHT. However, in 1946 the Republic of China made RHT mandatory in China (including Taiwan). Taiwan was LHT under Japanese colonization from 1895–1945. Portuguese Macau (present-day Macau) remained LHT, along with British Hong Kong, despite being transferred to China in 1999 and 1997 respectively. Both North Korea and South Korea use RHT since 1946, after liberation from Japanese colonialization. The Philippines was mostly LHT during its Spanish and American colonial periods, as well as during the Commonwealth era. During the Japanese occupation, the Philippines remained LHT, as was required by the Japanese; but during the Battle of Manila, the liberating American forces drove their tanks to the right for easier facilitation of movement. RHT was formalized in 1945 through a decree by president Sergio Osmeña. Even though RHT was formalized, RHD vehicles such as public buses were still imported into the Philippines until a law passed banning the importation of RHD vehicles except in special cases. These RHD vehicles are required to be converted to LHD. Japan was never part of the British Empire, but its traffic also drives on the left. Although this practice goes back to the Edo period (1603–1868), it was not until 1872 – the year Japan's first railway was introduced, built with technical aid from the British – that this unwritten rule received official acknowledgment. Gradually, a massive network of railways and tram tracks was built, with all railway vehicles driven on the left-hand side. However, it took another half-century, until 1924, until left-hand traffic was legally mandated. 
Post-World War II Okinawa was ruled by the United States Civil Administration of the Ryukyu Islands until 1972, and was RHT until 6 a.m. the morning of 30 July 1978, when it switched back to LHT. The conversion operation was known as 730 (Nana-San-Maru, which refers to the date of the changeover). Okinawa is one of only a few places to have changed from RHT to LHT in the late 20th century. While Japan drives on the left and most Japanese vehicles are RHD, imported vehicles (e.g. BMW, Mercedes-Benz, Porsche) are generally bought as LHD since LHD cars are considered to be status symbols. Vietnam became RHT as part of French Indochina, as did Laos and Cambodia. In Cambodia, RHD cars, many of which were smuggled from Thailand, were banned in 2001, even though they accounted for 80% of vehicles in the country. Europe In a study of the ancient traffic system of Pompeii, Eric Poehler was able to show that drivers of carts drove in the middle of the road whenever possible. This was the case even on roads wide enough for two lanes. The wear marks on the kerbstones, however, prove that when there were two lanes of traffic, and the volume of traffic made it necessary to divide the lanes, the drivers always drove on the right-hand side. These considerations can also be demonstrated in the archaeological findings of other cities in the Roman Empire. One of the first references in England to requiring traffic direction was an order by the London Court of Aldermen in 1669, requiring a man to be posted on London Bridge to ensure that "all cartes going to keep on the one side and all cartes coming to keep on the other side". It was later legislated as the London Bridge Act 1756 (29 Geo. 2 c. 40), which required that "all carriages passing over the said bridge from London shall go on the east side thereof" – those going south to remain on the east, i.e. the left-hand side by direction of travel. This may represent the first statutory requirement for LHT. In the Kingdom of Ireland, a law of 1793 (33 Geo. 3. c. 56 (I)) provided a ten-shilling fine to anyone not driving or riding on the left side of the road within the county of the city of Dublin, and required the local road overseers to erect written or printed notices informing road users of the law. The Road in Down and Antrim Act 1798 (38 Geo. 3. c. 28 (I)) required drivers on the road from Dublin to Donadea to keep to the left. This time, the punishment was ten shillings if the offender was not the owner of the vehicle, or one Irish pound (twenty shillings) if he/she was. The Grand Juries (Ireland) Act 1836 (6 & 7 Will. 4 c. 116) mandated LHT for the whole country, violators to be fined up to five shillings and imprisoned in default for up to one month. An oft-repeated story is that Napoleon changed the custom from LHT to RHT in France and the countries he conquered after the French Revolution. Scholars who have looked for documentary evidence of this story have found none, and contemporary sources have not surfaced, In 1827, long after Napoleon's reign, Edward Planta wrote that, in Paris, "The coachmen have no established rule by which they drive on the right or left of the road, but they cross and jostle one another without ceremony." Rotterdam had no fixed rules until 1917, although the rest of the Netherlands was RHT. In May 1917 the police in Rotterdam ended traffic chaos by enforcing right hand traffic. 
In Russia, in 1709, the Danish envoy under Tsar Peter the Great noted the widespread custom for traffic in Russia to pass on the right, but it was only in 1752 that Empress Elizabeth officially issued an edict for traffic to keep to the right. After the Austro-Hungarian Empire broke up, the resulting countries gradually changed to RHT. In Austria, Vorarlberg switched in 1921, North Tyrol in 1930, Carinthia and East Tyrol in 1935, and the rest of the country in 1938. In Romania, Transylvania, the Banat and Bukovina were LHT until 1919, while Wallachia and Moldavia were already RHT. Partitions of Poland belonging to the German Empire and the Russian Empire were RHT, while the former Austrian Partition changed in the 1920s. Croatia-Slavonia switched on joining the Kingdom of Yugoslavia in 1918, although Istria and Dalmatia were already RHT. The switch in Czechoslovakia from LHT to RHT had been planned for 1939, but was accelerated by the start of the German occupation of Czechoslovakia that year. In Italy, it had been decreed in 1901 that each province define its own traffic code, including the handedness of traffic, and the 1903 Baedeker guide reported that the rule of the road varied by region. For example, in Northern Italy, the provinces of Brescia, Como, Vicenza, and Ravenna were RHT while nearby provinces of Lecco, Verona, and Varese were LHT, as were the cities Milan, Turin, and Florence. In 1915, allied forces of World War I imposed LHT in areas of military operation, but this was revoked in 1918. Rome was reported by Goethe as LHT in the 1780s. Naples was also LHT although surrounding areas were often RHT. In cities, LHT was considered safer since pedestrians, accustomed to keeping right, could better see oncoming vehicular traffic. In 1923 Benito Mussolini decreed that all LHT areas would gradually transition to RHT. Portugal switched to RHT in 1928. Finland, formerly part of LHT Sweden, switched to RHT in 1858 as the Grand Duchy of Finland by Russian decree. Spain switched to RHT in 1918, but not in the entire country. In Madrid people continued to drive on the left until 1924 when a national law forced drivers in Madrid switch to RHT. Madrid Metro still uses LHT. Sweden switched to RHT in 1967, having been LHT from about 1734 despite having land borders with RHT countries Norway and Finland, and approximately 90% of cars being left-hand drive (LHD). A referendum in 1955 overwhelmingly rejected a change to RHT, but, a few years later, the government ordered it and it occurred on Sunday, 3 September 1967 at 5 am. The accident rate then dropped sharply, but soon rose to near its original level. The day was known as Högertrafikomläggningen, or Dagen H for short. When Iceland switched to RHT the following year, it was known as Hægri dagurinn or H-dagurinn ("The H-Day"). Most passenger cars in Iceland were already LHD. The United Kingdom is LHT, but two of its overseas territories, Gibraltar and the British Indian Ocean Territory, are RHT. In the late 1960s, the British Department for Transport considered switching to RHT, but declared it unsafe and too costly for such a built-up nation. Road building standards, for motorways in particular, allow asymmetrically designed road junctions, where merge and diverge lanes differ in length. Today, four countries in Europe continue to use LHT, all island nations: the United Kingdom, Republic of Ireland (formerly part of the UK), Cyprus and Malta (both former British colonies). 
Oceania Many former British colonies in the region have always been LHT, including Australia, New Zealand, Fiji, Kiribati, Solomon Islands, Tonga, and Tuvalu; and nations that were previously administered by Australia: Nauru and Papua New Guinea. New Zealand Initially traffic was slow and very sparse, but, as early as 1856, a newspaper said, "The cart was near to the right hand kerb. According to the rules of the road, it should have been on the left side. In turning sharp round a right-hand corner, a driver should keep away to the opposite side." That rule was codified when the first Highway Code was written in 1936. Samoa Samoa, a former German colony, had been RHT for more than a century, but switched to LHT in 2009, making it the first territory in almost 30 years to change sides. The move was legislated in 2008 to allow Samoans to use cheaper vehicles imported from Australia, New Zealand, or Japan, and to harmonise with other South Pacific nations. A political party, The People's Party, was formed by the group People Against Switching Sides (PASS) to protest against the change, with PASS launching a legal challenge; in April 2008 an estimated 18,000 people attended demonstrations against switching. The motor industry was also opposed, as 14,000 of Samoa's 18,000 vehicles were designed for RHT and the government refused to meet the cost of conversion. After months of preparation, the switch from right to left happened in an atmosphere of national celebration. There were no reported incidents. At 05:50 local time, Monday 7 September, a radio announcement halted traffic, and an announcement at 6:00 ordered traffic to switch to LHT. The change coincided with more restrictive enforcement of speeding and seat-belt laws. That day and the following were declared public holidays, to reduce traffic. The change included a three-day ban on alcohol sales, while police mounted dozens of checkpoints, warning drivers to drive slowly. Potential future shifts Rwanda and Burundi, former Belgian colonies in Central Africa, are RHT but are considering switching to LHT like neighbouring members of the East African Community (EAC). A survey in 2009 found that 54% of Rwandans favoured the switch. Reasons cited were the perceived lower costs of RHD vehicles, easier maintenance and the political benefit of harmonising traffic regulations with other EAC countries. The survey indicated that RHD cars were 16% to 49% cheaper than their LHD counterparts. In 2014, an internal report by consultants to the Ministry of Infrastructure recommended a switch to LHT. In 2015, the ban on RHD vehicles was lifted; RHD trucks from neighbouring countries cost $1,000 less than LHD models imported from Europe. Changing sides at borders Although many LHT jurisdictions are on islands, there are cases where vehicles may be driven from LHT across a border into a RHT area. Such borders are mostly located in Africa and southern Asia. The Vienna Convention on Road Traffic regulates the use of foreign registered vehicles in the 78 countries that have ratified it. LHT Thailand has three RHT neighbours: Cambodia, Laos, and Myanmar. Most of its borders use a simple traffic light to do the switch, but there are also interchanges that enable the switch while keeping up a continuous flow of traffic. There are six road border crossing points between Hong Kong and mainland China. In 2006, the daily average number of vehicle trips recorded at Lok Ma Chau was 31,100. 
The next largest is Man Kam To, where there is no changeover system and the border roads on the mainland side Wenjindu intersect as one-way streets with a main road. The Takutu River Bridge (which links LHT Guyana and RHT Brazil) is the only border in the Americas where traffic changes sides. Road vehicle configurations Steering wheel position In RHT jurisdictions, vehicles are typically configured as left hand drive (LHD), with the steering wheel on the left side of the passenger compartment. In LHT jurisdictions, the reverse is true as the right hand drive (RHD) configuration. In most jurisdictions, the position of the steering wheel is not regulated, or explicitly permitted to be anywhere. The driver's side, the side closer to the centre of the road, is sometimes called the offside, while the passenger side, the side closer to the side of the road, is sometimes called the nearside. Most windscreen wipers are preferentially designed to better clean the driver's side of the windscreen and thus have a longer wiper blade on the driver's side and wipe up from the passenger side to the driver's side. Thus on LHD configurations, they wipe up from right to left, viewed from inside the vehicle, and do the opposite on RHD vehicles. In both LHD and RHD vehicles, gear shifters are in the same position, and the shift patterns are not reversed. Historically there was less consistency in the relationship of the position of the driver to the handedness of traffic. Most American cars produced before 1910 were RHD. In 1908 Henry Ford standardised the Model T as LHD in RHT America, arguing that with RHD and RHT, the passenger was obliged to "get out on the street side and walk around the car" and that with steering from the left, the driver "is able to see even the wheels of the other car and easily avoids danger." By 1915 other manufacturers followed Ford's lead, due to the popularity of the Model T. In specialised cases, the driver will sit on the nearside, or curbside. Examples include: Where the driver needs a good view of the nearside, e.g. street sweepers, or vehicles driven along unstable road edges. Similarly in mountainous areas the driver may be seated opposite side so that they have a better view of the road edge which may fall away for very many metres into the valley below. Swiss Postbuses in mountainous areas are a well known example. Where it is more convenient for the driver to be on the nearside, e.g. delivery vehicles. The Grumman LLV postal delivery truck is widely used with RHD configurations in RHT North America. Some Unimogs are designed to switch between LHD and RHD to permit operators to work on the more convenient side of the truck. Generally, the convention is to mount a motorcycle on the left, and kickstands are usually on the left which makes it more convenient to mount on the safer kerbside as is the case in LHT. Some jurisdictions prohibit fitting a sidecar to a motorcycle's offside. In 2020, there were 160 LHD heavy goods vehicles in the UK involved in accidents (%) for a total of 3,175 accidents, killing 215 people (%) for a total of 4271. It has been suggested that right-hand drive vehicles, and hence the left-hand traffic direction, are associated with greater safety. As most drivers are right-handed, the dominant right hand remains controlled on the steering wheel while the non-dominant left hand can manipulate gears. The right field of vision may also be more dominant, thereby permitting a superior view of oncoming traffic. 
Dashboard configuration
Some manufacturers primarily produce left-hand drive vehicles, due to the larger or nearer market for such vehicles. For such models supplied to left-hand traffic markets in the right-hand drive configuration, the manufacturer may reuse the same dashboard configuration as is used in the left-hand drive models, with the steering column and pedals moved to the right-hand side. Oft-used controls (such as audio volume and fan controls) that were placed near the left-hand driver for ease of access are now situated on the far side of the center console for the right-hand driver. This may make them more difficult to reach quickly or without looking away from the road ahead. In some cases, the manufacturer's dashboard design incorporates blanks and modular components, which permits the controls and underlying electronics to be rearranged to suit the right-hand drive model. This may be done in the factory, after import, or as an after-market modification.
Headlamps and other lighting equipment
Most low-beam headlamps produce an asymmetrical light suitable for use on only one side of the road. Low-beam headlamps in LHT jurisdictions throw most of their light forward-leftward; those for RHT throw most of their light forward-rightward, thus illuminating obstacles and road signs while minimising glare for oncoming traffic. In Europe, headlamps approved for use on one side of the road must be adaptable to produce adequate illumination with controlled glare for temporarily driving on the other side of the road. This may be achieved by affixing masking strips or prismatic lenses to a part of the lens, or by moving all or part of the headlamp optic so that all or part of the beam is shifted or the asymmetrical portion is occluded. Some varieties of the projector-type headlamp can be fully adjusted to produce a proper LHT or RHT beam by shifting a lever or other movable element in or on the lamp assembly. Some vehicles adjust the headlamps automatically when the car's GPS detects that the vehicle has moved from LHT to RHT and vice versa.
Rear fog lamps
In Europe, since the early 1980s, cars must be equipped with one or two red rear fog lamps. A single rear fog lamp must be located between the vehicle's longitudinal centreline and the outer extent of the driver's side of the vehicle.
Crash testing differences
ANCAP reports that some RHD cars imported to Australia did not perform as well on crash tests as the LHD versions, although the cause is unknown and may be due to differences in testing methodology.
Rail traffic
National rail
In most countries rail traffic travels on the same side as road traffic. However, there are many instances of railways built using LHT British technology which remained LHT despite their nations' road traffic becoming RHT. Examples include: Argentina, Belgium, Bolivia, Cambodia, China, Egypt, France, Iraq, Israel, Italy, Laos, Monaco, Morocco, Myanmar, Nigeria, Peru, Portugal, Senegal, Slovenia, Sweden, Switzerland, Taiwan, Tunisia, Uruguay and Venezuela. France is mainly LHT for trains except for the classic lines in Alsace–Lorraine, which were converted from LHT to RHT under German administration from 1870 to 1918. In North America, multi-track rail lines with centralized traffic control are typically signaled to allow operation on any track in both directions, and the side of operation will vary based on the railroad's specific operational requirements. In practice, however, rail traffic is more often RHT.
Indonesia is the only country in the world which has RHT for rails (even for newer rail systems such as the LRT and the MRT systems) and LHT for roads. Metro/Tram/Light rail Metro and light rail sides of operation vary and might not match railways or roads in their country. Some systems where the metro matches the side of the national rail network but not the roads include those in Bilbao, Buenos Aires, Cairo, Catania, Jakarta, Lisbon, Lyon, Naples, and Rome. A small number of cities, including Madrid and Stockholm, originally ran on the same side as road traffic when the systems opened in 1919 and 1950 respectively, but had road traffic change in 1924 and 1967 respectively. Conversely, metros in France (except for the aforementioned Lyon) and mainland China run on the right just like roads, while mainline trains run on the left. A small number of systems have situational reasons to differ from the norm. On the MTR in Hong Kong, the section originally known as the Ma On Shan line (now part of the Tuen Ma line) runs on the right to make interchanging with the East Rail line easier, while the rest of the system runs on the left. On the Seoul Metropolitan Subway, lines that integrate with Korail (except Line 3, which is disconnected from the rest of the network) run on the left, while the lines that are not run on the right. In Nizhny Novgorod, Line 2 runs on the left due to the track layout when it first opened as a branch of Line 1. In Lima, Line 1 runs entirely on the left, while Line 2 runs entirely on the right. Metro Line M1 in Budapest is the only metro line to have switched sides. It originally ran on the left but switched to right hand-running during the line's reconstruction around 1973. Because trams frequently operate on roads, they generally operate on the same side as other road traffic. Boat traffic Boats are traditionally piloted from starboard (the right-hand side) to facilitate priority to the right. According to the International Regulations for Preventing Collisions at Sea, water traffic is effectively RHT: a vessel proceeding along a narrow channel must keep to starboard, and when two power-driven vessels are meeting head-on both must alter course to starboard also. Typically, especially for larger vessels, a radio call will be made between two vessels, or with a Vessel Traffic Service (VTS) to co-ordinate if the vessels will pass "green-to-green" or "red-to-red". Marine traffic uses a system of green lighting for the starboard (right-hand) side and red for port (left-hand) side: to pass "green-to-green" the green (starboard, right-hand) side of the vessels will pass each other, essentially being left-hand traffic. Similarly, passing "red-to-red" means the red (port, left-hand) side of the vessels will pass each other, forming right-hand traffic. In busy waterways, directional shipping lanes may be set up to facilitate handedness of traffic. For example, the Strait of Dover (Pas-de-Calais) on the English Channel uses RHT with North Sea-bound vessels following the French coast and Atlantic-bound vessels following the English coast. Aircraft traffic For aircraft the US Federal Aviation Regulations suggest RHT principles, both in the air and on water, and in aircraft with side-by-side cockpit seating, the pilot-in-command (or more senior flight officer) traditionally occupies the left seat. However, helicopter practice tends to favour the right hand seat for the pilot-in-command, particularly when flying solo. 
Worldwide distribution by country
Of the 195 countries currently recognised by the United Nations, 141 use RHT and 54 use LHT on roads in general. A country and its territories and dependencies are counted as one.
Legality of wrong-hand-drive vehicles by country
According to the Vienna Convention on Road Traffic, which mostly covers Europe, a vehicle that is registered and legal to drive in one of the Convention countries may legally be driven in any of the other Convention countries, for visits and for the first year of residence after moving. This applies regardless of whether the vehicle fulfils all the rules of the country being visited. The convention does not affect rules on the usage or registration of local vehicles.
See also
Hook turn
Traffic-light signalling and operation
World Forum for Harmonization of Vehicle Regulations
References
External links
Google Maps placemarks of border crossings where traffic changes sides (placemarks file, requires Google Earth)
The Extraordinary Street Railways of Asunción, Paraguay
Chirality
Driving
Road transport
Rules of the road
Traffic law
Left- and right-hand traffic
Physics,Chemistry,Biology
6,913
48,429,332
https://en.wikipedia.org/wiki/Isoazimuth
The isoazimuth is the locus of the points on the Earth's surface whose initial orthodromic course with respect to a fixed point is constant. That is, if the initial orthodromic course Z from the starting point S to the fixed point X is 80 degrees, the associated isoazimuth is formed by all points whose initial orthodromic course with respect to point X is 80° (with respect to true north). The isoazimuth is written using the notation isoz(X, Z).
The isoazimuth is of use when navigating with respect to an object of known location, such as a radio beacon. A straight line called the azimuth line of position is drawn on a map, and on most common map projections this is a close enough approximation to the isoazimuth. On the Littrow projection, the correspondence is exact. This line is then crossed with an astronomical observation called a Sumner line, and the result gives an estimate of the navigator's position.
Isoazimuth on the spherical Earth
Let X be a fixed point on the Earth with latitude $B_X$ and longitude $L_X$. In a terrestrial spherical model, the equation of the isoazimuth curve with initial course Z passing through the point S(B, L) is
$$\tan(B_X)\,\cos(B) = \sin(B)\,\cos(L_X - L) + \sin(L_X - L)\,\cot(Z).$$
Isoazimuth of a star
In this case the point X is the illuminating pole (geographical position) of the observed star, and the angle Z is its azimuth. The equation of the isoazimuth curve for a star with coordinates (Dec, GHA), its declination and Greenwich hour angle, observed under an azimuth Z is given by
$$\tan(\mathrm{Dec})\,\cos(B) = \sin(B)\,\cos(\mathrm{LHA}) + \sin(\mathrm{LHA})\,\cot(Z),$$
where LHA = GHA + L is the local hour angle (with the sign conventions for LHA and Z chosen consistently); all points of latitude B and longitude L satisfying this relation define the curve.
See also
Great circle
Rhumb line
Cartography
Navigational algorithms
References
External links
Navigational Algorithms http://sites.google.com/site/navigationalalgorithms/
Institut français de navigation https://web.archive.org/web/20140103212146/http://www.ifnavigation.org/
Cartography
Navigation
Celestial navigation
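For readers who prefer code to spherical trigonometry, the defining condition can be checked numerically. The sketch below uses the standard initial great-circle course formula, written with atan2 for quadrant safety; the function names and the tolerance are illustrative choices, not part of any published navigational-algorithms package.

```python
import math

def initial_course(lat_s, lon_s, lat_x, lon_x):
    """Initial orthodromic (great-circle) course from S to X, in degrees
    clockwise from true north.  Coordinates in degrees; north latitude and
    east longitude are positive."""
    b, bx = math.radians(lat_s), math.radians(lat_x)
    dl = math.radians(lon_x - lon_s)
    y = math.sin(dl) * math.cos(bx)
    x = math.cos(b) * math.sin(bx) - math.sin(b) * math.cos(bx) * math.cos(dl)
    return math.degrees(math.atan2(y, x)) % 360.0

def on_isoazimuth(lat_s, lon_s, lat_x, lon_x, z_deg, tol_deg=1e-6):
    """True if S lies on isoz(X, Z), i.e. its initial course towards X is Z."""
    diff = abs(initial_course(lat_s, lon_s, lat_x, lon_x) - z_deg) % 360.0
    return min(diff, 360.0 - diff) <= tol_deg

# Example: course from (10 deg N, 20 deg W) towards a beacon at (40 deg N, 10 deg E)
print(initial_course(10.0, -20.0, 40.0, 10.0))
```

A point S lies on isoz(X, Z) exactly when on_isoazimuth returns True for the chosen Z.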
Isoazimuth
Astronomy
451
3,198,102
https://en.wikipedia.org/wiki/Earth%20lodge
An earth lodge is a semi-subterranean building covered partially or completely with earth, best known from the Native American cultures of the Great Plains and Eastern Woodlands. Most earth lodges are circular in construction with a dome-like roof, often with a central or slightly offset smoke hole at the apex of the dome. Earth lodges are well-known from the more-sedentary tribes of the Plains such as the Hidatsa, Mandan, and Arikara, but they have also been identified archaeologically among sites of the Mississippian culture in the eastern United States. Structure Construction materials and techniques Earth lodges were typically constructed using the wattle and daub technique, with a thick coating of earth. The dome-like shape of the earth lodge was achieved by the use of angled (or carefully bent) tree trunks, although hipped roofs were also sometimes used. During construction the workers would dig an area a few feet beneath the surface, allowing the entire building to have a floor somewhat beneath the surrounding ground level. They set posts into holes in the ground around the edges of the earth lodge, and made the tops meet in (or near) the middle. This construction technique is sturdy and can produce large buildings (some as much as across), in which more than one family would live. Their size is limited by the length of available tree trunks. Internal vertical support posts were sometimes used to give additional structural support to the roof rafters. After a strong layer of sticks (or reeds) was wrapped through and over the radiating roof timbers, the people often applied a layer of thatch as part of the roof. The structure was then entirely covered in earth. The earth layer (and the partially subterranean foundation) provided insulation against the extreme temperatures of the Plains. The structures consisted of a clay outer shell over an inner shell of long grasses and a woven willow ceiling. The middle of the earth lodge was used as a fire pit, and a hole was built into the center. This smoke hole was often covered by a bullboat during inclement weather. Logs were gathered each spring as the ice receded and sheared them off; fresh logs were also cut. The most common wood used was cottonwood. Cottonwood was a wet, soft wood; this meant that lodges often required rebuilding every six to eight years. Labor and use In Hidatsa culture, men only raised the large logs; the rest of the work was done by women. Therefore, a lodge was considered to be owned by the woman who built it. A vestibule of exposed logs marked the entrance and provided an entryway; these vestibules were often a minimum of in length (determined by the size of the lodge and resulting outer-clay thickness). A windbreak was built on the interior of the lodge, blocking the wind and giving privacy to the occupants. Earth lodges often also contained cache pits (root cellar-type holes) lined with willow and grasses, within which dried vegetables were stored. Locations Earth lodges were often built alongside tribal farm fields, alternating with tipis (which were used during the nomadic hunting season). A reconstructed earth lodge can be seen at the Glenwood, Iowa's Lake Park. A village entirely made up of earth-lodges may be seen at New Town, North Dakota. The village consists of six family-sized earth lodges and one large ceremonial lodge. In addition, a garden area and corrals have been built for authenticity. The park is open to the public and located west of New Town at the Earthlodge Village Site. 
The family earth lodges are roughly in diameter. The ceremonial earth lodge is more than in diameter. The park is the central point in a rebuilding and cultural renewal effort by the three affiliated tribes of the Fort Berthold Indian Reservation. This is the only village of its kind to be constructed by the Mandan, Hidatsa and Arikara Nations in over 100 years. Use by the Mississippian culture A number of major Mississippian culture mound centers have identified earth lodges, either beneath (i.e. preceding) mound construction or as a mound-top building. Sequential constructions, collapses, and rebuilding of earth lodges seems to be part of the mechanism of construction for certain mounds (including the mound at Town Creek Indian Mound and some of the mounds at the Ocmulgee National Monument). In Kanabec County, Minnesota, the Groundhouse River flows through such a center. According to Newton H. Winchell in The Aborigines of Minnesota, the river was named for the earth lodges of the Hidatsa, who lived in the area before being driven westward to the Missouri River by the Sioux. The Hidatsa lived in wooden huts, covered with earth. See also Earth house Kiva Quiggly hole Zemlyanka Vernacular architecture References External links House types Traditional Native American dwellings Pre-Columbian architecture Mound Builders Indigenous culture of the Great Plains Semi-subterranean structures Vernacular architecture Indigenous architecture Western (genre) staples and terminology Earth structures
Earth lodge
Engineering
1,008
3,864,929
https://en.wikipedia.org/wiki/GRDDL
GRDDL (pronounced "griddle") is a markup format for Gleaning Resource Descriptions from Dialects of Languages. It is a W3C Recommendation, and enables users to obtain RDF triples out of XML documents, including XHTML. The GRDDL specification shows examples using XSLT, however it was intended to be abstract enough to allow for other implementations as well. It became a Recommendation on September 11, 2007. Mechanism XHTML and transformations A document specifies associated transformations, using one of a number of ways. For instance, an XHTML document may contain the following markup: <head profile="http://www.w3.org/2003/g/data-view http://dublincore.org/documents/dcq-html/ http://gmpg.org/xfn/11"> <link rel="transformation" href="grokXFN.xsl" /> Document consumers are informed that there are GRDDL transformations available in this page, by including the following in the profile attribute of the head element: http://www.w3.org/2003/g/data-view The available transformations are revealed through one or more link elements: <link rel="transformation" href="grokXFN.xsl" /> This code is valid for XHTML 1.x only. The profile attribute has been dropped in HTML5, including its XML serialisation. Microformats and profile transformations If an XHTML page contains Microformats, there is usually a specific profile. For instance, a document with hcard information should have: <head profile="http://www.w3.org/2003/g/data-view http://www.w3.org/2006/03/hcard"> When fetched http://www.w3.org/2006/03/hcard has: <head profile="http://www.w3.org/2003/g/data-view"> and <p>Use of this profile licenses RDF data extracted by <a rel="profileTransformation" href="../vcard/hcard2rdf.xsl">hcard2rdf.xsl</a> from <a href="http://www.w3.org/2006/vcard/ns">the 2006 vCard/RDF work</a>. </p> The GRDDL aware agent can then use that profileTransformation to extract all hcard data from pages that reference that link. XML and transformations In a similar fashion to XHTML, GRDDL transformations can be attached to XML documents. XML namespace transformations Just like a profileTransformation, an XML namespace can have a transformation associated with it. This allows entire XML dialects (for instance, KML or Atom) to provide meaningful RDF. An XML document simply points to a namespace <foo xmlns="http://example.com/1.0/"> <!-- document content here --> </foo> and when fetched, http://example.com/1.0/ points to a namespaceTransformation. This also allows very large amounts of the existing XML data in the wild to become RDF/XML with minimal effort from the namespace author. Output Once a document has been transformed, there is an RDF representation of that data. This output is generally put into a database and queried via SPARQL. Implementations GRDDL consumers (also known as GRDDL aware agents) OpenLink Virtuoso through its Sponger cartridge system XML_GRDDL, a semi compliant PHP 5 library See other implementations See also Microformats – a simplified approach to semantically annotate data in websites RDFa – a W3C Recommendation for annotating websites with RDF data eRDF – an alternative to RDFa References Notes External links W3C GRDDL Specification W3C GRDDL Working Group W3C GRDDL Primer W3C GRDDL Use-Cases Semantic Web World Wide Web Consortium standards XML-based standards
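The mechanism described above — a page declaring the data-view profile and advertising an XSLT via link rel="transformation", which a GRDDL-aware agent fetches and applies to glean RDF — can be sketched in Python. This is a minimal illustration only, assuming the lxml library is available and that the page follows the XHTML 1.x pattern shown above; profile and namespace transformations are ignored, and the page URL in the usage line is hypothetical.

# Minimal sketch of a GRDDL-aware agent for the XHTML <link rel="transformation"> case.
# Assumptions: lxml is installed, the page is well-formed XHTML 1.x, and the linked
# transformation is an XSLT stylesheet. Profile and namespace transformations are
# not handled here.
from urllib.request import urlopen
from urllib.parse import urljoin
from lxml import etree

XHTML_NS = {"x": "http://www.w3.org/1999/xhtml"}
GRDDL_PROFILE = "http://www.w3.org/2003/g/data-view"

def gleaned_rdf(page_url):
    """Return serialized RDF produced by each advertised GRDDL transformation."""
    doc = etree.parse(urlopen(page_url))
    # The data-view profile signals that GRDDL transformations are present.
    profile = doc.xpath("string(//x:head/@profile)", namespaces=XHTML_NS)
    if GRDDL_PROFILE not in profile.split():
        return []
    results = []
    # Each <link rel="transformation"> points at an XSLT mapping the page to RDF/XML.
    for href in doc.xpath('//x:link[@rel="transformation"]/@href', namespaces=XHTML_NS):
        xslt = etree.XSLT(etree.parse(urlopen(urljoin(page_url, href))))
        results.append(str(xslt(doc)))
    return results

if __name__ == "__main__":
    for rdf_xml in gleaned_rdf("http://example.org/contacts.xhtml"):  # hypothetical page
        print(rdf_xml)

A conforming agent must additionally handle base URIs, space-separated rel values, profile documents and namespace documents, as laid out in the GRDDL specification; the output would then typically be loaded into an RDF store and queried with SPARQL, as noted above.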
GRDDL
Technology
905
47,786,948
https://en.wikipedia.org/wiki/Appreciative%20inquiry%20in%20education
Appreciative Inquiry (AI) is an approach based on the belief that improvement is more engaging when the focus is on strengths rather than weaknesses. People tend to respond to positive statements but react to negative statements that concern them. Children are more sensitive to their self-worth and thrive on what makes them feel good, what makes them feel accepted, included and recognized. AI is a powerful tool that can be used in the field of education to enable children to discover what is good about them and dream of what they can do with this realization. People can be affected by the use of AI. Overview Appreciative Inquiry is the cooperative search for the best in people, their organizations, and the world around them. It involves systematic discovery of what gives a system 'life' when it is most effective and capable in economic, ecological, and human terms. AI involves the art and practice of asking questions that strengthen a system's capacity to heighten positive potential. It mobilizes inquiry through crafting an "unconditional positive question", often involving hundreds or sometimes thousands of people. Application Applied in the education sector, AI is a cooperative search for the best in children, their school, their teachers, their classmates and their parents, and this discovery influences and helps shape their image of the future. It all begins with a story which the appreciative inquirer tells about him/herself, and this story is only about where the child has experienced the best of what he or she could do, e.g. in reading, writing, or passing tests and exams. With this flow of energy from past experience, the child is poised for a similar experience in the future and so nurtures all that gives energy and brings the joy of performance, acceptance and readiness to move ahead. AI starts with a statement of purpose or object of inquiry, which then takes the inquirer through five steps (known as the 5Ds of AI). Appreciative Inquiry in the education sector can amplify the motivation of the students and help them become most alive and effective. AI brings about social change in the pupil as the emphasis is on what is good and the belief that people nurture what they appreciate rather than what they are not happy about. The system of education can be based on the five principles of AI, which will enable the child to discover, through her/his own story, what is good about him/her and dream of how he/she can capitalize on this story of goodness to do more of such things that he/she appreciates about himself, about his environment, about his world. Principles A quick look at the principles will enable the understanding of why AI is suitable for our education system: i. Constructionist Principle – argues that the language and metaphors we use don't just describe reality (the world), they actually create our reality (the world). It means that great care is taken in the choice of words that we use, as it will influence the kind of future we create. The language of the teacher influences what the child considers as his/her reality, and this influences his self-perception and hence his self-worth, which is very important for what he/she becomes in the future. ii. Principle of Simultaneity – change begins from the moment we ask a question about a thing. The heart of AI is an unconditional positive question. For example: what was the best thing that happened to you in the last week? iii. 
The Poetic Principle - as the topic of inquiry is on what is good about the individual or the environment, this helps open a new chapter in the life of the child. Stories reveal qualities which had not been previously realized and appreciated. iv. The Anticipatory Principle – we grow into the images we create, hence, when the child is made to see himself as good, his imagination about his future will always be good enough and as magnet this imagined future will always pull the child towards this goal. v. The Positive Principle – feelings of hope, inspiration, caring, sense of purpose, joy and creating something meaningful or being part of something good are among what we define as positive. It is therefore, important that the questions asked to the child are affirmative and positive. It allows a student to be potentially free from any kind of bondage or control. AI gives an opportunity for the students to showcase their innovative side rather than just rote memory. This in turn makes them autonomous learners. The students are able to understand their strengths every time their potentials are amplified. Use of this in the educational sector would bring about a sea of difference as there would be more room for amplifying the existing positive energy. Even the basic assumptions of AI which includes the assumption that 'in every human situation, there is something that works" is a clear indication that no child is incapable of producing a result that would even surprise the child him/herself. All that the child needs are such questions that would enable him/her tap into the core of his/her being. The system of education which relies on an average test or examination grades label children who do not meet the marks as 'failed'. AI in education enables the child to identify the subjects where he/she is very satisfied with the performance, and through story, the child discovers what he/she did differently and how to tap this aspect for more satisfaction. This is why AI is also referred as 'locating the energy for change'. It is a search for what is good through stories and what needs to be done through dreams. AI brings a dream to reality because, motivation for the future depends on the images of success of the past. Educational reform movement One of the strands of educational reform movements in the last two decades has been the call for greater collaborative efforts, both among educators as well as with parents, students and the surrounding community. Educational researcher Hargreaves (1994) referred to collaboration as an 'articulating and integrating principle' (p. 245) for school improvement, providing a way for teachers to learn from each other, gain moral support, coordinate action, and reflect on their classroom practices, their values, and the meaning of their work . These concerns point to the need for a change process that has a positive focus, is essentially self-organizing, encourages deep reflection, and avoids the pitfalls of manipulation by school administrators. This analysis points to a consideration of appreciative inquiry, a strengths-based process that builds on 'the best of what is' in an organization. References Educational practices Developmental psychology
Appreciative inquiry in education
Biology
1,332
35,757,264
https://en.wikipedia.org/wiki/Prescriptive%20analytics
Prescriptive analytics is a form of business analytics which suggests decision options for how to take advantage of a future opportunity or mitigate a future risk, and shows the implication of each decision option. It enables an enterprise to consider "the best course of action to take" in the light of information derived from descriptive and predictive analytics. Overview Prescriptive analytics is the third and final phase of business analytics, which also includes descriptive and predictive analytics. Referred to as the "final frontier of analytic capabilities", prescriptive analytics entails the application of mathematical and computational sciences and suggests decision options for how to take advantage of the results of descriptive and predictive phases. The first stage of business analytics is descriptive analytics, which still accounts for the majority of all business analytics today. Descriptive analytics looks at past performance and understands that performance by mining historical data to look for the reasons behind past success or failure. Most management reporting – such as sales, marketing, operations, and finance – uses this type of post-mortem analysis. The next phase is predictive analytics. Predictive analytics answers the question of what is likely to happen. This is where historical data is combined with rules, algorithms, and occasionally external data to determine the probable future outcome of an event or the likelihood of a situation occurring. The final phase is prescriptive analytics, which goes beyond predicting future outcomes but also suggesting actions to benefit from the predictions and showing the implications of each decision option. Prescriptive analytics uses algorithms and machine learning models to simulate various scenarios and predict the likely outcomes of different decisions. It then suggests the best course of action based on the desired outcome and the constraints of the situation. Prescriptive analytics not only anticipates what will happen and when it will happen, but also why it will happen. Further, prescriptive analytics suggests decision options on how to take advantage of a future opportunity or mitigate a future risk and shows the implication of each decision option. Prescriptive analytics incorporates both structured and unstructured data, and uses a combination of advanced analytic techniques and disciplines to predict, prescribe, and adapt. It can continually take in new data to re-predict and re-prescribe, thus automatically improving prediction accuracy and prescribing better decision options. Effective prescriptive analytics utilises hybrid data, a combination of structured (numbers, categories) and unstructured data (videos, images, sounds, texts), and business rules to predict what lies ahead and to prescribe how to take advantage of this predicted future without compromising other priorities. Basu suggests that without hybrid data input, the benefits of prescriptive analytics are limited. In addition to this variety of data types and growing data volume, incoming data can also evolve with respect to velocity, that is, more data being generated at a faster or a variable pace. Business rules define the business process and include objectives constraints, preferences, policies, best practices, and boundaries. 
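As a toy illustration of the predict-then-prescribe idea just described — and not a depiction of any particular vendor's product — the following sketch feeds a simple demand forecast into a linear program that recommends a course of action subject to business-rule constraints. It assumes NumPy and SciPy are installed, and every number and product name is invented for the example.

# Toy prescriptive-analytics loop: predict the future, then prescribe actions under constraints.
# All figures (demand history, profits, capacity) are invented for illustration.
import numpy as np
from scipy.optimize import linprog

# Predictive step: a deliberately simple trend forecast of next month's demand.
history = np.array([110.0, 118.0, 126.0, 133.0, 141.0])   # past monthly demand (units)
slope, intercept = np.polyfit(np.arange(len(history)), history, 1)
forecast_demand = slope * len(history) + intercept

# Prescriptive step: choose production of two products to maximize profit.
profit = np.array([40.0, 55.0])        # profit per unit of product A and product B
machine_hours = np.array([2.0, 3.0])   # machine hours needed per unit
hours_available = 400.0                # business rule: capacity constraint

# linprog minimizes, so negate the profit vector to maximize it.
res = linprog(
    c=-profit,
    A_ub=[machine_hours, [1.0, 1.0]],            # capacity; total output <= forecast demand
    b_ub=[hours_available, forecast_demand],
    bounds=[(0, None), (0, None)],
    method="highs",
)
plan_a, plan_b = res.x
print(f"Forecast demand: {forecast_demand:.0f} units")
print(f"Prescribed plan: {plan_a:.1f} units of A, {plan_b:.1f} units of B")
print(f"Expected profit: {-res.fun:.0f}")

Showing the implication of each decision option, as described above, amounts to re-solving with alternative constraints or objectives and comparing the resulting plans.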
Mathematical models and computational models are techniques derived from mathematical sciences, computer science and related disciplines such as applied statistics, machine learning, operations research, natural language processing, computer vision, pattern recognition, image processing, speech recognition, and signal processing. The correct application of all these methods and the verification of their results implies the need for resources on a massive scale including human, computational and temporal for every Prescriptive Analytic project. In order to spare the expense of dozens of people, high performance machines and weeks of work one must consider the reduction of resources and therefore a reduction in the accuracy or reliability of the outcome. The preferable route is a reduction that produces a probabilistic result within acceptable limits. All three phases of analytics can be performed through professional services or technology or a combination. In order to scale, prescriptive analytics technologies need to be adaptive to take into account the growing volume, velocity, and variety of data that most mission critical processes and their environments may produce. One criticism of prescriptive analytics is that its distinction from predictive analytics is ill-defined and therefore ill-conceived. History While the term prescriptive analytics was first coined by IBM, and was later trademarked by Texas-based company Ayata, the underlying concepts have been around for hundreds of years. The technology behind prescriptive analytics synergistically combines hybrid data, business rules with mathematical models and computational models. The data inputs to prescriptive analytics may come from multiple sources: internal, such as inside a corporation; and external, also known as environmental data. The data may be structured, which includes numbers and categories, as well as unstructured data, such as texts, images, sounds, and videos. Unstructured data differs from structured data in that its format varies widely and cannot be stored in traditional relational databases without significant effort at data transformation. More than 80% of the world's data today is unstructured, according to IBM. Ayata's trade mark was cancelled in 2018. Applications in Oil and Gas Energy is the largest industry in the world ($6 trillion in size). The processes and decisions related to oil and natural gas exploration, development and production generate large amounts of data. Many types of captured data are used to create models and images of the Earth’s structure and layers 5,000 - 35,000 feet below the surface and to describe activities around the wells themselves, such as depositional characteristics, machinery performance, oil flow rates, reservoir temperatures and pressures. Prescriptive analytics software can help with both locating and producing hydrocarbons by taking in seismic data, well log data, production data, and other related data sets to prescribe specific recipes for how and where to drill, complete, and produce wells in order to optimize recovery, minimize cost, and reduce environmental footprint. Unconventional Resource Development With the value of the end product determined by global commodity economics, the basis of competition for operators in upstream E&P is the ability to effectively deploy capital to locate and extract resources more efficiently, effectively, predictably, and safely than their peers. 
In unconventional resource plays, operational efficiency and effectiveness is diminished by reservoir inconsistencies, and decision-making impaired by high degrees of uncertainty. These challenges manifest themselves in the form of low recovery factors and wide performance variations. Prescriptive Analytics software can accurately predict production and prescribe optimal configurations of controllable drilling, completion, and production variables by modeling numerous internal and external variables simultaneously, regardless of source, structure, size, or format. Prescriptive analytics software can also provide decision options and show the impact of each decision option so the operations managers can proactively take appropriate actions, on time, to guarantee future exploration and production performance, and maximize the economic value of assets at every point over the course of their serviceable lifetimes. Oilfield Equipment Maintenance In the realm of oilfield equipment maintenance, Prescriptive Analytics can optimize configuration, anticipate and prevent unplanned downtime, optimize field scheduling, and improve maintenance planning. According to General Electric, there are more than 130,000 electric submersible pumps (ESP's) installed globally, accounting for 60% of the world's oil production. Prescriptive Analytics has been deployed to predict when and why an ESP will fail, and recommend the necessary actions to prevent the failure. In the area of health, safety and environment, prescriptive analytics can predict and preempt incidents that can lead to reputational and financial loss for oil and gas companies. Pricing Pricing is another area of focus. Natural gas prices fluctuate dramatically depending upon supply, demand, econometrics, geopolitics, and weather conditions. Gas producers, pipeline transmission companies and utility companies have a keen interest in more accurately predicting gas prices so that they can lock in favorable terms while hedging downside risk. Prescriptive analytics software can accurately predict prices by modeling internal and external variables simultaneously and also provide decision options and show the impact of each decision option. Applications in maritime industry Common Structural Rules for  Bulk Carriers and Oil Tankers ( managed by IACS organisation ) intensively utilizes the term "prescriptive requirements"  as one of two main classes of checkable calculations by dedicated numerical tools and algorithms for verifying safety of ship hull construction. Applications in healthcare Multiple factors are driving healthcare providers to dramatically improve business processes and operations as the United States healthcare industry embarks on the necessary migration from a largely fee-for service, volume-based system to a fee-for-performance, value-based system. Prescriptive analytics is playing a key role to help improve the performance in a number of areas involving various stakeholders: payers, providers and pharmaceutical companies. Prescriptive analytics can help providers improve effectiveness of their clinical care delivery to the population they manage and in the process achieve better patient satisfaction and retention. Providers can do better population health management by identifying appropriate intervention models for risk stratified population combining data from the in-facility care episodes and home based telehealth. 
Prescriptive analytics can also benefit healthcare providers in their capacity planning by using analytics to leverage operational and usage data combined with data of external factors such as economic data, population demographic trends and population health trends, to more accurately plan for future capital investments such as new facilities and equipment utilization as well as understand the trade-offs between adding additional beds and expanding an existing facility versus building a new one. Prescriptive analytics can help pharmaceutical companies to expedite their drug development by identifying patient cohorts that are most suitable for the clinical trials worldwide - patients who are expected to be compliant and will not drop out of the trial due to complications. Analytics can tell companies how much time and money they can save if they choose one patient cohort in a specific country vs. another. In provider-payer negotiations, providers can improve their negotiating position with health insurers by developing a robust understanding of future service utilization. By accurately predicting utilization, providers can also better allocate personnel. See also Analytics Applied Statistics Big Data Business analytics Business Intelligence Data mining Decision Management Decision Engineering Forecasting Hadoop MapReduce OLTP Operations Research Statistics Notes References Further reading Davenport, Thomas H., Kalakota, Ravi, Taylor, James, Lampa, Mike, Franks, Bill, Jeremy, Shapiro, Cokins, Gary, Way, Robin, King, Joy, Schafer, Lori, Renfrow, Cyndy and Sittig, Dean, Predictions for Analytics in 2012 International Institute for Analytics (December 15, 2011) Bertolucci, Jeff, Prescriptive Analytics and Data: Next Big Thing? InformationWeek. (April 15, 2013). Laney, Douglas and Kart, Lisa, (March 20, 2012). Emerging Role of the Data Scientist and the Art of Data Science Gartner. McCormick Northwestern Engineering Prescriptive analytics is about enabling smart decisions based on data. Business Analytics Information Event, I2SDS and Department of Decision Sciences, School of Business, The George Washington University (February 10, 2011). "The Difference Between Operations Research and Business Analysis" OR Exchange / Informs (April 2011). Farris, Adam, "How Big Data is Changing the Oil & Gas Industry" Analytics. (November / December 2012). Venter, Fritz and Stein, Andrew "Images & Videos: Reall Big Data" Analytics. (November / December 2012). Venter, Fritz and Stein, Andrew "The Technology Behind Image Analytics" Analytics. (November / December 2012). Horner, Peter and Basu, Atanu, Analytics and the Future of Healthcare Analytics. (January / February 2012). Ghosh, Rajib, Basu, Atanu and Bhaduri, Abhijit, From ‘Sick’ Care to ‘Health’ Care Analytics. (July / August 2011). Fischer, Eric, Basu, Atanu, Hubele, Joachim and Levine, Eric, TV ads, Wanamaker’s Dilemma & Analytics Analytics. (March / April 2011) Basu, Atanu and Worth, Tim, Predictive Analytics Practical ways to Drive Customer Service, Looking Forward Analytics. (July / August 2010). Brown, Scott, Basu, Atanu and Worth, Tim, [http://www.analytics-magazine.org/november-december-2010/98-predictive-analytics-in-field-service Predictive Analytics in Field Service, Practical Ways to Drive Field Service, Looking Forward] Analytics. (November / December 2010). Pease, Andrew Bringing Optimization to the Business, SAS Global Forum 2012, Paper 165-2012 (2012). 
Wheatley, Malcolm "Underground Analytics- The Value of Predicting When an Oil Pump Fails" DataInformed, May 29, 2013. Presley, Jennifer "ESP for ESPs Exploration & Production Magazine, July 1, 2013 Basu, Atanu, "How Prescriptive Analytics Can Reshape Fracking in Oil & Gas", DataInformed, December 10, 2013. Basu, Atanu "What The Frack: U.S. Energy Prowess with Shale, Big Data Analytics" WIRED Blog. (January 2014). Logan, Amy "Science Fiction Now a Fact in the E&P World" Unconventional Oil & Gas Center, June 2, 2014. Mohan, Daniel "Your Data Already Know What You Don't" Exploration & Production Magazine, September, 2014. van Rijmenam, Mark "The Future of Big Data? Three Use Cases of Prescriptive Analytics" Datafloq, December 29, 2014. External links INFORMS' bi-monthly, digital magazine on the analytics profession Menon, Jai "Why Data Matters: Moving Beyond Prediction" IBM Global Openlabs for Performance-Enhancement Analytics and Knowledge System (GoPeaks) Types of analytics Big data Business intelligence terms Formal sciences Health care
Prescriptive analytics
Technology
2,818
222,850
https://en.wikipedia.org/wiki/Guinness%20Book%20of%20Astronomy
The Guinness Book of Astronomy is a book () by the British astronomer Patrick Moore, first published in 1979, and running to seven editions. The first part of the book is written like a Guinness Book of Records, with paragraphs like "the most luminous star", "the farthest star", and so on. Solar System objects are explained in detail. The second part is a detailed sky atlas for amateur astronomy observations: for each constellation, a list of bright and dim stars, deep sky and other notable objects is given to the reader. The object tables are so complete that this book alone is enough for months of observations with small telescopes. Notes 1979 non-fiction books Astronomy books
Guinness Book of Astronomy
Astronomy
140
10,035,905
https://en.wikipedia.org/wiki/Integrated%20computational%20materials%20engineering
Integrated Computational Materials Engineering (ICME) is an approach to design products, the materials that comprise them, and their associated materials processing methods by linking materials models at multiple length scales. Key words are "Integrated", involving integrating models at multiple length scales, and "Engineering", signifying industrial utility. The focus is on the materials, i.e. understanding how processes produce material structures, how those structures give rise to material properties, and how to select materials for a given application. The key links are process-structures-properties-performance. The National Academies report describes the need for using multiscale materials modeling to capture the process-structures-properties-performance of a material. Standardization in ICME A fundamental requirement to meet the ambitious ICME objective of designing materials for specific products or components is an integrative and interdisciplinary computational description of the history of the component, starting from the sound initial condition of a homogeneous, isotropic and stress-free melt or gas phase, continuing via subsequent processing steps, and eventually ending in the description of failure onset under operational load. ICME thus naturally requires the combination of a variety of models and software tools. It is therefore a common objective to build up a scientific network of stakeholders concentrating on boosting ICME into industrial application by defining a common communication standard for ICME-relevant tools.
The harmonization and standardization of information exchange along the life-cycle of a component and across the different scales (electronic, atomistic, mesoscopic, continuum) are the key activity of ICMEg. The mission of ICMEg is to establish and to maintain a network of contacts to simulation software providers, governmental and international standardization authorities, ICME users, associations in the area of materials and processing, and academia to define and communicate an ICME language in form of an open and standardized communication protocol to stimulate knowledge sharing in the field of multiscale materials design to identify missing tools, models and functionalities and propose a roadmap for their development to discuss and to decide about future amendments to the initial standard The activities of ICMEg include Organization of International Workshops on Software Solutions for Integrated Computational Materials Engineering Conducting market study and survey on available simulation software for ICME Create and maintain forum for knowledge sharing in ICME The ICMEg project ended in October 2016. Its major outcomes are a Handbook of Software Solutions for ICME the identification of HDF5 as a suitable communication file standard for microstructure information exchange in ICME settings the specification of a metadata description for microstructures a network of stakeholders in the area of ICME Most of the activities being launched in the ICMEg project are continued by the European Materials Modelling Council and in the MarketPlace project Multiscale modeling in material processing Multiscale modeling aims to evaluate material properties or behavior on one level using information or models from different levels and properties of elementary processes. Usually, the following levels, addressing a phenomenon over a specific window of length and time, are recognized: Structural scale: Finite element, finite volume and finite difference partial differential equation are solvers used to simulate structural responses such as solid mechanics and transport phenomena at large (meters) scales. process modeling/simulations: extrusion, rolling, sheet forming, stamping, casting, welding, etc. product modeling/simulations: performance, impact, fatigue, corrosion, etc. Macroscale: constitutive (rheology) equations are used at the continuum level in solid mechanics and transport phenomena at millimeter scales. Mesoscale: continuum level formulations are used with discrete quantities at multiple micrometer scales. "Meso" is an ambiguous term that means "intermediate" so it has been used as representing different intermediate scales. In this context, it can represent modeling from crystal plasticity for metals, Eshelby solutions for any materials, homogenization methods, and unit cell methods. Microscale: modeling techniques that represent the micrometer scale such as dislocation dynamics codes for metals and phase field models for multiphase materials. Phase field models of phase transitions and microstructure formation and evolution on nanometer to millimeter scales. Nanoscale: semi-empirical atomistic methods are used such as Lennard-Jones, Brenner potentials, embedded atom method (EAM) potentials, and modified embedded atom potentials (MEAM) in molecular dynamics (MD), molecular statics (MS), Monte Carlo (MC), and kinetic Monte Carlo (KMC) formulations. 
Electronic scale: Schroedinger equations are used in a computational framework as density functional theory (DFT) models of electron orbitals and bonding on angstrom to nanometer scales. There are some software codes that operate on different length scales such as: CALPHAD computational thermodynamics for prediction of equilibrium phase diagrams and even non-equilibrium phases. Phase field codes for simulation of microstructure evolution Databases of processing parameters, microstructure features, and properties from which one can draw correlations at various length scales GeoDict - The Digital Material Laboratory by Math2Market VPS-MICRO is a multiscale probabilistic fracture mechanics software. SwiftComp is a multiscale constitutive modeling software based on mechanics of structure genome. Digimat is a multiscale material modeling platform A comprehensive compilation of software tools with relevance for ICME is documented in the Handbook of Software Solutions for ICME Examples of Model integration Small scale models calculate material properties, or relationships between properties and parameters, e.g. yield strength vs. temperature, for use in continuum models CALPHAD computational thermodynamics software predicts free energy as a function of composition; a phase field model then uses this to predict structure formation and development, which one may then correlate with properties. An essential ingredient to model microstructure evolution by phase field models and other microstructre evolution codes are the initial and boundary conditions. While boundary conditions may be taken e.g. from the simulation of the actual process, the initial conditions (i.e. the initial microstructure entering into the actual process step) involve the entire integrated process history starting from the homogeneous, isotropic and stress free melt. Thus - for a successful ICME - an efficient exchange of information along the entire process chain and across all relevant length scales is mandatory. The models to be combined for this purpose comprise both academic and/or commercial modelling tools and simulation software packages. To streamline the information flow within this heterogeneous variety of modelling tools, the concept of a modular, standardized simulation platform has recently been proposed. A first realisation of this concept is the AixViPMaP® - the Aachen Virtual Platform for Materials Processing. Process models calculate spatial distribution of structure features, e.g. fiber density and orientation in a composite material; small-scale models then calculate relationships between structure and properties, for use in a continuum models of overall part or system behavior Large scale models explicitly fully couple with small scale models, e.g. a fracture simulation might integrate a continuum solid mechanics model of macroscopic deformation with an FD model of atomic motions at the crack tip Suites of models (large-scale, small-scale, atomic-scale, process-structure, structure-properties, etc.) can be hierarchically integrated into a systems design framework to enable the computational design of entirely new materials. A commercial leader in the use of ICME in computational materials design is QuesTek Innovations LLC, a small business in Evanston, IL co-founded by Prof. Greg Olson of Northwestern University. QuesTek's high-performance Ferrium® steels were designed and developed using ICME methodologies. 
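A minimal sketch of the small-scale-model-feeds-continuum-model pattern listed above: a toy process model estimates grain size from a cooling rate, a microscale Hall-Petch relation converts that structure into a yield strength, and a continuum-level check uses the property to assess a component. The Hall-Petch form is standard, but every constant, the grain-size law, and the applied load below are invented placeholders rather than data for any real alloy.

# Illustrative process -> structure -> property -> performance chain.
# The Hall-Petch relation is standard; all numerical constants are made-up placeholders.
import math

def grain_size_from_cooling_rate(cooling_rate_k_per_s):
    """Toy process model: faster cooling gives finer grains (micrometres)."""
    return 50.0 / (1.0 + math.log10(max(cooling_rate_k_per_s, 1.0)))

def yield_strength_hall_petch(grain_size_um):
    """Microscale structure-property link: sigma_y = sigma_0 + k / sqrt(d), in MPa."""
    sigma_0 = 70.0    # friction stress (placeholder)
    k_hp = 600.0      # Hall-Petch coefficient (placeholder, MPa*um^0.5)
    return sigma_0 + k_hp / math.sqrt(grain_size_um)

def safety_factor(applied_stress_mpa, yield_strength_mpa):
    """Continuum-level performance check for a component."""
    return yield_strength_mpa / applied_stress_mpa

if __name__ == "__main__":
    for cooling_rate in (1.0, 10.0, 100.0):    # candidate process settings, K/s
        d = grain_size_from_cooling_rate(cooling_rate)
        sigma_y = yield_strength_hall_petch(d)
        sf = safety_factor(180.0, sigma_y)
        print(f"cooling {cooling_rate:6.1f} K/s -> grain {d:5.1f} um -> "
              f"yield {sigma_y:5.0f} MPa -> safety factor {sf:.2f}")

In a full ICME workflow each hand-written function here would be replaced by a dedicated tool (a casting simulation, a phase field or crystal plasticity code, a finite element solver), with the exchanged quantities carried in a standardized format such as the HDF5-based exchange identified by the ICMEg project.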
The Mississippi State University Internal State Variable (ISV) plasticity-damage model (DMG) developed by a team led by Prof. Mark F. Horstemeyer (Founder of Predictive Design Technologies) has been used to optimize the design of a Cadillac control arm, the Corvette engine cradle, and a powder metal steel engine bearing cap. ESI Group through its ProCast and SYSWeld are commercial finite element solutions used in production environments by major manufacturers in aerospace, automotive and government organizations to simulate local material phase changes of metals prior to manufacturing. PAMFORM is utilized for tracking material changes during composite forming manufacturing simulation. Education Katsuyo Thorton announced at the 2010 MS&T ICME Technical Committee meeting that NSF would be funding a "Summer School" on ICME at the University of Michigan starting in 2011. Northwestern began offering a Masters of Science Certificate in ICME in the fall of 2011. The first Integrated Computational Materials Engineering (ICME) course based upon Horstemeyer 2012 was delivered at Mississippi State University (MSU) in 2012 as a graduate course with distance learning students included [c.f., Sukhija et al., 2013]. It was later taught in 2013 and 2014 at MSU also with distance learning students. In 2015, the ICME Course was taught by Dr. Mark Horstemeyer (MSU) and Dr. William (Bill) Shelton (Louisiana State University, LSU) with students from each institution via distance learning. The goal of the methodology embraced in this course was to provide students with the basic skills to take advantage of the computational tools and experimental data provided by EVOCD in conducting simulations and bridging procedures for quantifying the structure-property relationships of materials at multiple length scales. On successful completion of the assigned projects, students published their multiscale modeling learning outcomes on the ICME Wiki, facilitating easy assessment of student achievements and embracing qualities set by the ABET engineering accreditation board. See also Computational materials science Materials informatics ICME cyberinfrastructure Cyberinfrastructure QuesTek Innovations References JOM November 2006 issue focused on ICME Committee on Integrated Computational Materials Engineering, National Research Council, Integrated Computational Materials Engineering: A Transformational Discipline for Improved Competitiveness and National Security, National Academies Press, 2008. , NAP Link G. Olson, Designing a New Material Word, Science, Vol. 288, May 12, 2000 Horstemeyer 2009: Horstemeyer M.F., "Multiscale Modeling: A Review," Practical Aspects of Computational Chemistry, ed. J. Leszczynski and M.K. Shukla, Springer Science+Business Media, pp. 87-135, 2009 External links ICME section of Materials Technology @ TMS [Advances in ICME Implementation: Concepts and Practices” in the May 2017 issue (vol. 69, no. 5) of JOM https://link.springer.com/journal/11837/69/5] Cyberinfrastructure for ICME at Mississippi State University GeoDict The Digital Material Laboratory Materials science
Integrated computational materials engineering
Physics,Materials_science,Engineering
2,506
46,257,702
https://en.wikipedia.org/wiki/North%20Arm%20Powder%20Magazine
The North Arm Powder Magazine near Port Adelaide, South Australia, was, from 1858 to 1906, a secure storage facility for dynamite and gelignite used in the construction, mining and quarrying industries. Location The magazine was in Gillman, near Port Adelaide, at the North Arm of the Port River, only 9 metres away from the North Arm Bridge subsequently constructed on North Arm Road. The explosives were stored in the wooden, slate-roofed magazine building and in two dynamite hulks moored in Magazine Creek. One of them was a retired iron dredger, built about 1852; the other was a former lighter. They were seen as a risk, should they explode, because they were close to the new bridge and to developing residential areas. Buildings The South Australian Government constructed the magazine in 1858 as a lightweight structure on wooden poles because of its location on the tidal creek. Only later did this become best practice for mitigating secondary damage from falling debris after an explosion. As the magazine was encroached upon by housing, proposals for its abandonment were made only a decade after its commissioning. However, it was not taken out of service until more than three decades later, in 1906, when its contents were transferred to the new Dry Creek explosives depot. The building was demolished in 1916. Land reclamation has since occurred and no visible evidence remains. References Explosives Gunpowder magazines Ports and harbours of South Australia 1858 establishments in Australia 1906 disestablishments in Australia Buildings and structures in Adelaide
North Arm Powder Magazine
Chemistry
296
70,382
https://en.wikipedia.org/wiki/Blossom
In botany, blossoms are the flowers of stone fruit trees (genus Prunus) and of some other plants with a similar appearance that flower profusely for a period of time in spring. Colloquially, flowers of the orange tree are referred to as blossoms as well. Peach blossoms (including nectarine), most cherry blossoms, and some almond blossoms are usually pink. Plum blossoms, apple blossoms, orange blossoms, some cherry blossoms, and most almond blossoms are white. Blossoms provide pollen to pollinators such as bees, and initiate cross-pollination necessary for the trees to reproduce by producing fruit. Herbal use The ancient Phoenicians used almond blossoms with honey and urine as a tonic, and sprinkled them into stews and gruels to give muscular strength. Crushed petals were also used as a poultice on skin spots and, mixed with banana oil, for dry skin and sunburn. In herbalism the crab apple was used as a treatment for boils, abscesses, splinters, wounds, coughs, colds and a host of other ailments ranging from acne to kidney problems. Many dishes made with apples and apple blossom are of medieval origin. In the spring, monks and physicians would gather the blossoms and preserve them in vinegar for drawing poultices and for bee stings and other insect bites. Originating in China and Southeast Asia, the earliest orange species moved westwards via the trade routes. In 17th-century Italy peach blossoms were made into a poultice for bruises, rashes, eczema, grazes and stings. In ancient Greek medicine plum blossoms were used to treat bleeding gums and mouth ulcers and to tighten loose teeth. Plum blossoms mixed with sage leaves and flowers were used in plum wine or plum brandy as a mouthwash to soothe sore throats and mouth ailments and sweeten bad breath. Blossom festivals Hanami is the Japanese traditional custom of enjoying the transient beauty of flowers, in this case almost always referring to those of the cherry or, less frequently, plum trees. In England, Wales and Northern Ireland the National Trust organises the environmental awareness campaign #BlossomWatch, which is designed to raise awareness of the first signs of spring by encouraging people to share images of blossoms via social media. Gallery See also Fragrance extraction References External links Blossom in other languages. Plants
Blossom
Biology
473
61,569,211
https://en.wikipedia.org/wiki/Superior%3A%20The%20Return%20of%20Race%20Science
Superior: The Return of Race Science is a non-fiction book by Angela Saini published in 2019. Built around interviews with experts, the scientific consensus, and the author's analysis, it argues that some fields of biology are still influenced by the discredited scientific racism theories of the 19th century. Summary With Superior, Saini draws upon her own childhood in a white neighbourhood of London. The racial discrimination she faced at the time pushed her towards a style of journalism that seeks to highlight injustice. Her renewed interest in the genetics of race was stirred by the exploitation by the white supremacy movement of research that seems to point to genetically distinct racial groupings. Saini first recounts the history of scientific racism, from its origins of systematic classification of humans according to physical appearance and alleged racially-based personality traits, an approach adopted by a list of scientists that includes Linnaeus, Darwin and Huxley. She goes on to the acceptance of these theories by 20th-century anthropology and biology, and to their integration into political doctrines under the Nazi regime. She traces the way racial categories have changed over a fairly short period of time, revealing them as social constructs. Saini argues that despite deliberate efforts to discredit this approach in the post-war period, the pseudo-scientific claim that some varieties of homo sapiens are inherently superior (or more evolved) than others has not only survived, but is making a comeback. Having served the ideologies of the slave trade, race-based immigration and the Holocaust in the past, scientific racism is today enlisted in the cause of white supremacy. While acknowledging that today's scientists who look for expressions of the concept of race in biology are not the equivalent of their 19th-century peers, Saini questions whether this line of inquiry can produce any useful findings. She argues a focus on race or ethnicity in public health and medicine can blind researchers to environmental causes that have already been proven to affect health outcomes, such as socioeconomic conditions. By rehashing the idea that the concept of race corresponds to actual genetic differences, they also feed the re-emergence of the white nationalist movement. Critical reception In Nature, Robin Nelson argues the book "is perhaps best understood as continuing in a tradition of groundbreaking work that contextualizes the deep and problematic history of race science", along with works by Dorothy Roberts and Alondra Nelson. She notes the author uses loaded terms such as "political correctness" and "identity politics" without acknowledging those terms are often used in a pejorative manner, making her intention unclear. In Slate, Tim Requarth calls the book "exceptional and damning" and says it will force scientists to examine how a society's culture affects their scientific judgment. Writing for the Center for Genetics and Society, Peter Shanks calls Saini "an author to watch". He believes the book is "an invaluable resource, and my only real criticism is that the one-word title may give some the false impression that Saini endorses the idea that some groups are superior." In the Financial Times, Clive Cookson said the book is a "brilliant analysis of race science past and present". 
While Cookson is uncomfortable with what he perceives as an invitation to avoid researching links between genetics and intelligence, he still considers Superior "a thought-provoking combination of science, social history and modern politics." See also Race and intelligence Mankind Quarterly References 2019 non-fiction books British non-fiction books Genetics books Science books Scientific racism Eugenics books Scientific skepticism mass media Race and intelligence controversy Social problems in medicine Beacon Press books
Superior: The Return of Race Science
Biology
727
76,989,865
https://en.wikipedia.org/wiki/Mindar
Mindar (), also known as Android Kannon Mindar, is an android preacher at the Kōdai-ji temple in Kyoto, Japan. The humanoid robot regularly gives sermons on the Heart Sutra at the 400-year-old Zen Buddhist temple. It was created to represent and embody Kannon, a bodhisattva associated with compassion. Mindar was designed through a collaboration between staff of Kōdai-ji and roboticists from Osaka University, including Hiroshi Ishiguro. Construction of the tall android began in 2017 at Osaka University's robotics laboratory. Development of the android cost (US$227,250), while the total cost of the project was (US$909,090). Mindar was unveiled to the public at a ceremony in March 2019. Its 25-minute pre-programmed sermon was written by monks and addresses the Buddhist concepts of emptiness and compassion. Background and development Kōdai-ji is a Zen Buddhist temple established in 1606 in the Higashiyama ward of Kyoto. It is part of the Rinzai school. Roboticist Ishiguro Hiroshi of Osaka University visited Kōdai-ji in July 2017. Gotō Tenshō, then the temple's chief steward, suggested to Ishiguro the creation of a robotic Buddha statue. They met again two months later and initially considered having several robots discussing the Buddha's teachings, though it was determined that a single robot would be preferable from a technical standpoint. A monologue on the Heart Sutra was chosen and it was decided that the android would take the form of Kannon, a bodhisattva associated with compassion. The Lotus Sutra mentions that Kannon is capable of manifesting in various forms. The Android Kannon Production Committee was established in September 2017 and included staff from Kōdai-ji as well as engineers from Osaka University. Ishiguro proposed that the 'Alter' model of robot be used as a prototype. The subject matter of Mindar's sermon was determined by Buddhist monks of the Rinzai school—Honda Dōryū of , Sakaida Taisen of Kennin-ji, and Unrin'in Sōseki of Reigen-in. They devised a narrative explaining the Buddhist concepts of compassion and emptiness, based on works by Hajime Nakamura and Mumon Yamada. The name 'Mindar' was proposed by Ogawa Kōhei, a roboticist at Osaka University. Mindar is not powered by artificial intelligence, though the designers originally had aspirations of endowing the android with machine-learning capabilities. Gotō said "This robot will never die; it will just keep updating itself and evolving. With AI, we hope it will grow in wisdom to help people overcome even the most difficult troubles. It's changing Buddhism." The android Kannon was constructed at a robotics laboratory at Osaka University. Ogawa Kōhei engineered the android and it was completed in February 2019. The total cost of the project was (US$909,090), though development of the android only cost (US$227,250). A traditional Buddhist ceremony was held for the android upon its introduction to the public in March 2019. The ceremony was attended by monks and included chanting, bell-ringing, and drumming. Mindar has historical precedents that were drawn on by its designers. Mechanized Karakuri puppets were produced in Japan from the 17th century and the country's first robot, the Gakutensoku, debuted in the late 1920s and could write calligraphy, change its facial expression, and move its head and hands through an air pressure mechanism. 
As a religion-oriented android, Mindar was preceded by other 21st-century robots, including the Chinese chatbot Robot Monk Xian'er and Pepper (produced 2015–2021), which could be programmed to perform Buddhist funeral rites, including chanting sutras and banging drums. Description Mindar is a stationary, tall android, weighing . It has a slender mechatronic body made from aluminum with silicone skin covering its face, hands, and shoulders. Mindar stands on a platform and does not have working legs. It is capable of blinking and smiling, and moves its head, torso, and arms through air hydraulics. Mindar accompanies its preaching with a variety of gestures, such as joining its palms together in gasshō. A camera implanted in Mindar's left eye allows the android to give the impression of eye contact by focusing on a person. The top of Mindar's skull is exposed, showing blinking lights and wires within its cranial cavity. Similarly most of the android's body is not covered in silicone, exposing wires and servo motors. Similar to Ishiguro's telenoid robots, Mindar has an androgynous appearance. The voice has been described as feminine and soothing. Sermons Mindar is situated within the Kōdai-ji temple complex at Kyōka Hall. Its sermons are open to the public, and are typically given twice daily on Saturdays and Sundays. Mindar gives a 25-minute sermon in Japanese on the Heart Sutra, addressing concepts of compassion and emptiness within Buddhism. Chinese and English subtitles are projected on the back wall of the room. In the pre-programmed sermon's introduction, a spotlight shines on Mindar, and the android begins speaking. It refers to itself as the bodhisattva, saying: The multimedia presentation is accompanied by operatic piano music and augmented through 360-degree projection mapping, including the projection of a virtual audience on the walls of the room. Mindar interacts with members of the projected audience, answering their questions in a pre-programmed dialogue. The sermon ends with Mindar chanting the Heart Sutra. Reception Mindar's introduction in 2019 received international news coverage. Media coverage focused on the novelty of a robot preacher, the cost of the project, and the potential for Mindar to change perceptions about Buddhism in Japan. Public reception of Mindar has been mixed. A survey by Osaka University found that some people found the android easy to follow, surprisingly human-like, and warm, while others said that it felt unnatural or fake, with expressions that seemed engineered. Several people who have listened to Mindar's sermon have cried, with some considering the shadow cast by the android to be the "real" Kannon. Foreigners, especially those from Western countries, have been more critical of the android. Some raised concerns that the android upset the sanctity of religion, while others likened Mindar to Frankenstein's monster. A 2020 paper in Frontiers in Artificial Intelligence discusses whether androids such as Mindar can express Buddha-nature. It concludes that Mindar could be considered an authentic incarnation of Kannon were it to become self-aware. A 2023 paper in the Journal of Experimental Psychology describes a field study based on interviews with people who had heard Mindar's sermon. The paper indicates that people do not assign android preachers the same credibility as they do for human preachers. The authors conclude that the automation of religious duties would likely result in a reduction of religious commitment. 
See also Buddhism and artificial intelligence Human–robot interaction Japanese robotics Notes References External links Mindar at the Kōdai-ji website (in Japanese) Humanoid robots Robots of Japan 2019 robots Android (robot) Bodhisattvas Guanyin Buddhism and technology Japanese Buddhist clergy 2019 establishments in Japan Buddhism in Japan Tourist attractions in Kyoto
Mindar
Engineering
1,515
23,402,417
https://en.wikipedia.org/wiki/Slowloris%20%28cyber%20attack%29
Slowloris is a type of denial of service attack tool which allows a single machine to take down another machine's web server with minimal bandwidth and side effects on unrelated services and ports. Slowloris tries to keep many connections to the target web server open and hold them open as long as possible. It accomplishes this by opening connections to the target web server and sending a partial request. Periodically, it will send subsequent HTTP headers, adding to, but never completing, the request. Affected servers will keep these connections open, filling their maximum concurrent connection pool, eventually denying additional connection attempts from clients. The program was named after slow lorises, a group of primates which are known for their slow movement. Affected web servers This includes but is not necessarily limited to the following, per the attack's author: Apache 1.x and 2.x dhttpd Websense "block pages" (unconfirmed) Trapeze Wireless Web Portal (unconfirmed) Verizon's MI424-WR FIOS Cable modem (unconfirmed) Verizon's Motorola Set-top box (port 8082 and requires auth - unconfirmed) BeeWare WAF (unconfirmed) Deny All WAF (patched) Flask (development server) Internet Information Services (IIS) 6.0 and earlier Nginx 1.5.9 and earlier Vulnerable to Slowloris attack on the TLS handshake process: Apache HTTP Server 2.2.15 and earlier Internet Information Services (IIS) 7.0 and earlier Because Slowloris exploits problems handling thousands of connections, the attack has less of an effect on servers that handle large numbers of connections well. Proxying servers and caching accelerators such as Varnish, nginx, and Squid have been recommended to mitigate this particular kind of attack. In addition, certain servers are more resilient to the attack by way of their design, including Hiawatha, IIS, lighttpd, Cherokee, and Cisco CSS. Mitigating the Slowloris attack While there are no reliable configurations of the affected web servers that will prevent the Slowloris attack, there are ways to mitigate or reduce the impact of such an attack. In general, these involve increasing the maximum number of clients the server will allow, limiting the number of connections a single IP address is allowed to make, imposing restrictions on the minimum transfer speed a connection is allowed to have, and restricting the length of time a client is allowed to stay connected. In the Apache web server, a number of modules can be used to limit the damage caused by the Slowloris attack; the Apache modules mod_limitipconn, mod_qos, mod_evasive, mod security, mod_noloris, and mod_antiloris have all been suggested as means of reducing the likelihood of a successful Slowloris attack. Since Apache 2.2.15, Apache ships the module mod_reqtimeout as the official solution supported by the developers. Other mitigating techniques involve setting up reverse proxies, firewalls, load balancers or content switches. Administrators could also change the affected web server to software that is unaffected by this form of attack. For example, lighttpd and nginx do not succumb to this specific attack. Notable usage During the protests that erupted in the wake of the 2009 Iranian presidential election, Slowloris arose as a prominent tool used to leverage DoS attacks against sites run by the Iranian government. The belief was that flooding DDoS attacks would affect internet access for the government and protesters equally, due to the significant bandwidth they can consume. 
The Slowloris attack was chosen instead, because of its high impact and relatively low bandwidth. A number of government-run sites were targeted during these attacks, including gerdab.ir, leader.ir, and president.ir. A variant of this attack was used by spam network River City Media to force Gmail servers to send thousands of messages in bulk, by opening thousands of connections to the Gmail API with message sending requests, then completing them all at once. Similar software Since its release, a number of programs have appeared that mimic the function of Slowloris while providing additional functionality, or running in different environments: PyLoris – A protocol-agnostic Python implementation supporting Tor and SOCKS proxies. Slowloris – A Python 3 implementation of Slowloris with SOCKS proxy support. Goloris – Slowloris for nginx, written in Go. slowloris – A distributed Golang implementation. QSlowloris – An executable form of Slowloris designed to run on Windows, featuring a Qt front end. An unnamed PHP version which can be run from an HTTP server. SlowHTTPTest – A highly configurable slow-attack simulator, written in C++. SlowlorisChecker – A Slowloris and Slow POST proof of concept (POC), written in Ruby. Cyphon – Slowloris for Mac OS X, written in Objective-C. sloww – A Slowloris implementation written in Node.js. dotloris – Slowloris written in .NET Core. SlowDroid – An enhanced version of Slowloris written in Java, minimizing the attack bandwidth. See also SlowDroid Trinoo Stacheldraht Denial of service LAND Low Orbit Ion Cannon High Orbit Ion Cannon ReDoS R-U-Dead-Yet References "Virtual machine rotation for mitigation of a Slowloris attack", IEEE Conference Publication, IEEE Xplore. Retrieved November 30, 2024, from https://ieeexplore.ieee.org/document/9794349 Markova, V. (2024, January 4). "The Slowloris Attack: How it Works and How to Protect Your Website", ClouDNS Blog. https://www.cloudns.net/blog/the-slowloris-attack-how-it-works-and-how-to-protect-your-website/ External links Slowloris HTTP DoS hackaday on Slowloris Apache attacked by a "slow loris" article on LWN.net Slowloris – a short video (including a demo) Home page of SlowHTTPTest An Attempt at Simulating SlowLoris on LOIC Blog post explaining the inner workings of Slowloris Denial-of-service attacks
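The header-deadline idea behind mod_reqtimeout can be sketched in a few lines of Python. This is a minimal illustration only, not any project's actual implementation; the port, buffer size, and timeout values are arbitrary assumptions chosen for clarity.

import socket
import threading
import time

HEADER_DEADLINE = 10  # seconds a client gets to deliver complete headers (assumed value)

def handle_client(conn):
    conn.settimeout(2)                       # bound each recv so a silent peer cannot block the thread
    deadline = time.monotonic() + HEADER_DEADLINE
    data = b""
    try:
        while b"\r\n\r\n" not in data:       # request headers end with a blank line
            if time.monotonic() > deadline:  # deadline exceeded: a Slowloris-style client
                return                       # dropping it frees the connection slot
            try:
                chunk = conn.recv(1024)
            except socket.timeout:
                continue                     # nothing arrived; re-check the deadline
            if not chunk:                    # peer closed the connection
                return
            data += chunk
        conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
    finally:
        conn.close()

def serve(port=8080):
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", port))
    srv.listen(128)
    while True:
        conn, _addr = srv.accept()
        threading.Thread(target=handle_client, args=(conn,), daemon=True).start()

A client that trickles one header line at a time is disconnected once the deadline passes, so held-open partial requests cannot exhaust the connection pool.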
Slowloris (cyber attack)
Technology
1,333
953,723
https://en.wikipedia.org/wiki/Historadiography
Historadiography is a technique formerly utilized in the fields of histology and cellular biology to provide semiquantitative information regarding the density of a tissue sample. It is usually synonymous with microradiography. This is achieved by layering a ground section of mineralized tissue (such as bone) with photographic emulsion on a glass slide and exposing the sample to a beam of X-rays. After developing the emulsion, the resulting radiograph can be viewed with a microscope. A side-by-side comparison with a slide containing radiographs of various substances of known mass can provide a rough mass estimate, and therefore a rough approximation of the concentration of calcium salts in the sample. Historadiography has also been used to visualize staining of tissue, such as spinal cord samples stained with thorotrast, which contains thorium, an element opaque to X-rays. Over recent decades researchers have generally lost interest in historadiography. The most recent publication indexed in PubMed that uses the term (from 1998) referred to autoradiography of tritium incorporated in thymidine. References Histology
Historadiography
Chemistry
226
9,010,488
https://en.wikipedia.org/wiki/Samsung%20Electro-Mechanics
Samsung Electro-Mechanics (SEM) is a multinational electronic component company headquartered in Suwon, Gyeonggi Province, South Korea. It is a subsidiary of the Samsung Group. The company produces chip parts such as MLCCs, camera modules, network modules and printed circuit boards. Established in 1973 as Samsung Sanyo Parts, the name was changed to Samsung Electric Parts the following year. The name was again changed to Samsung Electronics Parts in 1977, and then to Samsung Electro-Mechanics in 1987. The company is headquartered in Suwon, South Korea, with additional manufacturing facilities in Sejong and Busan and overseas in the Philippines and Thailand. It also once had such a facility in Tianjin, China. The company also works on semiconductor components. Products MLCC (Multi-Layer Ceramic Capacitor) Camera Modules Organic Semiconductor Substrate Rigid-Flex PCBs Governance As of January 2020, Samsung Electro-Mechanics' Board of Directors is composed of 3 internal directors and 4 outside directors, and there are 5 subcommittees within the Board of Directors. These committees are the Compensation Committee, Management Committee, Audit Committee, Outside Director Nomination Committee, and the Internal Trading Committee. Currently, an outside director is serving as chairman of the board for Samsung Electro-Mechanics. Environment management Greenhouse gas In the 2016 evaluation by the CDP (Carbon Disclosure Project) Korea Committee, Samsung Electro-Mechanics was selected for the CDP Korea Hall of Fame for the third consecutive year, having first been named in 2014. Energy At the 34th Energy-Saving Promotion Conference, which awards prizes to people who have contributed to improving energy efficiency and overcoming the power supply crisis, Chi-joon Choi, CEO of Samsung Electro-Mechanics, was awarded the Silver Tower Industry Medal, the event's highest honor. Social contribution In 2005, Samsung Electro-Mechanics started to provide free artificial joint surgery for low-income families as part of the company's medical support policy, and as of 2016, 514 beneficiaries from the Gyeonggi-do, Chungcheong-do and Gangwon-do regions had undergone the procedure. Since 2006, a badminton tournament for the disabled has been held annually, with about 3,000 people, including players, supporters and volunteer service teams, participating from 16 cities around the country. Since 2008, the national music competition for students with disabilities, in which individuals or groups of students with developmental disabilities, visual impairment or physical disabilities can participate and compete, has been hosted jointly with TJB Daejeon TV. In 2013, Samsung Electro-Mechanics founded an orchestra, hello!SEMCO, composed of 35 child and teenage members with disabilities. The orchestra has since held its first regular concert. References External links Companies based in Suwon Companies listed on the Korea Exchange Companies in the KOSPI 200 Computer companies of South Korea Computer hardware companies Computer memory companies Computer storage companies Display technology companies Electronics companies of South Korea Electronics companies established in 1973 Foundry semiconductor companies Electro-Mechanics Semiconductor companies of South Korea South Korean companies established in 1973 Technology companies of South Korea 1970s initial public offerings
Samsung Electro-Mechanics
Technology
598
24,436,577
https://en.wikipedia.org/wiki/Variation%20of%20information
In probability theory and information theory, the variation of information or shared information distance is a measure of the distance between two clusterings (partitions of elements). It is closely related to mutual information; indeed, it is a simple linear expression involving the mutual information. Unlike the mutual information, however, the variation of information is a true metric, in that it obeys the triangle inequality. Definition Suppose we have two partitions $X$ and $Y$ of a set $A$ into disjoint subsets, namely $X = \{X_1, X_2, \ldots, X_k\}$ and $Y = \{Y_1, Y_2, \ldots, Y_l\}$. Let: $n = \sum_i |X_i| = \sum_j |Y_j| = |A|$, $p_i = |X_i|/n$, $q_j = |Y_j|/n$, and $r_{ij} = |X_i \cap Y_j|/n$. Then the variation of information between the two partitions is: $\mathrm{VI}(X; Y) = -\sum_{i,j} r_{ij}\left[\log(r_{ij}/p_i) + \log(r_{ij}/q_j)\right]$. This is equivalent to the shared information distance between the random variables $i$ and $j$ with respect to the uniform probability measure on $A$ defined by $\mu(B) := |B|/n$ for $B \subseteq A$. Explicit information content We can rewrite this definition in terms that explicitly highlight the information content of this metric. The set of all partitions of a set forms a compact lattice where the partial order induces two operations, the meet $\wedge$ and the join $\vee$, where the maximum $\overline{1}$ is the partition with only one block, i.e., all elements grouped together, and the minimum is $\overline{0}$, the partition consisting of all elements as singletons. The meet of two partitions $X$ and $Y$ is easy to understand as that partition formed by all pair intersections of one block, $X_i$, of $X$ and one, $Y_j$, of $Y$. It then follows that $X \wedge Y \leq X$ and $X \wedge Y \leq Y$. Let's define the entropy of a partition $X$ as $H(X) = -\sum_i p_i \log p_i$, where $p_i = |X_i|/n$. Clearly, $H(\overline{1}) = 0$ and $H(\overline{0}) = \log n$. The entropy of a partition is a monotonous function on the lattice of partitions in the sense that $X \leq Y \Rightarrow H(X) \geq H(Y)$. Then the VI distance between $X$ and $Y$ is given by $\mathrm{VI}(X, Y) = 2H(X \wedge Y) - H(X) - H(Y)$. The difference $d(X, Y) \equiv H(X \wedge Y) - H(X)$ is a pseudo-metric as $d(X, Y) = 0$ doesn't necessarily imply that $X = Y$. From the definition of $\overline{1}$, it is $\mathrm{VI}(X, \overline{1}) = H(X)$. If in the Hasse diagram we draw an edge from every partition to the maximum $\overline{1}$ and assign it a weight equal to the VI distance between the given partition and $\overline{1}$, we can interpret the VI distance as basically an average of differences of edge weights to the maximum. For the entropy $H$ as defined above, it holds that the joint information of two partitions coincides with the entropy of the meet, $H(X, Y) = H(X \wedge Y)$, and we also have that $d(X, Y)$ coincides with the conditional entropy of the meet (intersection) relative to $X$, $H(X \wedge Y \mid X)$. Identities The variation of information satisfies $\mathrm{VI}(X; Y) = H(X) + H(Y) - 2I(X; Y)$, where $H(X)$ is the entropy of $X$, and $I(X; Y)$ is mutual information between $X$ and $Y$ with respect to the uniform probability measure on $A$. This can be rewritten as $\mathrm{VI}(X; Y) = H(X, Y) - I(X; Y)$, where $H(X, Y)$ is the joint entropy of $X$ and $Y$, or $\mathrm{VI}(X; Y) = H(X \mid Y) + H(Y \mid X)$, where $H(X \mid Y)$ and $H(Y \mid X)$ are the respective conditional entropies. The variation of information can also be bounded, either in terms of the number of elements: $\mathrm{VI}(X; Y) \leq \log(n)$, or with respect to a maximum number of clusters, $k^{*}$: $\mathrm{VI}(X; Y) \leq 2\log(k^{*})$. Triangle inequality To verify the triangle inequality $\mathrm{VI}(X; Z) \leq \mathrm{VI}(X; Y) + \mathrm{VI}(Y; Z)$, expand using the identity $\mathrm{VI}(X; Y) = H(X \mid Y) + H(Y \mid X)$. It suffices to prove $H(X \mid Z) \leq H(X \mid Y) + H(Y \mid Z)$. The right side has a lower bound $H(X \mid Y, Z) + H(Y \mid Z) = H(X, Y \mid Z)$ which is no less than the left side. References Further reading External links Partanalyzer includes a C++ implementation of VI and other metrics and indices for analyzing partitions and clusterings C++ implementation with MATLAB mex files Entropy and information Summary statistics for contingency tables Clustering criteria
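The definition translates directly into code. The following is a minimal sketch (the function name and example partitions are our own) that evaluates VI for two partitions given as lists of blocks, using natural logarithms.

from math import log

def variation_of_information(X, Y):
    """Compute VI between two partitions of the same underlying set.

    X and Y are lists of blocks (collections of elements); the blocks of
    each partition must cover the same set exactly once.
    """
    n = float(sum(len(block) for block in X))
    vi = 0.0
    for Xi in X:
        for Yj in Y:
            r = len(set(Xi) & set(Yj)) / n   # r_ij
            if r > 0.0:
                p = len(Xi) / n              # p_i
                q = len(Yj) / n              # q_j
                vi -= r * (log(r / p) + log(r / q))
    return vi

# Example: two partitions of {1,...,5}; identical partitions give VI = 0.
A = [{1, 2, 3}, {4, 5}]
B = [{1, 2}, {3, 4, 5}]
print(variation_of_information(A, A))  # 0.0
print(variation_of_information(A, B))  # > 0, and symmetric in its arguments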
Variation of information
Physics,Mathematics
609
381,323
https://en.wikipedia.org/wiki/Chicken%20%28game%29
The game of chicken, also known as the hawk-dove game or snowdrift game, is a model of conflict for two players in game theory. The principle of the game is that while the ideal outcome is for one player to yield (to avoid the worst outcome if neither yields), individuals try to avoid it out of pride, not wanting to look like "chickens". Each player taunts the other to increase the risk of shame in yielding. However, when one player yields, the conflict is avoided, and the game essentially ends. The name "chicken" has its origins in a game in which two drivers drive toward each other on a collision course: one must swerve, or both may die in the crash, but if one driver swerves and the other does not, the one who swerved will be called a "chicken", meaning a coward; this terminology is most prevalent in political science and economics. The name "hawk–dove" refers to a situation in which there is a competition for a shared resource and the contestants can choose either conciliation or conflict; this terminology is most commonly used in biology and evolutionary game theory. From a game-theoretic point of view, "chicken" and "hawk–dove" are identical. The game has also been used to describe the mutual assured destruction of nuclear warfare, especially the sort of brinkmanship involved in the Cuban Missile Crisis. Popular versions The game of chicken models two drivers, both headed for a single-lane bridge from opposite directions. The first to swerve away yields the bridge to the other. If neither player swerves, the result is a costly deadlock in the middle of the bridge or a potentially fatal head-on collision. It is presumed that the best thing for each driver is to stay straight while the other swerves (since the other is the "chicken" while a crash is avoided). Additionally, a crash is presumed to be the worst outcome for both players. This yields a situation where each player, in attempting to secure their best outcome, risks the worst. The phrase game of chicken is also used as a metaphor for a situation where two parties engage in a showdown where they have nothing to gain and only pride stops them from backing down. Bertrand Russell famously compared the game of Chicken to nuclear brinkmanship:Since the nuclear stalemate became apparent, the governments of East and West have adopted the policy that Mr. Dulles calls 'brinkmanship'. This is a policy adapted from a sport that, I am told, is practiced by some youthful degenerates. This sport is called 'Chicken!'. It is played by choosing a long, straight road with a white line down the middle and starting two very fast cars toward each other from opposite ends. Each car is expected to keep the wheels on one side of the white line. As they approach each other, mutual destruction becomes more and more imminent. If one of them swerves from the white line before the other, the other, as they pass, shouts 'Chicken!', and the one who has swerved becomes an object of contempt. As played by irresponsible boys, this game is considered decadent and immoral, though only the lives of the players are risked. But when the game is played by eminent statesmen, who risk not only their own lives but those of many hundreds of millions of human beings, it is thought on both sides that the statesmen on one side are displaying a high degree of wisdom and courage, and only the statesmen on the other side are reprehensible. This, of course, is absurd. Both are to blame for playing such an incredibly dangerous game. 
The game may be played without misfortune a few times, but sooner or later, it will come to be felt that loss of face is more dreadful than nuclear annihilation. The moment will come when neither side can face the derisive cry of 'Chicken!' from the other side. When that moment comes, the statesmen of both sides will plunge the world into destruction.Brinkmanship involves the introduction of an element of uncontrollable risk: even if all players act rationally in the face of risk, uncontrollable events can still trigger the catastrophic outcome. In the "chickie run" scene from the film Rebel Without a Cause, this happens when Buzz cannot escape from the car and dies in the crash. The opposite scenario occurs in Footloose where Ren McCormack is stuck in his tractor and hence wins the game as they cannot play "chicken". A similar event happens in two different games in the film The Heavenly Kid, when first Bobby, and then later Lenny become stuck in their cars and drive off a cliff. The basic game-theoretic formulation of Chicken has no element of variable, potentially catastrophic, risk, and is also the contraction of a dynamic situation into a one-shot interaction. The hawk–dove version of the game imagines two players (animals) contesting an indivisible resource who can choose between two strategies, one more escalated than the other. They can use threat displays (play Dove), or physically attack each other (play Hawk). If both players choose the Hawk strategy, then they fight until one is injured and the other wins. If only one player chooses Hawk, then this player defeats the Dove player. If both players play Dove, there is a tie, and each player receives a payoff lower than the profit of a hawk defeating a dove. Game theoretic applications Chicken A formal version of the game of Chicken has been the subject of serious research in game theory. Two versions of the payoff matrix for this game are presented here (Figures 1 and 2). In Figure 1, the outcomes are represented in words, where each player would prefer to win over tying, prefer to tie over losing, and prefer to lose over crashing. Figure 2 presents arbitrarily set numerical payoffs which theoretically conform to this situation. Here, the benefit of winning is 1, the cost of losing is -1, and the cost of crashing is -1000. Both Chicken and Hawk–Dove are anti-coordination games, in which it is mutually beneficial for the players to play different strategies. In this way, it can be thought of as the opposite of a coordination game, where playing the same strategy Pareto dominates playing different strategies. The underlying concept is that players use a shared resource. In coordination games, sharing the resource creates a benefit for all: the resource is non-rivalrous, and the shared usage creates positive externalities. In anti-coordination games the resource is rivalrous but non-excludable and sharing comes at a cost (or negative externality). Because the loss of swerving is so trivial compared to the crash that occurs if nobody swerves, the reasonable strategy would seem to be to swerve before a crash is likely. Yet, knowing this, if one believes one's opponent to be reasonable, one may well decide not to swerve at all, in the belief that the opponent will be reasonable and decide to swerve, leaving the first player the winner. 
This unstable situation can be formalized by saying there is more than one Nash equilibrium, which is a pair of strategies for which neither player gains by changing their own strategy while the other stays the same. (In this case, the pure strategy equilibria are the two situations wherein one player swerves while the other does not.) Hawk–dove In the biological literature, this game is known as Hawk–Dove. The earliest presentation of a form of the Hawk–Dove game was by John Maynard Smith and George Price in their paper, "The logic of animal conflict". The traditional payoff matrix for the Hawk–Dove game is given in Figure 3, where V is the value of the contested resource, and C is the cost of an escalated fight. It is (almost always) assumed that the value of the resource is less than the cost of a fight, i.e., C > V > 0. If C ≤ V, the resulting game is not a game of Chicken but is instead a Prisoner's Dilemma. The exact value of the Dove vs. Dove payoff varies between model formulations. Sometimes the players are assumed to split the payoff equally (V/2 each), other times the payoff is assumed to be zero (since this is the expected payoff to a war of attrition game, which is the presumed model for a contest decided by display duration). While the Hawk–Dove game is typically taught and discussed with the payoffs in terms of V and C, the solutions hold true for any matrix with the payoffs in Figure 4, where W > T > L > X. Hawk–dove variants Biologists have explored modified versions of the classic Hawk–Dove game to investigate a number of biologically relevant factors. These include adding variation in resource holding potential, and differences in the value of winning to the different players, allowing the players to threaten each other before choosing moves in the game, and extending the interaction to two plays of the game. Pre-commitment One tactic in the game is for one party to signal their intentions convincingly before the game begins. For example, if one party were to ostentatiously disable their steering wheel just before the match, the other party would be compelled to swerve. This shows that, in some circumstances, reducing one's own options can be a good strategy. One real-world example is a protester who handcuffs themselves to an object, so that no threat can be made which would compel them to move (since they cannot move). Another example, taken from fiction, is found in Stanley Kubrick's Dr. Strangelove. In that film, the Russians sought to deter American attack by building a "doomsday machine", a device that would trigger world annihilation if Russia was hit by nuclear weapons or if any attempt were made to disarm it. However, the Russians had planned to signal the deployment of the machine a few days after having set it up, which, because of an unfortunate course of events, turned out to be too late. Players may also make non-binding threats to not swerve. This has been modeled explicitly in the Hawk–Dove game. Such threats work, but must be wastefully costly if the threat is one of two possible signals ("I will not swerve" or "I will swerve"), or they will be costless if there are three or more signals (in which case the signals will function as a game of "rock, paper, scissors"). Best response mapping and Nash equilibria All anti-coordination games have three Nash equilibria. Two of these are pure contingent strategy profiles, in which each player plays one of the pair of strategies, and the other player chooses the opposite strategy. 
The third one is a mixed equilibrium, in which each player probabilistically chooses between the two pure strategies. Either the pure, or mixed, Nash equilibria will be evolutionarily stable strategies depending upon whether uncorrelated asymmetries exist. The best response mapping for all 2x2 anti-coordination games is shown in Figure 5. The variables x and y in Figure 5 are the probabilities of playing the escalated strategy ("Hawk" or "Don't swerve") for players X and Y respectively. The line in the graph on the left shows the optimum probability of playing the escalated strategy for player Y as a function of x. The line in the second graph shows the optimum probability of playing the escalated strategy for player X as a function of y (the axes have not been rotated, so the dependent variable is plotted on the abscissa, and the independent variable is plotted on the ordinate). The Nash equilibria are where the players' correspondences agree, i.e., cross. These are shown with points in the right hand graph. The best response mappings agree (i.e., cross) at three points. The first two Nash equilibria are in the top left and bottom right corners, where one player chooses one strategy, the other player chooses the opposite strategy. The third Nash equilibrium is a mixed strategy which lies along the diagonal from the bottom left to top right corners. If the players do not know which one of them is which, then the mixed Nash is an evolutionarily stable strategy (ESS), as play is confined to the bottom left to top right diagonal line. Otherwise an uncorrelated asymmetry is said to exist, and the corner Nash equilibria are ESSes. Strategy polymorphism vis-à-vis strategy mixing The ESS for the Hawk–Dove game is a mixed strategy. Formal game theory is indifferent to whether this mixture is due to all players in a population choosing randomly between the two pure strategies (a range of possible instinctive reactions for a single situation) or whether the population is a polymorphic mixture of players dedicated to choosing a particular pure strategy (a single reaction differing from individual to individual). Biologically, these two options are strikingly different ideas. The Hawk–Dove game has been used as a basis for evolutionary simulations to explore which of these two modes of mixing ought to predominate in reality. Symmetry breaking In both "Chicken" and "Hawk–Dove", the only symmetric Nash equilibrium is the mixed strategy Nash equilibrium, where both individuals randomly choose between playing Hawk/Straight or Dove/Swerve. This mixed strategy equilibrium is often sub-optimal—both players would do better if they could coordinate their actions in some way. This observation has been made independently in two different contexts, with almost identical results. Correlated equilibrium and the game of chicken Consider the version of "Chicken" pictured in Figure 6. Like all forms of the game, there are three Nash equilibria. The two pure strategy Nash equilibria are (D, C) and (C, D). There is also a mixed strategy equilibrium where each player Dares with probability 1/3. It results in expected payoffs of 14/3 = 4.667 for each player. Now consider a third party (or some natural event) that draws one of three cards labeled: (C, C), (D, C), and (C, D). This exogenous draw event is assumed to be uniformly at random over the 3 outcomes. After drawing the card the third party informs the players of the strategy assigned to them on the card (but not the strategy assigned to their opponent). 
Suppose a player is assigned D; they would not want to deviate supposing the other player played their assigned strategy since they will get 7 (the highest payoff possible). Suppose a player is assigned C. Then the other player has been assigned C with probability 1/2 and D with probability 1/2 (due to the nature of the exogenous draw). The expected utility of Daring is 0(1/2) + 7(1/2) = 3.5 and the expected utility of chickening out is 2(1/2) + 6(1/2) = 4. So, the player would prefer to chicken out. Since neither player has an incentive to deviate from the drawn assignments, this probability distribution over the strategies is known as a correlated equilibrium of the game. Notably, the expected payoff for this equilibrium is 7(1/3) + 2(1/3) + 6(1/3) = 5 which is higher than the expected payoff of the mixed strategy Nash equilibrium. These payoffs are verified numerically in the short sketch at the end of this entry. Uncorrelated asymmetries and solutions to the hawk–dove game Although there are three Nash equilibria in the Hawk–Dove game, the one which emerges as the evolutionarily stable strategy (ESS) depends upon the existence of any uncorrelated asymmetry in the game (in the sense of anti-coordination games). In order for row players to choose one strategy and column players the other, the players must be able to distinguish which role (column or row player) they have. If no such uncorrelated asymmetry exists then both players must choose the same strategy, and the ESS will be the mixing Nash equilibrium. If there is an uncorrelated asymmetry, then the mixing Nash is not an ESS, but the two pure, role contingent, Nash equilibria are. The standard biological interpretation of this uncorrelated asymmetry is that one player is the territory owner, while the other is an intruder on the territory. In most cases, the territory owner plays Hawk while the intruder plays Dove. In this sense, the evolution of strategies in Hawk–Dove can be seen as the evolution of a sort of prototypical version of ownership. Game-theoretically, however, there is nothing special about this solution. The opposite solution—where the owner plays dove and the intruder plays Hawk—is equally stable. In fact, this solution is present in a certain species of spider; when an invader appears the occupying spider leaves. In order to explain the prevalence of property rights over "anti-property rights" one must discover a way to break this additional symmetry. Replicator dynamics Replicator dynamics is a simple model of strategy change commonly used in evolutionary game theory. In this model, a strategy which does better than the average increases in frequency at the expense of strategies that do worse than the average. There are two versions of the replicator dynamics. In one version, there is a single population which plays against itself. In another, there are two population models where each population only plays against the other population (and not against itself). In the one population model, the only stable state is the mixed strategy Nash equilibrium. Every initial population proportion (except all Hawk and all Dove) converges to the mixed strategy Nash equilibrium where part of the population plays Hawk and part of the population plays Dove. (This occurs because the only ESS is the mixed strategy equilibrium.) In the two population model, this mixed point becomes unstable. In fact, the only stable states in the two population model correspond to the pure strategy equilibria, where one population is composed of all Hawks and the other of all Doves. 
In this model one population becomes the aggressive population while the other becomes passive. This model is illustrated by the vector field pictured in Figure 7a. The one-dimensional vector field of the single population model (Figure 7b) corresponds to the bottom left to top right diagonal of the two population model. The single population model presents a situation where no uncorrelated asymmetries exist, and so the best players can do is randomize their strategies. The two population models provide such an asymmetry and the members of each population will then use that to correlate their strategies. In the two population model, one population gains at the expense of another. Hawk–Dove and Chicken thus illustrate an interesting case where the qualitative results for the two different versions of the replicator dynamics differ wildly. Related strategies and games Brinkmanship "Chicken" and "Brinkmanship" are often used synonymously in the context of conflict, but in the strict game-theoretic sense, "brinkmanship" refers to a strategic move designed to avert the possibility of the opponent switching to aggressive behavior. The move involves a credible threat of the risk of irrational behavior in the face of aggression. If player 1 unilaterally moves to A, a rational player 2 cannot retaliate since (A, C) is preferable to (A, A). Only if player 1 has grounds to believe that there is sufficient risk that player 2 responds irrationally (usually by giving up control over the response, so that there is sufficient risk that player 2 responds with A) will player 1 retract and agree on the compromise. War of attrition Like "Chicken", the "War of attrition" game models escalation of conflict, but they differ in the form in which the conflict can escalate. Chicken models a situation in which the catastrophic outcome differs in kind from the agreeable outcome, e.g., if the conflict is over life and death. War of attrition models a situation in which the outcomes differ only in degrees, such as a boxing match in which the contestants have to decide whether the ultimate prize of victory is worth the ongoing cost of deteriorating health and stamina. Hawk–dove and war of attrition The Hawk–Dove game is the most commonly used game theoretical model of aggressive interactions in biology. The war of attrition is another very influential model of aggression in biology. The two models investigate slightly different questions. The Hawk–Dove game is a model of escalation, and addresses the question of when ought an individual escalate to dangerously costly physical combat. The war of attrition seeks to answer the question of how contests may be resolved when there is no possibility of physical combat. The war of attrition is an auction in which both players pay the lower bid (an all-pay second price auction). The bids are assumed to be the duration which the player is willing to persist in making a costly threat display. Both players accrue costs while displaying at each other; the contest ends when the individual making the lower bid quits. Both players will then have paid the lower bid. Chicken and prisoner's dilemma Chicken is a symmetrical 2x2 game with conflicting interests: the preferred outcome is to play Straight while the opponent plays Swerve. Similarly, the prisoner's dilemma is a symmetrical 2x2 game with conflicting interests: the preferred outcome is to Defect while the opponent plays Cooperate. PD is about the impossibility of cooperation while Chicken is about the inevitability of conflict. 
Iterated play can solve PD but not Chicken. Both games have a desirable cooperative outcome in which both players choose the less escalated strategy, Swerve-Swerve in the Chicken game, and Cooperate-Cooperate in the prisoner's dilemma, such that players receive the Coordination payoff C (see tables below). The temptation away from this sensible outcome is toward a Straight move in Chicken and a Defect move in the prisoner's dilemma (generating the Temptation payoff, should the other player use the less escalated move). The essential difference between these two games is that in the prisoner's dilemma, the Cooperate strategy is dominated, whereas in Chicken the equivalent move is not dominated since the outcome payoffs when the opponent plays the more escalated move (Straight in place of Defect) are reversed. Schedule chicken and project management The term "schedule chicken" is used in project management and software development circles. The condition occurs when two or more areas of a product team claim they can deliver features at an unrealistically early date because each assumes the other teams are stretching the predictions even more than they are. This pretense continually moves forward from one project checkpoint to the next until feature integration begins or just before the functionality is actually due. The practice of "schedule chicken" often results in contagious schedule slips due to the inter-team dependencies and is difficult to identify and resolve, as it is in the best interest of each team not to be the first bearer of bad news. The psychological drivers underlying the "schedule chicken" behavior in many ways mimic the hawk–dove or snowdrift model of conflict. See also Brinkmanship Coordination game Fireship, a naval tactic of intentional suicidal ramming into an enemy ship Matching pennies Mexican standoff Prisoner's dilemma Ritualized aggression Si vis pacem, para bellum Volunteer's dilemma War of attrition Zugzwang Notes References External links The game of Chicken as a metaphor for human conflict Game-theoretic analysis of Chicken Game of Chicken – Rebel Without a Cause by Elmer G. Wiens. Non-cooperative games Evolutionary game theory Endurance games Social science experiments
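The equilibrium arithmetic quoted in the correlated equilibrium discussion above is easy to check by machine. The sketch below is an illustrative verification only; the payoff table matches the numbers quoted for Figure 6, but the code itself is our own, not from any cited source.

from fractions import Fraction

# Row player's payoffs for the version of Chicken described above:
# strategy pairs are (row, column); C = chicken out / swerve, D = dare.
payoff = {("C", "C"): 6, ("C", "D"): 2, ("D", "C"): 7, ("D", "D"): 0}

# Mixed strategy Nash equilibrium: each player dares with probability 1/3.
p = Fraction(1, 3)
expected = sum(
    (p if r == "D" else 1 - p) * (p if c == "D" else 1 - p) * payoff[(r, c)]
    for r in "CD" for c in "CD"
)
print(expected)  # 14/3, i.e. about 4.667 for each player

# Correlated equilibrium: a third party draws (C,C), (D,C) or (C,D),
# each with probability 1/3, and tells each player only their own move.
cards = [("C", "C"), ("D", "C"), ("C", "D")]
print(Fraction(sum(payoff[card] for card in cards), len(cards)))  # 5 > 14/3

# Incentive check for a player told to play C: the opponent was assigned
# C or D with probability 1/2 each, so deviating to D is worse than obeying.
print(Fraction(payoff[("D", "D")] + payoff[("D", "C")], 2))  # 7/2 = 3.5 (deviate)
print(Fraction(payoff[("C", "D")] + payoff[("C", "C")], 2))  # 4 (obey)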
Chicken (game)
Mathematics
4,929
1,325,302
https://en.wikipedia.org/wiki/Radar%20detector
A radar detector is an electronic device used by motorists to detect if their speed is being monitored by police or law enforcement using a radar gun. Most radar detectors are used so the driver can reduce the car's speed before being ticketed for speeding. In a general sense, only emitting technologies, such as Doppler radar or LIDAR, can be detected. Visual speed-estimating techniques, such as ANPR or VASCAR, cannot be detected in daytime, but are technically vulnerable to detection at night, when an IR spotlight is used. There are no reports that piezo sensors can be detected. LIDAR devices require an optical-band sensor, although many modern detectors include LIDAR sensors. Most of today's radar detectors detect signals across a variety of wavelength bands: usually X, K, and Ka. In Europe the Ku band is common as well. The past success of radar detectors was based on the fact that radio-wave beams cannot be made narrow enough, so the detector usually senses stray and scattered radiation, giving the driver time to slow down. Based on a focused laser beam, LIDAR technology does not suffer this shortcoming; however it requires precise aiming. Modern police radars incorporate formidable computing power, producing a minimum number of ultra-short pulses and reusing wide beams for multi-target measurement, which renders most detectors useless. However, mobile Internet allows GPS navigation devices to map police radar locations in real time. These devices are also often called "radar detectors", while not necessarily carrying an RF sensor. Description One device law enforcement uses to measure the speed of a moving vehicle is Doppler radar, which uses the Doppler effect to measure the relative speed of a vehicle. Doppler radar works by beaming a radio wave at a vehicle and then measuring the change in frequency of the wave reflected off the vehicle (a worked example of this frequency shift appears at the end of this entry). Law enforcement often employs Doppler radar via hand-held radar guns, from vehicles, or from fixed objects such as traffic signals. Radar detectors use a superheterodyne receiver to detect these electromagnetic emissions from the gun, and raise an alarm to notify the motorist when a transmission is detected. However, false alarms can occur due to the large number of devices, such as automatic door openers (such as the ones at supermarkets and drug stores), speed signs, blind spot monitoring systems, poorly designed radar detectors and adaptive cruise control, that operate in the same part of the electromagnetic spectrum as radar guns. In recent years, some radar detectors have added GPS technology. This allows users to manually store the locations where police frequently monitor traffic, with the detector sounding an alarm when approaching that location in the future (this is accomplished by pushing a button and does not require coordinates to be entered). These detectors also allow users to manually store the coordinates of sites of frequent false alarms, which the GPS enabled detector will then ignore. The detector can also be programmed to mute alerts when traveling below a preset speed, limiting unnecessary alerts. Some GPS enabled detectors can download the GPS coordinates of speed monitoring cameras and red-light cameras from the Internet, alerting the driver that they are approaching the camera. Counter technology Radar guns and detectors have evolved alternately over time to counter each other's technology in a form of civilian electronic "warfare". 
For example, as new frequencies have been introduced, radar detectors have initially been "blind" to them until their technology, too, has been updated. Similarly, the length of time and strength of the transmissions have been lowered to reduce the chance of detection, which in turn has resulted in more sensitive receivers and more sophisticated software counter technology. Lastly, radar detectors may combine other technologies, such as GPS-based technology with a point of interest database of known speed trapping locations, into a single device to improve their chances of success. Radar detector detectors The superheterodyne receiver in radar detectors has a local oscillator that radiates slightly, so it is possible to build a radar-detector detector, which detects such emissions (usually the frequency of the radar type being detected, plus about 10 MHz). The VG-2 Interceptor was the first device developed for this purpose, but has since been eclipsed by the Spectre III and Spectre Elite. This form of "electronic warfare" cuts both ways - since detector-detectors use a similar superheterodyne receiver, many early "stealth" radar detectors were equipped with a radar-detector-detector-detector circuit, which shuts down the main radar receiver when the detector-detector's signal is sensed, thus preventing detection by such equipment. This technique borrows from ELINT surveillance countermeasures. In the early 1990s, BEL-Tronics, Inc. of Ontario, Canada (where radar detector use is prohibited in most provinces) found that the local oscillator frequency of the detector could be altered to be out of the range of the VG-2 Interceptor (probably by using two local oscillator stages such that neither is near the RF frequency). This resulted in detector manufacturers responding by changing their local oscillator frequency. The VG-2 is no longer in production and radar detectors immune to the Spectre Elite are available. Radar scrambling It is illegal in many countries to sell or possess any products that actively transmit radar signals intended to jam radar equipment. In the United States, actively transmitting on a frequency licensed by the Federal Communications Commission (FCC) without a licence is a violation of FCC regulations, which may be punishable by fines up to $10,000 and/or up to one year imprisonment. LIDAR detection Newer speed detection devices use pulsed laser light, commonly referred to as LIDAR, rather than radio waves. Radar detectors, which detect radio transmissions, are unable to detect the infrared light emitted by LIDAR guns, so a different type of device called a LIDAR detector is required. However, LIDAR detection is not nearly as effective as radar detection because the output beam is very focused. While a radar beam spreads out considerably with distance from its source, a LIDAR beam remains narrowly focused. A police officer targeting a car will most likely aim for the center mass or headlight of the vehicle and, because radar detectors are mounted on the windshield away from the beam's aim, they may not alert at all. With such a focused beam, an officer using a LIDAR gun can target a single car in close proximity to others, even at long range. This has resulted in some manufacturers producing LIDAR jammers. Unlike those of radar, LIDAR's frequencies and use are not controlled by the FCC. These jammers attempt to confuse police LIDAR into showing no speed on the display. 
They are often successful, and therefore many LIDAR manufacturers produce LIDAR guns that have "jam codes" that show when they are being jammed. These jam codes work against some LIDAR jammers, but not all. In spite of this, police can often tell when they are being jammed when they see no reading on their LIDAR gun. Many jammer-equipped motorists try to counter this by reducing their speed to legal limits before turning off their jammer equipment, a technique known as "kill the equipment", referred to as "JTK" or "Jam to Kill." Officers can often detect this by observing that their LIDAR equipment is unable to lock in a speed properly, along with visual indication of sudden deceleration of the targeted vehicle. They will then pull the offending vehicle over and look for LIDAR jammers on the front of the vehicle, potentially charging the motorist with obstruction of justice. Some states also have laws against jamming of police radar or LIDAR: California, Colorado, Illinois, Iowa, Minnesota, Oklahoma, South Carolina, Tennessee, Texas, Utah and Virginia. In these states, the penalties can be severe. Despite the advent of LIDAR speed detection, radar remains more prevalent because of its lower price and the amount of radar equipment already in service. In addition, proper use of LIDAR equipment requires the officer to remain stationary in order to beam a very precise signal. Legality Using or possessing a radar detector or jammer is illegal in certain countries, and it may result in fines, seizure of the device, or both. These prohibitions generally are introduced under the premise that a driver who uses a radar detector will pose a greater risk of accident than a driver who does not. The table below provides information about laws regarding radar detectors in particular nations. In 1967 devices to warn drivers of radar speed traps were being manufactured in the United Kingdom; they were deemed illegal under the Wireless Telegraphy Act 1949. See also Laser jammer Road safety Traffic enforcement camera References Automotive electronics Consumer electronics Traffic law Radar Detectors Radar warning receivers de:Radarwarnanlage#Radarwarner im Straßenverkehr
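The Doppler principle described above is simple to quantify. For a stationary radar and a vehicle closing at speed v, the reflected signal returns with an approximate frequency shift of 2·v·f0/c, since the motion shifts the wave once on the way out and once on reflection. The sketch below is an illustrative calculation only; the band centre frequencies are typical values chosen for the example, not regulatory specifications.

C = 299_792_458.0  # speed of light, m/s

# Typical (illustrative) police radar carrier frequencies by band, in Hz.
BANDS = {"X": 10.525e9, "K": 24.150e9, "Ka": 34.700e9}

def doppler_shift(speed_mps, carrier_hz):
    """Approximate two-way Doppler shift for a target closing at speed_mps."""
    return 2.0 * speed_mps * carrier_hz / C

speed = 30.0  # about 108 km/h (67 mph)
for band, f0 in BANDS.items():
    print(f"{band}-band: {doppler_shift(speed, f0):.0f} Hz")
# Roughly 2106 Hz (X), 4833 Hz (K) and 6944 Hz (Ka): small, audio-range
# offsets that the gun must resolve against a multi-gigahertz carrier.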
Radar detector
Technology
1,810
57,591,866
https://en.wikipedia.org/wiki/National%20Initiative%20for%20Cybersecurity%20Education
The National Initiative for Cybersecurity Education (NICE) is a partnership between government, academia, and the private sector focused on supporting the country's ability to address current and future cybersecurity education and workforce challenges through standards and best practices. NICE is led by the National Institute of Standards and Technology (NIST) in the U.S. Department of Commerce. History The Comprehensive National Cybersecurity Initiative (CNCI), established by President George W. Bush in January 2008, included over twelve initiatives, one of which, Initiative 8, was aimed at making the Federal cybersecurity workforce better prepared to handle cybersecurity challenges. In May 2009, the Cyberspace Policy Review, directed by President Barack Obama, elevated the CNCI Initiative 8, which had initially been focused on improving the Federal cybersecurity workforce's ability to perform cybersecurity work. The scope was expanded beyond the Federal workforce to include the private sector workforce, making it a national charge. In March 2010, the Obama administration declassified limited material regarding the CNCI, making Initiative 8 public: Initiative #8. Expand cyber education. While billions of dollars are being spent on new technologies to secure the U.S. Government in cyberspace, it is the people with the right knowledge, skills, and abilities to implement those technologies who will determine success. However there are not enough cybersecurity experts within the Federal Government or private sector to implement the CNCI, nor is there an adequately established Federal cybersecurity career field. Existing cybersecurity training and personnel development programs, while good, are limited in focus and lack unity of effort. In order to effectively ensure our continued technical advantage and future cybersecurity, we must develop a technologically-skilled and cyber-savvy workforce and an effective pipeline of future employees. It will take a national strategy, similar to the effort to upgrade science and mathematics education in the 1950s, to meet this challenge. Additionally, the CNCI described training, education, and professional development programs as lacking “unity of effort”. Cybersecurity Enhancement Act of 2014 Title IV established the “National cybersecurity awareness and education program”, which is now known as the National Initiative for Cybersecurity Education (NICE). Organization NICE is headquartered at NIST facilities in Gaithersburg, Maryland. The NICE Program Office activities are organized into three categories: government engagement, industry engagement, and academic engagement. See also List of computer security certifications Cyber security standards References External links Federal website for NICE Workforce Computer network security Initiatives in the United States 2010 establishments in the United States National Institute of Standards and Technology
National Initiative for Cybersecurity Education
Engineering
544
2,064,584
https://en.wikipedia.org/wiki/Click-through%20rate
Click-through rate (CTR) is the ratio of clicks on a specific link to the number of times a page, email, or advertisement is shown. It is commonly used to measure the success of an online advertising campaign for a particular website, as well as the effectiveness of email campaigns. Click-through rates for ad campaigns vary tremendously. The first online display ad, shown for AT&T on the website HotWired in 1994, had a 44% click-through rate. With time, the overall rate of users' clicks on webpage banner ads has decreased. Purpose The purpose of click-through rates is to measure the ratio of clicks to impressions of an online ad or email marketing campaign. Generally, the higher the CTR, the more effective the marketing campaign has been at bringing people to a website. Most commercial websites are designed to elicit some sort of action, whether it be to buy a book, read a news article, watch a music video, or search for a flight. People rarely visit websites with the intention of viewing advertisements, in the same way that few people watch television to view the commercials. While marketers want to know the reaction of the web visitor, with current technology it is nearly impossible to quantify the emotional reaction to the site and the effect of that site on the firm's brand. In contrast, it is easy to determine the click-through rate, which measures the proportion of visitors who clicked on an advertisement that redirected them to another page. Forms of interaction with advertisements other than clicking are possible but rare; "click-through rate" is the most commonly used term to describe the efficacy of an advert. Construction The click-through rate of an advertisement is the number of times a click is made on the ad, divided by the number of times the ad is "served", that is, shown (also called impressions), expressed as a percentage: $\mathrm{CTR} = \frac{\text{clicks}}{\text{impressions}} \times 100\%$. For example, an ad served 1,000 times and clicked 5 times has a CTR of 0.5% (a short computational example also appears at the end of this entry). Online advertising Click-through rates for banner ads have decreased over time. When banner ads first started to appear, it was not uncommon to have rates above five percent. They have fallen since then, currently averaging closer to 0.2 or 0.3 percent. In most cases, a 2% click-through rate would be considered very successful, though the exact number is hotly debated and would vary depending on the situation. The average click-through rate of 3% in the 1990s declined to 2.4%–0.4% by 2002. Since advertisers typically pay more for a high click-through rate, getting many click-throughs with few purchases is undesirable to advertisers. Similarly, by selecting an appropriate advertising site with high affinity (e.g., a movie magazine for a movie advertisement), the same banner can achieve a substantially higher CTR. Though personalized ads, unusual formats, and more obtrusive ads typically result in higher click-through rates than standard banner ads, overly intrusive ads are often avoided by viewers. Modern online advertising has moved beyond just using banner ads. Popular search engines allow advertisers to display ads alongside the search results triggered by a search user. These ads are usually in text format and may include additional links and information like phone numbers, addresses, and specific product pages. This additional information moves away from the poor user experience that can be created from intrusive banner ads and provides useful information to the search user, resulting in higher click-through rates for this format of pay-per-click advertising. 
Since CTR is an expression of the relevancy of the ads to the user search, higher click-through rates are generally rewarded with a better quality score attributed to the ads, which in turn might lead to lower CPC, therefore incentivising advertisers to continually improve the relevancy of their ads. However, having a high click-through rate isn't the only goal for an online advertiser, who may develop campaigns to raise awareness for the overall gain of valuable traffic, sacrificing some click-through rate for that purpose. Estimating the Click-Through Rate for Ads Search engine advertising has become a significant element of the Web browsing experience. Choosing the right ads for the query and the order in which they are displayed greatly affects the probability that a user will see and click on each ad. This ranking has a strong impact on the revenue the search engine receives from the ads. Further, showing the user an ad that they prefer to click on improves user satisfaction. For these reasons, there is an increasing interest in accurately estimating the click-through rate of ads in a recommender system. Email An email click-through rate is defined as the number of recipients who click one or more links in an email and land on the sender's website, blog, or other desired destination. More simply, email click-through rates represent the number of clicks that your email generated. Email click-through rate is expressed as a percentage, and calculated by dividing the number of click-throughs by the number of tracked message deliveries. Most email marketers use these metrics, along with open rate, bounce rate and other metrics, to understand the effectiveness and success of their email campaign. In general, there is no ideal click-through rate. This metric can vary based on the type of email sent, how frequently emails are sent, how the list of recipients is segmented, how relevant the content of the email is to the audience, and many other factors. Even the time of day can affect the click-through rate. Sunday appears to generate considerably higher click-through rates on average when compared to the rest of the week. Every year, various types of research studies are conducted to track the overall effectiveness of click-through rates in email marketing. Click-Through Rate and Search Engine Optimization Some experts on search engine optimization (SEO) have claimed since the mid-2010s that click-through rate has an impact on organic rankings. Numerous case studies have been published to support this theory. Proponents supporting this theory often claim that the click-through rate is a ranking signal for Google's RankBrain algorithm. Opponents of this theory claim that the click-through rate has little or no impact on organic rankings. Bartosz Góralewicz published the results of an experiment on Search Engine Land where he claims, "Despite popular belief, click-through rate is not a ranking factor. Even massive organic traffic won’t affect your website’s organic positions." More recently, Barry Schwartz wrote on Search Engine Land, "...Google has said countless times, in writing, at conferences, that CTR is not used in their ranking algorithm." 
See also Abandonment rate Banner blindness Clickbait Click fraud CPA – Cost per acquisition Cost per action Cost per click Cost per lead Cost per thousand CPI eCPA – effective cost per acquisition/action Internet marketing PPC – Pay per click View-through rate References Further reading Sherman, Lee and John Deighton, (2001), "Banner advertising: Measuring effectiveness and optimizing placement," Journal of Interactive Marketing, Spring, Vol. 15, Iss. 2. Ward A. Hanson and Kirthi Kalyanam, (2007), "Internet Marketing and eCommerce", Chapter8, Traffic Building, Thomson College Pub, Mason, Ohio. External links MASB Official Website Advertising indicators Audience measurement Online advertising Email Consumer behaviour Rates Marketing analytics
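As a worked illustration of the constructions above, the same calculation serves both display ads and email campaigns; all names and figures in this sketch are invented for the example.

def click_through_rate(clicks: int, impressions: int) -> float:
    """Return CTR as a percentage of impressions (or tracked deliveries)."""
    if impressions <= 0:
        raise ValueError("impressions must be positive")
    return 100.0 * clicks / impressions

# A display ad served 50,000 times and clicked 150 times:
print(click_through_rate(150, 50_000))  # 0.3 (%), in line with modern banner averages

# An email campaign with 8,000 tracked deliveries and 360 unique click-throughs:
print(click_through_rate(360, 8_000))   # 4.5 (%)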
Click-through rate
Biology
1,516
14,167,920
https://en.wikipedia.org/wiki/GABRB3
Gamma-aminobutyric acid receptor subunit beta-3 is a protein that in humans is encoded by the GABRB3 gene. It is located within the 15q12 region in the human genome and spans 250kb. This gene includes 10 exons within its coding region. Due to alternative splicing, the gene codes for many protein isoforms, all being subunits in the GABAA receptor, a ligand-gated ion channel. The beta-3 subunit is expressed at different levels within the cerebral cortex, hippocampus, cerebellum, thalamus, olivary body and piriform cortex of the brain at different points of development and maturity. GABRB3 deficiencies are implicated in many human neurodevelopmental disorders and syndromes such as Angelman syndrome, Prader-Willi syndrome, nonsyndromic orofacial clefts, epilepsy and autism. The effects of methaqualone and etomidate are mediated through GABRB3 positive allosteric modulation. Gene The GABRB3 gene is located on the long arm of chromosome 15, within the q12 region in the human genome. It is located in a gene cluster, with two other genes, GABRG3 and GABRA5. GABRB3 was the first gene to be mapped to this particular region. It spans approximately 250kb and includes 10 exons within its coding region, as well as two additional alternative first exons that encode for signaling peptides. Alternatively spliced transcript variants encoding isoforms with distinct signal peptides have been described. This gene is located within an imprinting region that spans the 15q11-13 region. Its sequence is considerably longer than the two other genes found within its gene cluster due to a large 150kb intron it carries. A pattern is observed in GABRB3 gene replication: in humans the maternal allele is replicated later than the paternal allele. The reasoning and implications of this pattern are unknown. When comparing the human beta-3 subunit's genetic sequence with other vertebrate beta-3 subunit sequences, there is a high level of genetic conservation. In mice the Gabrb3 gene is located on chromosome 7 of the mouse genome, in a similar gene-cluster arrangement with some of the other subunits of the GABAA receptor. Function GABRB3 encodes a member of the ligand-gated ion channel family. The encoded protein is one of at least 13 distinct subunits of a multisubunit chloride channel that serves as the receptor for gamma-aminobutyric acid, the major inhibitory neurotransmitter of the nervous system. The two other genes in the gene cluster both encode for related subunits of the family. During development, when the GABRB3 subunit functions optimally, its role in the GABAA receptor allows for proliferation, migration, and differentiation of precursor cells that lead to the proper development of the brain. GABAA receptor function is inhibited by zinc ions. The ions bind allosterically to the receptor, a mechanism that is critically dependent on the receptor subunit composition. De novo heterozygous missense mutations within a highly conserved region of the GABRB3 gene can decrease the peak current amplitudes of neurons or alter the kinetic properties of the channel. This results in the loss of the inhibitory properties of the receptor. The mouse beta-3 subunit is very similar in function to the human version of the subunit. Structure The crystal structure of a human β3 homopentamer was published in 2014. The study of the crystal structure of the human β3 homopentamer revealed unique qualities that are only observed in eukaryotic cysteine-loop receptors. 
The characterization of the GABAA receptor and its subunits helps with the mechanistic determination of mutations within the subunits and what direct effect the mutations may have on the protein and its interactions. Expression The expression of GABRB3 is not constant among all cells or at all stages of development. The distribution of expression of the GABAA receptor subunits (GABRB3 included) during development indicates that GABA may function as a neurotrophic factor, impacting neural differentiation, growth, and circuit organization. The expression of the beta-3 subunit reaches its peak at different times in different locations of the brain during development. The highest levels of Gabrb3 expression in mice within the cerebral cortex and hippocampus are reached prenatally, while those in the cerebellar cortex are reached postnatally. After the highest peak of expression, Gabrb3 expression is down-regulated substantially in the thalamus and inferior olivary body of the mouse. By adulthood, the level of expression in the cerebral cortex and hippocampus drops below developmental expression levels, but the expression in the cerebellum does not change postnatally. The highest levels of Gabrb3 expression in the mature mouse brain occur in the Purkinje and granule cells of the cerebellum, the hippocampus, and the piriform cortex. In humans, the beta-3 subunit, as well as the subunits of its two neighbouring genes (GABRG3 and GABRA5), are bi-allelically expressed within the cerebral cortex, indicating that the gene is not subjected to imprinting within those cells. Imprinting Patterns Due to the location of GABRB3 in the 15q11-13 imprinting region found in humans, this gene is subject to imprinting depending on the location and the cell's developmental state. Imprinting is not present in the mouse brain, which shows equal expression from maternal and paternal alleles. Regulation Phosphorylation of the GABAA receptor by cAMP-dependent protein kinase (PKA) has a regulatory effect dependent on the beta subunit involved. The mechanism by which the kinase is targeted towards the beta-3 subunit is unknown. AKAP79/150 binds directly to the GABRB3 subunit, which is critical for its own phosphorylation, mediated by PKA. Gabrb3 shows significantly reduced expression postnatally when mice are deficient in MECP2. When the MECP2 gene is knocked out, the expression of Gabrb3 is reduced, suggesting a relationship of positive regulation between the two genes. Clinical significance Mutations in this gene may be associated with the pathogenesis of Angelman syndrome, nonsyndromic orofacial clefts, epilepsy and autism. The GABRB3 gene has been associated with savant skills accompanying such disorders. In mice, the knockout mutation of Gabrb3 causes severe neonatal mortality with the cleft palate phenotype present, the survivors experiencing hyperactivity, lack of coordination and epileptic seizures. These mice also exhibit changes to the vestibular system within the ear, resulting in poor swimming skills and difficulty in walking on grid floors, and are found to run in circles erratically. Angelman syndrome Deletion of the GABRB3 gene results in Angelman syndrome in humans, depending on the parental origin of the deletion. Deletion of the paternal allele of GABRB3 has no known implications for this syndrome, while deletion of the maternal GABRB3 allele results in development of the syndrome. Nonsyndromic Orofacial Clefting There is a strong association between GABRB3 expression levels and proper palate development. 
A disturbance in GABRB3 expression can be linked to the malformation of nonsyndromic cleft lip with or without cleft palate. Cleft lip and palate have also been observed in children who have inverted duplications encompassing the GABRB3 locus. Knockout of the beta-3 subunit in mice results in clefting of the secondary palate. Normal facial characteristics can be restored through the insertion of a Gabrb3 transgene into the mouse genome, indicating that the Gabrb3 gene is primarily responsible for the cleft palate phenotype. Autism Spectrum Disorder Duplications of the Prader-Willi/Angelman syndrome region, also known as the imprinting region (15q11-13), which encompasses the GABRB3 gene, are present in some patients diagnosed with autism. These patients exhibit classic symptoms that are associated with the disorder. Duplications of the 15q11-13 region displayed in autistic patients are almost always of maternal origin (not paternal) and account for 1–2% of diagnosed autism disorder cases. This gene is also a candidate for autism because of the physiological response that benzodiazepines produce at the GABAA receptor when used to treat seizures and anxiety disorders. The Gabrb3-deficient mouse has been proposed as a model of autism spectrum disorder. These mice exhibit phenotypes, such as non-selective attention, deficits in a variety of exploratory parameters, sociability, social novelty and nesting, and lower rearing frequency, that can be equated to characteristics found in patients diagnosed with autism spectrum disorder. When studying Gabrb3-deficient mice, significant hypoplasia of the cerebellar vermis was observed. An association of unknown significance has been reported between autism and the 155CA-2 locus, located within an intron of GABRB3. Epilepsy/Childhood absence epilepsy Defects in GABA transmission have often been implicated in epilepsy within animal models and human syndromes. Patients that are diagnosed with Angelman syndrome and have a deletion of the GABRB3 gene exhibit absence seizures. Reduced expression of the beta-3 subunit is a potential contributor to childhood absence epilepsy. See also GABAA receptor Heritability of autism References Further reading External links Ion channels Genetics of autism
GABRB3
Chemistry
2,042
14,878,473
https://en.wikipedia.org/wiki/SIPA1
Signal-induced proliferation-associated protein 1 is a protein that in humans is encoded by the SIPA1 gene. The product of this gene is a mitogen-induced GTPase-activating protein (GAP). It exhibits a specific GAP activity for the Ras-related regulatory proteins Rap1 and Rap2, but not for Ran or other small GTPases. This protein may also hamper mitogen-induced cell cycle progression when abnormally or prematurely expressed. It is localized to the perinuclear region. Two alternatively spliced variants encoding the same isoform have been characterized to date. References Further reading
SIPA1
Chemistry
125
1,321,065
https://en.wikipedia.org/wiki/Myoporum%20laetum
Myoporum laetum, commonly known as ngaio ( , ) or mousehole tree, is a species of flowering plant in the family Scrophulariaceae and is endemic to New Zealand. It is a fast-growing shrub or small tree with lance-shaped leaves, the edges with small serrations, and white flowers with small purple spots and 4 stamens. Description Ngaio is a fast-growing evergreen shrub or small tree that sometimes grows to a height of with a trunk up to in diameter, or spreads to as much as . It often appears dome-shaped at first but, as it gets older, becomes distorted as branches break off. The bark on older specimens is thick, corky and furrowed. The leaves are lance-shaped, usually long, wide, bear many translucent dots, and have edges with small serrations in approximately the outer half. The flowers are white with purple spots and are borne in groups of 2 to 6 on stalks long. There are 5 egg-shaped, pointed sepals and 5 petals joined at their bases to form a bell-shaped tube long. The petal lobes are long, making the flower diameter . There are four stamens that extend slightly beyond the petal tube, and the ovary is superior with 2 locules. Flowering occurs from mid-spring to mid-summer and is followed by the fruit, which is a bright red drupe long. Taxonomy and naming Myoporum laetum was first formally described in 1786 by Georg Forster in Florulae Insularum Australium Prodromus. The specific epithet (laetum) means "cheerful, pleasant or bright". Distribution and habitat Ngaio grows very well in coastal areas of New Zealand including the Chatham Islands. It grows in lowland forest, sometimes in pure stands, and at other times in association with other species such as nīkau (Rhopalostylis sapida). Ecology Myoporum laetum has been introduced to several other countries including Portugal, South Africa and Namibia. It is considered an invasive exotic species by the California Exotic Pest Plant Council. Uses Indigenous use The Māori would rub the leaves over their skin to repel mosquitoes and sandflies. Horticulture Ngaio is a hardy plant that will grow in most soils but needs full sun. It can also tolerate exposure to salt spray. It can be grown from seed or from semi-hard cuttings. Toxicity The leaves of this tree contain the liver toxin ngaione, which can cause sickness or death in stock such as horses, cattle, sheep and pigs. Māori legend According to Māori legend, a Ngaio tree can be seen on the Moon. The story was recounted by the politician, historian and poet William Pember Reeves (1857–1932). See also Catherine Alexander (botanist) References laetum Trees of New Zealand Plants described in 1786 Moon myths Flora of New Zealand
Myoporum laetum
Astronomy
594
78,864,569
https://en.wikipedia.org/wiki/Oenethyl
Oenethyl, also known as 2-methylaminoheptane and sold under the brand names Pacamine and Neosupranol, is a sympathomimetic and vasopressor medication of the alkylamine group which is no longer marketed. It was used as a nasal decongestant and to control blood pressure during anesthesia. It is closely structurally related to other alkylamines, such as methylhexanamine and tuaminoheptane. These compounds are known to act as structurally simple monoamine releasing agents and to produce psychostimulant-like effects. See also 1,3-Dimethylbutylamine Heptaminol Iproheptine Isometheptene Methylhexanamine Octodrine Tuaminoheptane References Abandoned drugs Alkylamines Antihypotensive agents Decongestants Sympathomimetics
Oenethyl
Chemistry
196
51,517,357
https://en.wikipedia.org/wiki/Nintendo%20hard
"Nintendo hard" is an informal term used to describe extreme difficulty in video games. It often refers to games with trial-and-error gameplay and limited or nonexistent saving of progress. The enduring term originated with Nintendo Entertainment System (NES) games from the mid-1980s to early 1990s, such as Ghosts 'n Goblins (1986), Contra (1988), Ninja Gaiden (1988), and Battletoads (1991). History The Nintendo hard difficulty of many games released for the Nintendo Entertainment System (NES) was influenced by the popularity of arcade games in the mid-1980s, a period where players put countless coins in machines trying to beat a game that was brutally hard yet very enjoyable. The difficulty of many games released in the 1980s and 1990s has also been attributed to the hardware limitations affecting gameplay. Former Nintendo president Satoru Iwata said in an interview regarding how NES games were made: "Everyone involved in the production would spend all night playing it, and because they made games, they became good at them. So these expert gamers make the games, saying 'This is too easy'". Also, Damiano Gerli of Ars Technica observed that extreme difficulty made it possible for a game with little actual content (in terms of number of levels or opponents) to provide a long period of gameplay. This specific method of increasing length through difficulty was also employed to combat video game rentals, with some games being made more difficult to prevent them from being beaten within a rental period and thus costing the developer potential sales. The number of current games considered Nintendo hard decreased significantly with the fourth-generation 16-bit period of video gaming, including Super Star Wars (1992).According to Michael Enger, indie games like I Wanna Be the Guy (2007) and Super Meat Boy (2010) are an "obvious homage" to the Nintendo hard games of the NES era, labeled as "masocore". Analysis Arcade conversions and 2D platform games are commonly called Nintendo hard. The Houston Press described the Nintendo hard era as a period where games "universally felt like they hated us for playing them". GamesRadar journalist Maxwell McGee noted the variety of types of "Nintendo hard" games in the NES library: "A game can be difficult because it's genuinely hard, or because it demands you finish the entire adventure in one sitting. It can litter the playing field with spikes and bottomless pits ... or be so hopelessly obtuse you have no idea how to advance". He wrote that several NES games, such as Yo! Noid (1990), Silver Surfer (1990), and Teenage Mutant Ninja Turtles (1989) garnered their Nintendo hard difficulty "for all the wrong reasons". Journalist Michael Enger did not qualify games with challenges that came from poorly-designed gameplay as Nintendo hard, but rather only games that were well made and are replayable but still extremely hard. Examples The games in the following list have been recognized as being some of the hardest NES games and for some, all platforms. References Nintendo Entertainment System Video game terminology
Nintendo hard
Technology
629
32,611,592
https://en.wikipedia.org/wiki/Dual%20ignition
Dual ignition is a system for spark-ignition engines, whereby critical ignition components, such as spark plugs and magnetos, are duplicated. Dual ignition is most commonly employed on aero engines, and is sometimes found on cars and motorcycles. Dual ignition provides two advantages: redundancy in the event of in-flight failure of one ignition system; and more efficient burning of the fuel-air mixture within the combustion chamber. In aircraft and gasoline-powered firefighting equipment, redundancy is the prime consideration, but in other vehicles the main targets are efficient combustion and meeting emission law requirements. Efficiency A dual ignition system will typically provide that each cylinder has twin spark plugs, and that the engine will have at least two ignition circuits, such as duplicate magnetos or ignition coils. Dual ignition promotes engine efficiency by initiating twin flame fronts, giving faster and more complete burning and thereby increasing power. Although a dual ignition system is a method of achieving optimum combustion and better fuel consumption, it remains rare in cars and motorcycles because of difficulties in siting the second plug within the cylinder head (thus, many dual ignition systems found on production automobiles were typically of a two-valve design rather than a four-valve one). The Nash Ambassador of 1932–1948 used twin spark plugs on its straight-eight engine, while later Alfa Romeo Twin Spark cars use dual ignition, as do Honda cars with the i-DSI series engines, and Chrysler's modern Hemi engine. In 1980 Nissan installed twin spark plugs on the Nissan NAPS-Z engine, with Ford introducing it on the 1989 Ford Ranger and 1991 Ford Mustang four-cylinder models. Several modern Mercedes-Benz engines also have two spark plugs per cylinder, such as the M112 and M113 engines. Some motorcycles, such as the Honda VT500 and the Ducati Multistrada, also have dual ignition. The 2012 Ducati Multistrada was upgraded with "twin-plug cylinder heads for smoother, more efficient combustion", the change contributing to a 5% increase in torque and a 10% improvement in fuel consumption. Early BMW R1100S bikes had a single spark plug per cylinder, but after 2003 they were upgraded to dual ignition to meet emission law requirements. Safety Dual ignition in aero-engines should enable the aircraft to continue to fly safely after an ignition system failure. Operating an aero engine on one magneto (rather than both) typically results in a drop of around 75 rpm. Its existence on aviation powerplants dates back to the World War I years, when such engines as the Hispano-Suiza 8 and Mercedes D.III, and even rotary engines such as the later Gnome Monosoupape model 9N, featured twin spark plugs per cylinder. The Hewland AE75, an inline three-cylinder aero-engine created for the ARV Super2, had three ignition circuits, each circuit serving a plug in two different cylinders. If just one of the three circuits failed, all three cylinders still received sparks, and even if two circuits were to fail, the remaining circuit would keep the engine running on two cylinders. Partial dual ignition While true dual ignition uses completely separate and redundant systems, some certified engines, such as the Lycoming O-320-H2AD, use a single engine magneto drive-shaft turning two separate magnetos. Whilst saving weight, this creates a mechanical single point of failure that could cause both ignition systems to cease working. 
A simple form of partial dual ignition on some amateur-built aircraft uses a single spark plug, but duplicates the coil and pick-up for better redundancy than traditional single ignition. A further form of partial dual ignition (such as on the Honda VT500) is for each cylinder to have a single HT coil which sends the current to one plug and completes the circuit via the second plug, rather than via the earth. This system requires a voltage sufficient to jump both plug gaps, but an advantage is that if one plug fouls, the fouled plug may burn itself clean while the engine continues running. Wankel engines Wankel engines have such an elongated combustion chamber that even non-aero Wankel engines may adopt dual ignition to promote better combustion. The MidWest AE series Wankel aero-engine has twin plugs per chamber, but these are placed side-by-side, not sequentially, so their main purpose is to give redundancy rather than improved combustion. Distillate fuel Richard W. Dilworth of the Electro-Motive Corporation devised a system, using four spark plugs and one carburettor per cylinder, in order to burn "distillate" fuel in train car engines. Because such heavy, but cheap, fuel was hard to ignite, a quadruple system of ignition was used in order to burn fuel roughly equivalent to kerosene or home heating fuel. By using this distillate fuel, which cost as little as one-fifth the price of gasoline before the Great Depression, a railroad could save substantially on fuel costs. However, this patented ignition system saw little commercial use. References Engine technology Ignition systems Gasoline engines
Dual ignition
Technology
1,043
22,361,204
https://en.wikipedia.org/wiki/March%202045%20lunar%20eclipse
A penumbral lunar eclipse will occur at the Moon's descending node of orbit on Friday, March 3, 2045, with an umbral magnitude of −0.0148. A lunar eclipse occurs when the Moon moves into the Earth's shadow, causing the Moon to be darkened. A penumbral lunar eclipse occurs when part or all of the Moon's near side passes into the Earth's penumbra. Unlike a solar eclipse, which can only be viewed from a relatively small area of the world, a lunar eclipse may be viewed from anywhere on the night side of Earth. Because the eclipse will occur about 1.8 days after perigee (on March 1, 2045, at 13:40 UTC), the Moon's apparent diameter will be larger than average. Visibility The eclipse will be completely visible over North and South America, seen rising over northeast Asia and eastern Australia and setting over west Africa and western Europe. Eclipse details Shown below is a table displaying details about this particular lunar eclipse. It describes various parameters pertaining to this eclipse. Eclipse season This eclipse is part of an eclipse season, a period, roughly every six months, when eclipses occur. Only two (or occasionally three) eclipse seasons occur each year, and each season lasts about 35 days and repeats just short of six months (173 days) later; thus two full eclipse seasons always occur each year. Either two or three eclipses happen each eclipse season. In the sequence below, each eclipse is separated by a fortnight. Related eclipses Eclipses in 2045 An annular solar eclipse on February 16. A penumbral lunar eclipse on March 3. A total solar eclipse on August 12. A penumbral lunar eclipse on August 27. Metonic Preceded by: Lunar eclipse of May 16, 2041 Followed by: Lunar eclipse of December 20, 2048 Tzolkinex Preceded by: Lunar eclipse of January 21, 2038 Followed by: Lunar eclipse of April 14, 2052 Half-Saros Preceded by: Solar eclipse of February 27, 2036 Followed by: Solar eclipse of March 9, 2054 Tritos Preceded by: Lunar eclipse of April 3, 2034 Followed by: Lunar eclipse of February 1, 2056 Lunar Saros 143 Preceded by: Lunar eclipse of February 20, 2027 Followed by: Lunar eclipse of March 14, 2063 Inex Preceded by: Lunar eclipse of March 23, 2016 Followed by: Lunar eclipse of February 11, 2074 Triad Preceded by: Lunar eclipse of May 3, 1958 Followed by: Lunar eclipse of January 2, 2132 Lunar eclipses of 2042–2045 Metonic series Saros 143 Tritos series Half-Saros cycle A lunar eclipse is preceded and followed by solar eclipses 9 years and 5.5 days apart (a half saros). This lunar eclipse is related to two partial solar eclipses of Solar Saros 150. See also List of lunar eclipses and List of 21st-century lunar eclipses Notes External links 2045-03 2045-03 2045 in science
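The cycle relationships listed above lend themselves to a quick numerical check. The following Python sketch (an illustration only, not derived from any eclipse catalog) shifts the 3 March 2045 date by the standard mean lengths of the saros, half-saros, tritos, and inex cycles; because eclipses are instants rather than whole days, the computed dates can differ by a day from the dates listed in this article.

from datetime import date, timedelta

# Mean eclipse-cycle lengths in days (standard astronomical values).
CYCLES = {
    "saros": 6585.32,       # about 18 years 11 days
    "half-saros": 3292.66,  # about 9 years 5.5 days; relates lunar and solar eclipses
    "tritos": 3986.63,      # about 11 years minus 1 month
    "inex": 10571.95,       # about 29 years minus 20 days
}

eclipse = date(2045, 3, 3)  # the penumbral lunar eclipse described above

for name, length in CYCLES.items():
    step = timedelta(days=round(length))
    # Approximate dates of the preceding and following eclipses in each cycle.
    print(f"{name:>10}: {eclipse - step}  /  {eclipse + step}")

Running this reproduces, to within a day, the related eclipses given above, for example the tritos pair of 3 April 2034 and 1 February 2056 and the inex pair of 23 March 2016 and 11 February 2074.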
March 2045 lunar eclipse
Astronomy
621
12,647,784
https://en.wikipedia.org/wiki/Meuse/Haute%20Marne%20Underground%20Research%20Laboratory
The Meuse/Haute Marne Underground Research Laboratory is a laboratory located 500 metres underground in Bure in the Meuse département. It allows study of the geological formation in order to evaluate its suitability for a deep geological repository for high-level and long-lived medium-level radioactive waste. It is managed by the Agence nationale pour la gestion des déchets radioactifs (Andra), the French nuclear waste management authority. Since radioactive waste needs to be safely stored for extreme lengths of time, the geology of the area is of utmost importance. Geologically, this site chiefly consists of Kimmeridgian claystone 500 metres underground in the Paris Basin. The exploratory work was for the Cigéo project, which would store medium-level waste from 2025 onwards at Bure. These plans have been met with protests. History The first practical geological studies on locations for a deep geological repository in France date back to the 1960s. In the 1980s Andra, at that time a branch of the CEA, was given the task of investigating possible locations for an underground research laboratory. Site selection Two geological formations were initially considered in the 1990s: clay and granite. The 1991 law thus dictated that research would be done at several possible sites. In 1994, work by Andra investigated a wide range of locations in four separate départements, and further narrowed down the choice to three locations. Layout All above- and below-ground facilities at the site are organized around two wells. Surface installations There are headframes above each well for transporting equipment and people in and out. There are also numerous other surface buildings and research facilities, which occupy a total of 170,000 square metres. The reception building has a green roof. Tunnels As of 2007, a 40-metre-long tunnel had been completed at the 445 m underground level, while almost 500 m of tunnels had been excavated at the 490 m underground level. Further extensions were built between 2007 and 2009, and more were scheduled to be completed by 2015. Cigéo After 20 years of exploratory research, ANDRA intends to file a request in 2019 to build Cigéo (French: Centre Industriel de Stockage Géologique), which will store underground the most radioactive waste from French nuclear power stations. The Nuclear Safety Authority has confirmed that the rock has not moved for several million years, although it wants a solution to be found to the problem of bituminized waste. The future storage centre would have an area of 600 hectares, with 250 kilometres of galleries. It is proposed to store 70,000 cubic metres of intermediate-level waste and 10,000 cubic metres of long-lived high-level vitrified waste. The French nuclear energy industry produces around 13,000 cubic metres of toxic radioactive waste every year. The project was initially estimated to cost between €13.5 and €16.5 billion in 2005. In 2009 costs were re-estimated at €36 billion. In 2012 ANDRA revised costs to €34.4 billion, including taxes and operational costs for 100 years; however, EDF and the CEA estimated €20 billion. The French government budgeted €25 billion in 2016. Retrievability French law stipulates that for the first few hundred years the stored material must be safely retrievable, in case future generations find it useful. The storage facility is therefore being designed for this purpose. 
Protests Several groups have opposed the building of the waste storage facility, including Burestop 55, Bure Zone Libre and EODRA (Élus opposés à l'enfouissement des déchets radioactifs). A Maison de la Résistance (House of Resistance) was set up by anti-nuclear activists in the centre of Bure in 2004. The forest of Mandres-en-Barrois, the site of proposed air vents for the expanded site, was occupied in 2015. It became a ZAD (Zone to Defend) before the occupiers were evicted in 2018. See also Mont Terri Rock Laboratory (swisstopo, Saint-Ursanne, CH) Grimsel Test Site (GTS, Rock Laboratory in granite, CH) HADES Underground Research Laboratory (SCK CEN, Mol, BE) Bedretto Underground Laboratory for Geoenergies (ETH Zurich, CH) References External links Radioactive waste repositories Underground laboratories Nuclear research institutes Nuclear research institutes in France Laboratories in France
Meuse/Haute Marne Underground Research Laboratory
Engineering
901
45,621
https://en.wikipedia.org/wiki/Psychopharmacology
Psychopharmacology (from Greek ψυχή psykhē 'breath, soul'; φάρμακον pharmakon 'drug'; and -λογία -logia 'study of') is the scientific study of the effects drugs have on mood, sensation, thinking, behavior, judgment and evaluation, and memory. It is distinguished from neuropsychopharmacology, which emphasizes the correlation between drug-induced changes in the functioning of cells in the nervous system and changes in consciousness and behavior. The field of psychopharmacology studies a wide range of substances with various types of psychoactive properties, focusing primarily on the chemical interactions with the brain. The term "psychopharmacology" was likely first coined by David Macht in 1920. Psychoactive drugs interact with particular target sites or receptors found in the nervous system to induce widespread changes in physiological or psychological functions. The specific interaction between drugs and their receptors is referred to as "drug action", and the widespread changes in physiological or psychological function are referred to as "drug effect". These drugs may originate from natural sources such as plants and animals, or from artificial sources such as chemical synthesis in the laboratory. Historical overview Early psychopharmacology Not often mentioned or included in the field of psychopharmacology today are psychoactive substances not identified as useful in modern mental health settings or references. These substances are naturally occurring, but nonetheless psychoactive, and are compounds identified through the work of ethnobotanists and ethnomycologists (and others who study the native use of naturally occurring psychoactive drugs). However, although these substances have been used throughout history by various cultures, and have a profound effect on mentality and brain function, they have not always attained the degree of scrutinous evaluation that lab-made compounds have. Nevertheless, some, such as psilocybin and mescaline, have provided a basis of study for the compounds that are used and examined in the field today. Hunter-gatherer societies tended to favor hallucinogens, and today their use can still be observed in many surviving tribal cultures. The exact drug used depends on what the particular ecosystem a given tribe lives in can support, and these drugs are typically found growing wild. Such drugs include various psychoactive mushrooms containing psilocybin or muscimol and cacti containing mescaline and other chemicals, along with myriad other plants containing psychoactive chemicals. These societies generally attach spiritual significance to such drug use, and often incorporate it into their religious practices. With the dawn of the Neolithic and the proliferation of agriculture, new psychoactives came into use as a natural by-product of farming. Among them were opium, cannabis, and alcohol derived from the fermentation of cereals and fruits. Most societies began developing herblores, lists of herbs which were good for treating various physical and mental ailments. For example, St. John's wort was traditionally prescribed in parts of Europe for depression (in addition to use as a general-purpose tea), and Chinese medicine developed elaborate lists of herbs and preparations. These and various other substances that have an effect on the brain are still used as remedies in many cultures. Modern psychopharmacology The dawn of contemporary psychopharmacology marked the beginning of the use of psychiatric drugs to treat psychological illnesses. 
It brought with it the use of opiates and barbiturates for the management of acute behavioral issues in patients. In the early stages, psychopharmacology was primarily used for sedation. With the 1950s came the establishment of lithium for mania, chlorpromazine for psychoses, and then, in rapid succession, the development of tricyclic antidepressants, monoamine oxidase inhibitors, and benzodiazepines, among other antipsychotics and antidepressants. A defining feature of this era was an evolution of research methods, with the establishment of placebo-controlled, double-blind studies and the development of methods for analyzing blood levels with respect to clinical outcome, as well as increased sophistication in clinical trials. The early 1960s revealed a revolutionary model by Julius Axelrod describing nerve signals and synaptic transmission, which was followed by a drastic increase in biochemical brain research into the effects of psychotropic agents on brain chemistry. After the 1960s, the field of psychiatry shifted to incorporate the indications for and efficacy of pharmacological treatments, and began to focus on the use and toxicities of these medications. The 1970s and 1980s were further marked by a better understanding of the synaptic aspects of the action mechanisms of drugs. However, this model has its critics, too – notably Joanna Moncrieff and the Critical Psychiatry Network. Chemical signaling Neurotransmitters Psychoactive drugs exert their sensory and behavioral effects almost entirely by acting on neurotransmitters and by modifying one or more aspects of synaptic transmission. Neurotransmitters can be viewed as chemicals through which neurons primarily communicate; psychoactive drugs affect the mind by altering this communication. Drugs may act by 1) serving as a precursor to a neurotransmitter; 2) inhibiting neurotransmitter synthesis; 3) preventing storage of neurotransmitters in the presynaptic vesicle; 4) stimulating or inhibiting neurotransmitter release; 5) stimulating or blocking post-synaptic receptors; 6) stimulating autoreceptors, inhibiting neurotransmitter release; 7) blocking autoreceptors, increasing neurotransmitter release; 8) inhibiting neurotransmitter breakdown; or 9) blocking neurotransmitter reuptake by the presynaptic neuron. Hormones The other central method through which drugs act is by affecting communications between cells through hormones. Neurotransmitters can usually only travel a microscopic distance before reaching their target at the other side of the synaptic cleft, while hormones can travel long distances before reaching target cells anywhere in the body. Thus, the endocrine system is a critical focus of psychopharmacology because 1) drugs can alter the secretion of many hormones; 2) hormones may alter the behavioral responses to drugs; 3) hormones themselves sometimes have psychoactive properties; and 4) the secretion of some hormones, especially those dependent on the pituitary gland, is controlled by neurotransmitter systems in the brain. Psychopharmacological substances Alcohol Alcohol is a depressant, the effects of which may vary according to dosage amount, frequency, and chronicity. As a member of the sedative-hypnotic class, at the lowest doses, the individual feels relaxed and less anxious. In quiet settings, the user may feel drowsy, but in settings with increased sensory stimulation, individuals may feel uninhibited and more confident. 
High doses of alcohol rapidly consumed may produce amnesia for the events that occur during intoxication. Other effects include reduced coordination, which leads to slurred speech, impaired fine-motor skills, and delayed reaction time. The effects of alcohol on the body's neurochemistry are more difficult to examine than those of some other drugs. This is because the chemical nature of the substance makes it easy for it to penetrate the brain, where it also influences the phospholipid bilayer of neurons. This allows alcohol to have a widespread impact on many normal cell functions and modifies the actions of several neurotransmitter systems. Alcohol inhibits glutamate (a major excitatory neurotransmitter in the nervous system) neurotransmission by reducing its effectiveness at the NMDA receptor, which is related to the memory loss associated with intoxication. It also modulates the function of GABA, a major inhibitory amino acid neurotransmitter. Abuse of alcohol has also been correlated with thiamine deficiencies within the brain, leading to lasting neurological conditions that primarily affect the brain's ability to store memories effectively. One such neurological condition is called Korsakoff's syndrome, for which very few effective treatment modalities have been found. The reinforcing qualities of alcohol leading to repeated use – and thus also the mechanisms of withdrawal from chronic alcohol use – are partially due to the substance's action on the dopamine system. This is also due to alcohol's effect on the opioid systems, or endorphins, which have opiate-like effects, such as modulating pain, mood, feeding, reinforcement, and response to stress. Antidepressants Antidepressants reduce symptoms of mood disorders primarily through the regulation of norepinephrine and serotonin (particularly the 5-HT receptors). After chronic use, neurons adapt to the change in biochemistry, resulting in a change in pre- and postsynaptic receptor density and second messenger function. The monoamine theory of depression and anxiety states that the disruption of the activity of nitrogen-containing neurotransmitters (i.e., serotonin, norepinephrine, and dopamine) is strongly correlated with the presence of depressive symptoms. Despite its longstanding prominence in pharmaceutical advertising, the myth that low serotonin levels cause depression is not supported by scientific evidence. Monoamine oxidase inhibitors (MAOIs) are the oldest class of antidepressants. They inhibit monoamine oxidase, the enzyme that metabolizes the monoamine neurotransmitters in the presynaptic terminals that are not contained in protective synaptic vesicles. The inhibition of the enzyme increases the amount of neurotransmitter available for release. It increases norepinephrine, dopamine, and 5-HT, thus increasing the action of the transmitters at their receptors. MAOIs have been somewhat disfavored because of their reputation for more serious side effects. Tricyclic antidepressants (TCAs) work through binding to the presynaptic transporter proteins and blocking the reuptake of norepinephrine or 5-HT into the presynaptic terminal, prolonging the duration of transmitter action at the synapse. Selective serotonin reuptake inhibitors (SSRIs) selectively block the reuptake of serotonin (5-HT) through their inhibiting effects on the sodium/potassium ATP-dependent serotonin transporter in presynaptic neurons. This increases the availability of 5-HT in the synaptic cleft. 
The main parameters to consider in choosing an antidepressant are side effects and safety. Most SSRIs are available generically and are relatively inexpensive. Older antidepressants such as TCAs and MAOIs usually require more visits and monitoring, which may offset the low expense of the drugs. SSRIs are relatively safe in overdose and better tolerated than TCAs and MAOIs by most patients. Antipsychotics All proven antipsychotics are postsynaptic dopamine receptor blockers (dopamine antagonists). For an antipsychotic to be effective, it generally requires antagonism of 60%–80% of dopamine D2 receptors. First generation (typical) antipsychotics: Traditional neuroleptics modify several neurotransmitter systems, but their clinical effectiveness is most likely due to their ability to antagonize dopamine transmission by competitively blocking the receptors or by inhibiting dopamine release. The most serious and troublesome side effects of these classical antipsychotics are movement disorders that resemble the symptoms of Parkinson's disease, because the neuroleptics antagonize dopamine receptors broadly, also reducing the normal dopamine-mediated inhibition of cholinergic cells in the striatum. Second-generation (atypical) antipsychotics: The concept of "atypicality" derives from the finding that second-generation antipsychotics (SGAs) have a greater serotonin/dopamine ratio than earlier drugs, and might be associated with improved efficacy (particularly for the negative symptoms of psychosis) and reduced extrapyramidal side effects. Some of the efficacy of atypical antipsychotics may be due to 5-HT2 antagonism or the blockade of other dopamine receptors. Agents that purely block 5-HT2 or dopamine receptors other than D2 have often failed as effective antipsychotics. Benzodiazepines Benzodiazepines are often used to reduce anxiety symptoms, muscle tension, seizure disorders, insomnia, symptoms of alcohol withdrawal, and panic attack symptoms. Their action is primarily on specific benzodiazepine sites on the GABAA receptor. This receptor complex is thought to mediate the anxiolytic, sedative, and anticonvulsant actions of the benzodiazepines. Use of benzodiazepines carries the risk of tolerance (necessitating increased dosage), dependence, and abuse. Taking these drugs for a long period of time can lead to severe withdrawal symptoms upon abrupt discontinuation. Hallucinogens Classical serotonergic psychedelics Psychedelics cause perceptual and cognitive distortions without delirium. The state of intoxication is often called a "trip". Onset is the first stage after an individual ingests the substance (LSD, psilocybin, ayahuasca, and mescaline) or smokes it (dimethyltryptamine). This stage may consist of visual effects, with an intensification of colors and the appearance of geometric patterns that can be seen with one's eyes closed. This is followed by a plateau phase, where the subjective sense of time begins to slow and the visual effects increase in intensity. The user may experience synesthesia, a crossing-over of sensations (for example, one may "see" sounds and "hear" colors). These outward sensory effects have been referred to as the "mystical experience", and current research suggests that this state could be beneficial to the treatment of some mental illnesses, such as depression and possibly addiction. In some patients who have seen a lack of improvement from the use of antidepressants, serotonergic hallucinogens have been observed to be rather effective in treatment. 
In addition to the sensory-perceptual effects, hallucinogenic substances may induce feelings of depersonalization, emotional shifts to a euphoric or anxious/fearful state, and a disruption of logical thought. Hallucinogens are classified chemically as either indolamines (specifically tryptamines), sharing a common structure with serotonin, or as phenethylamines, which share a common structure with norepinephrine. Both classes of these drugs are agonists at the 5-HT2 receptors; this is thought to be the central component of their hallucinogenic properties. Activation of 5-HT2A may be particularly important for hallucinogenic activity. However, repeated exposure to hallucinogens leads to rapid tolerance, likely through down-regulation of these receptors in specific target cells. Research suggests that hallucinogens affect many of these receptor sites around the brain and that, through these interactions, hallucinogenic substances may be capable of inducing positive introspective experiences. Current research implies that many of the observed effects occur in the occipital lobe and the frontomedial cortex; however, these substances also produce many secondary global effects in the brain that have not yet been connected to their biochemical mechanism of action. Dissociative hallucinogens Another class of hallucinogens, known as dissociatives, includes drugs such as ketamine, phencyclidine (PCP), and Salvia divinorum. Drugs such as these are thought to interact predominantly with glutamate receptors within the brain. Specifically, ketamine is thought to block NMDA receptors that are responsible for signalling in the glutamate pathways. Ketamine's more tranquilizing effects appear to arise in the central nervous system through interactions with parts of the thalamus, inhibiting certain of its functions. Ketamine has become a major drug of research for the treatment of depression. These antidepressant effects are thought to be related to the drug's action on the glutamate receptor system and the relative spike in glutamate levels, as well as its interaction with mTOR, a kinase involved in regulating cell growth and protein synthesis in the human body. Phencyclidine's biochemical properties are still mostly unknown; however, its use has been associated with dissociation, hallucinations, and in some cases seizures and death. Salvia divinorum, a plant native to Mexico, has strong dissociative and hallucinogenic properties when the dry leaves are smoked or chewed. The qualitative value of these effects, whether negative or positive, has been observed to vary between individuals, with many other factors to consider. Hypnotics Hypnotics are often used to treat the symptoms of insomnia or other sleep disorders. Benzodiazepines are still among the most widely prescribed sedative-hypnotics in the United States today. Certain non-benzodiazepine drugs are used as hypnotics as well. Although they lack the chemical structure of the benzodiazepines, their sedative effect similarly arises through action on the GABAA receptor. They also have a reputation of being less addictive than benzodiazepines. Melatonin, a naturally occurring hormone, is often used over the counter (OTC) to treat insomnia and jet lag. This hormone appears to be secreted by the pineal gland early during the sleep cycle and may contribute to human circadian rhythms. Because OTC melatonin supplements are not subject to careful and consistent manufacturing, more specific melatonin agonists are sometimes preferred. 
They are used for their action on melatonin receptors in the suprachiasmatic nucleus, responsible for sleep-wake cycles. Many barbiturates have or had an FDA-approved indication for use as sedative-hypnotics, but have become less widely used because of their limited safety margin in overdose, their potential for dependence, and the degree of central nervous system depression they induce. The amino acid L-tryptophan is also available OTC, and seems to be free of dependence or abuse liability. However, it is not as powerful as the traditional hypnotics. Because of the possible role of serotonin in sleep patterns, a new generation of 5-HT2 antagonists is in development as hypnotics. Cannabis and the cannabinoids Cannabis consumption produces a dose-dependent state of intoxication in humans. There is commonly increased blood flow to the skin, which leads to sensations of warmth or flushing, as well as an increased heart rate. It also frequently induces increased hunger. Iversen (2000) categorized the subjective and behavioral effects often associated with cannabis into three stages. The first is the "buzz", a brief period of initial responding where the main effects are lightheadedness or slight dizziness, in addition to possible tingling sensations in the extremities or other parts of the body. The "high" is characterized by feelings of euphoria and exhilaration accompanied by mild psychedelia, as well as a sense of disinhibition. If the individual has taken a sufficiently large dose of cannabis, the level of intoxication progresses to the stage of being "stoned", and the user may feel calm, relaxed, and possibly in a dreamlike state. Sensory reactions may include the feeling of floating, enhanced visual and auditory perception, visual illusions, or the perception of the slowing of time passage, which are somewhat psychedelic in nature. There exist two primary CNS cannabinoid receptors, on which marijuana and the cannabinoids act. Both the CB1 and CB2 receptors are found in the brain. The CB2 receptor is also found in the immune system. CB1 is expressed at high densities in the basal ganglia, cerebellum, hippocampus, and cerebral cortex. Receptor activation can inhibit cAMP formation, inhibit voltage-sensitive calcium ion channels, and activate potassium ion channels. Many CB1 receptors are located on axon terminals, where they act to inhibit the release of various neurotransmitters. In combination, these chemical actions work to alter various functions of the central nervous system, including the motor system, memory, and various cognitive processes. Opioids The opioid category of drugs – including drugs such as heroin, morphine, and oxycodone – belongs to the class of narcotic analgesics, which reduce pain without producing unconsciousness but do produce a sense of relaxation and sleep, and at high doses may result in coma and death. The ability of opioids (both endogenous and exogenous) to relieve pain depends on a complex set of neuronal pathways at the spinal cord level, as well as various locations above the spinal cord. Small endorphin neurons in the spinal cord act on receptors to decrease the conduction of pain signals from the spinal cord to higher brain centers. Descending neurons originating in the periaqueductal gray give rise to two pathways that further block pain signals in the spinal cord. The pathways begin in the locus coeruleus (noradrenaline) and the raphe nuclei (serotonin). 
Similar to other abused substances, opioid drugs increase dopamine release in the nucleus accumbens. Opioids produce physical dependence more readily, and more severely, than other classes of psychoactive drugs, and can lead to painful withdrawal symptoms if discontinued abruptly after regular use. Stimulants Cocaine is one of the more common stimulants and is a complex drug that interacts with various neurotransmitter systems. It commonly causes heightened alertness, increased confidence, feelings of exhilaration, reduced fatigue, and a generalized sense of well-being. The effects of cocaine are similar to those of amphetamines, though cocaine tends to have a shorter duration of effect. In high doses or with prolonged use, cocaine can result in a number of negative effects, including irritability, anxiety, exhaustion, total insomnia, and even psychotic symptomatology. Most of the behavioral and physiological actions of cocaine can be explained by its ability to block the reuptake of the two catecholamines, dopamine and norepinephrine, as well as serotonin. Cocaine binds to the transporters that normally clear these transmitters from the synaptic cleft, inhibiting their function. This leads to increased levels of neurotransmitter in the cleft and increased transmission at the synapses. Based on in-vitro studies using rat brain tissue, cocaine binds most strongly to the serotonin transporter, followed by the dopamine transporter, and then the norepinephrine transporter. Amphetamines tend to cause the same behavioral and subjective effects as cocaine. Various forms of amphetamine are commonly used to treat the symptoms of attention deficit hyperactivity disorder (ADHD) and narcolepsy, or are used recreationally. Amphetamine and methamphetamine are indirect agonists of the catecholaminergic systems. They block catecholamine reuptake, in addition to releasing catecholamines from nerve terminals. There is evidence that dopamine receptors play a central role in the behavioral responses of animals to cocaine, amphetamines, and other psychostimulant drugs. One action causes dopamine molecules to be released from inside their vesicles into the cytoplasm of the nerve terminal, from which they are then transported out into the synapse; in the mesolimbic dopamine pathway, this raises dopamine levels in the nucleus accumbens. This plays a key role in the rewarding and reinforcing effects of cocaine and amphetamine in animals, and is the primary mechanism for amphetamine dependence. Psychopharmacological research In psychopharmacology, researchers are interested in any substance that crosses the blood–brain barrier and thus has an effect on behavior, mood, or cognition. Drugs are researched for their physiochemical properties, physical side effects, and psychological side effects. Researchers in psychopharmacology study a variety of different psychoactive substances, including alcohol, cannabinoids, club drugs, psychedelics, opiates, nicotine, caffeine, psychomotor stimulants, inhalants, and anabolic–androgenic steroids. They also study drugs used in the treatment of affective and anxiety disorders, as well as schizophrenia. Clinical studies are often very specific, typically beginning with animal testing and ending with human testing. In the human testing phase, there are often groups of subjects: one group is given a placebo, and another is administered a carefully measured therapeutic dose of the drug in question. After all of the testing is completed, the drug is proposed to the concerned regulatory authority (e.g. the U.S. 
FDA), and is either commercially introduced to the public via prescription or deemed safe enough for over-the-counter sale. Though particular drugs are prescribed for specific symptoms or syndromes, they are usually not specific to the treatment of any single mental disorder. A somewhat controversial application of psychopharmacology is "cosmetic psychiatry": persons who do not meet criteria for any psychiatric disorder are nevertheless prescribed psychotropic medication. The antidepressant bupropion, for example, may be prescribed to increase perceived energy levels and assertiveness while diminishing the need for sleep. The antihypertensive compound propranolol is sometimes chosen to eliminate the discomfort of day-to-day anxiety. Fluoxetine in nondepressed people can produce a feeling of generalized well-being. Pramipexole, a treatment for restless legs syndrome, can dramatically increase libido in women. These and other off-label lifestyle applications of medications are not uncommon. Although occasionally reported in the medical literature, no guidelines for such usage have been developed. There is also a potential for the misuse of prescription psychoactive drugs by elderly persons, who may have multiple drug prescriptions. See also Pharmacology Neuropharmacology Neuropsychopharmacology Psychiatry History of pharmacy Mental health Recreational drug use Nathan S. Kline Prescriptive authority for psychologists movement References Further reading , an introductory text with detailed examples of treatment protocols and problems. , a general historical analysis. Peer-reviewed journals Experimental and Clinical Psychopharmacology, American Psychological Association Journal of Clinical Psychopharmacology, Lippincott Williams & Wilkins Journal of Psychopharmacology, British Association for Psychopharmacology, SAGE Publications Psychopharmacology, Springer Berlin/Heidelberg External links Psychopharmacology: The Fourth Generation of Progress — American College of Neuropsychopharmacology (ACNP) Bibliographical history of Psychopharmacology and Pharmacopsychology — Advances in the History of Psychology, York University Monograph Psychopharmacology Today British Association for Psychopharmacology (BAP) Psychopharmacology Institute: Video lectures and tutorials on psychotropic medications. Neuropharmacology
Psychopharmacology
Chemistry
5,676
58,767,235
https://en.wikipedia.org/wiki/Denis%20Osin
Denis Osin is a mathematician at Vanderbilt University working in geometric group theory and geometric topology. Career Osin received a PhD at Moscow State University in 1999 under the supervision of Aleksandr Olshansky. He worked at the Financial University under the Government of the Russian Federation and at the City College of CUNY, and joined Vanderbilt in 2008. He was promoted to full professor in 2013. He is an editor at Groups, Geometry, and Dynamics. Recognition He was a speaker at the International Congress of Mathematicians in Rio de Janeiro in 2018. He was named to the 2021 class of fellows of the American Mathematical Society "for contributions in geometric group theory, specifically groups acting on hyperbolic spaces". References External links Denis Osin Home Page Living people Group theorists Topologists Moscow State University alumni Vanderbilt University faculty 21st-century Russian mathematicians Year of birth missing (living people) Fellows of the American Mathematical Society
Denis Osin
Mathematics
180
41,210,630
https://en.wikipedia.org/wiki/Tormentic%20acid
Tormentic acid is a chemical compound isolated from Luehea divaricata and Agrimonia eupatoria. Tormentic acid derivatives have been synthesized and studied. References Grewioideae Rosoideae Triterpenes
Tormentic acid
Chemistry
49
61,498,907
https://en.wikipedia.org/wiki/C13H21N5O2
The molecular formula C13H21N5O2 (molar mass: 279.34 g/mol, exact mass: 279.1695 u) may refer to: Etamiphylline Tezampanel
C13H21N5O2
Chemistry
63
21,538,638
https://en.wikipedia.org/wiki/2012%20phenomenon
The 2012 phenomenon was a range of eschatological beliefs that cataclysmic or transformative events would occur on or around 21 December 2012. This date was regarded as the end-date of a 5,126-year-long cycle in the Mesoamerican Long Count calendar, and festivities took place on 21 December 2012 to commemorate the event in the countries that were part of the Maya civilization (Mexico, Guatemala, Honduras, and El Salvador), with main events at Chichén Itzá in Mexico and Tikal in Guatemala. Various astronomical alignments and numerological formulae were proposed for this date. A New Age interpretation held that the date marked the start of a period during which Earth and its inhabitants would undergo a positive physical or spiritual transformation, and that 21 December 2012 would mark the beginning of a new era. Others suggested that the date marked the end of the world or a similar catastrophe. Scenarios suggested for the end of the world included the arrival of the next solar maximum; an interaction between Earth and Sagittarius A*, a supermassive black hole at the center of the galaxy; the Nibiru cataclysm, in which Earth would collide with a mythical planet called Nibiru; or even the heating of Earth's core. Scholars from various disciplines quickly dismissed predictions of cataclysmic events as they arose. Mayan scholars stated that no classic Mayan accounts forecast impending doom, and the idea that the Long Count calendar ends in 2012 misrepresented Mayan history and culture. Astronomers rejected the various proposed doomsday scenarios as pseudoscience, easily refuted by elementary astronomical observations. Mesoamerican Long Count calendar December 2012 marked the conclusion of a bʼakʼtun—a time period in the Mesoamerican Long Count calendar, used in Mesoamerica prior to the arrival of Europeans. Although the Long Count was most likely invented by the Olmec, it has become closely associated with the Maya civilization, whose classic period lasted from 250 to 900 AD. The writing system of the classic Maya has been substantially deciphered, and a corpus of their written and inscribed material survives from before the Spanish conquest of Yucatán. Unlike the 260-day tzolkʼin still used today among the Maya, the Long Count was linear rather than cyclical, and kept time roughly in units of 20: 20 days made a uinal, 18 uinals (360 days) made a tun, 20 tuns made a kʼatun, and 20 kʼatuns (144,000 days or roughly 394 years) made up a bʼakʼtun. Thus, the Maya date of 8.3.2.10.15 represents 8 bʼakʼtuns, 3 kʼatuns, 2 tuns, 10 uinals and 15 days. Apocalypse There is a strong tradition of "world ages" in Maya literature, but the record has been distorted, leaving several possibilities open to interpretation. According to the Popol Vuh, a compilation of the creation accounts of the Kʼicheʼ Maya of the Colonial-era highlands, the current world is the fourth. The Popol Vuh describes the gods first creating three failed worlds, followed by a successful fourth world in which humanity was placed. In the Maya Long Count, the previous world ended after 13 bʼakʼtuns, or roughly 5,125 years. The Long Count's "zero date" was set at a point in the past marking the end of the third world and the beginning of the current one, which corresponds to 11 August 3114 BC in the proleptic Gregorian calendar. This means that the fourth world reached the end of its 13th bʼakʼtun, or Maya date 13.0.0.0.0, on 21 December 2012. 
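The positional arithmetic just described is easy to verify computationally. The following Python sketch (an illustration, not part of any Maya source) converts a Long Count date into elapsed days using the place values given above, and then into a proleptic Gregorian date using the commonly cited Goodman–Martinez–Thompson (GMT) correlation constant of 584,283, which places the Long Count zero date at 11 August 3114 BC; the correlation constant is a scholarly convention, not something recorded in the inscriptions.

from datetime import date, timedelta

# Place values in days for b'ak'tun, k'atun, tun, uinal, k'in.
# Note the mixed radix: 18 uinals (not 20) make one tun.
PLACE_VALUES = (144000, 7200, 360, 20, 1)

def long_count_to_days(lc):
    """Days elapsed since the Long Count zero date."""
    return sum(digit * value for digit, value in zip(lc, PLACE_VALUES))

# GMT correlation: the zero date falls on Julian Day Number 584,283.
GMT_CORRELATION = 584283
JDN_2000_01_01 = 2451545  # Julian Day Number of 1 January 2000

def long_count_to_gregorian(lc):
    jdn = GMT_CORRELATION + long_count_to_days(lc)
    return date(2000, 1, 1) + timedelta(days=jdn - JDN_2000_01_01)

print(long_count_to_days((8, 3, 2, 10, 15)))      # 1174535 days
print(long_count_to_gregorian((13, 0, 0, 0, 0)))  # 2012-12-21

Running the sketch reproduces both the worked example of 8.3.2.10.15 and the identification of 13.0.0.0.0 with 21 December 2012.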
In 1957, Mayanist and astronomer Maud Worcester Makemson wrote that "the completion of a Great Period of 13 bʼakʼtuns would have been of the utmost significance to the Maya." In 1966, Michael D. Coe wrote in The Maya that "there is a suggestion ... that Armageddon would overtake the degenerate peoples of the world and all creation on the final day of the 13th [bʼakʼtun]. Thus ... our present universe [would] be annihilated ... when the Great Cycle of the Long Count reaches completion." Objections Coe's interpretation was repeated by other scholars through the early 1990s. In contrast, later researchers said that, while the end of the 13th bʼakʼtun would perhaps be a cause for celebration, it did not mark the end of the calendar. "There is nothing in the Maya or Aztec or ancient Mesoamerican prophecy to suggest that they prophesied a sudden or major change of any sort in 2012," said Mayanist scholar Mark Van Stone. "The notion of a 'Great Cycle' coming to an end is completely a modern invention." In 1990, Mayanist scholars Linda Schele and David Freidel argued that the Maya "did not conceive this to be the end of creation, as many have suggested". Susan Milbrath, curator of Latin American Art and Archaeology at the Florida Museum of Natural History, stated that, "We have no record or knowledge that [the Maya] would think the world would come to an end" in 2012. Sandra Noble, executive director of the Foundation for the Advancement of Mesoamerican Studies, said, "For the ancient Maya, it was a huge celebration to make it to the end of a whole cycle," and, "The 2012 phenomenon is a complete fabrication and a chance for a lot of people to cash in." "There will be another cycle," said E. Wyllys Andrews V, director of the Tulane University Middle American Research Institute. "We know the Maya thought there was one before this, and that implies they were comfortable with the idea of another one after this." Commenting on the new calendar found at Xultún, one archaeologist said "The ancient Maya predicted the world would continue—that 7,000 years from now, things would be exactly like this. We keep looking for endings. The Maya were looking for a guarantee that nothing would change. It's an entirely different mindset." Several prominent individuals representing Maya of Guatemala decried the suggestion that the world would end with the 13th bʼakʼtun. Ricardo Cajas, president of the Colectivo de Organizaciones Indígenas de Guatemala, said the date did not represent an end of humanity but that the new cycle "supposes changes in human consciousness". Martín Sacalxot, of the office of Guatemala's Human Rights Ombudsman (Procurador de los Derechos Humanos), said that the end of the calendar has nothing to do with the end of the world or the year 2012. Prior associations The European association of the Maya with eschatology dates back to the time of Christopher Columbus, who was compiling a work called Libro de las profecías during the voyage in 1502 when he first heard about the "Maia" on Guanaja, an island off the north coast of Honduras. Influenced by the writings of Bishop Pierre d'Ailly, Columbus believed that his discovery of "most distant" lands (and, by extension, the Maya themselves) was prophesied and would bring about the Apocalypse. End-times fears were widespread during the early years of the Spanish Conquest as the result of popular astrological predictions in Europe of a second Great Flood for the year 1524. 
In the early 1900s, German scholar Ernst Förstemann interpreted the last page of the Dresden Codex as a representation of the end of the world in a cataclysmic flood. He made reference to the destruction of the world and an apocalypse, though he made no reference to the 13th bʼakʼtun or 2012 and it was not clear that he was referring to a future event. His ideas were repeated by archaeologist Sylvanus Morley, who directly paraphrased Förstemann and added his own embellishments, writing, "Finally, on the last page of the manuscript, is depicted the Destruction of the World ... Here, indeed, is portrayed with a graphic touch the final all-engulfing cataclysm" in the form of a great flood. These comments were later repeated in Morley's book, The Ancient Maya, the first edition of which was published in 1946. Maya references to bʼakʼtun 13 It is not certain what significance the classic Maya gave to the 13th bʼakʼtun. Most classic Maya inscriptions are strictly historical and do not make any prophetic declarations. Two items in the Maya classical corpus do mention the end of the 13th bʼakʼtun: Tortuguero Monument 6 and La Corona Hieroglyphic Stairway 12. Tortuguero The Tortuguero site, which lies in southernmost Tabasco, Mexico, dates from the 7th century AD and consists of a series of inscriptions mostly in honor of the contemporary ruler Bahlam Ajaw. One inscription, known as Tortuguero Monument 6, is the only inscription known to refer to bʼakʼtun 13 in any detail. It has been partially defaced; Sven Gronemeyer and Barbara MacLeod have given this translation of the Mayan text "tzuhtzjo꞉m uy-u꞉xlaju꞉n pik / chan ajaw u꞉x uni꞉w / uhto꞉m il[?] / yeʼni/ye꞉n bolon yokte' / ta chak joyaj": "It will be completed the 13th bʼakʼtun. It is 4 Ajaw 3 Kʼankʼin and it will happen a 'seeing'[?]. It is the display of Bʼolon-Yokteʼ in a great 'investiture'." Very little is known about the god Bʼolon Yokteʼ. According to an article by Mayanists Markus Eberl and Christian Prager in British Anthropological Reports, his name is composed of the elements "nine", ʼOK-teʼ (the meaning of which is unknown), and "god". Confusion in classical period inscriptions suggests that the name was already ancient and unfamiliar to contemporary scribes. He also appears in inscriptions from Palenque, Usumacinta, and La Mar as a god of war, conflict, and the underworld. In one stela he is portrayed with a rope tied around his neck, and in another with an incense bag, together signifying a sacrifice to end a cycle of years. Based on observations of modern Maya rituals, Gronemeyer and MacLeod claim that the stela refers to a celebration in which a person portraying Bolon Yokteʼ Kʼuh was wrapped in ceremonial garments and paraded around the site. They note that the association of Bolon Yokteʼ Kʼuh with bʼakʼtun 13 appears to be so important on this inscription that it supersedes more typical celebrations such as "erection of stelae, scattering of incense" and so forth. Furthermore, they assert that this event was indeed planned for 2012 and not the 7th century. Mayanist scholar Stephen Houston contests this view by arguing that future dates on Maya inscriptions were simply meant to draw parallels with contemporary events, and that the words on the stela describe a contemporary rather than a future scene. La Corona In April–May 2012, a team of archaeologists unearthed a previously unknown inscription on a stairway at the La Corona site in Guatemala.
The inscription, on what is known as Hieroglyphic Stairway 12, describes the establishment of a royal court in Calakmul in 635 AD, and compares the then-recent completion of 13 kʼatuns with the future completion of the 13th bʼakʼtun. It contains no speculation or prophecy as to what the scribes believed would happen at that time. Dates beyond bʼakʼtun 13 Maya inscriptions occasionally mention predicted future events or commemorations that would occur on dates far beyond the completion of the 13th bʼakʼtun. Most of these are in the form of "distance dates": Long Count dates given together with an additional number, known as a Distance Number, which when added to them yields a future date. On the west panel at the Temple of Inscriptions in Palenque, a section of text projects forward to the 80th 52-year Calendar Round from the coronation of the ruler Kʼinich Janaabʼ Pakal. Pakal's accession occurred on 9.9.2.4.8, equivalent to 27 July 615 AD in the proleptic Gregorian calendar. The inscription begins with Pakal's birthdate of 9.8.9.13.0 (24 March 603 AD) and then adds the Distance Number 10.11.10.5.8 to it, arriving at a date of 21 October 4772 AD, more than 4,000 years after Pakal's time. Another example is Stela 1 at Coba, which writes the date of creation with thirteens filling the nineteen place values above the bʼakʼtun. According to Linda Schele, these 13s represent "the starting point of a huge odometer of time", with each acting as a zero and resetting to 1 as the numbers increase. Thus this inscription anticipates the current universe lasting at least 20²¹ × 13 × 360 days, or roughly 2.687 × 10²⁸ years; a time span equal to 2 quintillion times the age of the universe as determined by cosmologists. Others have suggested that this date marks creation as having occurred after that time span. In 2012, researchers announced the discovery of a series of Maya astronomical tables in Xultún, Guatemala, which plot the movements of the Moon and other astronomical bodies over the course of 17 bʼakʼtuns. New Age beliefs Many assertions about the year 2012 form part of Mayanism, a non-codified collection of New Age beliefs about ancient Maya wisdom and spirituality. The term is distinct from "Mayanist", used to refer to an academic scholar of the Maya (Jenkins 2009: 223–229). Archaeoastronomer Anthony Aveni says that while the idea of "balancing the cosmos" was prominent in ancient Maya literature, the 2012 phenomenon did not draw from those traditions. Instead, it was bound up with American concepts such as the New Age movement, 2012 millenarianism, and the belief in secret knowledge from distant times and places. Themes found in 2012 literature included "suspicion towards mainstream Western culture", the idea of spiritual evolution, and the possibility of leading the world into the New Age by individual example or by a group's joined consciousness. The general intent of this literature was not to warn of impending doom but "to foster counter-cultural sympathies and eventually socio-political and 'spiritual' activism". Aveni, who has studied New Age and search for extraterrestrial intelligence (SETI) communities, describes 2012 narratives as the product of a "disconnected" society: "Unable to find spiritual answers to life's big questions within ourselves, we turn outward to imagined entities that lie far off in space or time—entities that just might be in possession of superior knowledge."
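The distance-date arithmetic described under "Dates beyond bʼakʼtun 13" above can be verified with the same kind of day-count conversion. This is an illustrative sketch only; the dates and the 20²¹ × 13 × 360 figure are the ones quoted above, and the 365.25-day year is a rough conversion.

```python
UNIT_DAYS = (144_000, 7_200, 360, 20, 1)  # bʼakʼtun, kʼatun, tun, uinal, kʼin

def to_days(date):
    """Convert a Long Count date tuple into a total day count."""
    return sum(place * days for place, days in zip(date, UNIT_DAYS))

# Pakal's birth date plus the Distance Number from the Temple of Inscriptions panel:
total = to_days((9, 8, 9, 13, 0)) + to_days((10, 11, 10, 5, 8))
print(total)                    # 2,880,008 days
print(total - 20 * 144_000)     # 8 -> the target date falls eight days after the completion of 20 bʼakʼtuns

# The Coba Stela 1 span quoted above, expressed in years:
print(f"{20**21 * 13 * 360 / 365.25:.3e}")   # about 2.687e+28 years
```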
Origins In 1975, the ending of bʼakʼtun 13 became the subject of speculation by several New Age authors, who asserted it would correspond with a global "transformation of consciousness". In Mexico Mystique: The Coming Sixth Age of Consciousness, Frank Waters tied Coe's original date of 24 December 2011 to astrology and the prophecies of the Hopi, while both José Argüelles (in The Transformative Vision) and Terence McKenna (in The Invisible Landscape) discussed the significance of the year 2012 without mentioning a specific day. Some research suggests that both Argüelles and McKenna were heavily influenced in this regard by the Mayanism of American author William S. Burroughs, who first portrayed the end of the Mayan Long Count as an apocalyptic shift of human consciousness in 1960's The Exterminator. In 1983, with the publication of Robert J. Sharer's revised table of date correlations in the 4th edition of Morley's The Ancient Maya, each became convinced that 21 December 2012 had significant meaning. By 1987, the year in which he organized the Harmonic Convergence event, Argüelles was using the date 21 December 2012 in The Mayan Factor: Path Beyond Technology. He claimed that on 13 August 3113 BC the Earth began a passage through a "galactic synchronization beam" that emanated from the center of our galaxy, that it would pass through this beam during a period of 5200 tuns (Maya cycles of 360 days each), and that this beam would result in "total synchronization" and "galactic entrainment" of individuals "plugged into the Earth's electromagnetic battery" by 13.0.0.0.0 (21 December 2012). He believed that the Maya aligned their calendar to correspond to this phenomenon. Anthony Aveni has dismissed all of these ideas. In 2001, Robert Bast wrote the first online articles regarding the possibility of a doomsday in 2012. In 2006, author Daniel Pinchbeck popularized New Age concepts about this date in his book 2012: The Return of Quetzalcoatl, linking bʼakʼtun 13 to beliefs in crop circles, alien abduction, and personal revelations based on the use of hallucinogenic drugs and mediumship. Pinchbeck claims to discern a "growing realization that materialism and the rational, empirical worldview that comes with it has reached its expiration date ... [w]e're on the verge of transitioning to a dispensation of consciousness that's more intuitive, mystical and shamanic". Galactic alignment There is no significant astronomical event tied to the Long Count's start date. Its supposed end date was tied to astronomical phenomena by esoteric, fringe, and New Age literature that placed great significance on astrology, especially astrological interpretations associated with the phenomenon of axial precession. Chief among these ideas is the astrological concept of a "galactic alignment". Precession In the Solar System, the planets and the Sun lie roughly within the same flat plane, known as the plane of the ecliptic. From our perspective on Earth, the ecliptic is the path taken by the Sun across the sky over the course of the year. The twelve constellations that line the ecliptic are known as the zodiacal constellations, and, annually, the Sun passes through all of them in turn. Additionally, over time, the Sun's annual cycle appears to recede very slowly backward by one degree every 72 years, or by one constellation approximately every 2,160 years. 
This backward movement, called "precession", is due to a slight wobble in the Earth's axis as it spins, and can be compared to the way a spinning top wobbles as it slows down. Over the course of 25,800 years, a period often called a Great Year, the Sun's path completes a full, 360-degree backward rotation through the zodiac. In Western astrological traditions, precession is measured from the March equinox, one of the two annual points at which the Sun is exactly halfway between its lowest and highest points in the sky. At the end of the 20th century and beginning of the 21st, the Sun's March equinox position was in the constellation Pisces, moving back into Aquarius. This signaled the end of one astrological age (the Age of Pisces) and the beginning of another (the Age of Aquarius). Similarly, the Sun's December solstice position (in the northern hemisphere, the lowest point on its annual path; in the southern hemisphere, the highest) was in the constellation of Sagittarius, one of two constellations in which the zodiac intersects with the Milky Way. Every year, on the December solstice, the Sun and the Milky Way appear (from the surface of the Earth) to come into alignment, and every year precession causes a slight shift in the Sun's position in the Milky Way. Given that the Milky Way is between 10° and 20° wide, it takes between 700 and 1,400 years for the Sun's December solstice position to precess through it. In 2012 it was about halfway through the Milky Way, crossing the galactic equator. In 2012, the Sun's December solstice fell on 21 December. Mysticism Mystical speculations about the precession of the equinoxes and the Sun's proximity to the center of the Milky Way appeared in Hamlet's Mill (1969) by Giorgio de Santillana and Hertha von Dechend. These were quoted and expanded upon by Terence and Dennis McKenna in The Invisible Landscape (1975). Adherents to the idea, following a theory first proposed by Munro Edmonson, alleged that the Maya based their calendar on observations of the Great Rift or Dark Rift, a band of dark dust clouds in the Milky Way, which, according to some scholars, the Maya called the Xibalba be or "Black Road". John Major Jenkins claimed that the Maya were aware of where the ecliptic intersected the Black Road and gave this position in the sky a special significance in their cosmology. Jenkins said that precession would align the Sun precisely with the galactic equator at the 2012 winter solstice. Jenkins claimed that the classical Maya anticipated this conjunction and celebrated it as the harbinger of a profound spiritual transition for mankind. New Age proponents of the galactic alignment hypothesis argued that, just as astrology uses the positions of stars and planets to make claims of future events, the Maya plotted their calendars with the objective of preparing for significant world events. Jenkins attributed the insights of ancient Maya shamans about the Galactic Center to their use of psilocybin mushrooms, psychoactive toads, and other psychedelics. Jenkins also associated the Xibalba be with a "world tree", drawing on studies of contemporary (not ancient) Maya cosmology. Criticism Astronomers such as David Morrison argue that the galactic equator is an entirely arbitrary line and can never be precisely drawn, because it is impossible to determine the Milky Way's exact boundaries, which vary depending on clarity of view.
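As a rough arithmetic check of the precession rates quoted above (the one-degree-per-72-years rate is itself an approximation, and these figures are modern estimates rather than Maya data):

```python
YEARS_PER_DEGREE = 72  # approximate precession rate quoted above

print(30 * YEARS_PER_DEGREE)     # 2,160 years per 30-degree zodiacal constellation
print(360 * YEARS_PER_DEGREE)    # 25,920 years for a full cycle, close to the ~25,800-year "Great Year"
print(0.5 * YEARS_PER_DEGREE)    # 36.0 years for the half-degree-wide Sun to precess its own width
print(10 * YEARS_PER_DEGREE,
      20 * YEARS_PER_DEGREE)     # 720 1440 -> roughly 700 to 1,400 years to cross the 10-20 degree wide Milky Way
```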
Jenkins claimed he drew his conclusions about the location of the galactic equator from observations taken at high altitude, which gives a clearer image of the Milky Way than the Maya had access to. Furthermore, since the Sun is half a degree wide, its solstice position takes 36 years to precess its full width. Jenkins himself noted that even given his determined location for the line of the galactic equator, its most precise convergence with the center of the Sun already occurred in 1998, and so asserted that, rather than 2012, the galactic alignment instead focuses on a multi-year period centered in 1998. There is no clear evidence that the classic Maya were aware of precession. Some Maya scholars, such as Barbara MacLeod, Michael Grofe, Eva Hunt, Gordon Brotherston, and Anthony Aveni, have suggested that some Mayan holy dates were timed to precessional cycles, but scholarly opinion on the subject remains divided. There is also little evidence, archaeological or historical, that the Maya placed any importance on solstices or equinoxes. It is possible that only the earliest among Mesoamericans observed solstices, but this is also a disputed issue among Mayanists. There is also no evidence that the classic Maya attached any importance to the Milky Way; there is no glyph in their writing system to represent it, and no astronomical or chronological table tied to it. Timewave zero and the I Ching "Timewave zero" is a numerological formula that purports to calculate the ebb and flow of "novelty", defined as increase over time in the universe's interconnectedness, or organized complexity. Terence McKenna claimed that the universe has a teleological attractor at the end of time that increases interconnectedness. He believed this interconnectedness would eventually reach a singularity of infinite complexity in 2012, at which point anything and everything imaginable would occur simultaneously. He conceived this idea over several years in the early to mid-1970s whilst using psilocybin mushrooms and DMT. The scientific community considers novelty theory to be pseudoscience. McKenna expressed "novelty" in a computer program which produces a waveform known as "timewave zero" or the "timewave". Based on McKenna's interpretation of the King Wen sequence of the I Ching, an ancient Chinese book on divination, the graph purports to show great periods of novelty corresponding with major shifts in humanity's biological and sociocultural evolution. He believed that the events of any given time are resonantly related to the events of other times, and chose the atomic bombing of Hiroshima as the basis for calculating his end date of November 2012. When he later discovered this date's proximity to the end of the 13th bʼakʼtun of the Maya calendar, he revised his hypothesis so that the two dates matched. The 1975 first edition of The Invisible Landscape referred to 2012 (but no specific day during the year) only twice. In the 1993 second edition, McKenna employed Sharer's date of 21 December 2012 throughout. Novelty theory has been criticized for "rejecting countless ideas presumed as factual by the scientific community", depending "solely on numerous controversial deductions that contradict empirical logic", and encompassing "no suitable indication of truth", with the conclusion that novelty theory is a pseudoscience.
Doomsday theories The idea that the year 2012 presaged a world cataclysm, the end of the world, or the end of human civilization, became a subject of popular media speculation as the date of 21 December 2012 approached. This idea was promulgated by many pages on the Internet, particularly on YouTube. The Discovery Channel was criticized for its "quasi-documentaries" about the subject that "sacrifice[d] accuracy for entertainment". Other alignments Some people interpreted the galactic alignment apocalyptically, claiming that its occurrence would somehow create a combined gravitational effect between the Sun and the supermassive black hole at the center of our galaxy (known as Sagittarius A*), creating havoc on Earth. Apart from the "galactic alignment" already having happened in 1998, the Sun's apparent path through the zodiac as seen from Earth did not take it near the true galactic center, but rather several degrees above it. Even were this not the case, Sagittarius A* is 30,000 light years from Earth; it would have to have been more than 6 million times closer to cause any gravitational disruption to Earth's Solar System. This reading of the alignment was included on the History Channel documentary Decoding the Past. John Major Jenkins complained that a science fiction writer co-authored the documentary, and he went on to characterize it as "45 minutes of unabashed doomsday hype and the worst kind of inane sensationalism". Some believers in a 2012 doomsday used the term "galactic alignment" to describe a different phenomenon proposed by some scientists to explain a pattern in mass extinctions supposedly observed in the fossil record. According to the Shiva Hypothesis, mass extinctions are not random, but recur every 26 million years. To account for this, it was suggested that vertical oscillations made by the Sun on its 250-million-year orbit of the galactic center cause it to regularly pass through the galactic plane. When the Sun's orbit takes it outside the galactic plane which bisects the galactic disc, the influence of the galactic tide is weaker. When re-entering the galactic disc—as it does every 20–25 million years—it comes under the influence of the far stronger "disc tides", which, according to mathematical models, increase the flux of Oort cloud comets into the inner Solar System by a factor of 4, thus leading to a massive increase in the likelihood of a devastating comet impact. This "alignment" takes place over tens of millions of years, and could never be timed to an exact date. Evidence shows that the Sun passed through the plane bisecting the galactic disc three million years ago and in 2012 was moving farther above it. A third suggested alignment was some sort of planetary conjunction occurring on 21 December 2012; there was no conjunction on that date. Multi-planet alignments did occur in both 2000 and 2010, each with no ill result for the Earth. Jupiter is the largest planet in the Solar System, being larger than all other planets combined. When Jupiter is near opposition, the difference in gravitational force that the Earth experiences is less than 1% of the force that the Earth feels daily from the Moon. Geomagnetic reversal Another idea tied to 2012 involved a geomagnetic reversal (often referred to as a pole shift by proponents), possibly triggered by a massive solar flare, that would release an energy equal to 100 billion atomic bombs. 
This belief was supposedly supported by observations that the Earth's magnetic field was weakening, which could precede a reversal of the north and south magnetic poles, and the arrival of the next solar maximum, which was expected sometime around 2012. Most scientific estimates say that geomagnetic reversals take between 1,000 and 10,000 years to complete, and do not start on any particular date. The U.S. National Oceanic and Atmospheric Administration predicted that the solar maximum would peak in late 2013 or 2014, and that it would be fairly weak, with a below-average number of sunspots. There was no scientific evidence linking a solar maximum to a geomagnetic reversal, which is driven by forces entirely within the Earth. A solar maximum does affect satellite and cellular phone communications. David Morrison attributed the rise of the solar storm idea to physicist and science popularizer Michio Kaku, who claimed in an interview with Fox News that a solar peak in 2012 could be disastrous for orbiting satellites, and to NASA's headlining a 2006 webpage as "Solar Storm Warning", a term later repeated on several doomsday pages. On 23 July 2012, a massive, potentially damaging, solar storm came within nine days of striking Earth. Planet X/Nibiru Some believers in a 2012 doomsday claimed that a planet called Planet X, or Nibiru, would collide with or pass by the Earth. This idea, which had appeared in various forms since 1995, initially predicted doomsday in May 2003, but proponents abandoned that date after it passed without incident. The idea originated from claims of channeling alien beings and is widely ridiculed. Astronomers calculated that such an object so close to Earth would be visible to anyone looking up at the night sky. Other catastrophes Author Graham Hancock, in his book Fingerprints of the Gods, interpreted Coe's remarks in Breaking the Maya Code as evidence for the prophecy of a global cataclysm. Filmmaker Roland Emmerich later credited the book with inspiring his 2009 disaster film 2012. Other speculations regarding doomsday in 2012 included predictions by the Web Bot project, a computer program that purports to predict the future by analyzing Internet chatter. Commentators have rejected claims that the bot is able to predict natural disasters, as opposed to human-caused disasters like stock market crashes. The 2012 date was also loosely tied to the long-running concept of the photon belt, which predicted a form of interaction between Earth and Alcyone, the largest star of the Pleiades cluster. Critics argued that photons cannot form belts, that the Pleiades, located more than 400 light years away, could have no effect on Earth, and that the Solar System, rather than getting closer to the Pleiades, was in fact moving farther away from it. Some media outlets tied the fact that the red supergiant star Betelgeuse would undergo a supernova at some point in the future to the 2012 phenomenon. While Betelgeuse was certainly in the final stages of its life, and would die as a supernova, there was no way to predict the timing of the event to within 100,000 years. To be a threat to Earth, a supernova would need to be no further than 25 light years from the Solar System. Betelgeuse is roughly 600 light years away, and so its supernova would not affect Earth. In December 2011, NASA's Francis Reddy issued a press release debunking the possibility of a supernova occurring in 2012. Another claim involved alien invasion.
In December 2010, an article, first published in examiner.com and later referenced in the English-language edition of Pravda claimed, citing a Second Digitized Sky Survey photograph as evidence, that SETI had detected three large spacecraft due to arrive at Earth in 2012. Astronomer and debunker Phil Plait noted that by using the small-angle formula, one could determine that if the object in the photo were as large as claimed, it would have had to be closer to Earth than the Moon, which would mean it would already have arrived. In January 2011, Seth Shostak, chief astronomer of SETI, issued a press release debunking the claims. Public reaction The phenomenon spread widely after coming to public notice, particularly on the Internet, and hundreds of thousands of websites made reference to it. "Ask an Astrobiologist", a NASA public outreach website, received over 5,000 questions from the public on the subject from 2007, some asking whether they should kill themselves, their children or their pets. In May 2012, an Ipsos poll of 16,000 adults in 21 countries found that 8 percent had experienced fear or anxiety over the possibility of the world ending in December 2012, while an average of 10 percent agreed with the statement "the Mayan calendar, which some say 'ends' in 2012, marks the end of the world", with responses as high as 20 percent in China, 13 percent in Russia, Turkey, Japan and Korea, and 12 percent in the United States. At least one suicide was directly linked to fear of a 2012 apocalypse, with others anecdotally reported. Jared Lee Loughner, the perpetrator of the 2011 Tucson shooting, followed 2012-related predictions. A panel of scientists questioned on the topic at a plenary session at the Astronomical Society of the Pacific contended that the Internet played a substantial role in allowing this doomsday date to gain more traction than previous similar panics. Europe Beginning in 2000, the small French village of Bugarach, population 189, began receiving visits from "esoterics"—mystic believers who had concluded that the local mountain, Pic de Bugarach, was the ideal location to weather the transformative events of 2012. In 2011, the local mayor, Jean-Pierre Delord, began voicing fears to the international press that the small town would be overwhelmed by an influx of thousands of visitors in 2012, even suggesting he might call in the army. "We've seen a huge rise in visitors", Delord told The Independent in March 2012. "Already this year more than 20,000 people have climbed right to the top, and last year we had 10,000 hikers, which was a significant rise on the previous 12 months. They think Pic de Bugarach is 'un garage à ovnis' [a garage for UFOs]. The villagers are exasperated: the exaggerated importance of something which they see as completely removed from reality is bewildering. After 21 December, this will surely return to normal." In December 2012, the French government placed 100 police and firefighters around both Bugarach and Pic de Bugarach, limiting access to potential visitors. Ultimately, only about 1,000 visitors appeared at the height of the "event". Two raves were foiled, 12 people had to be turned away from the peak, and 5 people were arrested for carrying weapons. Jean-Pierre Delord was criticised by members of the community for failing to take advantage of the media attention and promote the region. 
The Turkish village of Şirince, near Ephesus, expected to receive over 60,000 visitors on 21 December 2012, as New Age mystics believed its "positive energy" would aid in weathering the catastrophe. Only a fraction of that number actually arrived, with a substantial component being police and journalists, and the expected windfall failed to materialise. Similarly, the pyramid-like mountain of Rtanj, in the Serbian Carpathians, attracted attention, due to rumors that it would emit a powerful force shield on the day, protecting those in the vicinity. Hotels around the base were full. In Russia, inmates of a women's prison experienced "a collective mass psychosis" in the weeks leading up to the supposed doomsday, while residents of a factory town near Moscow reportedly emptied a supermarket of matches, candles, food and other supplies. The Minister of Emergency Situations declared in response that according to "methods of monitoring what is occurring on the planet Earth", there would be no apocalypse in December. When asked when the world would end in a press conference, Russian President Vladimir Putin said, "In about 4.5 billion years." In December 2012, Vatican astronomer Rev. José Funes wrote in the Vatican newspaper L'Osservatore Romano that apocalyptic theories around 2012 were "not even worth discussing". Asia and Australia In May 2011, 5,000-7,000 Hmong ethnic people in Dien Bien province, Vietnam held a protest on the grounds that the end of the world was coming, and the Hmong people would be evacuated to their own Hmong country by "supernatural force". The Vietnamese media and government believe that this is a trick of the Hmong ethnic separatist forces. In China, up to a thousand members of the Christian cult Almighty God were arrested after claiming that the end of bʼakʼtun 13 marked the end of the world, and that it was time to overthrow Communism. Shoppers were reported to be hoarding supplies of candles in anticipation of coming darkness, while online retailer Taobao sold tickets to board Noah's Ark to customers. Bookings for wedding ceremonies on 21 December 2012 were saturated in several cities. On 14 December 2012, a man in Henan province attacked and wounded twenty-three children with a knife. Authorities suspected the man had been "influenced" by the prediction of the upcoming apocalypse. Academics in China attributed the widespread belief in the 2012 doomsday in their country to a lack of scientific literacy and a mistrust of the government-controlled media. On 6 December 2012, Australian Prime Minister Julia Gillard delivered a hoax speech for the radio station triple J in which she declared "My dear remaining fellow Australians; the end of the world is coming. Whether the final blow comes from flesh-eating zombies, demonic hell-beasts or from the total triumph of K-Pop, if you know one thing about me it is this—I will always fight for you to the very end." Radio announcer Neil Mitchell described the hoax as "immature" and pondered whether it demeaned her office. Jasper Tsang, president of Hong Kong’s Legislative Council, adjourned the legislature's sitting on 20 December 2012 by announcing that he "would not permit the world to end" as the legislature had to meet again in January 2013, to the laughter of MPs. Mexico and Central America Mesoamerican countries that once formed part of the Maya civilization—Mexico, Guatemala, Honduras, and El Salvador—all organized festivities to commemorate the end of bʼakʼtun 13 at the largest Maya sites. 
On 21 December 2011, the Maya town of Tapachula in Chiapas activated an eight-foot digital clock counting down the days until the end of bʼakʼtun 13. On 21 December 2012, major events took place at Chichén Itzá in Mexico and Tikal in Guatemala. In El Salvador, the largest event was held at Tazumal, and in Honduras, at Copán. In all of these archaeological sites, Maya rituals were held at dawn led by shamans and Maya priests. On the final day of bʼakʼtun 13, residents of Yucatán and other regions formerly dominated by the ancient Maya celebrated what they saw as the dawn of a new, better era. According to official figures from Mexico's National Institute of Anthropology and History (INAH), about 50,000 people visited Mexican archaeological sites on 21 December 2012. Of those, 10,000 visited Chichén Itzá in Yucatán, 9,900 visited Tulum in Quintana Roo, and 8,000 visited Palenque in Chiapas. An additional 10,000 people visited Teotihuacan near Mexico City, which is not a Maya site. The main ceremony in Chichén Itzá was held at dawn in the plaza of the Temple of Kukulkán, one of the principal symbols of Maya culture. The archaeological site was opened two hours early to receive thousands of tourists, mostly foreigners who came to participate in events scheduled for the end of bʼakʼtun 13. The fire ceremony at Tikal was held at dawn in the main plaza of the Temple of the Great Jaguar. The ceremony was led by Guatemalan and foreign priests. The President of Guatemala, Otto Pérez, and of Costa Rica, Laura Chinchilla, participated in the event as special guests. During the ceremony the priests asked for unity, peace and the end of discrimination and racism, with the hope that the start of a new cycle will be a "new dawn". About 3,000 people participated in the event. Most of these events were organized by agencies of the Mexican and Central American governments, and their respective tourism industries expected to attract thousands of visitors. Mexico is visited by about 22 million foreigners in a typical year. In 2012, the national tourism agency expected to attract 52 million visitors just to the regions of Chiapas, Yucatán, Quintana Roo, Tabasco and Campeche. A Maya activist group in Guatemala, Oxlaljuj Ajpop, objected to the commercialization of the date. A spokesman from the Conference of Maya Ministers commented that for them the Tikal ceremony is not a show for tourists but something spiritual and personal. The secretary of the Great Council of Ancestral Authorities commented that living Maya felt they were excluded from the activities in Tikal. This group held a parallel ceremony, and complained that the date has been used for commercial gain. In addition, before the main Tikal ceremony, about 200 Maya protested the celebration because they felt excluded. Most modern Maya were indifferent to the ceremonies, and the small number of people still practising ancient rites held solemn, more private ceremonies. Osvaldo Gomez, a technical advisor to the Tikal site, complained that many visitors during the celebration had illegally climbed the stairs of the Temple of the Masks, causing "irreparable" damage. South America In Brazil, Décio Colla, the Mayor of the City of São Francisco de Paula, Rio Grande do Sul, mobilized the population to prepare for the end of the world by stocking up on food and supplies. In the city of Corguinho, in the Mato Grosso do Sul, a colony was built for survivors of the expected tragedy. 
In Alto Paraíso de Goiás, the hotels also made specific reservations for prophetic dates. In Bolivia, President Evo Morales participated in Quechua and Aymara rituals, organized with government support, to commemorate the Southern solstice that took place in Isla del Sol, in the southern part of Lake Titicaca. During the event, Morales proclaimed the beginning of "Pachakuti", meaning the world's wake up to a culture of life and the beginning of the end to world capitalism, and he proposed to dismantle the International Monetary Fund and the World Bank. On 21 December 2012, the Uritorco mountain in Córdoba, Argentina was closed, as a mass suicide there had been proposed on Facebook. United States In the United States, sales of private underground blast shelters increased noticeably after 2009, with many construction companies' advertisements calling attention to the 2012 apocalypse. In Michigan, schools were closed for the Christmas holidays two days early, in part because rumours of the 2012 apocalypse were raising fears of repeat shootings similar to that at Sandy Hook. American reality TV stars Heidi Montag and Spencer Pratt revealed that they had spent most of their $10 million of accumulated earnings by 2010 because they believed the world would end in 2012. Cultural influence The 2012 phenomenon was discussed or referenced by several media outlets. Several TV documentaries, as well as some contemporary fictional references to the year 2012, referred to 21 December as the day of a cataclysmic event. The TV series The X-Files cited 22 December 2012 as the date for an alien colonization of the Earth, and mentioned the Mayan calendar "stopping" on this date. The History Channel aired a handful of special series on doomsday that included analysis of 2012 theories, such as Decoding the Past (2005–2007), 2012, End of Days (2006), Last Days on Earth (2006), Seven Signs of the Apocalypse (2009), and Nostradamus 2012 (2008). The Discovery Channel also aired 2012 Apocalypse in 2009, suggesting that massive solar storms, magnetic pole reversal, earthquakes, supervolcanoes, and other drastic natural events could occur in 2012. In 2012, the National Geographic Channel launched a show called Doomsday Preppers, a documentary series about survivalists preparing for various cataclysms, including the 2012 doomsday. Hundreds of books were published on the topic. The bestselling book of 2009, Dan Brown's The Lost Symbol, featured a coded mock email number (2456282.5) that decoded to the Julian date for 21 December 2012. In cinema, Roland Emmerich's 2009 science fiction disaster film 2012 was inspired by the phenomenon, and advance promotion prior to its release included a stealth marketing campaign in which TV spots and websites from the fictional "Institute for Human Continuity" called on people to prepare for the end of the world. As these promotions did not mention the film itself, many viewers believed them to be real and contacted astronomers in panic. Although the campaign was heavily criticized, the film became one of the most successful of its year, grossing nearly $770 million worldwide. An article in The Daily Telegraph attributed the widespread fear of the phenomenon in China to the film, which was a smash hit in that country because it depicted the Chinese building "survival arks". Lars von Trier's 2011 film Melancholia featured a plot in which a planet emerges from behind the Sun on a collision course with Earth. The phenomenon also inspired several rock and pop music hits. 
As early as 1997, "A Certain Shade of Green" by Incubus referred to the mystical belief that a shift in perception would arrive in 2012 ("Are you gonna stand around till 2012 A.D.? / What are you waiting for, a certain shade of green?"). More recent hits include "Time for Miracles" (2009) performed by Adam Lambert, "2012 (It Ain't the End)" (2010) performed by Jay Sean featuring Nicki Minaj, "Till the World Ends" (2011) performed by Britney Spears and "2012 (If The World Would End)" (2012) performed by Mike Candys featuring Evelyn & Patrick Miller. Towards mid-December 2012, an internet hoax related to South Korean singer Psy being one of the Four Horsemen of the Apocalypse was widely circulated around social media platforms. The hoax purported that once Psy's "Gangnam Style" YouTube video amassed a billion views, the world would end. Indian composer A. R. Rahman, known for Slumdog Millionaire, released his single "Infinite Love" to "instill faith and optimism in people" prior to the hypothesised doomsday. The artwork for All Time Low's 2012 album Don't Panic satirizes various cataclysmic events associated with the phenomenon. A number of brands ran commercials tied to the phenomenon in the days and months leading to the date. In February 2012, American automotive company General Motors aired an advertisement during the annual Super Bowl football game in which a group of friends drove Chevrolet Silverados through the ruins of human civilization following the 2012 apocalypse. On 17 December 2012, Jell-O ran an ad saying that offering Jell-O to the Mayan gods would appease them into sparing the world. John Verret, Professor of Advertising at Boston University, questioned the utility of tying large sums of money to such a unique and short-term event. See also 13 (number) 2011 end times prediction Doomsday cult Dreamspell List of topics characterized as pseudoscience Triskaidekaphobia References Notes Citations Works cited Further reading External links Why The World Will Still Be Here After Dec. 21, 2012: A Public Discussion with 3 Scientists at the SETI Institute Academia.edu 2012 hoaxes Apocalypticism Conspiracy theories Hoaxes Mass psychogenic illness Maya calendars Mythology New Age culture Numerology Pseudoscience Urban legends Phenomenon
2012 phenomenon
Mathematics
10,154
8,732,068
https://en.wikipedia.org/wiki/New%20Holland%20Brewing%20Company
New Holland Brewing Company is an American independent craft brewing and distilling company headquartered in Holland, Michigan. It also owns and operates brewpub-style restaurants and spirits-tasting rooms located across West Michigan. The company's craft-style beer brands, including Dragon's Milk and Tangerine Space Machine, and spirits brands, including Dragon's Milk Origin and Beer Barrel Bourbon, are distributed throughout the United States and exported to Canada, Europe and Asia. After the sale of Bell's to Kirin, New Holland Brewing Company became the largest craft brewery in the state of Michigan. History Brett VanderKamp and Jason Spaulding, the founders of New Holland Brewing Company, grew up together in Midland, Michigan, and later attended Hope College. In college Spaulding and VanderKamp cultivated a love of homebrewing, which would bring them together again shortly after graduation. Their business plan took two years to formulate, but once complete, the pair quickly lined up investors, and in 1997 New Holland was founded in Holland, Michigan. Originally, their goal was to produce beer that was characteristically unique to Western Michigan. Their beer was well received, and the company increased production in 2006 and again in 2007. New Holland began distilling bourbon, whiskey, rum, gin and vodka in 2005, and selling it in 2008. On August 23, 2018, New Holland Brewing Company announced that it would be re-branding its flagship Dragon's Milk Bourbon Barrel-Aged Stout. The company launched the re-branded Dragon's Milk packaging in 2023 alongside new Dragon's Milk items, Dragon's Milk Crimson Keep BA Imperial Red Ale and Dragon's Milk Tales of Gold BA Imperial Golden Ale. References Breweries in the United States American beer brands Beer brewing companies based in Michigan Distilleries Bourbon whiskey Cocktails Restaurants in Michigan Companies based in Michigan Pub chains Food- and drink-related organizations Holland, Michigan Grand Rapids, Michigan Battle Creek, Michigan
New Holland Brewing Company
Chemistry
409
1,030,401
https://en.wikipedia.org/wiki/Humic%20substance
Humic substances (HS) are colored, relatively recalcitrant organic compounds naturally formed during long-term decomposition and transformation of biomass residues. The color of humic substances varies from bright yellow to light or dark brown leading to black. The term comes from humus, which in turn comes from the Latin word humus, meaning "soil, earth". Humic substances represent the major part of organic matter in soil, peat, coal, and sediments, and are important components of dissolved natural organic matter (NOM) in lakes (especially dystrophic lakes), rivers, and sea water. Humic substances account for 50–90% of cation exchange capacity in soils. "Humic substances" is an umbrella term covering humic acid, fulvic acid and humin, which differ in solubility. By definition, humic acid (HA) is soluble in water at neutral and alkaline pH, but insoluble at acidic pH < 2. Fulvic acid (FA) is soluble in water at any pH. Humin is not soluble in water at any pH. This definition of humic substances is largely operational. It is rooted in the history of soil science and, more precisely, in the tradition of alkaline extraction, which dates back to 1786, when Franz Karl Achard treated peat with a solution of potassium hydroxide and, after subsequent addition of an acid, obtained an amorphous dark precipitate (i.e., humic acid). Aquatic humic substances were isolated for the first time in 1806 from spring water by Jöns Jakob Berzelius. In terms of chemistry, FA, HA, and humin share more similarities than differences and represent a continuum of humic molecules. All of them are constructed from similar aromatic, polyaromatic, aliphatic, and carbohydrate units and contain the same functional groups (mainly carboxylic, phenolic, and ester groups), albeit in varying proportions. Water solubility of humic substances is primarily governed by the interplay of two factors: the amount of ionizable (mainly carboxylic) functional groups and the molecular weight (MW). In general, fulvic acid has a higher amount of carboxylic groups and a lower average molecular weight than does humic acid. Measured average molecular weights vary with source; however, molecular weight distributions of HA and FA overlap significantly. Age and origin of the source material determine the chemical structure of humic substances. In general, humic substances derived from soil and peat (which take hundreds to thousands of years to form) have higher molecular weight, higher amounts of O and N, more carbohydrate units, and fewer polyaromatic units than humic substances derived from coal and leonardite (which take millions of years to form). Isolation of HS is the result of either an alkaline extraction from solid sources of NOM or the adsorption of HS on a resin. A newer view of humic substances is that they are not mostly high-molecular-weight macropolymers but rather represent a heterogeneous mixture of relatively small molecular components of the soil organic matter, auto-assembled in supramolecular associations and composed of a variety of compounds of biological origin synthesized by abiotic and biotic reactions in soil and surface waters. It is the large molecular complexity of the soil humeome that confers on humic matter its bioactivity in soil, its stability in ecosystems, and its role as a plant growth promoter (acting in particular on plant roots).
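The operational, solubility-based distinction between fulvic acid, humic acid, and humin described above can be summarized in a minimal sketch. This is illustrative only: real fractionation follows the extraction procedures discussed in the next section, not a simple rule, and the pH 2 threshold is the conventional one quoted above.

```python
def classify_humic_fraction(soluble_in_acid: bool, soluble_in_alkali: bool) -> str:
    """Classify a fraction by its solubility at acidic (pH < 2) and at neutral/alkaline pH."""
    if soluble_in_acid and soluble_in_alkali:
        return "fulvic acid"   # soluble in water at any pH
    if soluble_in_alkali:
        return "humic acid"    # soluble at neutral/alkaline pH, precipitates below pH 2
    return "humin"             # insoluble at any pH

print(classify_humic_fraction(True, True))    # fulvic acid
print(classify_humic_fraction(False, True))   # humic acid
print(classify_humic_fraction(False, False))  # humin
```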
The academic definition of humic substances is under debate, and some researchers argue against the traditional concepts of humification and seek to forgo the alkaline extraction method and to analyze the soil directly. Concepts of humic substances The formation of HS in nature is one of the least understood aspects of humus chemistry and one of the most intriguing. Historically, there have been three main theories to explain it: the lignin theory of Waksman (1932), the polyphenol theory, and the sugar-amine condensation theory of Maillard (1911). Humic substances are formed by the microbial degradation of dead biomass, such as lignin, cellulose, lignocellulose and charcoal. Humic substances in the lab are resistant to further biodegradation. The structure, elemental composition and content of functional groups of a given sample depend on the water or soil source and on the specific procedures and conditions of extraction. Nevertheless, the average properties of lab-extracted HS from different sources are remarkably similar. Fractionation Historically, scientists have used variations of similar methods for extracting HS from NOM and separating the extracts into HA and FA. The International Humic Substances Society advocates the use of standard laboratory methods for preparation of humic and fulvic acids. Humic substances are extracted from soil and other solid sources using 0.1 M NaOH, under a nitrogen atmosphere, to prevent abiotic oxidation of some of the components of HS. The HA is then precipitated at pH 1, and the soluble fraction is treated on a resin column to separate fulvic acid components from other acid-soluble compounds. The fraction of NOM not extracted by 0.1 M NaOH is humin. Humic acid plus fulvic acid is extracted from natural waters using a resin column after microfiltration and acidification to pH 2. The humic materials are eluted from the column with NaOH, and humic acid is precipitated at pH 1. After adjusting the pH to 2, fulvic acid is separated from other acid-soluble compounds using a resin column, as with solid-phase sources. An analytical method for quantifying humic acid and fulvic acid in commercial ores and humic products has been developed based on the IHSS humic acid and fulvic acid preparation methods. Scientists associated with the IHSS have also isolated the entire NOM from black water streams using reverse osmosis. The retentate from this process contains both humic and fulvic acids, predominantly humic acid. The NOM from hard water streams has been isolated using reverse osmosis and electrodialysis in tandem. Extracted humic acid is not a single acid; rather, it is a complex mixture of many different acids containing carboxyl and phenolate groups, so that the mixture behaves functionally as a dibasic acid or, occasionally, as a tribasic acid. Commercial humic acid used to amend soil is manufactured using these same well-established procedures. Humic acids can form complexes with ions that are commonly found in the environment, creating humic colloids. A sequential chemical fractionation called Humeomics can be used to isolate more homogeneous humic fractions and determine their molecular structures by advanced spectroscopic and chromatographic methods. Substances identified in humic extracts and directly in soil include mono-, di-, and tri-hydroxycarboxylic acids, fatty acids, dicarboxylic acids, linear alcohols, phenolic acids, terpenoids, carbohydrates, and amino acids.
This suggests humic molecules may form supramolecular structures held together by non-covalent forces, such as van der Waals forces and π–π and CH–π interactions. Chemical characteristics Since the dawn of modern chemistry, humic substances have been among the most studied natural materials. Despite long study, their molecular structure remains debatable. The traditional view has been that humic substances are heteropolycondensates, in varying associations with clay. A more recent view is that relatively small molecules also play a major role. A typical humic substance is a mixture of many molecules, some of which are based on a motif of aromatic nuclei with phenolic and carboxylic substituents, linked together. The functional groups that contribute most to surface charge and reactivity of humic substances are phenolic and carboxylic groups. Humic substances commonly behave as mixtures of dibasic acids, with a pK1 value around 4 for protonation of carboxyl groups and around 8 for protonation of phenolate groups in HA. Fulvic acids are more acidic than HA. There is considerable overall similarity among individual humic acids. For this reason, measured pK values for a given sample are average values relating to the constituent species. The other important characteristic is charge density. More recent determinations of the molecular weights of HS show that they are not as great as once thought. Reported number-average molecular weights of soil HA are below about 6,000, but the material is highly polydisperse, with some components of much larger and some of much smaller measured molecular weight. Measured number-average molecular weights of aquatic HS are roughly 1,700 or less for HA and below 900 for FA; aquatic HA and FA are also highly polydisperse. The number of individually distinct components in HS, as measured by mass spectrometry, is in the thousands. The average composition of HA and FA can be represented by model structures. The presence of carboxylate and phenolate groups gives the humic acids the ability to form complexes with ions such as Mg2+, Ca2+, Fe2+, and Fe3+, creating humic colloids. Many humic acids have two or more of these groups arranged so as to enable the formation of chelate complexes. The formation of (chelate) complexes is an important aspect of the biological role of humic acids in regulating bioavailability of metal ions. Criticism Decomposition products of dead plant materials form intimate associations with minerals, making it difficult to isolate and characterize soil organic constituents. 18th-century soil chemists successfully used alkaline extraction to isolate a portion of the organic constituents in soil. This led to the theory that a 'humification' process created distinct 'humic substances' like 'humic acid', 'fulvic acid', and 'humin'. However, modern chemical analysis methods applied to unprocessed mineral soil have not directly observed large humic molecules. This suggests that the extraction and fractionation techniques used to isolate humic substances alter the original chemical composition of the organic matter. Since the definition of humic substances like humic and fulvic acids relies on their separation through these methods, it raises the question of whether the distinction between these compounds accurately reflects the natural state of organic matter in soil. Despite these concerns, the 'humification' theory persists in the field and even in textbooks, and attempts to redefine 'humic substances' in soil have resulted in a proliferation of conflicting definitions.
This lack of consensus makes it difficult to communicate scientific understanding of soil processes and properties accurately. Determination of humic acids in water samples The presence of humic acid in water intended for potable or industrial use can have a significant impact on the treatability of that water and the success of chemical disinfection processes. For instance, humic and fulvic acids can react with the chemicals used in the chlorination process to form disinfection byproducts such as dihaloacetonitriles, which are toxic to humans. Accurate methods of establishing humic acid concentrations are therefore essential in maintaining water supplies, especially from upland peaty catchments in temperate climates. Because many different bio-organic molecules in very diverse physical associations are mixed together in natural environments, it is cumbersome to measure their exact concentrations in the humic superstructure. For this reason, concentrations of humic acid are traditionally estimated from concentrations of organic matter, typically from concentrations of total organic carbon (TOC) or dissolved organic carbon (DOC). Extraction procedures are bound to alter some of the chemical linkages present in the soil humic substances (mainly ester bonds in biopolyesters such as cutins and suberins). The humic extracts are composed of large numbers of different bio-organic molecules that have not yet been totally separated and identified. However, single classes of residual biomolecules have been identified by selective extractions and chemical fractionation, and are represented by alkanoic and hydroxy alkanoic acids, resins, waxes, lignin residues, sugars, and peptides. Ecological effects Organic matter soil amendments have been known by farmers to be beneficial to plant growth for longer than recorded history. However, the chemistry and function of the organic matter have been a subject of controversy since humans began postulating about it in the 18th century. Until the time of Liebig, it was supposed that humus was used directly by plants, but, after Liebig showed that plant growth depends upon inorganic compounds, many soil scientists held the view that organic matter was useful for fertility only as it was broken down with the release of its constituent nutrient elements into inorganic forms. At the present time, soil scientists hold a more holistic view and at least recognize that humus influences soil fertility through its effect on the water-holding capacity of the soil. Also, since plants have been shown to absorb and translocate the complex organic molecules of systemic insecticides, they can no longer discredit the idea that plants may be able to absorb the soluble forms of humus; this may in fact be an essential process for the uptake of otherwise insoluble iron oxides. A study on the effects of humic acid on plant growth, conducted at Ohio State University, stated in part that "humic acids increased plant growth" and that there were "relatively large responses at low application rates". A 1998 study by scientists at the North Carolina State University College of Agriculture and Life Sciences showed that addition of humate to soil significantly increased root mass in creeping bentgrass turf. A 2018 study by scientists at the University of Alberta showed that humic acids can reduce prion infectivity in laboratory experiments, but that this effect may be uncertain in the environment due to minerals in the soil that buffer the effect.
Anthropogenic production Humans can affect the production of humic substances via a variety of ways: by making use of natural processes by composting lignin or adding biochar (see soil rehabilitation), or by industrial synthesis of artificial humic substances from organic feedstocks directly. These artificial substances may be similarly divided into artificial humic acid (A-HA) and artificial fulvic acid (A-FA). Lignosulfonates, a by-product from the sulfite pulping of wood, are valorized in the industrial fabrication of concrete where they serve as water reducer, or concrete superplasticizer, to decrease the water-cement ratio (w/c) of fresh concrete while preserving its workability. The w/c ratio of concrete is one of the main parameter controlling the mechanical strength of hardened concrete and its durability. The same wood pulping process can also be applied to obtain humus-like substances by hydrolysis and oxidation. A kind of artificial "lignohumate" can be directly produced from wood in this way. Agricultural litter can be turned into an artificial humic substance by a hydrothermal reaction. The resulting mixture can increase the content of dissolved organic matter (DOM) and total organic carbon (TOC) in soil. Lignite (brown coal) may also be oxidized to produce humic substances, reversing the natural process of coal formation under anoxic and reducing conditions. This form of "mineral-derived fulvic acid" is widely used in China. This process also occurs in nature, producing leonardite. Economic geology In economic geology, the term humate refers to geological materials, such as weathered coal beds (leonardite), mudrock, or pore material in sandstones, that are rich in humic acids. Humate has been mined from the Fruitland Formation of New Mexico for use as a soil amendment since the 1970s, with nearly 60,000 metric tons produced by 2016. Humate deposits may also play an important role in the genesis of uranium ore bodies. Technological applications The heavy-metal binding abilities of humic acids have been exploited to develop remediation technologies for removing lead from waste water. To this end, Yurishcheva et al. coated magnetic nanoparticles with humic acids. After capturing lead ions, the nanoparticles can then be captured using a magnet. Ancient masonry Archeology finds that ancient Egypt used mudbricks reinforced with straw and humic acids. See also Black water (drink) Humin Humus Polycyclic aromatic hydrocarbon Soil References External links International Humic Substances Society Composting Organic acids Soil chemistry
Humic substance
Chemistry
3,420
1,693,711
https://en.wikipedia.org/wiki/Pseudaconitine
Pseudaconitine, also known as nepaline (C36H51NO12), is an extremely toxic alkaloid found in high quantities in the roots of Aconitum ferox, also known as Indian Monkshood, which belongs to the family Ranunculaceae. The plant is found in East Asia, including the Himalayas. History Pseudaconitine was discovered in 1878 by Wright and Luff. They isolated a highly toxic alkaloid from the roots of the plant Aconitum ferox and called it pseudaconitine. The poison is also called bikh, bish, or nabee. Toxicity and mechanism Pseudaconitine is a moderate inhibitor of the enzyme acetylcholinesterase. This enzyme breaks down the neurotransmitter acetylcholine through hydrolysis. Inhibition of this enzyme causes a constant stimulation of the postsynaptic membrane by the neurotransmitter which it cannot cancel. This accumulation of acetylcholine may thus lead to the constant stimulation of the muscles, glands and central nervous system. Furthermore, it appears the substance in small quantities also causes a tingling effect on the tongue, lips and skin. Structure and reactivity Pseudaconitine is a diterpene alkaloid, with the chemical formula C36H51NO12. The crystal melts at 202 °C and is moderately soluble in water, but more so in alcohol. This shows that it is a lipophilic substance. When heated in the dry state, it undergoes pyrolysis and pyropseudaconitine (C34H47O10N) is formed. This does not have the same tingling effect as pseudaconitine. See also Aconitine References Diterpene alkaloids Plant toxins Phenol ethers Acetate esters Benzoate esters Acetylcholinesterase inhibitors
Pseudaconitine
Chemistry
403
60,999,682
https://en.wikipedia.org/wiki/Alizarin%20Red%20S
Alizarin Red S (also known as C.I. Mordant Red 3, Alizarin Carmine, and C.I. 58005) is a water-soluble sodium salt of Alizarin sulfonic acid with a chemical formula of . Alizarin Red S was discovered by Graebe and Liebermann in 1871. In the field of histology, Alizarin Red S is used to stain calcium deposits in tissues, and in geology to stain and differentiate carbonate minerals. Uses Alizarin Red S is used in histology and histopathology to stain, or locate, calcium deposits in tissues. In the presence of calcium, Alizarin Red S binds to the calcium to form a lake pigment that is orange to red in color. Whole specimens can be stained with Alizarin Red S to show the distribution of bone, especially in developing embryos. In living corals, Alizarin Red S has been used to mark daily growth layers. In geology, Alizarin Red S is used on thin sections and polished surfaces to help identify carbonate minerals, which stain at different rates. See also Aniline 1,2,4-Trihydroxyanthraquinone or purpurin, another red dye that occurs in madder root Hydroxyanthraquinone Dihydroxyanthraquinone List of dyes List of colors (compact) References Anthraquinone dyes Catechols Chelating agents Dihydroxyanthraquinones Organic pigments Natural dyes Staining dyes Histology Histotechnology Staining Histochemistry
Alizarin Red S
Chemistry,Biology
329
16,616,885
https://en.wikipedia.org/wiki/Cognitive%20resource%20theory
Cognitive resource theory (CRT) is a leadership theory of industrial and organisational psychology developed by Fred Fiedler and Joe Garcia in 1987 as a reconceptualisation of the Fiedler contingency model. The theory focuses on the influence of the leader's intelligence and experience on their reaction to stress. The essence of the theory is that stress is the enemy of rationality, damaging leaders' ability to think logically and analytically. However, the leader's experience and intelligence can lessen the influence of stress on his or her actions: intelligence is the main factor in low-stress situations, while experience counts for more during high-stress moments. Originating from studies into military leadership style, CRT can also be applied to other contexts such as the relationship between stress and ability in sport. The theory proposes the style of leadership required in certain situations, depending on the degree of stress, situational control and task structure. Training should focus on stress management, so that a leader's intellect can be used most effectively, and on training leaders to take a directive approach when their knowledge will benefit the group but a less directive approach when group member abilities will contribute to performance. Fiedler contingency model Research into leadership performance and effectiveness of training programmes found no effect of years of experience on performance. To understand the effect of different leaders on performance in an organisation, Fiedler developed the contingency model. The model highlights the importance of leadership style and the degree to which this is matched to the situation. It contrasts task-orientated leaders with relationship-orientated leaders, as judged by the Least Preferred Coworker (LPC) scale. Either leadership style can be effective depending on the situation, so no ideal leader is theorised, but performance can be improved by altering the situation to suit the style of leadership. The second factor of the theory is how well the leader can control the group and ensure their instructions are carried out. However, this theory was criticised for its lack of flexibility and for doubts over the accuracy of the LPC scale. Fiedler then went on to develop the CRT, which takes into account the personality of the leader, degree of situational stress and group-leader relations. Cognitive resource theory The cognitive resources of a leader refer to their experience, intelligence, competence, and task-relevant knowledge. Blades undertook studies in army mess halls, investigating the effect of group member and leader intelligence on overall organisational performance. The effect of intelligence on performance was influenced by how directive the leader was and by the motivation of both the leader and the members. He concluded that a leader's knowledge can only contribute to performance if it is efficiently communicated, hence requiring a directive leader and also a compliant group that is willing to carry out the commands of the leader. A further study on military cadets measuring levels of interpersonal stress and intelligence showed intelligence to be impaired under conditions of stress. Predictions A leader's cognitive ability contributes to the performance of the team only when the leader's approach is directive. When leaders are better at planning and decision-making, in order for their plans and decisions to be implemented, they need to tell people what to do, rather than hope they agree with them.
When they are not better than people on the team, then a non-directive approach is more appropriate, for example by facilitating an open discussion in which the ideas of the team can be aired and the best approach identified and implemented. Stress affects the relationship between intelligence and decision quality. When there is low stress, then intelligence is fully functional and makes an optimal contribution. However, during high stress, natural intelligence not only makes no difference, but may also have a negative effect. One reason for this may be that an intelligent person seeks rational solutions, which may not be available (and may be one of the causes of stress). In such situations, a leader who is inexperienced in 'gut feel' decisions is forced to rely on this unfamiliar approach. Another possibility is that the leader retreats within him/herself, to think hard about the problem, leaving the group to their own devices. In situations of stress, cognitive abilities are not task-orientated but focus on task-irrelevant features arising from the stress of the situation or of a superior. A leader's abilities contribute to group performance only under conditions where the group favors the leader and is supportive of the leader and their goals. In situations where the group members are supportive, the leader's commands can therefore be implemented. A leader's intelligence correlates with performance to the degree that the task is intellectually demanding. Intellectual abilities can only be utilised efficiently in difficult, cognitively demanding tasks. Therefore, the leader's abilities and intelligence only aid organisational success when the leader is directive, the situation is stress-free, the organisation's members are supportive and the task requires high intellect. The role of experience In high-stress conditions, experience influences performance more than intelligence does, as experience leads the leader to perceive the situation as more structured and less complex. A high level of intellect leads to cognitive complexity and thereby to the perception of greater task complexity; the leader sees many alternative solutions, resulting in greater stress. The extent to which a leader has situational control, judged by their perception of task structure and their position of power, defines how certain they are that the task will be accomplished. Situational control is a key concept in both the contingency model and in CRT. The contingency model predicts that task-motivated leaders (low LPC score) perform most efficiently in situations of high control, whereas relationship-orientated leaders (high LPC score) perform best in moderately or low-structured tasks. References Further reading Bettin, P. J. (1983). "The role of relevant experience and intellectual ability in determining the performance of military leaders: A contingency model explanation". Seattle: University of Washington. Fiedler, F. E. (1986), Berkowitz, L. (ed.), "The contribution of cognitive resources to leadership performance", Advances in experimental social psychology, New York, NY: Academic Press. Fiedler, F. E.; Gibson, F. W. (2001). "Determinants of effective utilization of leader abilities". Concepts for Air Force Leadership. 24 (2): 171–176. Fiedler, F. E.; McGuire, M.; Richardson, M. (1989). "The role of intelligence and experience in successful group performance". Applied Sport Psychology. 1: 132–149. Cognitive psychology Psychological theories
Cognitive resource theory
Biology
1,322
59,953,630
https://en.wikipedia.org/wiki/Wildlife%20of%20Norway
The wildlife of Norway includes the diverse flora and fauna of Norway. The habitats include high mountains, tundras, rivers, lakes, wetlands, sea coast and some lower cultivated land in the south. Mainland Norway has a long coastline, protected by skerries and much dissected by fjords, and the mostly-icebound archipelago of Svalbard lies further north. The flora is very varied and a large range of mammals, birds (many migratory), fish and invertebrate species live here, as well as a few species of reptiles and amphibians. Geography Mainland Norway is a mountainous, elongated country with a very long coastline. It extends from a latitude of 58°N to more than 71°N, which is north of the Arctic Circle, and there are some 50,000 smaller islands off the extremely indented coastline. The Scandinavian Mountains extend along the length of the country; the average elevation is and 32% of the mainland is located above the tree line. The mountains end abruptly on the west coast and there is little in the way of a coastal plain. Between the mountains are deep valleys, with lowland largely limited to the southeastern region of the country and the south coast. The far northeast of the country is less mountainous, with rolling hills and the Finnmarksvidda plateau. Further north still, the archipelago of Svalbard has an arctic climate; the land surface on the three large and many smaller islands is 60% glacier ice, 30% rock and scree, and only 10% is vegetated. The island has its own distinctive flora and fauna. Climate The climate of much of the mainland is subarctic, with some continental climate in the southeast and some oceanic climate around the coast. Compared to other places at similar latitudes, the temperature is higher because of the warm North Atlantic Current, and the coast normally remains free of ice. The predominant winds bring relatively warm, humid air in from the Atlantic. Much precipitation falls on the western side of the mountains, with the long inland valleys being rather drier and land to the east of the mountains experiencing a rain shadow effect, with less precipitation, more sunshine and usually warmer summers. The far north and northeast of the country are drier but experience much fog and drizzle. The climate of Svalbard is dominated by its high latitude, with the average summer temperature at and January averages at . The West Spitsbergen Current moderates Svalbard's temperatures, particularly during winter. Flora Vegetation zones in Norway include forests, bogs, wetlands and heaths. Boreal species are adapted to the long, cold winters but need a growing season of sufficient length and warmth. Thus typical boreal species include the Norway spruce and pine, while at higher altitudes deciduous trees like downy birch, grey alder, aspen and rowan predominate. Higher still, these give way to dwarf willows and birches above which are tundra, rock and ice. The tundra is too exposed and the climate too severe to support trees and large plants, and here grow mountain grasses and low-growing alpine plants such as mountain avens and purple saxifrage. At even higher altitudes mosses and lichens provide the chief vegetation cover. Estimates of the total number of species in the country include 20,000 species of algae, 1,800 species of lichen, 1,050 species of mosses, 2,800 species of vascular plants, and up to 7,000 species of fungi. 
In parts of the country with a more continental climate, spruce and pine are dominant and grow at higher elevations than other trees, but in other areas, mountain birch forms the tree line, at around in central southeastern Norway, descending to at the Arctic Circle and to sea level further north. At higher altitudes, the terrain is arctic tundra. Svalbard has permafrost and tundra, with both low, middle and high Arctic vegetation. 165 species of plants have been found on the archipelago. Only those areas which defrost in the summer have vegetation cover and this accounts for about 10% of the island group. Fauna Excluding bacteria and viruses but including marine organisms, the total number of animal and plant species in Norway is estimated at 60,000. This includes 16,000 species of insects (probably 4,000 more species yet to be described), 450 species of birds (250 species nesting in Norway), 90 species of mammals, 45 fresh-water species of fish, 150 marine species of fish, 1,000 species of fresh-water invertebrates, and 3,500 species of marine invertebrates. Terrestrial mammals on mainland Norway include the European hedgehog, six species of shrews and ten of bats. The European rabbit, the European hare and the mountain hare all live here as do the Eurasian beaver, the red squirrel and the brown rat as well as about fifteen species of smaller rodent. Of the ungulates, the wild boar, the muskox, the fallow deer, the red deer, the elk (N. American usage: 'moose'), the roe deer and the reindeer are found in the country. Terrestrial carnivores include the brown bear, the Eurasian wolf, the red fox and the Arctic fox, as well as the Eurasian lynx, the European badger, the Eurasian otter, the stoat, the least weasel, the European polecat, the European pine marten and the wolverine. The coast is visited by the walrus and six species of seal, and around thirty species of whale, dolphin and porpoise are found in Norwegian waters. Norway has a great variety of bird species utilising its many habitats, cliffs, wetlands, forests and tundra. In the summer, insects and other food sources are plentiful and the days are long, giving plenty of time for birds to forage and feed their young. This is not the case in winter when the ground is covered in snow, the wetlands in ice and the days are short, so many of the birds are migratory, usually breeding in Norway and overwintering in southern Europe or Africa. Six terrestrial species of reptiles have been recorded in Norway: the viviparous lizard, the sand lizard, the slow worm, the European adder, the grass snake and the smooth snake, and leatherback and loggerhead sea turtles occasionally visit the coast. Amphibians are limited to the smooth newt, the great crested newt, the common toad, the common frog, the moor frog and the pool frog. There are four terrestrial mammalian species on Svalbard, the Arctic fox, the Svalbard reindeer, the polar bear and the accidentally introduced southern vole, which is found only around Grumant. There are around eighteen species of marine mammal including whales, dolphins, seals and walruses. The rock ptarmigan is the only resident species of bird but the snow bunting and wheatear also nest on Svalbard as do the nearly thirty species of seabird that migrate here each year. Most freshwater lakes in the Svalbard archipelago are inhabited by Arctic char. References Norway Biota of Norway
Wildlife of Norway
Biology
1,441
2,529,979
https://en.wikipedia.org/wiki/Meldrum%27s%20acid
Meldrum's acid or 2,2-dimethyl-1,3-dioxane-4,6-dione is an organic compound with formula . Its molecule has a heterocyclic core with four carbon and two oxygen atoms; the formula can also be written as . It is a crystalline colorless solid, sparingly soluble in water. It decomposes on heating with release of carbon dioxide, acetone, and a ketene. Properties Acidity The compound can easily lose a hydrogen ion from the methylene () in the ring (carbon 5), which creates a double bond between it and one of the adjacent carbons (number 4 or 6), and a negative charge in the corresponding oxygen. The resulting anion is stabilized by resonance between the two alternatives, so that the double bond is delocalized and each oxygen in the carbonyls has a formal charge of −1/2. The ionization constant pKa is 4.97, which makes it behave as a monobasic acid even though it contains no carboxylic acid groups. In this and other properties, the compound resembles dimedone and barbituric acid. However, while dimedone exists in solution predominantly as the mono-enol tautomer, Meldrum's acid exists almost entirely in the diketone form. The unusually high acidity of this compound was long considered anomalous: it is 8 orders of magnitude more acidic than the closely related compound dimethyl malonate. In 2004, Ohwada and coworkers determined that the energy-minimizing conformation of the compound places the alpha proton's σ*CH orbital in the proper geometry to align with the π*CO orbital, so that the C-H bond is unusually strongly destabilized already in the ground state. Preparation Original synthesis The compound was first made by Meldrum by a condensation reaction of acetone with malonic acid in acetic anhydride and sulfuric acid. Alternative syntheses As an alternative to its original preparation, Meldrum's acid can be synthesized from malonic acid, isopropenyl acetate (an enol derivative of acetone), and catalytic sulfuric acid. A third route is the reaction of carbon suboxide with acetone in the presence of oxalic acid. Uses Like malonic acid and its ester derivatives, and other 1,3-dicarbonyl compounds, Meldrum's acid can serve as a reactant for a variety of nucleophilic reactions. Alkylation and acylation The acidity of carbon 5 (between the two carbonyl groups) allows simple derivatization of Meldrum's acid at this position, through reactions such as alkylation and acylation. For example, deprotonation and reaction with a simple alkyl halide () attaches the alkyl group () at that position: The analogous reaction with an acyl chloride () attaches the acyl () instead: These two reactions allow Meldrum's acid to serve as a starting scaffold for the synthesis of many different structures with various functional groups. The alkylated products can be further manipulated to produce various amide and ester compounds. Heating the acyl product in the presence of an alcohol leads to ester exchange and decarboxylation in a process similar to the malonic ester synthesis. The reactive nature of the cyclic diester allows good reactivity even with alcohols as hindered as t-butanol, and this reactivity of Meldrum's acid and its derivatives has been used to develop a range of reactions. Ketoesters formed from the reaction of alcohols with Meldrum's acid derivatives are useful in the Knorr pyrrole synthesis.
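As a rough illustration of the acidity quoted above, the pKa of 4.97 can be used to estimate how ionized Meldrum's acid is at a given pH. The short sketch below is not from the article; it simply assumes ideal monoprotic behaviour and applies the Henderson-Hasselbalch relation.

```python
# Illustrative sketch (assumes ideal monoprotic behaviour): estimate the
# fraction of Meldrum's acid deprotonated at a given pH from the quoted
# pKa of 4.97, using the Henderson-Hasselbalch relation.

def fraction_deprotonated(pH: float, pKa: float = 4.97) -> float:
    """Return [A-] / ([HA] + [A-]) for a monoprotic acid."""
    ratio = 10 ** (pH - pKa)        # [A-]/[HA]
    return ratio / (1.0 + ratio)

for pH in (3.0, 4.97, 7.0):
    print(f"pH {pH}: {fraction_deprotonated(pH):.1%} deprotonated")
# At pH = pKa the acid is 50% ionized; near neutral pH it is almost fully ionized.
```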
Synthesis of ketenes At temperatures greater than 200 °C, Meldrum's acid undergoes a pericyclic reaction that releases acetone and carbon dioxide and produces a highly reactive ketene compound. These ketenes can be isolated using flash vacuum pyrolysis (FVP). Ketenes are highly electrophilic and can undergo addition reactions with a range of other chemicals, particularly ketene cycloadditions, or dimerisation to diketene. With this approach it is possible to form new C–C bonds, rings, amides, esters, and acids. Alternatively, the pyrolysis can be performed in solution to obtain the same results without isolating the ketene, in a one-pot reaction. The ability to form such diverse products makes Meldrum's acid a very useful reagent for synthetic chemists. History The compound is named after Andrew Norman Meldrum, who reported its synthesis in 1908. He misidentified its structure as a β-lactone of β-hydroxyisopropylmalonic acid; the correct structure, a bislactone built on a 1,3-dioxane ring, was reported in 1948. References Further reading Organic acids Lactones Dioxanes
Meldrum's acid
Chemistry
1,052
8,386,159
https://en.wikipedia.org/wiki/Ledeburite
In iron and steel metallurgy, ledeburite is a mixture of 4.3% carbon in iron and is a eutectic mixture of austenite and cementite. Ledeburite is not a type of steel as the carbon level is too high although it may occur as a separate constituent in some high carbon steels. It is mostly found with cementite or pearlite in a range of cast irons. It is named after the metallurgist Karl Heinrich Adolf Ledebur (1837–1906). He was the first professor of metallurgy at the Bergakademie Freiberg and discovered ledeburite in 1882. Ledeburite arises when the carbon content is between 2.06% and 6.67%. The eutectic mixture of austenite and cementite is 4.3% carbon, Fe3C:2Fe, with a melting point of 1147 °C. Ledeburite-II (at ambient temperature) is composed of cementite-I with recrystallized secondary cementite (which separates from austenite as the metal cools) and (with slow cooling) of pearlite. The pearlite results from the eutectoidal decay of the austenite that comes from the ledeburite-I at 723 °C. During more rapid cooling, bainite can develop instead of pearlite, and with very rapid cooling martensite can develop. Origins and discovery The story of ledeburite begins in the late 19th century when Adolf Ledebur, a pioneering German metallurgist, embarked on a journey to unravel the complexities of steel microstructures. In 1882, Ledebur identified a distinct microconstituent in high-carbon steels, characterized by its unique lamellar structure. This discovery marked the birth of ledeburite, named in honor of the scientist whose keen observations laid the foundation for understanding the intricate world within steel. Significance in metallurgical studies Beyond its immediate industrial applications, ledeburite holds a central position in metallurgical studies. The exploration of this unique microconstituent contributes to a deeper understanding of phase transformations, solidification processes, and the principles governing alloy behavior. Researchers and metallurgists leverage ledeburite as a model system to investigate the fundamental aspects of phase diagrams, eutectic reactions, and the kinetics of microstructural evolution during cooling and solidification. Metallurgical studies involving ledeburite extend to the development of advanced materials with tailored properties. By comprehending the nuances of ledeburite formation and its impact on steel performance, scientists can design alloys with improved strength, hardness, and corrosion resistance. This knowledge is invaluable in pushing the boundaries of material science and engineering, paving the way for innovations in diverse fields. External links Names and Steel Karl Heinrich Adolf Ledebur (in German) Rostfreier Edelstahl (in German) Metallurgy Ferrous alloys
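The boundary compositions quoted above (about 2.06% carbon in austenite at the eutectic temperature, 4.3% at the eutectic, and 6.67% in cementite) allow a textbook lever-rule estimate of how much of a cast iron solidifies as ledeburite. The sketch below is only an illustration of that standard calculation under idealized equilibrium cooling; it is not taken from the article, and real castings deviate from it.

```python
# Hedged sketch: lever-rule estimate of the mass fraction of ledeburite formed
# just below the eutectic temperature (~1147 degC), using the compositions
# quoted above (all in wt% carbon).  Assumes slow, equilibrium solidification.

AUSTENITE_C = 2.06   # wt% C in austenite at the eutectic temperature
EUTECTIC_C  = 4.3    # wt% C of the eutectic (ledeburite)
CEMENTITE_C = 6.67   # wt% C in cementite (Fe3C)

def ledeburite_fraction(overall_c: float) -> float:
    """Mass fraction of the alloy that solidifies as ledeburite."""
    if overall_c <= AUSTENITE_C or overall_c >= CEMENTITE_C:
        return 0.0                        # outside the ledeburite-forming range
    if overall_c <= EUTECTIC_C:           # hypoeutectic: primary austenite + ledeburite
        return (overall_c - AUSTENITE_C) / (EUTECTIC_C - AUSTENITE_C)
    # hypereutectic: primary cementite + ledeburite
    return (CEMENTITE_C - overall_c) / (CEMENTITE_C - EUTECTIC_C)

print(f"3.5 wt% C cast iron: {ledeburite_fraction(3.5):.0%} ledeburite")   # ~64%
print(f"4.3 wt% C (eutectic): {ledeburite_fraction(4.3):.0%} ledeburite")  # 100%
```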
Ledeburite
Chemistry,Materials_science,Engineering
623
19,525,611
https://en.wikipedia.org/wiki/HD%2090089
HD 90089 (HR 4084; Gliese 392.1) is a star located in the northern circumpolar constellation Camelopardalis. With an apparent magnitude of 5.25, it is faintly visible to the naked eye under ideal conditions. This star is located relatively close at a distance of 75 light years, but is drifting away at a rate of almost 8 km/s. HD 90089 is an F4 main-sequence star with the calcium K-line and metallic lines of an F2 star. Although the spectral type is of a form that would indicate an Am star, it is not listed in any of the major catalogues of chemically peculiar stars. At present it has 1.29 times the mass of the Sun and 1.4 times its radius. It radiates at 3.36 times the luminosity of the Sun from its photosphere at an effective temperature of , which gives it a yellowish-white hue. HD 90089's exact age depends on the method, with X-ray giving it a young age of only 300 million years. David et al. gave it an age of 1.1 billion years, significantly older than the previous solution; it spins rapidly with a projected rotational velocity of 56.2 km/s, and has an M0 companion separated 13" away and at approximately the same distance. An infrared excess has been detected around this star, most likely indicating the presence of a circumstellar disk at a radius of 145 AU. The temperature of this dust is 30 K. References External links HR 4084 Image HD 90089 Camelopardalis 090089 051502 F-type main-sequence stars 4084 Suspected variables Durchmusterung objects Am stars
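As a rough consistency check of the figures quoted above, the Stefan-Boltzmann law relates the star's luminosity and radius to an implied effective temperature. The sketch below is illustrative only; the solar reference values are assumptions of the sketch, and the result is an estimate rather than a substitute for the published temperature.

```python
# Rough consistency check (not from the article): infer the effective
# temperature implied by the quoted luminosity (3.36 L_sun) and radius
# (1.4 R_sun) via the Stefan-Boltzmann law, L = 4*pi*R^2*sigma*T^4.
from math import pi

SIGMA = 5.670374419e-8   # W m^-2 K^-4 (Stefan-Boltzmann constant)
L_SUN = 3.828e26         # W, assumed solar luminosity
R_SUN = 6.957e8          # m, assumed solar radius

L = 3.36 * L_SUN
R = 1.4 * R_SUN

T_eff = (L / (4 * pi * R**2 * SIGMA)) ** 0.25
print(f"Implied T_eff ~ {T_eff:.0f} K")
# Roughly 6600 K, consistent with an early F-type main-sequence star.
```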
HD 90089
Astronomy
368
13,544,815
https://en.wikipedia.org/wiki/Casein%20nutrient%20agar
Casein nutrient agar (CN) is a growth medium used to culture isolates of lactic acid bacteria such as Streptococcus thermophilus and Lactobacillus bulgaricus. It is composed of standard nutrient agar with the added ingredient of skim milk powder, which contains casein. Lactic acid bacteria precipitate casein out of the agar by lowering the pH, which produces a cloudy appearance around such colonies. This medium is not regarded as selective, as it supports the growth of a wide variety of organisms. References Microbiological media
Casein nutrient agar
Biology
128
669,552
https://en.wikipedia.org/wiki/Modular%20curve
In number theory and algebraic geometry, a modular curve Y(Γ) is a Riemann surface, or the corresponding algebraic curve, constructed as a quotient of the complex upper half-plane H by the action of a congruence subgroup Γ of the modular group of integral 2×2 matrices SL(2, Z). The term modular curve can also be used to refer to the compactified modular curves X(Γ) which are compactifications obtained by adding finitely many points (called the cusps of Γ) to this quotient (via an action on the extended complex upper-half plane). The points of a modular curve parametrize isomorphism classes of elliptic curves, together with some additional structure depending on the group Γ. This interpretation allows one to give a purely algebraic definition of modular curves, without reference to complex numbers, and, moreover, prove that modular curves are defined either over the field of rational numbers Q or a cyclotomic field Q(ζn). The latter fact and its generalizations are of fundamental importance in number theory. Analytic definition The modular group SL(2, Z) acts on the upper half-plane by fractional linear transformations. The analytic definition of a modular curve involves a choice of a congruence subgroup Γ of SL(2, Z), i.e. a subgroup containing the principal congruence subgroup of level N for some positive integer N, which is defined to be Γ(N) = {A ∈ SL(2, Z) : A ≡ I (mod N)}, the set of matrices congruent to the identity matrix modulo N. The minimal such N is called the level of Γ. A complex structure can be put on the quotient Γ\H to obtain a noncompact Riemann surface called a modular curve, and commonly denoted Y(Γ). Compactified modular curves A common compactification of Y(Γ) is obtained by adding finitely many points called the cusps of Γ. Specifically, this is done by considering the action of Γ on the extended complex upper-half plane H* = H ∪ Q ∪ {∞}. We introduce a topology on H* by taking as a basis: any open subset of H; for all r > 0, the set {∞} ∪ {τ ∈ H : Im(τ) > r}; for all coprime integers a, c and all r > 0, the image of {∞} ∪ {τ ∈ H : Im(τ) > r} under the action of an element of SL(2, Z) whose first column is (a, c), which exists because there are integers m, n such that an + cm = 1. This turns H* into a topological space which is a subset of the Riemann sphere P1(C). The group Γ acts on the subset Q ∪ {∞}, breaking it up into finitely many orbits called the cusps of Γ. If Γ acts transitively on Q ∪ {∞}, the space Γ\H* becomes the Alexandroff compactification of Γ\H. Once again, a complex structure can be put on the quotient Γ\H* turning it into a Riemann surface denoted X(Γ) which is now compact. This space is a compactification of Y(Γ). Examples The most common examples are the curves X(N), X0(N), and X1(N) associated with the subgroups Γ(N), Γ0(N), and Γ1(N). The modular curve X(5) has genus 0: it is the Riemann sphere with 12 cusps located at the vertices of a regular icosahedron. The covering X(5) → X(1) is realized by the action of the icosahedral group on the Riemann sphere. This group is a simple group of order 60 isomorphic to A5 and PSL(2, 5). The modular curve X(7) is the Klein quartic of genus 3 with 24 cusps. It can be interpreted as a surface with three handles tiled by 24 heptagons, with a cusp at the center of each face. These tilings can be understood via dessins d'enfants and Belyi functions – the cusps are the points lying over ∞, while the vertices and centers of the edges are the points lying over 0 and 1. The Galois group of the covering X(7) → X(1) is a simple group of order 168 isomorphic to PSL(2, 7). There is an explicit classical model for X0(N), the classical modular curve; this is sometimes called the modular curve.
The definition of Γ(N) can be restated as follows: it is the subgroup of the modular group which is the kernel of the reduction modulo N. Then Γ0(N) is the larger subgroup of matrices that are upper triangular modulo N, that is, those whose lower-left entry satisfies c ≡ 0 (mod N), and Γ1(N) is the intermediate group of matrices satisfying c ≡ 0 and a ≡ d ≡ 1 (mod N). These curves have a direct interpretation as moduli spaces for elliptic curves with level structure and for this reason they play an important role in arithmetic geometry. The level N modular curve X(N) is the moduli space for elliptic curves with a basis for the N-torsion. For X0(N) and X1(N), the level structure is, respectively, a cyclic subgroup of order N and a point of order N. These curves have been studied in great detail, and in particular, it is known that X0(N) can be defined over Q. The equations defining modular curves are the best-known examples of modular equations. The "best models" can be very different from those taken directly from elliptic function theory. Hecke operators may be studied geometrically, as correspondences connecting pairs of modular curves. Quotients of H that are compact do occur for Fuchsian groups Γ other than subgroups of the modular group; a class of them constructed from quaternion algebras is also of interest in number theory. Genus The covering X(N) → X(1) is Galois, with Galois group SL(2, N)/{1, −1}, which is equal to PSL(2, N) if N is prime. Applying the Riemann–Hurwitz formula and Gauss–Bonnet theorem, one can calculate the genus of X(N). For a prime level p ≥ 5, one has −πχ = |G|·D, where χ = 2 − 2g is the Euler characteristic, |G| = (p+1)p(p−1)/2 is the order of the group PSL(2, p), and D = π − π/2 − π/3 − π/p is the angular defect of the spherical (2,3,p) triangle. This results in the formula g = 1 + (p^2 − 1)(p − 6)/24. Thus X(5) has genus 0, X(7) has genus 3, and X(11) has genus 26. For p = 2 or 3, one must additionally take into account the ramification, that is, the presence of order p elements in PSL(2, Z), and the fact that PSL(2, 2) has order 6, rather than 3. There is a more complicated formula for the genus of the modular curve X(N) of any level N that involves divisors of N. Genus zero In general a modular function field is a function field of a modular curve (or, occasionally, of some other moduli space that turns out to be an irreducible variety). Genus zero means such a function field has a single transcendental function as generator: for example the j-function generates the function field of X(1) = PSL(2, Z)\H*. The traditional name for such a generator, which is unique up to a Möbius transformation and can be appropriately normalized, is a Hauptmodul (main or principal modular function, plural Hauptmoduln). The spaces X1(n) have genus zero for n = 1, ..., 10 and n = 12. Since each of these curves is defined over Q and has a Q-rational point, it follows that there are infinitely many rational points on each such curve, and hence infinitely many elliptic curves defined over Q with n-torsion for these values of n. The converse statement, that only these values of n can occur, is Mazur's torsion theorem. X0(N) of genus one The modular curves X0(N) are of genus one if and only if N equals one of the 12 values 11, 14, 15, 17, 19, 20, 21, 24, 27, 32, 36 and 49, listed in the following table. As elliptic curves over Q, they have minimal, integral Weierstrass models y^2 + a1xy + a3y = x^3 + a2x^2 + a4x + a6. This means that the coefficients a1, ..., a6 are integers and the absolute value of the discriminant is minimal among all integral Weierstrass models for the same curve. The following table contains the unique reduced, minimal, integral Weierstrass models, which means a1, a3 ∈ {0, 1} and a2 ∈ {−1, 0, 1}.
The last column of this table refers to the home page of the respective elliptic modular curve on The L-functions and modular forms database (LMFDB). Relation with the Monster group Modular curves of genus 0, which are quite rare, turned out to be of major importance in relation with the monstrous moonshine conjectures. First several coefficients of q-expansions of their Hauptmoduln were computed already in the 19th century, but it came as a shock that the same large integers show up as dimensions of representations of the largest sporadic simple group Monster. Another connection is that the modular curve corresponding to the normalizer Γ0(p)+ of Γ0(p) in SL(2, R) has genus zero if and only if p is 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 41, 47, 59 or 71, and these are precisely supersingular primes in moonshine theory, i.e. the prime factors of the order of the monster group. The result about Γ0(p)+ is due to Jean-Pierre Serre, Andrew Ogg and John G. Thompson in the 1970s, and the subsequent observation relating it to the monster group is due to Ogg, who wrote up a paper offering a bottle of Jack Daniel's whiskey to anyone who could explain this fact, which was a starting point for the theory of monstrous moonshine. The relation runs very deep and, as demonstrated by Richard Borcherds, it also involves generalized Kac–Moody algebras. Work in this area underlined the importance of modular functions that are meromorphic and can have poles at the cusps, as opposed to modular forms, that are holomorphic everywhere, including the cusps, and had been the main objects of study for the better part of the 20th century. See also Manin–Drinfeld theorem Moduli stack of elliptic curves Modularity theorem Shimura variety, a generalization of modular curves to higher dimensions References Steven D. Galbraith - Equations For Modular Curves Algebraic curves Modular forms Riemann surfaces
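The genus values for X(5), X(7) and X(11) quoted in the Genus section can be checked numerically. The short sketch below simply evaluates the relation and the closed formula stated there; it is a verification aid, not part of the article.

```python
# Quick numerical check of the genus formula for X(p) quoted in the Genus
# section: for a prime p >= 5, with |G| = (p+1)p(p-1)/2 and angular defect
# D = pi*(1 - 1/2 - 1/3 - 1/p), one has 2 - 2g = chi = -|G|*D/pi,
# equivalently g = 1 + (p**2 - 1)*(p - 6)/24.

from fractions import Fraction

def genus_X(p: int) -> Fraction:
    order = Fraction((p + 1) * p * (p - 1), 2)                 # |PSL(2, p)|
    defect_over_pi = 1 - Fraction(1, 2) - Fraction(1, 3) - Fraction(1, p)
    chi = -order * defect_over_pi                              # Euler characteristic
    return (2 - chi) / 2

for p in (5, 7, 11, 13):
    closed_form = 1 + Fraction((p**2 - 1) * (p - 6), 24)
    print(p, genus_X(p), closed_form)
# Output: genus 0, 3, 26, 50 -- the first three match the values given in the text.
```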
Modular curve
Mathematics
2,195
22,579,952
https://en.wikipedia.org/wiki/Linear%20ion%20trap
The linear ion trap (LIT) is a type of ion trap mass spectrometer. In a LIT, ions are confined radially by a two-dimensional radio frequency (RF) field, and axially by stopping potentials applied to end electrodes. LITs have high injection efficiencies and high ion storage capacities. History One of the first LITs was constructed in 1969, by Dierdre A. Church, who bent linear quadrupoles into closed circle and racetrack geometries and demonstrated storage of 3He+ and H+ ions for several minutes. Earlier, Drees and Paul described a circular quadrupole. However, it was used to produce and confine a plasma, not to store ions. In 1989, Prestage, Dick, and Malecki described that ions could be trapped in the linear quadrupole trap system to enhance ion-molecule reactions, thus it can be used to study spectroscopy of stored ions. How it works The LIT uses a set of quadrupole rods to confine ions radially and a static electrical potential on the end electrodes to confine the ions axially. The LIT can be used as a mass filter or as a trap by creating a potential well for the ions along the axis of the trap. The mass of trapped ions may be determined if the m/z lies between defined parameters. Advantages of the LIT design are high ion storage capacity, high scan rate, and simplicity of construction. Although quadrupole rod alignment is critical, adding a quality control constraint to their production, this constraint is additionally present in the machining requirements of the 3D trap. Selective mode and scanning mode Ions are either injected into or created within the interior of the LIT. They are confined by application of appropriate RF and DC voltages with their final position maintained within the center section of the LIT. The RF voltage is adjusted and multi-frequency resonance ejection waveforms are applied to the trap to eliminate all but the desired ions in preparation for subsequent fragmentation and mass analysis. The voltages applied to the ion trap are adjusted to stabilize the selected ions and to allow for collisional cooling in preparation for excitation. The energy of the selected ions is increased by application of a supplemental resonance excitation voltage applied to all segments of two rods located on the X-axis. This increase of energy causes dissociation of the selected ions due to collisions with damping gas. The product ions formed are retained in the trapping field. Scanning the contents of the trap to produce a mass spectrum is accomplished by linearly increasing the RF voltage applied to all sections of the trap and utilizing a supplemental resonance ejection voltage. These changes sequentially move ions from within the stability diagram to a position where they become unstable in the x-direction and leave the trapping field for detection. Ions are accelerated into two high voltage dynodes where ions produce secondary electrons. This signal is subsequently amplified by two electron multipliers and the analog signals are then integrated together and digitized. Combination with other mass analyzers LITs can be used as stand alone mass analyzers, and they can be combined with other mass analyzers, such as 3D Paul ion traps, TOF mass spectrometers, FTMS, and other kind of mass analyzers. Linear traps and 3D trap 3D ion trap (or Paul trap) mass spectrometers are widely used but have limitations. With a continuous source, such as one utilizing electrospray ionization (ESI), ions generated while the 3D trap is processing other ions are not used, thereby limiting the duty cycle. 
Furthermore, the total number of ions that can be stored in a 3D ion trap is limited by space charge effects. Combining a linear trap with a 3D trap can help overcome these limitations. Recently, Hardman and Makarov have described the use of a linear quadrupole trap to store ions formed by ESI for injection into an orbitrap mass analyzer. Ions passed through an orifice and skimmer, a quadrupole ion guide for ion cooling and then entered the quadrupole storage trap. The quadrupole trap has two rod sets; short rods near the exit were biased so that most ions accumulated in this region. Because the orbitrap requires that ions be injected in very short pulses, kilovolt ion extraction potentials were applied to the exit aperture. Flight times of ions to the orbitrap were mass dependent, but for a given mass, ions were injected in bunches less than 100 nanoseconds wide (fwhm). Linear traps and TOF A TOF mass spectrometer can also have a low-duty cycle when coupled with a continuous ion source. Combining an ion trap with a TOF mass analyzer can improve the duty cycle. Both 3D and linear traps have been combined with TOF mass analyzers. A trap can also add MSn capabilities to the system. Linear trap and FTICR Linear traps can be used to improve the performance of FT-ICR (or FTMS) systems. As with 3D ion traps, the duty cycle can be increased to nearly 100% if ions are accumulated in a linear trap, while the FTMS performs other functions. Unwanted ions that can cause space charge problems in the FTMS can be ejected in the linear trap to improve the resolution, sensitivity, and dynamic range of the system, although the system parameters used to optimize such signal characteristics co-vary with one another. Linear trap and triple quadrupole The combination of triple quadrupole MS with LIT technology in the form of an instrument of configuration QqLIT, using axial ejection, is particularly interesting, because this instrument retains the classical triple quadrupole scan functions such as selected reaction monitoring (SRM), product ion (PI), neutral loss (NL) and precursor ion (PC) while also providing access to sensitive ion trap experiments. For small molecules, quantitative and qualitative analysis can be performed using the same instrument. In addition, for peptide analysis, the enhanced multiply charged (EMC) scan allows an increase in selectivity, while the time-delayed fragmentation (TDF) scan provides additional structural information. In the case of the QqLIT, the uniqueness of the instrument is that the same mass analyzer Q3 can be run in two different modes. This allows very powerful scan combinations when performing information-dependent data acquisition. References Mass spectrometry Particle traps
Linear ion trap
Physics,Chemistry
1,310
76,817,857
https://en.wikipedia.org/wiki/Alalevonadifloxacin
Alalevonadifloxacin (trade name Emrok O) is an antibiotic of the fluoroquinolone class. It is a prodrug of levonadifloxacin with increased oral bioavailability. In India, it is approved for the treatment of infections with Gram-positive bacteria. References Fluoroquinolone antibiotics Prodrugs Carboxylic acids Piperidines Heterocyclic compounds with 3 rings Esters
Alalevonadifloxacin
Chemistry
99
56,414,357
https://en.wikipedia.org/wiki/Phase%20precession
Phase precession is a neurophysiological process in which the time of firing of action potentials by individual neurons occurs progressively earlier in relation to the phase of the local field potential oscillation with each successive cycle. In place cells, a type of neuron found in the hippocampal region of the brain, phase precession is believed to play a major role in the neural coding of information. John O'Keefe, who later shared the 2014 Nobel Prize in Physiology or Medicine for his discovery that place cells help form a "map" of the body's position in space, co-discovered phase precession with Michael Recce in 1993. Place cells Pyramidal cells in the hippocampus called place cells play a significant role in self-location during movement over short distances. As a rat moves along a path, individual place cells fire action potentials at an increased rate at particular positions along the path, termed "place fields". Each place cell's maximum firing rate (with action potentials occurring in rapid bursts) occurs at the position encoded by that cell, and that cell fires only occasionally when the animal is at other locations. Within a relatively small path, the same cells are repeatedly activated as the animal returns to the same position. Although simple rate coding (the coding of information based on whether neurons fire more rapidly or more slowly) resulting from these changes in firing rates may account for some of the neural coding of position, there is also a prominent role for the timing of the action potentials of a single place cell, relative to the firing of nearby cells in the local population. As the larger population of cells fire occasionally when the rat is outside of the cells' individual place fields, the firing patterns are organized to occur synchronously, forming wavelike voltage oscillations. These oscillations are measurable in local field potentials and electroencephalography (EEG). In the CA1 region of the hippocampus, where the place cells are located, these firing patterns give rise to theta waves. Theta oscillations have classically been described in rats, but evidence is emerging that they also occur in humans. In 1993, O'Keefe and Recce discovered a relationship between the theta wave and the firing patterns of individual place cells. Although the occasional action potentials of cells when rats were outside of the place fields occurred in phase with (at the peaks of) the theta waves, the bursts of more rapid spikes elicited when the rats reached the place fields were out of synchrony with the oscillation. As a rat approached the place field, the corresponding place cell would fire slightly in advance of the theta wave peak. As the rat moved closer and closer, each successive action potential occurred earlier and earlier within the wave cycle. At the center of the place field, when the cell would fire at its maximal rate, the firing had been advanced sufficiently to be anti-phase to the theta potential (at the bottom, rather than at the peak, of the theta waveform). Then, as the rat continued to move on past the place field and the cell firing slowed, the action potentials continued to occur progressively earlier relative to the theta wave, until they again became synchronous with the wave, aligned now with one wave peak earlier than before. O'Keefe and Recce termed this advancement relative to the wave phase "phase precession".
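The relationship described above can be illustrated with a toy simulation in which a cell's burst arrives at a progressively earlier theta phase as the animal crosses its place field. The parameters below (theta frequency, running speed, field width, total phase advance of one cycle) are illustrative assumptions, not measured values from the studies cited here.

```python
# Toy illustration of phase precession: one burst per ~8 Hz theta cycle,
# arriving roughly one full cycle earlier by the time the rat leaves the
# place field.  All parameter values are assumptions for the sketch.

import numpy as np

THETA_HZ   = 8.0     # assumed theta frequency (Hz)
SPEED      = 0.25    # assumed running speed (m/s)
FIELD_SIZE = 0.30    # assumed place-field width (m)

burst_times = np.arange(0.0, FIELD_SIZE / SPEED, 1.0 / THETA_HZ)  # one burst per cycle
positions   = SPEED * burst_times
frac        = positions / FIELD_SIZE        # 0 at field entry, ~1 at field exit

# Linear precession: the burst starts at the theta peak (360 deg) on entry and
# advances by about one full cycle across the field.
spike_phase = 360.0 * (1.0 - frac)

for x, ph in zip(positions, spike_phase):
    print(f"position {x:4.2f} m -> burst at theta phase {ph:5.1f} deg")
# The phase decreases monotonically: firing occurs progressively earlier in each
# cycle, passing through anti-phase (180 deg) near the field centre.
```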
Subsequent studies showed that each time a rat entered a completely different area and the place fields would be remapped, place cells would again become phase-locked to the theta rhythm. It is now widely accepted that the anti-phase cell firing that results from phase precession is an important component of information coding about place. Other systems There have been conflicting theories of how neurons in and around the hippocampus give rise to theta waves and consequently give rise to phase precession. As these mechanisms became better understood, the existence of phase precession was increasingly accepted by researchers. This, in turn, gave rise to the question of whether phase precession could be observed in any other regions of the brain, with other kinds of cell circuits, or whether phase precession was a peculiar property of hippocampal tissue. The finding that theta wave phase precession is also a property of grid cells in the entorhinal cortex demonstrated that the phenomenon exists in other parts of the brain that also mediate information about movement. Theta wave phase precession in the hippocampus also plays a role in some brain functions that are unrelated to spatial location. When rats were trained to jump up to the rim of a box, place cells displayed phase precession much as they do during movement along a path, but a subset of the place cells showed phase precession that was related to initiating the jump, independently of spatial location, and not related to the position during the jump. Phase precession in the entorhinal cortex has been hypothesized to arise from an attractor network process, so that two sequential neural representations within a single cycle of the theta oscillation can be temporally linked to each other downstream in the hippocampus, as episodic memories. References Neural coding Neural circuitry Hippocampus (brain) Animal locomotion Electrophysiology
Phase precession
Physics,Biology
1,096
9,585,894
https://en.wikipedia.org/wiki/Kohn%20anomaly
A Kohn anomaly or the Kohn effect is an anomaly in the dispersion relation of a phonon branch in a metal. The anomaly is named for Walter Kohn, who first proposed it in 1959. Description In condensed matter physics, a Kohn anomaly (also called the Kohn effect) is an anomaly in the dispersion relation of a phonon branch in a metal. For a specific wavevector, the frequency (and thus the energy) of the associated phonon is considerably lowered, and there is a discontinuity in its derivative. In extreme cases (that can happen in low-dimensional materials), the energy of this phonon is zero, meaning that a static distortion of the lattice appears. This is one explanation for charge density waves in solids. The wavevectors at which a Kohn anomaly is possible are the nesting vectors of the Fermi surface, that is, vectors that connect many points of the Fermi surface (for a one-dimensional chain of atoms or a spherical Fermi surface this vector would be q = 2kF, twice the Fermi wavevector). The electron-phonon interaction causes a rigid shift of the Fermi sphere and a failure of the Born-Oppenheimer approximation, since the electrons no longer follow the ionic motion adiabatically. In the phonon spectrum of a metal, a Kohn anomaly is a discontinuity in the derivative of the dispersion relation that is produced by the abrupt change in the screening of lattice vibrations by conduction electrons. It can occur at any point in the Brillouin zone, because the nesting vector 2kF is unrelated to the crystal symmetry. In one dimension, it is equivalent to a Peierls instability, and it is similar to the Jahn-Teller effect seen in molecular systems. Kohn anomalies arise together with Friedel oscillations when one considers the Lindhard theory instead of the Thomas–Fermi approximation in order to find an expression for the dielectric function of a homogeneous electron gas. The expression for the real part of the reciprocal space dielectric function obtained following the Lindhard theory includes a logarithmic term that is singular at q = 2kF, where kF is the Fermi wavevector. Although this singularity is quite small in reciprocal space, if one takes the Fourier transform and passes into real space, the Gibbs phenomenon causes strong real-space oscillations in the proximity of the singularity mentioned above. In the context of phonon dispersion relations, these oscillations appear as a vertical tangent in the plot of the phonon frequency ω(q), called the Kohn anomalies. Many different systems exhibit Kohn anomalies, including graphene, bulk metals, and many low-dimensional systems (the reason involves the nesting condition q = 2kF, which depends on the topology of the Fermi surface). However, it is important to emphasize that only materials showing metallic behaviour can exhibit a Kohn anomaly, since the model emerges from a homogeneous electron gas approximation. History The anomaly is named for Walter Kohn, who first proposed it in 1959. See also Zero sound Pomeranchuk instability References Condensed matter physics
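The logarithmic singularity discussed above can be seen directly in the static Lindhard response of a three-dimensional electron gas. The sketch below evaluates the textbook form of that function, F(x) with x = q/(2kF); this form is a standard result assumed here rather than quoted from the article, and the point of the example is only that the derivative of F blows up at x = 1.

```python
# Hedged sketch (textbook formula assumed, not taken from the article): the
# static Lindhard function in 3D, F(x) with x = q / (2 kF).  F is continuous,
# but its derivative diverges logarithmically at x = 1 (q = 2 kF) -- the
# singularity that appears as a kink (Kohn anomaly) in the phonon dispersion.

import numpy as np

def lindhard_F(x):
    x = np.asarray(x, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        log_term = np.log(np.abs((1 + x) / (1 - x)))
        F = 0.5 + (1 - x**2) / (4 * x) * log_term
    return np.where(np.isclose(x, 1.0), 0.5, F)   # the x -> 1 limit is 1/2

x = np.array([0.90, 0.99, 0.999, 1.0, 1.001, 1.01, 1.10])
F = lindhard_F(x)
slope = np.gradient(F, x)          # numerical dF/dx on the non-uniform grid
for xi, fi, si in zip(x, F, slope):
    print(f"x = {xi:6.3f}   F = {fi:6.4f}   dF/dx ~ {si:8.2f}")
# The numerical slope keeps growing in magnitude as x approaches 1 from either side.
```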
Kohn anomaly
Physics,Chemistry,Materials_science,Engineering
636
62,153,058
https://en.wikipedia.org/wiki/Polytopological%20space
In general topology, a polytopological space consists of a set together with a family of topologies on that is linearly ordered by the inclusion relation where is an arbitrary index set. It is usually assumed that the topologies are in non-decreasing order. However some authors prefer the associated closure operators to be in non-decreasing order where if and only if for all . This requires non-increasing topologies. Formal definitions An -topological space is a set together with a monotone map Top where is a partially ordered set and Top is the set of all possible topologies on ordered by inclusion. When the partial order is a linear order then is called a polytopological space. Taking to be the ordinal number an -topological space can be thought of as a set with topologies on it. More generally a multitopological space is a set together with an arbitrary family of topologies on it. History Polytopological spaces were introduced in 2008 by the philosopher Thomas Icard for the purpose of defining a topological model of Japaridze's polymodal logic (GLP). They were later used to generalize variants of Kuratowski's closure-complement problem. For example Taras Banakh et al. proved that under operator composition the closure operators and complement operator on an arbitrary -topological space can together generate at most distinct operators where In 1965 the Finnish logician Jaakko Hintikka found this bound for the case and claimed it "does not appear to obey any very simple law as a function of ". See also Bitopological space References Topology
Polytopological space
Physics,Mathematics
322
20,000,172
https://en.wikipedia.org/wiki/Diazenylium
Diazenylium is the chemical N2H+, an inorganic cation that was one of the first ions to be observed in interstellar clouds. Since then, it has been observed for in several different types of interstellar environments, observations that have several different scientific uses. It gives astronomers information about the fractional ionization of gas clouds, the chemistry that happens within those clouds, and it is often used as a tracer for molecules that are not as easily detected (such as N2). Its 1–0 rotational transition occurs at 93.174 GHz, a region of the spectrum where Earth's atmosphere is transparent and it has a significant optical depth in both cold and warm clouds so it is relatively easy to observe with ground-based observatories. The results of N2H+ observations can be used not only for determining the chemistry of interstellar clouds, but also for mapping the density and velocity profiles of these clouds. Astronomical detections N2H+ was first observed in 1974 by B.E. Turner. He observed a previously unidentified triplet at 93.174 GHz using the NRAO 11 m telescope. Immediately after this initial observation, Green et al. identified the triplet as the 1–0 rotational transition of N2H+. This was done using a combination of ab initio molecular calculations and comparison of similar molecules, such as N2, CO, HCN, HNC, and HCO+, which are all isoelectronic to N2H+. Based on these calculations, the observed rotational transition would be expected to have seven hyperfine components, but only three of these were observed, since the telescope's resolution was insufficient to distinguish the peaks caused by the hyperfine splitting of the inner Nitrogen atom. Just a year later, Thaddeus and Turner observed the same transition in the Orion molecular cloud 2 (OMC-2) using the same telescope, but this time they integrated for 26 hours, which resulted in a resolution that was good enough to distinguish the smaller hyperfine components. Over the past three decades, N2H+ has been observed quite frequently, and the 1–0 rotational band is almost exclusively the one that astronomers look for. In 1995, the hyperfine structure of this septuplet was observed with an absolute precision of ~7 kHz, which was good enough to determine its molecular constants with an order of magnitude better precision than was possible in the laboratory. This observation was done toward L1512 using the 37 m NEROC Haystack Telescope. In the same year, Sage et al. observed the 1–0 transition of N2H+ in seven out of the nine nearby galaxies that they observed with the NRAO 12 m telescope at Kitt Peak. N2H+ was one of the first few molecular ions to be observed in other galaxies, and its observation helped to show that the chemistry in other galaxies is quite similar to that which we see in our own galaxy. N2H+ is most often observed in dense molecular clouds, where it has proven useful as one of the last molecules to freeze out onto dust grains as the density of the cloud increases toward the center. In 2002, Bergin et al. did a spatial survey of dense cores to see just how far toward the center N2H+ could be observed and found that its abundance drops by at least two orders of magnitude when one moves from the outer edge of the core to the center. This showed that even N2H+ is not an ideal tracer for the chemistry of dense pre-stellar cores, and concluded that H2D+ may be the only good molecular probe of the innermost regions of pre-stellar cores. 
Laboratory detections Although N2H+ is most often observed by astronomers because of its ease of detection, there have been some laboratory experiments that have observed it in a more controlled environment. The first laboratory spectrum of N2H+ was of the 1–0 rotational band in the ground vibrational level, the same microwave transition that astronomers had recently discovered in space. Ten years later, Owrutsky et al. performed vibrational spectroscopy of N2H+ by observing the plasma created by a discharge of a mixture nitrogen, hydrogen, and argon gas using a color center laser. During the pulsed discharge, the poles were reversed on alternating pulses, so the ions were pulled back and forth through the discharge cell. This caused the absorption features of the ions, but not the neutral molecules, to be shifted back and forth in frequency space, so a lock-in amplifier could be used to observe the spectra of just the ions in the discharge. The lock-in combined with the velocity modulation gave >99.9% discrimination between ions and neutrals. The feed gas was optimized for N2H+ production, and transitions up to J = 41 were observed for both the fundamental N–H stretching band and the bending hot band. Later, Kabbadj et al. observed even more hot bands associated with the fundamental vibrational band using a difference frequency laser to observe a discharge of a mixture of nitrogen, hydrogen, and helium gases. They used velocity modulation in the same way that Owrutsky et al. had, in order to discriminate ions from neutrals. They combined this with a counterpropagating beam technique to aid in noise subtraction, and this greatly increased their sensitivity. They had enough sensitivity to observe OH+, H2O+, and H3O+ that were formed from the minute O2 and H2O impurities in their helium tank. By fitting all observed bands, the rotational constants for N2H+ were determined to be Be = 1.561928 cm−1 and De = , which are the only constants needed to determine the rotational spectrum of this linear molecule in the ground vibrational state, with the exception of determining hyperfine splitting. Given the selection rule ΔJ = ±1, the calculated rotational energy levels, along with their percent population at 30 kelvins, can be plotted. The frequencies of the peaks predicted by this method differ from those observed in the laboratory by at most 700 kHz. Chemistry N2H+ is found mostly in dense molecular clouds, where its presence is closely related to that of many other nitrogen-containing compounds. It is particularly closely tied to the chemistry of N2, which is more difficult to detect (since it lacks a dipole moment). This is why N2H+ is commonly used to indirectly determine the abundance of N2 in molecular clouds. The rates of the dominant formation and destruction reactions can be determined from known rate constants and fractional abundances (relative to H2) in a typical dense molecular cloud. The calculated rates here were for early time (316,000 years) and a temperature of 20 kelvins, which are typical conditions for a relatively young molecular cloud. 
{| class=wikitable
|+ Production of diazenylium
|-
! Reaction
! Rate constant
! Rate/[H2]2
! Relative rate
|-
| H2 + N2+ → N2H+ + H
| || || 1.0
|-
| H3+ + N2 → N2H+ + H2
| || || 9.1
|}

{| class=wikitable
|+ Destruction of diazenylium
|-
! Reaction
! Rate constant
! Rate/[H2]2
! Relative rate
|-
| N2H+ + O → N2 + OH+
| || || 1.0
|-
| N2H+ + CO → N2 + HCO+
| || || 3.2
|-
| N2H+ + e– → N2 + H
| || || 2.8
|-
| N2H+ + e– → NH + N
| || || 3.7
|}

There are dozens more reactions possible, but these are the only ones fast enough to affect the abundance of N2H+ in dense molecular clouds; diazenylium thus plays a critical role in the chemistry of many nitrogen-containing molecules. Although the actual electron density in so-called "dense clouds" is quite low, the destruction of N2H+ is governed mostly by dissociative recombination.

References

Cations
Diazenylium
Physics,Chemistry
1,691
60,392,089
https://en.wikipedia.org/wiki/Cyclopentadienyl%20magnesium%20bromide
Cyclopentadienyl magnesium bromide is a chemical compound with the molecular formula C5H5MgBr. The molecule consists of a magnesium atom bonded to a bromine atom and a cyclopentadienyl group, a ring of five carbon atoms each bearing one hydrogen atom. The compound is a Grignard reagent, a type of organometallic compound that features a magnesium atom bonded to a halogen atom and to a carbon atom of some organic functional group. The compound is of historic importance as the starting material for the first published synthesis of ferrocene, by Peter Pauson and Thomas J. Kealy in 1951.

Preparation

The compound can be prepared by reacting cyclopentadiene with magnesium and bromoethane in anhydrous benzene.

References

Magnesium compounds
Bromides
Cyclopentadienyl complexes
Cyclopentadienyl magnesium bromide
Chemistry
170
6,028,132
https://en.wikipedia.org/wiki/Open%20Hub
Open Hub, or Black Duck Open Hub (formerly Ohloh), is a website which provides a web services suite and online community platform that aims to index the open-source software development community. It was founded by former Microsoft managers Jason Allen and Scott Collison in 2004, who were later joined by the developer Robin Luckey. The site lists 669,601 open-source projects, 681,345 source control repositories, 3,848,524 contributors and 31,688,426,179 lines of code. In 2017, Black Duck Software (the company running the site) was acquired by Synopsys for $565 million; however, it was spun out as a separate company again in October 2024.

History

On 28 May 2009, Ohloh was acquired by Geeknet, owners of the popular open-source development platform SourceForge. Geeknet then sold Ohloh to the open-source analysis company Black Duck Software on 5 October 2010. Black Duck integrated Ohloh's functionality with their existing products to advance the site into a major resource for FOSS development. On 18 July 2014, Ohloh became Black Duck Open Hub. In late August 2014, the Black Duck Open Hub's Organizations feature moved out of beta and into version 1.0.

Functionality and features

By retrieving data from revision control repositories (such as CVS, SVN, Git, Bazaar, and Mercurial), Black Duck Open Hub provides statistics about the longevity of projects, their licenses (including license conflict information) and software metrics such as source lines of code and commit statistics. The codebase history shows the amount of activity on each project. Software stacks (lists of the software applications used by Black Duck Open Hub's members) and tags are used to calculate the similarity between projects. Global statistics per language measure the popularity of specific programming languages since the early 1990s. Those global statistics across all projects in Black Duck Open Hub have also been used to identify the projects with the most extensive continuous revision control histories. Contributor statistics are also available, measuring open-source developers' experience as observable in code committed to revision control repositories. Social network features (kudos) have been introduced to allow users to rank open-source contributors. A KudoRank for each user and open-source contributor, on a scale of 1 to 10, is automatically extracted from all kudos in the system. The idea of measuring open-source developers' skills and productivity on the basis of commit statistics or mutual rating has received mixed reactions in technology blogs. Contributor profiles may also contain a contributor-supplied email address, and avatars loaded from Gravatar using that email address.

On 22 August 2007, a public beta of a web-service API was announced, exposing Black Duck Open Hub's data and reports to promote the development of third-party applications. On 18 January 2013, the team announced a new metric, the Project Activity Indicator (PAI). The PAI combines the number of contributors and the number of commits in an algorithm that weighs more recent activity more heavily than past activity. Activity is normalized so that all projects can be considered and weighed equally against one another; the activity assessment is scaled relative to the number of project contributors and commits.
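The exact PAI formula is not given here, but the idea of weighting recent commits more heavily than old ones can be sketched with a simple exponential decay. In the Python sketch below, the decay model, the half-life, and the contributor scaling are all assumptions chosen for illustration; they are not Black Duck's published algorithm.

```python
import math

# Hypothetical sketch of a recency-weighted activity score in the spirit
# of the Project Activity Indicator (PAI).  The exponential-decay model,
# the half-life, and the contributor scaling are illustrative assumptions.

HALF_LIFE_DAYS = 180.0  # assumed: a commit loses half its weight after ~6 months

def activity_score(commit_ages_days, num_contributors):
    """Recency-weighted commit count, scaled by team size."""
    decay = math.log(2) / HALF_LIFE_DAYS
    weighted_commits = sum(math.exp(-decay * age) for age in commit_ages_days)
    # Damped contributor scaling so big and small teams share one scale.
    return weighted_commits * math.log1p(num_contributors)

# Two projects with the same commit count: one active recently, one stale.
recent = activity_score([1, 3, 10, 30, 45], num_contributors=4)
stale = activity_score([400, 500, 600, 700, 800], num_contributors=4)
print(f"recently active project: {recent:.2f}")
print(f"stale project:           {stale:.2f}")  # far lower, as recency dominates
```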
On 14 January 2014, the team announced a new score, the Project Hotness Score. The PAI shows long-term activity and growth on FOSS projects, but its requirement that there be at least a year of data means that new projects cannot be ranked. The Project Hotness Score instead looks at activity over the past few weeks and evaluates daily activity to identify those projects. By design, the Project Hotness Score is highly volatile. On 6 April 2016, the team announced Hub 3.0, which streamlined continuous integration and DevOps processes through policy management and rapid scanning capabilities.

Code search

In 2012, Black Duck Open Hub launched Open Hub Code Search, a free code search engine based on its predecessor Koders. It could search over 21 billion lines of open-source code and filter by language, project or syntax, but was discontinued in 2016.

See also

Google Code Search
Koders
Krugle
Protecode

References

External links

Code search engines
Free software websites
Internet properties established in 2006
American review websites
Software metrics
Open Hub
Mathematics,Technology,Engineering
971
17,994,688
https://en.wikipedia.org/wiki/HD%20221246
HD 221246, or NGC 7686 1, is a star in the open cluster NGC 7686, in the northern constellation of Andromeda. With an apparent visual magnitude of 6.17, it can be viewed with the naked eye only under very favourable conditions. It has a spectral classification of K3III, meaning it is an evolved orange giant star. Parallax measurements place the star about 1,000 light years away from the Solar System.

References

External links

Image

NGC 7686 1
Andromeda (constellation)
221246
NGC 7686
K-type giants
8925
115996
Durchmusterung objects
HD 221246
Astronomy
131
1,100,001
https://en.wikipedia.org/wiki/Einselection
In quantum mechanics, einselection, short for "environment-induced superselection", is a name coined by Wojciech H. Zurek for a process which is claimed to explain the appearance of wavefunction collapse and the emergence of classical descriptions of reality from quantum descriptions. In this approach, classicality is described as an emergent property induced in open quantum systems by their environments. Due to the interaction with the environment, the vast majority of states in the Hilbert space of an open quantum system become highly unstable through entangling interaction with the environment, which in effect monitors selected observables of the system. After a decoherence time, which for macroscopic objects is typically many orders of magnitude shorter than any other dynamical timescale, a generic quantum state decays into an uncertain state which can be expressed as a mixture of simple pointer states. In this way the environment induces effective superselection rules. Thus, einselection precludes the stable existence of pure superpositions of pointer states. These "pointer states" are stable despite environmental interaction. The einselected states lack coherence, and therefore do not exhibit the quantum behaviours of entanglement and superposition.

Advocates of this approach argue that since only quasi-local, essentially classical states survive the decoherence process, einselection can in many ways explain the emergence of a (seemingly) classical reality in a fundamentally quantum universe (at least to local observers). However, the basic program has been criticized as relying on a circular argument (e.g. by Ruth Kastner), so the question of whether the einselection account can really explain the phenomenon of wave function collapse remains unsettled.

Definition

Zurek has defined einselection as follows: "Decoherence leads to einselection when the states of the environment corresponding to different pointer states become orthogonal: ⟨ε_i|ε_j⟩ = δ_ij".

Details

Einselected pointer states are distinguished by their ability to persist in spite of the environmental monitoring, and therefore are the ones in which open quantum systems are observed. Understanding the nature of these states and the process of their dynamical selection is of fundamental importance. This process was first studied in a measurement situation: when the system is an apparatus whose intrinsic dynamics can be neglected, pointer states turn out to be eigenstates of the interaction Hamiltonian between the apparatus and its environment. In more general situations, when the system's dynamics is relevant, einselection is more complicated: pointer states result from the interplay between self-evolution and environmental monitoring.

To study einselection, an operational definition of pointer states has been introduced. This is the "predictability sieve" criterion, based on an intuitive idea: pointer states can be defined as the ones which become minimally entangled with the environment in the course of their evolution. The predictability sieve criterion is a way to quantify this idea by using the following algorithmic procedure: for every initial pure state |ψ⟩, one measures the entanglement generated dynamically between the system and the environment by computing the entropy H(t) = −Tr[ρ(t) log ρ(t)], or some other measure of predictability, from the reduced density matrix ρ(t) of the system (which is initially the pure state |ψ⟩⟨ψ|). The entropy is a function of time and a functional of the initial state |ψ⟩. Pointer states are obtained by minimizing the entropy over |ψ⟩ and demanding that the answer be robust when varying the time t.
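The sieve can be illustrated numerically with a toy model: a system qubit coupled to a single-qubit "environment" through a measurement-like interaction. In the sketch below, the Hamiltonian, coupling strength, and sample times are illustrative assumptions rather than Zurek's original calculation; the point is only that a pointer-like state stays nearly pure while a superposition of pointer states rapidly gains entropy.

```python
import numpy as np

# Toy two-qubit model of the predictability sieve: a system qubit coupled
# to a one-qubit "environment" via a measurement-like interaction.
# The Hamiltonian and time grid are illustrative assumptions.

sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
H = np.kron(sz, sx)          # interaction Hamiltonian sigma_z (x) sigma_x

def entropy_of_system(psi_sys, t):
    """Von Neumann entropy (nats) of the system's reduced state at time t."""
    env0 = np.array([1.0, 0.0])                  # environment starts in |0>
    psi = np.kron(psi_sys, env0)
    # Unitary evolution U = exp(-i H t) via eigendecomposition.
    evals, evecs = np.linalg.eigh(H)
    U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T
    psi_t = U @ psi
    rho = np.outer(psi_t, psi_t.conj()).reshape(2, 2, 2, 2)
    rho_sys = np.trace(rho, axis1=1, axis2=3)    # partial trace over environment
    lam = np.linalg.eigvalsh(rho_sys)
    lam = lam[lam > 1e-12]
    return float(-(lam * np.log(lam)).sum())

up = np.array([1.0, 0.0])                 # sigma_z eigenstate: pointer-like
plus = np.array([1.0, 1.0]) / np.sqrt(2)  # superposition of pointer states

for t in (0.5, 1.0):
    print(f"t={t}: S(|0>)={entropy_of_system(up, t):.3f}, "
          f"S(|+>)={entropy_of_system(plus, t):.3f}")
# The sigma_z eigenstate stays disentangled (S ~ 0), while the superposition
# entangles with the environment -- it fails the predictability sieve.
```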
The nature of pointer states has been investigated using the predictability sieve criterion only for a limited number of examples. Apart from the already mentioned case of the measurement situation (where pointer states are simply eigenstates of the interaction Hamiltonian), the most notable example is that of a quantum Brownian particle coupled through its position to a bath of independent harmonic oscillators. In such a case pointer states are localized in phase space, even though the interaction Hamiltonian involves the position of the particle. Pointer states are the result of the interplay between self-evolution and interaction with the environment, and turn out to be coherent states.

There is also a quantum limit of decoherence: when the spacing between energy levels of the system is large compared to the frequencies present in the environment, energy eigenstates are einselected nearly independently of the nature of the system–environment coupling.

Collisional decoherence

There has been significant work on correctly identifying the pointer states in the case of a massive particle decohered by collisions with a fluid environment, often known as collisional decoherence. In particular, Busse and Hornberger have identified certain solitonic wavepackets as being unusually stable in the presence of such decoherence.

See also

Mott problem

References

Quantum mechanics
Emergence
Einselection
Physics
949
3,373,650
https://en.wikipedia.org/wiki/Earth%20systems%20engineering%20and%20management
Earth systems engineering and management (ESEM) is a discipline used to analyze, design, engineer and manage complex environmental systems. It entails a wide range of subject areas including anthropology, engineering, environmental science, ethics and philosophy. At its core, ESEM looks to "rationally design and manage coupled human–natural systems in a highly integrated and ethical fashion". ESEM is a newly emerging area of study that has taken root at the University of Virginia, Cornell and other universities throughout the United States, and at the Centre for Earth Systems Engineering Research (CESER) at Newcastle University in the United Kingdom. Founders of the discipline are Braden Allenby and Michael Gorman.

Introduction to ESEM

For centuries, humans have utilized the earth and its natural resources to advance civilization and develop technology. "A principal result of the Industrial Revolutions and associated changes in human demographics, technology systems, cultures, and economic systems has been the evolution of an Earth in which the dynamics of major natural systems are increasingly dominated by human activity". In many ways, ESEM views the earth as a human artifact. "In order to maintain continued stability of both natural and human systems, we need to develop the ability to rationally design and manage coupled human-natural systems in a highly integrated and ethical fashion - an Earth Systems Engineering and Management (ESEM) capability".

ESEM has been developed by a few individuals, one of particular note being Braden Allenby. Allenby holds that the foundation upon which ESEM is built is the notion that "the Earth, as it now exists, is a product of human design". In fact there are no longer any natural systems left in the world; "there are no places left on Earth that don't fall under humanity's shadow". "So the question is not, as some might wish, whether we should begin ESEM, because we have been doing it for a long time, albeit unintentionally. The issue is whether we will assume the ethical responsibility to do ESEM rationally and responsibly". Unlike traditional engineering and management processes, "which assume a high degree of knowledge and certainty about the systems behavior and a defined endpoint to the process", ESEM "will be in constant dialog with [the systems], as they – and we and our cultures – change and coevolve together into the future". ESEM is a new concept, but there are a number of fields, "such as industrial ecology, adaptive management, and systems engineering", that can be relied on to enable rapid progress in developing ESEM as a discipline.

The premise of ESEM is that science and technology can provide successful and lasting solutions to human-created problems such as environmental pollution and climate change. This assumption has recently been challenged in Techno-Fix: Why Technology Won't Save Us or the Environment.

Topics

Adaptive management

Adaptive management is a key aspect of ESEM. It is a way of approaching environmental management that assumes a great deal of uncertainty in environmental systems and holds that there is never a final solution to an earth systems problem. Therefore, once action has been taken, the Earth Systems Engineer will need to be in constant dialogue with the system, watching for changes and for how the system evolves. This way of monitoring and managing ecosystems accepts nature's inherent uncertainty and embraces it by never settling on one certain cure to a problem.
Earth systems engineering

Earth systems engineering is essentially the use of systems analysis methods in the examination of environmental problems. When analyzing complex environmental systems there are numerous data sets, stakeholders and variables, so it is appropriate to approach such problems with a systems analysis method. Essentially there are "six major phases of a properly-conducted system study", as follows:

Determine the goals of the system
Establish criteria for ranking alternative candidates
Develop alternative solutions
Rank the alternative candidates
Iterate
Act

Part of the systems analysis process includes determining the goals of the system. The key components of goal development are a Descriptive Scenario, a Normative Scenario and a Transitive Scenario. Essentially, the Descriptive Scenario "describe[s] the situation as it is [and] tell[s] how it got to be that way" (Gibson, 1991). Another important part of the Descriptive Scenario is how it "point[s] out the good features and the unacceptable elements of the status quo". Next, the Normative Scenario shows the final outcome, or the way the system should operate under ideal conditions once action has been taken. For the earth systems approach, the Normative Scenario involves the most complicated analysis: it deals with stakeholders, creating a common trading zone, a location for the free exchange of ideas, to come up with a solution for what state a system may be restored to, or just how a system should be modified. Finally, the Transitive Scenario comes up with the actual process of changing a system from the Descriptive state to the Normative state. Often there is not one final solution; as noted under adaptive management, an iterative process typically ensues as variables and inputs change and the system coevolves with the analysis.

Environmental science

When examining complex ecosystems, there is an inherent need for the earth systems engineer to have a strong understanding of how natural processes function. Training in environmental science is crucial to fully understand the possible unintended and undesired effects of a proposed earth systems design. Fundamental topics such as the carbon cycle and the water cycle are pivotal processes that need to be understood.

Ethics and sustainability

At the heart of ESEM is the social, ethical and moral responsibility of the earth systems engineer, to stakeholders and to the natural system being engineered, to come up with an objective Transitive and Normative Scenario. "ESEM is the cultural and ethical context itself". The earth systems engineer will be expected to explore the ethical implications of proposed solutions. "The perspective of environmental sustainability requires that we ask ourselves how each interaction with the natural environment will affect, and be judged by, our children in the future". "There is an increasing awareness that the process of development, left to itself, can cause irreversible damage to the environment, and that the resultant net addition to wealth and human welfare may very well be negative, if not catastrophic". With this notion in mind, there is now a new goal of sustainable, environment-friendly development. Sustainable development is an important part of developing appropriate ESEM solutions to complex environmental problems.
Industrial ecology

Industrial ecology is the notion that major manufacturing and industrial processes need to shift from open-loop systems to closed-loop systems, essentially the recycling of waste to make new products. This reduces refuse and increases the effectiveness of resources. ESEM looks to minimize the impact of industrial processes on the environment, so the notion of recycling industrial products is important to ESEM.

Case study: Florida Everglades

The Florida Everglades system is a prime example of a complex ecological system that underwent an ESEM analysis.

Background

The Florida Everglades is located in southern Florida. The ecosystem is essentially a subtropical freshwater marsh composed of a variety of flora and fauna; of particular note are the sawgrass and ridge-and-slough formations that make the Everglades unique. Over the course of the past century, mankind has had a rising presence in this region. Currently, all of the eastern shore of Florida is developed, and the population has increased to over 6 million residents. This increased presence has resulted in the channeling and redirecting of water from its traditional path through the Everglades into the Gulf of Mexico and the Atlantic Ocean, with a variety of deleterious effects upon the Everglades.

Descriptive scenario

By 1993, the Everglades had been affected by numerous human developments. Water flow and quality had suffered from the construction of canals and levees, from the series of elevated highways running through the Everglades, and from the expansive Everglades Agricultural Area, which had contaminated the Everglades with high amounts of nitrogen. The result of this reduced flow of water was dramatic: a 90–95% reduction in wading bird populations, declining fish populations, and salt-water intrusion into the ecosystem. If the Florida Everglades were to remain a US landmark, action needed to be taken.

Normative scenario

It was in 1993 that the Army Corps of Engineers analyzed the system. They determined that an ideal situation would be to "get the water right": restoring a better flow through the Everglades and reducing the number of canals and levees sending water to tide.

Transitive scenario

From the development of the Normative Scenario, the Army Corps of Engineers produced CERP, the Comprehensive Everglades Restoration Plan. In the plan they created a timeline of projects to be completed, estimated the cost, and described the ultimate results of improving the ecosystem by having native flora and fauna prosper. They also outlined the human benefits of the project: not only will the solution be sustainable, as future generations will be able to enjoy the Everglades, but the correction of the water flow, together with the creation of storage facilities, will reduce the occurrence of droughts and water shortages in southern Florida.

See also

Design review
Environmental management
Industrial ecology
Sustainability
Systems engineering

Publications

Allenby, B. R. (2000). Earth systems engineering: the world as human artifact. The Bridge, 30(1), 5–13.
Allenby, B. R. (2005). Reconstructing earth: Technology and environment in the age of humans. Washington, DC: Island Press. From https://www.loc.gov/catdir/toc/ecip059/2005006241.html
Allenby, B. R. (2000, Winter). Earth systems engineering and management. IEEE Technology and Society Magazine, 0278-0079 (Winter), 10–24.
Davis, Steven, et al. (1997). Everglades: The Ecosystem and Its Restoration. Boca Raton: St Lucie Press.
"Everglades." Comprehensive Everglades Restoration Plan. 10 April 2004. https://web.archive.org/web/20051214102114/http://www.evergladesplan.org/
Gibson, J. E. (1991). How to do a systems analysis and systems analyst decalog. In W. T. Scherer (Ed.), (Fall 2003 ed.) (pp. 29–238). Department of Systems and Information Engineering: University of Virginia. Retrieved October 29, 2005.
Gorman, Michael. (2004). Syllabus Spring Semester 2004. Retrieved October 29, 2005, from https://web.archive.org/web/20110716231016/http://repo-nt.tcc.virginia.edu/classes/ESEM/syllabus.html
Hall, J. W. and O'Connell, P. E. (2007). Earth Systems Engineering: turning vision into action. Civil Engineering, 160(3): 114–122.
Newton, L. H. (2003). Ethics and sustainability: Sustainable development and the moral life. Upper Saddle River, N.J.: Prentice Hall.

References

External links

Class Taught Spring 2004 at The University of Virginia on ESEM
UVA article on Spring 2004 course
Class Taught January 2007 at the University of Virginia on ESEM
Allenby Article on ESEM
Centre for Earth Systems Engineering Research @ Newcastle University

Environmental engineering
Industrial ecology
Systems ecology
Systems engineering
Engineering and management
Earth systems engineering and management
Chemistry,Engineering,Environmental_science
2,356
10,256,327
https://en.wikipedia.org/wiki/Natural%20circulation%20boiler
In a natural circulation boiler, circulation is achieved by the difference in water density that arises when the water in the boiler is heated: the water circulates by means of the convection currents set up as it warms. Most boilers rely on this natural circulation of water, the fundamental principle of which is the thermosiphon effect.

See also

Controlled circulation
Forced circulation boiler
Once through steam generator

Boilers
Natural circulation boiler
Chemistry,Engineering
89
59,882,447
https://en.wikipedia.org/wiki/List%20of%20protein%20tandem%20repeat%20annotation%20software
Computational methods use different properties of protein sequences and structures to find, characterize and annotate protein tandem repeats.

Sequence-based annotation methods

Structure-based annotation methods

References
List of protein tandem repeat annotation software
Biology
40
78,711,654
https://en.wikipedia.org/wiki/Nomad%20%28eSIM%20company%29
Nomad is a company that sells eSIMs (embedded SIMs), launched in 2020.

History

Nomad was launched in 2020 and is a business line of LotusFlare, Inc., a telecommunications software development company founded by former Facebook and Microsoft engineers. Nomad is a connectivity marketplace that offers mobile data plans worldwide, supplied by various communications service providers. International travelers with eSIM-capable smartphones can buy data plans from local providers, reducing roaming costs. eSIMs can be purchased through the website or the smartphone app. Plans include global eSIMs covering most countries and regional plans for specific areas such as Europe, Asia-Pacific, and Oceania; these plans cater to both short-term trips and extended stays. It is a data-only service, meaning it does not support traditional cellular voice calls or SMS messaging, but the speeds are fast enough to handle voice and video chat via apps such as FaceTime or WhatsApp. In 2024, Nomad also launched Nomad eSIM Enterprise for business travelers. Nomad has been featured in articles by The New York Times, Wall Street Journal, and CNBC.

Conflict in Gaza

Nomad eSIMs have been used to connect civilians during communication blackouts in the Gaza war zone.

See also

eSIM
Airalo

References

External links

Application software
Mobile applications
Technology companies
Mobile technology companies
Wireless networking
Travel technology
2020 establishments
2020 establishments in the United States
ESIM companies
Nomad (eSIM company)
Technology,Engineering
283
71,275,203
https://en.wikipedia.org/wiki/Webb%27s%20First%20Deep%20Field
Webb's First Deep Field is the first operational image taken by the James Webb Space Telescope (JWST). The deep-field photograph, which covers a tiny area of sky visible from the Southern Hemisphere, is centered on SMACS 0723, a galaxy cluster in the constellation of Volans. Thousands of galaxies are visible in the image, some as old as 13 billion years. It is the highest-resolution image of the early universe ever taken. Captured by the telescope's Near-Infrared Camera (NIRCam), the image was revealed to the public by NASA on 11 July 2022.

Background

The James Webb Space Telescope is a space telescope operated by NASA and designed primarily to conduct infrared astronomy. Launched in December 2021, the spacecraft has been in a halo orbit around the second Sun–Earth Lagrange point (L2), about 1.5 million kilometres from Earth, since January 2022. At L2, the gravitational pull of the Sun combines with the gravitational pull of the Earth to produce an orbital period that matches Earth's, so the Earth and Sun remain co-aligned as seen from that point while the Earth and the spacecraft orbit the Sun together. Webb's First Deep Field was taken by the telescope's Near-Infrared Camera (NIRCam) and is a composite produced from images at different wavelengths, totalling 12.5 hours of exposure time. SMACS 0723 is a galaxy cluster visible from Earth's Southern Hemisphere, and has often been examined by Hubble and other telescopes in search of the deep past.

Scientific results

The image shows the galaxy cluster SMACS 0723 as it appeared 4.6 billion years ago, covering an area of sky with an angular size approximately equal to that of a grain of sand held at arm's length. Many of the objects in the image have undergone notable redshift due to the expansion of space over the extreme distance traveled by the light radiating from them. The redshifts of nearly 200 of these objects have been measured to date, with the highest redshift measured at 8.498. The combined mass of the galaxy cluster acts as a gravitational lens, magnifying and distorting the images of much more distant galaxies behind it. Webb's NIRCam brought the distant galaxies into sharp focus, revealing tiny, faint structures that had never been seen before, including star clusters and diffuse features.

Diffraction spikes in the photo

The six bright and two fainter spikes around the point sources of light in the photo are an artifact created by the physical limitations of the telescope. The six bright spikes are a result of diffraction from the mirror's edges. The mirror is composed of 18 individual units, each having the shape of a regular hexagon, and the hexagonal rims of these units give rise to the six spikes. Telescopes with circular mirrors or lenses do not produce such spikes; in lieu of spikes, diffraction from a circular rim creates a pattern of concentric rings called an Airy disc. The two additional spikes are a result of diffraction from the struts holding the telescope's secondary mirror in front of the main mirror. Diffraction from the three struts creates six spikes, but four of these are designed to co-align with the spikes created by the diffraction from the mirror edges. This leaves the two faint horizontal spikes visible in the photo.
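The co-alignment can be checked with simple angle bookkeeping: a straight edge or strut produces a pair of spikes along the line perpendicular to it, and the hexagonal segment edges come in three orientations. In the sketch below, the absolute edge and strut orientations are assumed values chosen to reproduce the six-bright-plus-two-faint pattern described above; they are not taken from JWST engineering drawings.

```python
# Spike lines are perpendicular to the straight feature that causes them,
# and a line of spikes is defined modulo 180 degrees.  The edge and strut
# orientations below are assumed for illustration, chosen so the result
# matches the six-bright-plus-two-faint pattern described above.

def spike_line(feature_angle_deg):
    """Orientation (mod 180) of the spike pair caused by an edge or strut."""
    return (feature_angle_deg + 90.0) % 180.0

# Hexagonal segment edges occur in three orientations, 60 degrees apart
# (flat-top hexagons assumed: one edge family horizontal at 0 degrees).
mirror_edges = [0.0, 60.0, 120.0]
# Assumed tripod: one vertical strut plus two at 30 degrees either side.
struts = [90.0, 60.0, 120.0]

mirror_spikes = {spike_line(a) for a in mirror_edges}
print("mirror spike lines (deg):", sorted(mirror_spikes))  # 3 lines = 6 spikes

for s in struts:
    line = spike_line(s)
    status = ("co-aligned with a mirror spike" if line in mirror_spikes
              else "extra faint pair")
    print(f"strut at {s:5.1f} deg -> spike line {line:5.1f} deg: {status}")
# Two struts reproduce existing spike lines (the four co-aligned spikes),
# while the vertical strut contributes the lone faint horizontal pair.
```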
Significance

Deepest image of the Universe

On 11 July 2022, JWST delivered the deepest sharp infrared image of the universe to date. Webb's First Deep Field is the first full false-color image from the JWST, and the highest-resolution infrared view of the universe yet captured. The image reveals thousands of galaxies in a tiny sliver of the universe, with Webb's sharp near-infrared view bringing out faint structures in extremely distant galaxies, offering the most detailed view of the early universe to date. Thousands of galaxies, including the faintest objects ever observed in the infrared, have appeared in Webb's view for the first time. The image was first revealed to the public during an event on 11 July 2022 by U.S. President Joe Biden.

Comparison with the Hubble Space Telescope

Images of the same galaxy cluster taken by the Hubble Space Telescope and by Webb have been compared to illustrate the gain in depth and sharpness.

See also

List of deep fields

References

James Webb Space Telescope
Physical cosmology
Sky regions
Astronomy image articles
2022 in spaceflight
2020s photographs
Color photographs
2022 works
Webb's First Deep Field
Physics,Astronomy
938
58,503,642
https://en.wikipedia.org/wiki/16-Methylene-17%CE%B1-hydroxyprogesterone%20acetate
16-Methylene-17α-hydroxyprogesterone acetate is a progestin of the 17α-hydroxyprogesterone group which was never marketed. Given orally, it shows about 2.5-fold the progestogenic activity of parenteral progesterone in animal bioassays. It is a parent compound of the following clinically used progestins:

Chlormethenmadinone acetate (6-chloro-16-methylene-17α-hydroxy-Δ6-progesterone acetate)
Melengestrol acetate (6-methyl-16-methylene-17α-hydroxy-Δ6-progesterone acetate)
Methenmadinone acetate (16-methylene-17α-hydroxy-Δ6-progesterone acetate)
Segesterone acetate (16-methylene-17α-hydroxy-19-norprogesterone acetate)

References

Abandoned drugs
Acetate esters
Diketones
Pregnanes
Progestogens
Vinylidene compounds
16-Methylene-17α-hydroxyprogesterone acetate
Chemistry
234
709,092
https://en.wikipedia.org/wiki/Floral%20symmetry
Floral symmetry describes whether, and how, a flower, in particular its perianth, can be divided into two or more identical or mirror-image parts. Uncommonly, flowers may have no axis of symmetry at all, typically because their parts are spirally arranged.

Actinomorphic

Most flowers are actinomorphic ("star shaped", "radial"), meaning they can be divided into three or more identical sectors which are related to each other by rotation about the center of the flower. Typically, each sector might contain one tepal, or one petal and one sepal, and so on. It may or may not be possible to divide the flower into symmetrical halves by the same number of longitudinal planes passing through the axis; oleander is an example of a flower without such mirror planes. Actinomorphic flowers are also called radially symmetrical or regular flowers. Other examples of actinomorphic flowers are the lily (Lilium, Liliaceae) and the buttercup (Ranunculus, Ranunculaceae).

Zygomorphic

Zygomorphic ("yoke shaped", "bilateral" – from the Greek ζυγόν, zygon, yoke, and μορφή, morphe, shape) flowers can be divided by only a single plane into two mirror-image halves, much like a yoke or a person's face. Examples are orchids and the flowers of most members of the Lamiales (e.g., Scrophulariaceae and Gesneriaceae). Some authors prefer the term monosymmetry or bilateral symmetry. The asymmetry allows pollen to be deposited in specific locations on pollinating insects, and this specificity can result in the evolution of new species. Globally and within individual networks, zygomorphic flowers are a minority. Plants with zygomorphic flowers have a smaller number of visitor species compared to those with actinomorphic flowers. Sub-networks of plants with zygomorphic flowers share greater connectance, greater asymmetry, and lower coextinction robustness for both the plants and the visitor species. Plant taxa with zygomorphic flowers can therefore be at greater risk of extinction due to pollinator decline.

Asymmetry

A few plant species have flowers lacking any symmetry, and therefore having a "handedness". Examples: Valeriana officinalis and Canna indica.

Differences

Actinomorphic flowers are a basal angiosperm character; zygomorphic flowers are a derived character that has evolved many times. Some familiar and seemingly actinomorphic so-called flowers, such as those of daisies and dandelions (Asteraceae), and most species of Protea, are actually clusters of tiny (not necessarily actinomorphic) flowers arranged into a roughly radially symmetric inflorescence of the form known as a head, capitulum, or pseudanthium.

Peloria

Peloria, or a peloric flower, is the aberration in which a plant that normally produces zygomorphic flowers produces actinomorphic flowers instead. This aberration can be developmental, or it can have a genetic basis: the CYCLOIDEA gene controls floral symmetry. Peloric Antirrhinum plants have been produced by knocking out this gene. Many modern cultivars of Sinningia speciosa ("gloxinia") have been bred to have peloric flowers, as they are larger and showier than the normally zygomorphic flowers of this species. Charles Darwin explored peloria in Antirrhinum (snapdragon) while researching the inheritance of floral characteristics for his The Variation of Animals and Plants Under Domestication. Later research, using Digitalis purpurea, showed that his results were largely in line with Mendelian theory.

See also

Patterns in nature
Phyllotaxis
Symmetry in biology
Whorl (botany)

References

Bibliography

Plant morphology
Floral symmetry
Biology
804
70,276,963
https://en.wikipedia.org/wiki/Charles%20Kurland
Charles Gabriel Kurland (born 14 January 1936) is an American-born Swedish biochemist. Kurland earned a doctorate in 1961 at Harvard University, advised by James D. Watson. He accepted a postdoctoral research position at the Microbiology Institute of the University of Copenhagen, then joined the Uppsala University faculty in 1971. He retired from Uppsala in 2001 and was granted emeritus status. He was later affiliated with Lund University.

Research

Kurland's doctoral work dealt with the structure of RNA, and continued with the discovery of messenger RNA (mRNA), work that also involved François Gros, Walter Gilbert and James Watson. This was published simultaneously with the report by Sydney Brenner, François Jacob and Matthew Meselson of the same discovery. It was followed by numerous papers concerned with ribosomal proteins. In the later part of his career Kurland has been interested in the origins of mitochondria and the tree of life.

Academy memberships

Kurland was elected to the Royal Swedish Academy of Sciences in 1988 as a foreign member, and was reclassified as a Swedish member in 2002. The Estonian Academy of Sciences recognized his achievements in biochemistry and awarded Kurland an equivalent honor in 1991.

References

1936 births
Living people
21st-century Swedish chemists
20th-century Swedish biologists
Members of the Royal Swedish Academy of Sciences
Members of the Estonian Academy of Sciences
Academic staff of Uppsala University
Harvard University alumni
Academic staff of Lund University
American emigrants to Sweden
Swedish biochemists
20th-century American biochemists
21st-century American biochemists
Members of the Royal Society of Sciences in Uppsala
20th-century Swedish chemists
Charles Kurland
Chemistry
323
60,665,128
https://en.wikipedia.org/wiki/Roland%20SP-808
The Roland SP-808 GrooveSampler and SP-808EX E-Mix Studio are discontinued workstations which function as digital samplers, synthesizers, and music sequencers. The samplers are part of the long line of Roland Corporation and Boss Corporation "Groove Gear", which includes the more popular and successful Boss SP-303 and Roland SP-404.

Background

An early installment in the SP lineage, the SP-808 GrooveSampler was originally released in 1998. In 2000, the sampler was updated, redesigned, and released as the SP-808EX, with the additional name "e-Mix Studio". Despite the upgrade, both versions of the SP-808 have some features that the succeeding SP installments lack, and lack others that they include.

Features

GrooveSampler

The original Roland SP-808 GrooveSampler can play up to four stereo samples simultaneously, at sample rates of 44.1 and 32 kHz. The maximum sample time allowed is 25 minutes of stereo at 44.1 kHz. A predecessor to the more popular SPs, the sampler itself can hold over 1,000 samples, while 100 MB Zip disks can store up to 1,024 samples, roughly amounting to 64 minutes. Unlike some of the succeeding SP installments, the sampler has no USB or CompactFlash card option; audio samples can only be stored, read, and transferred directly from the Zip drive rather than internal RAM. In an effort to maximize storage space on Zip disks, Roland decided against the use of the AIFF and WAV audio formats. A D-Beam controller is also included.

E-Mix Studio

An upgrade from the GrooveSampler, the SP-808EX E-Mix Studio includes a virtual monophonic synthesizer for use with the step sequencer and D-Beam controller. Vocal effects, guitar multi-effects, a 10-band vocoder, Voice Transformer, Mic Simulator, and a number of DJ-oriented groove effects were added as well. A larger 250 MB Zip drive replaces the original 100 MB Zip drive, and sampling and recording time was extended to 61 stereo minutes. Expansion options include the OP-1 interface (6 analog outs, 2 digital I/Os, SCSI) and the OP-2 interface (XLR I/Os, digital I/O, SCSI). For storing and transporting audio, the method is the same as the GrooveSampler's.

Notable users

Despite receiving little popularity in comparison to the later SP-303 and SP-404 installments, Slipknot member 133 is known to have utilized the sampler for a number of years. DJ and music producer Rekha Malhotra is known for utilizing the SP-808 as well.

References

Samplers (musical instrument)
D2
D-Beam
Grooveboxes
Music sequencers
Sound modules
Music workstations
Hip-hop production
Japanese inventions
Roland SP-808
Engineering
605
426,467
https://en.wikipedia.org/wiki/Federation%20Square
Federation Square (marketed and colloquially known as Fed Square) is a venue for arts, culture and public events on the edge of the Melbourne central business district. It covers an area at the intersection of Flinders and Swanston Streets, built above busy railway lines and across the road from Flinders Street station. It incorporates major cultural institutions such as the Ian Potter Centre, the Australian Centre for the Moving Image (ACMI) and the Koorie Heritage Trust, as well as cafes and bars, in a series of buildings centred around a large paved square and a glass-walled atrium.

History

Background

Melbourne's central city grid was originally designed without a central public square, long seen as a missing element. From the 1920s there were proposals to roof the railway yards on the southeast corner of Flinders and Swanston Streets for a public square, with more detailed proposals prepared in the 1950s and 1960s. In the 1960s, the Melbourne City Council decided that the best place for the City Square was the corner of Swanston and Collins Streets, opposite the town hall. The first temporary square opened in 1968, and a permanent version opened in 1981. It was, however, not considered a great success, and was redeveloped in the 1990s as a smaller, simpler space in front of a new large hotel.

Meanwhile, in the late 1960s, a small part of the railway lines had been partly roofed by the construction of the Princes Gate Towers, colloquially known as the "Gas & Fuel Buildings" after their major tenant, the Gas and Fuel Corporation, over the old Princes Bridge station. This included a plaza on the corner, which was elevated above the street and little used. Between the plaza and Batman Avenue, which ran along the north bank of the Yarra River, were the extensive Jolimont Railway Yards, and the through train lines running into Flinders Street station under Swanston Street. In 1978 the idea of roofing the railyards was again proposed as part of a State Government competition for a landmark, asking for "an idea, a word, image or plan" to put Melbourne on the map. It drew 2,300 entries, but no winner was declared.

Design competition and controversy

In 1996 the Premier, Jeff Kennett, announced that the Gas & Fuel Buildings would be demolished, the railyards roofed, and a complex including arts facilities and a large public space built. It was to be named Federation Square and opened in time to celebrate the centenary of Australia's Federation in 2001, and would include performing arts facilities, a gallery, a cinemedia centre, the public space, a glazed wintergarden, and ancillary cafe and retail spaces. An architectural design competition was announced that received 177 entries from around the world. Five designs were shortlisted, including entries from high-profile Melbourne architects Denton Corker Marshall and Ashton Raggatt McDougall, the lesser-known Sydney architect Chris Elliott, and London-based architects Jenny Lowe and Adrian Hawker. The jury was chaired by Professor Neville Quarry. The winner, announced on 28 July 1997, was a consortium led by Lab Architecture Studio, directed by Donald Bates and Peter Davidson from London, with the Dutch landscape architects Karres en Brands, directed by Sylvia Karres and Bart Brands, teamed with local executive architects Bates Smart for the second stage.
The design, originally costed at between $110 and $128 million, was complex and irregular, with gently angled, "cranked" geometries predominating in both the planning and the facade treatment of the various buildings and the wintergardens that surrounded and defined the open spaces. A series of "shards" provided vertical accents, while interconnected laneways and stairways and the wintergarden would connect Flinders Street to the Yarra River. The open square was arranged as a gently sloping amphitheatre, focussed on a large viewing screen for public events, with a secondary sloped plaza area on the main corner. The design was widely supported by the design community but was less popular with the public. It was also soon criticised when it was realised that the western freestanding "shard" would block views of the south front of St Paul's Cathedral from Princes Bridge.

The mix of occupants and tenants was soon modified, with the cinemedia centre becoming the new body known as ACMI, offices for the multicultural broadcaster SBS added, and the gallery space becoming the Australian art wing of the National Gallery of Victoria, the Ian Potter Centre. The performing arts space was dropped, the number of commercial tenancies increased, and the south end of the Atrium became an auditorium. A substantially rearranged design incorporating the new program was revealed in late 1998.

Construction

After the 1999 State election, while construction was well underway, the incoming Bracks Government ordered a report by the University of Melbourne's Professor Evan Walker into the "western shard" to be located on the corner of Flinders and Swanston Streets. The report concluded in February 2000 that the "heritage vista" towards St Paul's Cathedral should be preserved, and that the shard should be no more than 8 m in height.

Budgets on the project blew out significantly, the initial cost having been seriously underestimated given the expense of covering the railyards, changes to the brief, the need to resolve construction methods for the angular design, and the long delays. Among the measures taken to cut costs was concreting areas originally designed for paving. The final cost of construction was approximately $467 million (over four times the original estimate); the funding came primarily from the state government, with $64 million from the City of Melbourne and some from the federal government, while private operators and sponsors paid for fitouts or naming rights.

The square was opened on 26 October 2002. Unlike many Australian landmarks, it was not opened by the reigning monarch, Elizabeth II, nor was she invited to its unveiling; she visited Federation Square in October 2011.

Further expansion

In 2006, Federation Wharf redeveloped the vaults under Princes Walk (a former roadway) into a large bar, with extensive outdoor areas on the Yarra riverbank and elevator access to Federation Square. Several proposals have been prepared for the area known as Federation Square East, the remaining area of railyards to the east. There have been proposals for office towers and, more recently, for a combination of open space and a hotel, or another campus for the National Gallery of Victoria to house its contemporary art collection.

Apple Store

In December 2017, the Andrews government announced that one of the buildings of the square, the Birrarung Building, would be demolished to make way for a freestanding Apple Store, generating strong criticism over the commercial use of a cultural space.
Opposition groups including Our City Our Square and the National Trust of Australia (Victoria) then nominated Fed Square for the Victorian Heritage Register, which resulted in an interim decision to list it in October 2018. Apple cancelled the plans in April 2019 after the application to Heritage Victoria to demolish the Birrarung Building was denied, and after a hearing, the square was formally listed in August 2019.

Metro Entrance

With the construction of the upcoming Melbourne Metro Tunnel, an entrance to the underground Town Hall station from the corner of Federation Square was proposed, with a design released in December 2018 that would replace the corner Information Centre. After the heritage listing of the square, a permit was sought to demolish the building and the plaza around it, which was granted on the basis that the Information Centre was not the original design for the "Western Shard". The building was demolished by January 2019, though without a final approved design for the new entrance.

Later developments

In early 2022, following the decision to build a new gallery, NGV Contemporary, behind the existing NGV, with a linear public space connection through to St Kilda Road, the State Government established the Melbourne Arts Precinct Corporation to manage the delivery of the new park, the management of Federation Square, and the task of better connecting the various arts institutions in Southbank to each other and to the CBD.

In October 2023 The Age newspaper ran a series of articles on the square, providing a range of opinions on its strengths and weaknesses. The failure of many cafes and shops over the years was noted, as well as the rough surface affecting mobility, the lack of shade, and the lack of clear paths through the site; the series concluded that the square was still a "work in progress".

Location and layout

Federation Square occupies roughly a whole urban block bounded by Swanston, Flinders, and Russell Streets and the Yarra River. The open public square is directly opposite Flinders Street station and St Paul's Cathedral. The layout of the precinct is designed to connect the historical central district of the city with the Yarra River and the new park Birrarung Marr.

Design features

Square

The complex of buildings forms a rough U-shape around the main open-air square, oriented to the west. The eastern end of the square is formed by the glazed walls of the Atrium. While bluestone is used for the majority of the paving in the Atrium and St Paul's Court, matching footpaths elsewhere in central Melbourne, the main square is paved in 470,000 ochre-coloured sandstone blocks from Western Australia, invoking images of the outback. The paving is designed as a huge urban artwork called Nearamnew, by Paul Carter; it gently rises above street level and contains a number of textual pieces inlaid in its undulating surface. There are a small number of landscaped sections in the square and plaza, planted with eucalyptus trees.

Plaza and giant screen

A key part of the plaza design is its large fixed public television screen, which has been used to broadcast major sporting events such as the AFL Grand Final and the Australian Open every year. It is currently the biggest broadcasting screen in Australia.

Buildings

The architecture of the square is in the deconstructivist style, with both plan and elevations designed around slightly angular, "cranked" geometries rather than traditional orthogonal grids.
The built forms are mainly slightly bent north–south volumes, separated by glazed gaps (a reference to traditional Melbourne laneways), with vertical "shards", attached or freestanding, containing discrete functions such as the Visitor Centre, lifts and stairs. The larger built volumes are relatively simple reinforced concrete buildings with glass walls, but with a second outer skin of cladding carried on heavy steel framing, folded and stepped slightly to create angular undulating surfaces. The cladding is composed of six different surface treatments (zinc, perforated zinc, glass, frosted glass, sandstone, and areas left unclad) in a camouflage-like pattern created using pinwheel tiling. The "crossbar" is an east–west built form that runs through the long gallery building, clad in perforated black steel panels. Some buildings are named: the building along Flinders Street that houses ACMI and SBS is the Alfred Deakin Building, the building between the plaza space and the river is the Birrarung Building, while the building that houses NGV Australia is also called the Ian Potter Centre.

Shards

Three shards frame the square space. The eastern and southern shards are completely clad in metallic surfaces with angular slots, very similar in design to the Jewish Museum Berlin, while the western shard is clad in glass. Adjoining the southern shard is a hotel, which features a wrap-around metallic screen and glass louvres.

Laneways

There are a number of unnamed laneways in the Federation Square complex which connect it to both Flinders Street and the Yarra River via stairways. The stairways between the Western Shard and nearby buildings are also paved in larger flat rectangular sandstone blocks.

Riverfront

The riverfront areas extend south to an elevated pedestrian promenade which was once part of Batman Avenue and is lined with tall established trees, both deciduous exotic species and Australian eucalypts. More recently, the vaults adjacent to Princes Bridge have been converted into Federation Wharf, a series of cafes and boat berths. Some of the areas between the stairs and lanes leading to the river are landscaped with shady tree ferns.

Atrium

The "atrium" is one of the major public spaces in the precinct. It is a laneway-like space, five storeys high, with glazed walls and roof. The exposed metal structure and glazing patterns follow the pinwheel tiling pattern used elsewhere in the precinct's building facades.

Labyrinth

The "labyrinth" is a passive cooling system sandwiched above the railway lines and below the middle of the square. The concrete structure consists of 1.2 km of interlocking, honeycombed walls covering 1,600 m2. The walls have a corrugated profile to maximize their surface area, and are spaced 60 cm apart. During summer nights, cold air is pumped into the combed space, cooling down the concrete, while heat absorbed during the day is pumped out. The following day, cold air is pumped from the labyrinth out into the atrium through floor vents. This process can keep the atrium up to 12 °C cooler than outside, comparable to conventional air conditioning but using one-tenth the energy and producing one-tenth the carbon dioxide. During winter, the process is reversed: warm daytime air is stored in the labyrinth overnight and pumped back into the atrium during the day. The system can also partly cool the ACMI building when the power is not required by the atrium.
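The quoted performance can be put into rough perspective with a first-order sensible-heat estimate. In the sketch below, the airflow rate, temperatures, and air properties are assumed illustrative values, not Federation Square engineering data.

```python
# First-order estimate of the sensible cooling delivered by passing warm
# air over night-cooled concrete.  Every input is an assumed illustrative
# value, not Federation Square engineering data.

RHO_AIR = 1.2    # kg/m^3, approximate density of air
CP_AIR = 1005.0  # J/(kg K), specific heat capacity of air

def cooling_power_kw(volume_flow_m3_s, t_in_c, t_out_c):
    """Sensible heat removed from an airstream cooled from t_in to t_out."""
    mass_flow = RHO_AIR * volume_flow_m3_s                    # kg/s
    return mass_flow * CP_AIR * (t_in_c - t_out_c) / 1000.0   # kW

# Assumed: 10 m^3/s of 32 C daytime air leaves the labyrinth at 20 C,
# consistent with the "up to 12 degrees cooler" figure quoted above.
print(f"{cooling_power_kw(10.0, 32.0, 20.0):.0f} kW of sensible cooling")
# ~145 kW of cooling while the only electricity drawn is for fans; a
# conventional chiller delivering the same would draw far more power.
```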
Flagpoles

In the Federation Square complex there are a number of flagpoles, most notably a group of four, three of which permanently fly the Australian, Aboriginal, and Torres Strait Islander flags. The fourth flagpole occasionally flies the flag of a foreign country to celebrate a national holiday of that country, such as an independence day. Prior to 2022, foreign countries' flags were usually flown on a group of eight flagpoles located next to a bus stop. Many countries' flags have been raised at Federation Square or its surroundings at least once.

Facilities and tenants

In addition to a number of shops, bars, cafés and restaurants, Federation Square's cultural facilities include the following.

Melbourne Visitor Centre

The Melbourne Visitor Centre was located underground, with its entrance at the main corner shard directly opposite Flinders Street Station and St Paul's Cathedral, and its exit at the opposite shard. It was intended to replace a facility previously located in the turn-of-the-century town hall administration buildings on Swanston Street. The Visitor Centre was demolished in December 2018 to make way for an entrance to the rapid transit station to be built under Swanston Street, and the visitor centre function returned to the Town Hall.

The Edge

The Edge theatre is a 450-seat space designed to have views of the Yarra River and across to the spire of The Arts Centre. The theatre is lined in wood veneer in geometrical patterns similar to other interiors in the complex. The Edge was named "The BMW Edge" until May 2013, when a new sponsorship deal with Deakin University saw it renamed "The Deakin Edge", a name it kept until 2021.

Zinc

Zinc is a function space underneath the gallery building which opens onto the Yarra river bank. It was intended as an entirely commercial part of the development of Federation Square, and is used for wedding receptions, corporate events, launches, and the like.

National Gallery of Victoria

The Ian Potter Centre, also known as the NGVA, houses the Australian part of the art collection of the National Gallery of Victoria (NGV), in the building along the eastern side. (The St Kilda Rd building now houses the international works of the NGV, and is known as the NGVI.) There are over 20,000 Australian artworks, including paintings, sculpture, photography, fashion and textiles, and the collection is the oldest and most well known in the country. Well-known works at the Ian Potter Centre include Frederick McCubbin's The Pioneer (1904) and Tom Roberts' Shearing the Rams (1890). Also featured are works by Sidney Nolan, John Perceval, Margaret Preston and Fred Williams. Indigenous art includes works by William Barak and Emily Kngwarreye. The National Gallery at Federation Square also features the NGV Kids Corner, an interactive education section aimed at small children and families, and the NGV Studio.

ACMI (Australian Centre for the Moving Image)

The Australian Centre for the Moving Image (ACMI) has two cinemas that are equipped to play every film, video and digital video format, with attention to high-quality acoustics. The screen gallery, built along the entire length of what was previously a train station platform, is a subterranean gallery for experimentation with the moving image. Video art, installations, interactives, sound art and net art are all regularly exhibited in this space. Additional venues within ACMI allow computer-based public education and other interactive presentations.
In 2003, ACMI commissioned SelectParks to produce an interactive game-based, site-specific installation called AcmiPark, which replicated and abstracted the real-world architecture of Federation Square. It also houses highly innovative mechanisms for interactive, multi-player sound and musical composition. Transport Hotel Bar Transport hotel and bar is a three-level hotel complex adjacent to the southern shard on the south-western corner of the square. It has a ground-floor public bar, a restaurant, and a cocktail lounge on the rooftop. SBS Radio and Television offices The Melbourne offices of the Special Broadcasting Service (SBS), one of Australia's two publicly funded national broadcasters, are in the Deakin Building on Flinders Street. Beer awards Federation Square has recently become home to several beer award shows and tastings, including the Australian International Beer Awards trade and public shows, as well as other similar events such as showcases of local and other Australian breweries. These events have been held in the square's outdoor areas and the Atrium, and usually require an entry fee in exchange for a set number of tastings. Past tenants Past tenants have included: "Champions", The Australian Racing Museum & Hall of Fame — Relocated to Melbourne Cricket Ground. National Design Centre — Relocated to National Gallery of Victoria Reception and recognition In 2009, Virtual Tourist awarded Federation Square the title of the 'World's Fifth Ugliest Building'. Criticisms of it ranged from its damage to the heritage vista to its similarity to a bombed-out war-time bunker due to its "army camouflage" colours. A judge from Virtual Tourist justified Federation Square's ranking on the ugly list claiming that: "Frenzied and overly complicated, the chaotic feel of the complex is made worse by a web of unsightly wires from which overhead lights dangle." It continues to be a 'pet hate' of Melburnians and was discussed on ABC's Art Nation. After its opening on 26 October 2002, Federation Square remained controversial among Melburnians due to its unpopular architecture, but also because of its successive cost blowouts and construction delays (as its name suggests, it was to have opened in time for the centenary of Australian Federation on 1 January 2001). The construction manager was Multiplex. The designers of Federation Square did not get any work for six months after the completion of the A$450 million public space, but did receive hate-mail from people who disliked the design. The Australian Financial Review later reported that some Melburnians have learned to love the building, citing the record number of people using and visiting it. It was also included on The Atlantic Cities 2011 list of "10 Great Central Plazas and Squares". Architecture and Urban Design Awards At the Victorian State Architecture Awards held in June 2003, Federation Square was awarded the prestigious Victorian Architecture Medal, the Melbourne Prize and the Joseph Reed Award for Urban Design by the Victorian Chapter of the Royal Australian Institute of Architects.
In November 2003, the project won the Walter Burley Griffin Award for Urban Design and the Interior Architecture Award for The Ian Potter Centre: NGV Australia (Federation Square) at the National Awards of the Royal Australian Institute of Architects. Other Awards 2003 — IDAA Public/Institutional Interior Design Award 2003 — Australian Institute of Landscape Architects Award for Design Excellence 2003 — Civic Trust Award 2003 — Mahony Griffin Award for Interior Architecture 2003 — Interior Design Awards Australia 2003 — Victorian and Tasmanian Award for Excellence for Design in Landscape Architecture 2003 — Dulux Interior Colour Award 2003 — Public Domain Award For Sustainability 2003 — Kenneth Brown Award Hawaii, Commendation for Asia Pacific Architecture 2005 — Urban Land Institute Award for Excellence: Asia Pacific USA, Best Public Project 2006 — Chicago Athenaeum International Architecture Awards USA See also Australian landmarks Lab Architecture Studio References Further reading Brown-May, A. and Day, N. (2003). Federation Square, South Yarra, Vic: Hardie Grant Books. Melbourne gets square, Sydney Morning Herald (Australia), 19 October 2002. Federation Square, Macarthur, John; Crist, Graham; Hartoonian, Gevork and Stanhope, Zara, 1 March 2003. Architecture Australia External links Fed Square Federation Square "FedCam" Culture Victoria – images and video of Federation Square and the history of the site Federation Square, a brief history Squares in Melbourne Melbourne City Centre Buildings and structures in Melbourne Buildings and structures completed in 2002 2002 establishments in Australia Architectural controversies Landmarks in Melbourne Sandstone buildings in Australia Tourist attractions in Melbourne
Federation Square
Engineering
4,308
78,615,194
https://en.wikipedia.org/wiki/Quasi-extinction
Quasi-extinction refers to the state in which a species or population has declined to critically low numbers, making its recovery highly unlikely, even though a small number of individuals may still persist. This concept is often used in conservation biology to identify species at extreme risk of extinction and to guide management strategies aimed at preventing complete extinction. Quasi-extinction is typically characterized by an inability of the population to sustain itself due to genetic, demographic, or environmental factors. Extinction threshold The quasi-extinction threshold, sometimes called the quasi-extinction risk, is the population size below which a species is considered to be at extreme risk of quasi-extinction. This threshold varies by species and is influenced by several factors, including reproductive rates, habitat requirements, and genetic diversity. It is often used in population viability analyses (PVA) to model the likelihood of a species declining to levels where recovery becomes nearly impossible. References Extinction Conservation biology Environmental conservation Evolutionary biology IUCN Red List Biota by conservation status
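In a population viability analysis, the probability of crossing the quasi-extinction threshold is usually estimated by simulating many stochastic population trajectories. A minimal sketch of that idea follows; the starting size, threshold, growth rate and its variability are invented for illustration and are not taken from any particular species assessment:

import math
import random

def quasi_extinction_probability(n0=50, threshold=10, years=50,
                                 mean_r=-0.02, sd_r=0.15, trials=10_000):
    # Fraction of simulated trajectories that fall below the quasi-extinction
    # threshold within the time horizon, under stochastic exponential growth:
    # N(t+1) = N(t) * exp(r_t), with r_t drawn from a normal distribution.
    hits = 0
    for _ in range(trials):
        n = float(n0)
        for _ in range(years):
            n *= math.exp(random.gauss(mean_r, sd_r))
            if n < threshold:
                hits += 1
                break
    return hits / trials

random.seed(1)
print(quasi_extinction_probability())   # estimated probability under these assumed parameters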
Quasi-extinction
Biology
195
45,456,706
https://en.wikipedia.org/wiki/Smooth%20maximum
In mathematics, a smooth maximum of an indexed family x1, ..., xn of numbers is a smooth approximation to the maximum function max(x1, ..., xn), meaning a parametric family of functions mα(x1, ..., xn) such that for every α the function mα is smooth, and the family converges to the maximum function as α → ∞. The concept of smooth minimum is similarly defined. In many cases, a single family approximates both: maximum as the parameter goes to positive infinity, minimum as the parameter goes to negative infinity; in symbols, mα → max as α → +∞ and mα → min as α → −∞. The term can also be used loosely for a specific smooth function that behaves similarly to a maximum, without necessarily being part of a parametrized family. Examples Boltzmann operator For large positive values of the parameter α, the following formulation is a smooth, differentiable approximation of the maximum function: Sα(x1, ..., xn) = (Σi xi·exp(α·xi)) / (Σj exp(α·xj)). For negative values of the parameter that are large in absolute value, it approximates the minimum. Sα has the following properties: Sα → max as α → ∞; S0 is the arithmetic mean of its inputs; Sα → min as α → −∞. The gradient of Sα is closely related to softmax and is given by ∂Sα/∂xi = (exp(α·xi) / Σj exp(α·xj)) · [1 + α·(xi − Sα(x1, ..., xn))]. This makes the softmax function useful for optimization techniques that use gradient descent. This operator is sometimes called the Boltzmann operator, after the Boltzmann distribution. LogSumExp Another smooth maximum is LogSumExp: LSE(x1, ..., xn) = log(exp(x1) + ⋯ + exp(xn)). This can also be normalized if the xi are all non-negative, yielding a function with domain [0, ∞)ⁿ and range [0, ∞): LSE0+(x1, ..., xn) = log(exp(x1) + ⋯ + exp(xn) − (n − 1)). The −(n − 1) term corrects for the fact that exp(0) = 1 by canceling out all but one zero exponential, and LSE0+ = 0 if all xi are zero. Mellowmax The mellowmax operator is defined as follows: mmω(x1, ..., xn) = (1/ω)·log((1/n)·Σi exp(ω·xi)). It is a non-expansive operator. As ω → ∞, it acts like a maximum. As ω → 0, it acts like an arithmetic mean. As ω → −∞, it acts like a minimum. This operator can be viewed as a particular instantiation of the quasi-arithmetic mean. It can also be derived from information theoretical principles as a way of regularizing policies with a cost function defined by KL divergence. The operator has previously been utilized in other areas, such as power engineering. p-Norm Another smooth maximum is the p-norm: ||(x1, ..., xn)||p = (Σi |xi|^p)^(1/p), which converges to the maximum norm max |xi| as p → ∞. An advantage of the p-norm is that it is a norm. As such it is scale invariant (homogeneous): ||λ·x||p = |λ|·||x||p, and it satisfies the triangle inequality. Smooth maximum unit The following binary operator is called the Smooth Maximum Unit (SMU): SMU(a, b) = ((a + b) + √((a − b)² + μ²)) / 2, where μ ≥ 0 is a parameter; as μ → 0, SMU(a, b) converges to max(a, b). See also LogSumExp Softmax function Generalized mean References Mathematical notation Basic concepts in set theory https://www.johndcook.com/soft_maximum.pdf M. Lange, D. Zühlke, O. Holz, and T. Villmann, "Applications of lp-norms and their smooth approximations for gradient based learning vector quantization," in Proc. ESANN, Apr. 2014, pp. 271-276. (https://www.elen.ucl.ac.be/Proceedings/esann/esannpdf/es2014-153.pdf)
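The operators above are straightforward to implement; the following sketch (written here for illustration, using numerically stabilised forms rather than the literal formulas) shows the Boltzmann operator, LogSumExp, mellowmax and the p-norm approaching the true maximum as their parameter grows:

import numpy as np

def boltzmann(x, alpha):
    # Softmax-weighted average of the inputs; tends to max(x) as alpha -> +inf.
    w = np.exp(alpha * (x - np.max(x)))          # shift for numerical stability
    return float(np.sum(x * w) / np.sum(w))

def logsumexp(x):
    # log(sum(exp(x_i))); overestimates max(x) by at most log(len(x)).
    m = np.max(x)
    return float(m + np.log(np.sum(np.exp(x - m))))

def mellowmax(x, omega):
    # (1/omega) * log(mean(exp(omega * x_i))), evaluated in a stable way.
    m = np.max(omega * x)
    return float((m + np.log(np.mean(np.exp(omega * x - m)))) / omega)

def p_norm(x, p):
    # The p-norm, a smooth maximum of |x_i| for large p.
    return float(np.sum(np.abs(x) ** p) ** (1.0 / p))

x = np.array([1.0, 2.5, 3.0, 0.5])
for a in (1.0, 5.0, 50.0):
    print(a, boltzmann(x, a), mellowmax(x, a))
print(logsumexp(x), p_norm(x, 50), float(np.max(x)))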
Smooth maximum
Mathematics
617
42,689,427
https://en.wikipedia.org/wiki/Ball-pen%20probe
A ball-pen probe is a modified Langmuir probe used to measure the plasma potential in magnetized plasmas. The ball-pen probe balances the electron and ion saturation currents, so that its floating potential is equal to the plasma potential. Because electrons have a much smaller gyroradius than ions, a moving ceramic shield can be used to screen off an adjustable part of the electron current from the probe collector. Ball-pen probes are used in plasma physics, notably in tokamaks such as CASTOR (Czech Academy of Sciences Torus), ASDEX Upgrade, COMPASS, ISTTOK, MAST, TJ-K, RFX, H-1 Heliac, IR-T1 and GOLEM, as well as in low-temperature devices such as the DC cylindrical magnetron in Prague and linear magnetized plasma devices in Nancy and Ljubljana. Principle If a Langmuir probe (electrode) is inserted into a plasma, its potential is not equal to the plasma potential Φ, because a Debye sheath forms, but instead to a floating potential V_fl. The difference with the plasma potential is given by the electron temperature T_e (expressed in electronvolts): Φ = V_fl + α·T_e, where the coefficient α is given by the ratio of the electron and ion saturation current density (j_sat_e and j_sat_i) and collecting areas for electrons and ions (A_e and A_i): α = ln(A_e·j_sat_e / (A_i·j_sat_i)). The ball-pen probe modifies the collecting areas for electrons and ions in such a way that the ratio A_e·j_sat_e / (A_i·j_sat_i) is equal to one. Consequently, α = 0 and the floating potential of the ball-pen probe becomes equal to the plasma potential regardless of the electron temperature: V_fl = Φ. Design and calibration A ball-pen probe consists of a conically shaped collector (non-magnetic stainless steel, tungsten, copper, molybdenum), which is shielded by an insulating tube (boron nitride, alumina). The collector is fully shielded and the whole probe head is placed perpendicular to magnetic field lines. When the collector slides within the shield, the ratio A_e·j_sat_e / (A_i·j_sat_i) varies, and can be set to 1. The adequate retraction length strongly depends on the magnetic field's value. The collector retraction should be roughly below the ion's Larmor radius. Calibrating the proper position of the collector can be done in two different ways: The ball-pen probe collector is biased by a low-frequency voltage to obtain the I-V characteristics and the saturation currents of electrons and ions. The collector is then retracted until the I-V characteristics become symmetric. In this case, the ratio is close to unity, though not exactly. If the probe is retracted deeper, the I-V characteristics remain symmetric. The ball-pen probe collector potential is left floating, and the collector is retracted until its potential saturates. The resulting potential is above the Langmuir probe floating potential. Electron temperature measurements Using two measurements of the plasma potential with probes whose coefficients α differ, it is possible to retrieve the electron temperature passively (without any input voltage or current). Using a Langmuir probe (with a non-negligible α_LP) and a ball-pen probe (whose associated α_BPP is close to zero), the electron temperature is given by: T_e = (Φ_BPP − V_fl_LP) / α_LP, where Φ_BPP is measured by the ball-pen probe, V_fl_LP by the standard Langmuir probe, and α_LP is given by the Langmuir probe geometry, plasma gas composition, the magnetic field, and other minor factors (secondary electron emission, sheath expansion, etc.). It can be calculated theoretically, its value being about 3 for a non-magnetized hydrogen plasma.
In practice, the ratio A_e·j_sat_e / (A_i·j_sat_i) for the ball-pen probe is not exactly equal to one, so that the coefficient α_LP must be corrected by a small empirical value α_BPP for the ball-pen probe: T_e = (Φ_BPP − V_fl_LP) / (α_LP − α_BPP), where Φ_BPP and V_fl_LP are the floating potentials of the ball-pen and Langmuir probes, respectively. References External links PhD thesis, Jiri Adamek (Czech and English language) Overview: Ball-pen probe design, theory and first results at different fusion devices. Examination of plasma current spikes and general analysis of H-mode shots in the tokamak COMPASS Electrical Probe Measurements on the COMPASS Tokamak Probe Measurements on the COMPASS Tokamak Scanning ion sensitive probe for plasma profile measurements in the boundary of the Alcator C-Mod tokamak Development of Probes for Assessment of Ion Heat Transport and Sheath Heat Flux in the Boundary of the Alcator C-Mod Tokamak Video: Temporal evolution of Type-I ELM in the divertor region on the COMPASS tokamak. The new divertor Ball-pen and Langmuir probes on the COMPASS tokamak. Plasma diagnostics
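As a small worked example of the passive temperature measurement just described (all numbers are invented for illustration; α_LP ≈ 3 is only the ballpark quoted above for a non-magnetized hydrogen plasma, and α_BPP is assumed small):

def electron_temperature(phi_bpp, v_fl_lp, alpha_lp=3.0, alpha_bpp=0.6):
    # T_e = (phi_BPP - V_fl_LP) / (alpha_LP - alpha_BPP); T_e comes out in eV
    # when the two potentials are given in volts.
    return (phi_bpp - v_fl_lp) / (alpha_lp - alpha_bpp)

# Hypothetical readings: the ball-pen probe floats near the plasma potential,
# while the Langmuir probe floats several T_e below it.
print(electron_temperature(phi_bpp=12.0, v_fl_lp=-24.0))   # -> 15.0 eV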
Ball-pen probe
Physics,Technology,Engineering
905
24,424,748
https://en.wikipedia.org/wiki/CompoZr
The CompoZr Zinc finger nuclease (ZFN) platform is a technology developed by Sigma-Aldrich that allows researchers to target and manipulate the genome of living cells thereby creating cell lines or entire organisms with permanent and heritable gene deletions, insertions, or modifications. The technology was released in September 2008. In December 2008, CompoZr ZFN Technology ranked third in The Scientist Magazine's Top Ten Innovations of 2008. In July 2009, the first genetically modified mammal was created through the use of CompoZr ZFN Technology. References External links CompoZr Homepage CompoZr Publications Engineered proteins Zinc proteins
CompoZr
Chemistry
139
36,389,011
https://en.wikipedia.org/wiki/USA-96
USA-96, also known as GPS IIA-14, GPS II-23 and GPS SVN-34, is an American navigation satellite which is part of the Global Positioning System. It was the 14th of 19 Block IIA GPS satellites to be launched, and the last one to be retired. Background Global Positioning System (GPS) was developed by the U.S. Department of Defense to provide all-weather round-the-clock navigation capabilities for military ground, sea, and air forces. Since its implementation, GPS has also become an integral asset in numerous civilian applications and industries around the globe, including recreational use (e.g., boating, aircraft, hiking), corporate vehicle fleet tracking, and surveying. GPS employs 24 spacecraft in 20,200 km circular orbits inclined at 55.0°. These vehicles are placed in 6 orbit planes with four operational satellites in each plane. GPS Block 2 was the operational system, following the demonstration system composed of Block 1 (Navstar 1–11) spacecraft. These spacecraft were 3-axis stabilized, nadir pointing using reaction wheels. Dual solar arrays supplied 710 watts of power. They used S-band (SGLS) communications for control and telemetry and ultra high frequency (UHF) cross-links between spacecraft. The payload consisted of two L-band navigation signals at 1575.42 MHz (L1) and 1227.60 MHz (L2). Each spacecraft carried two rubidium and two cesium clocks and nuclear detonation detection sensors. Built by Rockwell Space Systems for the U.S. Air Force, the spacecraft measured 5.3 m across with solar panels deployed and had a design life of 7.5 years. Launch USA-96 was launched at 17:04:00 UTC on 26 October 1993, atop a Delta II launch vehicle, flight number D223, flying in the 7925-9.5 configuration. The launch took place from Launch Complex 17B (LC-17B) at the Cape Canaveral Air Force Station (CCAFS), and placed USA-96 into a transfer orbit. The satellite raised itself into medium Earth orbit using a Star-37XFP apogee motor. Mission On 25 November 1993, USA-96 was in an orbit with a perigee of , an apogee of , a period of 718.00 minutes, and 55.08° of inclination to the equator. It broadcast the PRN 04 signal, and operated in slot 4 of plane D of the GPS constellation. The satellite has a mass of . It had a design life of 7.5 years. It was temporarily removed from the GPS constellation on 2 November 2015. From 20 March 2018 the satellite was operational again, broadcasting the PRN 18 signal, from slot 6 of Plane D, until 9 October 2019, when it was placed in reserve as an on-orbit spare. It was the final Block IIA satellite to be retired on 13 April 2020. References Spacecraft launched in 1993 GPS satellites USA satellites
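The quoted 718-minute period is consistent with the roughly 20,200 km altitude of the GPS constellation; a quick check with Kepler's third law (a small sketch using the standard gravitational parameter of the Earth, with the altitude measured against the equatorial radius purely for illustration):

import math

MU_EARTH = 3.986004418e14    # m^3/s^2, Earth's standard gravitational parameter
R_EARTH = 6_378_137.0        # m, equatorial radius

def semi_major_axis(period_s, mu=MU_EARTH):
    # Kepler's third law: a = (mu * (T / (2*pi))**2) ** (1/3)
    return (mu * (period_s / (2.0 * math.pi)) ** 2) ** (1.0 / 3.0)

a = semi_major_axis(718.0 * 60.0)                 # the article's 718-minute period
print(round(a / 1e3), "km semi-major axis")       # ~26,570 km
print(round((a - R_EARTH) / 1e3), "km altitude")  # ~20,190 km, close to 20,200 km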
USA-96
Technology
615
6,974,982
https://en.wikipedia.org/wiki/Sailing%20stones
Sailing stones (also called sliding rocks, walking rocks, rolling stones, and moving rocks) are part of the geological phenomenon in which rocks move and inscribe long tracks along a smooth valley floor without animal intervention. The movement of the rocks occurs when large, thin sheets of ice floating on an ephemeral winter pond move and break up due to wind. Trails of sliding rocks have been observed and studied in various locations, including Little Bonnie Claire Playa, in Nevada, and most famously at Racetrack Playa, Death Valley National Park, California, where the number and length of tracks are notable. Description The Racetrack's stones speckle the playa floor, predominantly in the southern portion. Historical accounts identify some stones around from shore, yet most of the stones are found relatively close to their respective originating outcrops. Three lithologic types are identified: syenite, found most abundant on the west side of the playa; dolomite, subrounded blue-gray stones with white bands; black dolomite, the most common type, found almost always in angular joint blocks or slivers. This dolomite composes nearly all stones found in the southern half of the playa, and originates at a steep promontory, high, paralleling the east shore at the south end of the playa. Intrusive igneous rock originates from adjacent slopes (most of those being tan-colored feldspar-rich syenite). Tracks are often up to long, about wide, and typically much less than deep. Most moving stones range from about in diameter. Stones with rough bottoms leave straight striated tracks, while those with smooth bottoms tend to wander. Stones sometimes turn over, exposing another edge to the ground and leaving a different track in the stone's wake. Trails differ in both direction and length. Rocks that start next to each other may travel parallel for a time, before one abruptly changes direction to the left, right, or even back to the direction from which it came. Trail length also varies – two similarly sized and shaped rocks may travel uniformly, then one could move ahead or stop in its track. A balance of specific conditions is thought to be needed for stones to move: A flooded surface A thin layer of clay Wind Ice floes Warming temperatures causing ice breakup Research history At Racetrack Playa, these tracks have been studied since the early 1900s, yet the origins of stone movement were not confirmed and remained the subject of research for which several hypotheses existed. However, as of August 2014, timelapse video footage of rocks moving has been published, showing the rocks moving at high wind speeds within the flow of thin, melting sheets of ice. The scientists have thus identified the cause of the moving stones to be ice shove. Early investigation The first documented account of the sliding rock phenomenon dates to 1915, when a prospector named Joseph Crook from Fallon, Nevada, visited the Racetrack Playa site. In the following years, the Racetrack sparked interest from geologists Jim McAllister and Allen Agnew, who mapped the bedrock of the area in 1948 and published the earliest report about the sliding rocks in a Geologic Society of America Bulletin. Their publication gave a brief description of the playa furrows and scrapers, stating that no exact measurements had been taken and suggesting that furrows were the remnants of scrapers propelled by strong gusts of wind – such as the variable winds that produce dust-devils – over a muddy playa floor. 
Controversy over the origin of the furrows prompted the search for the occurrence of similar phenomena at other locations. Such a location was found at Little Bonnie Claire Playa in Nye County, Nevada, and the phenomenon was studied there, as well. Naturalists from the National Park Service later wrote more detailed descriptions and Life magazine featured a set of photographs from the Racetrack. In 1952, a National Park Service Ranger named Louis G. Kirk recorded detailed observations of furrow length, width, and general course. He sought simply to investigate and record evidence of the moving rock phenomenon, not to hypothesize or create an extensive scientific report. Speculation about how the stones move started at this time. Various and sometimes idiosyncratic possible explanations have been put forward over the years that have ranged from the supernatural to the complex. Most hypotheses favored by interested geologists posit that strong winds when the mud is wet are at least in part responsible. Some stones weigh as much as a human, which some researchers, such as geologist George M. Stanley, who published a paper on the topic in 1955, feel is too heavy for the area's winds to move. After extensive track mapping and research on rotation of the tracks in relation to ice floe rotation, Stanley maintained that ice sheets around the stones either help to catch the wind or that ice floes initiate rock movement. Progress in the 1970s Bob Sharp and Dwight Carey started a Racetrack stone movement monitoring program in May 1968. Eventually, 30 stones with fresh tracks were labeled and stakes were used to mark their locations. Each stone was given a name and changes in the stones' positions were recorded over a seven-year period. Sharp and Carey also tested the ice floe hypothesis by corralling selected stones. A corral in diameter was made around a wide, track-making stone with seven rebar segments placed apart. If a sheet of ice around the stones either increased wind-catching surface area or helped move the stones by dragging them along in ice floes, then the rebar should at least slow down and deflect the movement. Neither appeared to occur; the stone barely missed a rebar as it moved to the northwest out of the corral in the first winter. Two heavier stones were placed in the corral at the same time; one moved five years later in the same direction as the first, but its companion did not move during the study period. This indicated that if ice played a part in stone movement, then ice collars around stones must be small. Ten of the initial 30 stones moved in the first winter with Mary Ann (stone A) covering the longest distance at . Two of the next six monitored winters also had multiple stones move. No stones were confirmed to have moved in the summer, and in some winters, none or only a few stones moved. In the end, all but two of the monitored stones moved during the seven-year study. At in diameter, Nancy (stone H) was the smallest monitored stone. It also moved the longest cumulative distance, , and the greatest single winter movement, . The largest stone to move was . Karen (stone J) is a block of dolomite and weighs an estimated . Karen did not move during the monitoring period. The stone may have created its long, straight and old track from momentum gained from its initial fall onto the wet playa. However, Karen disappeared sometime before May 1994, possibly during the unusually wet winter of 1992 to 1993. 
Removal by artificial means is considered unlikely due to the lack of associated damage to the playa that a truck and winch would have caused. A possible sighting of Karen was made in 1994, from the playa. Karen was rediscovered by San Jose geologist Paula Messina in 1996. Continued research in the 1990s Professor John Reid led six research students from Hampshire College and the University of Massachusetts Amherst in a follow-up study in 1995. They found highly congruent trails from stones that moved in the late 1980s and during the winter of 1992–93. At least some stones were proved beyond a reasonable doubt to have been moved in ice floes that may be up to wide. Physical evidence included swaths of lineated areas that could only have been created by moving thin sheets of ice. Consequently, both wind alone and wind in conjunction with ice floes are thought to be motive forces. Physicists Bacon et al. studying the phenomenon in 1996, informed by studies in Owens Dry Lake Playa, discovered that winds blowing on playa surfaces can be compressed and intensified because of a playa's smooth, flat surfaces. They also found that boundary layers (the region just above ground where winds are slower due to ground drag) on these surfaces can be as low as . As a result, stones just a few centimeters high feel the full force of ambient winds and their gusts, which can reach in winter storms. Such gusts are thought to be the initiating force, while momentum and sustained winds keep the stones moving, possibly as fast as a moderate run. Wind and ice both are the favored hypothesis for these sliding rocks. Noted in "Surface Processes and Landforms", Don J. Easterbrook mentions that because of the lack of parallel paths between some rock paths, this could be caused by degenerating ice floes resulting in alternate routes. Though the ice breaks up into smaller blocks, it is still necessary for the rocks to slide. 21st-century developments Further understanding of the geologic processes at work in Racetrack Playa goes hand in hand with technological development. In 2009, development of inexpensive time-lapse digital cameras allowed the capturing of transient meteorological phenomena including dust devils and playa flooding. These cameras were aimed at capturing various stages of the previously mentioned phenomena, though discussion of the sliding stones ensued. The developers of photographic technology describe the difficulty of capturing the Racetrack's stealthy rocks, as movements only occur about once every three years, and they believed, lasted about 10 seconds. Their next identified advancement was wind-triggered imagery, vastly reducing the ten million seconds of nontransit time they had to sift through. It was postulated that small rafts of ice form around the rocks and the rocks are buoyantly floated off the soft bed, thus reducing the reaction and friction forces at the bed. Since this effect depends on reducing friction, and not on increasing the wind drag, these ice cakes need not have a particularly large surface area if the ice is adequately thick, as the minimal friction allows the rocks to be moved by arbitrarily light winds. Reinforcing the "ice raft" theory, a research study pointed out narrowing trails, intermittent springs, and trail ends having no rocks. The study identified that water drained from higher area into the Playa while ice covered the intermittent lake. 
This suggests that this water buoyantly lifts the ice floes with embedded rocks until friction with the playa bed is reduced sufficiently for wind to move them and cause the observed tracks. The study also analyses an artificial ditch intended to prevent visitors from driving on the playa, and concludes that it may interfere with rock sliding. In 2020, NASA ruled out the potential reasons for the stones moving results from the microbial mats and wind-generated water waves based on a fossil of dinosaur footprints. Explanation News articles reported the mystery solved when researchers observed rock movements using GPS and time-lapse photography. The largest rock movement the research team witnessed and documented was on December 20, 2013 and involved more than 60 rocks, with some rocks moving up to 224 metres (245 yards) between December 2013 and January 2014 in multiple movement events. These observations contradicted earlier hypotheses of strong winds or thick ice floating rocks off the surface. Instead, rocks move when large ice sheets a few millimeters thick floating in an ephemeral winter pond start to break up during sunny mornings. These thin floating ice panels, frozen during cold winter nights, are driven by light winds and shove rocks at up to 5 m/min (0.3 km/h; 0.2 mph). Some GPS-measured moves lasted up to 16 minutes, and a number of stones moved more than five times during the existence of the playa pond in the winter of 2013–14. Possible influence of climate change Because rock movement relies on a rare set of circumstances, the usually dry playa being flooded and the water freezing, drier winters and warmer winter nights would cause such circumstances to occur less often. A statistical study by Ralph Lorenz and Brian Jackson examining published reports of rock movements suggested (with 4:1 odds) an apparent decline between the 1960s–1990s, and the 21st century. Theft and vandalism of rocks On May 30, 2013, the Los Angeles Times reported that park officials were looking into the theft of several of the rocks from the Death Valley National Park. In August 2016, around of tire tracks were left in the playa by someone driving around it illegally. A photographer visiting in September also noted the initials 'D' and 'K' newly carved into one of the rocks. Although reports at the time suggested investigators had identified a suspect, the vandal had not been identified in March 2018, when a team of volunteers cleaned the tire tracks from the Racetrack using gardening tools and of water. See also Rocking stone References Further reading Messina, P., 1998, The Sliding Rocks of Racetrack Playa, Death Valley National Park, California: Physical and Spatial Influences on Surface Processes. Published doctoral dissertation, Department of Earth and Environmental Sciences, City University of New York, New York. University Microfilms, Incorporated, 1998. Messina, P., Stoffer, P., and Clarke, K. C. Mapping Death Valley's Wandering Rocks. , April, 1997: pp. 34–44 Sharp, R.P., and A.F. Glazner, 1997, Geology Underfoot in Death Valley and Owens Valley. Mountain Press Publishing Company, Missoula. External links "How Do Death Valley's 'Sailing Stones' Move Themselves Across the Desert?" , Smithsonian Magazine, June 2013. National Geographic: "What Drives Death Valley's Roving Rocks?" Racetrackplaya.org: The Racetrack Playa Blog – homepage SJSU.edu: "The Sliding Rocks of Racetrack Playa" – by Paula Messina. 
Smith.edu: "The Mystery of the Rocks on the Racetrack at Death Valley" – by Lena Fletcher and Anne Nester. Physics Forums.com: "The Sliding Rock Phenomenon" – online discussion. Earth Surface Dynamics Discussions: "Trail formation by ice-shoved 'sailing stones' observed at Racetrack Playa, Death Valley National Park" YouTube: Moving Rocks of Death Valley's Racetrack Playa – video by Brian Dunning. Fox News.com: Why Are Death Valley's Rocks Moving Themselves? – by Philip Schewe. Plosone.org: "Sliding Rocks on Racetrack Playa, Death Valley National Park: First Observation of Rocks in Motion" Death Valley National Park Death Valley Natural history of the Mojave Desert Rock formations of California Rock formations of Nevada Rocks Stones
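One way to see why the explanations discussed above converged on flooding and ice is a crude force balance: for a rock resting directly on the playa, the wind speed needed to overcome sliding friction grows with the square root of the friction coefficient. The sketch below is purely illustrative; the rock mass, frontal area, drag and friction coefficients are all assumptions rather than measured values, and it ignores the large ice panels that, in the observed mechanism, catch the wind on the rock's behalf:

import math

RHO_AIR = 1.2      # kg/m^3, air density
G = 9.81           # m/s^2

def wind_speed_to_slide(mass_kg, frontal_area_m2, mu, cd=1.0):
    # Balance aerodynamic drag against Coulomb friction:
    #   0.5 * rho * cd * A * U**2 = mu * m * g, solved for U.
    return math.sqrt(2.0 * mu * mass_kg * G / (RHO_AIR * cd * frontal_area_m2))

rock = dict(mass_kg=15.0, frontal_area_m2=0.02)   # assumed small, low rock
print(wind_speed_to_slide(mu=0.6, **rock))    # ~86 m/s: bare rock on wet mud
print(wind_speed_to_slide(mu=0.05, **rock))   # ~25 m/s: friction largely removed

With ordinary mud friction the required gust is implausibly strong, which is why flooded surfaces and buoyant ice that slash bed friction were proposed, and eventually observed, as the moving agents.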
Sailing stones
Physics
2,958
16,879,665
https://en.wikipedia.org/wiki/Circle%20packing
In geometry, circle packing is the study of the arrangement of circles (of equal or varying sizes) on a given surface such that no overlapping occurs and so that no circle can be enlarged without creating an overlap. The associated packing density, η, of an arrangement is the proportion of the surface covered by the circles. Generalisations can be made to higher dimensions – this is called sphere packing, which usually deals only with identical spheres. The branch of mathematics generally known as "circle packing" is concerned with the geometry and combinatorics of packings of arbitrarily-sized circles: these give rise to discrete analogs of conformal mapping, Riemann surfaces and the like. Densest packing In the two-dimensional Euclidean plane, Joseph Louis Lagrange proved in 1773 that the highest-density lattice packing of circles is the hexagonal packing arrangement, in which the centres of the circles are arranged in a hexagonal lattice (staggered rows, like a honeycomb), and each circle is surrounded by six other circles. For circles of diameter D and hexagons of side length D, the hexagon area and the circle area are, respectively: A_H = (3√3/2)·D² and A_C = (π/4)·D². The area covered within each hexagon by circles is: 3·A_C = (3π/4)·D² (each hexagon of the honeycomb contains, on average, the area of three circles). Finally, the packing density is: η = 3·A_C / A_H = π / (2√3) = π / √12 ≈ 0.9069. In 1890, Axel Thue published a proof that this same density is optimal among all packings, not just lattice packings, but his proof was considered by some to be incomplete. The first rigorous proof is attributed to László Fejes Tóth in 1942. While the circle has a relatively low maximum packing density, it does not have the lowest possible, even among centrally-symmetric convex shapes: the smoothed octagon has a packing density of about 0.902414, the smallest known for centrally-symmetric convex shapes and conjectured to be the smallest possible. (Packing densities of concave shapes such as star polygons can be arbitrarily small.) Other packings At the other extreme, Böröczky demonstrated that arbitrarily low density arrangements of rigidly packed circles exist. There are eleven circle packings based on the eleven uniform tilings of the plane. In these packings, every circle can be mapped to every other circle by reflections and rotations. The hexagonal gaps can be filled by one circle and the dodecagonal gaps can be filled with seven circles, creating 3-uniform packings. The truncated trihexagonal tiling with both types of gaps can be filled as a 4-uniform packing. The snub hexagonal tiling has two mirror-image forms. On the sphere A related problem is to determine the lowest-energy arrangement of identically interacting points that are constrained to lie within a given surface. The Thomson problem deals with the lowest energy distribution of identical electric charges on the surface of a sphere. The Tammes problem is a generalisation of this, dealing with maximising the minimum distance between circles on a sphere. This is analogous to distributing non-point charges on a sphere. In bounded areas Packing circles in simple bounded shapes is a common type of problem in recreational mathematics. The influence of the container walls is important, and hexagonal packing is generally not optimal for small numbers of circles. Specific problems of this type that have been studied include: Circle packing in a circle Circle packing in a square Circle packing in a rectangle Circle packing in an equilateral triangle Circle packing in an isosceles right triangle See the linked articles for details.
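The hexagonal density derived above is easy to confirm numerically; a minimal sketch (the grid resolution and the choice of period cell are arbitrary choices for illustration):

import math

def hexagonal_density_exact():
    # pi / sqrt(12), the density of the hexagonal packing.
    return math.pi / math.sqrt(12)

def hexagonal_density_grid(n=600, d=1.0):
    # Brute-force estimate: sample one d x d*sqrt(3) period of the triangular
    # lattice and count grid points covered by circles of diameter d.
    w, h = d, d * math.sqrt(3)
    centers = [(0, 0), (d, 0), (0, h), (d, h), (d / 2, h / 2)]
    inside = 0
    for i in range(n):
        for j in range(n):
            x, y = (i + 0.5) * w / n, (j + 0.5) * h / n
            if any((x - cx) ** 2 + (y - cy) ** 2 <= (d / 2) ** 2
                   for cx, cy in centers):
                inside += 1
    return inside / (n * n)

print(hexagonal_density_exact())     # 0.9068996...
print(hexagonal_density_grid())      # ~0.907 on a 600 x 600 grid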
Unequal circles There are also a range of problems which permit the sizes of the circles to be non-uniform. One such extension is to find the maximum possible density of a system with two specific sizes of circle (a binary system). Only nine particular radius ratios permit compact packing, which is when every pair of circles in contact is in mutual contact with two other circles (when line segments are drawn from contacting circle-center to circle-center, they triangulate the surface). For all these radius ratios a compact packing is known that achieves the maximum possible packing fraction (above that of uniformly-sized discs) for mixtures of discs with that radius ratio. All nine have ratio-specific packings denser than the uniform hexagonal packing, as do some radius ratios without compact packings. It is also known that if the radius ratio is above 0.742, a binary mixture cannot pack better than uniformly-sized discs. Upper bounds for the density that can be obtained in such binary packings at smaller ratios have also been obtained. Applications Quadrature amplitude modulation is based on packing circles into circles within a phase-amplitude space. A modem transmits data as a series of points in a two-dimensional phase-amplitude plane. The spacing between the points determines the noise tolerance of the transmission, while the circumscribing circle diameter determines the transmitter power required. Performance is maximized when the constellation of code points are at the centres of an efficient circle packing. In practice, suboptimal rectangular packings are often used to simplify decoding. Circle packing has become an essential tool in origami design, as each appendage on an origami figure requires a circle of paper. Robert J. Lang has used the mathematics of circle packing to develop computer programs that aid in the design of complex origami figures. See also Apollonian gasket Circle packing in a rectangle Circle packing in a square Circle packing in a circle Inversive distance Kepler conjecture Malfatti circles Packing problem References Bibliography
Circle packing
Mathematics
1,126
1,403,932
https://en.wikipedia.org/wiki/Polymer%20banknote
Polymer banknotes are banknotes made from a synthetic polymer such as biaxially oriented polypropylene (BOPP). Such notes incorporate many security features not available in paper banknotes, including the use of metameric inks. Polymer banknotes last significantly longer than paper notes, causing a decrease in environmental impact and a reduced cost of production and replacement. Modern polymer banknotes were first developed by the Reserve Bank of Australia (RBA), Commonwealth Scientific and Industrial Research Organisation (CSIRO) and The University of Melbourne. They were first issued as currency in Australia during 1988 (coinciding with Australia's bicentennial year); by 1996, the Australian dollar was switched completely to polymer banknotes. Romania was the first country in Europe to issue a plastic note in 1999 and became the third country after Australia and New Zealand to fully convert to polymer by 2003. Other currencies that have been switched completely to polymer banknotes include: the Vietnamese đồng (2006) although this is only applied to banknotes with denominations above 5,000 đồng, the Brunei dollar (2006), the Nigerian Naira (2007), the Papua New Guinean kina (2008), the Canadian dollar (2013), the Maldivian rufiyaa (2017), the Mauritanian ouguiya (2017), the Nicaraguan córdoba (2017), the Vanuatu vatu (2017), the Eastern Caribbean dollar (2019), the pound sterling (2021) and the Barbadian dollar (2022). Several countries and regions have now introduced polymer banknotes into commemorative or general circulation, including: Nigeria, Cape Verde, Chile, The Gambia, Trinidad and Tobago, Vietnam, Mexico, Taiwan, Singapore, Malaysia, Botswana, São Tomé and Príncipe, North Macedonia, Russia, Solomon Islands, Samoa, Morocco, Albania, Sri Lanka, Hong Kong, Israel, China, Kuwait, Mozambique, Saudi Arabia, Isle of Man, Guatemala, Haiti, Jamaica, Libya, Mauritius, Costa Rica, Honduras, Angola, Namibia, Lebanon, the Philippines, Egypt, the United Arab Emirates, Samoa, and Bermuda. History In the 1980s, Canadian engineering company AGRA Vadeko and US chemical company US Mobil Chemical Company developed a polymer substrate trademarked as DuraNote. It had been tested by the Bank of Canada in the 1980s and 1990s; test C$ 20 and C$ 50 banknotes were auctioned in October 2012. It was also tested by the Bureau of Engraving and Printing of the United States Department of the Treasury in 1997 and 1998, when 40,000 test banknotes were printed and evaluated; and was evaluated by the central banks of 28 countries. Security features Polymer banknotes usually have three levels of security devices. Primary security devices are easily recognisable by consumers and may include intaglio, metal strips, and the clear areas of the banknote. Secondary security devices are detectable by a machine. Tertiary security devices may only be detectable by the issuing authority when a banknote is returned. Adoption Modern polymer banknotes were first developed by the Reserve Bank of Australia (RBA) and the Commonwealth Scientific and Industrial Research Organisation or CSIRO and first issued as currency in Australia during 1988, to coincide with Australia's bicentennial year. In August 2012, Nigeria's Central Bank attempted the switch back from polymer to paper banknotes, saying there were "significant difficulties associated with the processing and destruction of the polymer banknotes" which had "constrained the realisation of the benefits expected from polymer banknotes over paper notes". 
However, President Goodluck Jonathan halted the process in September 2012. The polymer notes in the Republic of Mauritius are available in values of , , and , Recently in December 2024, the Bank of Mauritius has announced that there will be issues of Rs 100, Rs 200, and Rs 1000 banknotes. The Fiji was issued in April 2013. In the United Kingdom, the first polymer banknotes were issued by the Northern Bank in Northern Ireland in 2000; these were a special commemorative issue bearing an image of the Space Shuttle. In March 2015, the Clydesdale Bank in Scotland began to issue polymer Sterling £5 notes marking the 125th anniversary of the building of the Forth Bridge. These were the first polymer notes to enter general circulation in the UK. The Royal Bank of Scotland followed in 2016 with a new issue of plastic £5 notes illustrated with a picture of author Nan Shepherd. In September 2016, the Bank of England began to issue £5 polymer notes with a picture of Winston Churchill; and in 2017 a polymer £10 began replacing its paper equivalent, featuring a picture of the author Jane Austen. A polymer £20 was issued in 2020 with a picture of J.M.W. Turner, and the £50 note was released in 2021, featuring Alan Turing. Although the polymer Bank of England notes are 15% smaller than the older, paper issue, they bear a similar design. Some businesses operating in the UK cash industry have opposed the switch to polymer, citing a lack of research into the cost impact of its introduction. In December 2022, following the death of Queen Elizabeth II, the Bank of England unveiled the design of a new series of banknotes featuring King Charles III. The rest of the design, however, is unchanged, with the exception of a slight alteration in colour. In the Philippines, it was proposed in 2009 to shift to the usage of polymer for Philippine peso banknotes. This did not push through due to concern the shift would have over the impact to country's abaca industry. The proposal was revived in 2021 during the COVID-19 pandemic since the polymer banknotes can be sanitized with less damage compared to paper banknotes, as well as other reasons such as durability, lesser average issue cost, and lesser susceptibility to counterfeiting. In April 2022, The Bangko Sentral ng Pilipinas officially released the 1000 peso bill polymer bank note into circulation. In December 2024, the BSP (Central Bank of the Philippines) has announced that they will be issuing polymer notes in the denominations of 500, 100, and 50 pesos in the first quarter of 2025. Despite having the updated logo and the updated signature of the current president, there are no plans for a 20 peso polymer note due to it being slowly shifted into becoming a coin. There are also no plans for a 200 peso polymer banknote due to low demand. See also Banknotes of the Australian dollar Banknotes of the Canadian dollar Banknotes of the New Zealand dollar CSIRO Hybrid Paper Polymer Banknote Polymers Notes References External links PolymerNotes.org by Stane Štraus Currency Note Research and Development Project from the University of Melbourne Professor David Solomon – Inventor of Plastic Bank Note Wins 2006 Victoria Prize from the University of Melbourne Polymer banknotes from Commonwealth Scientific and Industrial Research Organisation Note Printing Australia innoviasecurity.com PolymerNotes.de (in German) by Thomas Krause New Kuwaiti Dinar notes released by Kuwait News Agency. Costs of introducing polymer notes in the UK by CMS Payments Intelligence. 
Australian inventions Banknotes Polymers Currencies introduced in 1988
Polymer banknote
Chemistry,Materials_science
1,457
21,569,615
https://en.wikipedia.org/wiki/Digital%20media%20service
A digital media service (DMS) is an online service provider that sells access to digital library of content such as films, software, games, images, literature, etc. While no transfer of property is made, a nearly perfect duplicate of the data (song movie, etc.) is made on a customer's computer. Content is either primarily hosted on a dedicated server, which is owned by the service provider, or it is hosted primarily on the hard drives of its customers using a P2P protocol with, perhaps, a dedicated server to supplement. History One example of the older business model is the iTunes Store, which still markets and prices data as individual retail products. There are no examples of the latter business model in operation yet, but one is currently in development by Global Gaming Factory X and expected to begin operation some time after they acquire The Pirate Bay domain on August 27, 2009. A key difference between the two models is that the model which relies on its customer base for offering their bandwidth for other customers to access customer hosted data can operate at significantly lower costs than a company that seeks to limit data access to a per-download fee in order to supplement the cost of using its own hosting and bandwidth. The P2P model holds the potential for companies to offer unlimited access to the largest data library in the history of the internet to its customers for a reasonably low membership rate that is relevant to the cost of operation. While the market is virtually untouched, the P2P supplemented model will need entrepreneurs who are able to overcome a series of challenges in order to compete with the older business model as well as that which is offered for free (and often against the wishes of copyright holders) by hundreds of P2P communities on the internet. These challenges include, but are not limited to: Offering better data quality, speed, convenience and ease of use, protocol, sense of security, indexing and search organization, site up time, data library size, customer support, advertising, artist/copyright holder incentives and compensation, incentives and compensation for customers hosting data and providing bandwidth, guaranteed seeding (available access to indexed data at all times), than competitors. Digital media References Mueller, Milton. “Regulation of platform market access by the United States and China: Neo‐mercantilism in digital services.” Wiley, Policy & Internet (2021).
Digital media service
Technology
477
16,713,163
https://en.wikipedia.org/wiki/List%20of%20Intel%20manufacturing%20sites
Intel is an American multinational corporation and technology company headquartered in Santa Clara, California. Processors are manufactured in semiconductor fabrication plants called "fabs" which are then sent to assembly and testing sites before delivery to customers. Intel has stated that approximately 75% of their semiconductor fabrication is performed in the United States. Since May 1990, Intel has made an effort to eliminate chlorofluorocarbon consumption for the Oregon, Puerto Rico and Ireland system factories. Current fab sites Past fab sites Assembly and test sites AFO, Aloha, Oregon, United States Chandler, Arizona, United States CD1, Chengdu, Sichuan, China CD6, Chengdu, Sichuan, China KMDSDP, Kulim, Malaysia KMO, Kulim, Malaysia KM5, Kulim, Malaysia PG8, Penang, Malaysia VNAT, Ho Chi Minh City, Vietnam Jerusalem, Israel CRAT, Heredia, Belén, Costa Rica (1997–2014; 2020 – present) Makati, Philippines – MN1-MN5 also known as A2/T11 (1974–2009) Cavite, Philippines – CV1-CV4 (1997–2009) Shanghai, China (former Assembly / Test Manufacturing) Las Piedras Puerto Rico 1991-2001 (assemble Pentium CPU/Motherboards) Wroclaw/Walbrzych, Poland - planned 2027 (former Assembly / Test Manufacturing) See also List of semiconductor fabrication plants External links Global Manufacturing at Intel References Manufacturing sites Computing-related lists Manufacturing plants Lists of industrial buildings Manufacturing-related lists
List of Intel manufacturing sites
Technology
315
18,592,503
https://en.wikipedia.org/wiki/Reinhard%20Oehme
Reinhard Oehme (born 26 January 1928, Wiesbaden; died sometime between 29 September and 4 October 2010, Hyde Park) was a German-American physicist known for the discovery of C (charge conjugation) non-conservation in the presence of P (parity) violation, the formulation and proof of hadron dispersion relations, the "Edge of the Wedge Theorem" in the function theory of several complex variables, the Goldberger-Miyazawa-Oehme sum rule, reduction of quantum field theories, Oehme-Zimmermann superconvergence relations for gauge field correlation functions, and many other contributions. Oehme was born in Wiesbaden, Germany as the son of Dr. Reinhold Oehme and Katharina Kraus. In 1952, in São Paulo, Brazil, he married Mafalda Pisani, who was born in Berlin as the daughter of Giacopo Pisani and Wanda d'Alfonso. Mafalda died in Chicago in August 2004. Education and career Completing the Abitur at the Rheingau Gymnasium in Geisenheim near Wiesbaden, Oehme started to study physics and mathematics at the Johann Wolfgang Goethe University Frankfurt am Main, receiving the Diploma in 1948 as a student of Erwin Madelung. Then he moved to Göttingen, joining the Max Planck Institute for Physics as a doctoral student of Werner Heisenberg, who was also a professor at the University of Göttingen. Early in 1951, Oehme completed the requirements for his Dr. rer. nat. at Göttingen Universität. The translation of the title of his thesis is: "Creation of Photons in Collisions of Nucleons". Later that year, Heisenberg asked him to join Carl Friedrich von Weizsäcker on a trip to Brazil for the start-up of the Instituto de Física Teórica in São Paulo, considered also as a possible escape in view of the tense situation in Europe. In 1953, he returned to his assistant position at the Max Planck Institute in Göttingen. During the early fifties, the institute was a most interesting place. Oehme was there among an exceptional group of people around Heisenberg, including Vladimir Glaser, Rolf Hagedorn, Fritz Houtermans, Gerhard Lüders, Walter Thirring, Kurt Symanzik, Carl Friedrich von Weizsaecker, Wolfhart Zimmermann and Bruno Zumino, all of whom have made important contributions to physics at some time. A year later, with Heisenberg's recommendation to his friend Enrico Fermi, Oehme was offered a research associate position at the University of Chicago, where he worked at the Institute for Nuclear Studies. Publications associated with this period are described below under Work. In the fall of 1956, he moved to Princeton as a member of the Institute for Advanced Study, returning in 1958 to the University of Chicago as a professor in the department of physics and at the Enrico Fermi Institute for Nuclear Studies. In 1998, he became professor emeritus. Visiting Professor Positions*: University of Maryland, College Park, 1957; Universität Wien, Austria, 1961; Imperial College, London, 1963–64; Universität Karlsruhe, Germany, 1974, 1975, 1977; University of Tokyo, Japan, 1976, 1988; Research Institute of Fundamental Physics, University of Kyoto, Japan, 1976. Visiting Positions*: Instituto de Física Teórica, São Paulo, Brasil; Brookhaven National Laboratory; Lawrence Berkeley National Laboratory; CERN, Geneva, Switzerland; International Centre for Theoretical Physics, Miramare-Trieste, Italy; Max Planck Institute for Physics, München, Germany. Awards*: Guggenheim Fellow, 1963–64; Humboldt Prize, 1974; Fellowship of the Japanese Society for the Promotion of Science (JSPS), 1976, 1988.
Honors: The University of Chicago offers annually the Enrico Fermi, Robert R. McCormick & Mafalda and Reinhard Oehme Postdoctoral Research Fellowships (*For citations see corresponding publications and acknowledgements in publications. ) Work Dispersion Relations, GMO Sum Rule, and Edge of the Wedge Theorem In 1954 in Chicago, Oehme studied the analytic properties of forward Scattering amplitudes in quantum field theories. He found that particle-particle and antiparticle-particle amplitudes are connected by analytic continuation in the complex energy plane. These results led to the paper by him with Marvin L. Goldberger and Hironari Miyazawa on the dispersion relations for pion-nucleon scattering, which also contains the Goldberger-Miyazawa-Oehme Sum Rule. There is good agreement with the experimental results of the Fermi Group at Chicago, the Lindenbaum Group at Brookhaven and others. The GMO Sum Rule is often used in the analysis of the pion-nucleon system. Oehme published a proper derivation of hadronic forward dispersion relations on the basis of local quantum field theory in an article published in Il Nuovo Cimento. His proof remains valid in gauge theories with confinement. The analytic connection Oehme found between particle and antiparticle amplitudes is the first example of a fundamental feature of local quantum field theory: the crossing property. It is proven here, in a non-perturbative setting, on the basis of the analytic properties of amplitudes which are a consequence of locality and spectrum, like the dispersion relations. For generalizations, one still relies mostly on perturbation theory. For the purpose of using the powerful methods of the theory of functions of several complex variables for the proof of non-forward dispersion relations, and for analytic properties of other Greens functions, Oehme formulated and proved a fundamental theorem which he called the “Edge of the Wedge Theorem” (“Keilkanten Theorem”). This work was done mainly in the Fall of 1956 at the Institute for Advanced Study in collaboration with Hans-Joachim Bremermann and John G. Taylor. Using microscopic causality and spectral properties, the BOT theorem provides an initial region of analyticity, which can be enlarged by "Analytic Completion". Oehme first presented these results at the Princeton University Colloquium during the winter semester 1956/57. Independently, a different and elaborate proof of non-forward dispersion relations has been published by Nikolay Bogoliubov and collaborators. The Edge of the Wedge Theorem of BOT has many other applications. For example, it can be used to show that, in the presence of (spontaneous) violations of Lorentz invariance, micro-causality (locality), together with positivity of the energy, implies Lorentz invariance of the energy- momentum spectrum. Together with Marvin L. Goldberger and Yoichiro Nambu, Oehme also has formulated dispersion relations for nucleon-nucleon scattering. Charge Conjugation Non-Conservation On August 7, 1956, Oehme wrote a letter to C.N. Yang in which it is shown that weak interactions must violate charge conjugation conservation in the event of a positive outcome of the polarization experiment in beta-decay. Since parity conservation leads to the same restrictions, he points out that C and P must BOTH be violated in order to get an asymmetry. Hence, at the level of ordinary weak interactions, CP is the relevant symmetry, and not C and P individually. 
Violation of C is one of the fundamental conditions for the matter-antimatter asymmetry of the Universe. The results of Oehme form the basis for the later experimental effort to study CP Symmetry, and the fundamental discovery of non-conservation at a lower level of interaction strength. As indicated above, the letter is reprinted in the book on Selected Papers by C.N. Yang. Prompted by the letter, T. D. Lee, R. Oehme and C. N. Yang provided a detailed discussion of the interplay of non-invariance under P, C and T, and of applications to the Kaon–anti-Kaon complex. Their results are of importance for the description of the CP violation discovered later. In their paper the authors already consider non-invariance under T (time reversal) and hence, given the assumption of CPT symmetry, also under CP. Propagators and OZ Superconvergence Relations In connection with an exact structure analysis for gauge theory propagators, undertaken by Oehme in collaboration with Wolfhart Zimmermann, he obtained "Superconvergence Relations" for theories where the number of matter fields (flavors) is below a given limit. These "Oehme-Zimmermann Relations" provide a link between long- and short-distance properties of the theory. They are of importance for gluon confinement. These results about propagators depend essentially only upon general principles. Reduction of Quantum Field Theories As a general method of imposing restrictions on quantum field theories with several parameters, Oehme and Zimmermann have introduced a theory of reduction of coupling constants. This method is based upon the renormalization group, and is more general than the imposition of symmetries. There are solutions of the reduction equations which do not correspond to additional symmetries, but may be related to other characteristic aspects of the theory. On the other hand, supersymmetric theories do come out as possible solutions. This is an important example for the appearance of supersymmetry without imposing it explicitly. The reduction theory is finding many applications, theoretical and phenomenological. Other contributions Further contributions by Oehme, like those involving complex angular momentum, Rising Cross sections, Broken Symmetries, Current algebras and Weak Interactions, as well as chapters in books, may be found in: (http://home.uchicago.edu/~roehme/). External links Notes and references University of Chicago faculty 20th-century German physicists Mathematical physicists Theoretical physicists People associated with CERN 1928 births 2010 deaths Scientists from Wiesbaden Humboldt Research Award recipients 20th-century American physicists
Reinhard Oehme
Physics
2,066