id: int64 (580 to 79M)
url: string (lengths 31 to 175)
text: string (lengths 9 to 245k)
source: string (lengths 1 to 109)
categories: string (160 classes)
token_count: int64 (3 to 51.8k)
30,436
https://en.wikipedia.org/wiki/Theory%20of%20everything
A theory of everything (TOE), final theory, ultimate theory, unified field theory, or master theory is a hypothetical singular, all-encompassing, coherent theoretical framework of physics that fully explains and links together all aspects of the universe. Finding a theory of everything is one of the major unsolved problems in physics. Over the past few centuries, two theoretical frameworks have been developed that, together, most closely resemble a theory of everything. These two theories upon which all modern physics rests are general relativity and quantum mechanics. General relativity is a theoretical framework that only focuses on gravity for understanding the universe in regions of both large scale and high mass: planets, stars, galaxies, clusters of galaxies, etc. On the other hand, quantum mechanics is a theoretical framework that focuses primarily on three non-gravitational forces for understanding the universe in regions of both very small scale and low mass: subatomic particles, atoms, and molecules. Quantum mechanics successfully implemented the Standard Model that describes the three non-gravitational forces: strong nuclear, weak nuclear, and electromagnetic force – as well as all observed elementary particles. General relativity and quantum mechanics have been repeatedly validated in their separate fields of relevance. Since the usual domains of applicability of general relativity and quantum mechanics are so different, most situations require that only one of the two theories be used. The two theories are considered incompatible in regions of extremely small scale – the Planck scale – such as those that exist within a black hole or during the beginning stages of the universe (i.e., the moment immediately following the Big Bang). To resolve the incompatibility, a theoretical framework revealing a deeper underlying reality, unifying gravity with the other three interactions, must be discovered to harmoniously integrate the realms of general relativity and quantum mechanics into a seamless whole: a theory of everything may be defined as a comprehensive theory that, in principle, would be capable of describing all physical phenomena in the universe. In pursuit of this goal, quantum gravity has become one area of active research. One example is string theory, which evolved into a candidate for the theory of everything, but not without drawbacks (most notably, its apparent lack of currently testable predictions) and controversy. String theory posits that at the beginning of the universe (up to 10⁻⁴³ seconds after the Big Bang), the four fundamental forces were once a single fundamental force. According to string theory, every particle in the universe, at its most ultramicroscopic level (Planck length), consists of varying combinations of vibrating strings (or strands) with preferred patterns of vibration. String theory further claims that it is through these specific oscillatory patterns of strings that a particle of unique mass and force charge is created (that is to say, the electron is a type of string that vibrates one way, while the up quark is a type of string vibrating another way, and so forth). String theory/M-theory proposes six or seven dimensions of spacetime in addition to the four common dimensions for a ten- or eleven-dimensional spacetime. Name Initially, the term theory of everything was used with an ironic reference to various overgeneralized theories.
For example, a grandfather of Ijon Tichy – a character from a cycle of Stanisław Lem's science fiction stories of the 1960s – was known to work on the "General Theory of Everything". Physicist Harald Fritzsch used the term in his 1977 lectures in Varenna. Physicist John Ellis claims to have introduced the acronym "TOE" into the technical literature in an article in Nature in 1986. Over time, the term stuck in popularizations of theoretical physics research. Historical antecedents Antiquity to 19th century Many ancient cultures such as Babylonian astronomers and Indian astronomy studied the pattern of the Seven Sacred Luminaires/Classical Planets against the background of stars, with their interest being to relate celestial movement to human events (astrology), and the goal being to predict events by recording events against a time measure and then look for recurrent patterns. The debate between the universe having either a beginning or eternal cycles can be traced to ancient Babylonia. Hindu cosmology posits that time is infinite with a cyclic universe, where the current universe was preceded and will be followed by an infinite number of universes. Time scales mentioned in Hindu cosmology correspond to those of modern scientific cosmology. Its cycles run from our ordinary day and night to a day and night of Brahma, 8.64 billion years long. The natural philosophy of atomism appeared in several ancient traditions. In ancient Greek philosophy, the pre-Socratic philosophers speculated that the apparent diversity of observed phenomena was due to a single type of interaction, namely the motions and collisions of atoms. The concept of 'atom' proposed by Democritus was an early philosophical attempt to unify phenomena observed in nature. The concept of 'atom' also appeared in the Nyaya-Vaisheshika school of ancient Indian philosophy. Archimedes was possibly the first philosopher to have described nature with axioms (or principles) and then deduce new results from them. Any "theory of everything" is similarly expected to be based on axioms and to deduce all observable phenomena from them. Following earlier atomistic thought, the mechanical philosophy of the 17th century posited that all forces could be ultimately reduced to contact forces between the atoms, then imagined as tiny solid particles. In the late 17th century, Isaac Newton's description of the long-distance force of gravity implied that not all forces in nature result from things coming into contact. Newton's work in his Mathematical Principles of Natural Philosophy dealt with this in a further example of unification, in this case unifying Galileo's work on terrestrial gravity, Kepler's laws of planetary motion and the phenomenon of tides by explaining these apparent actions at a distance under one single law: the law of universal gravitation. Newton achieved the first great unification in physics, and he further is credited with laying the foundations of future endeavors for a grand unified theory. In 1814, building on these results, Laplace famously suggested that a sufficiently powerful intellect could, if it knew the position and velocity of every particle at a given time, along with the laws of nature, calculate the position of any particle at any other time: Laplace thus envisaged a combination of gravitation and mechanics as a theory of everything. 
Modern quantum mechanics implies that uncertainty is inescapable, and thus that Laplace's vision has to be amended: a theory of everything must include gravitation and quantum mechanics. Even ignoring quantum mechanics, chaos theory is sufficient to guarantee that the future of any sufficiently complex mechanical or astronomical system is unpredictable. In 1820, Hans Christian Ørsted discovered a connection between electricity and magnetism, triggering decades of work that culminated in 1865 in James Clerk Maxwell's theory of electromagnetism, which achieved the second great unification in physics. During the 19th and early 20th centuries, it gradually became apparent that many common examples of forces – contact forces, elasticity, viscosity, friction, and pressure – result from electrical interactions between the smallest particles of matter. In his experiments of 1849–1850, Michael Faraday was the first to search for a unification of gravity with electricity and magnetism. However, he found no connection. In 1900, David Hilbert published a famous list of mathematical problems. In Hilbert's sixth problem, he challenged researchers to find an axiomatic basis to all of physics. In this problem he thus asked for what today would be called a theory of everything. Early 20th century In the late 1920s, the then new quantum mechanics showed that the chemical bonds between atoms were examples of (quantum) electrical forces, justifying Dirac's boast that "the underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known". After 1915, when Albert Einstein published the theory of gravity (general relativity), the search for a unified field theory combining gravity with electromagnetism began with a renewed interest. In Einstein's day, the strong and the weak forces had not yet been discovered, yet he found the existence of two distinct forces, gravity and electromagnetism, far more alluring. This launched his 40-year voyage in search of the so-called "unified field theory" that he hoped would show that these two forces are really manifestations of one grand, underlying principle. During the last few decades of his life, this ambition alienated Einstein from the mainstream of physics, as the mainstream was instead far more excited about the emerging framework of quantum mechanics. Einstein wrote to a friend in the early 1940s, "I have become a lonely old chap who is mainly known because he doesn't wear socks and who is exhibited as a curiosity on special occasions." Prominent contributors were Gunnar Nordström, Hermann Weyl, Arthur Eddington, David Hilbert, Theodor Kaluza, Oskar Klein (see Kaluza–Klein theory), and most notably, Albert Einstein and his collaborators. Einstein searched in earnest for, but ultimately failed to find, a unifying theory (see Einstein–Maxwell–Dirac equations). Late 20th century and the nuclear interactions In the 20th century, the search for a unifying theory was interrupted by the discovery of the strong and weak nuclear forces, which differ both from gravity and from electromagnetism. A further hurdle was the acceptance that in a theory of everything, quantum mechanics had to be incorporated from the outset, rather than emerging as a consequence of a deterministic unified theory, as Einstein had hoped.
Gravity and electromagnetism are able to coexist as entries in a list of classical forces, but for many years it seemed that gravity could not be incorporated into the quantum framework, let alone unified with the other fundamental forces. For this reason, work on unification, for much of the 20th century, focused on understanding the three forces described by quantum mechanics: electromagnetism and the weak and strong forces. The first two were combined in 1967–1968 by Sheldon Glashow, Steven Weinberg, and Abdus Salam into the electroweak force. Electroweak unification is a broken symmetry: the electromagnetic and weak forces appear distinct at low energies because the particles carrying the weak force, the W and Z bosons, have non-zero masses (about 80 GeV/c² and 91 GeV/c², respectively), whereas the photon, which carries the electromagnetic force, is massless. At higher energies W bosons and Z bosons can be created easily and the unified nature of the force becomes apparent. While the strong and electroweak forces coexist under the Standard Model of particle physics, they remain distinct. Thus, the pursuit of a theory of everything remained unsuccessful: neither a unification of the strong and electroweak forces – which Laplace would have called 'contact forces' – nor a unification of these forces with gravitation had been achieved. Modern physics Conventional sequence of theories A theory of everything would unify all the fundamental interactions of nature: gravitation, the strong interaction, the weak interaction, and electromagnetism. Because the weak interaction can transform elementary particles from one kind into another, the theory of everything should also predict all the different kinds of particles possible. The usual assumed path of theories is given in the following graph, where each unification step leads one level up on the graph. In this graph, electroweak unification occurs at around 100 GeV, grand unification is predicted to occur at 10¹⁶ GeV, and unification of the GUT force with gravity is expected at the Planck energy, roughly 10¹⁹ GeV. Several Grand Unified Theories (GUTs) have been proposed to unify electromagnetism and the weak and strong forces. Grand unification would imply the existence of an electronuclear force; it is expected to set in at energies of the order of 10¹⁶ GeV, far greater than could be reached by any currently feasible particle accelerator. Although the simplest grand unified theories have been experimentally ruled out, the idea of a grand unified theory, especially when linked with supersymmetry, remains a favorite candidate in the theoretical physics community. Supersymmetric grand unified theories seem plausible not only for their theoretical "beauty", but because they naturally produce large quantities of dark matter, and because the inflationary force may be related to grand unified theory physics (although it does not seem to form an inevitable part of the theory). Yet grand unified theories are clearly not the final answer; both the current standard model and all proposed GUTs are quantum field theories which require the problematic technique of renormalization to yield sensible answers. This is usually regarded as a sign that these are only effective field theories, omitting crucial phenomena relevant only at very high energies. The final step in the graph requires resolving the separation between quantum mechanics and gravitation, often equated with general relativity.
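As a numerical aside on the unification scales just quoted, the Planck energy can be estimated directly from fundamental constants via E_P = sqrt(ħc⁵/G). The short Python sketch below, added here as an illustration rather than taken from the article, carries out that arithmetic with standard values.

```python
import math

# Fundamental constants in SI units (standard CODATA-style values).
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
eV = 1.602176634e-19     # joules per electronvolt

# Planck energy: E_P = sqrt(hbar * c^5 / G)
E_planck_J = math.sqrt(hbar * c**5 / G)
E_planck_GeV = E_planck_J / eV / 1e9

print(f"Planck energy ~ {E_planck_GeV:.2e} GeV")  # about 1.2e19 GeV
# Compare with the scales in the text: electroweak unification ~1e2 GeV,
# grand unification ~1e16 GeV.
```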
Numerous researchers concentrate their efforts on this specific step; nevertheless, no accepted theory of quantum gravity, and thus no accepted theory of everything, has emerged with observational evidence. It is usually assumed that the theory of everything will also solve the remaining problems of grand unified theories. In addition to explaining the forces listed in the graph, a theory of everything may also explain the status of at least two candidate forces suggested by modern cosmology: an inflationary force and dark energy. Furthermore, cosmological experiments also suggest the existence of dark matter, supposedly composed of fundamental particles outside the scheme of the standard model. However, the existence of these forces and particles has not been proven. String theory and M-theory Since the 1990s, some physicists such as Edward Witten believe that 11-dimensional M-theory, which is described in some limits by one of the five perturbative superstring theories, and in another by the maximally-supersymmetric eleven-dimensional supergravity, is the theory of everything. There is no widespread consensus on this issue. One remarkable property of string/M-theory is that seven extra dimensions are required for the theory's consistency, on top of the four dimensions in our universe. In this regard, string theory can be seen as building on the insights of the Kaluza–Klein theory, in which it was realized that applying general relativity to a 5-dimensional universe, with one space dimension small and curled up, looks from the 4-dimensional perspective like the usual general relativity together with Maxwell's electrodynamics. This lent credence to the idea of unifying gauge and gravity interactions, and to extra dimensions, but did not address the detailed experimental requirements. Another important property of string theory is its supersymmetry, which together with extra dimensions are the two main proposals for resolving the hierarchy problem of the standard model, which is (roughly) the question of why gravity is so much weaker than any other force. The extra-dimensional solution involves allowing gravity to propagate into the other dimensions while keeping other forces confined to a 4-dimensional spacetime, an idea that has been realized with explicit stringy mechanisms. Research into string theory has been encouraged by a variety of theoretical and experimental factors. On the experimental side, the particle content of the standard model supplemented with neutrino masses fits into a spinor representation of SO(10), a subgroup of E8 that routinely emerges in string theory, such as in heterotic string theory or (sometimes equivalently) in F-theory. String theory has mechanisms that may explain why fermions come in three hierarchical generations, and explain the mixing rates between quark generations. On the theoretical side, it has begun to address some of the key questions in quantum gravity, such as resolving the black hole information paradox, counting the correct entropy of black holes and allowing for topology-changing processes. It has also led to many insights in pure mathematics and in ordinary, strongly-coupled gauge theory due to the Gauge/String duality. In the late 1990s, it was noted that one major hurdle in this endeavor is that the number of possible 4-dimensional universes is incredibly large. 
The small, "curled up" extra dimensions can be compactified in an enormous number of different ways (one estimate is 10⁵⁰⁰), each of which leads to different properties for the low-energy particles and forces. This array of models is known as the string theory landscape. One proposed solution is that many or all of these possibilities are realized in one or another of a huge number of universes, but that only a small number of them are habitable. Hence what we normally conceive as the fundamental constants of the universe are ultimately the result of the anthropic principle rather than dictated by theory. This has led to criticism of string theory, arguing that it cannot make useful (i.e., original, falsifiable, and verifiable) predictions and regarding it as a pseudoscience/philosophy. Others disagree, and string theory remains an active topic of investigation in theoretical physics. Loop quantum gravity Current research on loop quantum gravity may eventually play a fundamental role in a theory of everything, but that is not its primary aim. Loop quantum gravity also introduces a lower bound on the possible length scales. There have been recent claims that loop quantum gravity may be able to reproduce features resembling the Standard Model. So far only the first generation of fermions (leptons and quarks) with correct parity properties has been modelled by Sundance Bilson-Thompson using preons constituted of braids of spacetime as the building blocks. However, there is no derivation of the Lagrangian that would describe the interactions of such particles, nor is it possible to show that such particles are fermions, nor that the gauge groups or interactions of the Standard Model are realised. Use of quantum computing concepts made it possible to demonstrate that the particles are able to survive quantum fluctuations. This model leads to an interpretation of electric and color charge as topological quantities (electric as number and chirality of twists carried on the individual ribbons and colour as variants of such twisting for fixed electric charge). Bilson-Thompson's original paper suggested that the higher-generation fermions could be represented by more complicated braidings, although explicit constructions of these structures were not given. The electric charge, color, and parity properties of such fermions would arise in the same way as for the first generation. The model was expressly generalized for an infinite number of generations and for the weak force bosons (but not for photons or gluons) in a 2008 paper by Bilson-Thompson, Hackett, Kauffman and Smolin. Other attempts Among other attempts to develop a theory of everything is the theory of causal fermion systems, giving the two current physical theories (general relativity and quantum field theory) as limiting cases. Another theory is called Causal Sets. Like some of the approaches mentioned above, its direct goal isn't necessarily to achieve a theory of everything but primarily a working theory of quantum gravity, which might eventually include the standard model and become a candidate for a theory of everything. Its founding principle is that spacetime is fundamentally discrete and that the spacetime events are related by a partial order. This partial order has the physical meaning of the causality relations between relative past and future distinguishing spacetime events.
Causal dynamical triangulation does not assume any pre-existing arena (dimensional space), but rather attempts to show how the spacetime fabric itself evolves. Another attempt may be related to ER=EPR, a conjecture in physics stating that entangled particles are connected by a wormhole (or Einstein–Rosen bridge). Present status At present, there is no candidate theory of everything that includes the standard model of particle physics and general relativity and that, at the same time, is able to calculate the fine-structure constant or the mass of the electron. Most particle physicists expect that the outcome of ongoing experiments – the search for new particles at the large particle accelerators and for dark matter – are needed in order to provide further input for a theory of everything. Arguments against In parallel to the intense search for a theory of everything, various scholars have debated the possibility of its discovery. Gödel's incompleteness theorem A number of scholars claim that Gödel's incompleteness theorem suggests that attempts to construct a theory of everything are bound to fail. Gödel's theorem, informally stated, asserts that any formal theory sufficient to express elementary arithmetical facts and strong enough for them to be proved is either inconsistent (both a statement and its denial can be derived from its axioms) or incomplete, in the sense that there is a true statement that can't be derived in the formal theory. Stanley Jaki, in his 1966 book The Relevance of Physics, pointed out that, because a "theory of everything" will certainly be a consistent non-trivial mathematical theory, it must be incomplete. He claims that this dooms searches for a deterministic theory of everything. Freeman Dyson has stated that "Gödel's theorem implies that pure mathematics is inexhaustible. No matter how many problems we solve, there will always be other problems that cannot be solved within the existing rules. […] Because of Gödel's theorem, physics is inexhaustible too. The laws of physics are a finite set of rules, and include the rules for doing mathematics, so that Gödel's theorem applies to them." Stephen Hawking was originally a believer in the Theory of Everything, but after considering Gödel's Theorem, he concluded that one was not obtainable. "Some people will be very disappointed if there is not an ultimate theory that can be formulated as a finite number of principles. I used to belong to that camp, but I have changed my mind." Jürgen Schmidhuber (1997) has argued against this view; he asserts that Gödel's theorems are irrelevant for computable physics. In 2000, Schmidhuber explicitly constructed limit-computable, deterministic universes whose pseudo-randomness based on undecidable, Gödel-like halting problems is extremely hard to detect but does not prevent formal theories of everything describable by very few bits of information. Related critique was offered by Solomon Feferman and others. Douglas S. Robertson offers Conway's game of life as an example: The underlying rules are simple and complete, but there are formally undecidable questions about the game's behaviors. Analogously, it may (or may not) be possible to completely state the underlying rules of physics with a finite number of well-defined laws, but there is little doubt that there are questions about the behavior of physical systems which are formally undecidable on the basis of those underlying laws. 
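To make Robertson's Game of Life analogy concrete: the complete rule set fits in a few lines of code, yet general questions about the long-term behaviour of patterns are formally undecidable. The Python sketch below is an illustrative implementation of those rules, added for this discussion rather than drawn from the article.

```python
from collections import Counter

def life_step(live_cells):
    """Advance Conway's Game of Life by one generation.

    live_cells is a set of (x, y) coordinates on an unbounded grid.
    The complete rules: a live cell with 2 or 3 live neighbours survives;
    a dead cell with exactly 3 live neighbours becomes alive.
    """
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A glider: the rules are simple and complete, yet whether an arbitrary
# starting pattern ever dies out is, in general, undecidable.
pattern = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    pattern = life_step(pattern)
print(sorted(pattern))
```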
Since most physicists would consider the statement of the underlying rules to suffice as the definition of a "theory of everything", most physicists argue that Gödel's Theorem does not mean that a theory of everything cannot exist. On the other hand, the scholars invoking Gödel's Theorem appear, at least in some cases, to be referring not to the underlying rules, but to the understandability of the behavior of all physical systems, as when Hawking mentions arranging blocks into rectangles, turning the computation of prime numbers into a physical question. This definitional discrepancy may explain some of the disagreement among researchers. Fundamental limits in accuracy No physical theory to date is believed to be precisely accurate. Instead, physics has proceeded by a series of "successive approximations" allowing more and more accurate predictions over a wider and wider range of phenomena. Some physicists believe that it is therefore a mistake to confuse theoretical models with the true nature of reality, and hold that the series of approximations will never terminate in the "truth". Einstein himself expressed this view on occasions. Definition of fundamental laws There is a philosophical debate within the physics community as to whether a theory of everything deserves to be called the fundamental law of the universe. One view is the hard reductionist position that the theory of everything is the fundamental law and that all other theories that apply within the universe are a consequence of the theory of everything. Another view is that emergent laws, which govern the behavior of complex systems, should be seen as equally fundamental. Examples of emergent laws are the second law of thermodynamics and the theory of natural selection. The advocates of emergence argue that emergent laws, especially those describing complex or living systems are independent of the low-level, microscopic laws. In this view, emergent laws are as fundamental as a theory of everything. The debates do not make the point at issue clear. Possibly the only issue at stake is the right to apply the high-status term "fundamental" to the respective subjects of research. A well-known debate over this took place between Steven Weinberg and Philip Anderson. Impossibility of calculation Weinberg points out that calculating the precise motion of an actual projectile in the Earth's atmosphere is impossible. So how can we know we have an adequate theory for describing the motion of projectiles? Weinberg suggests that we know principles (Newton's laws of motion and gravitation) that work "well enough" for simple examples, like the motion of planets in empty space. These principles have worked so well on simple examples that we can be reasonably confident they will work for more complex examples. For example, although general relativity includes equations that do not have exact solutions, it is widely accepted as a valid theory because all of its equations with exact solutions have been experimentally verified. Likewise, a theory of everything must work for a wide range of simple examples in such a way that we can be reasonably confident it will work for every situation in physics. Difficulties in creating a theory of everything often begin to appear when combining quantum mechanics with the theory of general relativity, as the equations of quantum mechanics begin to falter when the force of gravity is applied to them. 
See also (SVT) References Bibliography Pais, Abraham (1982) Subtle is the Lord: The Science and the Life of Albert Einstein (Oxford University Press, Oxford), Ch. 17. Weinberg, Steven (1993) Dreams of a Final Theory: The Search for the Fundamental Laws of Nature, Hutchinson Radius, London. Powell, Corey S. (2015) Relativity versus quantum mechanics: the battle for the universe, The Guardian. External links The Elegant Universe, Nova episode about the search for the theory of everything and string theory. Theory of Everything, freeview video by the Vega Science Trust, BBC and Open University. The Theory of Everything: Are we getting closer, or is a final theory of matter and the universe impossible? Debate between John Ellis (physicist), Frank Close and Nicholas Maxwell. Why The World Exists, a discussion between physicist Laura Mersini-Houghton, cosmologist George Francis Rayner Ellis and philosopher David Wallace about dark matter, parallel universes and explaining why these and the present Universe exist. Theories of Everything, BBC Radio 4 discussion with Brian Greene, John Barrow & Val Gibson (In Our Time, March 25, 2004). Physics beyond the Standard Model Theories of gravity
Theory of everything
Physics
5,554
64,051,792
https://en.wikipedia.org/wiki/Isovoacristine
Isovoacristine is an anticholinergic and antihistaminic alkaloid. See also Benztropine Benzydamine Chlorpheniramine References Indole alkaloids Heterocyclic compounds with 5 rings Methyl esters Methoxy compounds Azepanes
Isovoacristine
Chemistry
63
39,524,440
https://en.wikipedia.org/wiki/Barber-Colman%20knotter
A Barber-Colman knotter is a piece of textile machinery used in a weaving shed. When all the warp carried on the weaver's beam has been used, a new beam replaces it. Each end has to pass through the eyes on the existing heddles, and through the existing reed. The knotter takes each new thread and knots it to the existing end, which will pull it through the correct heddles and reed, saving much time. A good man could do 32 or 33 warps a day. See also Barber-Colman Company References Textile machinery Weaving equipment
Barber-Colman knotter
Engineering
117
36,114,320
https://en.wikipedia.org/wiki/Device%20Description%20Language
Device Description Language (DDL) is the formal language describing the service and configuration of field devices for process and factory automation. Background Current field devices for process and factory automation have a number of configuration options to customize them to their individual use case. To this end, they are equipped with a digital communication interface (HART, PROFIBUS, Fieldbus Foundation). Different software tools provide the means to control and configure the devices. In the 1990s, the DDL was developed to remove the requirement to write a new software tool for each new device type. Software can, through the interpretation of a device description (DD), configure and control many different devices. The creation of a description with the DDL is less effort than writing an entire software tool. The HART Communication Foundation, PROFIBUS and Fieldbus Foundation have merged their individual dialects of the DDL. The result became the Electronic Device Description Language (EDDL), an IEC standard (IEC 61804). The harmonization and enhancement of the EDDL is being undertaken in the EDDL Cooperation Team (ECT). The ECT consists of the leadership of the Fieldbus Foundation, Profibus Nutzerorganisation (PNO), Hart Communication Foundation, OPC Foundation and the FDT Group. Structure of the DDL The DDL describes: Data (e.g. Parameters) Communication (e.g. Addressing Information) User Interfaces Operations (e.g. Calibration) Software A device description (DD) can be created with a plain text editor. But like any other programming or description language, the authoring is error-prone, and as such special development tools may be used to create valid and norm-conforming EDDs. The following tools assist the creation of EDDs: isEDD Workbench and EDD Checker (ifak e.V. Magdeburg) - Norm conformity to the IEC 61804 parts 3, 4 and 5 The following control and configuration tools interpret the DDL: SIMATIC PDM - "The Process Device Manager" (Siemens) AMS Intelligent Device Manager (Emerson Process Management) SDC 625 (HART) FDM (Honeywell) isEDDview DTM (ifak system GmbH) iDTM for FieldCare (Endress+Hauser) DevCom2000 (ProComSol, Ltd) DevComDroid (ProComSol, Ltd) DevCom.iOS (ProComSol, Ltd) References Riedl, M.; Naumann, F.: EDDL - Electronic Device Description Language. Walter Borst: Device Description Language - The HTML of fieldbuses, Technical Article External links Official Website of the EDDL Industrial automation
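As a rough illustration of the idea described above (and not actual DDL/EDDL syntax, whose details are defined in IEC 61804), the Python sketch below models the four kinds of information a device description carries and how a generic tool could interpret it instead of hard-coding each device type. Every name and value here is hypothetical.

```python
# Illustrative sketch only: NOT real DDL/EDDL syntax, just a Python analogy
# of the information the article says a device description carries.
pressure_sensor_dd = {
    "data": {                       # parameters the device exposes
        "upper_range_limit": {"type": "float", "unit": "bar", "default": 10.0},
        "damping":           {"type": "float", "unit": "s",   "default": 0.5},
    },
    "communication": {              # addressing information
        "protocol": "HART", "polling_address": 0,
    },
    "user_interface": {             # how a tool should present the parameters
        "menu": ["upper_range_limit", "damping"],
    },
    "operations": {                 # e.g. calibration routines
        "calibrate_zero": "apply zero pressure, then store offset",
    },
}

def configure_device(dd, parameter, value):
    """A generic tool interprets the description rather than hard-coding
    knowledge about each device type (hypothetical helper)."""
    spec = dd["data"][parameter]
    print(f"Writing {parameter}={value} {spec['unit']} "
          f"via {dd['communication']['protocol']}")

configure_device(pressure_sensor_dd, "damping", 1.0)
```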
Device Description Language
Engineering
558
75,858,640
https://en.wikipedia.org/wiki/Dysprosium%28III%29%20phosphate
Dysprosium(III) phosphate is an inorganic compound with the chemical formula DyPO4. Preparation Dysprosium(III) phosphate can be obtained by reacting dysprosium(III) oxide and ammonium dihydrogen phosphate at high temperature: Dy2O3 + 2 NH4H2PO4 → 2 DyPO4 + 2 NH3 + 3 H2O. Properties Dysprosium(III) phosphate decomposes into dysprosium oxyphosphate and phosphorus pentoxide above 1200 °C. It reacts with sodium fluoride to obtain NaDyFPO4: DyPO4 + NaF → NaDyFPO4. It reacts with sodium molybdate at high temperature to generate Na2Dy(MoO4)(PO4): DyPO4 + Na2MoO4 → Na2Dy(MoO4)(PO4). References Dysprosium compounds Phosphates
Dysprosium(III) phosphate
Chemistry
142
274,668
https://en.wikipedia.org/wiki/Land%20letter
The Land letter was a letter sent to U.S. President George W. Bush by five evangelical Christian leaders on October 3, 2002, outlining their support for a just war pre-emptive invasion of Iraq. As its foundation for support, the letter refers to the "criteria of just war theory as developed by Christian theologians in the late fourth and early fifth centuries A.D." The letter was written by Richard D. Land, president of the Ethics & Religious Liberty Commission of the Southern Baptist Convention. It was co-signed by: Chuck Colson, founder of Prison Fellowship Ministries Bill Bright, chairman of the Christian organization Cru James Kennedy, president of Coral Ridge Ministries, and Carl D. Herbster, president of the American Association of Christian Schools The letter asserted that a pre-emptive invasion of Iraq met the criteria of traditional 'just war' theory because: such an action would be defensive the intent is found to be just and noble. The United States does not intend to 'destroy, conquer, or exploit Iraq' it is a last resort because Saddam Hussein had a record of attacking his neighbors, of the 'headlong pursuit and development of biochemical and nuclear weapons of mass destruction' and their use against his own people, and harboring Al-Qaeda in Iraq terrorists it is authorized by a legitimate authority, namely the United States it has limited goals it has reasonable expectation of success non-combatant immunity would be observed it meets the criteria of proportionality—the human cost on both sides would be justified by the intended outcome See also Religious opposition to the Iraq War References External links Text of the Land letter Causes and prelude of the Iraq War Reactions to the Iraq War Christianity and violence Evangelical documents 2002 documents Just war theory Iraq War documents
Land letter
Biology
354
9,631,094
https://en.wikipedia.org/wiki/Small%20temporal%20RNA
Small temporal RNA (abbreviated stRNA) regulates gene expression during roundworm development by preventing the mRNAs they bind from being translated. In contrast to siRNA, stRNAs downregulate expression of target RNAs after translation initiation without affecting mRNA stability. Nowadays, stRNAs are better known as miRNAs. stRNAs exert negative post-transcriptional regulation by binding to complementary sequences in the 3' untranslated regions of their target genes. stRNAs are transcribed as longer precursor RNAs that are processed by the RNase Dicer/DCR-1 and members of the RDE-1/AGO1 family of proteins, which are better known for their roles in RNA interference (RNAi). stRNAs may function to control temporal identity during development in C. elegans and other organisms. References RNA RNA interference
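Since stRNAs act by base-pairing with complementary sequences in the 3' untranslated regions of their targets, a toy way to picture that matching is a reverse-complement "seed" search. The Python sketch below is only a schematic of the idea, using an illustrative let-7-like sequence and an invented UTR; it is not a model of the real binding rules.

```python
def reverse_complement(rna):
    """Watson-Crick complement of an RNA string, read 3'->5' to 5'->3'."""
    pairs = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(pairs[base] for base in reversed(rna))

def has_seed_match(st_rna, utr_3prime, seed=slice(1, 8)):
    """Check whether the 3' UTR contains a site complementary to the
    stRNA 'seed' region (toy definition: bases 2-8)."""
    target_site = reverse_complement(st_rna[seed])
    return target_site in utr_3prime

# Hypothetical sequences for illustration only.
st_rna = "UGAGGUAGUAGGUUGUAUAGUU"      # let-7-like small RNA
utr = "AAACUAUACAACCUACUACCUCAAAA"      # invented 3' UTR fragment
print(has_seed_match(st_rna, utr))      # True: a complementary site exists
```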
Small temporal RNA
Chemistry
175
747,222
https://en.wikipedia.org/wiki/Silicon%20on%20sapphire
Silicon on sapphire (SOS) is a hetero-epitaxial process for metal–oxide–semiconductor (MOS) integrated circuit (IC) manufacturing that consists of a thin layer (typically thinner than 0.6 μm) of silicon grown on a sapphire (Al2O3) wafer. SOS is part of the silicon-on-insulator (SOI) family of CMOS (complementary MOS) technologies. Typically, high-purity artificially grown sapphire crystals are used. The silicon is usually deposited by the decomposition of silane gas (SiH4) on heated sapphire substrates. The advantage of sapphire is that it is an excellent electrical insulator, preventing stray currents caused by radiation from spreading to nearby circuit elements. SOS faced early challenges in commercial manufacturing because of difficulties in fabricating the very small transistors used in modern high-density applications. This is because the SOS process results in the formation of dislocations, twinning and stacking faults from crystal lattice disparities between the sapphire and silicon. Additionally, there is some aluminum (a p-type dopant) contamination from the substrate in the silicon closest to the interface. History In 1963, Harold M. Manasevit was the first to document epitaxial growth of silicon on sapphire while working at the Autonetics division of North American Aviation (now Boeing). In 1964, he published his findings with colleague William Simpson in the Journal of Applied Physics. In 1965, C.W. Mueller and P.H. Robinson fabricated a MOSFET (metal–oxide–semiconductor field-effect transistor) using the silicon-on-sapphire process at RCA Laboratories. SOS was first used in aerospace and military applications because of its inherent resistance to radiation. More recently, patented advancements in SOS processing and design have been made by Peregrine Semiconductor, allowing SOS to be commercialized in high volume for high-performance radio-frequency (RF) applications. Circuits and systems The advantages of the SOS technology allow research groups to fabricate a variety of SOS circuits and systems that benefit from the technology and advance the state-of-the-art in: analog-to-digital converters (a nano-Watts prototype was produced by Yale e-Lab) monolithic digital isolation buffers SOS-CMOS image sensor arrays (one of the first standard CMOS image sensor arrays capable of transducing light simultaneously from both sides of the die was produced by Yale e-Lab) patch-clamp amplifiers energy harvesting devices three-dimensional (3D) integration with no galvanic connections charge pumps temperature sensors early microprocessors, such as the RCA 1802 Applications Silicon on sapphire pressure transducer, pressure transmitter and temperature sensor diaphragms have been manufactured using a patented process by Armen Sahagen since 1985. Outstanding performance in high temperature environments helped propel this technology forward. This SOS technology has been licensed throughout the world. ESI Technology Ltd. in the UK have developed a wide range of pressure transducers and pressure transmitters that benefit from the outstanding features of silicon on sapphire. Peregrine Semiconductor has used SOS technology to develop RF integrated circuits (RFICs) including RF switches, digital step attenuators (DSAs), phase locked-loop (PLL) frequency synthesizers, prescalers, mixers/upconverters, and variable-gain amplifiers.
These RFICs are designed for commercial RF applications such as mobile handsets and cellular infrastructure, broadband consumer and DTV, test and measurement, and industrial public safety, as well as rad-hard aerospace and defense markets. Hewlett-Packard used SOS in some of their CPU designs, particularly in the HP 3000 line of computers. Silicon on sapphire chips produced in the 1970s proved superior in performance to their all silicon counterparts, but this came at the cost of lower yields of just 9%. Substrate analysis: SOS structure The application of epitaxial growth of silicon on sapphire substrates for fabricating MOS devices involves a silicon purification process that mitigates crystal defects which result from a mismatch between sapphire and silicon lattices. For example, Peregrine Semiconductor's SP4T switch is formed on an SOS substrate where the final thickness of silicon is approximately 95 nm. Silicon is recessed in regions outside the polysilicon gate stack by poly oxidation and further recessed by the sidewall spacer formation process to a thickness of approximately 78 nm. See also Silicon on insulator Radiation hardening References Further reading Thin film deposition Semiconductor device fabrication MOSFETs Silicon
Silicon on sapphire
Chemistry,Materials_science,Mathematics
942
2,089,044
https://en.wikipedia.org/wiki/IMSI-catcher
An international mobile subscriber identity-catcher, or IMSI-catcher, is a telephone eavesdropping device used for intercepting mobile phone traffic and tracking location data of mobile phone users. Essentially a "fake" mobile tower acting between the target mobile phone and the service provider's real towers, it is considered a man-in-the-middle (MITM) attack. The 3G wireless standard offers some risk mitigation due to mutual authentication required from both the handset and the network. However, sophisticated attacks may be able to downgrade 3G and LTE to non-LTE network services which do not require mutual authentication. IMSI-catchers are used in a number of countries by law enforcement and intelligence agencies, but their use has raised significant civil liberty and privacy concerns and is strictly regulated in some countries such as under the German Strafprozessordnung (StPO / Code of Criminal Procedure). Some countries do not have encrypted phone data traffic (or very weak encryption), thus rendering an IMSI-catcher unnecessary. Overview A virtual base transceiver station (VBTS) is a device for identifying the temporary mobile subscriber identity (TMSI), international mobile subscriber identity (IMSI) of a nearby GSM mobile phone and intercepting its calls, some are even advanced enough to detect the international mobile equipment identity (IMEI). It was patented and first commercialized by Rohde & Schwarz in 2003. The device can be viewed as simply a modified cell tower with a malicious operator, and on 4 January 2012, the Court of Appeal of England and Wales held that the patent is invalid for obviousness. IMSI-catchers are often deployed by court order without a search warrant, the lower judicial standard of a pen register and trap-and-trace order being preferred by law enforcement. They can also be used in search and rescue operation for missing persons. Police departments have been reluctant to reveal use of these programs and contracts with vendors such as Harris Corporation, the maker of Stingray and Kingfish phone tracker devices. In the UK, the first public body to admit using IMSI catchers was the Scottish Prison Service, though it is likely that the Metropolitan Police Service has been using IMSI catchers since 2011 or before. Body-worn IMSI-catchers that target nearby mobile phones are being advertised to law enforcement agencies in the US. The GSM specification requires the handset to authenticate to the network, but does not require the network to authenticate to the handset. This well-known security hole is exploited by an IMSI catcher. The IMSI catcher masquerades as a base station and logs the IMSI numbers of all the mobile stations in the area, as they attempt to attach to the IMSI-catcher. It allows forcing the mobile phone connected to it to use no call encryption (A5/0 mode) or to use easily breakable encryption (A5/1 or A5/2 mode), making the call data easy to intercept and convert to audio. The 3G wireless standard mitigates risk and enhanced security of the protocol due to mutual authentication required from both the handset and the network and removes the false base station attack in GSM. Some sophisticated attacks against 3G and LTE may be able to downgrade to non-LTE network services which then does not require mutual authentication. Functionalities Identifying an IMSI Every mobile phone has the requirement to optimize its reception. 
If there is more than one base station of the subscribed network operator accessible, it will always choose the one with the strongest signal. An IMSI-catcher masquerades as a base station and causes every mobile phone of the simulated network operator within a defined radius to log in. With the help of a special identity request, it is able to force the transmission of the IMSI. Tapping a mobile phone The IMSI-catcher subjects the phones in its vicinity to a man-in-the-middle attack, appearing to them as a preferred base station in terms of signal strength. With the help of a SIM, it simultaneously logs into the GSM network as a mobile station. Since the encryption mode is chosen by the base station, the IMSI-catcher can induce the mobile station to use no encryption at all. Hence it can encrypt the plain text traffic from the mobile station and pass it to the base station. A targeted mobile phone is sent signals where the user will not be able to tell apart the device from authentic cell service provider infrastructure. This means that the device will be able to retrieve data that a normal cell tower receives from mobile phones if registered. There is only an indirect connection from mobile station via IMSI-catcher to the GSM network. For this reason, incoming phone calls cannot generally be patched through to the mobile station by the GSM network, although more modern versions of these devices have their own mobile patch-through solutions in order to provide this functionality. Passive IMSI detection The difference between a passive IMSI-catcher and an active IMSI-catcher is that an active IMSI-catcher intercepts the data in transfer such as spoke, text, mail, and web traffic between the endpoint and cell tower. Active IMSI-catchers generally also intercept all conversations and data traffic within a large range and are therefore also called rogue cell towers. It sends a signal with a plethora of commands to the endpoints, which respond by establishing a connection and routes all conversations and data traffic between the endpoints and the actual cell tower for as long as the attacker wishes. A passive IMSI-catcher on the other hand only detects the IMSI, TMSI or IMEI of an endpoint. Once the IMSI, TMSI or IMEI address is detected, the endpoint is immediately released. The passive IMSI-catcher sends out a signal with only one specific command to the endpoints, which respond to it and share the identifiers of the endpoint with the passive IMSI-catcher. The vendors of passive IMSI-catchers take privacy more into account. Universal Mobile Telecommunications System (UMTS) False base station attacks are prevented by a combination of key freshness and integrity protection of signaling data, not by authenticating the serving network. To provide a high network coverage, the UMTS standard allows for inter-operation with GSM. Therefore, not only UMTS but also GSM base stations are connected to the UMTS service network. This fallback is a security disadvantage and allows a new possibility of a man-in-the-middle attack. Tell-tales and difficulties The assignment of an IMSI catcher has a number of difficulties: It must be ensured that the mobile phone of the observed person is in standby mode and the correct network operator is found out. Otherwise, for the mobile station, there is no need to log into the simulated base station. Depending on the signal strength of the IMSI-catcher, numerous IMSIs can be located. The problem is to find out the right one. 
All mobile phones in the area covered by the catcher have no access to the network. Incoming and outgoing calls cannot be patched through for these subscribers. Only the observed person has an indirect connection. There are some disclosing factors. In most cases, the operation cannot be recognized immediately by the subscriber. But there are a few mobile phones that show a small symbol on the display, e.g. an exclamation point, if encryption is not used. This "Ciphering Indication Feature" can be suppressed by the network provider, however, by setting the OFM bit in EFAD on the SIM card. Since the network access is handled with the SIM/USIM of the IMSI-catcher, the receiver cannot see the number of the calling party. Of course, this also implies that the tapped calls are not listed in the itemized bill. The assignment near the base station can be difficult, due to the high signal level of the original base station. As most mobile phones prefer the faster modes of communication such as 4G or 3G, downgrading to 2G can require blocking frequency ranges for 4G and 3G. Detection and counter-measures Some preliminary research has been done in trying to detect and frustrate IMSI-catchers. One such project is through the Osmocom open source mobile station software. This is a special type of mobile phone firmware that can be used to detect and fingerprint certain network characteristics of IMSI-catchers, and warn the user that there is such a device operating in their area. But this firmware/software-based detection is strongly limited to a select few, outdated GSM mobile phones (i.e. Motorola) that are no longer available on the open market. The main problem is the closed-source nature of the major mobile phone producers. The application Android IMSI-Catcher Detector (AIMSICD) is being developed to detect and circumvent IMSI-catchers by StingRay and silent SMS. Technology for a stationary network of IMSI-catcher detectors has also been developed. Several apps listed on the Google Play Store as IMSI catcher detector apps include SnoopSnitch, Cell Spy Catcher, and GSM Spy Finder and have between 100,000 and 500,000 app downloads each. However, these apps have limitations in that they do not have access to phone's underlying hardware and may offer only minimal protection. See also Telephone tapping Stingray phone tracker Mobile phone jammer External links Chris Paget's presentation Practical Cellphone Spying at DEF CON 18 Verrimus - Mobile Phone Intercept Detection Footnotes Further reading External links Mobile Phone Networks: a tale of tracking, spoofing and owning mobile phones IMSI-catcher Seminar paper and presentation Mini IMSI and IMEI catcher The OsmocomBB project MicroNet: Proximus LLC GSM IMSI and IMEI dual band catcher MicroNet-U: Proximus LLC UMTS catcher iParanoid: IMSI Catcher Intrusion Detection System presentation Vulnerability by Design in Mobile Network Security Surveillance Mobile security Telephone tapping Telephony equipment Law enforcement equipment
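As a purely illustrative sketch of the kind of heuristics the detection apps above rely on (flagging disabled ciphering, unexpected downgrades to 2G, or unknown cells with unusually strong signal), here is a toy Python check over hypothetical observations. Every field name and threshold is an assumption for the demo, and real apps face the hardware-access limitations already noted.

```python
# Illustrative detection heuristic only; not a real app or API.
KNOWN_CELLS = {("310", "260", 1234), ("310", "260", 1235)}  # (MCC, MNC, cell id)

def suspicious(observation):
    """Return a list of reasons this observed cell looks anomalous."""
    reasons = []
    if observation["cipher"] in ("A5/0", None):            # no or unknown encryption
        reasons.append("ciphering disabled")
    if observation["network_type"] == "2G" and observation["prior_type"] in ("3G", "4G"):
        reasons.append("downgrade from 3G/4G to 2G")
    cell_key = (observation["mcc"], observation["mnc"], observation["cell_id"])
    if cell_key not in KNOWN_CELLS and observation["signal_dbm"] > -60:
        reasons.append("unknown cell with unusually strong signal")
    return reasons

obs = {"cipher": "A5/0", "network_type": "2G", "prior_type": "4G",
       "mcc": "310", "mnc": "260", "cell_id": 9999, "signal_dbm": -55}
print(suspicious(obs))
```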
IMSI-catcher
Technology,Engineering
2,085
55,949,736
https://en.wikipedia.org/wiki/Yukon%20Ice%20Patches
The Yukon Ice Patches are a series of dozens of ice patches in the southern Yukon discovered in 1997, which have preserved hundreds of archaeological artifacts, with some more than 9,000 years old. The first ice patch was discovered on the mountain Thandlät, west of the Kusawa Lake campground which is west of Whitehorse, Yukon. The Yukon Ice Patch Project began shortly afterwards as a partnership between archaeologists and six Yukon First Nations, on whose traditional territory the ice patches were found. They include the Carcross/Tagish First Nation, the Kwanlin Dün First Nation, the Ta’an Kwäch’än Council, the Champagne and Aishihik First Nations, the Kluane First Nation, and the Teslin Tlingit Council. Ice patches Cryologists describe how ice patches, such as the rare Yukon alpine region ice patches, differ from glaciers. The latter are constantly moving; they gradually build up mass over time until they reach a certain size, when they slowly flow downhill. Unlike glaciers, ice patches do not move. As some of the snow remaining from winter accumulation melts, the rest is gradually compressed into ice. Ice patches do not achieve enough mass to flow downhill so any artifacts within are preserved intact without being crushed. History In the 1990s, "during a period of extremely warm summer temperatures" with ice patches melting, the Yukon Ice Patch Project began. In September 1997, Gerald W. Kuzyk discovered the first of the Yukon ice patches artifacts, an atlatl dart fragment, on mountain Thandlät at an elevation of . The Yukon Ice Patches are studied by archaeologists in partnership with six Yukon First Nations, on whose traditional territory the ice patches were found. They include the Carcross/Tagish First Nation, the Kwanlin Dün First Nation, the Ta’an Kwäch’än Council, the Champagne and Aishihik First Nations, the Kluane First Nation, and the Teslin Tlingit Council. From the 43 melting ice patches in the southern Yukon, more than 207 archaeological objects and 1700 faunal remains have been recovered. "The artifacts range in age from a 9000-year-old (calendar) dart shaft to a 19th-century musket ball...Of particular interest is the description of three different techniques for the construction of throwing darts and the observation of stability in the hunting technology employed in the study area over seven millennia. Radiocarbon chronologies indicate that this period of stability was followed by an abrupt technological replacement of the throwing dart by the bow and arrow after 1200 BP." The artifacts are curated by the Yukon Archaeology Program, Government of Yukon. In the Kusawa Lake area, there are no longer any caribou, but in her 1987 interviews, Elder Mary Ned (born in the 1890s) spoke about caribou being "all over this place." This is supported by the nearby discovery of the ice patch artifacts. Oral history tells us that a corral, or caribou fence, was located on the east side of the lake, between the lake and the mountain. References Geography of the Arctic Geomorphology Montane ecology Pedology Physical geography Earth sciences Planetary science Paleoclimatology Archaeological sites in Yukon
Yukon Ice Patches
Astronomy
662
51,793,007
https://en.wikipedia.org/wiki/Myles%20Tierney
Myles Tierney (September 1937 – 5 October 2017) was an American mathematician and Professor at Rutgers University who founded the theory of elementary toposes with William Lawvere. Tierney obtained his B.A. from Brown University in 1959 and his Ph.D. from Columbia University in 1965. His dissertation, On the classifying spaces for K-Theory mod p, was written under the supervision of Samuel Eilenberg. Following positions at Rice University (1965–66) and ETH Zurich (1966–68), he became an associate professor at Rutgers in 1968. Tierney was named a Fellow of the American Mathematical Society. Publications Myles Tierney, On the Spectrum of a Ringed Topos, Algebra, Topology and Category Theory, (1976) André Joyal, Myles Tierney, An extension of the Galois theory of Grothendieck, Memoirs of the American Mathematical Society 51 (1984), no. 309. André Joyal, Myles Tierney, Strong stacks and classifying space, Category theory (Como, 1990), 213—236, Lecture Notes in Math. 1488, Springer 1991. André Joyal, Myles Tierney, On the theory of path groupoids, Journal of Pure and Applied Algebra 149 (2000), no. 1, 69—100, . André Joyal, Myles Tierney, Quasi-categories vs Segal spaces, Categories in algebra, geometry and mathematical physics, 277—326, Contemporary Mathematics 431, American Mathematical Society, Providence, RI, 2007. See also Lawvere–Tierney topology References 1937 births 2017 deaths 20th-century American mathematicians Category theorists Rutgers University faculty Fellows of the American Mathematical Society Columbia Graduate School of Arts and Sciences alumni Brown University alumni Rice University people Academic staff of ETH Zurich 21st-century American mathematicians
Myles Tierney
Mathematics
353
35,648,894
https://en.wikipedia.org/wiki/Momentum-transfer%20cross%20section
In physics, and especially scattering theory, the momentum-transfer cross section (sometimes known as the momentum-transport cross section) is an effective scattering cross section useful for describing the average momentum transferred from a particle when it collides with a target. Essentially, it contains all the information about a scattering process necessary for calculating average momentum transfers but ignores other details about the scattering angle. The momentum-transfer cross section is defined in terms of an (azimuthally symmetric and momentum independent) differential cross section \(\frac{d\sigma}{d\Omega}(\theta)\) by \(\sigma_{\mathrm{tr}} = \int (1 - \cos\theta)\,\frac{d\sigma}{d\Omega}(\theta)\,d\Omega = 2\pi \int_0^{\pi} (1 - \cos\theta)\,\frac{d\sigma}{d\Omega}(\theta)\,\sin\theta\,d\theta.\) The momentum-transfer cross section can be written in terms of the phase shifts \(\delta_l\) from a partial wave analysis as \(\sigma_{\mathrm{tr}} = \frac{4\pi}{k^2}\sum_{l=0}^{\infty}(l+1)\,\sin^2(\delta_{l+1} - \delta_l).\) Explanation The factor of \(1 - \cos\theta\) arises as follows. Let the incoming particle be traveling along the \(z\)-axis with vector momentum \(\vec{p} = p\,\hat{z}.\) Suppose the particle scatters off the target with polar angle \(\theta\) and azimuthal angle \(\phi\). Its new momentum is \(\vec{p}\,' = p'\sin\theta\cos\phi\,\hat{x} + p'\sin\theta\sin\phi\,\hat{y} + p'\cos\theta\,\hat{z}.\) For a collision with a target much heavier than the striking particle (e.g. an electron incident on an atom or ion), \(p' \approx p\), so \(\vec{p}\,' \approx p\,(\sin\theta\cos\phi\,\hat{x} + \sin\theta\sin\phi\,\hat{y} + \cos\theta\,\hat{z}).\) By conservation of momentum, the target has acquired momentum \(\Delta\vec{p} = \vec{p} - \vec{p}\,' = p\,(-\sin\theta\cos\phi\,\hat{x} - \sin\theta\sin\phi\,\hat{y} + (1 - \cos\theta)\,\hat{z}).\) Now, if many particles scatter off the target, and the target is assumed to have azimuthal symmetry, then the radial (\(x\) and \(y\)) components of the transferred momentum will average to zero. The average momentum transfer will be just \(p\,(1 - \cos\theta)\). If we do the full averaging over all possible scattering events, we get \(\langle \Delta\vec{p} \rangle = p\,\hat{z}\,\langle 1 - \cos\theta \rangle = p\,\hat{z}\,\frac{\sigma_{\mathrm{tr}}}{\sigma_{\mathrm{tot}}},\) where the total cross section is \(\sigma_{\mathrm{tot}} = \int \frac{d\sigma}{d\Omega}(\theta)\,d\Omega.\) Here, the averaging is done by using expected value calculation (see \(\frac{1}{\sigma_{\mathrm{tot}}}\frac{d\sigma}{d\Omega}\) as a probability density function). Therefore, for a given total cross section, one does not need to compute new integrals for every possible momentum in order to determine the average momentum transferred to a target. One just needs to compute \(\sigma_{\mathrm{tr}}\). Application This concept is used in calculating the charge radius of nuclei such as the proton and deuteron by electron scattering experiments. To this purpose a useful quantity called the scattering vector \(q\), having the dimension of inverse length, is defined as a function of energy \(E\) and scattering angle \(\theta\): \(q = \frac{2E}{\hbar c}\sin\frac{\theta}{2}.\) References Momentum Scattering theory
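To illustrate the definition above numerically, the Python sketch below integrates a toy, forward-peaked differential cross section and compares the total and momentum-transfer cross sections; the functional form and parameter are arbitrary choices made for the demonstration, not anything from the article.

```python
import numpy as np

# Toy, forward-peaked differential cross section:
# dsigma/dOmega ∝ 1 / (1 + a - cos(theta))^2, with an arbitrary parameter a.
a = 0.1
theta = np.linspace(0.0, np.pi, 20001)
dsigma_dOmega = 1.0 / (1.0 + a - np.cos(theta))**2

# sigma_tot = 2*pi * integral of dsigma/dOmega * sin(theta) dtheta
sigma_tot = 2 * np.pi * np.trapz(dsigma_dOmega * np.sin(theta), theta)
# sigma_tr  = 2*pi * integral of (1 - cos(theta)) * dsigma/dOmega * sin(theta) dtheta
sigma_tr = 2 * np.pi * np.trapz((1 - np.cos(theta)) * dsigma_dOmega * np.sin(theta), theta)

print(sigma_tot, sigma_tr, sigma_tr / sigma_tot)
# Forward-peaked scattering transfers little momentum on average,
# so sigma_tr comes out much smaller than sigma_tot.
```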
Momentum-transfer cross section
Physics,Chemistry,Mathematics
384
12,984
https://en.wikipedia.org/wiki/Geiger%20counter
A Geiger counter (also known as a Geiger–Müller counter or G-M counter) is an electronic instrument used for detecting and measuring ionizing radiation. It is widely used in applications such as radiation dosimetry, radiological protection, experimental physics and the nuclear industry. "Geiger counter" is often used generically to refer to any form of dosimeter (or radiation-measuring device), but scientifically, a Geiger counter is only one specific type of dosimeter. It detects ionizing radiation such as alpha particles, beta particles, and gamma rays using the ionization effect produced in a Geiger–Müller tube, which gives its name to the instrument. In wide and prominent use as a hand-held radiation survey instrument, it is perhaps one of the world's best-known radiation detection instruments. The original detection principle was realized in 1908 at the University of Manchester, but it was not until the development of the Geiger–Müller tube in 1928 that the Geiger counter could be produced as a practical instrument. Since then, it has been very popular due to its robust sensing element and relatively low cost. However, there are limitations in measuring high radiation rates and the energy of incident radiation. The Geiger counter is one of the first examples of data sonification. Principle of operation A Geiger counter consists of a Geiger–Müller tube (the sensing element which detects the radiation) and the processing electronics, which display the result. The Geiger–Müller tube is filled with an inert gas such as helium, neon, or argon at low pressure, to which a high voltage is applied. The tube briefly conducts electrical charge when high energy particles or gamma radiation make the gas conductive by ionization. The ionization is considerably amplified within the tube by the Townsend discharge effect to produce an easily measured detection pulse, which is fed to the processing and display electronics. This large pulse from the tube makes the Geiger counter relatively cheap to manufacture, as the subsequent electronics are greatly simplified. The electronics also generate the high voltage, typically 400–900 volts, that has to be applied to the Geiger–Müller tube to enable its operation. This voltage must be carefully selected, as too high a voltage will allow for continuous discharge, damaging the instrument and invalidating the results. Conversely, too low a voltage will result in an electric field that is too weak to generate a current pulse. The correct voltage is usually specified by the manufacturer. To help quickly terminate each discharge in the tube, a small amount of halogen gas or organic material known as a quenching mixture is added to the fill gas. Readout There are two types of detected radiation readout: counts and radiation dose. The counts display is the simplest, and shows the number of ionizing events detected, displayed either as a count rate, such as "counts per minute" or "counts per second", or as a total number of counts over a set time period (an integrated total). The counts readout is normally used when alpha or beta particles are being detected. More complex to achieve is a display of radiation dose rate, displayed in units such as sieverts per hour, which is normally used for measuring gamma or X-ray dose rates. A Geiger–Müller tube can detect the presence of radiation, but not its energy, which influences the radiation's ionizing effect. 
Consequently, instruments measuring dose rate require the use of an energy compensated Geiger–Müller tube, so that the dose displayed relates to the counts detected. The electronics will apply known factors to make this conversion, which is specific to each instrument and is determined by design and calibration. The readout can be analog or digital, and modern instruments offer serial communications with a host computer or network. There is usually an option to produce audible clicks representing the number of ionization events detected. This is the distinctive sound associated with handheld or portable Geiger counters. The purpose of this is to allow the user to concentrate on manipulation of the instrument while retaining auditory feedback on the radiation rate. Limitations There are two main limitations of the Geiger counter: Because the output pulse from a Geiger–Müller tube is always of the same magnitude (regardless of the energy of the incident radiation), the tube cannot differentiate between radiation types or measure radiation energy, which prevents it from correctly measuring dose rate. The tube is less accurate at high radiation rates, because each ionization event is followed by a "dead time", an insensitive period during which any further incident radiation does not result in a count. Typically, the dead time will reduce indicated count rates above about 10⁴ to 10⁵ counts per second, depending on the characteristics of the tube being used. While some counters have circuitry that can compensate for this, ion chamber instruments are preferred for measuring very high radiation rates. Types and applications The intended detection application of a Geiger counter dictates the tube design used. Consequently, there are a great many designs, but they can be generally categorized as "end-window", windowless "thin-walled", "thick-walled", and sometimes hybrids of these types. Particle detection The first historical uses of the Geiger principle were to detect α- and β-particles, and the instrument is still used for this purpose today. For α-particles and low energy β-particles, the "end-window" type of a Geiger–Müller tube has to be used, as these particles have a limited range and are easily stopped by a solid material. Therefore, the tube requires a window which is thin enough to allow as many as possible of these particles through to the fill gas. The window is usually made of mica with a density of about 1.5–2.0 mg/cm². α-particles have the shortest range, and to detect these the window should ideally be within 10 mm of the radiation source due to α-particle attenuation. However, the Geiger–Müller tube produces a pulse output which is the same magnitude for all detected radiation, so a Geiger counter with an end window tube cannot distinguish between α- and β-particles. A skilled operator can use varying distance from a radiation source to differentiate between α- and high energy β-particles. The "pancake" Geiger–Müller tube is a variant of the end-window probe, but designed with a larger detection area to make checking quicker. However, the pressure of the atmosphere against the low pressure of the fill gas limits the window size due to the limited strength of the window membrane. Some β-particles can also be detected by a thin-walled "windowless" Geiger–Müller tube, which has no end-window, but allows high energy β-particles to pass through the tube walls. 
Although the tube walls have a greater stopping power than a thin end-window, they still allow these more energetic particles to reach the fill gas. End-window Geiger counters are still used as a general purpose, portable, radioactive contamination measurement and detection instrument, owing to their relatively low cost, robustness and relatively high detection efficiency; particularly with high energy β-particles. However, for discrimination between α- and β-particles or provision of particle energy information, scintillation counters or proportional counters should be used. Those instrument types are manufactured with much larger detector areas, which means that checking for surface contamination is quicker than with a Geiger counter. Gamma and X-ray detection Geiger counters are widely used to detect gamma radiation and X-rays, collectively known as photons, and for this the windowless tube is used. However, detection efficiency is low compared to alpha and beta particles. The article on the Geiger–Müller tube carries a more detailed account of the techniques used to detect photon radiation. For high energy photons, the tube relies on the interaction of the radiation with the tube wall, usually a material with a high atomic number such as stainless steel of 1–2 mm thickness, to produce free electrons within the tube wall, due to the photoelectric effect. If these migrate out of the tube wall, they enter and ionize the fill gas. This effect increases the detection efficiency because the low-pressure gas in the tube has poorer interaction with higher energy photons than a steel tube. However, as photon energies decrease to low levels, there is greater gas interaction, and the contribution of direct gas interaction increases. At very low energies (less than 25 keV), direct gas ionisation dominates, and a steel tube attenuates the incident photons. Consequently, at these energies, a typical tube design is a long tube with a thin wall which has a larger gas volume, to give an increased chance of direct interaction of a particle with the fill gas. Above these low energy levels, there is a considerable variance in response to different photon energies of the same intensity, and a steel-walled tube employs what is known as "energy compensation" in the form of filter rings around the naked tube, which attempts to compensate for these variations over a large energy range. A steel-walled Geiger–Müller tube is about 1% efficient over a wide range of energies. Neutron detection A variation of the Geiger tube known as a Bonner sphere can be used to exclusively measure radiation dosage from neutrons rather than from gammas by the process of neutron capture. The tube, which can contain the fill gas boron trifluoride or helium-3, is surrounded by a plastic moderator that reduces neutron energies prior to capture. When a capture occurs in the fill gas, the energy released is registered in the detector. Gamma measurement—personnel protection and process control While "Geiger counter" is practically synonymous with the hand-held variety, the Geiger principle is in wide use in installed "area gamma" alarms for personnel protection, as well as in process measurement and interlock applications. The processing electronics of such installations have a higher degree of sophistication and reliability than those of hand-held meters. 
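The dead-time limitation described above is commonly handled with the standard non-paralyzable detector model, in which the true event rate n relates to the measured rate m and the dead time τ by n = m / (1 − m·τ). The sketch below is a minimal illustration of that correction; the dead time and measured rates are hypothetical values rather than the characteristics of any particular tube, and real instruments may apply more elaborate, manufacturer-specific corrections.

```python
def true_count_rate(measured_rate_cps, dead_time_s):
    """Correct a measured count rate for detector dead time using the
    non-paralyzable model: n = m / (1 - m * tau).

    measured_rate_cps : observed counts per second (m)
    dead_time_s       : dead time per event in seconds (tau)
    """
    loss_fraction = measured_rate_cps * dead_time_s
    if loss_fraction >= 1.0:
        raise ValueError("Measured rate is outside the validity of this model")
    return measured_rate_cps / (1.0 - loss_fraction)

if __name__ == "__main__":
    tau = 100e-6  # hypothetical dead time of 100 microseconds per event
    for m in (100.0, 1_000.0, 5_000.0):  # hypothetical measured rates (cps)
        print(f"measured {m:8.1f} cps  ->  estimated true {true_count_rate(m, tau):8.1f} cps")
```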
Physical design For hand-held units there are two fundamental physical configurations: the "integral" unit with both detector and electronics in the same unit, and the "two-piece" design which has a separate detector probe and an electronics module connected by a short cable. In the 1930s a mica window was added to the cylindrical design allowing low-penetration radiation to pass through with ease. The integral unit allows single-handed operation, so the operator can use the other hand for personal security in challenging monitoring positions, but the two-piece design allows easier manipulation of the detector, and is commonly used for alpha and beta surface contamination monitoring where careful manipulation of the probe is required or the weight of the electronics module would make operation unwieldy. A number of different-sized detectors are available to suit particular situations, such as placing the probe in small apertures or confined spaces. Gamma and X-ray detectors generally use an "integral" design so the Geiger–Müller tube is conveniently within the electronics enclosure. This can easily be achieved because the casing usually has little attenuation, and is employed in ambient gamma measurements where distance from the source of radiation is not a significant factor. However, to facilitate more localised measurements such as "surface dose", the position of the tube in the enclosure is sometimes indicated by targets on the enclosure so an accurate measurement can be made with the tube at the correct orientation and a known distance from the surface. There is a particular type of gamma instrument known as a "hot spot" detector which has the detector tube on the end of a long pole or flexible conduit. These are used to measure high-radiation gamma locations whilst protecting the operator by means of distance shielding. Particle detection of alpha and beta can be used in both integral and two-piece designs. A pancake probe (for alpha/beta) is generally used to increase the area of detection in two-piece instruments whilst being relatively lightweight. In integral instruments using an end window tube there is a window in the body of the casing to prevent shielding of particles. There are also hybrid instruments which have a separate probe for particle detection and a gamma detection tube within the electronics module. The detectors are switchable by the operator, depending on the radiation type that is being measured. Guidance on application use In the United Kingdom the National Radiological Protection Board issued a user guidance note on selecting the best portable instrument type for the radiation measurement application concerned. This covers all radiation protection instrument technologies and includes a guide to the use of G-M detectors. History In 1908 Hans Geiger, under the supervision of Ernest Rutherford at the Victoria University of Manchester (now the University of Manchester), developed an experimental technique for detecting alpha particles that would later be used to develop the Geiger–Müller tube in 1928. This early counter was only capable of detecting alpha particles and was part of a larger experimental apparatus. The fundamental ionization mechanism used was discovered by John Sealy Townsend between 1897 and 1901, and is known as the Townsend discharge, which is the ionization of molecules by ion impact. 
It was not until 1928 that Geiger and Walther Müller (a PhD student of Geiger) developed the sealed Geiger–Müller tube which used basic ionization principles previously used experimentally. Small and rugged, not only could it detect alpha and beta radiation as prior models had done, but also gamma radiation. Now a practical radiation instrument could be produced relatively cheaply, and so the Geiger counter was born. As the tube output required little electronic processing, a distinct advantage in the thermionic valve era due to minimal valve count and low power consumption, the instrument achieved great popularity as a portable radiation detector. Modern versions of the Geiger counter use halogen quench gases, a technique invented in 1947 by Sidney H. Liebson. Halogen compounds have superseded the organic quench gases because of their much longer life and lower operating voltages; typically 400-900 volts. Gallery See also Becquerel, the SI unit of the radioactive decay rate of a quantity of radioactive material Civil defense Geiger counters, handheld radiation monitors, both G-M and ion chambers Counting efficiency the ratio of radiation events reaching a detector and the number it counts Data sonification, the interpretation or processing of data by sound Dosimeter, a device used by personnel to measure what radiation dose they have received Ionization chamber, the simplest ionising radiation detector Gaseous ionization detector, an overview of the main gaseous detector types Geiger–Müller tube, provides a more detailed description of Geiger–Müller tube operation and types Geiger plateau, the correct operating voltage range for a Geiger–Müller tube Photon counting Radioactive decay, the process by which unstable atoms emit radiation Safecast (organization), use of Geiger–Müller counter technology in citizen science Scintillation counter, a gasless radiation detector Sievert, the SI unit of stochastic effects of radiation on the human body References External links How a Geiger counter works. Particle detectors Laboratory equipment Counting instruments Ionising radiation detectors 1908 introductions 1928 introductions English inventions German inventions Radiation protection
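As noted in the readout discussion above, a dose-rate display is produced by applying an instrument-specific conversion factor to the count rate of an energy-compensated tube, with the factor fixed by design and calibration. The following sketch shows only the shape of such a conversion; the calibration factor here is an invented placeholder, not the value for any real tube or instrument.

```python
def dose_rate_uSv_per_h(count_rate_cpm, cpm_per_uSv_h):
    """Convert a count rate (counts per minute) into an indicated dose rate
    in microsieverts per hour using an instrument calibration factor.

    cpm_per_uSv_h : counts per minute produced by a field of 1 uSv/h
                    (hypothetical; in practice taken from the instrument's calibration).
    """
    return count_rate_cpm / cpm_per_uSv_h

if __name__ == "__main__":
    CALIBRATION_CPM_PER_USV_H = 350.0  # invented placeholder value
    for cpm in (35.0, 350.0, 3500.0):
        rate = dose_rate_uSv_per_h(cpm, CALIBRATION_CPM_PER_USV_H)
        print(f"{cpm:7.1f} cpm  ->  {rate:7.2f} uSv/h")
```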
Geiger counter
Mathematics,Technology,Engineering
3,068
33,692,541
https://en.wikipedia.org/wiki/Interstitial%20condensation
Interstitial condensation is a type of condensation that may occur within an enclosed wall, roof or floor cavity structure, and it can create dampness. When moisture-laden air penetrates a cavity of the structure and meets a surface at or below its dew point temperature, it condenses into liquid water on that surface. Moisture-laden air can penetrate into a hidden interstitial wall cavity from the exterior during warm/humid outdoor periods, and from inside the building during warm/humid indoor periods. Groundwater soaking the basement foundation walls from wet soil is common. This can result from a high water table or from improperly drained rainwater runoff soaking into the ground next to the basement walls. Moisture-saturated basement walls will add moisture directly into basement interstitial spaces, leading to interstitial condensation at cool basement temperatures. All interstitial condensation can cause uncontrolled mold and bacteria growth, rotting of wood components, corrosion of metal components and/or a reduction in the thermal insulation's effectiveness. The resulting structural damage, along with mold and bacteria growth, may occur without any visible surface indications until significant damage or extensive mold and bacteria growth has occurred. HVAC ducts within interstitial spaces (chases) can leak out cold air through unsealed joints/connections, which produces surfaces cooled to the dew point. Unsealed duct joints/connections can also create suction that pulls humid air into interstitial spaces and chases. This can promote more mold and bacteria growth on the condensed cool surfaces of the interstitial spaces. In addition, the cool ducts themselves can condense humid air and “sweat” even more liquid water into the interstitial spaces, thereby exacerbating mold and bacteria growth. Since most building materials are permeable and many joints are not completely sealed, controlling interstitial condensation depends on controlling indoor moisture at its sources (for example, venting out shower vapor), on HVAC dehumidification and ventilation, and on adding an impermeable vapor barrier in the interstitial cavity. In addition, since the air in interstitial cavities can communicate with interior spaces through tiny cracks and unsealed joints, any airborne mold, aerosolized fungal fragments and bacteria growing in the interstitial cavity can travel into the building's air to then be breathed in by building occupants. Interstitial condensation is differentiated from surface condensation in buildings, known as "cold-bridge condensation" or "warm-front condensation", where the condensation forms on the interior or exterior surfaces of a building rather than inside wall, floor or roof cavities. Moisture sources It is physically impossible to build envelope assemblies so that they completely prevent air infiltration, exfiltration or water vapor diffusion. Moist air can infiltrate envelope assemblies driven by the pressure differential created by wind and stack effect. Since all buildings contain various levels of moist air, cognizant authorities have recommended maintaining an indoor relative humidity between 40% and 60%. The sources of interior moisture are people, appliances such as dishwashers, cooking, showers, wet basements, leaking pipes and roof/wall rainwater leaks. Leaks of liquid water into the building envelope are a different problem from interstitial moisture condensation, but this additional water can exacerbate interstitial wetting, which can increase mold and bacteria growth. 
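Because interstitial condensation hinges on whether the surfaces that humid air reaches are at or below its dew point, a quick dew-point estimate is often the first step in assessing risk. The sketch below uses the common Magnus approximation; the coefficients are one widely used parameter set, and the example temperatures and humidities are illustrative rather than measurements from any particular building.

```python
import math

# Magnus approximation coefficients (one common parameter set, roughly valid 0-60 C).
MAGNUS_A = 17.62
MAGNUS_B = 243.12  # degrees Celsius

def dew_point_c(temp_c, rel_humidity_pct):
    """Approximate dew point (deg C) from air temperature and relative humidity."""
    rh = rel_humidity_pct / 100.0
    gamma = math.log(rh) + (MAGNUS_A * temp_c) / (MAGNUS_B + temp_c)
    return (MAGNUS_B * gamma) / (MAGNUS_A - gamma)

def condensation_risk(surface_temp_c, air_temp_c, rel_humidity_pct):
    """True if a surface at surface_temp_c would condense the given air."""
    return surface_temp_c <= dew_point_c(air_temp_c, rel_humidity_pct)

if __name__ == "__main__":
    # Illustrative case: warm, humid indoor air leaking into a cavity with a cool sheathing surface.
    air_t, rh, surface_t = 21.0, 55.0, 10.0
    print("dew point: %.1f C" % dew_point_c(air_t, rh))
    print("condensation on a %.1f C surface?" % surface_t, condensation_risk(surface_t, air_t, rh))
```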
Discovering wet interstitial spaces Building professionals have moisture-sensing instruments to discover areas of interstitial condensation which may contain mold and bacteria growth. There are three primary methods to test for interstitial moisture: Surface testing with pin-type moisture meters. This meter works on a resistance principle that measures the flow of electricity between two pin tips, so it only measures the moisture of the material (drywall or wood) along the very small path between the two pins. Behind-wall testing with electromagnetic moisture meters. This meter detects and evaluates moisture conditions within various building materials by non-destructively measuring the electrical impedance. A low-frequency electronic signal is transmitted into the material via the electrodes in the base of the instrument. The strength of this signal varies in proportion to the amount of moisture in the material under test. The moisture meter determines the strength of the current and converts this to a moisture content value, displaying it on an analog dial or digital screen. Infrared cameras to detect surface temperatures (wet walls are cooler). Infrared cameras are good tools for quickly finding surface moisture, but depend on sufficiently wetted surfaces which show up at a cooler temperature. Depending on the instrument's quality and sensitivity, the instrument may or may not find an area of surface moisture, and it should always be used in conjunction with surface or behind-wall meters. Prevention Preventing interstitial condensation by keeping these hidden spaces dry is critical in all buildings. This is done by: maintaining a slightly positive indoor pressure in warm months and a neutral pressurization in cold months; preventing infiltration (exterior air leakage into the building); preventing exfiltration (interior air leakage into the assemblies); controlling indoor moisture at its sources through exhaust ventilation; having correct HVAC design for efficient air dehumidification; effective vapor barrier wall sealing; proper insulation; using a diffusion-tight vapor barrier (vapor check) on the warm side of the insulation, i.e., inside the assembly on a heated building and outside on a cooled building. Vapor barriers can be problematic because they are difficult to install perfectly and also reduce the ability of a cavity to dry out when it does get wet. Vapor barriers are used in conjunction with a housewrap, a membrane that is vapor-permeable but water-resistant, so that one side of the cavity remains permeable to allow drying. Spray foam insulation can be an effective vapor barrier if applied correctly. Historically, most buildings built before the twentieth century were not designed to maintain 70 °F (21 °C) indoors, were naturally well ventilated and were built with very permeable materials. The increase in interstitial condensation problems is due to: the modern prevalence of central heating and air conditioning; the construction of air-tighter enclosures causing buildings to be negatively pressurized; more heavily insulated buildings; more indoor plumbing sweating and leaking. Other construction Interstitial condensation problems may also occur in other structures with enclosed air spaces, along with the presence of high humidity and a large temperature difference between exterior and interior, including refrigerated vehicles. Freezing The process may cause further problems if freezing is involved. 
Condensed water expands when frozen, possibly causing further structural damage. References Moisture protection Building defects
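One common way to assess the condensation and freezing risks described above at the design stage is a simplified steady-state (Glaser-type) analysis: the temperature and vapour-pressure profiles through the construction are computed from the layers' thermal and vapour resistances, and any interface where the vapour pressure would exceed saturation is flagged. The sketch below is a deliberately simplified illustration of that idea, with invented layer properties and boundary conditions; it is not a substitute for a standards-compliant calculation such as one following ISO 13788.

```python
import math

def p_sat(temp_c):
    """Saturation vapour pressure in Pa (Magnus-type approximation)."""
    return 610.94 * math.exp(17.625 * temp_c / (243.04 + temp_c))

def interface_check(layers, t_in, rh_in, t_out, rh_out):
    """Very simplified Glaser-style interstitial condensation check.

    layers : list of (name, thermal_resistance, vapour_resistance) tuples ordered
             from inside to outside; only the ratios of the resistances matter here.
    Returns (interface_name, temperature, vapour_pressure, saturation_pressure) tuples.
    """
    R_tot = sum(r for _, r, _ in layers)
    Z_tot = sum(z for _, _, z in layers)
    p_in, p_out = rh_in * p_sat(t_in), rh_out * p_sat(t_out)

    results, R_cum, Z_cum = [], 0.0, 0.0
    for name, r, z in layers:
        R_cum += r
        Z_cum += z
        t = t_in - (t_in - t_out) * R_cum / R_tot   # temperature at the outer face of this layer
        p = p_in - (p_in - p_out) * Z_cum / Z_tot   # vapour pressure at the same interface
        results.append((name, t, p, p_sat(t)))
    return results

if __name__ == "__main__":
    # Invented example wall, inside to outside: plasterboard, insulation, sheathing.
    layers = [("plasterboard", 0.06, 0.5), ("insulation", 3.00, 1.0), ("sheathing", 0.12, 2.0)]
    for name, t, p, ps in interface_check(layers, t_in=20.0, rh_in=0.55, t_out=-5.0, rh_out=0.85):
        flag = "CONDENSATION RISK" if p > ps else "ok"
        print(f"outer face of {name:12s}  T={t:6.1f} C  p={p:7.0f} Pa  p_sat={ps:7.0f} Pa  {flag}")
```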
Interstitial condensation
Materials_science
1,354
55,735,904
https://en.wikipedia.org/wiki/Rubroboletus%20esculentus
Rubroboletus esculentus is a bolete fungus in the family Boletaceae. It is found in southwestern China. The cap is bright red or blood red and measures 7–12 cm across. It is sticky when wet. The flesh under the cap is 2–2.5 cm thick and yellow. The tubes are yellow, while the pore surface is red to brownish red. The stout stipe is 9–12 cm high by 2–4 cm wide and often has a bulbous base. All parts of the mushroom turn blue when cut or bruised. Rubroboletus esculentus was described in 2017. Its closest relative genetically is Rubroboletus rhodoxanthus. It is a popularly eaten and highly regarded mushroom in Aba, Chengdu and Dujiangyan of Sichuan Province in southwestern China, where it is seen in markets. References Fungi described in 2017 Fungi of China esculentus Edible fungi Fungus species
Rubroboletus esculentus
Biology
199
15,347,925
https://en.wikipedia.org/wiki/ZBTB7B
Zinc finger and BTB domain-containing protein 7B is a protein that in humans is encoded by the ZBTB7B gene. ZFP67 is an early growth response gene that encodes a zinc finger-containing transcription factor that binds to the promoter regions of type I collagen genes (e.g. COL1A1; MIM 120150) and has a role in development.[supplied by OMIM] See also Zbtb7 References Further reading External links Transcription factors
ZBTB7B
Chemistry,Biology
104
4,237,868
https://en.wikipedia.org/wiki/Winged%20sun
The winged sun is a solar symbol associated with divinity, royalty, and power in the Ancient Near East (Egypt, Mesopotamia, Anatolia, and Persia). The Illyrian Sun-deity is also represented as a winged sun. Ancient Egypt In ancient Egypt, the symbol is attested from the Old Kingdom (Sneferu, 26th century BC), often flanked on either side with a uraeus. Behdety In early Egyptian religion, the symbol Behdety represented Horus of Edfu, later identified with Ra-Horakhty. It is sometimes depicted on the neck of Apis, the bull of Ptah. As time passed (according to interpretation) all of the subordinated gods of Egypt were considered to be aspects of the sun god, including Khepri. The name "Behdety" means the inhabitant of Behdet. He was the sky god of the region called Behdet in the Nile basin. His image was first found in an inscription on the body of a comb, as a winged solar disk; the comb dates to about 3000 BC. Such winged solar disks were later found in the funeral picture of Pharaoh Sahure of the fifth dynasty. Behdety is seen as the protector of Pharaoh. On both sides of his picture are seen the Uraeus, which is a symbol for the cobra-headed goddess Wadjet. He resisted the intense heat of the Egyptian sun with his two wings. Mesopotamia From roughly 2000 BC, the symbol also appears in Mesopotamia. It appears in reliefs with Assyrian rulers as a symbol for royalty, transcribed into Latin as (literally, "his own self, the Sun", i.e. "His Majesty"). Illyria Early figurative evidence of the celestial cult in Illyria is provided by 6th-century BC plaques from Lake Shkodra, which belonged to the Illyrian tribal area of what was referred to in historical sources as the Labeatae in later times. Each of those plaques simultaneously portrays sacred representations of the sky and the sun, symbolism of lightning and fire, and the sacred tree and birds (eagles). In those plaques there is a recurrent mythological representation of the celestial deity: the Sun deity animated with a face and two wings, throwing lightning into a fire altar, which in some plaques is held by two men (sometimes on two boats). Iran In Zoroastrian Persia, the symbol of the winged sun became part of the iconography of the Faravahar, the symbol of the divine power and royal glory in Persian culture. Judah From around the 8th century BC, the winged solar disk appears on Hebrew seals connected to the royal house of the Kingdom of Judah. Many of these are seals and jar handles from Hezekiah's reign, together with the inscription l'melekh ("belonging to the king"). Typically, Hezekiah's royal seals feature two downward-pointing wings and six rays emanating from the central sun disk, and some are flanked on either side with the Egyptian ankh ("key of life") symbol. Prior to this, there are examples from the seals of servants of king Ahaz and of king Uzziah. Compare also Malachi 4:2, which refers to a winged "Sun of righteousness". Greece The winged sun is conventionally depicted as the knob of the caduceus, the staff of Hermes. Modern use Various groups such as Freemasonry, Rosicrucianism, Thelema, Theosophy, and Unity Church have also used it. The symbol was used on the cover of Charles Taze Russell's textbook series Studies in the Scriptures beginning with the 1911 editions. The winged sun symbol is also cited by proponents of the pseudoscientific Nibiru cataclysm. Implied secular use A winged sun is used in the heraldry of the North America Trade Directory. 
Variations of the symbol are used as a trademark logo on vehicles produced by the Chrysler Corporation, Mini, Bentley Motors, Lagonda (Aston Martin) and Harley Davidson. Since WW2, military aircraft of the United States have carried the insignia of a circle with stripes extending from each side like wings. Whether this is coincidental or some symbolic resemblance was intended is unknown. A five-pointed star is inscribed within the circle. Regarding its video game usage, the symbol has become a common motif in the Sonic the Hedgehog franchise, most notably featured on title screens displaying the main character, as well as a stylized version appearing as a symbol for religious mechanics and buildings in Civilization VI, among others. See also Winged genie References Bibliography R. Mayer, Opificius, Die geflügelte Sonne, Himmels- und Regendarstellungen im Alten Vorderasien, UF 16 (1984) 189-236. D. Parayre, Carchemish entre Anatolie et Syrie à travers l'image du disque solaire ailé (ca. 1800-717 av. J.-C.), Hethitica 8 (1987) 319-360. D. Parayre, Les cachets ouest-sémitiques à travers l'image du disque solaire ailé, Syria 67 (1990) 269-314. External links Relief Depicting Gilgamesh Between Two Bull-Men Supporting a Winged Sun Disk, Kapara palace, Tell Halaf. Ancient Egyptian symbols Egyptian hieroglyphs Heraldic charges Middle Eastern mythology Religious symbols Solar symbols Sun myths Divinity Horus Ra
Winged sun
Astronomy
1,131
50,863,781
https://en.wikipedia.org/wiki/Bonytt
Bonytt is a Norwegian monthly home and interior design magazine based in Oslo, Norway. Founded in 1941, it is one of the oldest magazines in the country as well as the most popular magazine in its category. History and profile Bonytt was established by Arne Remlow and Per Tannum in 1941. Remlow was the owner and long-term editor-in-chief of the magazine, which has its headquarters in Oslo. In 1947 the magazine became the official media outlet of the Norwegian Applied Art Association. In the 1950s it adopted a modernist approach, which it later abandoned, positioning itself instead as a source of inspiration for amateur interior designers. In 1967 the magazine was renamed Nye Bonytt to indicate its new approach. The magazine is part of Egmont/Orkla ASA. It is published monthly by Egmont Hjemmet Mortensen A/S. In 2010 an iPad version of Bonytt was launched. Circulation In 1999 Bonytt sold 711,000 copies, making it the best-selling consumer special interest magazine in Norway. The circulation of the magazine was 68,000 copies in 2003. In 2006 the magazine sold 62,900 copies. Its 2022 circulation was 25,203 copies. References 1941 establishments in Norway Design magazines Magazines established in 1941 Magazines published in Oslo Monthly magazines published in Norway Norwegian-language magazines
Bonytt
Engineering
276
10,416,796
https://en.wikipedia.org/wiki/Pharmaceutical%20lobby
The pharmaceutical lobby refers to the representatives of pharmaceutical drug and biomedicine companies who engage in lobbying in favour of pharmaceutical companies and their products. Political influence in the United States The largest pharmaceutical companies and their two trade groups, Pharmaceutical Research and Manufacturers of America (PhRMA) and Biotechnology Innovation Organization, lobbied on at least 1,600 pieces of legislation between 1998 and 2004. According to the non-partisan OpenSecrets, pharmaceutical companies spent $900 million on lobbying between 1998 and 2005, more than any other industry. During the same period, they donated $89.9 million to federal candidates and political parties, giving approximately three times as much to Republicans as to Democrats. According to the Center for Public Integrity, from January 2005 through June 2006 alone, the pharmaceutical industry spent approximately $182 million on federal lobbying in the United States. In 2005, the industry had 1,274 registered lobbyists in Washington, D.C. A 2020 study found that, from 1999 to 2018, the pharmaceutical industry and health product industry together spent $4.7 billion lobbying the United States federal government, an average of $233 million per year. Controversy in the U.S. Prescription drug costs in the U.S. Critics of the pharmaceutical lobby argue that the drug industry's influence allows it to promote legislation friendly to drug manufacturers at the expense of patients. The lobby's influence in securing the passage of the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 was considered a major and controversial victory for the industry, as it prevents the government from directly negotiating prices with drug companies who provide those prescription drugs covered by Medicare. Price negotiations are instead conducted between manufacturers and the pharmacy benefit managers providing Medicare Part D benefits under contract with Medicare. In 2010 the Congressional Budget Office estimated the average discount negotiated by pharmacy benefit managers at 14%. The high price of U.S. prescription drugs has been a source of ongoing controversy. Pharmaceutical companies state that the high costs are the result of pricey research and development programs. Critics point to the development of drugs having only small incremental benefit. According to Marcia Angell, former editor-in-chief of the New England Journal of Medicine, "The United States is the only advanced country that permits the pharmaceutical industry to charge exactly what the market will bear." In contrast, the RAND Corporation and authors from the National Bureau of Economic Research have argued that price controls stifle innovation and are economically counterproductive in the long term. International operations In 2021, during the height of COVID-19, vaccine makers increased lobbying and public-relations efforts to oppose a proposal that would temporarily waive their patents in Germany, Japan and other countries. This proposal would allow COVID-19 vaccine patents to be licensed to international vaccine makers or otherwise sold entirely. The Biden presidential administration in the U.S. supported the waiver proposal; however, pharmaceutical industry trade groups supported Germany, Japan, and other countries that expressed opposition. 
Pharmaceutical industry representatives have been lobbying members of Congress to pressure the Biden administration to reverse its support of the waiver, arguing that the patents protect its innovations. However, proponents of the proposal see the patent as giving companies a monopoly over sales of vaccines during a world crisis. See also Bad Pharma (2012) by Ben Goldacre Big Pharma (2006) by Jacky Law Big Pharma conspiracy theory Ethics in pharmaceutical sales List of pharmaceutical companies Lists about the pharmaceutical industry Pharmaceutical marketing References External links Pharmaceutical lobbying totals at Opensecrets Corporations and Health Watch PhRMA's home page PBS series on the pharmaceutical industry Big Bucks, Big Pharma, Amy Goodman, 68 minutes Conflict of interest Lobbying in the United States Lobbying organizations Lobby
Pharmaceutical lobby
Chemistry,Biology
750
41,285
https://en.wikipedia.org/wiki/Interoperability
Interoperability is the ability of a product or system to work with other products or systems. While the term was initially defined for information technology or systems engineering services to allow for information exchange, a broader definition takes into account social, political, and organizational factors that impact system-to-system performance. Types of interoperability include syntactic interoperability, where two systems can communicate with each other, and cross-domain interoperability, where multiple organizations work together and exchange information. Types If two or more systems use common data formats and communication protocols then they are capable of communicating with each other and they exhibit syntactic interoperability. XML and SQL are examples of common data formats and protocols (a small illustrative sketch of such a format-based exchange appears below). Low-level data formats also contribute to syntactic interoperability, ensuring that alphabetical characters are stored in the same character encoding, such as ASCII or a Unicode format, in all the communicating systems. Beyond the ability of two or more computer systems to exchange information, semantic interoperability is the ability to automatically interpret the information exchanged meaningfully and accurately in order to produce useful results as defined by the end users of both systems. To achieve semantic interoperability, both sides must refer to a common information exchange reference model. The content of the information exchange requests is unambiguously defined: what is sent is the same as what is understood. Cross-domain interoperability involves multiple social, organizational, political, and legal entities working together for a common interest or information exchange. Interoperability and open standards Interoperability implies exchanges between a range of products, or similar products from several different vendors, or even between past and future revisions of the same product. Interoperability may be developed post-facto, as a special measure between two products, while excluding the rest, by using open standards. When a vendor is forced to adapt its system to a dominant system that is not based on open standards, it is compatibility, not interoperability. Open standards Open standards rely on a broadly consultative and inclusive group including representatives from vendors, academics and others holding a stake in the development that discusses and debates the technical and economic merits, demerits and feasibility of a proposed common protocol. After the doubts and reservations of all members are addressed, the resulting common document is endorsed as a common standard. This document may be subsequently released to the public, and henceforth becomes an open standard. It is usually published and is available freely or at a nominal cost to any and all comers, with no further encumbrances. Various vendors and individuals (even those who were not part of the original group) can use the standards document to make products that implement the common protocol defined in the standard and are thus interoperable by design, with no specific liability or advantage for customers for choosing one product over another on the basis of standardized features. The vendors' products compete on the quality of their implementation, user interface, ease of use, performance, price, and a host of other factors, while keeping the customer's data intact and transferable even if he chooses to switch to another competing product for business reasons. 
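As a toy illustration of the syntactic interoperability described above, the sketch below shows two hypothetical systems agreeing on a common JSON representation of a record: one producer serializes it, and any conforming consumer can parse it back without knowing anything about the producer's internals. The field names and systems are invented for the example; the point is only the shared, well-defined exchange format.

```python
import json

def export_record(system_a_row):
    """System A: serialize its internal row into the agreed JSON exchange format."""
    record = {
        "id": system_a_row["internal_id"],
        "name": system_a_row["full_name"],
        "updated": system_a_row["last_modified"],  # ISO 8601 string by agreement
    }
    return json.dumps(record)

def import_record(payload):
    """System B: parse the agreed format; no knowledge of System A's internals is needed."""
    record = json.loads(payload)
    return {"key": record["id"], "label": record["name"], "timestamp": record["updated"]}

if __name__ == "__main__":
    row = {"internal_id": 42, "full_name": "Example Property", "last_modified": "2024-01-31T12:00:00Z"}
    wire = export_record(row)          # what actually travels between the systems
    print("on the wire:", wire)
    print("as imported:", import_record(wire))
```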
Post facto interoperability Post facto interoperability may be the result of the absolute market dominance of a particular product in contravention of any applicable standards, or if any effective standards were not present at the time of that product's introduction. The vendor behind that product can then choose to ignore any forthcoming standards and not co-operate in any standardization process at all, using its near-monopoly to insist that its product sets the de facto standard by its very market dominance. This is not a problem if the product's implementation is open and minimally encumbered, but it may well be both closed and heavily encumbered (e.g. by patent claims). Because of the network effect, achieving interoperability with such a product is both critical for any other vendor if it wishes to remain relevant in the market, and difficult to accomplish because of lack of cooperation on equal terms with the original vendor, who may well see the new vendor as a potential competitor and threat. The newer implementations often rely on clean-room reverse engineering in the absence of technical data to achieve interoperability. The original vendors may provide such technical data to others, often in the name of encouraging competition, but such data is invariably encumbered, and may be of limited use. Availability of such data is not equivalent to an open standard, because: The data is provided by the original vendor on a discretionary basis, and the vendor has every interest in blocking the effective implementation of competing solutions, and may subtly alter or change its product, often in newer revisions, so that competitors' implementations are almost, but not quite completely interoperable, leading customers to consider them unreliable or of lower quality. These changes may not be passed on to other vendors at all, or passed on after a strategic delay, maintaining the market dominance of the original vendor. The data itself may be encumbered, e.g. by patents or pricing, leading to a dependence of all competing solutions on the original vendor, and possibly leading a revenue stream from the competitors' customers back to the original vendor. This revenue stream is the result of the original product's market dominance and not a result of any innate superiority. Even when the original vendor is genuinely interested in promoting a healthy competition (so that he may also benefit from the resulting innovative market), post-facto interoperability may often be undesirable as many defects or quirks can be directly traced back to the original implementation's technical limitations. Although in an open process, anyone may identify and correct such limitations, and the resulting cleaner specification may be used by all vendors, this is more difficult post-facto, as customers already have valuable information and processes encoded in the faulty but dominant product, and other vendors are forced to replicate those faults and quirks for the sake of preserving interoperability even if they could design better solutions. Alternatively, it can be argued that even open processes are subject to the weight of past implementations and imperfect past designs and that the power of the dominant vendor to unilaterally correct or improve the system and impose the changes to all users facilitates innovation. 
Lack of an open standard can also become problematic for the customers, as in the case of the original vendor's inability to fix a certain problem that is an artifact of technical limitations in the original product. The customer wants that fault fixed, but the vendor has to maintain that faulty state, even across newer revisions of the same product, because that behavior is a de facto standard and many more customers would have to pay the price of any interoperability issues caused by fixing the original problem and introducing new behavior. Government eGovernment Speaking from an e-government perspective, interoperability refers to the collaboration ability of cross-border services for citizens, businesses and public administrations. Exchanging data can be a challenge due to language barriers, different specifications of formats, varieties of categorizations and other hindrances. If data is interpreted differently, collaboration is limited, takes longer and is inefficient. For instance, if a citizen of country A wants to purchase land in country B, the person will be asked to submit the proper address data. Address data in both countries include full name details, street name and number as well as a postal code. The order of the address details might vary. In the same language, it is not an obstacle to order the provided address data; but across language barriers, it becomes difficult. If the language uses a different writing system it is almost impossible if no translation tools are available. Flood risk management Interoperability is used by researchers in the context of urban flood risk management.  Cities and urban areas worldwide are expanding, which creates complex spaces with many interactions between the environment, infrastructure and people.  To address this complexity and manage water in urban areas appropriately, a system of systems approach to water and flood control is necessary. In this context, interoperability is important to facilitate system-of-systems thinking, and is defined as: "the ability of any water management system to redirect water and make use of other system(s) to maintain or enhance its performance function during water exceedance events." By assessing the complex properties of urban infrastructure systems, particularly the interoperability between the drainage systems and other urban systems (e.g. infrastructure such as transport), it could be possible to expand the capacity of the overall system to manage flood water towards achieving improved urban flood resilience. Military forces Force interoperability is defined in NATO as the ability of the forces of two or more nations to train, exercise and operate effectively together in the execution of assigned missions and tasks. Additionally NATO defines interoperability more generally as the ability to act together coherently, effectively and efficiently to achieve Allied tactical, operational and strategic objectives. At the strategic level, interoperability is an enabler for coalition building. It facilitates meaningful contributions by coalition partners. At this level, interoperability issues center on harmonizing world views, strategies, doctrines, and force structures. Interoperability is an element of coalition willingness to work together over the long term to achieve and maintain shared interests against common threats. 
Interoperability at the operational and tactical levels is where strategic interoperability and technological interoperability come together to help allies shape the environment, manage crises, and win wars. The benefits of interoperability at the operational and tactical levels generally derive from the interchangeability of force elements and units. Technological interoperability reflects the interfaces between organizations and systems. It focuses on communications and computers but also involves the technical capabilities of systems and the resulting mission compatibility between the systems and data of coalition partners. At the technological level, the benefits of interoperability come primarily from their impacts at the operational and tactical levels in terms of enhancing flexibility. Public safety Because first responders need to be able to communicate during wide-scale emergencies, interoperability is an important issue for law enforcement, firefighting, emergency medical services, and other public health and safety departments. It has been a major area of investment and research over the last 12 years. Widely disparate and incompatible hardware impedes the exchange of information between agencies. Agencies' information systems, such as computer-aided dispatch systems and records management systems, functioned largely in isolation, in so-called information islands. Agencies tried to bridge this isolation with inefficient, stop-gap methods while large agencies began implementing limited interoperable systems. These approaches were inadequate and, in the US, the lack of interoperability in the public safety realm became evident during the 9/11 attacks on the Pentagon and World Trade Center structures. Further evidence of a lack of interoperability surfaced when agencies tackled the aftermath of Hurricane Katrina. In contrast to the overall national picture, some states, including Utah, have already made great strides forward. The Utah Highway Patrol and other departments in Utah have created a statewide data sharing network. The Commonwealth of Virginia is one of the leading states in the United States in improving interoperability. The Interoperability Coordinator leverages a regional structure to better allocate grant funding around the Commonwealth so that all areas have an opportunity to improve communications interoperability. Virginia's strategic plan for communications is updated yearly to include new initiatives for the Commonwealth – all projects and efforts are tied to this plan, which is aligned with the National Emergency Communications Plan, authored by the Department of Homeland Security's Office of Emergency Communications. The State of Washington seeks to enhance interoperability statewide. The State Interoperability Executive Committee (SIEC), established by the legislature in 2003, works to assist emergency responder agencies (police, fire, sheriff, medical, hazmat, etc.) at all levels of government (city, county, state, tribal, federal) to define interoperability for their local region. Washington recognizes that collaborating on system design and development for wireless radio systems enables emergency responder agencies to efficiently provide additional services, increase interoperability, and reduce long-term costs. This work saves the lives of emergency personnel and the citizens they serve. The U.S. government is making an effort to overcome the nation's lack of public safety interoperability. 
The Department of Homeland Security's Office for Interoperability and Compatibility (OIC) is pursuing the SAFECOM and CADIP and Project 25 programs, which are designed to help agencies as they integrate their CAD and other IT systems. The OIC launched CADIP in August 2007. This project will partner the OIC with agencies in several locations, including Silicon Valley. This program will use case studies to identify the best practices and challenges associated with linking CAD systems across jurisdictional boundaries. These lessons will create the tools and resources public safety agencies can use to build interoperable CAD systems and communicate across local, state, and federal boundaries. As regulator for interoperability Governance entities can increase interoperability through their legislative and executive powers. For instance, in 2021 the European Commission, after commissioning two impact assessment studies and a technology analysis study, proposed the implementation of a standardization – for iterations of USB-C – of phone charger products, which may increase interoperability along with convergence and convenience for consumers while decreasing resource needs, redundancy and electronic waste. Commerce and industries Information technology and computers Desktop Desktop interoperability is a subset of software interoperability. In the early days, the focus of interoperability was to integrate web applications with other web applications. Over time, open-system containers were developed to create a virtual desktop environment in which these applications could be registered and then communicate with each other using simple publish–subscribe patterns. Rudimentary UI capabilities were also supported allowing windows to be grouped with other windows. Today, desktop interoperability has evolved into full-service platforms which include container support, basic exchange between web and web, but also native support for other application types and advanced window management. The very latest interop platforms also include application services such as universal search, notifications, user permissions and preferences, 3rd party application connectors and language adapters for in-house applications. Information search Search interoperability refers to the ability of two or more information collections to be searched by a single query. Specifically related to web-based search, the challenge of interoperability stems from the fact designers of web resources typically have little or no need to concern themselves with exchanging information with other web resources. Federated Search technology, which does not place format requirements on the data owner, has emerged as one solution to search interoperability challenges. In addition, standards, such as Open Archives Initiative Protocol for Metadata Harvesting, Resource Description Framework, and SPARQL, have emerged that also help address the issue of search interoperability related to web resources. Such standards also address broader topics of interoperability, such as allowing data mining. Software With respect to software, the term interoperability is used to describe the capability of different programs to exchange data via a common set of exchange formats, to read and write the same file formats, and to use the same communication protocols. The lack of interoperability can be a consequence of a lack of attention to standardization during the design of a program. 
Indeed, interoperability is not taken for granted in the non-standards-based portion of the computing world. According to ISO/IEC 2382-01, Information Technology Vocabulary, Fundamental Terms, interoperability is defined as follows: "The capability to communicate, execute programs, or transfer data among various functional units in a manner that requires the user to have little or no knowledge of the unique characteristics of those units". Standards-developing organizations provide open public software specifications to facilitate interoperability; examples include the Oasis-Open organization and buildingSMART (formerly the International Alliance for Interoperability). Another example of a neutral party is the RFC documents from the Internet Engineering Task Force (IETF). The Open Service for Lifecycle Collaboration community is working on finding a common standard in order that software tools can share and exchange data e.g. bugs, tasks, requirements etc. The final goal is to agree on an open standard for interoperability of open source application lifecycle management tools. Java is an example of an interoperable programming language that allows for programs to be written once and run anywhere with a Java virtual machine. A program in Java, so long as it does not use system-specific functionality, will maintain interoperability with all systems that have a Java virtual machine available. Applications will maintain compatibility because, while the implementation is different, the underlying language interfaces are the same. Achieving software Software interoperability is achieved through five interrelated ways: Product testing Products produced to a common standard, or to a sub-profile thereof, depend on the clarity of the standards, but there may be discrepancies in their implementations that system or unit testing may not uncover. This requires that systems formally be tested in a production scenario – as they will be finally implemented – to ensure they actually will intercommunicate as advertised, i.e. they are interoperable. Interoperable product testing is different from conformance-based product testing as conformance to a standard does not necessarily engender interoperability with another product which is also tested for conformance. Product engineering Implements the common standard, or a sub-profile thereof, as defined by the industry and community partnerships with the specific intention of achieving interoperability with other software implementations also following the same standard or sub-profile thereof. Industry and community partnership Industry and community partnerships, either domestic or international, sponsor standard workgroups with the purpose of defining a common standard that may be used to allow software systems to intercommunicate for a defined purpose. At times an industry or community will sub-profile an existing standard produced by another organization to reduce options and thus make interoperability more achievable for implementations. Common technology and intellectual property The use of a common technology or intellectual property may speed up and reduce the complexity of interoperability by reducing variability between components from different sets of separately developed software products and thus allowing them to intercommunicate more readily. This technique has some of the same technical results as using a common vendor product to produce interoperability. The common technology can come through third-party libraries or open-source developments. 
Standard implementation Software interoperability requires a common agreement that is normally arrived at via an industrial, national or international standard. Each of these has an important role in reducing variability in intercommunication software and enhancing a common understanding of the end goal to be achieved. Unified interoperability Market dominance and power Interoperability tends to be regarded as an issue for experts and its implications for daily living are sometimes underrated. The European Union Microsoft competition case shows how interoperability concerns important questions of power relationships. In 2004, the European Commission found that Microsoft had abused its market power by deliberately restricting interoperability between Windows work group servers and non-Microsoft work group servers. By doing so, Microsoft was able to protect its dominant market position for work group server operating systems, the heart of corporate IT networks. Microsoft was ordered to disclose complete and accurate interface documentation, which could enable rival vendors to compete on an equal footing (the interoperability remedy). Interoperability has also surfaced in the software patent debate in the European Parliament (June–July 2005). Critics claim that because patents on techniques required for interoperability are kept under RAND (reasonable and non-discriminatory licensing) conditions, customers will have to pay license fees twice: once for the product and, in the appropriate case, once for the patent-protected program the product uses. Business processes Interoperability is often more of an organizational issue. Interoperability can have a significant impact on the organizations concerned, raising issues of ownership (do people want to share their data? or are they dealing with information silos?), labor relations (are people prepared to undergo training?) and usability. In this context, a more apt definition is captured in the term business process interoperability. Interoperability can have important economic consequences; for example, research has estimated the cost of inadequate interoperability in the US capital facilities industry to be $15.8 billion a year. If competitors' products are not interoperable (due to causes such as patents, trade secrets or coordination failures), the result may well be monopoly or market failure. For this reason, it may be prudent for user communities or governments to take steps to encourage interoperability in various situations. At least 30 international bodies and countries have implemented eGovernment-based interoperability framework initiatives called e-GIF while in the US there is the NIEM initiative. Medical industry The need for plug-and-play interoperability – the ability to take a medical device out of its box and easily make it work with one's other devices – has attracted great attention from both healthcare providers and industry. Increasingly, medical devices like incubators and imaging systems feature software that integrates at the point of care and with electronic systems, such as electronic medical records. At the 2016 Regulatory Affairs Professionals Society (RAPS) meeting, experts in the field like Angela N. Johnson with GE Healthcare and Jeff Shuren of the United States Food and Drug Administration provided practical seminars on how companies developing new medical devices, and hospitals installing them, can work more effectively to align interoperable software systems. 
Railways Railways have greater or lesser interoperability depending on conforming to standards of gauge, couplings, brakes, signalling, loading gauge, and structure gauge to mention a few parameters. For passenger rail service, different railway platform height and width clearance standards may also affect interoperability. North American freight and intercity passenger railroads are highly interoperable, but systems in Europe, Asia, Africa, Central and South America, and Australia are much less so. The parameter most difficult to overcome (at reasonable cost) is incompatibility of gauge, though variable gauge axle systems are increasingly used. Telecommunications In telecommunications, the term can be defined as: The ability to provide services to and accept services from other systems, and to use the services exchanged to enable them to operate effectively together. ITU-T provides standards for international telecommunications. The condition achieved among communications-electronics systems or items of communications-electronics equipment when information or services can be exchanged directly and satisfactorily between them or their users. The degree of interoperability should be defined when referring to specific cases. In two-way radio, interoperability is composed of three dimensions: compatible communications paths (compatible frequencies, equipment and signaling), radio system coverage or adequate signal strength, and; scalable capacity. Organizations dedicated to interoperability Many organizations are dedicated to interoperability. Some concentrate on eGovernment, eBusiness or data exchange in general. Global Internationally, Network Centric Operations Industry Consortium facilitates global interoperability across borders, language and technical barriers. In the built environment, the International Alliance for Interoperability started in 1994, and was renamed buildingSMART in 2005. Europe In Europe, the European Commission and its IDABC program issue the European Interoperability Framework. IDABC was succeeded by the Interoperability Solutions for European Public Administrations (ISA) program. They also initiated the Semantic Interoperability Centre Europe (SEMIC.EU). A European Land Information Service (EULIS) was established in 2006, as a consortium of European National Land Registers. The aim of the service is to establish a single portal through which customers are provided with access to information about individual properties, about land and property registration services, and about the associated legal environment. The European Interoperability Framework (EIF) considered four kinds of interoperability: legal interoperability, organizational interoperability, semantic interoperability, and technical interoperability. In the European Research Cluster on the Internet of Things (IERC) and IoT Semantic Interoperability Best Practices; four kinds of interoperability are distinguished: syntactical interoperability, technical interoperability, semantic interoperability, and organizational interoperability. US In the United States, the General Services Administration Component Organization and Registration Environment (CORE.GOV) initiative provided a collaboration environment for component development, sharing, registration, and reuse in the early 2000s. A related initiative is the ongoing National Information Exchange Model (NIEM) work and component repository. The National Institute of Standards and Technology serves as an agency for measurement standards. 
See also Computer and information technology Architecture of Interoperable Information Systems List of computer standards Model Driven Interoperability, framework Semantic Web, standard for making Internet data machine readable Business Business interoperability interface, between an organization's systems and processes Enterprise interoperability, ability to link activities in an efficient and competitive way Other Collaboration, general concept Polytely, problem solving Universal Data Element Framework, information indexing Notes References External links "When and How Interoperability Drives Innovation," by Urs Gasser and John Palfrey GIC - The Greek Interoperability Centre: A Research Infrastructure for Interoperability in eGovernment and eBusiness, in SE Europe and the Mediterranean Simulation Interoperability Standards Organization (SISO) Interoperability: What is it and why should I want it? Ariadne 24 (2000) Interoperability Constitution - DOE's GridWise Architecture Council Interoperability Context-Setting Framework - DOE's GridWise Architecture Council Decision Maker's Interoperability Checklist - DOE's GridWise Architecture Council OA Journal on Interoperability in Business Information Systems University of New Hampshire Interoperability Laboratory - premier research facility on interoperability of computer networking technologies Interoperability vs. intraoperability: your open choice on Bob Sutor blog, 6 December 2006 ECIS European Committee for Interoperable Systems Gradmann, Stefan. INTEROPERABILITY. A key concept for large scale, persistent digital libraries. DL.org Digital Library Interoperability, Best Practices and Modelling Foundations Computing terminology Telecommunications engineering Product testing
Interoperability
Technology,Engineering
5,308
46,601,982
https://en.wikipedia.org/wiki/Distributive%20numeral
In linguistics, a distributive numeral, or distributive number word, is a word that answers "how many times each?" or "how many at a time?", such as singly or doubly. They are contrasted with multipliers. In English, this part of speech is rarely used and much less recognized than cardinal numbers and ordinal numbers, but it is clearly distinguished and commonly used in Latin and several Romance languages, such as Romanian. English In English distinct distributive numerals exist, such as singly, doubly, and triply, and are derived from the corresponding multiplier (of Latin origin, via French) by suffixing -y (reduction of Middle English -lely > -ly). However, this is more commonly expressed periphrastically, such as "one by one", "two by two"; "one at a time", "two at a time"; "one of each", "two of each"; "in twos", "in threes"; or using a counter word as in "in groups of two" or "two pieces to a ...". Examples include "Please get off the bus one by one so no one falls.", "She jumped up the steps two at a time.", "Students worked in the lab in twos and threes.", "Students worked in groups of two and three.", and "Students worked two people to a team." The suffixes -some (as in twosome, threesome) and -fold (as in two-fold, three-fold) are also used, though also relatively infrequently. For musical groups solo, duo, trio, quartet, etc. are commonly used, and pair is used for a group of two. A conspicuous use of distributive numbers is in arity or adicity, to indicate how many parameters a function takes. Most commonly this uses Latin distributive numbers and -ary, as in unary, binary, ternary, but sometimes Greek numbers are used instead, with -adic, as in monadic, dyadic, triadic. Other languages Georgian, Latin, and Romanian are notable languages with distributive numerals; see Romanian distributive numbers. An example of this difference can be seen with the distributive number for 'one hundred'. While the cardinal number is 'centum', the distributive form is "centēnī,-ae, a". In Japanese numerals, distributive forms are formed regularly from a cardinal number, a counter word, and the suffix , as in . In Bisayan languages, notably Cebuano, Hiligaynon, and Waray, distributive numbers are formed by adding the prefix tag- on the cardinal number, as in tagpito (seven of each) and tag-upat (four of each). In Cebuano, some distributive forms undergo metathesis or syncope, such as tagsa (from tag-usa), tagurha (from tagduha), tagutlo (from tagtulo), and tagilma (from taglima). In Turkish, one of the -ar/-er suffixes (chosen according to vowel harmony) is added to the end of a cardinal numeral, as in "birer" (one of each) and "dokuzar" (nine of each). If the numeral ends with a vowel, a letter ş comes to the middle; as in "ikişer" (two of each) and "altışar" (six of each). See also Cardinal number Ordinal number References Gil, David. 2013. Distributive numerals. In: Dryer, Matthew S. & Haspelmath, Martin (eds.) The World Atlas of Language Structures Online. Leipzig: Max Planck Institute for Evolutionary Anthropology. Accessed on 2019-07-23. Numerals
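The Turkish rule described above (suffix -ar/-er chosen by vowel harmony, with a buffer ş after a vowel-final numeral) is regular enough to express as a short function. The sketch below is a simplification that covers only the examples given and ignores other orthographic details of Turkish.

```python
BACK_VOWELS = set("aıou")
FRONT_VOWELS = set("eiöü")

def turkish_distributive(numeral: str) -> str:
    """Simplified sketch of forming a Turkish distributive numeral."""
    vowels = BACK_VOWELS | FRONT_VOWELS
    last_vowel = next(c for c in reversed(numeral) if c in vowels)
    suffix = "ar" if last_vowel in BACK_VOWELS else "er"   # vowel harmony
    if numeral[-1] in vowels:                              # buffer consonant ş
        suffix = "ş" + suffix
    return numeral + suffix

for n in ["bir", "iki", "altı", "dokuz"]:
    print(n, "->", turkish_distributive(n))
# bir -> birer, iki -> ikişer, altı -> altışar, dokuz -> dokuzar
```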
Distributive numeral
Mathematics
865
23,855,065
https://en.wikipedia.org/wiki/GNOME%20Shell
GNOME Shell is the graphical shell of the GNOME desktop environment starting with version 3, which was released on April 6, 2011. It provides basic functions like launching applications and switching between windows. GNOME Shell replaced GNOME Panel and some ancillary components of GNOME 2. GNOME Shell is written in C and JavaScript as a plugin for Mutter. In contrast to the KDE Plasma Workspaces, a software framework intended to facilitate the creation of multiple graphical shells for different devices, the GNOME Shell is intended to be used on desktop computers with large screens operated via keyboard and mouse, as well as portable computers with smaller screens operated via their keyboard, touchpad or touchscreen. History The first concepts for GNOME Shell were created during GNOME's User Experience Hackfest 2008 in Boston. After criticism of the traditional GNOME desktop and accusations of stagnation and lacking vision, the resulting discussion led to the announcement of GNOME 3.0 in April 2009. Since then Red Hat has been the main driver of GNOME Shell's development. Pre-release versions of GNOME Shell were first made available in August 2009 and became regular, non-default part of GNOME in version 2.28 in September 2009. It was finally shipped as GNOME's default user interface on April 6, 2011. Design As graphical shell (graphical front-end/graphical shell/UX/UI) of the GNOME desktop environment, its design is guided by the GNOME UX Design Team. Design components The GNOME Shell comprises the following graphical and functional elements: Top bar System status area Activities Overview Dash Window picker Application picker Search Notifications and messaging tray Application switcher Indicators tray (deprecated, waiting on new specification) Software architecture GNOME Shell is tightly integrated with Mutter, a compositing window manager and Wayland compositor. It is based upon Clutter to provide visual effects and hardware acceleration. According to GNOME Shell maintainer Owen Taylor, it is set up as a Mutter plugin largely written in JavaScript and uses GUI widgets provided by GTK+ version 3. Features Changes to the user interface (UI) include, but are not limited to: Clutter and Mutter support multi-touch gestures. Support for HiDPI monitors. A new Activities overview, which houses: A dock (called "Dash") for quickly switching between and launching applications A window picker, similar to macOS's Mission Control, also incorporating a workspace switcher/manager An application picker which allows for reordering application icons and creating application groups. A search bar which handles launching applications, searching for files, and performing web searches. "Snapping" windows to screen borders to make them fill up a half of the screen or the whole screen A single window button by default, Close, instead of three (configurable). Minimization has been removed due to the lack of a panel to minimize to, in favor of workspace window management. Maximization can be accomplished using the afore-mentioned window snapping, or by double-clicking the window title bar. A fallback mode is offered in versions 3.0–3.6 for those without hardware acceleration which offers the GNOME Panel desktop. This mode can also be toggled through the System Settings menu. GNOME 3.8 removed the fallback mode and replaced it with GNOME Shell extensions that offer a more traditional look and feel. Extensibility The functionality of GNOME Shell can be changed with extensions, which can be written in JavaScript. 
Users can find and install extensions using the GNOME extensions website. Some of these extensions are hosted in GNOME's git repository, though they are not official. Gallery Adoption Arch Linux dropped support for GNOME 2 in favor of GNOME 3 in its repositories in April 2011. Fedora Linux has used GNOME Shell by default since release 15, May 2011. CentOS Stream uses the latest version of GNOME Shell. Sabayon Linux uses the latest version of GNOME Shell. openSUSE's GNOME edition has used GNOME Shell since version 12.1 in November 2011. Mageia 2 and later include GNOME Shell, since May 2012. Debian 8 and later feature GNOME Shell in the default desktop, since April 2015. Solaris 11.4 replaced GNOME 2 with GNOME Shell in August 2018. Ubuntu has used GNOME Shell by default since 17.10, October 2017, after Canonical ceased development of Unity. It has been available for installation in the repositories since version 11.10. An alternative flavor, Ubuntu GNOME, was released alongside Ubuntu 12.10, and gained official flavor status by Ubuntu 13.04. Reception GNOME Shell has received mixed reviews: it has been criticized for a variety of reasons, mostly related to design decisions and reduced user control over the environment. For example, users in the free software community have raised concerns that the planned tight integration with Mutter will mean that users of GNOME Shell will not be able to switch to an alternative window manager without breaking their desktop. In particular, users might not be able to use Compiz with GNOME Shell while retaining access to the same types of features that older versions of GNOME allowed. Reviews have generally become more positive over time, with newer releases addressing many of the annoyances reported by users. See also Unity – a shell interface for GNOME used by old versions of Ubuntu KDE Plasma - a shell built with Qt References External links GNOME Graphical shells that use GTK Software that uses Clutter (software) Software that uses Meson User interfaces
GNOME Shell
Technology
1,118
21,289
https://en.wikipedia.org/wiki/Nautical%20mile
A nautical mile is a unit of length used in air, marine, and space navigation, and for the definition of territorial waters. Historically, it was defined as the meridian arc length corresponding to one minute ( of a degree) of latitude at the equator, so that Earth's polar circumference is very near to 21,600 nautical miles (that is 60 minutes × 360 degrees). Today the international nautical mile is defined as . The derived unit of speed is the knot, one nautical mile per hour. Unit symbol There is no single internationally agreed symbol, with several symbols in use. NM is used by the International Civil Aviation Organization. nmi is used by the Institute of Electrical and Electronics Engineers and the United States Government Publishing Office. M is used as the abbreviation for the nautical mile by the International Hydrographic Organization. nm is a non-standard abbreviation used in many maritime applications and texts, including U.S. Government Coast Pilots and Sailing Directions. It conflicts with the SI symbol for nanometre. History The word mile is from the Latin phrase for a thousand paces: . Navigation at sea was done by eye until around 1500 when navigational instruments were developed and cartographers began using a coordinate system with parallels of latitude and meridians of longitude. The earliest reference of 60 miles to a degree is a map by Nicolaus Germanus in a 1482 edition of Ptolemy's Geography indicating one degree of longitude at the Equator contains "". An earlier manuscript map by Nicolaus Germanus in a previous edition of Geography states "" ("one degree longitude and latitude under the equator forms 500 stadia, which make 62 miles"). Whether a correction or convenience, the reason for the change from 62 to 60 miles to a degree is not explained. Eventually, the ratio of 60 miles to a degree appeared in English in a 1555 translation of Pietro Martire d'Anghiera's Decades: "[Ptolemy] assigned likewise to every degree three score miles." By the late 16th century English geographers and navigators knew that the ratio of distances at sea to degrees was constant along any great circle (such as the equator, or any meridian), assuming that Earth was a sphere. In 1574, William Bourne stated in A Regiment for the Sea the "rule to raise a degree" practised by navigators: "But as I take it, we in England should allowe 60 myles to one degrée: that is, after 3 miles to one of our Englishe leagues, wherefore 20 of oure English leagues shoulde answere to one degrée." Likewise, Robert Hues wrote in 1594 that the distance along a great circle was 60 miles per degree. However, these referred to the old English mile of 5000 feet and league of 15,000 feet, relying upon Ptolemy's underestimate of the Earth's circumference. In the early seventeenth century, English geographers started to acknowledge the discrepancy between the angular measurement of a degree of latitude and the linear measurement of miles. In 1624 Edmund Gunter suggested 352,000 feet to a degree (5866 feet per arcminute). In 1633, William Oughtred suggested 349,800 feet to a degree (5830 feet per arcminute). Both Gunter and Oughtred put forward the notion of dividing a degree into 100 parts, but their proposal was generally ignored by navigators. The ratio of 60 miles, or 20 leagues, to a degree of latitude remained fixed while the length of the mile was revised with better estimates of the earth's circumference. 
In 1637, Robert Norwood proposed a new measurement of 6120 feet for an arcminute of latitude, which was within 44 feet of the currently accepted value for a nautical mile. Since the Earth is not a perfect sphere but is an oblate spheroid with slightly flattened poles, a minute of latitude is not constant, but about 1,862 metres at the poles and 1,843 metres at the Equator. France and other metric countries state that in principle a nautical mile is an arcminute of a meridian at a latitude of 45°, but that is a modern justification for a more mundane calculation that was developed a century earlier. By the mid-19th century, France had defined a nautical mile via the original 1791 definition of the metre, one ten-millionth of a quarter meridian. So became the metric length for a nautical mile. France made it legal for the French Navy in 1906, and many metric countries voted to sanction it for international use at the 1929 International Hydrographic Conference. Both the United States and the United Kingdom used an average arcminute—specifically, a minute of arc of a great circle of a sphere having the same surface area as the Clarke 1866 ellipsoid. The authalic (equal area) radius of the Clarke 1866 ellipsoid is . The resulting arcminute is . The United States chose five significant digits for its nautical mile, 6,080.2 feet, whereas the United Kingdom chose four significant digits for its Admiralty mile, 6,080 feet. In 1929 the international nautical mile was defined by the First International Extraordinary Hydrographic Conference in Monaco as exactly 1,852 metres (which is ). The United States did not adopt the international nautical mile until 1954. Britain adopted it in 1970, but legal references to the obsolete unit are now converted to 1,853 metres (which is ). Similar definitions The metre was originally defined as of the length of the meridian arc from the North pole to the equator (1% of a centesimal degree of latitude), thus one kilometre of distance corresponds to one centigrad (also known as centesimal arc minute) of latitude. The Earth's circumference is therefore approximately 40,000 km. The equatorial circumference is slightly longer than the polar circumference the measurement based on this ( = 1,855.3 metres) is known as the geographical mile. Using the definition of a degree of latitude on Mars, a Martian nautical mile equals to . This is potentially useful for celestial navigation on a human mission to the planet, both as a shorthand and a quick way to roughly determine the location. See also Nautical measured mile Conversion of units Orders of magnitude (length) Notes References Mile Units of length Customary units of measurement in the United States
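The metric figures above can be checked with a few lines of arithmetic: a quarter meridian of 10,000,000 m (the original definition of the metre) spans 90° × 60 = 5,400 arcminutes, giving roughly 1,851.85 m per arcminute, which was rounded to the 1,852 m international nautical mile. A small illustrative Python sketch:

```python
# Arcminute of latitude implied by the original metre definition:
# a quarter meridian of 10,000,000 m spans 90 degrees = 5,400 arcminutes.
quarter_meridian_m = 10_000_000
arcminutes = 90 * 60
print(quarter_meridian_m / arcminutes)   # ~1851.85 m, rounded to 1852 m

INTERNATIONAL_NM_M = 1852                # exact, by the 1929 definition

def nm_to_km(nm: float) -> float:
    return nm * INTERNATIONAL_NM_M / 1000

def knots_to_mps(knots: float) -> float:
    """One knot is one nautical mile per hour."""
    return knots * INTERNATIONAL_NM_M / 3600

print(nm_to_km(21_600))   # polar circumference of ~21,600 NM -> ~40,003 km
print(knots_to_mps(10))   # 10 knots is about 5.14 m/s
```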
Nautical mile
Mathematics
1,305
65,093,532
https://en.wikipedia.org/wiki/Focused%20proof
In mathematical logic, focused proofs are a family of analytic proofs that arise through goal-directed proof-search, and are a topic of study in structural proof theory and reductive logic. They form the most general definition of goal-directed proof-search—in which someone chooses a formula and performs hereditary reductions until the result meets some condition. The extremal case where reduction only terminates when axioms are reached forms the sub-family of uniform proofs. A sequent calculus is said to have the focusing property when focused proofs are complete for some terminating condition. For System LK, System LJ, and System LL, uniform proofs are focused proofs where all the atoms are assigned negative polarity. Many other sequent calculi has been shown to have the focusing property, notably the nested sequent calculi of both the classical and intuitionistic variants of the modal logics in the S5 cube. Uniform proofs In the sequent calculus for an intuitionistic logic, the uniform proofs can be characterised as those in which the upward reading performs all right rules before the left rules. Typically, uniform proofs are not complete for the logic i.e., not all provable sequents or formulas admit a uniform proof, so one considers fragments where they are complete e.g., the hereditary Harrop fragment of intuitionistic logic. Due to the deterministic behaviour, uniform proof-search has been used as the control mechanism defining the programming language paradigm of logic programming. Occasionally, uniform proof-search is implemented in a variant of the sequent calculus for the given logic where context management is automatic, thereby increasing the fragment for which one can define a logic programming language. Focused proofs The focusing principle was originally classified through the disambiguation between synchronous and asynchronous connectives in linear logic i.e., connectives that interact with the context and those that do not, as a consequence of research on logic programming. They are now an increasingly important example of control in reductive logic, and can drastically improve proof-search procedures in industry. The essential idea of focusing is to identify and coalesce the non-deterministic choices in a proof, so that a proof can be seen as an alternation of negative phases (where invertible rules are applied eagerly) and positive phases (where applications of the other rules are confined and controlled). Polarisation According to the rules of the sequent calculus, formulas are canonically put into one of two classes called positive and negative e.g., in LK and LJ the formula is positive. The only freedom is over atoms, which are assigned a polarity freely. For negative formulas, provability is invariant under the application of a right rule; and, dually, for a positive formulas provability is invariant under the application of a left rule. In either case one can safely apply rules in any order to hereditary sub-formulas of the same polarity. In the case of a right rule applied to a positive formula, or a left rule applied to a negative formula, one may result in invalid sequents e.g., in LK and LJ there is no proof of the sequent beginning with a right rule. A calculus admits the focusing principle if when an original reduct was provable then the hereditary reducts of the same polarity are also provable. That is, one can commit to focusing on decomposing a formula and its sub-formulas of the same polarity without loss of completeness. 
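As a loose illustration of the goal-directed search behind the logic-programming reading of uniform proofs mentioned above, the sketch below performs naive backward chaining over propositional Horn clauses: a goal is repeatedly reduced by clause bodies until only facts remain. The clause names are invented, and the sketch does not model polarities or any particular focused calculus.

```python
# Toy goal-directed proof search over propositional Horn clauses.
# A clause maps a head atom to a list of bodies (each a list of atoms);
# facts are clauses with an empty body.
PROGRAM = {
    "path_a_c": [["edge_a_b", "path_b_c"]],
    "path_b_c": [["edge_b_c"]],
    "edge_a_b": [[]],   # fact
    "edge_b_c": [[]],   # fact
}

def prove(goal: str, depth: int = 0, limit: int = 20) -> bool:
    """Reduce the goal using matching clauses until only facts remain."""
    if depth > limit:          # crude guard against non-terminating search
        return False
    for body in PROGRAM.get(goal, []):
        if all(prove(subgoal, depth + 1, limit) for subgoal in body):
            return True
    return False

print(prove("path_a_c"))   # True: reduces to the two facts
print(prove("path_c_a"))   # False: no clause has this head
```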
Focused system A sequent calculus is often shown to have the focusing property by working in a related calculus where polarity explicitly controls which rules apply. Proofs in such systems are in focused, unfocused, or neutral phases, where the first two are characterised by hereditary decomposition; and the latter by forcing a choice of focus. One of the most important operational behaviours a procedure can undergo is backtracking i.e., returning to an earlier stage in the computation where a choice was made. In focused systems for classical and intuitionistic logic, the use of backtracking can be simulated by pseudo-contraction. Let and denote change of polarity, the former making a formula negative, and the latter positive; and call a formula with an arrow neutral. Recall that is positive, and consider the neutral polarized sequent , which is interpreted as the actual sequent . For neutral sequents such as this, the focused system forces one to make an explicit choice of which formula to focus on, denoted by . To perform a proof-search the best thing is to choose the left formula, since is positive, indeed (as discussed above) in some cases there are no proofs where the focus is on the right formula. To overcome this, some focused calculi create a backtracking point such that focusing on the right yields , which is still as . The second formula on the right can be removed only when the focused phase has finished, but if proof-search gets stuck before this happens the sequent may remove the focused component thereby returning to the choice e.g., must be taken to as no other reductive inference can be made. This is a pseudo-contraction since it has the syntactic form of a contraction on the right, but the actual formula doesn't exist i.e., in the interpretation of the proof in the focused system the sequent has only one formula on the right. References Logic Proof theory Reductionism Logic programming
Focused proof
Mathematics
1,138
44,799,308
https://en.wikipedia.org/wiki/Dietmar%20Seyferth
Dietmar Seyferth (January 11, 1929 – June 6, 2020) was an emeritus professor of chemistry at the Massachusetts Institute of Technology. He published widely on topics in organometallic chemistry and was the founding editor of the journal Organometallics. Biography Seyferth was born in 1929 in Chemnitz, Saxony, Germany, and received his college education at the University of Buffalo. His PhD thesis, which focused on main group chemistry, was completed under the mentorship of Eugene G. Rochow at Harvard. Seyferth spent his entire academic career at MIT, where he initially concentrated on organophosphorus, organosilicon, and organomercury chemistry. He also contributed to organocobalt chemistry and organoiron chemistry, e.g. the popularization of Fe2S2(CO)6. He died on Saturday, June 6, 2020, due to complications from COVID-19 during the COVID-19 pandemic in Massachusetts. Seyferth was widely recognized, notably with the American Chemical Society Award in Organometallic Chemistry and election to the U.S. National Academy of Sciences. See also Seyferth–Gilbert homologation References 1929 births 2020 deaths Inorganic chemists University at Buffalo alumni Harvard University alumni Members of the United States National Academy of Sciences German emigrants to the United States Deaths from the COVID-19 pandemic in Massachusetts
Dietmar Seyferth
Chemistry
296
2,643,012
https://en.wikipedia.org/wiki/Archive%20site
In web archiving, an archive site is a website that stores information on webpages from the past for anyone to view. Common techniques Two common techniques for archiving websites are using a web crawler or soliciting user submissions: Using a web crawler: By using a web crawler (e.g., the Internet Archive) the service will not depend on an active community for its content, and thereby can build a larger database faster. However, web crawlers are only able to index and archive information the public has chosen to post to the Internet, or that is available to be crawled, as website developers and system administrators have the ability to block web crawlers from accessing [certain] web pages (using a robots.txt). User submissions: While it can be difficult to start user submission services due to potentially low rates of user submissions, this system can yield some of the best results. By crawling web pages one is only able to obtain the information the public has chosen to post online; however, potential content providers may not bother to post certain information, assuming no one would be interested in it, because they lack a proper venue in which to post it, or because of copyright concerns. However, users who see someone wants their information may be more apt to submit it. Examples Google Groups On 12 February 2001, Google acquired the usenet discussion group archives from Deja.com and turned it into their Google Groups service. They allow users to search old discussions with Google's search technology, while still allowing users to post to the mailing lists. Internet Archive The Internet Archive is building a compendium of websites and digital media. Starting in 1996, the Archive has been employing a web crawler to build up their database. It is one of the best known archive sites. NBCUniversal Archives NBCUniversal Archives offer access to exclusive content from NBCUniversal and its subsidiaries. Their NBCUniversal Archives website provides easy viewing of past and recent news clips, and it is a prime example of a news archive. Nextpoint Nextpoint offers an automated cloud-based, SaaS for marketing, compliance, and litigation related needs including electronic discovery. PANDORA Archive PANDORA (Pandora Archive), founded in 1996 by the National Library of Australia, stands for Preserving and Accessing Networked Documentary Resources of Australia, which encapsulates their mission. They provide a long-term catalog of select online publications and web sites authored by Australians or that are of an Australian topic. They employ their PANDAS (PANDORA Digital Archiving System) when building their catalog. textfiles.com textfiles.com is a large library of old text files maintained by Jason Scott Sadofsky. Its mission is to archive the old documents that had floated around the bulletin board systems (BBS) of his youth and to document other people's experiences on the bulletin board systems. See also Internet Archive Pandora Archive WebCite Web archiving References Data management Online archives Web archiving initiatives
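Since well-behaved crawlers consult robots.txt before fetching a page for archiving, a minimal version of that check can be written with Python's standard urllib.robotparser module. The URLs and user-agent string below are placeholders for illustration only.

```python
from urllib.robotparser import RobotFileParser

def may_archive(page_url: str, robots_url: str, user_agent: str = "ExampleArchiveBot") -> bool:
    """Return True if robots.txt permits this user agent to fetch the page."""
    parser = RobotFileParser()
    parser.set_url(robots_url)
    parser.read()                      # fetches and parses the robots.txt file
    return parser.can_fetch(user_agent, page_url)

# Placeholder URLs for illustration only.
if may_archive("https://example.org/some/page.html", "https://example.org/robots.txt"):
    print("Allowed to crawl and archive this page.")
else:
    print("Blocked by robots.txt; skip it.")
```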
Archive site
Technology
609
61,771,858
https://en.wikipedia.org/wiki/Curvularia%20geniculata
Curvularia geniculata is a fast-growing anamorphic fungus in the division Ascomycota, most commonly found in soil, especially in areas of warmer climates. The fungus is a pathogen, mainly causing plant and animal infections, and rarely causing human infections. C. geniculata is characterized by its curved conidia, which has a dark brown centre and pale tapered tips, and produces anti-fungal compounds called Curvularides A-E. History and taxonomy The fungus was discovered by American botanist Samuel Mills Tracy and mycologist Franklin Sumner Earle in Starkville, Mississippi 1894 on Love grass (Eragrostis rachitricha) grown from imported seeds. They classified the fungus as Helminthosporium geniculatum; however, the Heliminthosporium species later got segregated into four different genera, one being the genus Curvularia. In 1923, Karel Bernard Boedijn, a Dutch botanist and mycologist, reclassified the fungus as Curvularia geniculata which is the asexual form (anamorph) of the fungus. Associated with C. geniculata is the sexual form (teleomorph), classified first as Cochliobolus geniculatus in 1964 and later reclassified to Pseudocochliobolus geniculatus in 1978 by Richard Robert Nelson. Morphology Curvularia geniculata colonies grown on Oxford agar can grow rapidly to 3–5 cm in diameter, with a dark brown and hairy appearance. The fungus produces conidiophores up to 600 μm long, becoming lighter near the tip, and are septate, meaning the structure is subdivided by walls called septa. The conidiophores will produce 4-septate conidia (18–37 x 8–14 μm), consisting of a curved, broad central section that is dark brown and paler tapered ends. C. geniculata can be mistaken for Curvularia lunata because the latter is more commonly found. These two can be distinguished because C. lunata produces 3-septate conidia. Growth and physiology The optimal growth temperature for C. geniculata is . As a thermotolerant, the fungus can grow up to , but grows at a slower rate. The culture age (20-, 40-, and 60-day-old) affect the germination rate, germ tube growth and branching in different temperature conditions. Conidia germination was found to increase as temperature increased to 15 °C in all cultures. However, as the temperature reached or passed over 25 °C, germination declined in 40- and 60-day-old cultures, but not in 20-day-old cultures. In all the cultures, germ tube growth and branching increased as the temperature increased to 25 °C, but decreased above 25 °C. Habitat and ecology Curvularia geniculata is frequently reported to be found in soil and plants, particularly in warmer areas. The fungus was found to be associated with many plant species within the families Amaranthaceae, Apiaceae, Araceae, Asteraceae, Balsaminaceae, Basellaceae, Brassicaceae, Convolvulaceae, Fabaceae, Gesneriaceae, Marantaceae, Oleaceae, Papaveraceae, Poaceae, Solanacae, Vitaceae and Zingiberaceae. The fungus has been commonly found in Asia (Bangladesh, Bhutan, Brunei, Hong Kong, India, Malaysia, Myanmar, Nepal, Singapore and Thailand), Africa (Nigeria, Seychelles, Sierra Leone, South Africa and Uganda), Europe (USSR and Italy), North America (Bahamas, Canada, Central America, Cuba, Jamaica, Tobago, Trinidad and the USA), Oceania (Australia, Fiji, Papua New Guinea and Solomon Islands) and South America (Brazil, Peru and Venezuela). Pathogenicity Curvularia geniculata is most often associated with a wide range of plant species, especially in tropical countries, because it has little host specificity. 
Not only is this fungus commonly pathogenic to plants, but it is also frequently found in animals and occasionally found in humans. Members of the Curvularia species produce metabolites and toxins, some with anti-fungal properties. C. geniculata produces anti-fungal compounds, Curvularides A-E, which function in cyclic peptide regulation and cell wall degradation. Curvularide B was found to use its anti-fungal properties on Candida albicans, a fungus often associated with HIV patients. Plant infections Curvularia geniculata, a common plant pathogen, colonizes the roots of many plant species. For instance, Witchweed is a plant host of C. geniculata which causes huge crop losses because it parasitizes corn, grain, and many other plant species. Upon germination, the fungus is able to cause infection by penetrating the plant with its infectious pegs called appressorium, allowing the hyphae to grow in and between the host cells, resulting in cell death and leafspots. Animal infections Curvularia geniculata is a frequent animal pathogen that has been found to cause many animal diseases such as sinus infections in cattle, swelling of the skin (subcutaneous tumefactions) of dogs and horses, bone infections (osteomyelitis) in dogs, and central nervous infections in birds. The fungus has been identified as the common causal agent of mycetomata, a chronic fungal infection, which gives rise to pigmented nodules on the body of horses upon traumatic injury. Also, C. geniculata has been reported to cause bovine mycotic abortion in cattle, likely by inhalation or ingestion of the conidia by pregnant cows. Human infections Curvularia geniculata rarely contributes to human disease and has been reported in a few cases of keratitis, inflammatory disease often of the feet (mycetomas), (endocarditis) and peritonitis. The fungus enters into the human body via injury to the eye, colonization of the sinus, penetration of the skin or inhalation. Being exposed for a long period of time and contact with soil are the biggest risk factors of getting infected by C. geniculata. Potential treatments The fungus was found to be susceptible to Ketoconazole and itraconazole anti-fungal drugs in vitro. Patients with C. geniculata-induced peritonitis fully recovered upon treatment with anti-fungal medications, amphotericin B and itraconazole. Biotechnology applications Due to industrial activities, mercury is present in the soil which is very toxic and is a possible health hazard to humans and animals. C. geniculata can potentially be used as a method for mercury bioremediation because of its resistant properties to mercury and ability to colonize on plant roots. By colonizing host roots, mercury extracted from the soil can accumulate in the host, reducing the mercury levels in the soil. The fungus was able to remove more than 97% of mercury in vitro. References Pleosporaceae Fungi described in 1896 Fungus species
Curvularia geniculata
Biology
1,498
71,552,759
https://en.wikipedia.org/wiki/Aegis%20Sonix
Aegis Sonix is a music sequencer, software synthesizer and a score editor for the Amiga created by Aegis Development and published in 1987. The application offers a combination of a notation editor and an editor of digital sounds and is able to edit IFF music instruments and other digital sound files. History Commodore International developed but never officially released a sound application called Musicraft for its Amiga series of computers. Aegis Development bought rights to Musicraft from Commodore and contracted the original developers Mark Riley and Gary Koffler to continue its development. The application - now under the Aegis Sonix name - was released in 1987. in 1989, Aegis left the Amiga market and sold its software products to a Californian software producer Oxxi, Inc, which continued to use the Aegis brand. Features The software includes a score, keyboard, and an instrument editor. The program uses two file formats, the SMUS file format, which is an IFF based tracker module music format, and the INSTR file format which contains the instrument data. Sonix can also read music files created by Deluxe Music Construction Set and works as a MIDI sequencer. Sonix offers 8 music tracks: the first four are useable for 4 voices of the Amiga sound hardware and also for the MIDI instruments, the other four are available only for the MIDI instruments. All 8 tracks are independent of each other and can be used at the same time. The score editor allows to write notation for sheet music. The application serves primarily a music synthesis tool for creation of analog or digital sounds and is able to edit IFF music instruments and other digital sound files. Reception A review in the French magazine Tilt commended easy to use menus and good documentation and highlighted the combination of a notation editor and an editor of digital sounds as the main strong point of the application. The AmigaNews magazine evaluated the use of Sonix with MIDI and noted among several benefits few limitations: the program doesn't support recording of a key press on the MIDI keyboard. Sonix was also reviewed in the Keyboard magazine. See also Deluxe Music Construction Set References 1987 software Amiga software Scorewriters Music sequencers
Aegis Sonix is a music sequencer, software synthesizer and a score editor for the Amiga created by Aegis Development and published in 1987. The application offers a combination of a notation editor and an editor of digital sounds and is able to edit IFF music instruments and other digital sound files. History Commodore International developed but never officially released a sound application called Musicraft for its Amiga series of computers. Aegis Development bought the rights to Musicraft from Commodore and contracted the original developers Mark Riley and Gary Koffler to continue its development. The application - now under the Aegis Sonix name - was released in 1987. In 1989, Aegis left the Amiga market and sold its software products to Oxxi, Inc., a Californian software producer, which continued to use the Aegis brand. Features The software includes a score, keyboard, and an instrument editor. The program uses two file formats: the SMUS file format, which is an IFF-based tracker module music format, and the INSTR file format, which contains the instrument data. Sonix can also read music files created by Deluxe Music Construction Set and works as a MIDI sequencer. Sonix offers 8 music tracks: the first four are usable both for the 4 voices of the Amiga sound hardware and for MIDI instruments, while the other four are available only for MIDI instruments. All 8 tracks are independent of each other and can be used at the same time. The score editor allows the user to write notation for sheet music. The application serves primarily as a music synthesis tool for the creation of analog or digital sounds and is able to edit IFF music instruments and other digital sound files. Reception A review in the French magazine Tilt commended the easy-to-use menus and good documentation and highlighted the combination of a notation editor and an editor of digital sounds as the main strong point of the application. The AmigaNews magazine evaluated the use of Sonix with MIDI and noted a few limitations among several benefits: for example, the program does not support recording of key presses on the MIDI keyboard. Sonix was also reviewed in Keyboard magazine. See also Deluxe Music Construction Set References 1987 software Amiga software Scorewriters Music sequencers
Aegis Sonix
Engineering
434
4,890,844
https://en.wikipedia.org/wiki/Institut%20de%20radioprotection%20et%20de%20s%C3%BBret%C3%A9%20nucl%C3%A9aire
The French Institut de radioprotection et de sûreté nucléaire (IRSN) ("Radiation Protection and Nuclear Safety Institute"), located in Fontenay-aux-Roses, is a public establishment of an industrial and commercial character (EPIC) created by the AFSSE Act ( – French Agency for Environmental Health Safety) and by decree no. 2002-254 of February 22, 2002. The IRSN is placed under the joint authority of the ministers for Defence, the Environment, Industry, Health and Research. The IRSN brings together more than 1,500 experts and researchers from the Institut de protection et de sûreté nucléaire (IPSN – Institute for Nuclear Protection and Safety) and the Office de protection contre les rayonnements ionisants (OPRI – Office for Protection against Ionizing Radiation). These scientists are thus competent in nuclear safety, radiation protection, and the control of nuclear and sensitive materials. The IRSN carries out investigations, expert assessments and studies in the fields of nuclear safety, protection against ionizing radiation, protection and control of nuclear material, and protection against malicious acts. History 2001: creation of IRSN by law no. 2001-398 of 9 May 2001, article 51. IRSN was created from the merger of the Institute for Nuclear Protection and Safety (IPSN), which was part of the Commissariat à l'énergie atomique (CEA), and the Office for Protection against Ionizing Radiation (OPRI), which was attached to the Ministry of Health, with the aim of creating a new public research and expertise establishment, independent of industrial operators. 2002: operation of IRSN specified by decree no. 2002-254 of February 22, 2002. 2007: operation of IRSN modified by decree no. 2007-529 of April 6, 2007 to take account of law no. 2006-686 of June 13, 2006, relating to transparency and security in nuclear matters. 2016: operation of IRSN amended by decree no. 2016-283 of March 10, 2016 creating articles R592-15 et seq. in the environment code and repealing decree no. 2002-254 of February 22, 2002. Organization Presentation IRSN has approximately 1,816 employees, including many specialists, engineers, researchers, doctors, agronomists, veterinarians and technicians, who are competent experts in nuclear safety and radiation protection as well as in the control of sensitive nuclear materials. This staff is spread over 9 sites in France, with the Institute's headquarters located in Fontenay-aux-Roses. The largest facilities are located in Fontenay-aux-Roses, Le Vésinet and Cadarache. Establishments are also present in Cherbourg-en-Cotentin, Orsay, Saclay, Tournemire, Villeneuve-lès-Avignon and Tahiti. The Tahiti site is responsible for monitoring radioactive fallout from former French nuclear tests in French Polynesia. See also Chernobyl disaster References External links Government agencies of France Medical and health organizations based in France Nuclear research institutes Nuclear safety in France 2002 establishments in France Hauts-de-Seine Radiation protection organizations
Institut de radioprotection et de sûreté nucléaire
Engineering
641
1,029,177
https://en.wikipedia.org/wiki/Primitive%20polynomial%20%28field%20theory%29
In finite field theory, a branch of mathematics, a primitive polynomial is the minimal polynomial of a primitive element of the finite field . This means that a polynomial of degree with coefficients in is a primitive polynomial if it is monic and has a root in such that is the entire field . This implies that is a primitive ()-root of unity in . Properties Because all minimal polynomials are irreducible, all primitive polynomials are also irreducible. A primitive polynomial must have a non-zero constant term, for otherwise it will be divisible by x. Over GF(2), is a primitive polynomial and all other primitive polynomials have an odd number of terms, since any polynomial mod 2 with an even number of terms is divisible by (it has 1 as a root). An irreducible polynomial F(x) of degree m over GF(p), where p is prime, is a primitive polynomial if the smallest positive integer n such that F(x) divides is . A primitive polynomial of degree has different roots in , which all have order , meaning that any of them generates the multiplicative group of the field. Over GF(p) there are exactly primitive elements and primitive polynomials, each of degree , where is Euler's totient function. The algebraic conjugates of a primitive element in are , , , …, and so the primitive polynomial has explicit form . That the coefficients of a polynomial of this form, for any in , not necessarily primitive, lie in follows from the property that the polynomial is invariant under application of the Frobenius automorphism to its coefficients (using ) and from the fact that the fixed field of the Frobenius automorphism is . Examples Over the polynomial is irreducible but not primitive because it divides : its roots generate a cyclic group of order 4, while the multiplicative group of is a cyclic group of order 8. The polynomial , on the other hand, is primitive. Denote one of its roots by . Then, because the natural numbers less than and relatively prime to are 1, 3, 5, and 7, the four primitive roots in are , , , and . The primitive roots and are algebraically conjugate. Indeed . The remaining primitive roots and are also algebraically conjugate and produce the second primitive polynomial: . For degree 3, has primitive elements. As each primitive polynomial of degree 3 has three roots, all necessarily primitive, there are primitive polynomials of degree 3. One primitive polynomial is . Denoting one of its roots by , the algebraically conjugate elements are and . The other primitive polynomials are associated with algebraically conjugate sets built on other primitive elements with relatively prime to 26: Applications Field element representation Primitive polynomials can be used to represent the elements of a finite field. If α in GF(pm) is a root of a primitive polynomial F(x), then the nonzero elements of GF(pm) are represented as successive powers of α: This allows an economical representation in a computer of the nonzero elements of the finite field, by representing an element by the corresponding exponent of This representation makes multiplication easy, as it corresponds to addition of exponents modulo Pseudo-random bit generation Primitive polynomials over GF(2), the field with two elements, can be used for pseudorandom bit generation. In fact, every linear-feedback shift register with maximum cycle length (which is , where n is the length of the linear-feedback shift register) may be built from a primitive polynomial. 
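The defining property over GF(2), namely that x has order 2^m - 1 modulo a primitive polynomial of degree m, can be checked directly; repeatedly multiplying by x then walks through every nonzero field element, which is exactly what a maximal-length (Galois-form) LFSR does. The sketch below uses the degree-4 polynomial x^4 + x + 1, commonly cited as primitive, with polynomials stored as integer bit masks; it is an illustration, not an optimized implementation.

```python
def gf2_mulmod(a: int, b: int, f: int, m: int) -> int:
    """Multiply two GF(2) polynomials (bit masks) modulo a degree-m polynomial f."""
    result = 0
    while b:
        if b & 1:
            result ^= a
        b >>= 1
        a <<= 1
        if (a >> m) & 1:      # reduce whenever the degree reaches m
            a ^= f
    return result

def order_of_x(f: int, m: int) -> int:
    """Smallest n with x^n = 1 modulo f (assumes f has a nonzero constant term)."""
    power, n = 2, 1           # 'power' holds x^n as a bit mask, starting at x
    while power != 1:
        power = gf2_mulmod(power, 2, f, m)
        n += 1
    return n

F = 0b10011                   # x^4 + x + 1
print(order_of_x(F, 4), 2**4 - 1)   # 15 15 -> the order is maximal, so F is primitive

# The corresponding maximal-length (Galois) LFSR cycles through all 15 nonzero states.
state, seen = 1, []
for _ in range(15):
    seen.append(state)
    state = gf2_mulmod(state, 2, F, 4)   # one LFSR step = multiply by x
print(len(set(seen)))         # 15 distinct states before the sequence repeats
```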
In general, for a primitive polynomial of degree m over GF(2), this process will generate pseudo-random bits before repeating the same sequence. CRC codes The cyclic redundancy check (CRC) is an error-detection code that operates by interpreting the message bitstring as the coefficients of a polynomial over GF(2) and dividing it by a fixed generator polynomial also over GF(2); see Mathematics of CRC. Primitive polynomials, or multiples of them, are sometimes a good choice for generator polynomials because they can reliably detect two bit errors that occur far apart in the message bitstring, up to a distance of for a degree n primitive polynomial. Primitive trinomials A useful class of primitive polynomials is the primitive trinomials, those having only three nonzero terms: . Their simplicity makes for particularly small and fast linear-feedback shift registers. A number of results give techniques for locating and testing primitiveness of trinomials. For polynomials over GF(2), where is a Mersenne prime, a polynomial of degree r is primitive if and only if it is irreducible. (Given an irreducible polynomial, it is not primitive only if the period of x is a non-trivial factor of . Primes have no non-trivial factors.) Although the Mersenne Twister pseudo-random number generator does not use a trinomial, it does take advantage of this. Richard Brent has been tabulating primitive trinomials of this form, such as . This can be used to create a pseudo-random number generator of the huge period ≈ . References External links Field (mathematics) Polynomials
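A CRC check as described above is just polynomial division over GF(2): the sender appends the remainder of the message (shifted up by the generator's degree) divided by the generator, and the receiver verifies that the transmitted bitstring then divides evenly. A minimal sketch follows, using a small 3-bit generator chosen purely for illustration rather than any standardized CRC polynomial.

```python
def crc_remainder(message_bits: str, generator_bits: str) -> str:
    """Remainder of GF(2) polynomial division, as used by CRC codes."""
    degree = len(generator_bits) - 1
    # Append 'degree' zero bits, then divide by the generator modulo 2.
    bits = [int(b) for b in message_bits] + [0] * degree
    gen = [int(b) for b in generator_bits]
    for i in range(len(message_bits)):
        if bits[i]:                          # leading bit set -> XOR in the generator
            for j, g in enumerate(gen):
                bits[i + j] ^= g
    return "".join(str(b) for b in bits[-degree:])

# x^3 + x + 1 (binary 1011), a small generator used only for illustration.
checksum = crc_remainder("11010011101100", "1011")
print(checksum)   # the 3 check bits the sender appends to the message

# The receiver divides the message plus check bits; a zero remainder means no detected error.
print(crc_remainder("11010011101100" + checksum, "1011"))
```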
Primitive polynomial (field theory)
Mathematics
1,073
863,712
https://en.wikipedia.org/wiki/Liquid%20breathing
Liquid breathing is a form of respiration in which a normally air-breathing organism breathes an oxygen-rich liquid which is capable of CO2 gas exchange (such as a perfluorocarbon). The liquid involved requires certain physical properties, such as respiratory gas solubility, density, viscosity, vapor pressure and lipid solubility, which some perfluorochemicals (PFCs) have. Thus, it is critical to choose the appropriate PFC for a specific biomedical application, such as liquid ventilation, drug delivery or blood substitutes. The physical properties of PFC liquids vary substantially; however, the one common property is their high solubility for respiratory gases. In fact, these liquids carry more oxygen and carbon dioxide than blood. In theory, liquid breathing could assist in the treatment of patients with severe pulmonary or cardiac trauma, especially in pediatric cases. Liquid breathing has also been proposed for use in deep diving and space travel. Despite some recent advances in liquid ventilation, a standard mode of application has not yet been established. Approaches As liquid breathing is still a highly experimental technique, there are several proposed approaches. Total liquid ventilation Although total liquid ventilation (TLV) with completely liquid-filled lungs can be beneficial, the complex liquid-filled tube system required is a disadvantage compared to gas ventilation—the system must incorporate a membrane oxygenator, heater, and pumps to deliver to, and remove from the lungs tidal volume aliquots of conditioned perfluorocarbon (PFC). One research group led by Thomas H. Shaffer has maintained that with the use of microprocessors and new technology, it is possible to maintain better control of respiratory variables such as liquid functional residual capacity and tidal volume during TLV than with gas ventilation. Consequently, the total liquid ventilation necessitates a dedicated liquid ventilator similar to a medical ventilator except that it uses a breathable liquid. Many prototypes are used for animal experimentation, but experts recommend continued development of a liquid ventilator toward clinical applications. Specific preclinical liquid ventilator (Inolivent) is currently under joint development in Canada and France. The main application of this liquid ventilator is the ultra-fast induction of therapeutic hypothermia after cardiac arrest. This has been demonstrated to be more protective than slower cooling method after experimental cardiac arrest. Partial liquid ventilation In contrast, partial liquid ventilation (PLV) is a technique in which a PFC is instilled into the lung to a volume approximating functional residual capacity (approximately 40% of total lung capacity). Conventional mechanical ventilation delivers tidal volume breaths on top of it. This mode of liquid ventilation currently seems technologically more feasible than total liquid ventilation, because PLV could utilise technology currently in place in many neonatal intensive-care units (NICU) worldwide. The influence of PLV on oxygenation, carbon dioxide removal and lung mechanics has been investigated in several animal studies using different models of lung injury. Clinical applications of PLV have been reported in patients with acute respiratory distress syndrome (ARDS), meconium aspiration syndrome, congenital diaphragmatic hernia and respiratory distress syndrome (RDS) of neonates. 
In order to correctly and effectively conduct PLV, it is essential to properly dose a patient to a specific lung volume (10–15 ml/kg) to recruit alveolar volume redose the lung with PFC liquid (1–2 ml/kg/h) to oppose PFC evaporation from the lung. If PFC liquid is not maintained in the lung, PLV can not effectively protect the lung from biophysical forces associated with the gas ventilator. New application modes for PFC have been developed. Partial liquid ventilation (PLV) involves filling the lungs with a liquid. This liquid is a perfluorocarbon such as perflubron (brand name Liquivent). The liquid has some unique properties. It has a very low surface tension, similar to the surfactant substances produced in the lungs to prevent the alveoli from collapsing and sticking together during exhalation. It also has a high density, oxygen readily diffuses through it, and it may have some anti-inflammatory properties. In PLV, the lungs are filled with the liquid, the patient is then ventilated with a conventional ventilator using a protective lung ventilation strategy. The hope is that the liquid will help the transport of oxygen to parts of the lung that are flooded and filled with debris, help remove this debris and open up more alveoli improving lung function. The study of PLV involves comparison to protocolized ventilator strategy designed to minimize lung damage. PFC vapor Vaporization of perfluorohexane with two anesthetic vaporizers calibrated for perfluorohexane has been shown to improve gas exchange in oleic acid-induced lung injury in sheep. Predominantly PFCs with high vapor pressure are suitable for vaporization. Aerosol-PFC With aerosolized perfluorooctane, significant improvement of oxygenation and pulmonary mechanics was shown in adult sheep with oleic acid-induced lung injury. In surfactant-depleted piglets, persistent improvement of gas exchange and lung mechanics was demonstrated with Aerosol-PFC. The aerosol device is of decisive importance for the efficacy of PFC aerosolization, as aerosolization of PF5080 (a less purified FC77) has been shown to be ineffective using a different aerosol device in surfactant-depleted rabbits. Partial liquid ventilation and Aerosol-PFC reduced pulmonary inflammatory response. Human usage Medical treatment The most promising area for the use of liquid ventilation is in the field of pediatric medicine. The first medical use of liquid breathing was treatment of premature babies and adults with acute respiratory distress syndrome (ARDS) in the 1990s. Liquid breathing was used in clinical trials after the development by Alliance Pharmaceuticals of the fluorochemical perfluorooctyl bromide, or perflubron for short. Current methods of positive-pressure ventilation can contribute to the development of lung disease in pre-term neonates, leading to diseases such as bronchopulmonary dysplasia. Liquid ventilation removes many of the high pressure gradients responsible for this damage. Furthermore, perfluorocarbons have been demonstrated to reduce lung inflammation, improve ventilation-perfusion mismatch and to provide a novel route for the pulmonary administration of drugs. In order to explore drug delivery techniques that would be useful for both partial and total liquid ventilation, more recent studies have focused on PFC drug delivery using a nanocrystal suspension. The first image is a computer model of a PFC liquid (perflubron) combined with gentamicin molecules. 
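Purely to illustrate the arithmetic of the PLV figures quoted above (an initial fill of roughly 10–15 ml/kg and redosing of 1–2 ml/kg/h), and not as any form of clinical guidance, the quantities for a hypothetical patient mass work out as follows:

```python
# Illustrative arithmetic only, using the ranges quoted in the text above.
body_mass_kg = 3.5                 # hypothetical neonate mass, chosen for the example

initial_fill_ml = (10 * body_mass_kg, 15 * body_mass_kg)
hourly_redose_ml = (1 * body_mass_kg, 2 * body_mass_kg)

print(f"initial PFC fill: {initial_fill_ml[0]:.0f}-{initial_fill_ml[1]:.1f} ml")
print(f"redosing to offset evaporation: {hourly_redose_ml[0]:.1f}-{hourly_redose_ml[1]:.0f} ml per hour")
```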
The second image shows experimental results comparing both plasma and tissue levels of gentamicin after an intratracheal (IT) and intravenous (IV) dose of 5 mg/kg in a newborn lamb during gas ventilation. Note that the plasma levels of the IV dose greatly exceed the levels of the IT dose over the 4 hour study period; whereas, the lung tissue levels of gentamicin when delivered by an intratracheal (IT) suspension, uniformly exceed the intravenous (IV) delivery approach after 4 hours. Thus, the IT approach allows more effective delivery of the drug to the target organ while maintaining a safer level systemically. Both images represent the in-vivo time course over 4 hours. Numerous studies have now demonstrated the effectiveness of PFC liquids as a delivery vehicle to the lungs. Clinical trials with premature infants and adults have been conducted. Since the safety of the procedure and the effectiveness were apparent from an early stage, the US Food and Drug Administration (FDA) gave the product "fast track" status (meaning an accelerated review of the product, designed to get it to the public as quickly as is safely possible) due to its life-saving potential. Clinical trials showed that using perflubron with ordinary ventilators improved outcomes as much as using high frequency oscillating ventilation (HFOV). But because perflubron was not better than HFOV, the FDA did not approve perflubron, and Alliance is no longer pursuing the partial liquid ventilation application. Whether perflubron would improve outcomes when used with HFOV or has fewer long-term consequences than HFOV remains an open question. In 1996 Mike Darwin and Steven B. Harris proposed using cold liquid ventilation with perfluorocarbon to quickly lower the body temperature of victims of cardiac arrest and other brain trauma to allow the brain to better recover. The technology came to be called gas/liquid ventilation (GLV), and was shown able to achieve a cooling rate of 0.5 °C per minute in large animals. It has not yet been tried in humans. Most recently, hypothermic brain protection has been associated with rapid brain cooling. In this regard, a new therapeutic approach is the use of intranasal perfluorochemical spray for preferential brain cooling. The nasopharyngeal (NP) approach is unique for brain cooling due to anatomic proximity to the cerebral circulation and arteries. Based on preclinical studies in adult sheep, it was shown that independent of region, brain cooling was faster during NP-perfluorochemical versus conventional whole body cooling with cooling blankets. To date, there have been four human studies including a completed randomized intra-arrest study (200 patients). Results clearly demonstrated that prehospital intra-arrest transnasal cooling is safe, feasible and is associated with an improvement in cooling time. Proposed uses Diving Gas pressure increases with depth, rising 1 bar () every 10 meters to over 1,000 bar at the bottom of the Mariana Trench. Diving becomes more dangerous as depth increases, and deep diving presents many hazards. All surface-breathing animals are subject to decompression sickness, including aquatic mammals and free-diving humans. Breathing at depth can cause nitrogen narcosis and oxygen toxicity. Holding the breath while ascending after breathing at depth can cause air embolisms, burst lung, and collapsed lung. Special breathing gas mixes such as trimix or heliox reduce the risk of nitrogen narcosis but do not eliminate it. 
Heliox further eliminates the risk of nitrogen narcosis but introduces the risk of helium tremors at great depth. Atmospheric diving suits maintain body and breathing pressure at 1 bar, eliminating most of the hazards of descending, ascending, and breathing at depth. However, the rigid suits are bulky, clumsy, and very expensive. Liquid breathing offers a third option, promising the mobility available with flexible dive suits and the reduced risks of rigid suits. With liquid in the lungs, the pressure within the diver's lungs could accommodate changes in the pressure of the surrounding water without the huge partial pressure gas exposures required when the lungs are filled with gas. Liquid breathing would not result in the saturation of body tissues with high-pressure nitrogen or helium that occurs with the use of non-liquids, and thus would reduce or remove the need for slow decompression. A significant problem, however, arises from the high viscosity of the liquid and the corresponding reduction in its ability to remove CO2. All uses of liquid breathing for diving must involve total liquid ventilation (see above). Total liquid ventilation, however, has difficulty moving enough liquid to carry away CO2, because no matter how great the total pressure is, the amount of partial CO2 gas pressure available to dissolve CO2 into the breathing liquid can never be much more than the pressure at which CO2 exists in the blood (about 40 mm of mercury (Torr)). At these pressures, most fluorocarbon liquids require about 70 mL/kg minute-ventilation volumes of liquid (about 5 L/min for a 70 kg adult) to remove enough CO2 for normal resting metabolism. This is a great deal of fluid to move, particularly as liquids are more viscous and denser than gases (for example, water is about 850 times as dense as air). Any increase in the diver's metabolic activity also increases CO2 production and the breathing rate, which is already at the limits of realistic flow rates in liquid breathing. It seems unlikely that a person could move 10 liters/min of fluorocarbon liquid without assistance from a mechanical ventilator, so "free breathing" may be unlikely. However, it has been suggested that a liquid breathing system could be combined with a CO2 scrubber connected to the diver's blood supply; a US patent has been filed for such a method. Space travel Liquid immersion provides a way to reduce the physical stress of G forces. Forces applied to fluids are distributed as omnidirectional pressures. As liquids cannot be practically compressed, they do not change density under high acceleration such as that experienced in aerial maneuvers or space travel. A person immersed in liquid of the same density as tissue has acceleration forces distributed around the body, rather than applied at a single point such as a seat or harness straps. This principle is used in a new type of G-suit called the Libelle G-suit, which allows aircraft pilots to remain conscious and functioning at more than 10g acceleration by surrounding them with water in a rigid suit. Acceleration protection by liquid immersion is limited by the differential density of body tissues and immersion fluid, limiting the utility of this method to about 15g to 20g. Extending acceleration protection beyond 20g requires filling the lungs with fluid of density similar to water.
An astronaut totally immersed in liquid, with liquid inside all body cavities, will feel little effect from extreme G forces because the forces on a liquid are distributed equally, and in all directions simultaneously. Effects will still be felt because of density differences between different body tissues, so an upper acceleration limit still exists; however, it is likely to be in the hundreds of G. Liquid breathing for acceleration protection may never be practical because of the difficulty of finding a suitable breathing medium of similar density to water that is compatible with lung tissue. Perfluorocarbon fluids are roughly twice as dense as water, hence unsuitable for this application. Examples in fiction Literary works Alexander Beliaev's 1928 science fiction novel Amphibian Man centers on a scientist and maverick surgeon who gives his son, Ichthyander (etymology: "fish" + "man"), a life-saving transplant – a set of shark gills. There is a film based on the novel. L. Sprague de Camp's 1938 short story "The Merman" hinges on an experimental process to make lungs function as gills, thus allowing a human being to "breathe" under water. Hal Clement's 1973 novel Ocean on Top portrays a small underwater civilization living in a 'bubble' of oxygenated fluid denser than seawater. Joe Haldeman's 1975 novel The Forever War describes liquid immersion and breathing in great detail as a key technology to allow space travel and combat with acceleration up to 50 G. In the Star Trek: The Next Generation novel The Children of Hamlin (1988), the crew of the Enterprise-D encounter an alien race whose ships contain a breathable liquid environment. Peter Benchley's 1994 novel White Shark centers on a Nazi scientist's experimental attempts to create an amphibious human, whose lungs are surgically modified to breathe underwater and who is trained to do so reflexively after they are flooded with a fluorocarbon solution. Judith and Garfield Reeves-Stevens' 1994 Star Trek novel Federation explains that before the invention of the inertial dampener, the stresses of high-G acceleration required starship pilots to be immersed in liquid-filled capsules, breathing an oxygen-rich saline solution to prevent their lungs from being crushed. Nicola Griffith's novel Slow River (1995) features a sex scene occurring within a twenty-cubic-foot silvery pink perfluorocarbon pool, with the sensation described as "like breathing a fist". Ben Bova's novel Jupiter (2000) features a craft in which the crew are suspended in a breathable liquid that allows them to survive in the high-pressure environment of Jupiter's atmosphere. In Scott Westerfeld's sci-fi novel The Risen Empire (2003), the lungs of soldiers performing insertion from orbit are filled with an oxygen-rich polymer gel with embedded pseudo-alveoli and a rudimentary artificial intelligence. The novel Mechanicum (2008) by Graham McNeill, Book 9 in the Horus Heresy book series, describes physically crippled pilots of gigantic war machines who are encased in nutrient fluid tanks. This allows them to continue operating beyond the limits normally imposed by the body. In Liu Cixin's novel The Dark Forest (2008), the warships of humanity in the 23rd century flood their compartments with an oxygen-rich liquid called 'deep-sea acceleration fluid' to protect the crew against the forces of extreme acceleration that the ships undergo. Ships enter a 'deep-sea state' where the crew are immersed in the fluid and sedated before acceleration can commence.
In the 2009 novel The Lost Symbol by Dan Brown, Robert Langdon (the protagonist) is completely submerged in breathable liquid mixed with hallucinogenic chemicals and sedatives as a torture and interrogation technique by Mal'akh (the antagonist). He goes through a near-death experience when he inhales the liquid and blacks out, losing control over his body, but is soon revived. In Greg van Eekhout's 2014 novel California Bones, two characters are put into tanks filled with liquid: "They were given no breathing apparatus, but the water in the tank was rich with perfluorocarbon, which carried more oxygen than blood." In author A.L. Mengel's science fiction novel The Wandering Star (2016), several characters breathe oxygenated fluid during a dive to explore an underwater city. They submerge in high-pressure "bubbles" filled with the perfluorocarbon fluid. In Tiamat's Wrath, a 2019 novel in The Expanse series by James S. A. Corey, the Laconian Empire uses a ship with full-immersion liquid-breathing pods that allow the crew to undergo significantly increased g-forces. As the powerful and fuel-efficient fusion engines in the series have made the survivability of the crew the only practical limitation on a ship's acceleration, this makes the ship the fastest in all of human-colonized space. Films and television The aliens in the Gerry Anderson UFO series (1970-1971) use liquid-breathing spacesuits. The 1989 film The Abyss by James Cameron features a character using liquid breathing to dive thousands of feet without decompressing. The Abyss also features a scene with a rat submerged in and breathing fluorocarbon liquid, filmed in real life. In the 1995 anime Neon Genesis Evangelion, the cockpits of the titular mecha are filled with a fictional oxygenated liquid called LCL, which is required for the pilot to mentally sync with an Evangelion, as well as providing direct oxygenation of their blood and dampening the impacts from battle. Once the cockpit is flooded, the LCL is ionized, bringing its density, opacity, and viscosity close to that of air. Protagonist Shinji Ikari notices that LCL smells like blood. It is eventually revealed that LCL is the blood of the Evangelions' progenitor, Lilith. In the movie Mission to Mars (2000), a character is depicted as being immersed in apparent breathable fluid before a high-acceleration launch. In season 1, episode 13 of Seven Days (1998-2001), chrononaut Frank Parker is seen breathing a hyper-oxygenated perfluorocarbon liquid that is pumped through a sealed full-body suit that he is wearing. This suit and liquid combination allow him to board a Russian submarine through open ocean at a depth of almost 1,000 feet. Upon boarding the submarine he removes his helmet, expels the liquid from his lungs, and is able to breathe air again. In an episode of the Adult Swim cartoon series Metalocalypse (2006-2013), the other members of the band submerge guitarist Toki in a "liquid oxygen isolation chamber" while recording an album in the Mariana Trench. In a Series 11 episode of Dalziel and Pascoe (1996-2007) entitled Demons on Our Shoulders, magician Lee Knight, played by Richard E. Grant, performs an underwater trick using breathable fluid. In an episode of the Syfy Channel show Eureka (2006-2012), Sheriff Jack Carter is submerged in a tank of "oxygen rich plasma" to be cured of the effects of a scientific accident. In the anime series Aldnoah.Zero (2014-2015), episode 5 shows that Slaine Troyard was in a liquid-filled capsule when he crashed.
Princess Asseylum witnessed the crash, helped him out of the capsule, and then used CPR on him to draw the liquid out of his lungs. In the 2024 anime Bang Brave Bang Bravern, the titular mecha Bravern fills its cockpit with liquid during underwater combat, telling pilot Ao Isami that it will supply oxygen directly to him while also counteracting the pressure. Bravern directly compares this to the scene from The Abyss, prompting Ao to ask how Bravern knows about the film. Video games In the classic 1995 PC turn-based strategy game X-COM: Terror from the Deep, "Aquanauts" fighting in deep ocean conditions breathe a dense oxygen-carrying fluid. In the EVE Online universe (2003), pilots in capsules (escape pods that function as the control center for the spacecraft) breathe an oxygen-rich, nano-saturated, breathable glucose-based suspension solution. In the game Helldivers 2 (2024), after an upgrade, jet pilots use breathable perfluorocarbons in the cockpit to absorb G-forces and allow more dangerous maneuvers. See also Artificial gills (human) Breathing gas Liquid ventilator Mechanical ventilation References External links Here, Breathe This Liquid, from Discover Magazine Miracle Girl, from Reader's Digest Liquids Underwater diving equipment Respiration University at Buffalo Medical procedures Modes of mechanical ventilation Respiratory system procedures
Liquid breathing
Physics,Chemistry
4,556
13,442,871
https://en.wikipedia.org/wiki/XML/EDIFACT
XML/EDIFACT is an Electronic Data Interchange (EDI) format used in business-to-business transactions. It allows EDIFACT message types to be used by XML systems. EDIFACT is a formal machine-readable description of electronic business documents. It uses a syntax close to that of delimiter-separated files. This syntax was invented in the 1980s to keep files as small as possible. Because of the Internet boom around 2000, XML started to become the most widely supported file syntax. Regardless of syntax, however, an invoice is still an invoice, containing information about the buyer, the seller, the product, and the amount due. EDIFACT works well from the content viewpoint, but many software systems struggle to handle its syntax. So combining EDIFACT vocabulary and grammar with XML syntax makes XML/EDIFACT. The rules for XML/EDIFACT are defined by ISO TS 20625. Use-cases XML/EDIFACT is used in B2B scenarios as listed below: Newer EAI or B2B systems often cannot handle EDI (Electronic Data Interchange) syntax directly. Simple syntax converters perform a 1:1 conversion beforehand; their input is an EDIFACT transaction file, their output an XML/EDIFACT instance file. XML/EDIFACT keeps XML B2B transactions relatively small. XML element names derived from EDIFACT tags are much shorter and more formal than those derived from natural language, since they are simply expressions of the EDIFACT syntax. A company may not want to invest in new vocabularies from scratch. XML/EDIFACT reuses business content defined in UN/EDIFACT. Since 1987, the UN/EDIFACT library has been enriched by global business needs for all sectors of industry, transport and public services. Large companies can order goods from small companies via XML/EDIFACT. The small companies use XSLT stylesheets to browse the message content in human-readable form, as shown in Example 3. Example 1: EDIFACT source code A name and address (NAD) segment, containing customer ID and customer address, expressed in EDIFACT syntax: NAD+BY+CST9955::91++Candy Inc+Sirup street 15+Sugar Town++55555' Example 2: XML/EDIFACT source code The same information content in an XML/EDIFACT instance file: <S_NAD> <D_3035>BY</D_3035> <C_C082><D_3039>CST9955</D_3039><D_3055>91</D_3055></C_C082> <C_C080><D_3036>Candy Inc</D_3036></C_C080> <C_C059><D_3042>Sirup street 15</D_3042></C_C059> <D_3164>Sugar Town</D_3164> <D_3251>55555</D_3251> </S_NAD> Example 3: XML/EDIFACT in a browser The same XML/EDIFACT instance presented with the help of an XSLT stylesheet, rendered as a human-readable page. External links UN/EDIFACT Main Page ISO/TS 20625:2002 - This document by the ISO costs CHF 158.00 to access. www.edifabric.com - A .NET framework for converting EDIFACT and X12 messages into XML and vice versa. Edifact-XML - A free, complete Java parser library for converting UN EDIFACT messages to XML. Edifact<->XML Converter plus Edifact xsd generator - UN/EDIFACT<->ISO/TS 20625 XML. XML-based standards Electronic data interchange
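As a rough illustration of the 1:1 mapping between Example 1 and Example 2, a minimal Python sketch is given below. It is hypothetical and not part of ISO TS 20625 or of any converter product; the element names are simply those shown in Example 2, and a real converter would derive them from the UN/EDIFACT directories and the ISO TS 20625 mapping rules.

# Minimal sketch: map the sample EDIFACT NAD segment from Example 1 to the
# XML/EDIFACT elements shown in Example 2. A real converter would apply the
# full ISO TS 20625 mapping rules rather than this hand-written unpacking.
segment = "NAD+BY+CST9955::91++Candy Inc+Sirup street 15+Sugar Town++55555'"
tag, *elements = segment.rstrip("'").split("+")   # "'" ends the segment, "+" separates data elements
party, c082, _, name, street, city, _, postcode = elements
code, _, agency = c082.split(":")                 # ":" separates components of composite C082
xml = (
    f"<S_{tag}>"
    f"<D_3035>{party}</D_3035>"
    f"<C_C082><D_3039>{code}</D_3039><D_3055>{agency}</D_3055></C_C082>"
    f"<C_C080><D_3036>{name}</D_3036></C_C080>"
    f"<C_C059><D_3042>{street}</D_3042></C_C059>"
    f"<D_3164>{city}</D_3164>"
    f"<D_3251>{postcode}</D_3251>"
    f"</S_{tag}>"
)
print(xml)  # reproduces the S_NAD instance of Example 2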
XML/EDIFACT
Technology
808
3,930,356
https://en.wikipedia.org/wiki/C7%20protein
C7 protein is an engineered zinc finger protein based on the murine ZFP Zif268, discovered by Wu et al. in 1994 (published in 1995). It shares zinc fingers 2 and 3 with Zif268, but differs in the sequence of finger 1. It also shares the same DNA target, 5'-GCGTGGGCG-3'. The shared sequences of fingers 2 and 3, in the single-letter amino acid code, are RSD-H-LTT and RAD-E-RKR (positions -1 through 6 in the alpha helix). Zinc finger 1 has the sequence KSA-D-LKR, which gives the entire ZFP a 13-fold increase in affinity for the target sequence over that of Zif268. It is used in zinc finger investigations in which the amino acid sequence of finger 2 is changed in order to determine the appropriate sequence to target a given three-nucleotide target site. A variant of C7, C7.GAT, is preferred since it lacks the aspartic acid residue present in finger 3 of C7 and known to cause a phenomenon called 'target site overlap'. In this case the target site overlap is a result of the aspartic acid residue forming a hydrogen bond with the N4 of the cytosine (in the opposite strand) base-paired to the guanine in the finger 2 subsite. It can also form the same hydrogen bond with an adenine base-paired to a thymine. This target site overlap would dictate that either a cytosine or adenine residue be present as the 3' nucleotide in the finger 2 subsite, which is unacceptable when looking to target sequences containing another nucleotide at this position. References Engineered proteins Genetics experiments Molecular genetics
C7 protein
Chemistry,Biology
374
18,327,309
https://en.wikipedia.org/wiki/Acecarbromal
Acecarbromal (INN) (brand names Sedamyl, Abasin, Carbased, Paxarel, Sedacetyl, numerous others), also known as acetylcarbromal and acetyladalin, is a hypnotic and sedative drug of the ureide (acylurea) group discovered by Bayer in 1917 that was formerly marketed in the United States and Europe. It is also used in combination with extract of quebracho and vitamin E as a treatment for erectile dysfunction under the brand name Afrodor in Europe. Acecarbromal is structurally related to the barbiturates, which are basically cyclized ureas. Prolonged use is not recommended as it can cause bromine poisoning. See also Bromisoval Carbromal References Erectile dysfunction drugs GABAA receptor positive allosteric modulators Hypnotics Organobromides Sedatives Ureas Drugs developed by Bayer
Acecarbromal
Chemistry,Biology
199
2,902,701
https://en.wikipedia.org/wiki/35%20Arietis
35 Arietis (abbreviated 35 Ari) is a binary star in the northern constellation of Aries. 35 Arietis is the Flamsteed designation. Based upon an annual parallax shift of 9.51 mas, it lies roughly 105 parsecs (about 340 light-years) from the Earth. This star is visible to the naked eye with an apparent visual magnitude of 4.64. This is a single-lined spectroscopic binary system, with the presence of a companion being demonstrated by shifts in the spectrum of the primary component. The pair orbit each other with a period of 490.0 days and an eccentricity of 0.14. The primary is a B-type main-sequence star with a stellar classification of B3 V. With a mass around 5.7 times that of the Sun, it is radiating 870 times the Sun's luminosity. This energy is being emitted from the outer atmosphere at an effective temperature of 17,520 K, causing it to shine with the blue-white hue of a B-type star. This star was formerly located in the obsolete constellation Musca Borealis. References External links Image 35 Arietis Aries (constellation) B-type main-sequence stars Arietis, 35 Spectroscopic binaries Durchmusterung objects 0801 012719 016908
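The distance figure above follows directly from the quoted parallax via the standard relation d [parsecs] = 1000 / p [milliarcseconds]; a minimal sketch of the arithmetic, using only the parallax value given in the article, is:

# Distance from trigonometric parallax: d (parsecs) = 1000 / parallax (milliarcseconds).
parallax_mas = 9.51                    # annual parallax shift of 35 Arietis
distance_pc = 1000.0 / parallax_mas    # about 105 parsecs
distance_ly = distance_pc * 3.2616     # 1 parsec is about 3.2616 light-years
print(f"{distance_pc:.0f} pc ~ {distance_ly:.0f} light-years")   # ~105 pc, ~343 light-years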
35 Arietis
Astronomy
268
4,027,892
https://en.wikipedia.org/wiki/Assortative%20mixing
In the study of complex networks, assortative mixing, or assortativity, is a bias in favor of connections between network nodes with similar characteristics. In the specific case of social networks, assortative mixing is also known as homophily. The rarer disassortative mixing is a bias in favor of connections between dissimilar nodes. In social networks, for example, individuals commonly choose to associate with others of similar age, nationality, location, race, income, educational level, religion, or language as themselves. In networks of sexual contact, the same biases are observed, but mixing is also disassortative by gender – most partnerships are between individuals of opposite sex. Assortative mixing can have effects, for example, on the spread of disease: if individuals have contact primarily with other members of the same population groups, then diseases will spread primarily within those groups. Many diseases are indeed known to have differing prevalence in different population groups, although other social and behavioral factors affect disease prevalence as well, including variations in quality of health care and differing social norms. Assortative mixing is also observed in other (non-social) types of networks, including biochemical networks in the cell, computer and information networks, and others. Of particular interest is the phenomenon of assortative mixing by degree, meaning the tendency of nodes with high degree to connect to others with high degree, and similarly for low degree. Because degree is itself a topological property of networks, this type of assortative mixing gives rise to more complex structural effects than other types. Empirically it has been observed that most social networks mix assortatively by degree, but most networks of other types mix disassortatively, although there are exceptions. See also Assortative mating Assortativity Complex network Friendship paradox Graph theory Heterophily Homophily Preferential attachment References Network theory Social science methodology Social networks Epidemiology Organization
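The degree assortativity discussed above is commonly summarized by a single coefficient: the Pearson correlation of the degrees found at either end of each edge, positive for assortative and negative for disassortative mixing by degree. A minimal sketch using the NetworkX library follows; the small graph is an arbitrary illustrative example, not data from any real network.

import networkx as nx

# A small illustrative graph: a hub joined to several low-degree nodes,
# plus a separate triangle of nodes with similar degrees.
G = nx.Graph()
G.add_edges_from([(0, 1), (0, 2), (0, 3), (0, 4),   # star: high-degree hub, low-degree leaves
                  (5, 6), (6, 7), (5, 7)])           # triangle: nodes of equal degree

# Pearson correlation of degrees across the two ends of each edge.
r = nx.degree_assortativity_coefficient(G)
print(f"degree assortativity r = {r:.3f}")   # negative here: the star edges make mixing disassortative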
Assortative mixing
Mathematics,Environmental_science
396
44,142,683
https://en.wikipedia.org/wiki/Thaxterogaster%20pluvius
Thaxterogaster pluvius is a species of fungus in the family Cortinariaceae. Taxonomy It was described in 1821 by the Swedish mycologist Elias Magnus Fries who classified it as Agaricus pluvius. In 1838 Fries reclassified it as Cortinarius pluvius. In 2022 the species was transferred from Cortinarius and reclassified as Thaxterogaster pluvius based on genomic data. Habitat and distribution It is found in Europe and North America. See also List of Cortinarius species References External links pluvius Fungi of Europe Fungi of North America Fungi described in 1821 Taxa named by Elias Magnus Fries Fungus species
Thaxterogaster pluvius
Biology
140
6,948,785
https://en.wikipedia.org/wiki/Nuclear%20protein
A nuclear protein is a protein found in the cell nucleus. Proteins are transported into the nucleus with the help of the nuclear pore complex, which spans the nuclear membrane and acts as a selective barrier between the cytoplasm and the nucleus. The import and export of proteins through the nuclear pore complex play a fundamental role in gene regulation and other biological functions. References External links http://npd.hgu.mrc.ac.uk/user/about Cell nucleus
Nuclear protein
Chemistry
92
46,553
https://en.wikipedia.org/wiki/Escherichia%20coli%20O157%3AH7
Escherichia coli O157:H7 is a serotype of the bacterial species Escherichia coli and is one of the Shiga-like toxin–producing types of E. coli. It is a cause of disease, typically foodborne illness, through consumption of contaminated and raw food, including raw milk and undercooked ground beef. Infection with this type of pathogenic bacteria may lead to hemorrhagic diarrhea, and to kidney failure; these have been reported to cause the deaths of children younger than five years of age, of elderly patients, and of patients whose immune systems are otherwise compromised. Transmission is via the fecal–oral route, and most illness has been through distribution of contaminated raw leafy green vegetables, undercooked meat and raw milk. Signs and symptoms E. coli O157:H7 infection often causes severe, acute hemorrhagic diarrhea (although nonhemorrhagic diarrhea is also possible) and abdominal cramps. Usually little or no fever is present, and the illness resolves in 5 to 10 days. It can also sometimes be asymptomatic. In some people, particularly children under five years of age, persons whose immune systems are otherwise compromised, and the elderly, the infection can cause hemolytic–uremic syndrome (HUS), in which the red blood cells are destroyed and the kidneys fail. About 2–7% of infections lead to this complication. In the United States, HUS is the principal cause of acute kidney failure in children, and most cases of HUS are caused by E. coli O157:H7. Bacteriology Like other strains of E. coli, O157:H7 is gram-negative and oxidase-negative. Unlike many other strains, it does not ferment sorbitol, which provides a basis for clinical laboratory differentiation of the strain. Strains of E. coli that express Shiga and Shiga-like toxins gained that ability via infection with a prophage containing the structural gene coding for the toxin, and non-producing strains may become infected and produce Shiga-like toxins after incubation with Shiga toxin–positive strains. The prophage responsible seems to have infected the strain's ancestors fairly recently, as viral particles have been observed to replicate in the host if it is stressed in some way (e.g., by antibiotics). All clinical isolates of E. coli O157:H7 possess the plasmid pO157. The periplasmic catalase is encoded on pO157 and may enhance the virulence of the bacterium by providing additional oxidative protection when infecting the host. E. coli O157:H7 non-hemorrhagic strains are converted to hemorrhagic strains by lysogenic conversion after bacteriophage infection of non-hemorrhagic cells. Natural habitat While it is relatively uncommon, the E. coli serotype O157:H7 can naturally be found in the intestinal contents of some cattle, goats, and even sheep. The digestive tract of cattle lacks the Shiga toxin receptor globotriaosylceramide, and thus cattle can be asymptomatic carriers of the bacterium. The prevalence of E. coli O157:H7 in North American feedlot cattle herds ranges from 0 to 60%. Some cattle may also be so-called "super-shedders" of the bacterium. Super-shedders may be defined as cattle exhibiting rectoanal junction colonization and excreting more than 10³ to 10⁴ CFU per gram of feces. Super-shedders have been found to constitute a small proportion of the cattle in a feedlot (<10%), but they may account for >90% of all E. coli O157:H7 excreted. Transmission Infection with E. coli O157:H7 can come from ingestion of contaminated food or water, or oral contact with contaminated surfaces.
Examples include undercooked ground beef, but also leafy vegetables and raw milk. Fields often become contaminated with the bacterium through irrigation processes or contaminated water naturally entering the soil. It is highly virulent, with a low infectious dose: an inoculation of fewer than 10 to 100 colony-forming units (CFU) of E. coli O157:H7 is sufficient to cause infection, compared to over a million CFU for other pathogenic E. coli strains. Diagnosis A stool culture can detect the bacterium. The sample is cultured on sorbitol-MacConkey (SMAC) agar, or the variant cefixime potassium tellurite sorbitol-MacConkey agar (CT-SMAC). On SMAC agar, O157:H7 colonies appear clear due to their inability to ferment sorbitol, while the colonies of the usual sorbitol-fermenting serotypes of E. coli appear red. Sorbitol non-fermenting colonies are tested for the somatic O157 antigen before being confirmed as E. coli O157:H7. As with all cultures, diagnosis with this method is time-consuming; swifter diagnosis is possible using a quick E. coli DNA extraction method combined with polymerase chain reaction techniques. Newer technologies using fluorescent and antibody detection are also under development. Prevention Avoiding the consumption of, or contact with, unpasteurized dairy products, undercooked beef, uncleaned vegetables, and non-disinfected water reduces the risk of an E. coli infection. Proper hand washing with water that has been treated with adequate levels of chlorine or other effective disinfectants after using the lavatory or changing a diaper, especially among children or those with diarrhea, reduces the risk of transmission. E. coli O157:H7 infection is a nationally reportable disease in the US, Great Britain, and Germany. It is also reportable in most states of Australia, including Queensland. Treatment While fluid replacement and blood pressure support may be necessary to prevent death from dehydration, most patients recover without treatment in 5–10 days. There is no evidence that antibiotics improve the course of disease, and treatment with antibiotics may precipitate hemolytic–uremic syndrome (HUS). The antibiotics are thought to trigger prophage induction, and the prophages released by the dying bacteria infect other susceptible bacteria, converting them into toxin-producing forms. Antidiarrheal agents, such as loperamide (Imodium), should also be avoided as they may prolong the duration of the infection. Certain novel treatment strategies, such as the use of anti-induction strategies to prevent toxin production and the use of anti-Shiga toxin antibodies, have also been proposed. History Molecular analyses estimate that the common ancestor of Escherichia coli O157:H7 originated in the Netherlands around 1890. International spread is thought to have occurred through animal movements, such as trade in Holstein Friesian cattle. E. coli O157:H7 is thought to have moved from Europe to Australia around 1937, to the United States in 1941, to Canada in 1960, and from Australia to New Zealand in 1966. The first recorded observation of human E. coli O157:H7 infection was in 1975, in association with a sporadic case of hemorrhagic colitis, but it was not identified as pathogenic then. It was first recognized as a human pathogen following a 1982 hemorrhagic colitis outbreak in Oregon and Michigan, in which at least 47 people were sickened by eating beef hamburger patties from a fast food chain that were found to be contaminated with it.
The United States Department of Agriculture banned the sale of ground beef contaminated with the O157:H7 strain in 1994. Culture and society The pathogen results in an estimated 2,100 hospitalizations annually in the United States. The illness is often misdiagnosed; therefore, expensive and invasive diagnostic procedures may be performed. Patients who develop HUS often require prolonged hospitalization, dialysis, and long-term follow-up. See also 1993 Jack in the Box E. coli outbreak 1996 Odwalla E. coli outbreak 2011 Germany E. coli O104:H4 outbreak 2024 McDonald's E. coli outbreak Escherichia coli O104:H4 Escherichia coli O121 Food-induced purpura List of foodborne illness outbreaks Walkerton E. coli outbreak References External links Haemolytic Uraemic Syndrome Help (HUSH) – a UK-based charity E. coli: Protecting yourself and your family from a sometimes deadly bacterium Escherichia coli O157:H7 genomes and related information at PATRIC, a Bioinformatics Resource Center funded by NIAID For more information about reducing your risk of foodborne illness, visit the US Department of Agriculture's Food Safety and Inspection Service website or The Partnership for Food Safety Education | Fight BAC! briandeer.com, report from The Sunday Times on a UK outbreak, May 17, 1998 CBS5 report on September 2006 outbreak Escherichia coli Bovine diseases Zoonoses Foodborne illnesses Infraspecific bacteria taxa Pathogenic bacteria
Escherichia coli O157:H7
Biology
1,959
72,221,650
https://en.wikipedia.org/wiki/Hantao%20Ji
Hantao Ji is a professor of astrophysical sciences at Princeton University. He received the John Dawson Award in 2002 for his work on magnetic reconnection. He is also a fellow of the American Physical Society. References Plasma physicists Year of birth missing (living people) Living people
Hantao Ji
Physics
58
61,533,334
https://en.wikipedia.org/wiki/C26H42N7O17P3S
The molecular formula C26H42N7O17P3S (molar mass: 849.64 g/mol, exact mass: 849.1571 u) may refer to: Methylcrotonyl-CoA Tiglyl-CoA
C26H42N7O17P3S
Chemistry
74
65,838,044
https://en.wikipedia.org/wiki/ML-SA1
ML-SA1 is a chemical compound which acts as an "agonist" (i.e. channel opener) of the TRPML family of calcium channels. It has mainly been studied for its role in activating TRPML1 channels, although it also shows activity at the less studied TRPML2 and TRPML3 subtypes. TRPML1 is important for the function of lysosomes, and ML-SA1 has been used to study several disorders resulting from impaired lysosome function, including mucolipidosis type IV and Niemann–Pick disease type C, as well as other conditions such as stroke and Alzheimer's disease. References Phthalimides Amides
ML-SA1
Chemistry
151
4,189,719
https://en.wikipedia.org/wiki/Ethnopsychopharmacology
A growing body of research has begun to highlight differences in the way racial and ethnic groups respond to psychiatric medication. Understanding the relationship between mental health and culture is key to understanding how psychiatric medication affects people of different ethnic and cultural groups. Mental health is shaped both by brain function and by environmental factors, which can themselves have physiological effects. It has been noted that there are "dramatic cross-ethnic and cross-national variations in the dosing practices and side-effect profiles in response to practically all classes of psychotropics." Epidemiology A brief understanding of epidemiology is useful here, since it is closely connected with ethnopsychopharmacology. Studying how culture affects the way disease spreads is important in order to fully understand the racial disparities that shape how Western medication is used and perceived. Differences in drug metabolism Drug metabolism is controlled by a number of specific enzymes, and the action of these enzymes varies among individuals. For example, most individuals show normal activity of the IID6 isoenzyme that is responsible for the metabolism of many tricyclic antidepressant medications and most antipsychotic drugs. However, studies have found that one-third of Asian Americans and African Americans have a genetic alteration that decreases the metabolic rate of the IID6 isoenzyme, leading to a greater risk of side effects and toxicity. The CYP2D6 enzyme, important for the way in which the liver clears many drugs from the body, varies greatly between individuals in ways that can be ethnically specific. Though enzyme activity is genetically influenced, it can also be altered by cultural and environmental factors such as diet, the use of other medications, alcohol, and disease states. Differences in pharmacodynamics If two individuals have the same blood level of a medication, there may still be differences in the way that the body responds due to pharmacodynamic differences; pharmacodynamic responses may also be influenced by racial and cultural factors. Cultural factors In addition to biology and environment, culturally determined attitudes toward illness and its treatment may affect how an individual responds to psychiatric medication. Some cultures see suffering and illness as unavoidable and not amenable to medication, while others treat symptoms with polypharmacy, often mixing medications with herbal drugs. Cultural differences may have an effect on adherence to medication regimens as well as influence the placebo effect. Further, the way an individual expresses and reacts to the symptoms of psychiatric illness, and the cultural expectations of the physician, may affect the diagnosis a patient receives. For example, bipolar disorder is often misdiagnosed as schizophrenia in people of color. Recommendations for research and practice The differential response of many ethnic minorities to certain psychiatric medications raises important concerns for both research and practice. Include Ethnic Groups. Most studies of psychiatric medications have used white male subjects.
Because there is often a greater difference within racial and ethnic groups than between them, researchers must be certain they choose prototypical representatives of these groups, or use a larger random sample. Further, broad racial and ethnic groups contain many different subgroups; in North American research, for example, it may not be enough to characterize individuals as Asian, Hispanic, Native American, or African American. Even within the same ethnic group, there are no reliable measures to determine important cultural differences. "Start Low and Go Slow." Individuals who receive a higher dose of psychiatric medication than needed may discontinue treatment because of side effects, or they may develop toxic levels that lead to serious complications. A reasonable approach to prescribing medication to any psychiatric patient, regardless of race or culture, is to "start low and go slow". Someday there may be a simple blood test to predict how an individual will respond to a specific class of drugs; research in these fields falls in the domains of pharmacogenomics and pharmacometabolomics. See also Pharmacognosy Race and health References External links Culture and Ethnicity, National Mental Health Information Center Pharmacokinetics Ethnobiology Psychopharmacology Race and health
Ethnopsychopharmacology
Chemistry,Biology,Environmental_science
865
14,904,004
https://en.wikipedia.org/wiki/Pavement%20management
Pavement management is the process of planning the maintenance and repair of a network of roadways or other paved facilities in order to optimize pavement conditions over the entire network. It is also applied to airport runways and ocean freight terminals. In effect, every highway superintendent does pavement management. Pavement management incorporates life-cycle costs into a more systematic approach to minor and major road maintenance and reconstruction projects. The needs of the entire network, as well as budget projections, are considered before projects are executed, as the cost of data collection can vary significantly. Pavement management encompasses the many aspects and tasks needed to maintain a quality pavement inventory, and to ensure that the overall condition of the road network can be sustained at desired levels. While pavement management covers the entire lifecycle of pavement from planning to maintenance in any transport infrastructure, road asset management and road maintenance planning target road infrastructure more specifically. In the United States, the introduction of the Governmental Accounting Standards Board’s (GASB’s) Statement 34 is having a dramatic impact on the financial reporting requirements of state and local governments. Introduced in June 1999, this provision recommends that governmental agencies report the value of their infrastructure assets in their financial statements. GASB recommends that government agencies use a historical cost approach for capitalizing long-lived capital assets; however, if historical information is not available, guidance is provided for an alternate approach based on the current replacement cost of the assets. A method of representing the costs associated with the use of the assets must also be selected, and two methods are allowed by GASB. One approach is to depreciate the assets over time. The modified approach, on the other hand, provides an agency with more flexibility in reporting the value of its assets, based upon the use of a systematic, defensible approach that accounts for the preservation of the asset. Pavement management and pavement management systems provide agencies with the tools necessary to evaluate their pavement assets and meet the GASB34 requirements under the modified depreciation approach. Pavement management systems A pavement management system (PMS) is a planning tool used to aid pavement management decisions. PMS software programs model future pavement deterioration due to traffic and weather, and recommend maintenance and repairs to the road's pavement based on the type and age of the pavement and various measures of existing pavement quality. Measurements can be made by persons on the ground, visually from a moving vehicle, or using automated sensors mounted to a vehicle. PMS software often helps the user create composite pavement quality rankings based on pavement quality measures on roads or road sections. Recommendations are usually biased towards preventive maintenance, rather than allowing a road to deteriorate until it needs more extensive reconstruction. Typical tasks performed by pavement management systems include: Inventory pavement conditions, identifying good, fair and poor pavements. Assign importance ratings for road segments, based on traffic volumes, road functional class, and community demand. Schedule maintenance of good roads to keep them in good condition. Schedule repairs of poor and fair pavements as remaining available funding allows.
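A minimal sketch of the kind of ranking such a system might perform, combining a condition index with an importance rating to order candidate segments, is given below. The field names, weights, and the 70-point threshold are illustrative assumptions only, not part of any particular PMS product.

# Illustrative ranking of road segments for maintenance scheduling.
# Each segment has a condition index (0-100, higher is better) and an
# importance rating (e.g. reflecting traffic volume and functional class).
segments = [
    {"id": "A-01", "condition": 82, "importance": 0.9},   # good, heavily trafficked arterial
    {"id": "B-07", "condition": 55, "importance": 0.6},   # fair collector
    {"id": "C-12", "condition": 30, "importance": 0.4},   # poor local road
]

def priority(seg):
    # Worse condition and higher importance both raise the priority score.
    return (100 - seg["condition"]) * seg["importance"]

for seg in sorted(segments, key=priority, reverse=True):
    action = "preventive maintenance" if seg["condition"] >= 70 else "repair/rehabilitation"
    print(seg["id"], round(priority(seg), 1), action)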
Research has shown that it is far less expensive to keep a road in good condition than it is to repair it once it has deteriorated. This is why pavement management systems place the priority on preventive maintenance of roads in good condition, rather than reconstructing roads in poor condition. In terms of lifetime cost and long-term pavement conditions, this will result in better system performance. Agencies that concentrate on restoring their bad roads often find that by the time they've repaired them all, the roads that were in good condition have deteriorated. The State of California was among the first to adopt a pavement management system (PMS), in 1979. Like others of its era, the first PMS was based on a mainframe computer and contained provisions for an extensive database. It can be used to determine long-term maintenance funding requirements and to examine the consequences for network condition if insufficient funding is available. Management approach The pavement management process has been incorporated into several pavement management systems, including SirWay. The following management approach evolved over the last 30 years as part of the development of the PAVER management system (U.S. Army COE, Construction Engineering Research Laboratory, Micro PAVER 2004). The approach is a process that consists of the following steps: Inventory Definition, Pavement Inspection, Condition Assessment, Condition Prediction, Condition Analysis, and Work Planning. Inventory Definition Typically, pavement management requires a road inventory to be created and tied to an Asset Location Referencing System (ALRS). The road inventory includes road location, using both coordinate and linear referencing systems, as well as road width, road length and pavement type. Condition Assessment Pavement condition can be divided into structural and functional condition, with various condition variables. Functional condition can be divided into roughness, texture and skid resistance, while structural condition includes mechanical properties and pavement distresses. To measure such indices, costly laser-based tools are used extensively, while the development of cost-effective tools such as RGB-D sensors significantly reduces the cost of data collection. Condition Prediction Pavement condition prediction is often referred to as pavement deterioration modeling, which can be based on mechanical or empirical models. Hybrid parameterized models are also popular. More recently, methods based on Markov models and machine learning have been proposed that outperform their predecessors. Pavement deterioration is caused by traffic and weather conditions; material and construction choices also affect the deterioration process. It has been shown that empirical models outperform the mechanical and hybrid models in condition prediction. Work Planning Work planning is essentially road maintenance planning in which the maintenance works are assigned both spatially and temporally according to the desired criteria, such as minimal cost to society. References Road construction
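The Markov-chain approach to condition prediction mentioned above can be illustrated with a minimal sketch. The three states and the transition probabilities are invented for illustration; in practice the matrix would be estimated from historical inspection records for a given pavement family, traffic level, and climate.

# Markov-chain pavement deterioration sketch with three condition states.
# transition[a][b] is the assumed one-year probability of moving from state a to state b.
states = ["good", "fair", "poor"]
transition = {
    "good": {"good": 0.85, "fair": 0.13, "poor": 0.02},
    "fair": {"good": 0.00, "fair": 0.80, "poor": 0.20},
    "poor": {"good": 0.00, "fair": 0.00, "poor": 1.00},   # without repair, poor stays poor
}

# Start with the whole network in good condition and project ten years ahead.
share = {"good": 1.0, "fair": 0.0, "poor": 0.0}
for year in range(10):
    share = {s: sum(share[prev] * transition[prev][s] for prev in states) for s in states}
print({s: round(p, 3) for s, p in share.items()})   # predicted share of the network in each state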
Pavement management
Engineering
1,112
14,476,666
https://en.wikipedia.org/wiki/Lunar%20Design
Founded in 1984 by Jeff Smith, Gerard Furbershaw and Robert Brunner, LUNAR (Lunar Design) is a product design and development consultancy headquartered in the San Francisco Bay Area. The company provides industrial, interaction and communication design; video storytelling; mechanical and electrical engineering; manufacturing support; user validation; design research; and need finding and assessment. Its current and past clients include Apple Inc., Abbott Labs, Cisco Systems, Hewlett-Packard, Johnson & Johnson, Microsoft, Motorola, Philips, Oral-B, Palm, Pepsi and Sony. On May 14, 2015, Lunar was acquired by the management consulting firm McKinsey & Company. Lunar Design partnered with Nova Cruz to design and develop the Xootr Scooter. Affiliates LUNAR has offices in California, Chicago, and Europe (Munich, Germany). LUNAR Europe GmbH http://lunar-europe.com was founded in January 2007 and is headed by Roman Gebhard and Matthis Hamann. The Chicago office was started in 2011 and is led by Mark Dziersk. Achievement and awards LUNAR was one of the top five award-winning industrial design firms for over 10 years, according to BusinessWeek magazine. The firm has been recognized with accolades from the Industrial Designers Society of America IDEA Awards, Fast Company "Innovation by Design" Awards, Core 77 Design Awards, CES Innovations Design and Engineering Awards, iF Hannover Product Design Awards, and the ID Magazine Design Annual award, among others. Two of LUNAR's product designs, for Oral-B and Philips, were featured in the “Prototype to Product” exhibit in the United Airlines terminal at the San Francisco International Airport in 2007. References External links Official Website Design companies established in 1984 1984 establishments in California 2015 mergers and acquisitions Industrial design firms Companies based in San Francisco Design companies of the United States
Lunar Design
Engineering
369
51,382,351
https://en.wikipedia.org/wiki/John%20Scott%20Lillie
Sir John Scott Lillie (1790 – 29 June 1868) was an Anglo-Irish decorated officer of the British Army and Portuguese Army who fought in the Peninsular War (1808–1814). He was a landowner, entrepreneur and inventor. He was Deputy Lieutenant of the County of Middlesex and Chairman of the Middlesex Quarter Sessions, a freemason, a radical politician, and a supporter of the great Irish statesman Daniel O'Connell. He was an early antivivisectionist and writer. Background John Lillie was the eldest son of Philip Lillie Esq., of Drumdoe Castle, County Roscommon, and his wife Alicia, née Stafford. One source gives his birth date as 1789. He was heir to properties in Roscommon, Dublin and Bath. The family were said to be related to the Duke of Portland through his mother, Henrietta, daughter of General John Scott of Fife, but the connection has yet to be established. Lillie's sister, Alicia, married, as his second wife, Hugh Mill Bunbury of Guyana; their daughter became a noted Carmelite nun, while their youngest son, Charles Thomas, fought in the Crimean War and was promoted colonel. Having completed his education, Lillie sought his fortune in the British army. On his return to civilian life, having been thrice wounded in the Peninsular War, Lillie married Louisa Sutherland (b. 1791), daughter of Capt. Andrew Sutherland RN, Commissioner of Gibraltar, and his wife Louisa Colebrooke, on 22 January 1820 at St George's, Hanover Square, Middlesex. The Lillies had a daughter and three sons, the youngest of whom, George Arthur Howard, became a Buddhist while out in India as an officer. Military career Lillie joined the 6th Warwickshire Regiment of Foot in 1807 as an ensign. The following year he embarked under Sir Brent Spencer in the first expeditionary force to Portugal, for service in Barbary, Cadiz and the Tagus. He joined the Lusitanian Legion of the Portuguese Army in the rank of captain, when still only 19 years old, under the watchful eye of Lieutenant-General William Carr Beresford, who raised the famous élite corps of light infantry, the Caçadores. Having taken part in engagements in defence of Portugal, Lillie was promoted lieutenant in 1810. He fought at the Battle of Bussaco and took part in the retreat to the Torres Vedras Lines. In 1812, at the Battle of Salamanca, he was reputed to have personally seized the Colours of the 116th French Line Regiment during the struggle for Arapiles. As commander of the 7th Caçadores, he led his troops into the Battle of the Pyrenees, the Battle of Nivelle (1813), the Battle of Orthez (1814) and, finally, the Battle of Toulouse (1814), where he was gravely wounded (for the third time) and left for dead for 48 hours on the battlefield. He was awarded the Decoration of the Lily by the French. By the British he was awarded the Army Gold Cross and, later, the Military General Service Medal. In Wellington's army his progression was somewhat slow; he left the Peninsular War in the rank of major, and in 1816 he was knighted by patent. In the 1831 Coronation Honours, he was made a Companion of the Bath. Also that year, Pedro of Portugal promoted him to the rank of major general in the Portuguese Army. By the time he took formal retirement from the British Army, in 1855, he had attained the rank of lieutenant colonel with the commensurate pension. He left a record of his experiences in the war.
The entrepreneur and inventor Still employed by the Army, but too disabled to serve and so on half pay, and recently married, he was received into the Prince of Wales (Freemasonry) Lodge (1923). On 'sick leave', Sir John sought to apply his energy and experience in London society. In 1822, he had bought 'the Hermitage', a grand villa once owned by the dramatist Samuel Foote, with a fourteen-acre estate at North End, in the Parish of Fulham in the County of Middlesex. At that time, his new neighbours, the Edwardes and the Gunters, were engaged in catching up with the canal boom and the burgeoning transportation revolution. Lillie, who was already a shareholder in the Hammersmith Bridge project, next became a major investor and actor in the Kensington Canal company, a scheme eventually bitterly opposed by Lord Holland. His strategy was to link the new Hammersmith river crossing with the village of Brompton, further downriver and closer to London. For that purpose, he donated some of his land in 1825 for a new stretch of public highway, joining Crown Lane and North End Lane to Counter's Creek (which was then being developed into the Kensington Canal), and the new canal bridge, built by Gunter. To service the canal and wharf construction on land that is today's Langtry Place, Rickett Street and Roxby Place, he laid out two further stretches of unmade road either side of the new highway, initially named the Richmond Road. The unmade roads were eventually called Richmond Gardens – later Empress Place – and Seagrave Road in Fulham. Around 1830, he also built the 'North End Brewery' and tavern to the south of the highway, together with a maltings to the north on the shorter stretch, initially managed by a Miss Goslin. To commemorate his development of the area and as its original freeholder, the 1835 tavern became known as the "Lillie Arms" (now renamed 'The Lillie Langtry', after the actress and mistress, among others, of the Prince of Wales). The Fulham stretch of Richmond Road and the canal bridge were eventually renamed Lillie Road and Lillie Bridge respectively. The canal project was dogged by financial difficulties and was an ill-considered venture whose time had passed. It eventually gave way to the railways, as first one track was laid along the canal, and in the 1860s that was filled in and a second rail track put down. Sir John Lillie decided to leave North End in 1837 and moved with his family to Chelsea in the County of Middlesex, where he occupied 12, Cheyne Walk, a 'noble' Georgian mansion, to which he made major structural additions. From there, he moved to Kensington, probably in the late 1840s, where he remained to the end of his days. Patent holder During his lifetime, Sir John Scott Lillie took out upwards of 30 patents for all manner of improvements, ranging from mechanisms for kneading dough and tilling fields, and the chemical composition of stucco, to propulsion engines for use on land and in the water. In 1836, he designed a power unit intended for propelling carriages and barges, for which he was granted a patent. His military background influenced his design of the 'Lillie Rifle battery', an early form of machine gun. Aware of the challenges for transport caused by the transformation of open fields into industrialised areas, Lillie applied himself to the creation of durable street paving. He designed a system using layers of wood and asphalt to make it weather-proof, and was granted a patent.
In 1863, he was a founding member of the 'Institute of Inventors' and took the chair at the first annual general meeting, styled as 'General Sir John Scott Lillie'. The politician and activist While living in North End, Fulham, Lillie took up civic duties. He was swept up in the debate over parliamentary reform. In 1831, with the accession of a new king, he published a pamphlet about the redistribution of power in Parliament and the curbing of corruption in the electoral system. He was connected with the Whig politician John Byng, 1st Earl of Strafford. In January 1835, Sir John Lillie stood for election as a burgess in the borough (constituency) of King's Lynn in the County of Norfolk, but despite loud support, he only managed to come third. His Irish roots and connections led Lillie to an active interest in Catholic emancipation. He was a friend and supporter of Daniel O'Connell. In his role as magistrate, Sir John intervened in 1840 in the matter of 'non-restraint of lunatics' at Hanwell Asylum, where it was reported that a lax approach was leading to self-harm by some patients. The matter was taken up in the press and in The Lancet. Even in advanced old age, Lillie took a stance against the cruel treatment of animals in experiments, as witnessed by his letter to The Lancet in January 1861. He was still writing to the press on the power of musketry in 1866. Sir John Lillie was widowed in May 1860, when his wife Dame Louisa died at Dover. He then married the widow Elisabeth Hannah Carew on 26 June 1862 at the British Embassy in Paris. She survived him. He himself died at his residence in Norfolk Terrace, Kensington, on 29 June 1868 and was buried at Brompton Cemetery, very close to his earlier efforts on behalf of transport development in Fulham. Awards and honours The Most Honourable Order of the Bath, CB (Great Britain) Army Gold Cross (Great Britain), for the Pyrenees, with three clasps: Nivelle, Orthez, Toulouse Military General Service Medal (Great Britain), seven clasps, for the Battle of Roliça, Battle of Vimeiro, Bussaco, Battle of Badajoz (1812), Battle of Salamanca, Battle of Vitoria and Battle of Nive Order of the Tower and Sword, Knight's breast Badge (Portugal) Commander's Cross for Five Actions (Portugal): Pyrenees, Nivelle, Nive, Orthez and Toulouse Campaign Cross for 4 Years (Portugal) Decoration of the Lily (France) Honour for rescuing a boy from drowning In 1827 Lillie was on board a packet boat on the Thames when it was involved in a collision with the works by the Tower. The boat seems to have disintegrated and sunk immediately, leaving passengers and crew in the water. Other craft were quickly despatched to rescue the luckless victims, and Lillie, being a strong swimmer, assisted a nine-year-old boy who had been with the party and could not swim. They were both picked up and taken to safety. For this act of bravery, Sir John Scott Lillie was honoured by the Royal Humane Society in January 1827. Bibliography Lillie, J.S. and Mayne, William. (2014) The Loyal Lusitanian Legion during the Peninsular War: The Campaign of Wellington's Portuguese Troops, 1809–11, published by Leonaur Ltd. Lillie, Sir John Scott. An Historical Sketch of the Origin and Progress of Parliamentary Corruption, and of the evils arising therefrom; in order to prove the ... necessity of parliamentary reform, London: Effingham Wilson, 1831, 86 pages. Polytechnisches Journal. 63. Band, Jahrgang 1837, N.F. 13. Band, Hefte 1-6 komplett. (= 18. Jahrgang, 1.-6. Heft ).
Eine Zeitschrift zur Verbreitung gemeinnüziger Kenntnisse im Gebiete der Naturwissenschaft, der Chemie, der Pharmacie, der Mechanik, der Manufakturen, Fabriken, Künste, Gewerbe, der Handlung, der Haus- und Landwirthschaft etc. Herausgegeben von Johann Gottfried und Emil Maximilian Dingler. Polytechnisches Journal. Hrsg. v. Johann Gottfried Dingler, Emil Maximilian Dingler und Julius Hermann Schultes: Published by Stuttgart in der J. G. Cotta'schen Buchhandlung (1837)., 1837 (in German) - includes Lillie's invention. Items named after Lillie Lillie Road (Fulham) Lillie Bridge, rebuilt by rail engineer, Sir John Fowler, 1st Baronet, 1866 Lillie Bridge Grounds, athletics and cricket grounds in Fulham where a number of records were achieved Lillie Rec, The Lillie Road Recreation Ground, a park at the junction with the Fulham Palace Road Lillie Bridge Railway Depot, 1872, engineering workshop for London Underground Lillie Hall, Seagrave Road, Fulham - briefly a roller-skating venue, then taken over by Charles Rolls (later of Rolls-Royce company) in 1903 as a car showroom (demolished) The Lillie Arms 1835 public house in Lillie Road, now renamed after 'Lillie Langtry' (opposite the Arts and crafts public house 'the Prince of Wales' destined for demolition in 2016) The Lillie Rifle battery Sir John Lillie Primary School (Fulham) See also Napoleonic Wars Arthur Wellesley, 1st Duke of Wellington History of the British canal system Radicals (UK) John Byng, 1st Earl of Strafford The Westminster Review References and notes 1790 births 1868 deaths Military personnel from County Roscommon Portuguese generals British Army personnel of the French Revolutionary Wars British Army commanders of the Napoleonic Wars Companions of the Order of the Bath Recipients of the Army Gold Cross People of the Peninsular War British inventors Mechanical engineers Transport pioneers English justices of the peace Deputy lieutenants of Middlesex Whig (British political party) politicians Knights Bachelor British Freemasons 19th-century British military personnel Burials at Brompton Cemetery Royal Warwickshire Fusiliers officers Irish officers in the British Army 19th-century Irish landowners British anti-vivisectionists Irish anti-vivisectionists
John Scott Lillie
Engineering
2,740
143,705
https://en.wikipedia.org/wiki/Jaroslav%20Heyrovsk%C3%BD
Jaroslav Heyrovský (20 December 1890 – 27 March 1967) was a Czech chemist and inventor who received the Nobel Prize in Chemistry in 1959 for his invention of polarography. Life and work Jaroslav Heyrovský was born in Prague on December 20, 1890, the fifth child of Leopold Heyrovský, Professor of Roman Law at the Charles University in Prague, and his wife Clara, née Hanl von Kirchtreu. He obtained his early education at secondary school until 1909, when he began his study of chemistry, physics, and mathematics at the Charles University in Prague. From 1910 to 1914 he continued his studies at University College London, under Professors Sir William Ramsay, W. C. McC. Lewis, and F. G. Donnan, taking his B.Sc. degree in 1913. He was particularly interested in working with Professor Donnan on electrochemistry. During the First World War Heyrovský worked in a military hospital as a dispensing chemist and radiologist, which enabled him to continue his studies and to take his Ph.D. degree in Prague in 1918 and D.Sc. in London in 1921. Heyrovský started his university career as assistant to Professor B. Brauner in the Institute of Analytical Chemistry of the Charles University, Prague; he was promoted to Associate Professor in 1922 and in 1926 he became the university's first professor of physical chemistry. Heyrovský's invention of the polarographic method dates from 1922 and he concentrated his whole further scientific activity on the development of this new branch of electrochemistry. He formed a school of Czech polarographers in the university, and was himself in the forefront of polarographic research. In 1950 Heyrovský was appointed as the Director of the newly established Polarographic Institute, which was incorporated into the Czechoslovak Academy of Sciences in 1952. In 1926 Professor Heyrovský married Marie (Mary) Koranová, and the couple had two children, a daughter, Jitka, and a son, Michael. Jaroslav Heyrovský died on March 27, 1967. He was interred in the Vyšehrad cemetery in Prague. Honors, awards, legacy Many universities and seats of learning honored Heyrovský. He was elected Fellow of University College, London, in 1927, and received honorary doctorates from the Technical University, Dresden in 1955, the University of Warsaw in 1956, the University Aix-Marseille in 1959, and the University of Paris in 1960. He was granted honorary membership in the American Academy of Arts and Sciences in 1933; in the Hungarian Academy of Sciences in 1955; the Indian Academy of Sciences, Bangalore, in 1955; the Polish Academy of Sciences, Warsaw, in 1962; was elected Corresponding Member of the German Academy of Sciences, Berlin, in 1955; member of the German Academy of Natural Scientists, Leopoldina (Halle-Saale) in 1956; Foreign Member of the Royal Danish Academy of Sciences, Copenhagen, in 1962; Vice-President of the International Union of Physics from 1951 to 1957; President and first honorary member of the Polarographic Society, London; honorary member of the Polarographic Society of Japan; honorary member of the Chemical Societies of Czechoslovakia, Austria, Poland, England and India. In 1965, Heyrovský was elected a Foreign Member of the Royal Society (ForMemRS). In Czechoslovakia Heyrovský was awarded the State Prize, First Grade, in 1951, and in 1955 the Order of the Czechoslovak Republic. Heyrovský lectured on polarography in the United States in 1933, the USSR in 1934, England in 1946, Sweden in 1947, the People's Republic of China in 1958, and in U.A.R. 
(Egypt) in 1960 and 1961. The crater Heyrovský on the Moon is named in his honour. References External links Biography including the Nobel Lecture, December 11, 1959 The Trends of Polarography 1890 births 1967 deaths Academic staff of Charles University Czechoslovak chemists Czechoslovak inventors Scientists from Prague Nobel laureates in Chemistry Nobel laureates from Austria-Hungary Czechoslovak Nobel laureates Charles University alumni Alumni of University College London Foreign members of the Royal Society Analytical chemists Members of the German Academy of Sciences at Berlin Burials at Vyšehrad Cemetery
Jaroslav Heyrovský
Chemistry
854
4,847,805
https://en.wikipedia.org/wiki/Flame%20Nebula
The Flame Nebula, designated as NGC 2024 and Sh2-277, is an emission nebula in the constellation Orion. It is about 1350 light-years away. At that distance, the Flame Nebula lies within the Orion B cloud of the larger Orion Molecular Cloud Complex. The bright star Alnitak (ζ Ori), the easternmost star in the Belt of Orion, appears very close to the Flame Nebula in the sky. But the star and nebula are not physically associated with one another. The Flame Nebula contains a young cluster of stars which includes at least one hot, luminous O-type star labeled IRS 2b. The dense gas and dust in the foreground of the nebula heavily obscures the star cluster inside the nebula, making studies at infrared wavelengths most useful. The energetic ultraviolet light emitted by the central O-type star IRS 2b into the Flame Nebula causes the gas to be excited and heated. The glow of the nebula results from the energy input from this central star. Within the nebula and surrounding the central hot star is a cluster of young, lower-mass stars, 86% of which have circumstellar disks. X-ray observations by the Chandra X-ray Observatory show several hundred young stars, out of an estimated population of 800 stars. X-ray and infrared images indicate that the young stars are concentrated near the center of the cluster. The Flame Nebula was observed with ALMA and this study found two populations, which are separated by a molecular cloud. The eastern population is 0.2-0.5 Myr old and has a disk fraction of 45±7%. The western population is slightly older at 1 Myr and has a lower disk fraction of 15±4%. This disk fraction is lower than the one observed in the mid-infrared, but the ALMA survey also observed a smaller region. The eastern part contains the O8 star IRS 2b and the western part contains the B0.5V star IRS 1. Hubble observations have shown that the Flame Nebula contains 4 clear proplyds and 4 candidate proplyds. Three of these are in the older western region and are pointing towards IRS 1. The other 5 are in the younger eastern region and are pointing towards IRS 2b. Gallery References External links Orion molecular cloud complex NGC objects Orion (constellation) Emission nebulae Sharpless objects Gould Belt Star-forming regions
Flame Nebula
Astronomy
482
49,633,297
https://en.wikipedia.org/wiki/Phosphate%20permease
Phosphate permeases are membrane transport proteins that facilitate the diffusion of phosphate into and out of a cell or organelle. Some of these families include: TC# 2.A.1.4 - Organophosphate:Pi Antiporter (OPA) Family, (e.g., Pho-84 of Neurospora crassa; TC# 2.A.1.9.2) TC# 2.A.20 - Inorganic Phosphate Transporter (PiT) Family TC# 2.A.47.2 - Phosphate porters of the Divalent Anion:Na+ Symporter (DASS) Family, includes Pho87/90/91 TC# 2.A.58 - Phosphate:Na+ Symporter (PNaS) Family TC# 2.A.94 - Phosphate Permease (Pho1) Family See also Major facilitator superfamily Ion transporter superfamily Phosphotransferase Inorganic phosphate permeases Transporter Classification Database See also TC# 3.A.10 - H+, Na+-translocating Pyrophosphatase (M+-PPase) Family TC# 4.E.1 - Vacuolar (Acidocalcisome) Polyphosphate Polymerase (V-PPP) Family Further reading EMBL-EBI, InterPro. "Phosphate permease (IPR004738) < InterPro < EMBL-EBI". www.ebi.ac.uk. Retrieved 2016-03-03. "pho-4 - Phosphate-repressible phosphate permease pho-4 - Neurospora crassa (strain ATCC 24698 / 74-OR23-1A / CBS 708.71 / DSM 1257 / FGSC 987) - pho-4 gene & protein". www.uniprot.org. Retrieved 2016-03-03. Versaw, W. K. (1995-02-03). "A phosphate-repressible, high-affinity phosphate permease is encoded by the pho-5+ gene of Neurospora crassa". Gene 153 (1): 135–139. ISSN 0378-1119. PMID 7883177. Ramaiah, Madhuvanthi; Jain, Ajay; Baldwin, James C.; Karthikeyan, Athikkattuvalasu S.; Raghothama, Kashchandra G. (2011-09-01). "Characterization of the phosphate starvation-induced glycerol-3-phosphate permease gene family in Arabidopsis". Plant Physiology 157 (1): 279–291. doi:10.1104/pp.111.178541. ISSN 1532-2548. PMC 3165876. PMID 21788361. Stakheev, A. A.; Khairulina, D. R.; Ryazantsev, D. Yu; Zavriev, S. K. (2013-03-22). "Phosphate permease gene as a marker for the species-specific identification of the toxigenic fungus Fusarium cerealis". Russian Journal of Bioorganic Chemistry 39 (2): 153–160. doi:10.1134/S1068162013020131. ISSN 1068-1620. References Protein families Solute carrier family
Phosphate permease
Biology
739
44,983,434
https://en.wikipedia.org/wiki/18%20Tauri
18 Tauri is a single star in the zodiac constellation of Taurus, located 444 light years away from the Sun. It is visible to the naked eye as a faint, blue-white hued star with an apparent visual magnitude of 5.66. The star is moving further from the Earth with a heliocentric radial velocity of +4.8 km/s. It is a member of the Pleiades open cluster, which is positioned near the ecliptic and thus is subject to lunar occultations. This is a B-type main-sequence star with a stellar classification of B8 V, and is about halfway through its main sequence lifetime. It displays an infrared excess, suggesting the presence of an orbiting debris disk with a black body temperature of 75 K at a separation of from the host star. The star has 3.34 times the mass of the Sun and 2.89 times the Sun's radius. It is radiating 160 times the Sun's luminosity from its photosphere at an effective temperature of 13,748 K. 18 Tauri has a high rate of spin, showing a projected rotational velocity of 212 km/s. References B-type main-sequence stars Pleiades Taurus (constellation) Durchmusterung objects Tauri, 018 023324 017527 1144
18 Tauri
Astronomy
271
425,359
https://en.wikipedia.org/wiki/Job%20Definition%20Format
JDF (Job Definition Format) is a technical standard developed by the graphic arts industry to facilitate cross-vendor workflow implementations of the application domain. It is an XML format about job ticket, message description, and message interchange. JDF is managed by CIP4, the International Cooperation for the Integration of Processes in Prepress, Press and Postpress Organization. JDF was initiated by Adobe Systems, Agfa, Heidelberg and MAN Roland in 1999 but handed over to CIP3 at Drupa 2000. CIP3 then renamed itself CIP4. The initial focus was on sheetfed offset and digital print workflow, but has been expanded to web(roll)-fed systems, newspaper workflows and packaging and label workflows. It is promulgated by the prepress industry association CIP4, and is generally regarded as the successor to CIP3's Print Production Format (PPF) and Adobe Systems' Portable Job Ticket Format (PJTF). The JDF standard is at revision 1.8. The process of defining and promulgating JDF began circa 1999. The standard is in a fairly mature state; and a number of vendors have implemented or are in the process of implementing it. JDF PARC, a multivendor JDF interoperability demonstration, was a major event at the 2004 Drupa print industry show, and featured 21 vendors demonstrating, or attempting to demonstrate interoperability between a total of about forty pairs of products. JDF is an extensible format. It defines both JDF files and JMF, a job messaging format based on XML over HTTP. In practice, JDF-enabled products can communicate with each other either by exchanging JDF files, typically via "hot folders", or the net or by exchanging JMF messages over the net. As is typical of workflow applications, the JDF message contains information that enables each "node" to determine what files it needs as input and where they are found, and what processes it should perform. It then modifies the JDF job ticket to describe what it has done, and examines the JDF ticket to determine where the message and accompanying files should be sent next. The goal of CIP4 and the JDF format is to encompass the whole life cycle of a print and cross-media job, including device automation, management data collection and job-floor mechanical production process, including even such things as bindery, assembly of finished products on pallets. Before JDF can be completely realized, more vendors need to accept the standard. Therefore, few users have been able to completely utilize the benefits of the JDF system. In finishing and binding, and printing there is a tradition of automation and few large enough dominating companies that can steer the development of JDF system. But it is still necessary for the manufacturers of business systems to fully support JDF. The same progress has not been made here probably because many of these companies are small specialty companies who haven't the resource to manage such development and who don't specialize on graphic production. In addition, there is a huge amount of large-capital production machinery already existing in the trade which is incompatible with JDF. The graphic arts business is shrinking yearly and any large-capital decision is much more a risk than in previous years. The underlying incentive to adopt JDF is not sufficient in most cases to cause owners to abandon "acceptable" machinery that they presently have in favour of a large-capital purchase of somewhat faster, JDF-compliant capital goods. 
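The file-and-folder exchange pattern described above can be sketched in a few lines of code. The fragment below writes a deliberately simplified, JDF-style XML ticket into a hot folder; it is a minimal sketch under stated assumptions, not a faithful rendering of the CIP4 schema. The element and attribute names used here (ResourcePool, RunList, the FileSpec attribute, the folder layout) are illustrative stand-ins, and a real ticket would carry far more resources, links and audit information.

```python
import xml.etree.ElementTree as ET
from pathlib import Path

def write_job_ticket(hot_folder: str, job_id: str) -> Path:
    """Write a minimal, illustrative JDF-style XML job ticket into a hot folder.

    The structure is a simplified sketch; consult the CIP4 specification for
    the real schema, resource types and linking rules.
    """
    root = ET.Element("JDF", {
        "ID": job_id,
        "Type": "ProcessGroup",   # illustrative value for a group of prepress steps
        "Status": "Waiting",
        "Version": "1.8",
    })
    pool = ET.SubElement(root, "ResourcePool")
    # Hypothetical input resource pointing at the file the next node should consume.
    ET.SubElement(pool, "RunList", {"ID": "RL1", "FileSpec": "input.pdf"})

    path = Path(hot_folder) / f"{job_id}.jdf"
    path.parent.mkdir(parents=True, exist_ok=True)
    ET.ElementTree(root).write(path, xml_declaration=True, encoding="UTF-8")
    return path

# A downstream device would poll the same folder, parse any ticket it finds,
# perform its process, update the ticket's status, and pass the ticket on
# (or report progress via JMF messages over HTTP).
print(write_job_ticket("hotfolder/prepress", "Job_0001"))
```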
This is especially true in markets where large amounts of non-compliant production machinery are being sold in the used-equipment market and auction sales at considerable reductions in price from new equipment. For printing proofing Introduction to proofing Before describing the implementation of proofing in JDF, it is helpful to know a little about the terminology that will be used. Proofing: the process of producing a printed output on a device (proofer) that emulates, as closely as it can, the intended printed output on press (the final production device, which can be a conventional press, a digital press, etc.) where the final printed product will be produced. Prepress proofing (or off-press proofing) provides a visual copy without creating a press proof (the process is cheaper). Soft proofing: the same as proofing, but the proofer is actually a screen. Approval: the process of approving or rejecting the proofing (or the soft proofing). Comments and annotations may be added to describe the reasons for the decision and to give instructions on what changes to make. The original input files have to be processed to be printed on the final press (interpreting, rendering, screening, color management and so on) and likewise to be printed on the proofer (which has different characteristics). The decision on which of the processing steps will be executed once (common to printing on both the proofer and the press) and which will not depends on many parameters (characteristics of the proofer device, user requirements, workflow requirements and so on). Proofing has to take into account the consistency between the press and the proofer. Proofing in JDF In JDF 1.1, proofing and soft proofing were each defined as an atomic process whose inputs were all the parameters required for a successful process. This has some drawbacks: Lack of flexibility: the semantics are specific to one workflow and therefore limited to the definition of the processes and the resources that it can take as input. Lack of control: it is difficult to define the input resources with all the information required for control. Duplication: similar information has to be used to define both proofing and printing. If different resources are used, this will result in duplication. From JDF 1.2, proofing and soft proofing were deprecated in favour of a combined process to specify the proofing workflow. The job ticket explicitly defines the processing and provides the flexibility to implement it in different workflows. In order to do that, the atomic processes were made capable of keeping all the information necessary to specify different configurations and options. Combined processes for proofing It is impossible to describe proofing by a single combination of processes; the combination depends on the capabilities of the RIPs (raster image processors), the devices used for proofing and the proofing production workflow. It is still possible to define a generic combined process for proofing, which allows the proofing step to be described within a workflow. The generic combined proofing process combines the following JDF processes: ColorSpaceConversion (1): converts the contents of the input RunList from the input color spaces to the color model of the press. Interpreting: interprets the input RunList file(s) and converts them to an internal display list in order to go through the Rendering. Rendering: renders the raster data. Screening: screens the raster data. ColorSpaceConversion (2): converts the data from the press color model to the proofer device color model. 
Imposition: if imposition proofing is done, combines the pages and marks on the imposed sheets. ImageSetting: specifies the actual printing of the proof. Depending on the characteristics of the proofing device DigitalPrinting can be used as well. The ordering is not completely strict (same result may be achieved with different order combination of steps), but there are some precedence rules: the first color space conversion must be done before the second one, rendering must be done after interpreting, screening in turn must be done after rendering and the second color conversion, ImageSetting/DigitalPrinting must be done after screening. Combined processes for soft proofing Compared to proofing, since no printed proof is produced, no ImageSetting/DigitalProofing processes are required. Moreover, the rendered data is sent directly to the Approval process that must implement a user interface to show those data on the display and allow him/her to approve/reject the proof and eventually annotate it using digital signature. All the ordering consideration are still valid. Considerations on JDF atomic processes ColorSpaceConversion In a production workflow with proofing, there must be both the conversion of the input asset color spaces to the press color space and the one of press color space to the proofer color space. So in JDF two different ColorSpaceConversion processes are required and depending on the exact workflow and on the capabilities of the devices, they can be included in the same combined process. Interpreting and rendering Input data to the proofing combined process usually required both interpreting (with the exception of JDF ByteMap) and rendering. In these cases they will be included in the combined process describing the proofing step. Screening Two possibilities: Proofer can emulate the screening of the press: Screening should be performed once at the ripping combined process and the halftoned data should be sent directly to the proofing combined process. Proofer is a "contone proofer": one Screening process for the press and one for the proofer. ImageSetting/Digital Printing For printing the proof ImageSetting/DigitalPrinting process has to be specified at the end of the proofing combined process in order to define how the proof is actually printed. Approval Must be executed before the final production printing can be started. HP example: cutting of proofing time HP incorporates JDF into its proofing products. Even if it's only one step in the total process JDF cuts time from the printing process making printers more efficient because proofing traditional generation and delivery of proofs can take days. HP sends PDF files to a remote proofing. JDF file enables the inclusion of job information (color profiles, job ticket details...) that is sent to the client. In the future marking up the proof and digital signatures for approval will be implemented. See also CUPS Internet Printing Protocol PPML (Personalized Print Markup Language) Workflow management system References Wolfgang Kühn, Martin Grell: JDF: Process Integration, Technology, Product Description, Springer, Doug Sahlin: How to do Everything with Adobe Acrobat X, McGraw-Hill/Osborne Media, Kaj Johansson, Peter Lundberg, Robert Ryberg: A Guide to Graphic Print Production, John Wiley & Sons, External links CIP4 Home Page Application Note Implementing Proofing, SoftProofing and Proof Approval in JDF 1.2 CIP4 Technical Website What is JDF? 
JDF Specification Documents JDF Technology Report at the OASIS Cover Pages site JDF 1.8 PDF Printing terminology Print production XML-based standards Digital press
Job Definition Format
Technology
2,172
56,528,765
https://en.wikipedia.org/wiki/Fluoroanion
In chemistry, a fluoroanion or fluorometallate anion is a polyatomic anion that contains one or more fluorine atoms. The ions, and the salts formed from them, are also known as complex fluorides. They can occur in salts, or in solution, but seldom as pure acids. Fluoroanions often contain elements in higher oxidation states. They mostly can be considered as fluorometallates, which are a subclass of halometallates. Anions that contain both fluorine and oxygen can be called "oxofluoroanions" (or rarely "fluorooxoanions"). The following is a list of fluoroanions in atomic number order. trifluoroberyllate tetrafluoroberyllate tetrafluoroborate magnesium tetrafluoride trifluoroaluminate tetrafluoroaluminate pentafluoroaluminate hexafluoroaluminate heptafluoroaluminate hexafluorosilicate hexafluorophosphate Sulfur trifluoride anion pentafluorosulfate aka pentafluorosulfite or Sulfur pentafluoride ion sulfur pentafluoride anion tetrafluorochlorate hexafluorotitanate hexafluorovanadate(III) hexafluorovanadate(IV) hexafluorovanadate(V) trifluoromanganate hexafluoromanganate(III) hexafluoromanganate(IV) heptafluoromanganate IV Tetrafluoroferrate 1− and 2− hexafluoroferrate 4− and 3− tetrafluorocobaltate II Hexafluorocobaltate III and IV Heptafluorocobaltate IV Tetrafluoronickelate Hexafluoronickelate II, III and IV hexafluorocuprate tetrafluorozincate Hexafluorogallate hexafluorogermanate hexafluoroarsenate tetrafluorobromate hexafluorobromate pentafluorozirconate hexafluorozirconate octafluorozirconate hexafluoroniobate heptafluoroniobate octafluoromolybdate tetrafluoropalladate hexafluororhodate hexafluororuthenate(IV) hexafluororuthenate(V) hexafluoroindate hexafluorostannate fluoroantimonate hexafluoroiodate 1− octafluoroxenate tetrafluorolanthanate pentafluorocerate IV Hexafluorocerate IV Heptafluorocerate IV octafluorocerate IV pentafluorohafnate hexafluorohafnate heptafluorotantalate octafluorotantalate heptafluorotungstate octafluorotungstate octafluororhenate hexafluoroplatinate tetrafluoroaurate hexafluoroaurate hexafluorothallate(III) tetrafluorobismuthate hexafluorobismuthate hexafluorothorate hexafluorouranate(IV) hexafluorouranate(V) octafluorouranate(IV) octafluorouranate(V) References Fluorine compounds Anions Double salts
Fluoroanion
Physics,Chemistry
784
25,887,725
https://en.wikipedia.org/wiki/Psilocybe%20columbiana
Psilocybe columbiana is a species of mushroom in the family Hymenogastraceae known only from the páramos of high mountains in Colombia. It is in the Psilocybe fagicola complex with Psilocybe fagicola, Psilocybe oaxacana, Psilocybe banderillensis, Psilocybe herrerae, Psilocybe keralensis, Psilocybe neoxalapensis, and Psilocybe teofiloi. See also List of Psilocybin mushrooms Psilocybin mushrooms Psilocybe References columbiana Fungi of Colombia Altiplano Cundiboyacense Páramo fungi Entheogens Psychoactive fungi Psychedelic tryptamine carriers Fungi described in 1978 Taxa named by Gastón Guzmán Fungus species
Psilocybe columbiana
Biology
165
607,547
https://en.wikipedia.org/wiki/Otto%20Finsch
Friedrich Hermann Otto Finsch (8 August 1839, Warmbrunn – 31 January 1917, Braunschweig) was a German ethnographer, naturalist and colonial explorer. He is known for a two-volume monograph on the parrots of the world which earned him a doctorate. He also wrote on the people of New Guinea and was involved in plans for German colonization in Southeast Asia. Several species of bird (such as Oenanthe finschii, Iole finschii, Psittacula finschii) are named after him as also the town of Finschhafen in Morobe Province, Papua New Guinea and a crater on the Moon. Biography Finsch was born at Bad Warmbrunn in Silesia to Mortiz Finsch and Mathilde née Leder. His father was in the glass trade and he too trained as a glass painter. An interest in birds led him to use his artistic skills for the purpose. Finsch went to Budapest in 1857 and studied at the Royal Hungarian University, earning money by preparing natural history specimens. He then spent two years in Russe, Bulgaria on an invitation from the Austrian Consul and gave private tutoring in German while exploring the birdlife of the region. He published his first paper in the Journal fur Ornithologie on the birds of Bulgaria. This experience helped him obtain a curatorial position at the Rijksmuseum van Natuurlijke Historie in Leiden (1862–1865) assisting Herman Schlegel. In 1864 he returned to Germany on the suggestion of Gustav Hartlaub to become curator of the museum in Bremen and became its director in 1876. After publishing the two volume monographs on the parrots of the world, Die Papageien, monographisch bearbeitet (1867–68), he obtained an honorary doctorate from the Friedrich Wilhelms University in Bonn. Apart from ornithology he also took an interest in ethnology. In 1876 he accompanied the zoologist Alfred Brehm on an expedition to Turkestan and northwest China. Finsch resigned as curator of the museum in 1878 in order that he could resume his travels, sponsored by the Humboldt Foundation. Between spring 1879 and 1885 he made several visits to the Polynesian Islands, New Zealand, Australia and New Guinea. His proposal was to obtain as many artefacts as possible with the claim that native cultures, fauna and flora were fast vanishing. Finsch was shocked by the punitive actions of the English Methodist missionary George Brown (1835–1917) and was concerned by the violent conflicts between the natives and westerners. He also found no support for contemporary ideas on race with neat categories and found instead a continuum of variations in the human form. After witnessing a cannibal feast at Matupit he commented that the people were still not classifiable as "savages" as they maintained neat agriculture, had their own song, dance and followed commerce. He returned to Germany in 1882 and began to promote the creation of German colonies in the Pacific along with the South Sea Plotters, an influential group led by a banker Adolph von Hansemann. In 1884 he returned aboard the steamer Samoa to New Guinea as Bismarck's Imperial Commissioner to explore potential harbours under the guise of scientists and negotiated for the north-eastern portion of that island, together with New Britain and New Ireland, to become a German protectorate. It was renamed Kaiser-Wilhelmsland and the Bismarck Archipelago. The capital of the colony was named Finschhafen in his honour. In 1885 he was the first European to discover the Sepik river, and he named it after Kaiserin Augusta, the German Empress. 
Newspapers of the period speculated that he would be appointed as an administrator to the new territories but this never happened. He was instead offered a position as station director which involved menial administrative tasks that would come in the way of his plans to explore and study the region. He returned to Germany and spent much of the subsequent period without formal employment. Finsch had been married to Josephine Wychodil from around 1873 but they divorced around 1880. In 1886 he married Elisabeth née Hoffman (1860–1925). Elisabeth was a talented artist and she illustrated many of his catalogues. Finsch was briefly an advisor to the Neuguinea-Kompagnie. In 1898 he abandoned his dreams in ethnology and returned to ornithology, becoming curator of the bird collections at the Rijksmuseum in Leiden. He did not enjoy this period, noting that life for him, his wife and daughter Esther, felt like living in exile. He also wrote several articles on his past work Wie ich Kaiser-Wilhelmsland erwarb (How I acquired Kaiser Wilhelm’s Land, 1902) and Kaiser-Wilhelmsland. Eine friedliche Kolonialerwerbung (Kaiser Wilhelm’s Land: A peaceful colonial acquisition, 1905). Seeking return to Germany, he finally joined the ethnographical department of the Municipal Museum in Brunswick in 1904 and worked the remainder of his life there. In 1909 he was titled professor by the Duke of Braunschweig and honoured with a 'medal for distinguished services for art and science' in silver. One of his major works was on the parrots of the world. This was not without its critics, since he often tried to rename genera apparently to gain taxonomic authorship. Several species of birds bear his name, including the lilac-crowned parrot (Amazona finschi), Finsch's wheatear (Oenanthe finschii), Finsch's bulbul (Alophoixus finschii), and the grey-headed parakeet (Psittacula finschii). A species of monitor lizard, Varanus finschi, is named after him, because he collected what would become the holotype for this species. The crater Finsch on the Moon is also named in his honor. In 2008, following international treaties, some of the human remains that he had collected from Cape York and the Torres Straits that were held in the Charité Medical University in Berlin were repatriated. Additional remains have also been repatriated. Published works Catalog der Ausstellung ethnographischer und naturwissenschaftlicher Sammlungen (Bremen: Diercksen und Wichlein, 1877). Anthropologische Ergebnisse einer Reise in der Südsee und dem malayischen Archipel in den Jahren 1879–1882 (Berlin: A. Asher & Co., 1884). Otto Finsch, Masks of Faces of Races of Men from the South Sea Islands and the Malay Archipelago, taken from Living Originals in the Years 1879–82 (Rochester, NY: Ward's Natural Sciences Establishment, 1888). Ethnologische Erfahrungen und Belegstücke aus der Südsee: Beschreibender Katalog einer Sammlung im K.K. naturhistorischen Hofmuseum in Wien (Wien: A. Holder, 1893). Die Papageien / monographisch bearbeitet von Otto Finsch Leiden: Brill, 1867–68. with Gustav Hartlaub, "Die Vögel der Palau-Gruppe. Über neue und weniger gekannte Vögel von den Viti-, Samoa- und Carolinen-Inseln." Journal des Museum Godeffroy'', Heft 8, 1875 and Heft 12, 1876. References Other sources Herbert Abel, Otto Finsch: Ein Lebensbild Zur 50. Wiederkehr des Todestages am 31. Januar 1967. Jahrbuch der Schlesischen Friedrich-Wilhelms-Universität zu Breslau. Band XII. Wuerzburg: Holzner-Verlag. Howes, Hilary, 2018. 
« A “Perceptive Observer” in the Pacific: Life and Work of Otto Finsch » in Bérose - Encyclopédie internationale des histoires de l’anthropologie External links Resources related to research : BEROSE - International Encyclopaedia of the Histories of Anthropology. "Finsch, Otto (1839-1917)", Paris, 2018. (ISSN 2648-2770) AMNH anthropology collection Digitised works by Otto Finsch at Biodiversity Heritage Library 1839 births 1917 deaths People from Jelenia Góra People from the Province of Silesia 19th-century German biologists 19th-century German explorers German ornithologists Taxon authorities Explorers of Papua New Guinea Explorers from the Kingdom of Prussia Scholars from the Kingdom of Prussia
Otto Finsch
Biology
1,765
8,442,496
https://en.wikipedia.org/wiki/Colonial%20Revival%20garden
A Colonial Revival garden is a garden design intended to evoke the garden design typical of the Colonial period of Australia or the United States. The Colonial Revival garden is typified by simple rectilinear beds, straight (rather than winding) pathways through the garden, and perennial plants from the fruit, ornamental flower, and vegetable groups. The garden is usually enclosed, often by low walls, fences, or hedges. The Colonial Revival gardening movement was an important development in the gardening movement in the United States. The American colonial garden Generalizing about the common house garden in the colonial period in the United States is difficult, as garden plantings and even design varied considerably depending on the time period, wealth, climate, colonial heritage (whether British, French, or Spanish), and the purpose to which the garden was to be put (vegetable, flower, herb, etc.). Because of the overwhelmingly strong British influence in colonial America, the "colonial garden" generally refers to the most common type of garden found in the 13 British colonies. Colonial-era gardens in the southern colonies often exhibited the same design as those in the north. Gardens of the wealthy, however, often employed newer gardening ideas, such as the landscape garden or English garden. Colonial gardens tended to be small and close to the house. A straight walkway generally extended on a line equal with the entrance to the house through the center of the garden. (This layout was often abandoned in the north, where it was more important to site the garden so the building protected it from northwest winds.) Perpendicular straight paths often extended from this central path. Planting beds were usually square or rectangular although circular beds were also seen. In almost all cases, beds were raised to provide good drainage. Beds could sometimes be bordered with low-growing, neat plants such as chive or pinks. In areas with a Spanish influence, orchards generally were attached to the garden. The paths in the Colonial American garden were generally of brick, gravel, or stone. Brick was more commonly used in the south, however. Enclosure of the garden was common, often with boxwood hedges or wooden fences. Picket fences were common, but boxwood was usually used only in the south and in the later colonial period. Plantings in colonial gardens were generally not separated by type. Fruits, herbs, ornamental flowers, and vegetables were usually mixed together in the same planting bed. Ornamental flowers were often grown closer to the house, however, while vegetables which needed space to grow (such as corn, green beans, or pumpkins) would often be grown in larger beds further away. Fruit trees would sometimes line paths, to provide shade and produce, but fruit bushes were as common as fruit trees and always planted in the interior of the garden. Fruit trees would also be planted along the external border of the garden (while wealthier people with more land planted them in orchards). Ornamental shrubs were rare, but could include azalea, lilac, and mock orange. A stand-alone herb garden was uncommon in the United States. However, Colonial American herb gardens were generally of the same design as other gardens. They were usually less than across, and often consisted of four square plots separated by gravel paths. More commonly, herbs were mixed in with flowers and other plants. 
Commonly planted herbs included angelica, basil, burnet, calendula, caraway, chamomile, chervil, coriander, comfrey, dill, fennel, licorice, mint, nasturtium, parsley, sage, and tarragon. Herbs to a Colonial American did not have the same meaning as the words does in modern America. To colonists, "herb" meant not only savory plants added to dishes to enhance flavor but included medicinal plants as well as greens (such as nasturtiums and calendulas) meant to be eaten raw or cooked as part of a salad. The Australian colonial garden The first botanical gardens in Australia were founded early in the 19th century. The Royal Botanic Gardens, Sydney, 1816; the Royal Tasmanian Botanical Gardens, 1818; the Royal Botanic Gardens, Melbourne, 1845; Adelaide Botanic Gardens, 1854; and Brisbane Botanic Gardens, 1855. These were established essentially as colonial gardens of economic botany and acclimatisation. The Auburn Botanical Gardens, 1977, located in Sydney's western suburbs, are one of the popular and diverse botanical gardens in the Greater Western Sydney area. History of the Colonial Revival garden movement The Colonial Revival gardening movement traces its origins to the Centennial International Exhibition of 1876, the first official World's Fair held in the United States. The Centennial Exposition was held in Philadelphia, Pennsylvania, from May 10 to November 10, 1876, and it celebrated the 100th anniversary of the signing of the Declaration of Independence. Although the Colonial Revival gardening movement had already begun a short time before, the Centennial Exposition created intense interest in all things colonial — including the colonial garden. Colonial Revival gardens were widely popular from the late 1800s to the late 1930s. The Colonial Revival gardening movement occurred primarily in the eastern United States (where colonial heritage was strongest), although the gardens were constructed across the country. A number of writers published highly influential books about the Colonial Revival garden. Among these were Alice Morse Earle's Old Time Gardens (1901), Alice Morse Earle's Sun Dials and Roses of Yesterday (1902), and Grace Tabor's Old-Fashioned Gardening (1913). Colonial Revival gardens do not seek to imitate or replicate actual colonial gardens or colonial planting schemes. Rather, they are (as historical gardening expert Denise Wiles Adams notes) "romanticized" versions of colonial gardens. As Butler, Smalling, and Wilson put it: "Colonial Revival gardens were never intended to duplicate the gardens' historical appearance. They are twentieth-century gardens designed to meet contemporary needs, the artistic creations of very accomplished landscape architects that value aesthetic quality over historical accuracy." In terms of layout, the Colonial Revival garden still emphasizes straight lines and symmetry, and a central axis aligned with the house. Although plants typical of the colonial era are emphasized, many Colonial Revival gardens also soften the line where the house foundation meets the soil through the use of "foundation plantings" such as low evergreen shrubs. Modern Colonial Revival gardens tend to emphasize boxwood hedges as edging rather than fences. It is more common to see early 20th century favorites like delphiniums, hollyhocks, and violets used than historic plants. In the late 1800s and early 1900s, many Colonial Revival gardens were planted with brightly colored exotic plants which were not part of the colonial experience. 
These vibrantly colored plants were part of the Victorian era gardening legacy. But in the late 1900s and early 2000s, many Colonial Revival gardens have removed these exotic plants in favor of a more authentic colonial garden. Colonial Revival gardens also usually incorporate a "feature" like an arbor, bench, or fountain at the center of the garden where the paths intersect. Such features were elements of the late colonial period only. Examples Several notable examples exist of Colonial Revival gardens, most of them located on the east coast of the United States. They include: Arlington House, the Robert E. Lee Memorial, on the grounds of Arlington National Cemetery in Arlington County, Virginia Bassett Hall, a farmhouse located near Williamsburg, Virginia William Blount Mansion in Knoxville, Tennessee Colonial Williamsburg, located near Williamsburg, Virginia Hamilton House in South Berwick, Maine Mount Vernon, plantation home of George Washington located near Alexandria, Virginia Old Stone House in Washington, D.C. The Stevens-Coolidge Place in North Andover, Massachusetts See also Colonial Revival architecture Revivalism (architecture) References Bibliography Adams, Denise Wiles. "Garden Designs for Historic Homes." Old-House Journal. September–October 2005, p. 35-38. Bennett, Paul. Garden Lover's Guide to the South. New York: Princeton Architectural Press, 2000. Brinkley, M. Kent and Chappell, Gordon W. The Gardens of Colonial Williamsburg. Williamsburg, Va.: Colonial Williamsburg Foundation, 1996. Butler, Sara A.; Smalling, Jr., Walter; and Wilson, Richard Guy. The Campus Guide: University of Virginia. New York: Princeton Architectural Press, 1999. Cheek, Richard and Favretti, Rudy J. Gardens & Landscapes of Virginia. Little Compton, R.I.: Fort Church Publishers, 1993. Clayton, Virginia Tuttle. The Once and Future Gardener: Garden Writing From the Golden Age of Magazines, 1900-1940. Boston, Mass.: David R. Godine, 2000. Damrosch, Barbara. Theme Gardens. New York: Workman Pub., 2001. Emmet, Alan. So Fine a Prospect: Historic New England Gardens. Lebanon, N.H.: University Press of New England, 1997. Favretti, Rudy J. and Favretti, Joy P. Landscapes and Gardens for Historic Buildings. Walnut Creek, Calif.: AltaMira Press, 1997. Forsyth, Holly Kerr. Gardens of Eden: Among the World's Most Beautiful Gardens. Carlton, Vic.: Miegunyah Press, 2009. Griswold, Mac and Foley, Roger. Washington's Gardens at Mount Vernon: Landscape of the Inner Man. Boston, Mass.: Houghton Mifflin, 1999. Johnson, Vicki. "Symmetry in the Garden." Old House Interiors. May 2002, p. 72-75. Karson, Robin S. Fletcher Steele, Landscape Architect: An Account of the Gardenmaker's Life, 1885-1971. Amherst, Mass.: University of Massachusetts Press, 2003. Kowalchik, Claire; Hylton, William H.; and Carr, Anna. Rodale's Illustrated Encyclopedia of Herbs. Emmaus, Pa.: Rodale Press, 1998. Kunst, Scott G. "Victorian Vegetables." Old-House Journal. April 1987, p. 46-51. McGuire, Diane Kostial. Gardens of America: Three Centuries of Design. Charlottesville, Va.: Thomasson-Grant, 1989. Phillips, Ellen and Burrell, C. Colston. Rodale's Illustrated Encyclopedia of Perennials. Emmaus, Pa.: Rodale Press, 1993. Seeber, Barbara H. A City of Gardens: Glorious Public Gardens In and Around the Nation's Capital. Sterling, Va.: Capital Books, 2004. Tankard, Judith B. "Ellen Biddle Shipman's Colonial Revival Garden Style." In Re-Creating the American Past: Essays on the Colonial Revival. Richard Guy Wilson, Shaun Eyring, and Kenny Marotta, eds. 
Charlottesville, Va.: University Press of Virginia, 2006. Taylor, Patrick. The Oxford Companion to the Garden. New York: Oxford University Press, 2008. Thalimer, Carol and Thalimer, Dan. Quick Escapes, Atlanta: 27 Weekend Getaways From the Gateway to the South. Guilford, Con..: Globe Pequot Press, 2005. Wright, Renee. Virginia Beach, Richmond & Tidewater Virginia including Williamsburg, Norfolk and Jamestown: A Great Destination. Woodstock, Vt.: Countryman Press, 2010. Landscape architecture Colonial Revival Movement
Colonial Revival garden
Engineering
2,280
7,389,093
https://en.wikipedia.org/wiki/Museum%20Erotica
Museum Erotica was a sex museum in Copenhagen, Denmark, located just off of Strøget, Copenhagen's main shopping street. The museum was founded by director/photographer Ole Ege and business manager Kim Clausen. It originally opened in 1992 at Vesterbrogade 31 in Copenhagen. On May 14, 1994, it reopened at Købmagergade 24, where it remained until it closed in March 2009, following the sudden, unexpected death of Kim Clausen in 2008, and then the financial recession. The museum claimed to have had one million visitors . The museum often described itself as having "illustrated some of the sex life of Homo sapiens" , which reflects its very historic and holistic approach to its exhibitions. The walk through the museum took a visitor through many exhibits in roughly chronological order. A good deal of written commentary in English and Danish explained and augmented many of the items on display. There were extensive exhibitions on the beginning of erotic photography, a room with Playboy-centerfolds and other American pinups, and a special exhibition on Marilyn Monroe, among other things. One of the final displays in Museum Erotica was a small room with a sofa opposite a large wall of small television screens each showing a different porno video. The selections reflected the eclectic nature of the museum's displays. See also List of sex museums References Museums established in 2011 Sex museums Museum Erotica
Museum Erotica
Biology
286
72,926,560
https://en.wikipedia.org/wiki/Samsung%20Galaxy%20Z%20Flip%205
The Samsung Galaxy Z Flip 5 (stylized as Samsung Galaxy Z Flip5, sold as Samsung Galaxy Flip 5 in certain territories) is an Android-based foldable smartphone that was announced by Samsung Electronics on July 26, 2023. Its unveiling marked the first time that the Galaxy Unpacked event was held in the company's home country of South Korea. The phone was released on August 11, 2023. Specifications Design Hardware The Galaxy Z Flip 5 has two screens: its foldable inner 6.7-inch display with a 120 Hz variable refresh rate and its 3.4-inch cover display. The device has 8 GB of RAM, and either 256 GB or 512 GB of UFS 4.0 flash storage, with no support for expanding the device's storage capacity via micro-SD cards. The Z Flip 5 is powered by the Qualcomm Snapdragon 8 Gen 2 for Galaxy. The device's included battery is a 3700 mAh dual-cell unit that fast charges via USB-C up to 25 W, or via wireless charging up to 15 W. The Z Flip 5 features two rear cameras, including a 12 MP wide-angle camera and a 12 MP ultrawide camera. It has a 10 MP front-facing camera at the top of the display. References External links Samsung Galaxy Foldable smartphones Mobile phones introduced in 2023 Mobile phones with multiple rear cameras Mobile phones with 4K video recording Flagship smartphones
Samsung Galaxy Z Flip 5
Technology
298
69,148,282
https://en.wikipedia.org/wiki/Hafnium%20nitrate
Hafnium(IV) nitrate is an inorganic compound, a salt of hafnium and nitric acid with the chemical formula Hf(NO3)4. Synthesis Hafnium nitrate can be prepared by the reaction of hafnium tetrachloride and dinitrogen pentoxide. Properties Hafnium nitrate is slightly volatile, and can be sublimed at 110 °C and 0.1 mmHg. Hafnium nitrate decomposes on heating (≥ 160°C) to HfO(NO3)2 and then to HfO2. Applications Hafnium nitrate can be used for the preparation of materials containing hafnium dioxide. References Hafnium compounds Nitrates
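The synthesis and decomposition summarised above can be written as balanced equations. These are a sketch for illustration only: the nitryl chloride by-product of the dinitrogen pentoxide route, and the formal loss of N2O5 on heating, are assumptions made here to balance the equations and are not stated in the text.

\mathrm{HfCl_4 + 4\,N_2O_5 \longrightarrow Hf(NO_3)_4 + 4\,NO_2Cl}

\mathrm{Hf(NO_3)_4 \longrightarrow HfO(NO_3)_2 + N_2O_5} (on heating above about 160 °C)

\mathrm{HfO(NO_3)_2 \longrightarrow HfO_2 + N_2O_5} (on further heating)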
Hafnium nitrate
Chemistry
144
45,715,740
https://en.wikipedia.org/wiki/Eilat%27s%20Coral%20Beach
Eilat's Coral Beach Nature Reserve and Conservation Area is a nature reserve and national park in the Red Sea, near the city of Eilat in Israel. It covers a stretch of shore, and is the northernmost shallow water coral reef in the world, and possibly one of the more resilient to climate change. It is popular for diving and research, and was founded by the Israel Nature and Parks Authority. At the southernmost point of the nature reserve there is the Coral World Underwater Observatory, the first of its kind in the world, and the largest public aquarium in the Middle East. It was listed as one of The New York Times' Places to Go in 2019. As of 2018, the Eilat coral reef is growing. History The coral reef in Eilat is unique in many ways. It is the northernmost coral reef in the world, and its isolated location in the Gulf of Aqaba, where salinity is higher and there is relatively little water movement, has enabled the development of many endemic species. In the past, the reef covered the entire western shore of the Gulf of Aqaba, and parts of the northern coast, so that its length was over 12 kilometers, but due to human activities it was harmed and the reef has been reduced to a very small area. The nature reserve was one of the three best-known nature reserves in Israel in 1962. The main reserve, Coral Beach Reserve, was established in 1964. An adjacent and more southerly reserve, the Eilat South Sea Nature Reserve, was declared in 2002, and a third reserve, Coral Sea Nature Reserve, was declared in 2009. The coral reef is the only coral reef in Israel. Preservation In 2018, The Jerusalem Post reported that the coral reef was growing again. The main purpose of the nature reserve, for which it was established, is the preservation and rehabilitation of the reef in the Gulf of Eilat, which was severely damaged over the years. As part of the conservation activities, the reserve defined most of the lagoon area (except for 3 shallow wading pools) as protected and closed it off with a fence on the beach side and buoys on the sea side, so that visitors cannot swim there and harm the corals and reef dwellers. In addition, the reserve has regulations against feeding fish and animals, which may be harmed by eating human food or may even accidentally swallow plastic wrappers and choke on them. In addition, the Society for the Protection of Nature in Israel sends divers and volunteers to clean the beach and the reef itself, search for debris and plastic packaging (which can often get stuck on a coral and suffocate it), and lead tours to the area. The reserve also monitors the corals and fish in the reserve, so that their health and developmental status can be tracked. In order to encourage the growth of the reef, iron and stone structures that simulate the surfaces on which corals usually form were placed on the sandy bottom opposite the reef wall, allowing more room for corals to grow and develop, which in turn provides more living area for the reef dwellers. Gallery See also Coral World Underwater Observatory Coral Reef Tourism in Israel References External links Eilat Coral Beach Nature Reserve Nature reserves in Israel Landforms of the Red Sea Eilat Coral reefs Tourism in Eilat
Eilat's Coral Beach
Biology
686
24,086,376
https://en.wikipedia.org/wiki/Sirtuin-activating%20compound
Sirtuin-activating compounds (STAC) are chemical compounds having an effect on sirtuins, a group of enzymes that use NAD+ to remove acetyl groups from proteins. They are caloric restriction mimetic compounds that may be helpful in treating various aging-related diseases. Context Leonard P. Guarente is recognized as the leading proponent of the hypothesis that caloric restriction slows aging by activation of Sirtuins. STACs have been discovered by Konrad Howitz of Biomol Inc and biologist David Sinclair. In September 2003, Howitz and Sinclair et al. published a highly cited paper reporting that polyphenols such as resveratrol activate human SIRT1 and extend the lifespan of budding yeast (Howitz et al., Nature, 2003). Other examples of such products are butein, piceatannol, isoliquiritigenin, fisetin, and quercetin. Sirtuins depend on the crucial cellular molecule called nicotinamide adenine dinucleotide (NAD+) for their function. Falling NAD+ levels during aging may adversely impact sirtuin maintenance of DNA integrity and ability to combat oxidative stress-induced cell damage. Increasing cellular NAD+ levels with supplements like nicotinamide mononucleotide (NMN) during aging may slow or reverse certain aging processes with sirtuin function enhancement. Some STACs can cause artificial effects in the assay initially used for their identification, but it has been shown that STACs also activate SIRT1 against regular polypeptide substrates, with an influence of the substrate sequence. Sirtris Pharmaceuticals, Sinclair's company, was purchased by GlaxoSmithKline (GSK) in 2008, and subsequently shut down as a separate entity within GSK. See also Gerontology SRT1460 SRT1720 References External links http://pubs.acs.org/cen/coverstory/8234/8234aging.html Sirtris Pharmaceuticals' US patent for STAC Patent title: Novel Sirtuin Activating Compounds and Methods for Making the Same Anti-aging substances Biogerontology
Sirtuin-activating compound
Chemistry,Biology
450
66,709,348
https://en.wikipedia.org/wiki/British%20Columbia%20Shore%20Station%20Oceanographic%20Program
The British Columbia Shore Station Oceanographic Program is a sea surface temperature and salinity monitoring program on the Canadian coast of the northeast Pacific Ocean. The program is administered by Fisheries and Oceans Canada, and regroups 12 lighthouse stations in British Columbia. Most lighthouses are staffed by the Department of Fisheries and Oceans, but some have independent contractors instead. The practice of recording ocean water temperature and salinity levels in the area was initiated in 1914 at the Pacific Biological Station in Nanaimo. Data is collected daily around the time of the daytime high tide. The methodology of the sampling was originally designed by oceanographer John P. Tully, and was never modified in order to maintain the homogeneity of the data. The program expanded to 12 stations in the 1930s. Over time, more stations joined the programs while others stopped reporting. Currently, twelve stations remain in the program. Data from the Amphitrite point and Kains island lightstations, which started reporting in the mid-1930s, show an increase in coastal water temperatures of 0.08 °C per decade. On the other hand, data from the Entrance Island station, which started reporting around the same time, show an increase in coastal water temperatures of 0.15°C per decade. These trends are a result of anthropogenic climate change. The stations currently being monitored as part of the program are: See also Oceanography CCGS John P. Tully List of lighthouses in British Columbia References Oceanography Fisheries and Oceans Canada
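The decadal warming rates quoted above are the kind of figure obtained by fitting a straight line to a station's long series of annual mean temperatures. The sketch below is purely illustrative: the synthetic data, start year and noise level are placeholder assumptions, not program data, and the program's actual analysis may differ in detail.

```python
import numpy as np

def trend_per_decade(years, annual_mean_sst):
    """Least-squares linear trend in degrees Celsius per decade.

    `years` and `annual_mean_sst` are equal-length 1-D sequences; the annual
    means would be computed from the daily high-tide samples described above.
    """
    slope_per_year, _intercept = np.polyfit(years, annual_mean_sst, 1)
    return 10.0 * slope_per_year

# Hypothetical example with synthetic data (about 0.1 degC per decade plus noise).
rng = np.random.default_rng(0)
years = np.arange(1935, 2021)
sst = 9.5 + 0.01 * (years - 1935) + rng.normal(0.0, 0.3, years.size)
print(f"{trend_per_decade(years, sst):+.3f} degC per decade")
```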
British Columbia Shore Station Oceanographic Program
Physics,Environmental_science
302
2,041,443
https://en.wikipedia.org/wiki/Pierre%20Carreau
Pierre J. Carreau is a rheologist and the author of the Carreau fluid model. He is a professor emeritus at École Polytechnique in Montreal and the founding director of CREPEC (Center for Applied Research on Polymers and Composites, presently named Center for Research on High Performance Polymer and Composite Systems). Pierre Carreau is internationally known for his research work on the rheology of polymers, an area in which he co-authored two books and published more than 160 scientific articles, most in leading scientific journals. His best known works on rheological equations and conformation models for polymer systems are considered benchmarks in polymer engineering. The Carreau viscosity model (its standard form is reproduced below) is now part of most software packages for the flow simulation of polymer processing. Carreau received his BASc and MASc degrees in chemical engineering from Ecole Polytechnique of Montreal and his PhD in chemical engineering from UW-Madison in 1968. Since then, he has been a professor of chemical engineering at Ecole Polytechnique. He was chairman of the department from 1973 to 1979 and later was founding director of the Applied Research Center on Polymers, CRASP, created in 1988. He has also been a member of the Administration Board of Ecole Polytechnique of Montreal since 1995. One of Carreau's major goals has been to bridge the gap between theory and practice, translating complex molecular theories into usable results for industry. In many areas he has developed astute concepts and demonstrated their value for applications. In that respect, his work on mixing of polymers with helical ribbon agitators is highly recognized by the research community as well as by engineers involved in the design of polymerization reactors and other mixing systems. Carreau's ideas have been used to design large, high performance, economical industrial reactors in the U.S. and India. He has also shown interest in using larger blade helical impellers to mix difficult viscoelastic fluids and to reduce shear and prevent degradation of highly sensitive materials, such as biomaterials. He is a Fellow of the Chemical Institute of Canada (1989), Fellow of the Canadian Academy of Engineering (2001), a Fellow of the Royal Society of Canada (2006) and a Fellow of the Society of Rheology. References Year of birth missing (living people) Living people Polymer scientists and engineers Rheologists Canadian engineers
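For reference, the standard textbook statement of the Carreau viscosity model is reproduced below; this is the generic form found throughout the rheology literature rather than a formula quoted from the article above:

\eta(\dot{\gamma}) = \eta_\infty + (\eta_0 - \eta_\infty)\left[1 + (\lambda \dot{\gamma})^2\right]^{\frac{n-1}{2}}

where \eta_0 is the zero-shear-rate viscosity, \eta_\infty is the infinite-shear-rate viscosity, \lambda is a characteristic relaxation time, \dot{\gamma} is the shear rate and n is the power-law index, all fitted to shear data for the material of interest.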
Pierre Carreau
Chemistry,Materials_science
487
12,728,056
https://en.wikipedia.org/wiki/Pomeranz%E2%80%93Fritsch%20reaction
The Pomeranz–Fritsch reaction, also named the Pomeranz–Fritsch cyclization, is a named reaction in organic chemistry. It is named after Paul Fritsch (1859–1913) and Cäsar Pomeranz (1860–1926). In general it is a synthesis of isoquinoline. General reaction scheme The reaction below shows the acid-promoted synthesis of isoquinoline from benzaldehyde and a 2,2-dialkoxyethylamine. Various alkyl groups, e.g. methyl and ethyl groups, can be used as substituent R. In the archetypical reaction sulfuric acid was used as the proton donor, but activators such as trifluoroacetic anhydride and Lewis acidic lanthanide triflates have been used occasionally. Later, a wide range of diverse isoquinolines were successfully prepared. Reaction mechanism A possible mechanism is depicted below: First the benzalaminoacetal 1 is formed by the condensation of benzaldehyde and a 2,2-dialkoxyethylamine. After the condensation, one of the alkoxy groups is protonated. Subsequently, a molecule of alcohol is eliminated, giving compound 2. After that, the compound is protonated a second time. In the last step a second molecule of alcohol is eliminated and the bicyclic system becomes aromatic. Applications The Pomeranz–Fritsch reaction has general application in the preparation of isoquinoline derivatives. Isoquinolines find many applications, including topical anesthetics such as dimethisoquin and vasodilators, a well-known example being papaverine, shown below. See also Bischler–Napieralski reaction Pictet–Spengler reaction References Nitrogen heterocycle forming reactions Heterocycle forming reactions Name reactions Isoquinolines
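Since the scheme images are not reproduced here, the overall transformation can be restated schematically in text. The two equations below summarise the condensation and the acid-promoted cyclization described above, with R standing for a methyl or ethyl group; they are a simplified sketch of the stoichiometry rather than a depiction of the mechanism:

\mathrm{C_6H_5CHO + H_2N{-}CH_2CH(OR)_2 \longrightarrow C_6H_5CH{=}N{-}CH_2CH(OR)_2 + H_2O}

\mathrm{C_6H_5CH{=}N{-}CH_2CH(OR)_2 \xrightarrow{H_2SO_4} C_9H_7N\ (isoquinoline) + 2\,ROH}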
Pomeranz–Fritsch reaction
Chemistry
402
1,021,521
https://en.wikipedia.org/wiki/Equity%20premium%20puzzle
The equity premium puzzle refers to the inability of an important class of economic models to explain the average equity risk premium (ERP) provided by a diversified portfolio of equities over that of government bonds, which has been observed for more than 100 years. There is a significant disparity between the returns produced by stocks and the returns produced by government treasury bills. The equity premium puzzle addresses the difficulty in understanding and explaining this disparity. This disparity is calculated using the equity risk premium: The equity risk premium is equal to the difference between equity returns and returns from government bonds. It is equal to around 5% to 8% in the United States. The risk premium represents the compensation awarded to the equity holder for taking on a higher risk by investing in equities rather than government bonds. However, the 5% to 8% premium is considered to be an implausibly high difference, and the equity premium puzzle refers to the unexplained reasons driving this disparity. Description The term was coined by Rajnish Mehra and Edward C. Prescott in a study published in 1985 titled "The Equity Premium: A Puzzle". An earlier version of the paper was published in 1982 under the title "A test of the intertemporal asset pricing model". The authors found that a standard general equilibrium model, calibrated to display key U.S. business cycle fluctuations, generated an equity premium of less than 1% for reasonable risk aversion levels. This result stood in sharp contrast with the average equity premium of 6% observed during the historical period. In 1982, Robert J. Shiller published the first calculation that showed that either a large risk aversion coefficient or counterfactually large consumption variability was required to explain the means and variances of asset returns. Azeredo (2014) shows, however, that increasing the risk aversion level may produce a negative equity premium in an Arrow-Debreu economy constructed to mimic the persistence in U.S. consumption growth observed in the data since 1929. The intuitive notion that stocks are much riskier than bonds is not a sufficient explanation: the magnitude of the disparity between the two returns, the equity risk premium (ERP), is so great that it implies an implausibly high level of investor risk aversion that is fundamentally incompatible with other branches of economics, particularly macroeconomics and financial economics. The process of calculating the equity risk premium, and the selection of the data used, varies greatly from study to study, but the premium is generally accepted to be in the range of 3–7% in the long run. Dimson et al. (2006) calculated a premium of "around 3–3.5% on a geometric mean basis" for global equity markets during 1900–2005. However, over any one decade, the premium shows great variability—from over 19% in the 1950s to 0.3% in the 1970s. In 1997, Siegel found that the actual standard deviation of the 20-year rate of return was only 2.76%. This implies that the risk borne by long-term stockholders is smaller than the standard deviation of annual returns would suggest, while for long-term investors the actual risks of fixed-income securities are higher. By this line of reasoning, the equity premium should, if anything, be negative.
To quantify the level of risk aversion implied: if these figures represented the expected outperformance of equities over bonds, investors would have to prefer a certain payoff of $51,300 to a 50/50 bet paying either $50,000 or $100,000. The puzzle has led to an extensive research effort in both macroeconomics and finance. So far a range of useful theoretical tools and numerically plausible explanations have been presented, but no one solution is generally accepted by economists. Theory The economy has a single representative household whose preferences over stochastic consumption paths are given by E_0[ Σ_{t=0}^{∞} β^t U(c_t) ], where 0 < β < 1 is the subjective discount factor, c_t is the per capita consumption at time t, and U(·) is an increasing and concave utility function. In the Mehra and Prescott (1985) economy, the utility function belongs to the constant relative risk aversion class, U(c, α) = c^(1 - α) / (1 - α), where 0 < α < ∞ is the constant relative risk aversion parameter. When α = 1, the utility function is the natural logarithmic function. Weil (1989) replaced the constant relative risk aversion utility function with the Kreps-Porteus nonexpected utility preferences. The Kreps-Porteus utility function has a constant intertemporal elasticity of substitution and a constant coefficient of relative risk aversion which are not required to be inversely related - a restriction imposed by the constant relative risk aversion utility function. The Mehra and Prescott (1985) and Weil (1989) economies are variations of the Lucas (1978) pure exchange economy. In their economies the growth rate of the endowment process, x_{t+1} = y_{t+1}/y_t, follows an ergodic Markov process taking finitely many values, so that y_{t+1} = x_{t+1} y_t, where y_t denotes the endowment. This assumption is the key difference between Mehra and Prescott's economy and Lucas' economy, where the level of the endowment process follows a Markov process. There is a single firm producing the perishable consumption good. At any given time t, the firm's output must be less than or equal to the endowment y_t, which is stochastic and follows y_{t+1} = x_{t+1} y_t. There is only one equity share held by the representative household. Working out the intertemporal choice problem leads to the fundamental pricing equation p_t U'(c_t) = β E_t[ U'(c_{t+1}) (p_{t+1} + y_{t+1}) ], where p_t is the price of the equity share; stock returns are then computed from R_{t+1} = (p_{t+1} + y_{t+1}) / p_t, which gives the result. The derivative of the Lagrangian with respect to the percentage of stock held must equal zero to satisfy necessary conditions for optimality under the assumptions of no arbitrage and the law of one price. Data Much data exists showing that stocks have higher returns. For example, Jeremy Siegel says that stocks in the United States have returned 6.8% per year over a 130-year period. Proponents of the capital asset pricing model say that this is due to the higher beta of stocks, and that higher-beta stocks should return even more. Others have criticized that the period used in Siegel's data is not typical, or that the country is not typical. Possible explanations A large number of explanations for the puzzle have been proposed. These include: the rare events hypothesis, myopic loss aversion, rejection of the Arrow-Debreu model in favor of different models, modifications to the assumed preferences of investors, imperfections in the model of risk aversion, the observation that the excess premium for risky assets results when assuming exceedingly low consumption/income ratios, and a contention that the equity premium does not exist: that the puzzle is a statistical illusion. Kocherlakota (1996) and Mehra and Prescott (2003) present a detailed analysis of these explanations in financial markets and conclude that the puzzle is real and remains unexplained.
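The $51,300 versus 50/50-bet comparison above can be reproduced numerically. The sketch below is illustrative only: the bisection bracket and iteration count are arbitrary choices, and the code simply solves for the constant relative risk aversion coefficient at which the certainty equivalent of the bet falls to about $51,300.

```python
import math

def crra_utility(c, alpha):
    """CRRA utility U(c) = c**(1 - alpha) / (1 - alpha); natural log when alpha == 1."""
    if abs(alpha - 1.0) < 1e-12:
        return math.log(c)
    return c ** (1.0 - alpha) / (1.0 - alpha)

def certainty_equivalent(outcomes, probs, alpha):
    """Certain payoff with the same expected CRRA utility as the gamble."""
    eu = sum(p * crra_utility(c, alpha) for c, p in zip(outcomes, probs))
    if abs(alpha - 1.0) < 1e-12:
        return math.exp(eu)
    return ((1.0 - alpha) * eu) ** (1.0 / (1.0 - alpha))

# Bisect for the risk-aversion coefficient alpha at which the certainty equivalent
# of a 50/50 bet on $50,000 or $100,000 drops to roughly $51,300.
target = 51_300.0
lo, hi = 1.0, 60.0          # certainty equivalent is decreasing in alpha on this bracket
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if certainty_equivalent([50_000.0, 100_000.0], [0.5, 0.5], mid) > target:
        lo = mid            # still too risk tolerant: increase alpha
    else:
        hi = mid
print(f"implied coefficient of relative risk aversion: about {0.5 * (lo + hi):.0f}")
```

The implied coefficient comes out on the order of 30, far above the levels of relative risk aversion usually regarded as plausible, which is the sense in which the observed premium is puzzling.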
Subsequent reviews of the literature have similarly found no agreed resolution. The equity premium occupies a special place in financial and economic theory, and more progress is needed to understand the spread of stock returns over bond returns; determining the factors driving the equity premium in various countries and regions, and how it evolves over time, remains an active research agenda. A 2023 paper by Edward McQuarrie argues the equity risk premium may not exist, at least not as is commonly understood, and is furthermore based on data from too narrow a time period in the late 20th century. He argues a more detailed examination of historical data finds "over multi-decade periods, sometimes stocks outperformed bonds, sometimes bonds outperformed stocks and sometimes they performed about the same. New international data confirm this pattern. Asset returns in the US in the 20th century do not generalize." The equity premium: a deeper puzzle Azeredo (2014) showed that traditional pre-1930 consumption measures understate the extent of serial correlation in the U.S. annual real growth rate of per capita consumption of non-durables and services ("consumption growth"). Under alternative measures proposed in the study, the serial correlation of consumption growth is found to be positive. This new evidence implies that an important subclass of dynamic general equilibrium models studied by Mehra and Prescott (1985) generates a negative equity premium for reasonable risk-aversion levels, thus further exacerbating the equity premium puzzle. Rare events hypothesis One possible solution to the equity premium puzzle considered by Julliard and Ghosh (2008) is whether it can be explained by the rare events hypothesis, first proposed by Rietz (1988). They hypothesized that extreme economic events such as the Great Depression, the World Wars and the Great Financial Crisis resulted in equity holders demanding high equity premiums to account for the possibility of the significant loss they could suffer if these events were to materialise. As such, when these extreme economic events do not occur, equity holders are rewarded with higher returns. However, Julliard and Ghosh concluded that rare events are unlikely to explain the equity premium puzzle because the Consumption Capital Asset Pricing Model was rejected by their data and much greater risk aversion levels were required to explain the equity premium puzzle. Moreover, extreme economic events affect all assets (both equity and bonds) and they all yield low returns. For example, the equity premium persisted during the Great Depression, and this suggests that an even greater catastrophic economic event is required, and it must be one which affects only stocks, not bonds. Myopic loss aversion Benartzi & Thaler (1995) contend that the equity premium puzzle can be explained by myopic loss aversion, and their explanation is based on Kahneman and Tversky's prospect theory. They rely on two assumptions about decision-making to support their theory: loss aversion and mental accounting. Loss aversion refers to the assumption that investors are more sensitive to losses than gains; in fact, research calculates the utility of losses felt by investors to be twice that of the utility of a gain. The second assumption is that investors frequently evaluate their stocks even when the purpose of the investment is to fund retirement or other long-term goals. This makes investors more risk averse than if they evaluated their stocks less frequently.
Their study found that the gap between the returns investors demand from stocks and the returns they demand from bonds decreases when stocks are evaluated less frequently. The two assumptions combined create myopic loss aversion, and Benartzi & Thaler concluded that the equity premium puzzle can be explained by this theory (a simulation sketch of this mechanism appears below, after the discussion of equity characteristics). Individual characteristics Some explanations rely on assumptions about individual behavior and preferences different from those made by Mehra and Prescott. Examples include the prospect theory model of Benartzi and Thaler (1995) based on loss aversion. A problem for this model is the lack of a general model of portfolio choice and asset valuation for prospect theory. A second class of explanations is based on relaxation of the optimization assumptions of the standard model. The standard model represents consumers as continuously optimizing, dynamically consistent expected-utility maximizers. These assumptions provide a tight link between attitudes to risk and attitudes to variations in intertemporal consumption, which is crucial in deriving the equity premium puzzle. Solutions of this kind work by weakening the assumption of continuous optimization, for example by supposing that consumers adopt satisficing rules rather than optimizing. An example is info-gap decision theory, based on a non-probabilistic treatment of uncertainty, which leads to the adoption of a robust satisficing approach to asset allocation. Equity characteristics Another explanation of the equity premium puzzle focuses on the characteristics of equity that cannot be captured by typical models but are still consistent with optimisation by investors. The most significant characteristic that is not typically considered is the requirement for equity holders to monitor corporate activity and to have a manager to assist them. The principal-agent relationship is therefore very prevalent between corporation managers and equity holders. If investors were to choose not to have a manager, it would likely be costly for them to monitor the activity of the corporations that they invest in, so they often rely heavily on auditors, or they look to the market hypothesis under which information about asset values is revealed in the equity markets. This hypothesis is based on the theory that an investor who is inexperienced and uninformed can count on getting average market returns in an identifiable market portfolio, although it is questionable whether this can be done by an uninformed investor. However, for the characteristics of equity to explain the premium, it is only necessary to hypothesise that people looking to invest do not think they can reach the same level of performance as the market. Another explanation related to the characteristics of equity was explored by a variety of studies, including Holmstrom and Tirole (1998), Bansal and Coleman (1996) and Palomino (1996), and relates to liquidity. Palomino described a noise-trader model in which the market for equities is thin and imperfectly competitive, and the lower its equilibrium price dropped, the higher the premium over risk-free bonds would rise. Holmstrom and Tirole in their studies developed another role for liquidity in the equity market, in which firms are willing to pay a premium for bonds over private claims when they face uncertainty over liquidity needs.
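As referenced in the myopic loss aversion discussion above, the mechanism can be illustrated with a small Monte Carlo sketch. This is not Benartzi & Thaler's calibration: the return distributions, the loss-aversion coefficient of 2.25 and the evaluation horizons below are all assumed values chosen only to demonstrate the effect of evaluation frequency.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative (assumed, not historical) monthly return distributions:
STOCK_MU, STOCK_SD = 0.008, 0.045   # roughly 10% mean annual return, ~16% volatility
BOND_MU, BOND_SD = 0.002, 0.005     # roughly 2.4% mean annual return, low volatility
LOSS_AVERSION = 2.25                # assumed loss-aversion coefficient
N_DRAWS = 50_000

def prospective_value(mu, sd, horizon_months):
    """Mean loss-averse value of the cumulative return over one evaluation period."""
    monthly = rng.normal(mu, sd, size=(N_DRAWS, horizon_months))
    cumulative = np.prod(1.0 + monthly, axis=1) - 1.0
    value = np.where(cumulative >= 0, cumulative, LOSS_AVERSION * cumulative)
    return value.mean()

for months in (1, 12, 60, 120):
    stocks = prospective_value(STOCK_MU, STOCK_SD, months)
    bonds = prospective_value(BOND_MU, BOND_SD, months)
    better = "stocks" if stocks > bonds else "bonds"
    print(f"evaluation horizon {months:3d} months: stocks {stocks:+.3f}, "
          f"bonds {bonds:+.3f} -> {better} look better")
```

With these assumed parameters, stocks look worse than bonds to a loss-averse investor who evaluates results monthly but clearly better at horizons of a year or more, which is how frequent evaluation combined with loss aversion can sustain a large required equity premium.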
Tax distortions Another explanation of the observed growing equity premium was argued by McGrattan and Prescott (2001) to be a result of variations in taxes over time, and particularly their effect on interest and dividend income. It is difficult, however, to give credibility to this analysis due to the difficulties in the calibration utilised as well as ambiguity surrounding the existence of any noticeable equity premium before 1945. Even given this, the observation that changes in the equity premium can arise from the distortion of taxes over time should be taken into account, and it lends more validity to the equity premium itself. Related data are discussed in the Handbook of the Equity Risk Premium. The British data series beginning in 1919 captured the post-World War I recovery while omitting wartime losses and low pre-war returns. After adding these earlier years, the arithmetic average of the British stock premium for the entire 20th century is 6.6%, which is about 2¼ percentage points lower than the figure incorrectly inferred from the 1919–1999 data. Implied volatility Graham and Harvey have estimated that, for the United States, the expected average premium during the period June 2000 to November 2006 ranged between 2.50% and 4.65%. They found a modest correlation of 0.62 between the 10-year equity premium and a measure of implied volatility (in this case VIX, the Chicago Board Options Exchange Volatility Index). Dennis, Mayhew & Stivers (2006) find that changes in implied volatility have an asymmetric effect on stock returns. They found that negative changes in implied volatility have a stronger impact on stock returns than positive changes in implied volatility. The authors argue that such an asymmetric volatility effect can be explained by the fact that investors are more concerned with downside risk than upside potential. That is, investors are more likely to react to negative news and expect negative changes in implied volatility to have a stronger impact on stock returns. The authors also find that changes in implied volatility can predict future stock returns. Stocks that experience negative changes in implied volatility have higher expected returns in the future. The authors state that this relationship is caused by the association between negative changes in implied volatility and market downturns. Yan (2011) presents an explanation for the equity premium puzzle using the slope of the implied volatility smile. The implied volatility smile refers to the pattern of implied volatilities for options contracts with the same expiration date but different strike prices. The slope of the implied volatility smile reflects the market's expectations for future changes in the stock price, with a steeper slope indicating higher expected volatility. The author shows that the slope of the implied volatility smile is a significant predictor of stock returns, even after controlling for traditional risk factors. Specifically, stocks with steeper implied volatility smiles (i.e., higher jump risk) have higher expected returns, consistent with the equity premium puzzle. The author argues that this relationship between the slope of the implied volatility smile and stock returns can be explained by investors' preference for jump risk. Jump risk refers to the risk of sudden, large movements in the stock price, which are not fully captured by traditional measures of volatility.
Yan argues that investors are willing to accept lower average returns on stocks that have higher jump risk, because they expect to be compensated with higher returns during times of market stress. Information derivatives The simplest scientific interpretation of the puzzle suggests that consumption optimization is not responsible for the equity premium. More precisely, the time series of aggregate consumption is not a leading explanatory factor of the equity premium. The human brain is (simultaneously) engaged in many strategies. Each of these strategies has a goal. While individually rational, the strategies are in constant competition for limited resources. Even within a single person this competition produces highly complex behavior which does not fit any simple model. Nevertheless, the individual strategies can be understood. In finance this is equivalent to understanding different financial products as information derivatives, i.e. as products which are derived from all the relevant information available to the customer. Even if the numerical values for the equity premium were unknown, a rational examination of the equity product would have accurately predicted the observed ballpark values. From the information derivatives viewpoint, consumption optimization is just one possible goal (which never really comes up in practice in its pure academic form). To a classically trained economist this may feel like a loss of a fundamental principle. But it may also be a much needed connection to reality (capturing the real behavior of live investors). Viewing equities as a stand-alone product (information derivative) does not isolate them from the wider economic picture. Equity investments survive in competition with other strategies. The popularity of equities as an investment strategy demands an explanation. In terms of data this means that the information derivatives approach needs to explain not just the realized equities performance but also the investor-expected equity premia. The data suggest that long-term equity investments have been very good at delivering on the theoretical expectations. This explains the viability of the strategy in addition to its performance (i.e. in addition to the equity premium). Market failure explanations Two broad classes of market failure have been considered as explanations of the equity premium. First, problems of adverse selection and moral hazard may result in the absence of markets in which individuals can insure themselves against systematic risk in labor income and noncorporate profits. Second, transaction costs or liquidity constraints may prevent individuals from smoothing consumption over time. In relation to transaction costs, there are significantly greater costs associated with trading stocks than trading bonds. These include costs to acquire information, broker fees, taxes, load fees and the bid-ask spread. As such, when shareholders attempt to capitalise on the equity premium by adjusting their asset allocation and purchasing more stocks, they incur significant trading costs which eliminate the gains from the equity premium. However, Kocherlakota (1996) contends that there is insufficient evidence to support this proposition and that further data about the size and sources of trading costs need to be collected before this proposition could be validated. Denial of equity premium A final possible explanation is that there is no puzzle to explain: that there is no equity premium.
This can be argued in a number of ways, all of them being different forms of the argument that we do not have enough statistical power to distinguish the equity premium from zero: Selection bias of the US market in studies. The US market was the most successful stock market in the 20th century. Other countries' markets displayed lower long-run returns (but still with positive equity premiums). Picking the best observation (the US) from a sample leads to upwardly biased estimates of the premium. Survivorship bias of exchanges: This refers to the equity holder's fear of an economic crash such as the 1929 stock market crash eventuating, even when the probability of that event occurring is minute. The justification here is that over half of the stock exchanges that were operating in the early 1900s were discontinued, and the equity risk premium calculated does not account for this. As such, the equity risk premium is "calculated for a survivor", such that if returns from these discontinued stock exchanges were included in the calculations, there may not have been such a great disparity between returns gleaned from bonds and from stocks. However, this hypothesis cannot be easily proven, and Mehra and Prescott (1985) in their studies included the effect on equity returns following the Great Depression. Although shares lost 80% of their value, comparisons of returns from stocks against bonds showed that even in those periods, significantly higher returns were gained from investing in stocks. Low number of data points: the period 1900–2005 provides only 105 years, which is not a large enough sample size to run statistical analyses with full confidence, especially in view of the black swan effect. Windowing: returns of equities (and relative returns) vary greatly depending on which points are included. Using data starting from the top of the market in 1929 or starting from the bottom of the market in 1932 (leading to estimates of the equity premium 1% lower per year), or ending at the top in 2000 (vs. the bottom in 2002) or the top in 2007 (vs. the bottom in 2009 or beyond) completely changes the overall conclusion. However, in all windows considered, the equity premium is always greater than zero. A related criticism is that the apparent equity premium is an artifact of observing stock market bubbles in progress. David Blitz, head of Quant Research at Robeco, suggested that the size of the equity premium is not as large as widely believed. It is usually calculated, he said, on the presumption that the true risk-free asset is the one-month T-bill. If one recalculates, taking the five-year T-bond as the risk-free asset, the equity premium is smaller and the risk-return relation becomes more positive. Note however that most mainstream economists agree that the evidence shows substantial statistical power. Benartzi & Thaler analyzed equity returns over a 200-year period, between 1802 and 1990, and found that while equity returns remained stable between 5.5% and 6.5%, the return on government bonds fell significantly from around 5% to 0.5%. Moreover, analysis of how faculty members funded their retirement showed that people who had invested in stocks received much higher returns than people who had invested in government bonds.
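A rough back-of-the-envelope calculation clarifies the statistical-power point made above. The figures below (20% annual volatility of the excess return, a 6% observed premium, 105 annual observations) are illustrative assumptions rather than values taken from the cited studies.

```python
import math

# Assumed inputs for illustration only:
sigma = 0.20                 # annual volatility of the excess return of stocks over bonds
n_years = 105                # number of yearly observations, 1900-2005
observed_premium = 0.06      # observed average annual premium

standard_error = sigma / math.sqrt(n_years)          # sampling error of the mean premium
t_statistic = observed_premium / standard_error      # distance from zero in standard errors
print(f"standard error of mean premium: {standard_error:.2%} per year")
print(f"observed premium is about {t_statistic:.1f} standard errors above zero")
```

On these assumptions the observed premium sits roughly three standard errors above zero, which is consistent with the note above that most mainstream economists see substantial statistical power, while also showing how sensitive that conclusion is to the assumed volatility and sample length.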
Implications Implications for the Individual Investor For the individual investor, the equity premium may represent a reasonable reward for taking on the risk of buying shares, such that they base their decisions to allocate assets to shares or bonds on how risk tolerant or risk averse they are. On the other hand, if the investor believes that the equity premium arises from mistakes and fears, they would capitalize on that fear and mistake and invest considerable portions of their assets in shares. Here, it is prudent to note that economists more commonly allocate significant portions of their assets to shares. Currently, the equity premium is estimated to be 3%. Although this is lower than historical rates, it is still significantly more advantageous than bonds for investors investing in their retirement funds and other long-term funds. The magnitude of the equity premium brings about substantial implications for policy, welfare and also resource allocation. Policy and Welfare Implications Campbell and Cochrane (1995) found, in a study of a model that simulates equity premium values consistent with asset prices, that welfare costs are similar in magnitude to welfare benefits. Essentially, a large risk premium in a society where asset prices reflect consumer preferences implies that the cost of welfare is also large. It also means that in recessions, welfare costs are excessive regardless of aggregate consumption. As the equity premium rises, the marginal value of income in recession states steadily increases as well, further increasing the welfare costs of recessions. This also raises questions regarding the need for microeconomic policies that operate by way of higher productivity in the long run by trading off short-term pain in the form of adjustment costs. Given the impact on welfare through recessions and the large equity premium, it is evident that these short-term trade-offs resulting from economic policy are likely not ideal and would preferably take place in times of normal economic activity. Resource Allocation When there is a large risk premium associated with equity, there is a high cost of systematic risk in returns. One of these implications concerns individual portfolio decisions. Some research has argued that high rates of return are just signs of misplaced risk aversion, in which investors could earn high returns with little risk by switching from stocks to other assets such as bonds. Research to the contrary indicates that a large percentage of the general public believes that the stock market is best for investors who are in it for the long haul; this may also link to another implication, namely trends in the equity premium. Some claims have been made that the equity premium has declined over the past few years; these may be supported by other studies claiming that tax reductions may continue to reduce the premium, and by the fact that declining transaction costs in securities markets are consistent with a declining premium. The trend implication is also supported by models such as 'noise traders' that create a cyclical premium: when noise traders are excessively optimistic the premium declines, and vice versa when the optimism is replaced with pessimism; this would explain the constant decline of the equity premium as a stock price bubble.
See also Ellsberg paradox Fed model Loss aversion Risk aversion List of cognitive biases Economic puzzle Forward premium anomaly Real exchange-rate puzzles References Further reading Behavioral finance Finance theories Stock market 1985 introductions Economic puzzles Prospect theory
Equity premium puzzle
Biology
5,294
37,833,044
https://en.wikipedia.org/wiki/Comparison%20of%20Start%20menu%20replacements%20for%20Windows%208
Microsoft's Windows 8 operating system introduced an updated Start menu known as the "Start screen", which uses a full-screen design consisting of tiles to represent applications. This replaced the Windows desktop as the primary interface of the operating system. Additionally, the on-screen Start button was replaced by a hidden button in the corner of the screen; Microsoft explained that the Start button was removed because few people used it, noting the addition of "pinning" apps to the taskbar in Windows 7. The change was controversial among users, and a market ensued for applications which restore the visible Start button, emulate the previous Start menu design, or allow users to boot directly to the desktop instead of the Start screen. The following is a list of Start menu replacements for Windows 8 which have received coverage from third-party sources. The number of skins in the table gives the number of built-in skins; if downloadable skins are available, a "+" is appended to the number to indicate that additional skins can be downloaded. RetroUI, StartIsBack, Classic Shell, Start8, and Pokki are five of the more notable of these. RetroUI is offered in 33 languages, is also available for Windows Server 2012, and adds a taskbar and resizable windows. StartIsBack is also localized. Classic Shell used to be free and open source (now proprietary freeware); major items are localized, and installing the Language Pack from Windows Update makes all items fully localized. Classic Shell is also available for Windows 7 and Windows Server, and claims over 25 million downloads. The Pokki download to restore the Start menu is free; as of January 2013, it has about 1.5 million users. The Pokki application platform, based on Chromium, enables desktop applications to be built—like mobile apps—using standard web languages like HTML5, CSS3, and JavaScript. It is also available for Windows XP and Windows 7. Pokki has raised $21.5M from investors like Google, Intel, and O'Reilly; its business model is to make a commission on software sold through its app store. Start8 has been downloaded over 5 million times. See also List of alternative shells for Windows References External links Compare Start Menus: Classic Shell vs StartIsBack, Start8 etc 5 Windows 8 Apps to Bring Back the Start Menu by LAPTOP Magazine Windows 8 Start Menu replacements for Windows 8
Comparison of Start menu replacements for Windows 8
Technology
489
39,948,617
https://en.wikipedia.org/wiki/Kepler-67
Kepler-67 is a star in the open cluster NGC 6811 in the constellation Cygnus. It has slightly less mass than the Sun and has one confirmed planet, slightly smaller than Neptune, announced in 2013. Planetary system References External links Kepler-67, The Open Exoplanet Catalogue Kepler 67, Exoplanet.eu G-type main-sequence stars Cygnus (constellation) 2115 Planetary transit variables Planetary systems with one confirmed planet
Kepler-67
Astronomy
94
3,524,476
https://en.wikipedia.org/wiki/Particle%20statistics
Particle statistics is a particular description of multiple particles in statistical mechanics. A key prerequisite concept is that of a statistical ensemble (an idealization comprising the state space of possible states of a system, each labeled with a probability) that emphasizes properties of a large system as a whole at the expense of knowledge about parameters of separate particles. When an ensemble describes a system of particles with similar properties, their number is called the particle number and is usually denoted by N. Classical statistics In classical mechanics, all particles (fundamental and composite particles, atoms, molecules, electrons, etc.) in the system are considered distinguishable. This means that individual particles in a system can be tracked. As a consequence, switching the positions of any pair of particles in the system leads to a different configuration of the system. Furthermore, there is no restriction on placing more than one particle in any given state accessible to the system. These characteristics of classical particles are described by Maxwell–Boltzmann statistics. Quantum statistics The fundamental feature of quantum mechanics that distinguishes it from classical mechanics is that particles of a particular type are indistinguishable from one another. This means that in an ensemble of similar particles, interchanging any two particles does not lead to a new configuration of the system. In the language of quantum mechanics this means that the wave function of the system is invariant up to a phase with respect to the interchange of the constituent particles. In the case of a system consisting of particles of different kinds (for example, electrons and protons), the wave function of the system is invariant up to a phase separately for each assembly of particles. The applicable definition of a particle does not require it to be elementary or even "microscopic", but it requires that all its degrees of freedom (or internal states) that are relevant to the physical problem considered be known. All quantum particles, such as leptons and baryons, in the universe have three translational degrees of freedom (represented with the wave function) and one discrete degree of freedom, known as spin. Progressively more "complex" particles obtain progressively more internal freedoms (such as various quantum numbers in an atom), and when the number of internal states that "identical" particles in an ensemble can occupy dwarfs their count (the particle number), the effects of quantum statistics become negligible. This is why quantum statistics is useful when one considers, say, liquid helium or ammonia gas (its molecules have a large, but manageable number of internal states), but is of little use applied to systems constructed of macromolecules. While this difference between classical and quantum descriptions of systems is fundamental to all of quantum statistics, quantum particles are divided into two further classes on the basis of the symmetry of the system. The spin–statistics theorem binds two particular kinds of combinatorial symmetry with two particular kinds of spin symmetry, namely bosons and fermions. See also Bose–Einstein statistics Fermi–Dirac statistics Statistical mechanics
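The distinguishability distinction described above can be made concrete by counting microstates. The toy example below (not taken from the article) counts the configurations available to two particles distributed over three single-particle states under each of the three statistics:

```python
from itertools import product, combinations_with_replacement, combinations

N_STATES = 3   # single-particle states available to the system
N_PART = 2     # number of particles

# Maxwell-Boltzmann: distinguishable particles, each labelled particle independently
# picks a state, so ordered tuples are distinct configurations.
mb = list(product(range(N_STATES), repeat=N_PART))

# Bose-Einstein: indistinguishable particles, any occupancy allowed,
# so configurations are unordered multisets of states.
be = list(combinations_with_replacement(range(N_STATES), N_PART))

# Fermi-Dirac: indistinguishable particles, at most one per state (Pauli exclusion),
# so configurations are unordered sets of distinct states.
fd = list(combinations(range(N_STATES), N_PART))

print(len(mb), len(be), len(fd))   # 9, 6, 3 for 2 particles in 3 states
```

The counts differ (9, 6 and 3 respectively) because classical particles are distinguishable, bosons are indistinguishable with unrestricted occupancy, and fermions are indistinguishable with at most one particle per state.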
Particle statistics
Physics
605
47,349,966
https://en.wikipedia.org/wiki/UGC%204879
UGC 4879, which is also known as VV 124, is the most isolated dwarf galaxy in the periphery of the Local Group. It is an irregular galaxy at a distance of 1.38 Mpc. Low-resolution spectroscopy yielded inconsistent radial velocities for different components of the galaxy, hinting at the presence of a stellar disk. There is also evidence of this galaxy containing dark matter. Appearance UGC 4879 is a transition type galaxy, meaning it has no rings (Denoted rs). It is also a spheroidal (dSph) galaxy, meaning it has a low luminosity. It has little to no gas or dust, and little recent star formation. It is also irregular, meaning it has no specific form. Gallery References External links C2.staticflickr.com Dwarf galaxies Local Group 04879 Ursa Major
UGC 4879
Astronomy
180
13,157,378
https://en.wikipedia.org/wiki/English%20Lowlands%20beech%20forests
The English Lowlands beech forests is a terrestrial ecoregion in the United Kingdom, as defined by the World Wide Fund for Nature (WWF) and the European Environment Agency (EEA). It covers an area of Southern England extending approximately as far as the border with Devon and South Wales in the west, into the Severn valley in the north-west, into the East Midlands in the north, and up to the border of Norfolk in the north-east. The WWF code for this ecoregion is PA0421. Ecoregional context To the north, west and south-west lies the similar Celtic broadleaf forests ecoregion, which covers most of the rest of the British Isles. In addition, two further ecoregions are located in the south-western and north-western edges of Ireland, and the north-western fringes of Scotland (North Atlantic moist mixed forests), and beyond the Scottish Highland Boundary Fault (Caledonian conifer forests). The whole of this Atlantic archipelago is thus considered as originally (or in some sense ideally) forested, with only the far mountainous north being primarily coniferous. Across the English Channel lies the Atlantic mixed forests ecoregion in northern France and the Low Countries. The difference between the English lowlands beech forests and the Celtic broadleaf forests lies in the fact that south-eastern England is comparatively drier and warmer in climate, and lower-lying in terms of topography. Geologically, something of the distinction can be found in the dominance of the Southern England Chalk Formation in this ecoregion, and the Tees-Exe line, which divides the island of Great Britain into a sedimentary south-east, and a metamorphic and igneous north-west. However, the WWF division was preceded by that of the Hungarian biologist Miklos Udvardy, who had considered the greater part of the British Isles as just one biogeographic province in the Palearctic Realm, which he termed British Islands. Characteristics Historically, much of this lowland and submontane region was covered with high-canopy forests dominated by European beech (Fagus sylvatica), but also including other tree species such as oak, ash, rowan and yew. In summer, the forests are generally cool and dark, because the beech produces a dense canopy, and thus restricts the growth of other species of tree and wild flowers. In the spring, however, thick carpets of bluebells (Hyacinthoides non-scripta) can be seen, flowering before the beech leafs out and shades the forest floor. The National Vegetation Classification (NVC) plant communities associated with beech forests (together with their occurrence ratios in England as a whole) are: W12 Fagus sylvatica – Mercurialis perennis (dog's mercury) woodland (base-rich soils) – c. 40% W14 Fagus sylvatica – Rubus fruticosus (bramble) woodland (mesotrophic soils) – c. 45% W15 Fagus sylvatica – Deschampsia flexuosa (wavy hair-grass) woodland (acidic soils) – c. 15% River systems, the most significant of which is the Thames, were historically host to lower-canopy riverine forests dominated by black alder, and this can still be encountered occasionally today. Also included in this ecoregion are the distinctive ecosystems associated with the rivers themselves, as well as their flood-meadows and estuaries. The soils are largely based on limestone, and the climate is temperate with steady amounts of rainfall. Temperatures can fall below freezing in the winter.
Nowadays, much of this ecoregion has been given over to agriculture – with the growing of wheat, barley and rapeseed particularly common – as well as to the raising of livestock, especially cattle and sheep. In places it is very heavily populated, with towns, suburbs and villages found nearly everywhere – although the plateau of Salisbury Plain remains largely wild. The most significant centre of population is London, at the head of the Thames estuary, one of the largest cities in the world. Due to this high population density, and to a certain amount of depredation caused by the non-native grey squirrel, edible dormice (in the Chilterns) and deer, this forest ecoregion is considered at high risk, with a critical/endangered conservation status accorded it by the WWF. Air pollution may also be leading to a reduction in beech numbers, through increased susceptibility to disease. Among fauna found in this ecoregion, the West European hedgehog, red fox, Eurasian badger, European rabbit and wood mouse are relatively common, while the following are classed as near threatened on the IUCN Red List: European otter Red squirrel Harvest mouse Hazel dormouse Greater horseshoe bat Corn crake The barbastelle, as a vulnerable species on the Red List, is in greater danger still. Rare plants include the red helleborine, bird's-nest orchid and knothole yoke-moss. Rare fungi include the Devil's bolete and hedgehog mushroom. History At the end of the last glaciation, about 10,000 years ago, the area's ecosystem was characterised by a largely treeless tundra. Pollen studies have shown that this was replaced by a taiga of birch, and then pine, before their replacement in turn (c. 4500 BC) by most of the species of tree encountered today – including, by 4000 BC, the beech, which seems to have been introduced from mainland Europe. This was used as a source of flour, ground from the triangular nutlets contained in the "mast", or fruit of the beech, after its tannins had been leached out by soaking. Beechmast has also traditionally been fed to pigs. However, by 4000 BC, as Oliver Rackham has indicated, the dominant tree species was not the beech, but the small-leaved lime, also known as the pry tree. The wildwood was made up of a patchwork of lime-wood areas and hazel-wood areas, interspersed with oak and elm and other species. The pry seems to have become less abundant now because the climate has turned against it, making it difficult for it to grow from seed. Nevertheless, some remnants of ancient lime-wood still remain in south Suffolk. Clearance of forests began with the introduction of farming (c. 4500 BC), particularly in the higher-lying parts of the country, like the South Downs. At this time, the whole region, apart from upland areas under plough, and marshy areas (e.g. Romney Marsh in Kent and much of Somerset), was heavily forested, with woodland stretching nearly everywhere. 
Notable surviving examples include: The Forest of Arden (Warwickshire) The Chilterns (on the heights running from Oxfordshire through Buckinghamshire and Hertfordshire to Bedfordshire) Epping Forest (on the border of north-east Greater London and Essex) Kinver Edge (a remnant of the Mercian forest on the border of south Staffordshire and Worcestershire) Morfe Forest (south Shropshire) Savernake Forest (Wiltshire) Selwood Forest (Somerset) The Weald (Kent, East Sussex, West Sussex and Surrey) Wychwood (Oxfordshire) Wyre Forest (on the border of Worcestershire and Shropshire) All of these were once far more extensive than they are today. For example, according to a late 9th century writer, the Weald (from the Anglo-Saxon word weald = "forest") once stretched from Kent to Hampshire, and was 120 miles long by 30 broad. The New Forest (in south-west Hampshire) remains the largest intact forested area in this ecoregion. The hedgerow system, which separates fields from lanes and also from other fields, is also extensive across the region, and serves as an important habitat for otherwise displaced woodland fauna, providing shelter, nest sites, and berries for over-wintering birds such as the Redwing. Some species-rich hedgerows date back at least 700 years, if not 1,000. In recent decades many miles of hedgerows have been grubbed up to enlarge fields to make it easier for farmers to use large agricultural machines. Further lengths of hedgerow are frequently degraded by having their growth restricted by cutting with tractor-driven mechanical flails. Meanwhile, the traditional more benign method of managing hedges by manual hedge-laying to maintain their vigour and ensure that they cannot be broached by farm animals has been almost completely discontinued due to labour costs and lack of skilled workers. The woodbank is a specific type of hedgerow, now mostly derelict, which was used to define woodland ownership boundaries and separate woodland from farmland. A ditch and bank were formed along the edge of a wood and trees (such as Hornbeam) planted along the bank. Over the years, the trees were managed by a process called layering to form a thick strong hedge of interlocking stems. Most woodbanks are no longer managed and have grown up into mature multi-stemmed trees with interlocking roots, but no longer provide an effective boundary. For many species of bird, significant estuarine habitats include the Thames and Severn estuaries, and the mid-Essex coast. The Mesozoic history of the area can be seen in the Jurassic Coast World Heritage Site, where about 180 Ma of fossil-rich sedimentary deposits have been exposed along a stretch of the Dorset and East Devon coast. The science of palaeontology can be said to have started in large measure here, with the pioneering work of Mary Anning. The Great Storm of 1987 was responsible for the uprooting of some 15 million trees in this area.
See also Ancient woodland Biodiversity Biodiversity action plan Bioregionalism Community forests in England Conservation biology Forestry Commission Geology of England List of ecoregions List of ecoregions in the United Kingdom List of forests in the United Kingdom National nature reserves in England National parks of England and Wales (New Forest and South Downs) Protected areas of the United Kingdom Royal forest Trees of Britain and Ireland List of ecoregions in Europe References External links Encyclopedia of Earth: Ecoregion European Environment Agency: Digital Map of European Ecological Regions Forestry Commission England Keepers of time: A statement of policy for England's Ancient and Native Woodland (Forestry Commission) UK Biodiversity Action Plan: Homepage UK Biodiversity Action Plan: Lowland beech and yew woodland Woodland Trust World Wide Fund for Nature: Conservation Science – Ecoregions Temperate broadleaf and mixed forests Palearctic ecoregions Ecoregions of the United Kingdom Lowlands beech forests Lowlands beech forests Old-growth forests
English Lowlands beech forests
Biology
2,150
2,172,275
https://en.wikipedia.org/wiki/Potassium%20periodate
Potassium periodate is an inorganic salt with the molecular formula KIO4. It is composed of a potassium cation and a periodate anion and may also be regarded as the potassium salt of periodic acid. Note that the pronunciation is per-iodate, not period-ate. Unlike other common periodates, such as sodium periodate and periodic acid, it is only available in the metaperiodate form; the corresponding potassium orthoperiodate (K5IO6) has never been reported. Preparation Potassium periodate can be prepared by the oxidation of an aqueous solution of potassium iodate by chlorine and potassium hydroxide. It can also be generated by the electrochemical oxidation of potassium iodate, however the low solubility of KIO3 makes this approach of limited use. Chemical properties Potassium periodate decomposes at 582 °C to form potassium iodate and oxygen. The low solubility of KIO4 makes it useful for the determination of potassium and cerium. It is slightly soluble in water (one of the less soluble of potassium salts, owing to a large anion), giving rise to a solution that is slightly alkaline. On heating (especially with manganese(IV) oxide as catalyst), it decomposes to form potassium iodate, releasing oxygen gas. KIO4 forms tetragonal crystals of the Scheelite type (space group I41/a). References Periodates Potassium compounds Oxidizing agents
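For reference, the preparation and thermal decomposition described above correspond to the following balanced equations (standard stoichiometry written out here as a sketch; the equations are not quoted from the article itself):

```latex
% Preparation: oxidation of potassium iodate by chlorine in potassium hydroxide solution
\mathrm{KIO_3 + Cl_2 + 2\,KOH \longrightarrow KIO_4 + 2\,KCl + H_2O}
% Thermal decomposition (around 582 °C, accelerated by a manganese(IV) oxide catalyst)
\mathrm{2\,KIO_4 \longrightarrow 2\,KIO_3 + O_2}
```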
Potassium periodate
Chemistry
304
44,991,939
https://en.wikipedia.org/wiki/Flavin-containing%20monooxygenase
The flavin-containing monooxygenase (FMO) protein family specializes in the oxidation of xeno-substrates in order to facilitate the excretion of these compounds from living organisms. These enzymes can oxidize a wide array of heteroatoms, particularly soft nucleophiles, such as amines, sulfides, and phosphites. This reaction requires molecular oxygen, an NADPH cofactor, and an FAD prosthetic group. FMOs share several structural features, such as an NADPH binding domain, an FAD binding domain, and a conserved arginine residue present in the active site. Recently, FMO enzymes have received a great deal of attention from the pharmaceutical industry both as a drug target for various diseases and as a means to metabolize pro-drug compounds into active pharmaceuticals. These monooxygenases are often misclassified because they share activity profiles similar to those of cytochrome P450 (CYP450), which is the major contributor to oxidative xenobiotic metabolism. However, a key difference between the two enzyme families lies in how they proceed to oxidize their respective substrates; CYP enzymes make use of an oxygenated heme prosthetic group, while the FMO family utilizes FAD to oxidize its substrates. History Prior to the 1960s, the oxidation of xenotoxic materials was thought to be completely accomplished by CYP450. However, in the early 1970s, Dr. Daniel Ziegler from the University of Texas at Austin discovered a hepatic flavoprotein isolated from pig liver that was found to oxidize a vast array of amines to their corresponding nitro state. This flavoprotein, named "Ziegler's enzyme", exhibited unusual chemical and spectrometric properties. Upon further spectroscopic characterization and investigation of the substrate pool of this enzyme, Dr. Ziegler discovered that this enzyme bound only an FAD molecule, which could form a C4a-hydroperoxyflavin intermediate, and that it could oxidize a wide variety of substrates with no common structural features, including phosphines, sulfides, and selenium compounds, amongst others. Once this was noticed, Dr. Ziegler's enzyme was reclassified as a broadband flavin monooxygenase. In 1984, the first evidence for multiple forms of FMOs was obtained by two different laboratories when two distinct FMOs were isolated from rabbit lungs. Since then, over 150 different FMO enzymes have been successfully isolated from a wide variety of organisms. Up until 2002, only 5 FMO enzymes were successfully isolated from mammals. However, a group of researchers found a sixth FMO gene located on human chromosome 1. In addition to the sixth FMO discovered as of 2002, the laboratories of Dr. Ian Philips and Elizabeth Sheppard discovered a second gene cluster in humans that consists of 5 additional pseudogenes for FMO on human chromosome 1. Evolution of FMO gene family The FMO family of genes is conserved across all phyla that have been studied so far; therefore some form of the FMO gene family can be found in all studied eukaryotes. FMO genes are characterized by specific structural and functional constraints, which led to the evolution of different types of FMOs in order to perform a variety of functions. Divergence between the functional types of FMOs (FMO1–5) occurred before the amphibians and mammals diverged into separate classes. FMO5 found in vertebrates appears to be evolutionarily older than other types of FMOs, making FMO5 the first functionally distinct member of the FMO family.
Phylogenetic studies suggest that FMO1 and FMO3 are the most recent FMO's to evolve into enzymes with distinct functions. Although FMO5 was the first distinct FMO, it is not clear what function it serves since it does not oxygenate the typical FMO substrates involved in first-pass metabolism. Analyses of FMO genes across several species have shown extensive silent DNA mutations, which indicate that the current FMO gene family exists because of selective pressure at the protein level rather than the nucleotide level. FMO's found in invertebrates are found to have originated polyphyletically; meaning that a phenotypically similar gene evolved in invertebrates which was not inherited from a common ancestor. Classification and characterization FMOs are one subfamily of class B external flavoprotein monooxygenases (EC 1.14.13), which belong to the family of monooxygenase oxidoreductases, along with the other subfamilies Baeyer-Villiger monooxygenases and microbial N-hydroxylating monooxygenases. FMO's are found in fungi, yeast, plants, mammals, and bacteria. Mammals Developmental and tissue specific expression has been studied in several mammalian species, including humans, mice, rats, and rabbits. However, because FMO expression is unique to each animal species, it is difficult to make conclusions about human FMO regulation and activity based on other mammalian studies. It is likely that species-specific expression of FMO's contributes to differences in susceptibility to toxins and xenobiotics as well as the efficiency with excreting among different mammals. Six functional forms of human FMO genes have been reported. However, FMO6 is considered to be a pseudogene. FMOs 1–5 share between 50–58% amino acid identity across the different species. Recently, five more human FMO genes were discovered, although they fall in the category of pseudogenes. FMO1, FMO2, FMO3, FMO4, FMO5, FMO6 Yeast Unlike mammals, yeast (Saccharomyces cerevisiae) do not have several isoforms of FMO, but instead only have one called yFMO. This enzyme does not accept xenobiotic compounds. Instead, yFMO helps to fold proteins that contain disulfide bonds by catalyzing O2 and NADPH-dependent oxidations of biological thiols, just like mammalian FMO's. An example is the oxidation of glutathione to glutathione disulfide, both of which form a redox buffering system in the cell between the endoplasmic reticulum and the cytoplasm. yFMO is localized in the cytoplasm in order to maintain the optimum redox buffer ratio necessary for proteins containing disulfide bonds to fold properly. This non-xenobiotic role of yFMO may represent the original role of the FMO's before the rise of the modern FMO family of enzymes found in mammals. Plants Plant FMO's play a role in defending against pathogens and catalyze specific steps in the biosynthesis of auxin, a plant hormone. Plant FMO's also play a role in the metabolism of glucosinolates. These non-xenobiotic roles of plant FMO's suggest that other FMO functions could be identified in non-plant organisms. Structure Crystal structures have been determined for yeast (Schizosaccharomyces pombe) FMO (PDB: 1VQW) and bacterial (Methylophaga aminisulfidivorans) FMO (PDB: 2XVH). The crystal structures are similar to each other and they share 27% sequence identity. These enzymes share 22% and 31% sequence identity with human FMOs, respectively. FMOs have a tightly bound FAD prosthetic group and a binding NADPH cofactor. Both dinucleotide binding motifs form Rossmann folds. 
The yeast FMO and bacterial FMO are dimers, with each monomer consisting of two structural domains: the smaller NADPH binding domain and the larger FAD-binding domain. The two domains are connected by a double linker. A channel between the two domains leads to the active site where NADPH binds both domains and occupies a cleft that blocks access to the flavin group of FAD, which is bound to the large domain along the channel together with a water molecule. The nicotinamide group of NADPH interacts with the flavin group of FAD, and the NADPH binding site overlaps with the substrate binding site on the flavin group. FMOs contain several sequence motifs that are conserved across all domains: FAD-binding motif (GXGXXG) FMO identifying motif (FXGXXXHXXXF/Y) NADPH-binding motif (GXSXXA) F/LATGY motif arginine residue in the active site The FMO identifying motif interacts with the flavin of FAD. The F/LATGY motif is a sequence motif common in N-hydroxylating enzymes. The arginine residue interacts with the phosphate group of NADPH. Function The general function of these enzymes is to metabolise xenobiotics. Hence, they are considered to be xenobiotic detoxication catalysts. These proteins catalyze the oxygenation of multiple heteroatom-containing compounds that are present in our diet, such as amine-, sulfide-, phosphorus-, and other nucleophilic heteroatom-containing compounds. FMOs have been implicated in the metabolism of a number of pharmaceuticals, pesticides and toxicants, by converting the lipophilic xenobiotics into polar, oxygenated, and readily excreted metabolites. Substrate diversity FMO substrates are structurally diverse compounds. However, they all share similar characteristics: Soft nucleophiles (basic amines, sulfides, Se- or P-containing compounds) Neutral or single-positively charged Zwitterions, anions and dications are considered to be unfavorable substrates. There are several drugs reported to be typical substrates for FMOs. The majority of drugs function as alternate substrate competitive inhibitors to FMOs (i.e. good nucleophiles that compete with the drug for FMO oxygenation), since they are not likely to serve as FMO substrates. Only a few true FMO competitive inhibitors have been reported. Those include indole-3-carbinol and N,N-dimethylamino stilbene carboxylates. A well-known FMO inhibitor is methimazole (MMI). Mechanism The FMO catalytic cycle proceeds as follows: The cofactor NADPH binds to the oxidized state of the FAD prosthetic group, reducing it to FADH2. Molecular oxygen binds to the formed NADP+-FADH2-enzyme complex and is reduced, resulting in 4a-hydroperoxyflavin (4a-HPF or FADH-OOH). This species is stabilized by NADP+ in the catalytic site of the enzyme. These first two steps in the cycle are fast. In the presence of a substrate (S), a nucleophilic attack occurs on the distal O-atom of the prosthetic group. The substrate is oxygenated to SO, forming the 4a-hydroxyflavin (FADH-OH). Only when the flavin is in the hydroperoxy form will the xenobiotic substrate react. The flavin product then breaks down with release of water to reform FAD. Due to the low dissociation constant of the NADP+-enzyme complex, NADP+ is released by the end of the cycle and the enzyme returns to its original state. The rate-limiting step involves either the breakdown of FADH-OH to water or the release of NADP+.
Quantum mechanics simulations have shown that the N-hydroxylation catalyzed by flavin-containing monooxygenases is initiated by homolysis of the O-O bond in the C4a-hydroperoxyflavin intermediate, resulting in the formation of an internal hydrogen-bonded hydroxyl radical. Cellular expression in humans Expression of each type of FMO depends on several factors, including cofactor supply, physiological and environmental factors, and diet. Because of these factors, each type of FMO is expressed differently depending on the species and tissue. In humans, expression of FMOs is concentrated mainly in the liver, lungs, and kidneys, where most xenobiotic metabolism occurs. However, FMOs can also be found in the human brain and small intestine. While FMO1-5 can be found in the brain, liver, kidneys, lungs, and small intestine, the distribution of each type of FMO differs depending on the tissue and the developmental stage of the person. Expression in adult tissues In an adult, FMO1 is predominantly expressed in the kidneys and, to a lesser extent, in the lungs and small intestine. FMO2 is the most abundant of the FMOs and is mostly expressed in the lungs and kidneys, with lower expression in the liver and small intestine. FMO3 is highly concentrated in the liver, but is also expressed in the lungs. FMO4 is expressed mostly in the liver and kidneys. FMO5 is highly expressed in the liver, but also has substantial expression in the lungs and small intestine. Though FMO2 is the most expressed FMO in the brain, it only constitutes about 1% of that found in the lungs, making FMO expression in the brain fairly low. Expression in fetal tissues The distribution of FMOs in various types of tissues changes as a person develops, making the fetal distribution of FMOs quite different from the adult distribution. While the adult liver is dominated by the expression of FMO3 and FMO5, the fetal liver is dominated by the expression of FMO1 and FMO5. Another difference is in the brain, where adults mostly express FMO2 and fetuses mostly express FMO1. Clinical significance Drug development Drug metabolism is one of the most important factors to consider when developing new drugs for therapeutic applications. The degradation rate of these new drugs in an organism's system determines the duration and intensity of their pharmacological action. In recent years, FMOs have attracted considerable attention in drug development because these enzymes are not readily induced or inhibited by the chemicals or drugs in their environment. CYPs are the primary enzymes involved in drug metabolism. However, recent efforts have been directed towards the development of drug candidates that incorporate functional groups that can be metabolized by FMOs. By doing this, the number of potential adverse drug-drug interactions is minimized and the reliance on CYP450 metabolism is decreased. Several approaches have been developed to screen potential drug interactions. One of them involves human FMO3 (hFMO3), which is described as the most important FMO with respect to drug interactions. To screen hFMO3 in a high-throughput fashion, the enzyme has been immobilized on graphene oxide chips so that the change in electrical potential generated when a drug is oxidized by the enzyme can be measured. Hypertension There is evidence that FMOs are associated with the regulation of blood pressure. FMO3 is involved in the formation of TMA N-oxides (TMAO). 
Some studies indicate that hypertension can develop when there are no organic osmolytes (such as TMAO) to counteract an increase in osmotic pressure and peripheral resistance. Individuals with deficient FMO3 activity have a higher prevalence of hypertension and other cardiovascular diseases, since there is a decrease in the formation of TMA N-oxide to counterbalance the effects of higher osmotic pressure and peripheral resistance. Fish odor syndrome Trimethylaminuria, also known as fish odor syndrome, is caused by abnormal FMO3-mediated metabolism or a deficiency of this enzyme. A person with this disorder has a low capacity to oxidize the trimethylamine (TMA) that comes from their diet to its odorless metabolite TMAO. When this happens, large amounts of TMA are excreted in the individual's urine, sweat, and breath, giving them a strong fish-like odor. There is currently no known cure or treatment for this disorder; however, doctors recommend that patients avoid foods containing choline, carnitine, nitrogen, sulfur and lecithin. Other diseases FMOs have also been associated with other diseases, such as cancer and diabetes. However, additional studies are needed to elucidate the relationship between FMO function and these diseases, and to define these enzymes' clinical relevance. References External links Research information on FMO1 (WikiGenes) EC 1.14.13 Oxidoreductases
Flavin-containing monooxygenase
Chemistry
3,540
5,612,336
https://en.wikipedia.org/wiki/Blanchard%27s%20transsexualism%20typology
The American-Canadian sexologist Ray Blanchard proposed a psychological typology of gender dysphoria, transsexualism, and fetishistic transvestism in a series of academic papers through the 1980s and 1990s. Building on the work of earlier researchers, including his colleague Kurt Freund, Blanchard categorized trans women into two groups: homosexual transsexuals who are attracted exclusively to men and are feminine in both behavior and appearance; and autogynephilic transsexuals who experience sexual arousal at the idea of having a female body (). Blanchard and his supporters argue that the typology explains differences between the two groups in childhood gender nonconformity, sexual orientation, history of sexual fetishism, and age of transition. Blanchard's typology has attracted significant controversy, especially following the 2003 publication of J. Michael Bailey's book The Man Who Would Be Queen, which presented the typology to a general audience. Scientific criticisms commonly made against Blanchard's research include that the typology is unfalsifiable because Blanchard and other supporters regularly dismiss or ignore data that challenges the theory, that it failed to properly control against cisgender women rather than against cisgender men in rating levels of autogynephilia, and that when such studies are performed they show that cisgender women have similar levels of autogynephilic responses to transgender women. The American Psychiatric Association includes with autogynephilia as a specifier to a diagnosis of transvestic disorder in the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders (2013); this addition was objected to by the World Professional Association for Transgender Health (WPATH), who argued that there was a lack of scientific consensus on and empirical evidence for the concept of autogynephilia. History Background Beginning in the 1950s, clinicians and researchers developed a variety of classifications of transsexualism. These were variously based on sexual orientation, age of onset, and fetishism. Prior to Blanchard, these classifications generally divided transgender women into two groups: "homosexual transsexuals" if sexually attracted to men and "heterosexual fetishistic transvestites" if sexually attracted to women. These labels carried a social stigma of mere sexual fetishism, and contradicted trans women's self-identification as "heterosexual" or "homosexual", respectively. In 1982, Kurt Freund and colleagues argued there were two distinct types of trans women, each with distinct causes: one type associated with childhood femininity and androphilia (sexual attraction to men), and another associated with fetishism and gynephilia (sexual attraction to women). Freund stated that the sexual arousal in this latter type could be associated, not only with crossdressing, but also with other feminine-typical behaviors, such as applying make-up or shaving the legs. Freund, four of his colleagues, and two other sexologists had previously published papers on "feminine gender identity in homosexual males" and "Male Transsexualism" in 1974. They occasionally also used the term homosexual transsexual to describe transgender men attracted to women. Blanchard credited Freund with being the first author to distinguish between erotic arousal due to dressing as a woman (transvestic fetishism) and erotic arousal due to fantasizing about being female (which Freund called cross-gender fetishism). 
Early research Blanchard conducted a series of studies on people with gender dysphoria, analyzing the files of cases seen in the Gender Identity Clinic of the Clarke Institute of Psychiatry and comparing them on multiple characteristics. These studies have been criticized as bad science for being unfalsifiable and for failing to sufficiently operationalize their definitions. They have also been criticized for lacking reproducibility, and for a lack of a control group of cisgender women. Supporters of the typology deny these allegations. Studying patients who had felt like women at all times for at least a year, Blanchard classified them according to whether they were attracted to men, women, both, or neither. He then compared these four groups regarding how many in each group reported a history of sexual arousal together with cross-dressing. 73% of the gynephilic, asexual, and bisexual groups said they did experience such feelings, but only 15% of the androphilic group did. He concluded that asexual, bisexual, and gynephilic transsexuals were motivated by erotic arousal to the thought or image of themself as a woman, and he coined the term autogynephilia to describe this. Blanchard and colleagues conducted a study in 1986 using phallometry (a measure of blood flow to the penis), demonstrating arousal in response to cross-dressing audio narratives among trans women. Although this study is often cited as evidence for autogynephilia, the authors did not attempt to measure subjects' ideas of themselves as women. The authors concluded that gynephilic gender identity patients who denied experiencing arousal to cross-dressing were still measurably aroused by autogynephilic stimuli, and that autogynephilia among non-androphilic trans women was negatively associated with tendency to color their narrative to be more socially acceptable. However, in addition to having methodological problems, the reported data did not support this conclusion, because the measured arousal to cross-dressing situations was minimal and consistent with subjects' self-reported arousal. This study has been cited by proponents to argue that gynephilic trans women who reported no autogynephilic interests were misrepresenting their erotic interests. Popularization Blanchard's research and conclusions came to wider attention with the publication of popular science books on transsexualism, including The Man Who Would Be Queen (2003) by sexologist J. Michael Bailey and Men Trapped in Men's Bodies (2013) by sexologist and trans woman Anne Lawrence, both of which based their portrayals of male-to-female transsexuals on Blanchard's taxonomy. The concept of autogynephilia in particular received little public interest until Bailey's 2003 book, though Blanchard and others had been publishing studies on the topic for nearly 20 years. Bailey's book was followed by peer-reviewed articles critiquing the methodology used by Blanchard. Both Bailey and Blanchard have since attracted intense criticism by some clinicians and by many transgender activists. Measures of orientation Sexologists may measure sexual orientation using psychological personality tests, self reports, or techniques such as photoplethysmography. Blanchard argues that self-reporting is not always reliable. Morgan, Blanchard and Lawrence have speculated that many reportedly "non-homosexual" trans women systematically distorted their life stories because "non-homosexuals" were often screened out as candidates for surgery. 
Blanchard and Freund used the Masculine Identity in Females (MGI) scale and the Modified Androphilia Scale. Lawrence writes that homosexual transsexuals averaged a Kinsey scale measurement of 5–6 or a 9.86 ± 2.37 on the Modified Androphilia Scale. Neurological differences The concept that androphilia in trans women is related to homosexuality in cisgender men has been tested by MRI studies. Cantor interprets these studies as supporting Blanchard's transsexualism typology. These studies show neurological differences between trans women attracted to men and cis men attracted to women, as well as differences between androphilic and gynephilic trans women. The studies also showed differences between transsexual and nontranssexual people, leading to the conclusion that transsexuality is "a likely innate and immutable characteristic". According to a 2016 review, structural neuroimaging studies seem to support the idea that androphilic and gynephilic trans women have different brain phenotypes, though the authors state that more independent studies of gynephilic trans women are needed to confirm this. A 2021 review examining transgender neurology found similar differences in brain structure between cisgender homosexuals and heterosexuals. Autogynephilia Autogynephilia (derived from Greek for "love of oneself as a woman") is a term coined by Blanchard for "a male's propensity to be sexually aroused by the thought of himself as a female", intending for the term to refer to "the full gamut of erotically arousing cross-gender behaviors and fantasies". Blanchard states that he intended the term to subsume transvestism, including for sexual ideas in which feminine clothing plays only a small or no role at all. Other terms for such cross-gender fantasies and behaviors include automonosexuality, eonism, and sexo-aesthetic inversion. It is not disputed that autogynephilic sexual arousal exists and has been reported by both some transsexuals and some non-transsexuals. The disputed aspects of Blanchard's theories are the theory that autogynephilia is the central motivation for non-androphilic MtF transsexuals while being absent in androphilic ones, and his characterisations of autogynephilia, including as a paraphilia. Blanchard writes that the accuracy of these theories needs further empirical research to resolve, while others such as the transfeminist Julia Serano characterise them as incorrect. Subtypes Blanchard identified four types of autogynephilic sexual fantasy, but stated that co-occurrence of types was common. Transvestic autogynephilia: arousal to the act or fantasy of wearing typically feminine clothing Behavioral autogynephilia: arousal to the act or fantasy of doing something regarded as feminine Physiologic autogynephilia: arousal to fantasies of body functions specific to people regarded as female Anatomic autogynephilia: arousal to the fantasy of having a normative woman's body, or parts of one Relationship to gender dysphoria The exact proposed nature of the relationship between autogynephilia and gender dysphoria is unclear, and the desire to live as a woman often remains as strong or stronger after an initial sexual response to the idea has faded. Blanchard and Lawrence argue that this is because autogynephilia causes a female gender identity to develop, which becomes an emotional attachment and something aspirational in its own right. 
Many transgender people dispute that their gender identity is related to their sexuality, and have argued that the concept of autogynephilia unduly sexualizes trans women's gender identity. Some fear that the concept of autogynephilia will make it harder for gynephilic or "non-classical" MtF transsexuals to receive sex reassignment surgery. Lawrence writes that some transsexual women identify with autogynephilia, some of these feeling positively and some negatively as a result, with a range of opinions reflected as to whether or not this played a motivating role in their decision to transition. In the first peer-reviewed critique of autogynephilia research, Charles Allen Moser found no substantial difference between "autogynephilic" and "homosexual" transsexuals in terms of gender dysphoria, stating that the clinical significance of autogynephilia was unclear. According to Moser, the idea is not supported by the data, and that despite autogynephilia existing, it is not predictive of the behavior, history, and motivation of trans women. In a re-evaluation of the data used by Blanchard and others as the basis for the typology, he states that autogynephilia is not always present in trans women attracted to women, or absent in trans women attracted to men, and that autogynephilia is not the primary motivation for gynephilic trans women to seek sex reassignment surgery. In a 2011 study presenting an alternative to Blanchard's explanation, Larry Nuttbrock and colleagues reported that autogynephilia-like characteristics were strongly associated with a specific generational cohort as well as the ethnicity of the subjects; they hypothesized that autogynephilia may become a "fading phenomenon". As a sexual orientation Blanchard and Lawrence have classified autogynephilia as a sexual orientation. Blanchard attributed the notion of some cross-dressing men being sexually aroused by the image of themselves as female to Magnus Hirschfeld. (The concept of a taxonomy based on transsexual sexuality was refined by endocrinologist Harry Benjamin in the Benjamin Scale in 1966, who wrote that researchers of his day thought attraction to men while feeling oneself to be a woman was the factor that distinguished a transsexual from a transvestite (who "is a man [and] feels himself to be one").) Blanchard and Lawrence argue that just like more common sexual orientations such as heterosexuality and homosexuality, it is not only reflected by penile responses to erotic stimuli, but also includes the capacity for pair bond formation and romantic love. Later studies have found little empirical support for autogynephilia as a sexual identity classification, and sexual orientation is generally understood to be distinct from gender identity. Elke Stefanie Smith and colleagues describe Blanchard's approach as "highly controversial as it could erroneously suggest an erotic background" to transsexualism. Serano says the idea is generally disproven within the context of gender transition as trans women who are on feminizing hormone therapy, especially on anti-androgens, experience a severe drop and in some cases complete loss in libido. Despite this the vast majority of transgender women continue their transition. Erotic target location errors Blanchard conjectured that sexual interest patterns could have inwardly instead of outwardly directed forms, which he called erotic target location errors (ETLE). 
Autogynephilia would represent an inwardly directed form of gynephilia, with the attraction to women being redirected towards the self instead of others. These forms of erotic target location errors have also been observed with other base orientations, such as pedophilia, attraction to amputees, and attraction to plush animals. Anne Lawrence wrote that this phenomenon would help to explain an autogynephilia typology. Cisgender women The concept of autogynephilia has been criticized for implicitly assuming that cisgender women do not experience sexual desire mediated by their own gender identity. Research on autogynephilia in cisgender women shows that cisgender women commonly endorse items on adapted versions of Blanchard's autogynephilia scales. Moser created an Autogynephilia Scale for Women in 2009, based on items used to categorize MtF transsexuals as autogynephilic in other studies. A questionnaire that included the ASW was distributed to a sample of 51 professional cisgender women employed at an urban hospital; 29 completed questionnaires were returned for analysis. By the common definition of ever having erotic arousal to the thought or image of oneself as a woman, 93% of the respondents would be classified as autogynephilic. Using a more rigorous definition of "frequent" arousal to multiple items, 28% would be classified as autogynephilic. Lawrence criticized Moser's methodology and conclusions and stated that genuine autogynephilia occurs very rarely, if ever, in cisgender women as their experiences are superficially similar but the erotic responses are ultimately markedly different. Moser responded that Lawrence had made multiple errors by comparing the wrong items. Lawrence argues that the scales used by both Veale et al. and Moser fail to differentiate between arousal from wearing provocative clothing or imagining that potential partners find one attractive, and arousal merely from the idea that one is a woman or has a woman's body. In a 2022 study, Bailey and Kevin J. Hsu dispute that "natal females" experience autogynephilia based on an application of Blanchard's original Core Autogynephilia Scale to four samples of "autogynephilic natal males", four samples of "non-autogynephilic natal males" and two samples of "natal females". Serano and Veale argue that Bailey and Hsu's results do not support their conclusion, because most "natal females" in their research reported at least some autogynephilic fantasies. Furthermore, Bailey and Hsu's "autogynephilic natal male" samples 1, 2, and 4 do not apply to trans people as the majority of the sample were cis crossdressers, not trans women. Sample 3, which was majority trans women, did not have high rates of autogynephilia compared to the other two samples. Serano and Veale also criticize Bailey and Hsu for leaving out two scales that played a central role in Blanchard's original conception of autogynephilia, saying that this implies a much narrower definition of autogynephilia which would have excluded many of Blanchard's original trans subjects. Similar to Serano and Veale, Moser also criticizes Bailey and Hsu for mainly comparing the scores of cisgender women with cisgender male crossdressers instead of transgender women. Transfeminist critique Critics of the autogynephlia hypothesis include transfeminists such as Julia Serano and Talia Mae Bettcher. Serano describes the concept as flawed, unscientific, and needlessly stigmatizing. 
According to Serano, "Blanchard's controversial theory is built upon a number of incorrect and unfounded assumptions, and there are many methodological flaws in the data he offers to support it." She argues that flaws in Blanchard's original studies include: being conducted among overlapping populations primarily at the Clarke Institute in Toronto without nontranssexual controls; subtypes not being empirically derived but instead "begging the question that transsexuals fall into subtypes based on their sexual orientation"; and further research finding a non-deterministic correlation between cross-gender arousal and sexual orientation. She states that Blanchard did not discuss the idea that cross-gender arousal may be an effect, rather than a cause, of gender dysphoria, and that Blanchard assumed that correlation implied causation. Serano also states that the wider idea of cross-gender arousal was affected by the prominence of sexual objectification of women, accounting for both a relative lack of cross-gender arousal in transsexual men and similar patterns of autogynephilic arousal in non-transsexual women. She criticised proponents of the typology, claiming that they dismiss non-autogynephilic, non-androphilic transsexuals as misreporting or lying while not questioning androphilic transsexuals, describing it as "tantamount to hand-picking which evidence counts and which does not based upon how well it conforms to the model", either making the typology unscientific due to its unfalsifiability, or invalid due to the nondeterministic correlation that later studies found. Serano says that the typology undermined lived experience of transsexual women, contributed to pathologisation and sexualisation of transsexual women, and the literature itself fed into the stereotype of transsexuals as "purposefully deceptive", which could be used to justify discrimination and violence against transsexuals. According to Serano, studies have usually found that some non-androphilic transsexuals report having no autogynephilia. Bettcher, based on her own experience as a trans woman, has critiqued the notion of autogynephilia, and "target errors" generally, within a framework of "erotic structuralism," arguing that the notion conflates essential distinctions between "source of attraction" and "erotic content," and "(erotic) interest" and "(erotic) attraction," thus misinterpreting what she prefers to call, following Serano, "female embodiment eroticism." She maintains that not only is "an erotic interest in oneself as a gendered being," as she puts it, a non-pathological and indeed necessary component of regular sexual attraction to others, but within the framework of erotic structuralism, a "misdirected" attraction to oneself as postulated by Blanchard is outright nonsensical. Activist and law professor Florence Ashley writes that the autogynephilia concept has been "discredited", and that Bailey's and Blanchard's work "has long been criticised for perpetuating stereotypes and prejudices against trans women, notably suggesting that LGBQ trans women's primary motivation for transitioning is sexual arousal." Terminology The concept that trans people with different sexual orientations are etiologically different goes back to the 1920s, but the terms used have not always been agreed on. Blanchard said that one of his two types of gender dysphoria/transsexualism manifests itself in individuals who are almost if not exclusively attracted to men, whom he referred to as homosexual transsexuals. 
Blanchard uses the term "homosexual" relative to the person's sex assigned at birth, not their current gender identity. This use of the term "homosexual" relative to the person's birth sex has been heavily criticized by other researchers. It has been described as archaic confusing, demeaning, pejorative, offensive, and heterosexist. Benjamin states that trans women can only be "homosexual" if anatomy alone is considered, and psyches are ignored; he states that after sex-reassignment surgery, calling a male-to-female transsexual "homosexual" is pedantic and against "reason and common sense". Many authorities, including some supporters, criticize Blanchard's choice of terminology as confusing or degrading because it emphasizes trans women's assigned sex, and disregards their sexual orientation identity. Leavitt and Berger write that the term is "both confusing and controversial" and that trans women "vehemently oppose the label and its pejorative baggage." In 1987, this terminology was included in the American Psychiatric Association's Diagnostic and Statistical Manual of Mental Disorders (DSM-III-R) as "transsexual, homosexual subtype". The later DSM-IV (1994) and DSM-IV-TR (2000) stated that a transsexual was to be described as "attracted to males, females, both or neither". Blanchard defined the second type of transsexual as including those who are attracted almost if not exclusively to females (gynephilic), attracted to both males and females (bisexual), and attracted to neither males nor females (asexual); Blanchard referred to this latter set collectively as the non-homosexual transsexuals. Blanchard says that the "non-homosexual" transsexuals (but not the "homosexual" transsexuals) exhibit autogynephilia, which he defined as a paraphilic interest in having female anatomy. Alternative terms Professor of anatomy and reproductive biology Milton Diamond proposed the use of the terms androphilic (attracted to men) and gynephilic (attracted to women) as neutral descriptors for sexual orientation that do not make assumptions about the sex or gender identity of the person being described, alternatives to homosexual and heterosexual. Frank Leavitt and Jack Berger state that the label homosexual transsexual seems to have little clinical merit, as its referents have "little in common with homosexuals, except a stated erotic interest in males"; they too suggest "more neutral descriptive terms such as androphilia". Sexological research has been done using these alternative terms by researchers such as Sandra L. Johnson. Both Blanchard and Leavitt used a psychological test called the "modified androphilia scale" to assess whether a transsexual was attracted to men or not. Sociologist Aaron Devor wrote, "If what we really mean to say is attracted to males, then say 'attracted to males' or androphilic ... I see absolutely no reason to continue with language that people find offensive when there is perfectly serviceable, in fact better, language that is not offensive." Other traits According to the typology, autogynephilic transsexuals are attracted to femininity while homosexual transsexuals are attracted to masculinity. However, a number of other differences between the types have been reported. Cantor states that "homosexual transsexuals" usually begin to seek sex reassignment surgery (SRS) in their mid-twenties, while "autogynephilic transsexuals" usually seek clinical treatment in their mid-thirties or even later. 
Blanchard also states that homosexual transsexuals were younger when applying for sex reassignment, report a stronger cross-gender identity in childhood, have a more convincing cross-gender appearance, and function psychologically better than "non-homosexual" transsexuals. A lower percentage of those described as homosexual transsexuals report being (or having been) married, or report sexual arousal while cross-dressing. Bentler reported that 23% of homosexual transsexuals report a history of sexual arousal to cross-dressing, while Freund reported 31%. In 1990, using the alternative term "androphilic transsexual", Johnson wrote that there was a correlation between social adjustment to the new gender role and androphilia. Anne Lawrence, a proponent of the concept, argues that homosexual transsexuals pursue sex reassignment surgery out of a desire for greater social and romantic success. Lawrence has proposed that autogynephilic transsexuals are more excited about sexual reassignment surgery than homosexual transsexuals. She states that homosexual transsexuals are typically ambivalent or indifferent about SRS, while autogynephilic transsexuals want to have surgery as quickly as possible, are happy to be rid of their penis, and proud of their new genitals. Lawrence states that autogynephilia tends to appear along with other paraphilias. J. Michael Bailey argued that both "homosexual transsexuals" and "autogynephilic transsexuals" were driven to transition mainly for sexual gratification, as opposed to gender-identity reasons. Birth order Blanchard and Zucker state that birth order has some influence over sexual orientation in male-assigned people in general, and androphilic trans women in specific. This phenomenon is called the "fraternal birth order effect". In 2000, Richard Green reported that androphilic trans women tended have a later-than-expected birth order, and more older brothers than other subgroups of trans women. Each older brother increased the odds that a trans woman was androphilic by 40%. Transgender men Blanchard's typology is mainly concerned with transgender women. Richard Ekins and Dave King state that female-to-male transsexuals (trans men) are absent from the typology, while Blanchard, Cantor, and Katherine Sutton distinguish between gynephilic and androphilic trans men. They state that gynephilic trans men are the counterparts of androphilic trans women, that they experience strong childhood gender nonconformity, and that they generally begin to seek sex reassignment in their mid-twenties. They describe androphilic trans men as a rare but distinct group who say they want to become gay men, and, according to Blanchard, are often specifically attracted to gay men. Cantor and Sutton state that while this may seem analogous to autogynephilia, no distinct paraphilia for this has been identified. Gynephilic transgender men In 2000, Meredith L. Chivers and Bailey wrote, "Transsexualism in genetic females has previously been thought to occur predominantly in homosexual (gynephilic) women." According to them, Blanchard reported in 1987 that only 1 in 72 trans men he saw at his clinic were primarily attracted to men. They observed that these individuals were so uncommon that some researchers thought that androphilic trans men did not exist, or misdiagnosed them as homosexual transsexuals, attracted to women. They wrote that relatively few studies had examined childhood gender variance in trans men. 
In a 2005 study by Smith and van Goozen, their findings in regards to trans men were different from their findings for trans women. Smith and van Goozen's study included 52 female-to-male transsexuals, who were categorized as either homosexual or non-homosexual. Smith concluded that female-to-male transsexuals, regardless of sexual orientation, reported more GID symptoms in childhood, and a stronger sense of gender dysphoria. Smith wrote that she found some differences between homosexual and non-homosexual female-to-male transsexuals. Smith says that homosexual female-to-males reported more gender dysphoria than any group in her study. Inclusion in the Diagnostic and Statistical Manual of Mental Disorders In the third edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-III) (1980), the diagnosis of "302.5 Transsexualism" was introduced under "Other Psychosexual Disorders". This was an attempt to provide a diagnostic category for gender identity disorders. The diagnostic category, transsexualism, was for gender dysphoric individuals who demonstrated at least two years of continuous interest in transforming their physical and social gender status. The subtypes were asexual, homosexual (same "biological sex"), heterosexual (other "biological sex") and unspecified. This was removed in the DSM-IV, in which gender identity disorder replaced transsexualism. Previous taxonomies, or systems of categorization, used the terms classic transsexual or true transsexual, terms once used in differential diagnoses. The DSM-IV-TR included autogynephilia as an "associated feature" of gender identity disorder and as a common occurrence in the transvestic fetishism disorder, but does not classify autogynephilia as a disorder by itself. The paraphilias working group on the DSM-5, chaired by Ray Blanchard, included both with autogynephilia and with autoandrophilia as specifiers to transvestic disorder in an October 2010 draft of the DSM-5. This proposal was opposed by the World Professional Association for Transgender Health (WPATH), citing a lack of empirical evidence for these specific subtypes. WPATH argued that there was no scientific consensus on the concept, and that there was a lack of longitudinal studies on the development of transvestic fetishism. With autoandrophilia was removed from the final draft of the manual. Blanchard later said he had initially included it to avoid criticism: "I proposed it simply in order not to be accused of sexism [...] I don't think the phenomenon even exists." When published in 2013, the DSM-5 included With autogynephilia (sexual arousal by thoughts, images of self as a female) as a specifier to 302.3 Transvestic disorder (intense sexual arousal from cross-dressing fantasies, urges or behaviors); the other specifier is With fetishism (sexual arousal to fabrics, materials or garments). Societal impact Litigation In the 2010 U.S. Tax Court case O'Donnabhain v. Commissioner, the Internal Revenue Service cited Blanchard's typology as justification for denying a transgender woman's tax deductions for medical costs relating to treatment of her gender identity disorder, claiming the procedures were not medically necessary. The court found in favor of the plaintiff, Rhiannon O'Donnabhain, ruling that she should be allowed to deduct the costs of her treatment, including sex reassignment surgery and hormone therapy. 
In its decision, the court declared the IRS's position "at best a superficial characterization of the circumstances" that was "thoroughly rebutted by the medical evidence". Anti-LGBT groups According to the Southern Poverty Law Center (SPLC), autogynephilia has been promoted by anti-LGBT hate groups. These include the Family Research Council (FRC), United Families International (UFI), and the American College of Pediatricians (ACPeds). Both Blanchard and Bailey have written articles for 4thWaveNow, which the SPLC describes as an anti-trans website. Nic Rider and Elliot Tebbe characterize Blanchard's theory of autogynephilia as an anti-trans theory that functions to invalidate and delegitimize transgender individuals. Serano writes that trans-exclusionary radical feminists, self-described as "gender-critical" feminists, have embraced the idea of autogynephilia beginning in the 2000s. One early proponent of autogynephilia was radical feminist Sheila Jeffreys. The concept has been used to imply that trans women are sexually deviant men. The concept of autogynephilia became popular on gender-critical websites such as 4thWaveNow, Mumsnet, and the Reddit community /r/GenderCritical. See also Classification of transsexual and transgender people Autoeroticism Partialism Transgender sexuality List of transgender-related topics Notes References External links Men and sexuality Paraphilias Sexology Gender identity Sexual fetishism Sexual orientation Sexuality and society Transgender sexuality Transgender women-related topics LGBTQ-related controversies in the United States
Blanchard's transsexualism typology
Biology
7,009
69,461,042
https://en.wikipedia.org/wiki/Macacine%20alphaherpesvirus%202
Macacine alphaherpesvirus 2 (McHV-2) is a species of virus in the genus Simplexvirus, subfamily Alphaherpesvirinae, family Herpesviridae, and order Herpesvirales. References Alphaherpesvirinae
Macacine alphaherpesvirus 2
Biology
55
961,605
https://en.wikipedia.org/wiki/ACES%20%28computational%20chemistry%29
Aces II (Advanced Concepts in Electronic Structure Theory) is an ab initio computational chemistry package for performing high-level quantum chemical ab initio calculations. Its major strength is the accurate calculation of atomic and molecular energies as well as properties using many-body techniques such as many-body perturbation theory (MBPT) and, in particular coupled cluster techniques to treat electron correlation. The development of ACES II began in early 1990 in the group of Professor Rodney J. Bartlett at the Quantum Theory Project (QTP) of the University of Florida in Gainesville. There, the need for more efficient codes had been realized and the idea of writing an entirely new program package emerged. During 1990 and 1991 John F. Stanton, Jürgen Gauß, and John D. Watts, all of them at that time postdoctoral researchers in the Bartlett group, supported by a few students, wrote the backbone of what is now known as the ACES II program package. The only parts which were not new coding efforts were the integral packages (the MOLECULE package of J. Almlöf, the VPROP package of P.R. Taylor, and the integral derivative package ABACUS of T. Helgaker, P. Jorgensen J. Olsen, and H.J. Aa. Jensen). The latter was modified extensively for adaptation with Aces II, while the others remained very much in their original forms. Ultimately, two different versions of the program evolved. The first was maintained by the Bartlett group at the University of Florida, and the other (known as ACESII-MAB) was maintained by groups at the University of Texas, Universitaet Mainz in Germany, and ELTE in Budapest, Hungary. The latter is now called CFOUR. Aces III is a parallel implementation that was released in the fall of 2008. The effort led to definition of a new architecture for scalable parallel software called the super instruction architecture. The design and creation of software is divided into two parts: The algorithms are coded in a domain specific language called super instruction assembly language or SIAL, pronounced "sail" for easy communication. The SIAL programs are executed by a MPMD parallel virtual machine called the super instruction processor or SIP. The ACES III program consists of 580,000 lines of SIAL code of which 200,000 lines are comments, and 230,000 lines of C/C++ and Fortran of which 62,000 lines are comments. The latest version of the program was released on August 1, 2014. See also Quantum chemistry computer programs References ACES II Florida-Version Homepage ACES II Mainz-Austin-Budapest-Version Homepage (outdated) ACES III Homepage (outdated) CFOUR Homepage Computational chemistry software University of Florida
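As a rough illustration of the many-body perturbation theory mentioned above, the sketch below evaluates the standard closed-shell MBPT(2)/MP2 correlation-energy expression, E = sum_ijab (ia|jb)[2(ia|jb) - (ib|ja)] / (e_i + e_j - e_a - e_b), with NumPy. It is not ACES code; the integrals and orbital energies are random placeholders, so the printed number is meaningless and only the contraction pattern is of interest.

```python
# Sketch of the MBPT(2)/MP2 contraction that packages like ACES evaluate.
# NOT ACES code: the arrays below are random stand-ins for real MO integrals.
import numpy as np

nocc, nvirt = 4, 6                      # occupied / virtual orbital counts (arbitrary)
rng = np.random.default_rng(0)

# (ia|jb) two-electron integrals in the MO basis, chemists' notation
ovov = rng.normal(scale=0.01, size=(nocc, nvirt, nocc, nvirt))
ovov = 0.5 * (ovov + ovov.transpose(2, 3, 0, 1))   # impose (ia|jb) = (jb|ia)

eps_occ = np.sort(rng.uniform(-2.0, -0.5, nocc))   # occupied orbital energies
eps_vir = np.sort(rng.uniform(0.2, 1.5, nvirt))    # virtual orbital energies

# Energy denominators e_i + e_j - e_a - e_b, indexed (i, a, j, b)
denom = (eps_occ[:, None, None, None] - eps_vir[None, :, None, None]
         + eps_occ[None, None, :, None] - eps_vir[None, None, None, :])

# Closed-shell MP2: sum_ijab (ia|jb)[2(ia|jb) - (ib|ja)] / (e_i + e_j - e_a - e_b)
numerator = ovov * (2.0 * ovov - ovov.transpose(0, 3, 2, 1))
e_mp2 = np.einsum("iajb,iajb->", numerator, 1.0 / denom)
print("toy MP2 correlation energy:", e_mp2)
```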
ACES (computational chemistry)
Physics,Chemistry
551
2,780,933
https://en.wikipedia.org/wiki/Epsilon%20Andromedae
Epsilon Andromedae, Latinized from ε Andromedae, is a star in the constellation of Andromeda. It can be seen with the naked eye, having an apparent visual magnitude of 4.4. Based upon an annual parallax shift of 21.04 mas as seen from Earth, it is located 155 light years from the Sun. The system is moving closer to the Sun with a radial velocity of −84 km/s. Its orbit in the Milky Way is highly eccentric, causing it to move rapidly relative to the Sun and its neighboring stars. Properties This is an evolved G-type giant star with a stellar classification of . The suffix notation indicates there is a strong underabundance of iron in the spectrum, and an overabundance of cyanogen (CN). ε Andromedae is believed to be a red clump star which is fusing helium in its core. It has about the same mass as the Sun, but has expanded to nine times the Sun's radius. The star is radiating 51 times the Sun's luminosity from its enlarged photosphere at an effective temperature of 5,082 K. Naming In Chinese, (), meaning Legs (asterism), refers to an asterism consisting of ε Andromedae, η Andromedae, 65 Piscium, ζ Andromedae, δ Andromedae, π Andromedae, ν Andromedae, μ Andromedae, β Andromedae, σ Piscium, τ Piscium, 91 Piscium, υ Piscium, φ Piscium, χ Piscium and ψ¹ Piscium. Consequently, the Chinese name for ε Andromedae itself is (, .) References External links Image Epsilon Andromedae G-type giants Horizontal-branch stars Andromeda (constellation) Andromedae, Epsilon BD+28 0103 Andromedae, 30 003546 003031 0163 TIC objects
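The quoted distance follows directly from the quoted parallax, since the distance in parsecs is 1000 divided by the parallax in milliarcseconds, and 1 parsec is about 3.26156 light-years. A quick check:

```python
# Verify the quoted distance from the quoted parallax.
parallax_mas = 21.04                 # annual parallax in milliarcseconds
distance_pc = 1000.0 / parallax_mas  # distance in parsecs
distance_ly = distance_pc * 3.26156  # 1 parsec ~ 3.26156 light-years
print(f"{distance_pc:.1f} pc  ~ {distance_ly:.0f} ly")   # ~47.5 pc, ~155 ly
```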
Epsilon Andromedae
Astronomy
425
238,680
https://en.wikipedia.org/wiki/Avogadro%27s%20law
Avogadro's law (sometimes referred to as Avogadro's hypothesis or Avogadro's principle) or the Avogadro–Ampère hypothesis is an experimental gas law relating the volume of a gas to the amount of substance of gas present. The law is a specific case of the ideal gas law. A modern statement is: Avogadro's law states that "equal volumes of all gases, at the same temperature and pressure, have the same number of molecules." For a given mass of an ideal gas, the volume and amount (moles) of the gas are directly proportional if the temperature and pressure are constant. The law is named after Amedeo Avogadro who, in 1812, hypothesized that two given samples of an ideal gas, of the same volume and at the same temperature and pressure, contain the same number of molecules. As an example, equal volumes of gaseous hydrogen and nitrogen contain the same number of molecules when they are at the same temperature and pressure, and display ideal gas behavior. In practice, real gases show small deviations from ideal behavior and the law holds only approximately, but it is still a useful approximation for scientists. Mathematical definition The law can be written as V ∝ n, or V/n = k, where V is the volume of the gas; n is the amount of substance of the gas (measured in moles); and k is a constant for a given temperature and pressure. This law describes how, under the same conditions of temperature and pressure, equal volumes of all gases contain the same number of molecules. For comparing the same substance under two different sets of conditions, the law can be usefully expressed as V1/n1 = V2/n2. The equation shows that, as the number of moles of gas increases, the volume of the gas also increases in proportion. Similarly, if the number of moles of gas is decreased, then the volume also decreases. Thus, the number of molecules or atoms in a specific volume of ideal gas is independent of their size or the molar mass of the gas. Derivation from the ideal gas law The derivation of Avogadro's law follows directly from the ideal gas law, i.e. PV = nRT, where R is the gas constant, T is the Kelvin temperature, and P is the pressure (in pascals). Solving for V/n, we thus obtain V/n = RT/P. Compare that to V/n = k, which is a constant for a fixed pressure and a fixed temperature. An equivalent formulation of the ideal gas law can be written using the Boltzmann constant kB, as PV = NkBT, where N is the number of particles in the gas, and the ratio of R over kB is equal to the Avogadro constant. In this form, V/N is a constant, and we have V/N = k′ = kBT/P. If T and P are taken at standard conditions for temperature and pressure (STP), then k′ = 1/n0, where n0 is the Loschmidt constant. Historical account and influence Avogadro's hypothesis (as it was known originally) was formulated in the same spirit as earlier empirical gas laws like Boyle's law (1662), Charles's law (1787) and Gay-Lussac's law (1808). The hypothesis was first published by Amedeo Avogadro in 1811, and it reconciled Dalton's atomic theory with the "incompatible" idea of Joseph Louis Gay-Lussac that some gases were composed of different fundamental substances (molecules) in integer proportions. In 1814, independently of Avogadro, André-Marie Ampère published the same law with similar conclusions. As Ampère was better known in France, the hypothesis was usually referred to there as Ampère's hypothesis, and later also as the Avogadro–Ampère hypothesis or even the Ampère–Avogadro hypothesis. 
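A small worked example of the two-condition form V1/n1 = V2/n2 given above, using arbitrary illustrative numbers:

```python
# Worked example of V1/n1 = V2/n2 with made-up numbers: if 2.0 mol of an ideal
# gas occupies 48.0 L at some temperature and pressure, then 3.0 mol at the
# same temperature and pressure must occupy proportionally more volume.
n1, V1 = 2.0, 48.0        # mol, litres (arbitrary illustration values)
n2 = 3.0                  # mol
V2 = V1 * n2 / n1         # from V1/n1 = V2/n2
print(V2)                 # 72.0 L
```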
Experimental studies carried out by Charles Frédéric Gerhardt and Auguste Laurent on organic chemistry demonstrated that Avogadro's law explained why equal quantities of molecules in a gas occupy the same volume. Nevertheless, related experiments with some inorganic substances showed seeming exceptions to the law. This apparent contradiction was finally resolved by Stanislao Cannizzaro, as announced at the Karlsruhe Congress in 1860, four years after Avogadro's death. He explained that these exceptions were due to molecular dissociation at certain temperatures, and that Avogadro's law determined not only molecular masses, but atomic masses as well. Ideal gas law Boyle's, Charles's and Gay-Lussac's laws, together with Avogadro's law, were combined by Émile Clapeyron in 1834, giving rise to the ideal gas law. At the end of the 19th century, later developments from scientists like August Krönig, Rudolf Clausius, James Clerk Maxwell and Ludwig Boltzmann gave rise to the kinetic theory of gases, a microscopic theory from which the ideal gas law can be derived as a statistical result of the movement of atoms/molecules in a gas. Avogadro constant Avogadro's law provides a way to calculate the quantity of gas in a receptacle. Thanks to this discovery, Johann Josef Loschmidt, in 1865, was able for the first time to estimate the size of a molecule. His calculation gave rise to the concept of the Loschmidt constant, a ratio between macroscopic and atomic quantities. In 1910, Millikan's oil drop experiment determined the charge of the electron; using it with the Faraday constant (derived by Michael Faraday in 1834), one is able to determine the number of particles in a mole of substance. At the same time, precision experiments by Jean Baptiste Perrin led to the definition of the Avogadro number as the number of molecules in one gram-molecule of oxygen. Perrin named the number to honor Avogadro for his discovery of the namesake law. Later standardization of the International System of Units led to the modern definition of the Avogadro constant. Molar volume At standard temperature and pressure (100 kPa and 273.15 K), we can use Avogadro's law to find the molar volume of an ideal gas: Vm = V/n = RT/P = (8.314 J/(mol·K) × 273.15 K) / (100,000 Pa) ≈ 0.02271 m3/mol ≈ 22.71 L/mol. Similarly, at standard atmospheric pressure (101.325 kPa) and 0 °C (273.15 K): Vm = RT/P = (8.314 J/(mol·K) × 273.15 K) / (101,325 Pa) ≈ 22.41 L/mol. Notes References Gas laws Amount of substance
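The molar volumes quoted above can be verified directly from the ideal gas law; a short sketch using the standard value of R:

```python
# Verify the molar volumes quoted above from the ideal gas law, Vm = R*T/P.
R = 8.31446          # J/(mol*K)
T = 273.15           # K

for label, P in [("STP, 100 kPa", 100_000.0), ("1 atm, 101.325 kPa", 101_325.0)]:
    Vm = R * T / P                       # cubic metres per mole
    print(f"{label}: {Vm * 1000:.2f} L/mol")
# STP, 100 kPa: 22.71 L/mol
# 1 atm, 101.325 kPa: 22.41 L/mol
```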
Avogadro's law
Physics,Chemistry,Mathematics
1,275
11,068,905
https://en.wikipedia.org/wiki/Grovesinia%20pyramidalis
Grovesinia pyramidalis is a plant pathogen. References External links Index Fungorum USDA ARS Fungal Database Fungal plant pathogens and diseases Sclerotiniaceae Fungus species Fungi described in 1983
Grovesinia pyramidalis
Biology
40
542,645
https://en.wikipedia.org/wiki/Amylose
Amylose is a polysaccharide made of α-D-glucose units, bonded to each other through α(1→4) glycosidic bonds. It is one of the two components of starch, making up approximately 20–25% of it. Because of its tightly packed helical structure, amylose is more resistant to digestion than other starch molecules and is therefore an important form of resistant starch. Structure Amylose is made up of α(1→4) bound glucose molecules. The carbon atoms on glucose are numbered, starting at the aldehyde (C=O) carbon, so, in amylose, the 1-carbon on one glucose molecule is linked to the 4-carbon on the next glucose molecule (α(1→4) bonds). The structural formula of amylose is pictured at right. The number of repeated glucose subunits (n) is usually in the range of 300 to 3000, but can be many thousands. There are three main forms of amylose chains can take. It can exist in a disordered amorphous conformation or two different helical forms. It can bind with itself in a double helix (A or B form), or it can bind with another hydrophobic guest molecule such as iodine, a fatty acid, or an aromatic compound. This is known as the V form and is how amylopectin binds to amylose in the structure of starch. Within this group, there are many different variations. Each is notated with V and then a subscript indicating the number of glucose units per turn. The most common is the V6 form, which has six glucose units a turn. V8 and possibly V7 forms exist as well. These provide an even larger space for the guest molecule to bind. This linear structure can have some rotation around the phi and psi angles, but for the most part bound glucose ring oxygens lie on one side of the structure. The α(1→4) structure promotes the formation of a helix structure, making it possible for hydrogen bonds to form between the oxygen atoms bound at the 2-carbon of one glucose molecule and the 3-carbon of the next glucose molecule. Fiber X-ray diffraction analysis coupled with computer-based structure refinement has found A-, B-, and C- polymorphs of amylose. Each form corresponds to either the A-, the B-, or the C- starch forms. A- and B- structures have different helical crystal structures and water contents, whereas the C- structure is a mixture of A- and B- unit cells, resulting in an intermediate packing density between the two forms. Physical properties Because the long linear chains of amylose more readily crystallize than amylopectin (which has short, highly branched chains), high-amylose starch is more resistant to digestion. Unlike amylopectin, amylose is insoluble in water. It also reduces the crystallinity of amylopectin and how easily water can infiltrate the starch. The higher the amylose content, the less expansion potential and the lower the gel strength for the same starch concentration. This can be countered partially by increasing the granule size. Function Amylose is important in plant energy storage. It is less readily digested than amylopectin; however, because of its helical structure, it takes up less space than amylopectin. As a result, it is the preferred starch for storage in plants. It makes up about 30% of the stored starch in plants, though the percentage varies by species and variety. The digestive enzyme α-amylase breaks down starch molecules into maltotriose and maltose, which can be used as sources of energy. Amylose is also an important thickener, water binder, emulsion stabilizer, and gelling agent in industrial and food-based contexts. 
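Given the range of 300 to 3000 glucose units per chain quoted above, an approximate molar-mass range follows from the mass of an anhydroglucose residue (about 162 g/mol). The sketch below treats that residue mass and the terminal water correction as approximations:

```python
# Rough molar-mass range for amylose chains of 300-3000 glucose units.
# Each alpha(1->4)-linked anhydroglucose residue contributes ~162.14 g/mol;
# one extra water (~18.02 g/mol) accounts for the chain ends.
RESIDUE_MASS = 162.14   # g/mol per glucose residue (approximate)
WATER = 18.02           # g/mol

for n in (300, 3000):
    mass_kda = (n * RESIDUE_MASS + WATER) / 1000.0
    print(f"n = {n:4d}: ~{mass_kda:.0f} kDa")
# n =  300: ~49 kDa
# n = 3000: ~486 kDa
```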
Loose helical amylose chains have a hydrophobic interior that can bind to hydrophobic molecules such as lipids and aromatic compounds. The one problem with this is that, when it crystallizes or associates, it can lose some stability, often releasing water in the process (syneresis). When amylose concentration is increased, gel stickiness decreases but firmness increases. When other things, including amylopectin, bind to amylose, the viscosity can be affected, but incorporating κ-carrageenan, alginate, xanthan gum, or low-molecular-weight sugars can reduce the loss in stability. The ability to bind water can add substance to food, possibly serving as a fat replacement. For example, amylose is responsible for causing white sauce to thicken, but, upon cooling, the solid and the water will partly separate. Amylose is known for its good film-forming properties, useful in food packaging. Excellent film-forming behavior of amylose was studied already in 1950s. Amylose films are better for both barrier properties and mechanical properties when compared to the amylopectin films. In a laboratory setting, it can act as a marker. Iodine molecules fit neatly inside the helical structure of amylose, binding with the starch polymer that absorbs certain known wavelengths of light. Hence, a common test is the iodine test for starch. If starch is mixed with a small amount of yellow iodine solution, a blue-black color will be observed. The intensity of the color can be tested with a colorimeter, using a red filter to discern the concentration of starch present in the solution. It is also possible to use starch as an indicator in titrations involving iodine reduction. It is also used in amylose magnetic beads and resin to separate maltose-binding protein. Recent studies High-amylose varieties of rice, the less sticky long-grain rice, have a much lower glycemic load, which could be beneficial for diabetics. Researchers have identified the Granule Bound Starch Synthase (GBSS) as the enzyme that specifically elongates amylose during starch biosynthesis in plants. The waxy locus in maize encodes for the GBSS protein. Mutants lacking the GBSS protein produce starch containing only amylopectin, such as in waxy corn. In Arabidopsis leaves, another gene, encoding the Protein Targeting to STarch (PTST) protein, is required in addition to GBSS for amylose synthesis. Mutants lacking either protein produce starch without amylose. Genetically modified potato cultivar Amflora by BASF Plant Science was developed to not produce amylose. See also Amflora, genetically modified low amylose potato (high in amylopectin) Amylomaize, high amylose maize starch Russet Burbank potato, high amylose potato cultivar References External links Polysaccharides Starch
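The colorimetric starch-iodine estimate described above amounts to applying the Beer-Lambert relation A = εlc against a calibration curve. The sketch below shows the idea; the calibration slope and the measured absorbance are invented purely for illustration.

```python
# Sketch of the starch-iodine colorimetric estimate described above, using the
# Beer-Lambert relation A = slope * path_length * concentration.
def starch_concentration(absorbance, slope, path_length_cm=1.0):
    """Return concentration (mg/mL) from absorbance via A = slope * l * c."""
    return absorbance / (slope * path_length_cm)

# Hypothetical calibration slope obtained from standards of known concentration
calibration_slope = 2.4     # absorbance per (mg/mL) per cm (made-up value)
measured_A = 0.60           # absorbance of the unknown read through a red filter

print(f"estimated starch: {starch_concentration(measured_A, calibration_slope):.2f} mg/mL")
# estimated starch: 0.25 mg/mL
```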
Amylose
Chemistry
1,464
37,542,396
https://en.wikipedia.org/wiki/Christen%20Larsen%20House
The Christen Larsen House at 990 N. 400 E in Pleasant Grove, Utah, was built . It was listed on the National Register of Historic Places in 1987. It is a one-story pair-house built of soft tufa rock. See also Neils Peter Larsen House, also NRHP-listed in Pleasant Grove References Pair-houses Houses on the National Register of Historic Places in Utah Neoclassical architecture in Utah Houses completed in 1876 Houses in Utah County, Utah National Register of Historic Places in Utah County, Utah Buildings and structures in Pleasant Grove, Utah
Christen Larsen House
Engineering
116
61,778,709
https://en.wikipedia.org/wiki/Munc-13
Munc-13 (an acronym for mammalian uncoordinated-13) is a protein that complexes with RIM and likely forms part of the cellular structure that anchors synaptic vesicles. Its activation by DAG appears to be important for maintaining a high rate of synaptic release during prolonged repetitive stimulation. References Proteins Secretory vesicles Neurophysiology
Munc-13
Chemistry
76
8,941,807
https://en.wikipedia.org/wiki/Lunar%20Explorers%20Society
The Lunar Explorers Society is an organisation dedicated to achieving permanent presence of humanity on the Moon. The Society is open to all people in the world with an interest in lunar exploration. It hopes to bring the best of humanity to the Moon, and to bring the benefits of the Moon to all people on Earth. The last human mission to the Moon was in 1972, and though the first exploration efforts provided a huge scientific return, no further human exploration of the Moon has been done. However, several robotic lunar exploration missions have been conducted since the beginning of the 1990s. These missions fuelled the desire to return to the Moon among many lunar enthusiasts, and this was the background for the establishment of the Lunar Explorers Society in 2000. The founding members saw the need for an organisation where the members could share knowledge, join forces and pursue their ultimate goal: To establish a permanent human presence on the Moon to the benefit of all people on Earth. Objectives To support the establishment of a permanent human presence on the Moon To promote international cooperation between scientists working with Lunar exploration by providing a neutral platform for their discussions To raise awareness of what could be achieved by returning to the Moon through educational and outreach activities To promote the peaceful and fair use of the resources available on the Moon, to the benefit of mankind The Young Lunar Explorers Award The Young Lunar Explorers Award is presented to a candidate who has been instrumental in promoting lunar exploration among young lunar explorers once per year, at the annual International Conference on the Exploration and Utilisation of the Moon (ICEUM). The awards have been given to the following winners: 2008: The Google Lunar X Prize Foundation 2007: The Lunar Explorers Society 2005: The SSETI Express team 2004: International Space University (ISU) History The Lunar Explorers Society was founded 14 July 2000, during the 4th International Conference on the Exploration and Utilisation of the Moon (ICEUM4). There, the 156 ICEUM4 participants signed the founding declaration of the Lunar Explorers Society. The International Lunar Exploration Working Group (ILEWG) has supported the Lunar Explorers Society since the beginning External links Lunar Explorers Society web site Moon Society Lunarpedia Exploration of the Moon Space advocacy organizations Scientific organizations established in 2000
Lunar Explorers Society
Astronomy
441
9,557,604
https://en.wikipedia.org/wiki/Tire%20uniformity
Tire uniformity refers to the dynamic mechanical properties of pneumatic tires as strictly defined by a set of measurement standards and test conditions accepted by global tire and car makers. These standards include the parameters of radial force variation, lateral force variation, conicity, ply steer, radial run-out, lateral run-out, and sidewall bulge. Tire makers worldwide employ tire uniformity measurement as a way to identify poorly performing tires so they are not sold to the marketplace. Both tire and vehicle manufacturers seek to improve tire uniformity in order to improve vehicle ride comfort. Force variation background The circumference of the tire can be modeled as a series of very small spring elements whose spring constants vary according to manufacturing conditions. These spring elements are compressed as they enter the road contact area, and recover as they exit the footprint. Variation in the spring constants in both radial and lateral directions causes variations in the compressive and restorative forces as the tire rotates. Given a perfect tire, running on a perfectly smooth roadway, the force exerted between the car and the tire will be constant. However, a normally manufactured tire running on a perfectly smooth roadway will exert a varying force into the vehicle that will repeat every rotation of the tire. This variation is the source of various ride disturbances. Both tire and car makers seek to reduce such disturbances in order to improve the dynamic performance of the vehicle. Tire uniformity parameters Axes of measurement Tire forces are divided into three axes: radial, lateral, and tangential (or fore-aft). The radial axis runs from the tire center toward the tread, and is the vertical axis running from the roadway through the tire center toward the vehicle. This axis supports the vehicle's weight. The lateral axis runs sideways across the tread. This axis is parallel to the tire mounting axle on the vehicle. The tangential axis is the one in the direction of the tire travel. Radial force variation In so far as the radial force is the one acting upward to support the vehicle, radial force variation describes the change in this force as the tire rotates under load. As the tire rotates and spring elements with different spring constants enter and exit the contact area, the force will change. Consider a tire supporting a given load while running on a perfectly smooth roadway. It would be typical for the force to vary up and down around its mean value; this variation is characterized as the radial force variation (RFV). The radial force variation can be expressed as a peak-to-peak value, which is the maximum minus minimum value, or any harmonic value as described below. Some tire manufacturers mark the sidewall with a red dot to indicate the location of maximal radial force and runout, the high spot. A yellow dot indicates the point of least weight. Use of the dots is specified in the Technology Maintenance Council's RP243 performance standard. To compensate for this variation, tires are supposed to be installed with the red dot near the valve stem, assuming the valve stem is at the low point, or with the yellow dot near the valve stem, assuming the valve stem is at the heavy point. Harmonic analysis Radial force variation, as well as all other force variation measurements, can be shown as a complex waveform. This waveform can be expressed according to its harmonics by applying the Fourier transform (FT). FT permits one to parameterize various aspects of the tire dynamic behavior.
The first harmonic, expressed as radial force first harmonic (RF1H), describes the force variation magnitude that exerts a pulse into the vehicle one time for each rotation. Radial force second harmonic (RF2H) expresses the magnitude of the radial force that exerts a pulse twice per revolution, and so on. Often, these harmonics have known causes, and can be used to diagnose production problems. For example, a tire mold installed with 8 segments may thermally deform so as to induce an eighth harmonic, so the presence of a high radial force eighth harmonic (RF8H) would point to a mold sector parting problem. RF1H is the primary source of ride disturbances, followed by RF2H. Higher harmonics are less problematic because the rotational speed of the tire at highway speeds multiplied by the harmonic number places the disturbances at frequencies so high that they are damped or overcome by other vehicle dynamic conditions. Lateral force variation Insofar as the lateral force is the one acting side-to-side along the tire axle, lateral force variation describes the change in this force as the tire rotates under load. As the tire rotates and spring elements with different spring constants enter and exit the contact area, the lateral force will change. As the tire rotates it may exert a net lateral force, causing steering pull in one direction. It would be typical for this force to vary up and down around its mean value; this variation is characterized as the lateral force variation (LFV). The lateral force variation can be expressed as a peak-to-peak value, which is the maximum minus minimum value, or any harmonic value as described above. Lateral force is signed, such that when mounted on the vehicle, the lateral force may be positive, making the vehicle pull to the left, or negative, pulling to the right. Tangential force variation Insofar as the tangential force is the one acting in the direction of travel, tangential force variation describes the change in this force as the tire rotates under load. As the tire rotates and spring elements with different spring constants enter and exit the contact area, the tangential force will change. As the tire rotates it exerts a traction force to accelerate the vehicle and to maintain its speed. Under steady-state conditions it would be typical for this force to vary up and down around its mean value. This variation would be characterized as tangential force variation (TFV). In a constant velocity test condition, the tangential force variation would be manifested as a small speed fluctuation occurring every rotation due to the change in rolling radius of the tire. Conicity Conicity is a parameter based on lateral force behavior. It is the characteristic that describes the tire's tendency to roll like a cone. This tendency affects the steering performance of the vehicle. In order to determine conicity, lateral force must be measured in both the clockwise (LFCW) and counterclockwise (LFCCW) directions. Conicity is calculated as one-half the difference of the values, keeping in mind that clockwise and counterclockwise values have opposite signs. Conicity is an important parameter in production testing. In many high-performance cars, tires with equal conicity are mounted on the left and right sides of the car in order that their conicity effects will cancel each other and generate a smoother ride performance, with little steering effect. This necessitates the tire maker measuring conicity and sorting tires into groups of like values.
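A minimal sketch, in Python, of the harmonic decomposition described above: a sampled force waveform covering one revolution is passed through a fast Fourier transform, and the magnitude of the k-th harmonic gives RFkH while the maximum-minus-minimum gives the peak-to-peak value. The waveform, sample count, and function name here are illustrative assumptions, not part of any measurement standard.

```python
import numpy as np

def force_harmonics(force, n_harmonics=8):
    """Decompose one revolution of an evenly sampled force waveform into
    its peak-to-peak value and the magnitudes of the first harmonics."""
    force = np.asarray(force, dtype=float)
    n = force.size
    spectrum = np.fft.rfft(force)
    # Magnitude of the k-th harmonic (k pulses per revolution).
    harmonics = 2.0 * np.abs(spectrum[1:n_harmonics + 1]) / n
    peak_to_peak = force.max() - force.min()
    return peak_to_peak, harmonics

# Illustrative waveform: mean load plus a dominant first harmonic and a
# smaller eighth harmonic (e.g. from an eight-segment mold).
theta = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
radial_force = 4000 + 30 * np.sin(theta) + 8 * np.sin(8 * theta)

rfv_pp, rf_h = force_harmonics(radial_force)
print(f"peak-to-peak RFV: {rfv_pp:.1f}")
print(f"RF1H: {rf_h[0]:.1f}  RF8H: {rf_h[7]:.1f}")
```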
Ply steer Ply steer describes the lateral force a tire generates due to asymmetries in its carcass as it rolls forward with zero slip angle and may be called pseudo side slip. It is the characteristic that is usually described as the tire's tendency to “crab walk”, or move sideways while maintaining a straight-line orientation. This tendency affects the steering performance of the vehicle. In order to determine ply steer, the lateral force generated is measured as the tire rolls both forward and back, and ply steer is then calculated as one-half the sum of the values, keeping in mind that the values have opposite signs. Radial run-out Radial run-out (RRO) describes the deviation of the tire's roundness from a perfect circle. The radial run-out can be expressed as the peak-to-peak value as well as harmonic values. Radial run-out imparts an excitation into the vehicle in a manner similar to radial force variation. It is most often measured near the tire's centerline, although some tire makers have adopted measurement of radial run-out at three positions: left shoulder, center, and right shoulder. Some tire manufacturers mark the sidewall with a red dot to indicate the location of maximal radial force and runout. Lateral run-out Lateral run-out (LRO) describes the deviation of the tire's sidewall from a perfect plane. LRO can be expressed as the peak-to-peak value as well as harmonic values. LRO imparts an excitation into the vehicle in a manner similar to lateral force variation. LRO is most often measured in the upper sidewall, near the tread shoulder. Sidewall bulge and depression Given that the tire is an assembly of multiple components that are cured in a mold, there are many process variations that cause cured tires to be classified as rejects. Bulges and depressions in the sidewall are such defects. A bulge is a weak spot in the sidewall that expands when the tire is inflated. A depression is a strong spot that does not expand to the same extent as the surrounding area. Both are deemed visual defects. Tires are measured in production to identify those with excessive visual defects. Bulges may also indicate defective construction conditions such as missing cords, which pose a safety hazard. As a result, tire makers impose stringent inspection standards to identify tires with bulges. Sidewall bulge and depression are also referred to as bulge and dent, and bumpy sidewall. Tire uniformity measurement machines Tire uniformity machines are special-purpose machines that automatically inspect tires for the tire uniformity parameters described above. They consist of several subsystems, including tire handling, chucking, measurement rims, bead lubrication, inflation, load wheel, spindle drive, force measurement, and geometry measurement. The tire is first centered, and the bead areas are lubricated to assure a smooth fitment to the measurement rims. The tire is indexed into the test station and placed on the lower chuck. The upper chuck lowers to make contact with the upper bead. The tire is inflated to the set point pressure. The load wheel advances to contact the tire and apply the set loading force. The spindle drive accelerates the tire to the test speed. Once speed, force, and pressure are stable, load cells measure the force exerted on the load wheel by the tire. The force signal is processed in analog circuitry, and then analyzed to extract the measurement parameters.
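A small sketch of the conicity and ply steer calculations defined above, written directly from the stated rules (one-half the difference of the clockwise and counterclockwise lateral force readings, and one-half the sum of the forward and backward readings, with the opposite-sign convention noted in the text). The numeric readings are purely illustrative assumptions.

```python
def conicity(lf_cw, lf_ccw):
    # Conicity: one-half the difference of the clockwise and
    # counterclockwise lateral force values (which carry opposite signs).
    return 0.5 * (lf_cw - lf_ccw)

def ply_steer(lf_forward, lf_backward):
    # Ply steer: one-half the sum of the lateral forces measured rolling
    # forward and backward (which also carry opposite signs).
    return 0.5 * (lf_forward + lf_backward)

# Illustrative readings in newtons (hypothetical values).
print(conicity(lf_cw=55.0, lf_ccw=-35.0))             # 45.0
print(ply_steer(lf_forward=60.0, lf_backward=-40.0))   # 10.0
```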
Tires are marked according to various standards that may include radial force variation (RFV) high point angle, side of positive conicity, and conicity magnitude. Other types of uniformity machines There are numerous variations and innovations among several tire uniformity machine makers. The standard test speed for tire uniformity machines is 60 r/min against a standard load wheel, which approximates 5 miles per hour. High speed uniformity machines, which reach 250 km/h and higher, are used in research and development environments. High speed uniformity machines have also been introduced for production testing. Machines that combine force variation measurement with dynamic balance measurement are also in use. Tire uniformity correction Radial and lateral force variation can be reduced at the tire uniformity machine via grinding operations. In the center grind operation, a grinder is applied to the tread center to remove rubber at the high point of radial force variation. On the top and bottom tread shoulders, grinders are applied to reduce the size of the road contact area, or footprint, and the resulting force variation. Top and bottom grinders can be controlled independently to reduce conicity values. Grinders are also employed to correct excessive radial run-out. Effects of tire variations can also be reduced by mounting the tire in such a way that unbalanced rims and valve stems help compensate for imperfect tires. Geometry measurement systems Radial run-out, lateral run-out, conicity, and bulge measurements are also performed on the tire uniformity machine. There are several generations of measurement technologies in use. These include Contact Stylus, Capacitive Sensors, Fixed-Point Laser Sensors, and Sheet-of-Light Laser Sensors. Contact stylus Contact Stylus technology utilizes a touch-probe to ride along the tire surface as it rotates. Analog instrumentation senses the movement of the probe, and records the run-out waveform. When used to measure radial runout, the stylus is fitted to a large-area paddle that can span the voids in the tread pattern. When used to measure lateral runout on the sidewall the stylus runs in a very narrow smooth track. The contact stylus method is one of the earliest technologies, and requires considerable effort to maintain its mechanical performance. The small area-of-interest in the sidewall area limits the effectiveness in discerning sidewall bulges and depressions elsewhere on the sidewall. Capacitive sensors Capacitive Sensors generate a dielectric field between the tire and sensor. As the distance between the tire and the sensor varies, the voltage and/or current properties of the dielectric field change. Analog circuitry is employed to measure the field changes and record the run-out waveform. Capacitive sensors have a larger area-of-interest, on the order of 10 mm, compared to the very narrow contact stylus method. The capacitive sensor method is one of the earliest technologies, and has proven highly reliable; however, the sensor must be positioned very close to the tire surface during measurement, so collisions between tire and sensor have led to long-term maintenance problems. In addition, some sensors are very sensitive to moisture/humidity and produce erroneous readings. The 10 mm area-of-interest also means that bulge measurement is limited to a small portion of the tire.
Capacitive sensors employ void filtering to remove the effect of the voids between the tread lugs in radial runout measurement, and letter filtering to remove the effect of raised letters and ornamentation on the sidewall. Fixed-point laser sensors Fixed-Point Laser Sensors were developed as an alternative to the above methods. Lasers combine the narrow-track area-of-interest with a large stand off distance from the tire. In order to cover a larger area-of-interest, mechanical positioning systems have been employed to take readings at multiple positions in the sidewall. Fixed-Point Laser sensors employ void filtering to remove the effect of the voids between the tread lugs in radial run-out measurement, and letter filtering to remove the effect of raised letters and ornamentation on the sidewall. Sheet-of-light laser systems Sheet-of-light laser (SL) systems were introduced in 2003, and have emerged as the most capable and reliable run-out, bulge and depression measurement methods. Sheet-of-light sensors project a laser line instead of a laser point, and thereby create a very large area-of-interest. Sidewall sensors can easily span an area from the bead area to the tread shoulder, and inspect the complete sidewall for bulge and depression defects. Large radial sensors can span 300mm or more to cover the entire tread width. This enables characterization of RRO in multiple tracks. Sheet-of-light sensors also feature stand off distances large enough to assure no collisions with the tire. Two-dimensional tread void filtering and sidewall letter filtering are also employed to eliminate these characteristics from the runout measurements. References Tires Vehicle technology
Tire uniformity
Engineering
3,141
34,726,547
https://en.wikipedia.org/wiki/Denjoy%E2%80%93Koksma%20inequality
In mathematics, the Denjoy–Koksma inequality, introduced as a combination of the work of Arnaud Denjoy and the Koksma–Hlawka inequality of Jurjen Ferdinand Koksma, is a bound for Weyl sums of functions f of bounded variation. Statement Suppose that a map f from the circle T to itself has irrational rotation number α, and p/q is a rational approximation to α with p and q coprime, |α – p/q| < 1/q². Suppose that φ is a function of bounded variation, and μ a probability measure on the circle invariant under f. Then, for every point x of the circle, |φ(x) + φ(f(x)) + … + φ(f^(q−1)(x)) − q∫φ dμ| ≤ Var(φ), where Var(φ) denotes the total variation of φ. References Theorems in analysis Inequalities
Denjoy–Koksma inequality
Mathematics
141
57,387,353
https://en.wikipedia.org/wiki/National%20Association%20of%20Women%20Pharmacists
The National Association of Women Pharmacists was founded in London on 15 June 1905, following discussions between Margaret Elizabeth Buchanan and Isabella Skinner Clarke. Early meetings were held at Clarke's home. Membership was restricted to those who had passed the major or minor examination and 50 women joined immediately. By 1912 Buchanan claimed that practically all women practicing pharmacy were members. Buchanan served as its president at one point. Elsie Hooper (1879–1969) was the first secretary. She and other members joined the Women's Coronation Procession, a 40,000-strong march from Westminster to the Albert Hall, on 17 June 1911 in support of votes for women. In June 1911 the Chemist and Druggist carried photographs of women pharmacists in the march and reported "Miss Elsie Hooper, B.Sc., was in the Science Section, and several other women pharmacists did the two-and-a-half hours’ march." The association is supportive of, and collaborates with, the Royal Pharmaceutical Society, but is an independent organisation. The Annual General Meeting in 2019 decided to wind up the association at the end of 2019 but it will continue as a semi-autonomous network within the Pharmacists' Defence Association. References External links Organizations for women in science and technology Pharmacy organisations in the United Kingdom
National Association of Women Pharmacists
Technology
265
39,345,434
https://en.wikipedia.org/wiki/Inertial%20balance
An inertial balance is a device that allows the measurement of inertial mass (as opposed to gravitational mass for a regular balance) and that can be operated in the microgravity environment of space, where weight is negligible (e.g. in the International Space Station). The principle of operation is based on a vibrating spring-mass system. The frequency of vibration will depend on the unknown mass, being higher for lower mass. The object to be measured is placed in the inertial balance, and a manual initial displacement of the spring mechanism starts the oscillation. The time needed to complete a given number of cycles is measured. Knowing the characteristic spring constant and damping coefficient of the spring system, the mass of the object can be computed according to the harmonic oscillator model. Alternatively, a calibration of the device with known masses can be performed, so that the spring constant and any appreciable damping will implicitly be accounted for, and need not be separately known or estimated. See the data analysis PDF under External Links below for a discussion of several calibration approaches. See also Mass Harmonic oscillator External links Inertial mass measurement in the International Space Station (dead link?) NASA's instructions for constructing an inertial balance Inertial balance demonstration for the physics classroom Suggestions for inertial balance data analysis A Java applet for a spring-mass system, with three attached PDFs on SHM, driven/damped oscillation, and a spring-mass system with friction Measuring instruments
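A minimal sketch of the calibration approach mentioned above: the oscillation period is measured for a few known masses, T² is fitted as a linear function of mass (as the simple harmonic oscillator model predicts, since T² = 4π²(m + m_tray)/k), and the fitted line is inverted for an unknown sample. All masses, periods, and names below are illustrative assumptions.

```python
import numpy as np

# Calibration runs: known added masses (kg) and measured periods (s).
known_mass = np.array([0.00, 0.05, 0.10, 0.20])
period = np.array([0.412, 0.458, 0.499, 0.573])

# For a spring-mass oscillator, T^2 = (4*pi^2/k) * (m + m_tray),
# so T^2 is linear in the added mass m; fit that line.
slope, intercept = np.polyfit(known_mass, period**2, 1)

def mass_from_period(T):
    """Invert the calibration line to estimate an unknown added mass."""
    return (T**2 - intercept) / slope

print(f"estimated mass: {mass_from_period(0.60):.3f} kg")
```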
Inertial balance
Physics,Technology,Engineering
319
45,159,528
https://en.wikipedia.org/wiki/DeLano%20Award%20for%20Computational%20Biosciences
The DeLano Award for Computational Biosciences is a prize in the field of computational biology. It is awarded annually for "the most accessible and innovative development or application of computer technology to enhance research in the life sciences at the molecular level". The prize was established by the American Society for Biochemistry and Molecular Biology (ASBMB) in memory of Warren Lyford DeLano, an American bioinformatician. DeLano developed the PyMOL open source molecular viewer software and was an advocate for the increased adoption of open source practices in the sciences. DeLano died unexpectedly in 2009. Laureates include the Nobel Prize winner Michael Levitt, who was given the DeLano Award in 2014 for his work in computational bioscience. Laureates 2023 - Eytan Ruppin 2022 - Tatyana Sharpee 2020 - Yang Zhang 2019 - Brian Kuhlman 2018 - Chris Sander 2017 - Brian K. Shoichet 2016 - Todd O. Yeates 2015 - Vijay S. Pande 2014 - Michael Levitt 2013 - Helen M. Berman 2012 - Barry Honig 2011 - Axel T. Brunger See also List of biology awards List of awards in bioinformatics and computational biology References Bioinformatics Biology awards
DeLano Award for Computational Biosciences
Technology,Engineering,Biology
256
3,930,805
https://en.wikipedia.org/wiki/Disodium%20glutamate
Disodium glutamate, abbreviated DSG (Na2C5H7NO4), is a sodium salt of glutamic acid. It is used as a flavoring agent to impart umami flavor. Formation Disodium glutamate can be produced by neutralizing glutamic acid with two molar equivalents of sodium hydroxide (NaOH). See also Monosodium glutamate References Glutamates Organic sodium salts Umami enhancers
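A small sketch of the neutralization stoichiometry described above, computing how much NaOH the two-equivalent neutralization consumes and the theoretical mass of disodium glutamate produced. The atomic masses are standard values; the sample mass is an illustrative assumption.

```python
# Approximate atomic masses (g/mol).
MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999, "Na": 22.990}

def molar_mass(formula):
    """formula is a dict mapping element symbol to atom count."""
    return sum(MASS[el] * n for el, n in formula.items())

glutamic_acid = molar_mass({"C": 5, "H": 9, "N": 1, "O": 4})   # C5H9NO4
naoh = molar_mass({"Na": 1, "O": 1, "H": 1})                   # NaOH
dsg = molar_mass({"C": 5, "H": 7, "N": 1, "O": 4, "Na": 2})    # Na2C5H7NO4

sample = 10.0  # grams of glutamic acid (illustrative)
moles = sample / glutamic_acid
print(f"NaOH required: {2 * moles * naoh:.2f} g")               # two equivalents
print(f"theoretical disodium glutamate: {moles * dsg:.2f} g")
```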
Disodium glutamate
Chemistry
105
18,797,385
https://en.wikipedia.org/wiki/Aggregate%20Spend
Aggregate Spend is the process used in the United States to aggregate and monitor the total amount spent by healthcare manufacturers on individual healthcare professionals and organizations (HCP/O) through payments, gifts, honoraria, travel and other means. Also often referred to as the Physician Payments Sunshine Act, this initiative is a growing body of federal and state legislation intended to collectively address all or some of the following goals: (a) Provide transparency with regard to who, in the life sciences industry, is contributing what benefits to which physician; (b) Mandate statutory reports at least once a year; and, (c) Limit spend per physician. Organizations monitored include pharmaceutical, biotechnology and, in some states, medical device organizations. U.S. federal laws On September 6, 2007, Senator Chuck Grassley (R-Iowa) introduced the Physician Payments Sunshine Act of 2007 (S. 2029). In March 2008, Rep. Peter DeFazio (D-Oregon) and Rep. Pete Stark (D-California) introduced a slightly different companion bill in the House of Representatives (H.R. 5605). These bills were reintroduced in the 111th Congress as the Physician Payments Sunshine Act of 2009 (S. 301 and H.R. 3138), again by Senator Chuck Grassley and in the House of Representatives by Rep. Baron Hill (D-Indiana). The bills all aimed to replace the differing state laws with a single law, common to all 50 states. According to Ashley Glacel, the press secretary for the Senate Aging Committee, whose chairman, Herb Kohl, co-sponsored the bill, the Senate bill is more expansive because it also includes medical device makers. The bills would amend the Social Security Act "to provide for transparency in the relationship between physicians and manufacturers of drugs, devices, or medical supplies for which payment is made under Medicare, Medicaid, or SCHIP." The bill proposed that each quarter, beginning on January 1, 2008, companies or their agents which manufacture drugs, medical devices, or medical supplies would be required to disclose all payments over $25 in value made "to a physician, or to an entity that a physician is employed by, has tenure with, or has an ownership interest in". The bill would also require manufacturers to provide details on the date, value and nature of the payment, such as whether it was for "food, entertainment, or gifts", "trips or travel", "a product or other item provided for less than market value", "participation in a medical conference, continuing medical education, or other educational or informational program or seminar, provision of materials related to such a conference or educational or informational program or seminar, or remuneration for promoting or participating in such a conference or educational or informational program or seminar", "product rebates or discounts", "consulting fees or honoraria" or "any other economic benefit". Companies would be required to submit a summary report in electronic format. The proposed penalties for breaches were "not less than $10,000, but not more than $100,000", for each such failure. The proposed federal law would undermine a stronger Vermont law if passed, according to state officials and advocacy groups. The reporting threshold under the proposed federal law is $500, much higher than the $25 threshold found in a similar Vermont law passed five years ago. If passed, the federal bill would preempt the state law.
In May 2008, the Pharmaceutical Research and Manufacturers of America stated that they supported a revised version of the bill, but only on condition of "the continued inclusion of the provision that preempts state law". In a media statement, the PhRMA president, Billy Tauzin, stated that "PhRMA believes that preempting local and state marketing reporting or disclosure laws that have been enacted or are pending avoids a confusing myriad of local, state and federal requirements that confuse patients accessing the information and are overly burdensome and costly for those required to report." PPACA The federal bill was finally passed on March 21, 2010, as a provision under the Patient Protection and Affordable Care (PPAC) Act (https://www.cms.gov/LegislativeUpdate/downloads/PPACA.pdf), and several states — including California, Massachusetts, Minnesota, Maine, District of Columbia, West Virginia, Vermont and Nevada — have already passed their versions of the Sunshine Law. The federal law was due to go into effect from January 1, 2012, with the earliest reports (covering January - December 2012) mandated on or before March 31, 2013. The penalties range from $10,000 to $100,000 for each violation, and can go up to $1 million. In February 2013 the planned dates for implementation were changed to: earliest reports to cover August - December 2013; submission by March 31, 2014. In February 2014 CMS (The Centers for Medicare & Medicaid Services) advised that the planned submission dates and what would be submitted were changed. In essence this was because the required registration process, for those who would submit data and attest to the accuracy of that data, was not ready for use, nor were the supporting systems, people and processes for receiving the data. On February 18, Open Payments registration and data submission for applicable manufacturers and applicable GPOs opened with a two-phased approach for the first reporting year of the new program: Phase 1 (February 18 through March 31) includes user registration in CMS’ Enterprise Portal (the gateway to CMS’ Enterprise Identity Management system (EIDM)) and submission of corporate profile information and summary aggregate 2013 (August - December) payment data. Phase 2 (begins in May and extends for no fewer than 30 days) includes industry registration in the Open Payments system, submission of detailed 2013 (August - December) payment data, and legal attestation to the accuracy of the data. After Phase 2 submission is complete, physicians and teaching hospitals will have the opportunity to register with Open Payments and view the transactions reported under their name, prior to it being made available to the public. During this review period, any reported transactions may be disputed by the recipient. If a transfer of value is disputed, it will still be publicized, but remain flagged as disputed, until the dispute has been resolved. U.S. State Laws Aggregate Spend compliance has been affected by individual state law compliance, which requires healthcare manufacturers to address and collect distinct spend types to comply with disclosure requirements at the HCP/O aggregate level. Minnesota, West Virginia, Vermont, California, Nevada, and Washington D.C. all have some type of gift-giving limit or disclosure law. Starting in July 2009, the Massachusetts and Vermont gift ban laws became active, with penalties of $5,000 and $10,000 per violation respectively. Other states are evaluating similar options as well.
See also Bad Pharma (2012) by Ben Goldacre References Further reading "What's All the Commotion Over Aggregate Spend?", Thought Leadership Sales and Marketing Compliance, Volume 3, Issue 1, Fall 2009 "Sales & Marketing Compliance: Keeping up with Global and Local Challenges", PharmaVOICE, January 2007 "Pharmaceutical Company Payments to Physicians: Early Experiences With Disclosure Laws in Vermont and Minnesota", JAMA, Vol. 297 No. 11, March 21, 2007 External links "Tracking of Spend Data Widens", Pharmaceutical Executive, February 20, 2008 United States Senate Special Committee on Aging, June 27, 2007 Surgeons for Sale: Conflicts and Consultant Payments in the Medical Device Industry "A Free Lunch", Forbes, February 25, 2008 "Minnesota Limit on Gifts to Doctors May Catch On", The New York Times, October 12, 2007 Healthcare reform in the United States Life sciences industry
Aggregate Spend
Biology
1,589
23,158,917
https://en.wikipedia.org/wiki/Safety%20Investigation%20Authority
The Safety Investigation Authority of Finland (SIAF or SIA; literally "Accident Investigation Center") is the accident investigation authority of Finland. It investigates all major accidents, and all aviation, maritime, and rail accidents and incidents. SIAF is located within the Ministry of Justice, and is headquartered in Helsinki, Finland. The SIAF was previously known in English as the Accident Investigation Board of Finland. Organization The SIAF consists of five investigation branches: aviation, maritime, rail, other accidents, and exceptional events. The SIA has appointed a chief investigator to each. References External links Safety Investigation Authority Safety Investigation Authority Safety Investigation Authority Safety Investigation Act (525/2011) Ministry of Justice (Finland) Aviation in Finland Finland Rail accident investigators Organisations based in Helsinki Transport organisations based in Finland Transport safety organizations
Safety Investigation Authority
Technology
165
6,775,491
https://en.wikipedia.org/wiki/Victor%20Meyer%20apparatus
The Victor Meyer apparatus is the standard laboratory method for determining the molecular weight of a volatile liquid. It was developed by Viktor Meyer, who spelled his name Victor in publications at the time of its development. In this method, a known mass of a volatile solid or liquid under examination is converted into its vapour form by heating in a Victor Meyer's tube. The vapour displaces its own volume of air. The volume of air displaced at the experimental temperature and pressure is measured. Then the volume that the displaced air would occupy at standard temperature and pressure (STP) is calculated. Using this, the mass of substance that would yield 2.24 × 10⁻² m³ of vapour at STP is calculated. This value represents the molecular mass of the substance. The apparatus consists of an inner Victor Meyer's tube, the lower end of which is in the form of a bulb. The upper end of the tube has a side tube that leads to a trough filled with water. The Victor Meyer's tube is surrounded by an outer jacket. In the outer jacket, a liquid is placed which boils at a temperature at least 30 K higher than the substance under examination. A small quantity of glass-wool or an asbestos pad covers the lower end of the Victor Meyer's tube to prevent breakage when a glass bottle containing the substance under examination is dropped into it. Procedure The liquid in the outer jacket is heated until no more air escapes from the side tube. Then, a graduated tube filled with water is inverted over the side tube dipping in a trough filled with water. A small quantity of substance is weighed exactly in a small stoppered bottle and is dropped into the Victor Meyer's tube, which is sealed immediately. The bottle falls on the asbestos pad and its contents suddenly change into vapour, blow out the stopper, and displace an equal volume of air into the graduated tube. The volume of air displaced is measured by taking the graduated tube out, closing its mouth with the thumb, and dipping it in a jar filled with water. When the water levels inside and outside the tube are equal, the volume of air displaced is noted. The atmospheric pressure and laboratory temperature are noted. Victor Meyer method of alcohol distinction Victor Meyer suggested a method for determining the type of an alcohol (i.e. primary, secondary or tertiary). In this method the sample alcohol is treated with PI3 to give the iodoalkane, which is then treated with AgNO2 to give the nitroalkane. The nitroalkane is then treated with nitrous acid, which is generated from NaNO2 and HCl. The resulting solution is treated with KOH and the colour is observed. A red colour, a blue colour, and no colour indicate a primary, secondary, and tertiary alcohol respectively. References General Chemistry-John Russel by McGraw Hill International Editions 3rd edition University General Chemistry-An Introduction to Chemical Science edited by CNR Rao by McMillan Indian Ltd. Inorganic Chemistry, by P.L.Soni Tamil Nadu State Board class 11 textbook vol.1 page-31. Laboratory equipment
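A minimal sketch of the molecular-mass calculation described above: the measured volume of displaced air is corrected to STP with the combined gas law, and the molar mass then follows by scaling the sample mass to the molar volume of 2.24 × 10⁻² m³. The sample mass, laboratory temperature, pressure, and measured volume below are illustrative assumptions, not data from any particular experiment.

```python
def molar_mass_victor_meyer(sample_mass_g, volume_ml, temp_k, pressure_kpa):
    """Estimate molar mass from the air displaced in a Victor Meyer tube."""
    T_STP, P_STP = 273.15, 101.325     # standard temperature (K) and pressure (kPa)
    MOLAR_VOLUME_ML = 22_400.0         # 2.24e-2 m^3 of vapour per mole at STP

    # Correct the measured volume to STP using the combined gas law.
    volume_stp = volume_ml * (pressure_kpa / P_STP) * (T_STP / temp_k)
    # One mole of vapour occupies the molar volume, so scale the sample mass.
    return sample_mass_g * MOLAR_VOLUME_ML / volume_stp

# Illustrative measurement: 0.110 g of liquid displaces 34.0 mL of air
# at 300 K and 100.0 kPa.
print(f"{molar_mass_victor_meyer(0.110, 34.0, 300.0, 100.0):.1f} g/mol")
```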
Victor Meyer apparatus
Chemistry
608
562,999
https://en.wikipedia.org/wiki/4000%20%28number%29
4000 (four thousand) is the natural number following 3999 and preceding 4001. It is a decagonal number. Selected numbers in the range 4001–4999 4001 to 4099 4005 – triangular number 4007 – safe prime 4010 – magic constant of n × n normal magic square and n-queens problem for n = 20 4013 – balanced prime 4019 – Sophie Germain prime 4021 – prime of the form 2p-1 4027 – super-prime 4028 – sum of the first 45 primes 4030 – third weird number 4031 – sum of the cubes of the first six primes 4032 – pronic number 4033 – sixth super-Poulet number; strong pseudoprime in base 2 4057 – prime of the form 2p-1 4060 – tetrahedral number 4073 – Sophie Germain prime 4079 – safe prime 4091 – super-prime 4095 – triangular number and odd abundant number; number of divisors in the sum of the fifth and largest known unitary perfect number, largest Ramanujan–Nagell number (of the form 2ⁿ − 1) 4096 = 64² = 16³ = 8⁴ = 4⁶ = 2¹², smallest number with exactly 13 factors, a superperfect number 4100 to 4199 4104 = 2³ + 16³ = 9³ + 15³ 4127 – safe prime 4133 – super-prime 4139 – safe prime 4140 – Bell number 4141 – centered square number 4147 – smallest cyclic number in duodecimal, represented in base-12 notation as 2497₁₂: 2 × 4147dez = 4972₁₂, 3 × 4147dez = 7249₁₂, 4 × 4147dez = 9724₁₂ 4153 – super-prime 4160 – pronic number 4166 – centered heptagonal number 4167 = 7! − 6! − 5! − 4! − 3! − 2! − 1!, number of planar partitions of 14 4169 – a number of points of norm <= 10 in cubic lattice 4177 – prime of the form 2p-1 4181 – Fibonacci number, Markov number 4186 – triangular number 4187 – factor of R13, the record number of wickets taken in first-class cricket by Wilfred Rhodes 4199 – highly cototient number, product of three consecutive primes 4200 to 4299 4200 – nonagonal number, pentagonal pyramidal number, largely composite number 4210 – 11th semi-meandric number 4211 – Sophie Germain prime 4213 – Riordan number 4217 – super-prime, happy number 4219 – cuban prime of the form x = y + 1, centered hexagonal number 4225 = 65², centered octagonal number 4227 – sum of the first 46 primes 4240 – Leyland number 4257 – decagonal number 4259 – safe prime 4261 – prime of the form 2p-1 4271 – Sophie Germain prime 4273 – super-prime, number of non-isomorphic set-systems of weight 11 4278 – triangular number 4279 – little Schroeder number 4283 – safe prime 4289 – highly cototient number 4290 – pronic number 4300 to 4399 4320 – largely composite number 4324 – 23rd square pyramidal number 4325 – centered square number 4339 – super-prime, twin prime 4349 – Sophie Germain prime 4356 = 66², sum of the cubes of the first eleven integers 4357 – prime of the form 2p-1 4359 – perfect totient number 4369 – seventh super-Poulet number 4371 – triangular number 4373 – Sophie Germain prime 4374 – The largest number such that both it and the next number (4375) are 7-smooth 4375 – perfect totient number (the smallest not divisible by 3) 4391 – Sophie Germain prime 4397 – Year of Comet Hale–Bopp's return, super-prime 4400 to 4499 4400 – the number of missing persons in the sci-fi show The 4400 4409 – Sophie Germain prime, highly cototient number, balanced prime, 600th prime number 4410 – member of the Padovan sequence 4411 – centered heptagonal number 4421 – super-prime, alternating factorial 4422 – pronic number 4425 = 1⁵ + 2⁵ + 3⁵ + 4⁵ + 5⁵ 4438 – sum of the first 47 primes 4444 - repdigit 4446 – nonagonal number 4447 – cuban prime of the form x = y + 1 4457 – balanced prime 4463 – super-prime 4465 – triangular number 4481 – Sophie Germain prime 4489 = 67², centered
octagonal number 4495 – tetrahedral number 4500 to 4599 4503 – largest number not the sum of four or fewer squares of composites 4505 – fifth Zeisel number 4513 – centered square number 4516 – centered pentagonal number 4517 – super-prime, happy number 4522 – decagonal number 4547 – safe prime 4549 – super-prime 4556 – pronic number 4560 – triangular number 4567 – super-prime 4579 – octahedral number 4597 – balanced prime 4600 to 4699 4604 – sum of the only two known Wieferich primes, 1093 and 3511 4607 – Woodall number 4608 – 3-smooth number (2⁹×3²) 4619 – highly cototient number 4620 – largely composite number 4621 – prime of the form 2p-1 4624 = 68² = 17³ – 17² 4641 – magic constant of n × n normal magic square and n-queens problem for n = 21 4655 – number of free decominoes 4656 – triangular number 4657 – balanced prime 4661 – sum of the first 48 primes 4663 – super-prime, centered heptagonal number 4679 – safe prime 4680 – largely composite number 4681 – eighth super-Poulet number 4688 – 2-automorphic number 4689 – sum of divisors and number of divisors are both triangular numbers 4691 – balanced prime 4692 – pronic number 4699 – nonagonal number 4700 to 4799 4703 – safe prime 4705 = 48² + 49² = 17² + 18² + … + 26², centered square number 4727 – sum of the squares of the first twelve primes 4731 – centered pentagonal number 4733 – Sophie Germain prime 4753 – triangular number 4759 – super-prime 4761 = 69², centered octagonal number 4769 = number of square (0,1)-matrices without zero rows and with exactly 5 entries equal to 1 4787 – safe prime, super-prime 4788 – 14th Keith number 4793 – Sophie Germain prime 4795 – decagonal number 4799 – safe prime 4800 to 4899 4801 – super-prime, cuban prime of the form x = y + 2, smallest prime with a composite sum of digits in base 7 4830 – pronic number 4840 - square yards in an acre 4851 – triangular number, pentagonal pyramidal number 4862 – Catalan number 4871 – Sophie Germain prime 4877 – super-prime 4879 – 11th Kaprekar number 4888 – sum of the first 49 primes 4900 to 4999 4900 = 70², the only square-pyramidal square other than 1 4901 – centered square number 4913 = 17³ 4919 – Sophie Germain prime, safe prime 4922 – centered heptagonal number 4933 – super-prime 4941 – centered cube number 4943 – Sophie Germain prime, super-prime 4950 – triangular number, 12th Kaprekar number 4951 – centered pentagonal number 4957 – sum of three and five consecutive primes (1637 + 1657 + 1663, 977 + 983 + 991 + 997 + 1009) 4959 – nonagonal number 4960 – tetrahedral number; greater of fourth pair of Smith brothers 4970 – pronic number 4973 – the 666th prime 4991 – Lucas–Carmichael number 4993 – balanced prime 4999 – prime of the form Prime numbers There are 119 prime numbers between 4000 and 5000: 4001, 4003, 4007, 4013, 4019, 4021, 4027, 4049, 4051, 4057, 4073, 4079, 4091, 4093, 4099, 4111, 4127, 4129, 4133, 4139, 4153, 4157, 4159, 4177, 4201, 4211, 4217, 4219, 4229, 4231, 4241, 4243, 4253, 4259, 4261, 4271, 4273, 4283, 4289, 4297, 4327, 4337, 4339, 4349, 4357, 4363, 4373, 4391, 4397, 4409, 4421, 4423, 4441, 4447, 4451, 4457, 4463, 4481, 4483, 4493, 4507, 4513, 4517, 4519, 4523, 4547, 4549, 4561, 4567, 4583, 4591, 4597, 4603, 4621, 4637, 4639, 4643, 4649, 4651, 4657, 4663, 4673, 4679, 4691, 4703, 4721, 4723, 4729, 4733, 4751, 4759, 4783, 4787, 4789, 4793, 4799, 4801, 4813, 4817, 4831, 4861, 4871, 4877, 4889, 4903, 4909, 4919, 4931, 4933, 4937, 4943, 4951, 4957, 4967, 4969, 4973, 4987, 4993, 4999 References Integers
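A small sketch that checks the count stated above (119 primes between 4000 and 5000) with a simple sieve of Eratosthenes; the function name is just an illustrative choice.

```python
def primes_up_to(n):
    """Sieve of Eratosthenes returning all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p * p:n + 1:p] = [False] * len(range(p * p, n + 1, p))
    return [i for i, is_prime in enumerate(sieve) if is_prime]

between = [p for p in primes_up_to(5000) if 4000 < p < 5000]
print(len(between))   # 119
print(between[:5])    # [4001, 4003, 4007, 4013, 4019]
```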
4000 (number)
Mathematics
2,162
24,416,767
https://en.wikipedia.org/wiki/Syringodermataceae
Syringodermataceae is a family of brown algae. It includes two genera, Microzonia and Syringoderma. References Brown algae Brown algae families
Syringodermataceae
Biology
34
10,489,808
https://en.wikipedia.org/wiki/NGC%205653
NGC 5653 is an unbarred spiral galaxy in the constellation Boötes. It was discovered on March 13, 1785, by William Herschel and subsequently placed in the New General Catalogue. References IRAS F14280+3126 Hubble's infrared galaxy gallery External links Hubble's infrared galaxy gallery Boötes 5653 51814 09318 Unbarred spiral galaxies
NGC 5653
Astronomy
80
15,222,240
https://en.wikipedia.org/wiki/Dienogest
Dienogest, sold under the brand name Visanne among others, is a progestin medication which is used in birth control pills and in the treatment of endometriosis. It is also used in menopausal hormone therapy and to treat heavy periods. Dienogest is available both alone and in combination with estrogens. It is taken by mouth. Side effects of dienogest include menstrual irregularities, headaches, nausea, breast tenderness, depression, and acne, among others. Dienogest is a progestin, or a synthetic progestogen, and hence is an agonist of the progesterone receptor, the biological target of progestogens like progesterone. It is a unique progestogen, with strong effects in the uterus. The medication has some antiandrogenic activity, which may help to improve androgen-dependent symptoms like acne, and has no other important hormonal activity. Dienogest was discovered in 1979 and was introduced for medical use in 1995. Additional formulations of dienogest were approved between 2007 and 2010. It is sometimes referred to as a "fourth-generation" progestin. Dienogest is marketed widely throughout the world. It is available as a generic medication. Medical uses Birth control Dienogest is used primarily in birth control pills in combination with ethinylestradiol under the brand name Valette. It is also available in a quadriphasic birth control pill combined with estradiol valerate, marketed as Natazia in the United States and Qlaira in some European countries and Russia. Endometriosis Dienogest is approved as a standalone medication under the brand names Visanne and Dinagest in various places such as Europe, Australia, Japan, Singapore, and Malaysia for the treatment of endometriosis. It has been found to be equally effective as gonadotropin-releasing hormone agonists (GnRH agonists), such as leuprorelin, in the treatment of endometriosis. Heavy periods Birth control pills containing dienogest and estradiol valerate are approved in the United States for the treatment of menorrhagia (heavy menstrual bleeding). Menopausal symptoms Dienogest is used in combination with estradiol valerate in the treatment of menopausal symptoms in certain countries such as Germany and the Netherlands. Available forms Dienogest is available both alone and in combination with estrogens. The following formulations are available: Dienogest 1 mg oral tablets (Dinagest) and 2 mg oral tablets (Valette) (not available in U.S.) – indicated for endometriosis Dienogest 2 mg and estradiol valerate 3 mg oral tablets (Natazia) (U.S.) – indicated for contraception and menorrhagia 2 dark yellow tablets each containing 3 mg estradiol valerate 5 medium red tablets each containing 2 mg estradiol valerate and 2 mg dienogest 17 light yellow tablets each containing 2 mg estradiol valerate and 3 mg dienogest 2 dark red tablets each containing 1 mg estradiol valerate 2 white tablets (inert) Dienogest 2 to 3 mg and estradiol valerate 1 to 3 mg oral tablets (Qlaira) (not available in U.S.) 
– indicated for contraception Each dark yellow active tablet contains 3 mg estradiol valerate Each medium red active tablet contains 2 mg estradiol valerate and 2 mg dienogest Each light yellow active tablet contains 2 mg estradiol valerate and 3 mg dienogest Each dark red active tablet contains 1 mg estradiol valerate Dienogest 2 mg and ethinylestradiol 30 μg oral tablets (Valette) – indicated for contraception Dienogest 2 mg and estradiol valerate 1 or 2 mg oral tablets (various) – indicated for menopausal hormone therapy The availability of these formulations differs by country (see Availability). Contraindications Contraindications of dienogest include active venous thromboembolism, previous or current cardiovascular disease, diabetes with cardiovascular complications, previous or current severe liver disease or tumors, hormone-dependent cancers such as breast cancer, and undiagnosed vaginal bleeding. Side effects Side effects associated with dienogest are the same as those expected of a progestogen. They include menstrual irregularities, headaches, nausea, breast tenderness, depression, acne, weight gain, flatulence, and others. Dienogest produces no androgenic side effects and has little effect on metabolic and lipid hemostatic parameters. Birth control pills containing estradiol valerate/dienogest are associated with a significantly increased risk of venous thromboembolism. However, they are associated with a significantly lower risk of venous thromboembolism than birth control pills containing ethinylestradiol and a progestin. Overdose In safety studies, dienogest has been assessed in women with endometriosis at high doses of as much as 20 mg/day for up to 24 weeks and produced no clinically relevant effects on lipid metabolism, liver enzymes, the coagulatory system, or thyroid metabolism. Interactions Dienogest is metabolized mainly by the cytochrome P450 enzyme CYP3A4, and for this reason, inhibitors and inducers of CYP3A4 can alter the amount of exposure to dienogest when administered concomitantly with it. (For a list of CYP3A4 inhibitors and inducers, see here.) The strong CYP3A4 inhibitors ketoconazole and erythromycin have been found to increase exposure to dienogest by up to 3-fold, whereas the strong CYP3A4 inducer rifampicin (rifampin) was found to decrease steady-state and area-under-curve concentrations of dienogest by 50% and 80%, respectively. Pharmacology Pharmacodynamics Dienogest has progestogenic activity, possibly some antiprogestogenic activity, and has antiandrogenic activity. The medication does not interact with the estrogen receptor, the glucocorticoid receptor, or the mineralocorticoid receptor, and hence has no estrogenic, glucocorticoid, or antimineralocorticoid activity. Because of its relatively high selectivity as a progestogen, dienogest may have favorable safety and tolerability compared to various other progestins. Progestogenic activity Dienogest is an agonist of the progesterone receptor (PR), and hence is a progestogen. It has relatively weak affinity for the PR in vitro in human uterine tissue, about 10% that of progesterone. Despite its low affinity for the PR however, dienogest has high progestogenic activity in vivo. In addition, although its metabolites, such as 9α,10β-dihydrodienogest and 3α,5α-tetrahydrodienogest, have greater affinity for the PR than does dienogest itself, the medication is not considered to be a prodrug. 
Dienogest has been described as "special" progestogen, possessing low or moderate antigonadotropic efficacy but strong or very strong endometrial efficacy. In relation to its endometrial activity, dienogest is said to be one of the strongest progestogens available. The high endometrial activity of dienogest underlies its ability to stabilize the menstrual cycle when combined with either ethinylestradiol or estradiol valerate (which has lower relative effects on the uterus compared to ethinylestradiol) in birth control pills, and also its use in the treatment of endometriosis. The combination of most other progestins with estradiol or an estradiol ester like estradiol valerate as birth control pills was unsatisfactory due to a high incidence of irregular menstrual bleeding. This is a property that ethinylestradiol does not share with estradiol, because of its resistance to metabolism in the endometrium and hence its greater relative effects in this part of the body. In contrast to other progestins, due to its high endometrial efficacy, the combination of dienogest with estradiol valerate in birth control pills is able to prevent breakthrough bleeding, and is uniquely able to treat heavy menstrual bleeding. The absence of withdrawal bleeding, otherwise known as "silent menstruation", also may occur. Dienogest has antiovulatory potency that is similar to that of 17α-hydroxyprogesterone derivatives like cyproterone acetate but endometrial potency that is much stronger and similar to that of gonane 19-nortestosterone progestins like levonorgestrel. Unlike other progestogens, except in the case of its strong effects in the uterus, dienogest has been described as lacking antiestrogenic effects, and does not antagonize beneficial effects of estradiol, for instance in the metabolic and vascular systems. Dienogest showed some possible antiprogestogenic activity in one animal bioassay when administered before but not at the same time as progesterone. The minimum effective dose of oral dienogest required to inhibit ovulation is 1 mg/day. The inhibition of ovulation by dienogest occurs mainly via a direct peripheral action in the ovary of inhibiting folliculogenesis as opposed to a central action of inhibiting gonadotropin secretion. Oral therapy with 2 mg/day dienogest in cyclical premenopausal women reduced serum progesterone levels to anovulatory levels, but circulating levels of luteinizing hormone and follicle-stimulating hormone were not considerably affected. At this dosage, estradiol levels are reduced to early follicular phase levels of about 30 to 50 pg/mL. Such levels are insufficient for reactivation of endometrioses, but are sufficient to avoid menopausal-like symptoms such as hot flashes and bone loss. This is in contrast to gonadotropin-releasing hormone analogues (GnRH analogues), which suppress estradiol levels to lower concentrations and readily induce menopausal-like symptoms. Dienogest appears to have similar effects in the breasts as norethisterone acetate, and may likewise increase the risk of breast cancer when combined with an estrogen in postmenopausal women, although this has yet to be confirmed in clinical studies. Antigonadotropic effects Dienogest has been found to suppress testosterone levels in men by 43% at 2 mg/day, 70% at 5 mg/day, and 81% at 10 mg/day. The suppression of testosterone levels with 10 mg/day dienogest was comparable to that with 10 mg/day cyproterone acetate. 
In general, progestogens are able to suppress testosterone levels in men by a maximum of about 70 to 80% at sufficiently high doses. Antiandrogenic activity Dienogest is one of the only 19-nortestosterone derivative progestins that does not have androgenic properties. In fact, it is actually an antagonist of the androgen receptor (AR), and hence has antiandrogenic activity. The antiandrogenic activity of dienogest in the Hershberger test is about 30 to 40% of that of cyproterone acetate. It may be able to improve androgen-dependent symptoms such as acne and hirsutism. Metabolites of dienogest, such as 9α,10β-dihydrodienogest and 3α,5α-tetrahydrodienogest, show greater affinity for the AR than does dienogest itself. Dienogest has no affinity for sex hormone-binding globulin (SHBG), and hence does not displace testosterone or estradiol from this plasma protein or increase the free fractions of these hormones. Other activities Dienogest does not inhibit or induce CYP3A4, unlike many other related progestins. Because of this, it may have a lower propensity for drug interactions. Dienogest weakly stimulates the proliferation of MCF-7 breast cancer cells in vitro, an action that is independent of the classical PRs and is instead mediated via the progesterone receptor membrane component-1 (PGRMC1). Certain other progestins are also active in this assay, whereas progesterone acts neutrally. It is unclear if these findings may explain the different risks of breast cancer observed with progesterone and progestins in clinical studies. Pharmacokinetics Dienogest is rapidly absorbed with oral administration and has high bioavailability of approximately 90%. Peak levels of dienogest occur within approximately 2 hours after an oral dose. The pharmacokinetics of dienogest are linear; single oral doses of dienogest were found to result in maximal levels of 28 ng/mL with 1 mg, 54 ng/mL with 2 mg, 101 ng/mL with 4 mg, and 212 ng/mL with 8 mg. The corresponding area-under-the-curve levels were 306, 577, 1153, and 2293 ng/mL, respectively. Dienogest reaches steady-state concentrations within 6 days of continuous administration, and does not accumulate in the body. The plasma protein binding of dienogest is 90%, with a relatively high free fraction of 10%. It is exclusively bound to albumin, with no binding to SHBG or corticosteroid-binding globulin. The lack of affinity of dienogest for SHBG is in contrast to most other 19-nortestosterone progestins. The volume of distribution of dienogest is relatively low at 40 L. Dienogest is metabolized in the liver. Metabolic pathways of dienogest include reduction of its Δ4-3-keto group, hydroxylation mainly via CYP3A4, removal of its C17α cyanomethyl group, and conjugation. The metabolites of dienogest are quickly excreted and are said to be mostly inactive. The elimination half-life of dienogest is relatively short at approximately 7.5 to 10.7 hours. The short half-life of dienogest relative to other 19-nortestosterone progestins is in part due to its lack of binding to SHBG and hence prolongation in the circulation. The clearance of dienogest is 3 L/h. It is eliminated mainly in the urine, both as sulfate and glucuronide conjugates and as free steroid. Chemistry Dienogest, also known as δ9-17α-cyanomethyl-19-nortestosterone or as 17α-cyanomethylestra-4,9-dien-17β-ol-3-one, is a synthetic estrane steroid and a derivative of testosterone. 
It is a member of the estrane subgroup of the 19-nortestosterone family of progestins, but unlike most other 19-nortestosterone progestins, is not a derivative of norethisterone (17α-ethynyl-19-nortestosterone). This is because it uniquely possesses a cyanomethyl group (i.e., a nitrile group) at the C17α position rather than the usual ethynyl group. It is also unique among most 19-nortestosterone progestins in that it has a double bond between the C9 and C10 positions. Dienogest is the C17α cyanomethyl derivative of the anabolic–androgenic steroid (AAS) dienolone, as well as the C17α cyanomethyl analogue of the AAS methyldienolone (17α-methyldienolone) and ethyldienolone (17α-ethyldienolone). In terms of structure–activity relationships, the C17α cyanomethyl group of dienogest is responsible for its unique antiandrogenic instead of androgenic activity relative to other 19-nortestosterone progestins. A loss of ability to activate the AR is also seen with other testosterone derivatives with extended-length C17α substitutions such as topterone (propyltestosterone) (compare to the AAS ethyltestosterone and methyltestosterone) and allylestrenol (compare to the AAS ethylestrenol). Studies with steroids similar to dienogest (e.g., dienolone) have found that the introduction of a double bond between the C9 and C10 positions is associated with similar/almost unchanged affinity for the PR and AR. On the other hand, the C9(10) double bond of dienogest appears to inhibit metabolism via 5α-reductase and/or 5β-reductase, which is the major metabolic route for other 19-nortestosterone progestins like norethisterone, norgestrel, and etonogestrel, and this may serve to improve the metabolic stability and potency of dienogest. History Dienogest was synthesized in 1979 in Jena, Germany under the leadership of Kurt Ponsold, was initially referred to as STS-557. It was found that its potency was 10 times that of levonorgestrel. The first product on the market to contain dienogest was a combined birth control pill (with ethinylestradiol), Valette, introduced in 1995 and made by Jenapharm. In 2007, dienogest was introduced as Dinagest in Japan for the treatment of endometriosis, and it was subsequently marketed for this indication as Visanne in Europe and Australia in December 2009 and April 2010, respectively. Qlaira was introduced in Europe in 2009 and Natazia was introduced in the United States in 2010. Society and culture Generic names Dienogest is the generic name of the drug and its , , , and , while diénogest is its . It is also known by its synonyms dienogestril and cyanomethyldienolone as well as by its numerous former developmental code names including BAY 86-5258, M-18575, MJR-35, SH-660, SH-T00660AA, STS-557, and ZK-37659. Brand names Dienogest is marketed in combination with estradiol valerate as a birth control pill primarily under the brand names Natazia and Qlaira and in combination with ethinylestradiol as a birth control pill primarily under the brand name Valette, although these combinations are marketed under numerous other brand names as well. In the case of the dienogest and estradiol valerate birth control pill, these other brand names include Gianda and Klaira. Dienogest is also marketed in combination with estradiol valerate for use in menopausal hormone therapy under a variety of brand names including Climodien, Climodiène, Estradiol Valeraat / Dienogest, Klimodien, lafamme, Lafleur, Mevaren, Valerix, and Velbienne. 
Dienogest is marketed as a standalone medication for the treatment of endometriosis primarily under the brand name Visanne, but is also available under the brand names Alondra, Dinagest, Disven, Visabelle, and Visannette in various countries. Availability Dienogest is available both alone and in combination with ethinylestradiol and estradiol valerate widely throughout the world, including but not limited to Canada, Europe, Latin America, and Southeast Asia. It is available specifically as a standalone medication in Canada, Europe, Latin America, Russia, Australia, South Africa, Georgia, Israel, Japan, South Korea, Hong Kong, and Thailand. It is notably not available as a standalone medication in the United States or the United Kingdom. Research Dienogest has been studied as a form of male hormonal contraception. As of July 2018, dienogest is in phase III clinical trials in Japan for the treatment of adenomyosis and dysmenorrhea. The combination of estradiol valerate and dienogest is in pre-registration in Europe for the treatment of acne. Dienogest is also being evaluated for the potential treatment of anorexia nervosa. References Further reading External links Drugs developed by Bayer Conjugated dienes Enones Estranes Hormonal contraception Nitriles Progestogens Science and technology in East Germany Steroidal antiandrogens Tertiary alcohols
Dienogest
Chemistry
4,540
36,805,637
https://en.wikipedia.org/wiki/Zeta%20Pyxidis
Zeta Pyxidis (ζ Pyxidis) is a wide binary star system in the southern constellation of Pyxis. It is visible to the naked eye with a combined apparent visual magnitude of +4.88. Based upon an annual parallax shift of 13.35 mas as seen from Earth, it is located around 244 light years from the Sun. The yellow-hued primary, component A, is an evolved G-type giant star with a stellar classification of , where the suffix notation indicates it has anomalously weak lines of cyanogen. At the age of 1.88 billion years, it is a red clump star that is generating energy through the fusion of helium at its core. The primary has nearly double the mass of the Sun and is radiating 69 times the Sun's luminosity from its photosphere at an effective temperature of 4,876 K. The companion, component B, is a magnitude 9.59 star at an angular separation of 52.20 arc seconds along a position angle of 61°, as of 2010. References G-type giants Horizontal-branch stars Binary stars Pyxidis, Zeta Pyxis Durchmusterung objects 073898 042483 3433
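The quoted distance follows directly from the parallax. A quick illustrative check (a simple inverse relation, ignoring measurement uncertainty and any statistical corrections):

```python
# Distance from parallax: d [pc] = 1 / p [arcsec]
parallax_mas = 13.35
distance_pc = 1.0 / (parallax_mas / 1000.0)   # ~74.9 parsecs
distance_ly = distance_pc * 3.26156           # 1 pc ~ 3.26156 light-years
print(f"{distance_pc:.1f} pc ~ {distance_ly:.0f} ly")  # ~244 ly, matching the article
```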
Zeta Pyxidis
Astronomy
257
1,469,817
https://en.wikipedia.org/wiki/Strategic%20Negotiations
Strategic Negotiations: A Theory of Change in Labor-Management Relations, a 1994 Harvard Business School Press publication, is a book on negotiation by Richard E. Walton, Joel Cutcher-Gershenfeld, and Robert McKersie. The book explains concepts and strategies of negotiation to the reader.

Summary

In the book, the authors identify three primary negotiation strategies: "forcing," "fostering," and "escape". Each represents an overarching pattern of interaction that characterizes the negotiations. A strategy does not emerge all at once, but over time as a result of consistent patterns of interaction. A forcing strategy generally involves taking a "distributive" or win–lose approach to the negotiations, combined with a "divide and conquer" approach to internal relations in the other side, and an attitudinal approach that emphasizes uncertainty and distrust. By contrast, a fostering strategy generally involves taking an "integrative" or win–win approach to the negotiations, combined with a "consensus" approach to internal relations in both sides, and an attitudinal approach that emphasizes openness and understanding. "Escape" is a non-negotiations strategy in which one or more parties seek to end or undercut the relationship, which leads to a lose–lose situation. These strategy and process elements of negotiations can be combined with an understanding of structure in order to predict both substantive and relationship outcomes. See also Conflict resolution List of books about negotiation Negotiation theory References External links Strategic Negotiations: A Theory of Change in Labor-Management Relations Books about negotiation 1994 non-fiction books Personal development Harvard Business Publishing books
Strategic Negotiations
Biology
338
3,783,853
https://en.wikipedia.org/wiki/Bond%20graph
A bond graph is a graphical representation of a physical dynamic system. It allows the conversion of the system into a state-space representation. It is similar to a block diagram or signal-flow graph, with the major difference that the arcs in bond graphs represent bi-directional exchange of physical energy, while those in block diagrams and signal-flow graphs represent uni-directional flow of information. Bond graphs are multi-energy domain (e.g. mechanical, electrical, hydraulic, etc.) and domain neutral. This means a bond graph can incorporate multiple domains seamlessly. The bond graph is composed of the "bonds" which link together "single-port", "double-port" and "multi-port" elements (see below for details). Each bond represents the instantaneous flow of energy (dE/dt), or power. The flow in each bond is denoted by a pair of variables called power variables, akin to conjugate variables, whose product is the instantaneous power of the bond. The power variables are broken into two parts: flow and effort. For example, for the bond of an electrical system, the flow is the current, while the effort is the voltage. By multiplying current and voltage in this example you can get the instantaneous power of the bond. A bond has two other features described briefly here, and discussed in more detail below. One is the "half-arrow" sign convention. This defines the assumed direction of positive energy flow. As with electrical circuit diagrams and free-body diagrams, the choice of positive direction is arbitrary, with the caveat that the analyst must be consistent throughout with the chosen definition. The other feature is "causality". This is a vertical bar placed on only one end of the bond. It is not arbitrary. As described below, there are rules for assigning the proper causality to a given port, and rules for the precedence among ports. Causality explains the mathematical relationship between effort and flow. The positions of the causalities show which of the power variables are dependent and which are independent. If the dynamics of the physical system to be modeled operate on widely varying time scales, fast continuous-time behaviors can be modeled as instantaneous phenomena by using a hybrid bond graph. Bond graphs were invented by Henry Paynter.

Systems for bond graphs

Many physical systems can be expressed in the terms used in bond graphs, with each energy domain having its own pair of effort and flow variables (for example, voltage and current in the electrical domain, or force and velocity in the translational mechanical domain). A table of these domain-specific variables (not reproduced here) defines conventions for the active power, the Hermitian conjugate (the complex conjugate of the transpose, which for a scalar reduces to the complex conjugate), and the Euler notation for differentiation. Other systems include:
Thermodynamic power system (flow is entropy-rate and effort is temperature)
Electrochemical power system (flow is chemical activity and effort is chemical potential)
Thermochemical power system (flow is mass-rate and effort is mass specific enthalpy)
Macroeconomics currency-rate system (displacement is commodity and effort is price per commodity)
Microeconomics currency-rate system (displacement is population and effort is GDP per capita)
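To make the power-variable idea concrete, here is a minimal illustrative sketch (not from the article; the class name and example values are invented): in every domain, the instantaneous power carried by a bond is the product of its effort and flow variables.

```python
from dataclasses import dataclass

@dataclass
class Bond:
    effort: float  # e.g. voltage [V], force [N], pressure [Pa]
    flow: float    # e.g. current [A], velocity [m/s], volumetric flow [m^3/s]

    def power(self) -> float:
        return self.effort * self.flow  # instantaneous power [W]

electrical = Bond(effort=12.0, flow=0.5)   # 12 V, 0.5 A
mechanical = Bond(effort=100.0, flow=0.2)  # 100 N, 0.2 m/s
print(electrical.power(), mechanical.power())  # 6.0 W, 20.0 W
```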
Tetrahedron of state

The tetrahedron of state is a tetrahedron that graphically shows the conversion between effort and flow. The adjacent image shows the tetrahedron in its generalized form. The tetrahedron can be modified depending on the energy domain. Using the tetrahedron of state, one can find a mathematical relationship between any variables on the tetrahedron. This is done by following the arrows around the diagram and multiplying any constants along the way. For example, if you wanted to find the relationship between generalized flow and generalized displacement, you would start at the generalized flow f(t) and then integrate it to get the generalized displacement q(t). More examples of equations can be seen below.

q(t) = ∫ f(t) dt — relationship between generalized displacement and generalized flow.
e(t) = R f(t) — relationship between generalized flow and generalized effort.
f(t) = p(t) / I — relationship between generalized flow and generalized momentum.
p(t) = ∫ e(t) dt — relationship between generalized momentum and generalized effort.
e(t) = (1/C) ∫ f(t) dt — relationship between generalized flow and generalized effort, involving the constant C.

All of the mathematical relationships remain the same when switching energy domains; only the symbols change. This can be seen with the following examples.

x(t) = ∫ v(t) dt — relationship between displacement and velocity.
V(t) = R i(t) — relationship between current and voltage, also known as Ohm's law.
F(t) = k x(t) — relationship between force and displacement, also known as Hooke's law. The negative sign is dropped in this equation because the sign is factored into the way the arrow is pointing in the bond graph.

For power systems, the frequency of resonance of an inertia–compliance pair is ω₀ = 1/√(IC), the generalized form of the familiar electrical expression 1/√(LC). For power density systems, there is an analogous formula for the velocity of the resonance wave.
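As a rough numerical illustration of these generalized relations (an invented sketch, not from the article; the element values are arbitrary):

```python
import numpy as np

# Generalized variables: effort e, flow f, displacement q, momentum p,
# with illustrative element constants R (resistance), C (compliance), I (inertia).
R, C, I = 2.0, 0.5, 1.5
t = np.linspace(0.0, 10.0, 1001)
dt = t[1] - t[0]
f = np.sin(t)              # an arbitrary generalized flow signal

q = np.cumsum(f) * dt      # q = integral of f dt
e_R = R * f                # resistive relation, e = R f
e_C = q / C                # compliance relation, e = q / C
p = I * f                  # inertial relation, p = I f

omega_0 = 1.0 / np.sqrt(I * C)   # resonance of an I-C pair
print(f"natural frequency ~ {omega_0:.3f} rad/s")
```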
Components

If an engine is connected to a wheel through a shaft, the power is being transmitted in the rotational mechanical domain, meaning the effort and the flow are torque (τ) and angular velocity (ω) respectively. A word bond graph is a first step towards a bond graph, in which words define the components. As a word bond graph, this system would look like: A half-arrow is used to provide a sign convention, so if the engine is doing work when τ and ω are positive, then the diagram would be drawn: This system can also be represented in a more general method. This involves changing from using the words to symbols representing the same items. These symbols are based on the generalized form, as explained above. As the engine is applying a torque to the wheel, it will be represented as a source of effort for the system. The wheel can be represented by an impedance on the system. Further, the torque and angular velocity symbols are dropped and replaced with the generalized symbols for effort and flow. While not necessary in the example, it is common to number the bonds to keep track of them in equations. The simplified diagram can be seen below. Given that effort is always above the flow on the bond, it is also possible to drop the effort and flow symbols altogether without losing any relevant information. However, the bond number should not be dropped. The example can be seen below. The bond number will be important later when converting from the bond graph to state-space equations.

Association of elements

Series association: Suppose that an element relates effort to flow as e = k Φ(f), where Φ is a generic function (it can even differentiate or integrate its input) and k is the element's constant. Then, suppose that in a 1-junction you have many of this type of element. The total effort (the total voltage, in the electrical case) across the junction is then e = (k1 + k2 + ... + kn) Φ(f); that is, the element constants simply add.

Parallel association: Suppose that an element relates flow to effort as f = k Φ(e), where Φ is a generic function (it can even differentiate or integrate its input) and k is the element's constant. Then, suppose that in a 0-junction you have many of this type of element. Then it is valid that f = (k1 + k2 + ... + kn) Φ(e), so the constants again add.

Single-port elements

Single-port elements are elements in a bond graph that can have only one port.

Sources and sinks: Sources are elements that represent the input for a system. They will either input effort or flow into a system. They are denoted by a capital "S" with either a lower case "e" or "f" for effort or flow respectively. Sources will always have the arrow pointing away from the element. Examples of sources include motors (source of effort, torque), voltage sources (source of effort), and current sources (source of flow); in the diagrams, J indicates a junction. Sinks are elements that represent the output for a system. They are represented the same way as sources, but have the arrow pointing into the element instead of away from it.

Inertia: Inertia elements are denoted by a capital "I", and always have power flowing into them. Inertia elements are elements that store energy. Most commonly these are a mass for mechanical systems, and inductors for electrical systems.

Resistance: Resistance elements are denoted by a capital "R", and always have power flowing into them. Resistance elements are elements that dissipate energy. Most commonly these are dampers for mechanical systems, and resistors for electrical systems.

Compliance: Compliance elements are denoted by a capital "C", and always have power flowing into them. Compliance elements are elements that store potential energy. Most commonly these are springs for mechanical systems, and capacitors for electrical systems.

Two-port elements

These elements have two ports. They are used to change the power between or within a system. When converting from one to the other, no power is lost during the transfer. The elements have a constant that is given with them. The constant is called a transformer constant or gyrator constant depending on which element is being used. These constants are commonly displayed as a ratio below the element.

Transformer: A transformer applies a relationship between flow in and flow out, and effort in and effort out. Examples include an ideal electrical transformer or a lever. It is denoted TF, where r denotes the modulus of the transformer. This means e1 = r e2 and f2 = r f1, so that power is conserved (e1 f1 = e2 f2).

Gyrator: A gyrator applies a relationship between flow in and effort out, and effort in and flow out. An example of a gyrator is a DC motor, which converts voltage (electrical effort) into angular velocity (angular mechanical flow). It is denoted GY, meaning that e1 = r f2 and e2 = r f1, which again conserves power.
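The two-port relations above can be written as a tiny illustrative sketch (invented code, not from the article; it uses the conventions just stated, and conventions differ between texts):

```python
def transformer(e1: float, f1: float, r: float) -> tuple[float, float]:
    """Return (e2, f2) on the far side of an ideal transformer of modulus r."""
    e2 = e1 / r        # e1 = r * e2
    f2 = r * f1        # f2 = r * f1
    return e2, f2

def gyrator(e1: float, f1: float, r: float) -> tuple[float, float]:
    """Return (e2, f2) on the far side of an ideal gyrator of modulus r."""
    f2 = e1 / r        # e1 = r * f2
    e2 = r * f1        # e2 = r * f1
    return e2, f2

e1, f1, r = 10.0, 2.0, 4.0
for name, (e2, f2) in {"TF": transformer(e1, f1, r), "GY": gyrator(e1, f1, r)}.items():
    # Power must be conserved through an ideal two-port element.
    assert abs(e1 * f1 - e2 * f2) < 1e-12
    print(name, e2, f2)
```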
Multi-port elements

Junctions, unlike the other elements, can have any number of ports either in or out. Junctions split power across their ports. There are two distinct junctions, the 0-junction and the 1-junction, which differ only in how effort and flow are carried across. The same junction in series can be combined, but different junctions in series cannot.

0-junctions: 0-junctions behave such that all effort values (and their time integrals/derivatives) are equal across the bonds, but the sum of the flow values in equals the sum of the flow values out, or equivalently, all flows sum to zero. In an electrical circuit, the 0-junction is a node and represents a voltage shared by all components at that node. In a mechanical circuit, the 0-junction is a joint among components, and represents a force shared by all components connected to it. An example is shown below. Resulting equations (for a three-bond 0-junction, for example): e1 = e2 = e3 and f1 = f2 + f3.

1-junctions: 1-junctions behave opposite of 0-junctions. 1-junctions behave such that all flow values (and their time integrals/derivatives) are equal across the bonds, but the sum of the effort values in equals the sum of the effort values out, or equivalently, all efforts sum to zero. In an electrical circuit, the 1-junction represents a series connection among components. In a mechanical circuit, the 1-junction represents a velocity shared by all components connected to it. An example is shown below. Resulting equations (for a three-bond 1-junction, for example): f1 = f2 = f3 and e1 = e2 + e3.

Causality

Bond graphs have a notion of causality, indicating which side of a bond determines the instantaneous effort and which determines the instantaneous flow. In formulating the dynamic equations that describe the system, causality defines, for each modeling element, which variable is dependent and which is independent. By propagating the causation graphically from one modeling element to the other, analysis of large-scale models becomes easier. Completing causal assignment in a bond graph model allows the detection of modeling situations where an algebraic loop exists; that is the situation when a variable is defined recursively as a function of itself. As an example of causality, consider a capacitor in series with a battery. It is not physically possible to charge a capacitor instantly, so anything connected in parallel with a capacitor will necessarily have the same voltage (effort variable) as that across the capacitor. Similarly, an inductor cannot change flux instantly and so any component in series with an inductor will necessarily have the same flow as the inductor. Because capacitors and inductors are passive devices, they cannot maintain their respective voltage and flow indefinitely; the components to which they are attached will affect their respective voltage and flow, but only indirectly by affecting their current and voltage respectively. Note: Causality is a symmetric relationship. When one side "causes" effort, the other side "causes" flow. In bond graph notation, a causal stroke may be added to one end of the power bond to indicate that this side is defining the flow. Consequently, the side opposite from the causal stroke controls the effort. Sources of flow (Sf) define flow, so they host the causal stroke. Sources of effort (Se) define effort, so the other end hosts the causal stroke. Consider a constant-torque motor driving a wheel, i.e. a source of effort (Se). That would be drawn as follows: Symmetrically, the side with the causal stroke (in this case the wheel) defines the flow for the bond. Causality results in compatibility constraints. Clearly only one end of a power bond can define the effort, and so only the other end of the bond can have a causal stroke. In addition, the two passive components with time-dependent behavior, I and C, can only have one sort of preferred causation: an I component determines flow; a C component defines effort. So, from a junction, the preferred causal orientation is as follows: The reason that this is the preferred orientation for these elements can be further analyzed by considering the equations they would give, as shown by the tetrahedron of state. The resulting equations involve the integral of the independent power variable. This is preferred over the result of having the causality the other way, which results in a derivative. The equations can be seen below. It is possible for a bond graph to have a causal bar on one of these elements in the non-preferred manner. In such a case a "causal conflict" is said to have occurred at that bond. The results of a causal conflict are only seen when writing the state-space equations for the graph. It is explained in more detail in that section.
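A rough numerical illustration of why integral causality is preferred (an invented sketch, not from the article; values are arbitrary): with integral causality an I element turns an imposed effort into a flow by integration, whereas derivative causality would require differentiating its input, which is numerically less well behaved.

```python
import numpy as np

I = 2.0
t = np.linspace(0.0, 5.0, 501)
dt = t[1] - t[0]
e = np.cos(t)                                  # imposed effort on the I element

# Integral causality: p = integral of e dt, then f = p / I.
p = np.cumsum(e) * dt
f = p / I

# Derivative causality on the same element would instead need e = I * df/dt.
e_from_f = I * np.gradient(f, dt)
print(np.max(np.abs(e_from_f - e)))            # small discretization error, not exact
```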
A resistor has no time-dependent behavior: apply a voltage and get a flow instantly, or apply a flow and get a voltage instantly; thus a resistor can be at either end of a causal bond: Transformers are passive, neither dissipating nor storing energy, so causality passes through them: A gyrator transforms flow to effort and effort to flow, so if flow is caused on one side, effort is caused on the other side and vice versa:

Junctions

In a 0-junction, efforts are equal; in a 1-junction, flows are equal. Thus, with causal bonds, only one bond can cause the effort in a 0-junction and only one can cause the flow in a 1-junction. Thus, if the causality of one bond of a junction is known, the causality of the others is also known. That one bond is called the "strong bond". In a nutshell, 0-junctions must have a single causal bar, and 1-junctions must have all but one causal bar.

Determining causality

In order to determine the causality of a bond graph, certain steps must be followed. Those steps are:
Draw source causal bars
Draw preferred causality for C and I bonds
Draw causal bars for 0 and 1 junctions, transformers and gyrators
Draw R bond causal bars
If a causal conflict occurs, change a C or I bond to differential (non-preferred) causality

A walk-through of the steps is shown below. The first step is to draw causality for the sources, of which there is only one. This results in the graph below. The next step is to draw the preferred causality for the C bonds. Next, apply the causality for the 0 and 1 junctions, transformers, and gyrators. However, there is an issue with the 0-junction on the left. The 0-junction has two causal bars at the junction, but the 0-junction wants one and only one at the junction. This was caused by having the C bond be in its preferred causality. The only way to fix this is to flip that causal bar. This results in a causal conflict; the corrected version of the graph is below, with the marked bond representing the causal conflict.

Converting from other systems

One of the main advantages of using bond graphs is that once you have a bond graph, the original energy domain no longer matters. Below are some of the steps to apply when converting from a given energy domain to a bond graph.

Electromagnetic: The steps for solving an electromagnetic problem as a bond graph are as follows:
Place a 0-junction at each node
Insert sources, R, I, C, TR, and GY bonds with 1-junctions
Ground (both sides if a transformer or gyrator is present)
Assign power flow direction
Simplify
These steps are shown more clearly in the examples below.

Linear mechanical: The steps for solving a linear mechanical problem as a bond graph are as follows:
Place 1-junctions for each distinct velocity (usually at a mass)
Insert R and C bonds at their own 0-junctions between the 1-junctions where they act
Insert sources and I bonds on the 1-junctions where they act
Assign power flow direction
Simplify
These steps are shown more clearly in the examples below.

Simplifying: The simplifying step is the same regardless of whether the system was electromagnetic or linear mechanical. The steps are:
Remove bonds of zero power (due to ground or zero velocity)
Remove 0- and 1-junctions with fewer than three bonds
Simplify parallel power
Combine 0-junctions in series
Combine 1-junctions in series
These steps are shown more clearly in the examples below.
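The junction rules used in the causality-assignment steps above (a 0-junction needs exactly one causal bar, a 1-junction needs all but one) can be captured in a few lines. A minimal illustrative sketch, with invented names, not from the article:

```python
def junction_causality_ok(junction_type: str, effort_imposed: list[bool]) -> bool:
    """effort_imposed[i] is True if bond i imposes effort ON the junction."""
    n_effort_in = sum(effort_imposed)
    n_bonds = len(effort_imposed)
    if junction_type == "0":
        return n_effort_in == 1                # exactly one strong bond sets the common effort
    if junction_type == "1":
        return n_effort_in == n_bonds - 1      # all but one impose effort; one sets the common flow
    raise ValueError("junction_type must be '0' or '1'")

print(junction_causality_ok("0", [True, False, False]))  # True
print(junction_causality_ok("1", [True, True, False]))   # True
print(junction_causality_ok("0", [True, True, False]))   # False -> causal conflict at this junction
```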
Parallel power

Parallel power is when power runs in parallel in a bond graph. An example of parallel power is shown below. Parallel power can be simplified by recalling the relationship between effort and flow for 0- and 1-junctions. To resolve parallel power, you first write down all of the equations for the junctions (taking note of which numbered bond each effort and flow variable belongs to); for the example provided, the equations can be seen below. By manipulating these equations, you can arrange them such that you can find an equivalent set of 0- and 1-junctions that describes the parallel power: wherever two effort variables turn out to be equal, that relationship can be represented by a 0-junction, and wherever efforts sum while a flow is shared, that corresponds to a 1-junction. Once you have determined the relationships that you need, you can redraw the parallel power section with the new junctions. The result for the example shown is seen below.

Examples

Simple electrical system

A simple electrical circuit consisting of a voltage source, resistor, and capacitor in series. The first step is to draw 0-junctions at all of the nodes: The next step is to add all of the elements acting at their own 1-junction: The next step is to pick a ground. The ground is simply a 0-junction that is assumed to have no voltage. For this case, the ground will be chosen to be the lower left 0-junction, which is underlined above. The next step is to draw all of the arrows for the bond graph. The arrows on junctions should point towards ground (following a similar path to current). For resistance, inertance, and compliance elements, the arrows always point towards the elements. The result of drawing the arrows can be seen below, with the 0-junction marked with a star as the ground. Now that we have the bond graph, we can start the process of simplifying it. The first step is to remove all the ground nodes. Both of the bottom 0-junctions can be removed, because they are both grounded. The result is shown below. Next, the junctions with fewer than three bonds can be removed. This is because flow and effort pass through these junctions without being modified, so they can be removed to allow us to draw less. The result can be seen below. The final step is to apply causality to the bond graph. Applying causality was explained above. The final bond graph is shown below.

Advanced electrical system

A more advanced electrical system with a current source, resistors, capacitors, and a transformer. Following the steps with this circuit will result in the bond graph below, before it is simplified. The nodes marked with the star denote the ground. Simplifying the bond graph will result in the image below. Lastly, applying causality will result in the bond graph below. The bond with a star denotes a causal conflict.
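For the simple series source–resistor–capacitor example above, the finished bond graph implies a single first-order state equation. A numerical sketch is given below (invented component values, not from the article); the equation form follows from the 1-junction effort balance V_s = R·dq/dt + q/C.

```python
# Series Se-R-C loop: the 1-junction gives e_Se = e_R + e_C, i.e.
#   V_s = R * dq/dt + q/C   ->   dq/dt = (V_s - q/C) / R
R, C, V_s = 1.0e3, 1.0e-6, 5.0       # ohms, farads, volts (illustrative values)
dt, t_end = 1.0e-5, 5.0e-3
q = 0.0                              # charge, the displacement variable of the C element
for _ in range(int(t_end / dt)):     # simple forward-Euler integration
    dq_dt = (V_s - q / C) / R
    q += dq_dt * dt

print(q / C)   # capacitor voltage, approaching V_s = 5 V as the RC transient decays
```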
Simple linear mechanical

A simple linear mechanical system, consisting of a mass on a spring that is attached to a wall. The mass has some force being applied to it. An image of the system is shown below. For a mechanical system, the first step is to place a 1-junction at each distinct velocity; in this case there are two distinct velocities, the mass and the wall. It is usually helpful to label the 1-junctions for reference. The result is below. The next step is to draw the R and C bonds at their own 0-junctions between the 1-junctions where they act. For this example there is only one of these bonds, the C bond for the spring. It acts between the 1-junction representing the mass and the 1-junction representing the wall. The result is below. Next you want to add the sources and I bonds on the 1-junction where they act. There is one source, the source of effort (force), and one I bond, the mass of the mass, both of which act on the 1-junction of the mass. The result is shown below. Next, power flow is to be assigned. Like the electrical examples, power should flow towards ground, in this case the 1-junction of the wall. Exceptions to this are R, C, and I bonds, which always point towards the element. The resulting bond graph is below. Now that the bond graph has been generated, it can be simplified. Because the wall is grounded (has zero velocity), you can remove that junction. As such, the 0-junction that the C bond is on can also be removed, because it will then have fewer than three bonds. The simplified bond graph can be seen below. The last step is to apply causality; the final bond graph can be seen below.

Advanced linear mechanical

A more advanced linear mechanical system can be seen below. Just like the above example, the first step is to make 1-junctions at each of the distinct velocities. In this example there are three distinct velocities: Mass 1, Mass 2, and the wall. Then you connect all of the bonds and assign power flow. The bond graph can be seen below. Next you start the process of simplifying the bond graph, by removing the 1-junction of the wall and removing junctions with fewer than three bonds. The bond graph can be seen below. There is parallel power in the bond graph. Solving parallel power was explained above. The result of solving it can be seen below. Lastly, apply causality; the final bond graph can be seen below.

State equations

Once a bond graph is complete, it can be utilized to generate the state-space representation equations of the system. State-space representation is especially powerful, as it allows a complex multi-order differential system to be solved as a system of first-order equations instead. The general form of the state equation is ẋ = Ax + Bu, where x is a column matrix of the state variables, or the unknowns of the system; ẋ is the time derivative of the state variables; u is a column matrix of the inputs of the system; and A and B are matrices of constants based on the system. The state variables of a system are the q and p values for each C and I bond without a causal conflict: each I bond gets a generalized momentum p, while each C bond gets a generalized displacement q. For example, if you have the following bond graph, you would have the corresponding x, ẋ, and u matrices. The matrices A and B are solved by determining the relationship of the state variables and their respective elements, as was described in the tetrahedron of state. The first step in solving the state equations is to list all of the governing equations for the bond graph. The table below shows the relationship between bonds and their governing equations. "♦" denotes preferred causality. For the example provided, the governing equations are the following. These equations can be manipulated to yield the state equations. For this example, you are trying to find equations that relate the derivatives of the state variables, q̇ and ṗ, to the state variables themselves and the input. To do so, you start from the tetrahedron-of-state relations and substitute the governing equations into one another until the derivative of each state variable is expressed purely in terms of the other state variables and the inputs. Following these substitutions yields the first state equation, which is shown below. The second state equation can likewise be solved and is shown below. Both equations can further be rearranged into matrix form; the result is shown below. At this point the equations can be treated as any other state-space representation problem.
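For the simple mass-on-a-spring example above, the resulting state-space system can be sketched as follows (an invented illustration with arbitrary values, not the article's worked example): the state vector is x = [q, p], with q the spring displacement from the C bond and p the mass momentum from the I bond, giving q̇ = p/m and ṗ = F − q/C (with spring compliance C = 1/k).

```python
import numpy as np

m = 2.0          # mass (I element constant), illustrative
k = 50.0         # spring stiffness; compliance C = 1/k

#  d/dt [q]   [ 0    1/m ] [q]   [0]
#       [p] = [-k     0  ] [p] + [1] F
A = np.array([[0.0, 1.0 / m],
              [-k,  0.0     ]])
B = np.array([[0.0],
              [1.0]])

x = np.zeros((2, 1))
F, dt = 1.0, 1.0e-3
for _ in range(5000):                 # forward-Euler integration of x' = Ax + Bu
    x = x + dt * (A @ x + B * F)

print(x.ravel())   # oscillates about the static deflection q = F/k = 0.02
```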
International conferences on bond graph modeling (ECMS and ICBGM)

A bibliography on bond graph modeling may be extracted from the following conferences:
ECMS-2013 27th European Conference on Modelling and Simulation, May 27–30, 2013, Ålesund, Norway
ECMS-2008 22nd European Conference on Modelling and Simulation, June 3–6, 2008 Nicosia, Cyprus
ICBGM-2007: 8th International Conference on Bond Graph Modeling And Simulation, January 15–17, 2007, San Diego, California, U.S.A.
ECMS-2006 20TH European Conference on Modelling and Simulation, May 28–31, 2006, Bonn, Germany
IMAACA-2005 International Mediterranean Modeling Multiconference
ICBGM-2005 International Conference on Bond Graph Modeling and Simulation, January 23–27, 2005, New Orleans, Louisiana, U.S.A. – Papers
ICBGM-2003 International Conference on Bond Graph Modeling and Simulation (ICBGM'2003) January 19–23, 2003, Orlando, Florida, USA – Papers
14TH European Simulation symposium October 23–26, 2002 Dresden, Germany
ESS'2001 13th European Simulation symposium, Marseilles, France October 18–20, 2001
ICBGM-2001 International Conference on Bond Graph Modeling and Simulation (ICBGM 2001), Phoenix, Arizona U.S.A.
European Simulation Multi-conference 23-26 May, 2000, Gent, Belgium
11th European Simulation symposium, October 26–28, 1999 Castle, Friedrich-Alexander University, Erlangen-Nuremberg, Germany
ICBGM-1999 International Conference on Bond Graph Modeling and Simulation January 17–20, 1999 San Francisco, California
ESS-97 9TH European Simulation Symposium and Exhibition Simulation in Industry, Passau, Germany, October 19–22, 1997
ICBGM-1997 3rd International Conference on Bond Graph Modeling And Simulation, January 12–15, 1997, Sheraton-Crescent Hotel, Phoenix, Arizona
11th European Simulation Multiconference Istanbul, Turkey, June 1–4, 1997
ESM-1996 10th annual European Simulation Multiconference Budapest, Hungary, June 2–6, 1996
ICBGM-1995 Int. Conf. on Bond Graph Modeling and Simulation (ICBGM’95), January 15–18, 1995, Las Vegas, Nevada.

See also 20-sim simulation software based on the bond graph theory AMESim simulation software based on the bond graph theory Hybrid bond graph Coenergy References Further reading http://www.site.uottawa.ca/~rhabash/ESSModelFluid.pdf Explains modeling the bond graph in the fluid domain http://www.dartmouth.edu/~sullivan/22files/Fluid_sys_anal_w_chart.pdf Explains modeling the bond graph in the fluid domain External links Simscape Official MATLAB/Simulink add-on library for graphical bond graph programming BG V.2.1 Freeware MATLAB/Simulink add-on library for graphical bond graph programming Scientific visualization Diagrams Application-specific graphs Electrical engineering Mechanical engineering Modeling languages
Bond graph
Physics,Engineering
5,719
77,306,203
https://en.wikipedia.org/wiki/Richard%20G.%20Luthy
Richard G. Luthy (born 1945) is the Silas H. Palmer Professor of Civil and Environmental Engineering at Stanford University, California. His specialty is water quality engineering and the future of urban water supplies and reuse in water-stressed regions. Luthy was elected to the National Academy of Engineering in 1999 for leadership in water quality protection and engineering. Career During his childhood, Luthy lived in Prairie Village, KS and attended Prairie Elementary School. His family moved west to Palo Alto, CA in 1956 when he was in sixth grade. He attended the University of California, Berkeley from 1963-1967, majoring in chemical engineering. He received an MS from the University of Hawaiʻi at Mānoa in 1969 in its newly-formed program in ocean engineering. Luthy served in the US Navy Civil Engineer Corps from 1969-1972 and was promoted from Ensign to Lieutenant. He was a qualified Navy deep sea diving and salvage officer with the Seabees with tours of duty at Port Hueneme, CA in the Naval Facilities Engineering Service Center, and at Davisville, Rhode Island where he was assistant office in charge of Underwater Construction Team One. Luthy was a deep submergence vehicle operator for the Naval Experimental Manned Observatory (DSV-5 Nemo). This was the first submersible with a transparent, all-acrylic spherical hull designed to oversee underwater construction and salvage work. Luthy returned to the University of California, Berkeley on GI Bill for graduate studies in environmental engineering on water treatment and water quality. Academic research Luthy joined the faculty in civil and environmental engineering at Carnegie Mellon University in 1975. He served as Associate Dean in the university's school of engineering, and as Department Head of Civil and Environmental Engineering. He was recruited by Stanford and returned to Palo Alto in 1999. He was chair of Stanford's Civil and Environmental Engineering Department from 2003-2009. Luthy's research has emphasized physicochemical processes for water quality engineering. His graduate studies and early research coincided with the passage of nation's major water quality legislation, as well as with the energy crisis in the 1970s. This resulted in research projects supported by the United States Environmental Protection Agency, the Energy Research and Development Administration and its successor, United States Department of Energy, with studies on water management and treatment in coal gasification and liquefaction. That work led to a body of research on the behavior of polycyclic aromatic hydrocarbons in treatment and fate in the environment. Research in the 1980s and 1990s addressed soil and groundwater contamination and bioavailability of hydrophobic organic compounds and PFCs, so-called forever chemicals, for protection of groundwater. His research focused on persistent and bioaccumulative organic compounds in sediments that resulted in the development of in-situ treatment technologies to sequester toxic hydrophobic organic compounds. Having seen California grow in population and prosper economically, he witnessed how the state's major water infrastructure that served the state well in mid-20th Century was stretched to its limits to meet the needs of the 21st Century. Luthy worked with colleagues at Stanford and elsewhere on more sustainable urban water systems including reuse and stormwater capture for water supply. 
From 2011 to 2022 he was the Director of the National Science Foundation Engineering Research Center for Re-inventing the Nation's Urban Water Infrastructure. Luthy is recognized for advancement of environmental engineering with energy-efficient, decentralized water reuse; urban stormwater capture and treatment for water supply; and urban water supply strategies for California in the face of maintaining environmental flows in rivers while addressing climate change and competing demands. Educational contributions and achievements Luthy is acknowledged for significant contributions to environmental engineering education, research, and practice. Luthy has served on numerous academic advisory boards and visiting committees, as well as committees of the NSF, EPA, NRC, NAE and other organizations. He is a past member and chair of the National Research Council's Water Science and Technology Board. Luthy has held various positions in the Association of Environmental Engineering and Science Professors including Vice President and President, and a founding board member and subsequent Chair of the AEESP Foundation. He has served on and chaired the NAE Peer Committee for Civil and Environmental Engineering. Selected publications Luthy, R. G., Selleck, R. E., & Galloway, T. R. (1977). Surface properties of petroleum refinery waste oil emulsions. Environmental Science & Technology, 11(13), 1211-1217. Luthy, R. G., Selleck, R. E., & Galloway, T. R. (1978). Removal of emulsified oil with organic coagulants and dissolved air flotation. Journal (Water Pollution Control Federation), 331-346. Luthy, R. G. (1981). Treatment of coal coking and coal gasification wastewaters. Journal (Water Pollution Control Federation), 325-339. Edwards, D. A., Liu, Z., & Luthy, R. G. (1994). Experimental data and modeling for surfactant micelles, HOCs, and soil. Journal of Environmental Engineering, 120, 23-41. Ahn, S., Werner, D., Karapanagioti, H. K., McGlothlin, D. R., Zare, R. N., & Luthy, R. G. (2005). Phenanthrene and Pyrene Sorption and Intraparticle Diffusion in Polyoxymethylene, Coke, and Activated Carbon. Environmental Science & Technology, 17(39), 6516-6526. Ghosh, U., Zimmerman, J. R., & Luthy, R. G. (2003). PCB and PAH speciation among particle types in contaminated harbor sediments and effects on PAH bioavailability. Environmental Science and Technology, 37. Luthy, R. G., Aiken, G. R., Brusseau, M. L., Cunningham, S. D., Gschwend, P. M., Pignatello, J. J., ... & Westall, J. C. (1997). Sequestration of hydrophobic organic contaminants by geosorbents. Environmental Science & Technology, 31(12), 3341-3347. Higgins, C. P., & Luthy, R. G. (2006). Sorption of perfluorinated surfactants on sediments. Environmental science & technology, 40(23), 7251-7256. Patmont, C. R., Ghosh, U., LaRosa, P., Menzie, C. A., Luthy, R. G., Greenberg, M. S., ... & Quadrini, J. (2015). In situ sediment treatment using activated carbon: a demonstrated sediment cleanup technology. Integrated environmental assessment and management, 11(2), 195-207. Luthy, R. G., Wolfand, J. M., & Bradshaw, J. L. (2020). Urban water revolution: Sustainable water futures for California cities. Journal of Environmental Engineering, 146(7), 04020065. Luthy, R. G., & Sedlak, D. L. (2015). Urban water-supply reinvention. Daedalus, 144(3), 72-82. Gile, B. C., Sciuto, P. A., Ashoori, N., & Luthy, R. G. (2020). Integrated Water Management at the Peri-Urban Interface: A Case Study of Monterey, California. Water (20734441), 12(12) Bischel, H. N., Simon, G. L., Frisby, T. 
M., & Luthy, R. G. (2012). Management experiences and trends for water reuse implementation in Northern California. Environmental science & technology, 46(1), 180-188. Luthy, R. G., Sharvelle, S., & Dillon, P. (2019). Urban stormwater to enhance water supply. Environmental Science & Technology, 53(10), 5534-5542. Galdi, S. M., Szczuka, A., Shin, C., Mitch, W. A., & Luthy, R. G. (2022). Dissolved methane recovery and trace contaminant fate following mainstream anaerobic treatment of municipal wastewater. ACS Es&t Engineering, 3(1), 121-130. Pritchard, J. C., Cho, Y. M., Hawkins, K. M., Spahr, S., Higgins, C. P., & Luthy, R. G. (2023). Predicting PFAS and Hydrophilic Trace Organic Contaminant Transport in Black Carbon-Amended Engineered Media Filters for Improved Stormwater Runoff Treatment. Environmental Science & Technology, 57(38), 14417-14428. Spahr, S., Teixidó, M., Gall, S. S., Pritchard, J. C., Hagemann, N., Helmreich, B., & Luthy, R. G. (2022). Performance of biochars for the elimination of trace organic contaminants and metals from urban stormwater. Environmental Science: Water Research & Technology, 8(6), 1287-1299. Gile, B. C., Sherris, A. R., Holmes, R. T., Fendorf, S., & Luthy, R. G. Water Supply Planning in the Face of Drought and Ecosystem Flows: Examining the Impact of the Bay-Delta Plan on Bay Area Water Supply. Environmental science & technology. References 1945 births Living people Environmental engineers
Richard G. Luthy
Chemistry,Engineering
2,069
2,903,209
https://en.wikipedia.org/wiki/54%20Aurigae
54 Aurigae is a multiple star system located around away from the Sun in the northern constellation of Auriga. It is visible to the naked eye as a dim, blue-white hued star with a combined apparent visual magnitude of 6.02. The system is moving further from the Sun with a heliocentric radial velocity of around +19 km/s. 54 Aurigae is resolved into two visible components, of magnitudes 6.22 and 7.82, separated by . The double was discovered in 1843 when the separation was only . There is no separate measure of the parallax of the secondary, but it shares a common proper motion with the brighter star and the two are assumed to form a binary. The spectral class B7 III is assigned to the brighter of the pair, indicating a hot giant star, although it has also been given as B7/8 III/V, suggesting it may be a main sequence star. Most sources do not give a separate spectral classification for the fainter star, but it has been listed as DA1/K4V, indicating it is either a white dwarf or a red dwarf. The brighter component of the visible pair is an eclipsing binary with a period of 1.8797 days and a primary eclipse depth of 0.03 magnitudes. It is radiating 315 times the luminosity of the Sun from its photosphere at an effective temperature of , and is spinning with a projected rotational velocity of . References B-type giants Suspected variables Binary stars Auriga Durchmusterung objects Aurigae, 54 047395 031852 2438
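The quoted component magnitudes can be checked against the combined apparent magnitude by adding the fluxes; a short illustrative calculation (a simple check, not from the source):

```python
import math

# Fluxes add; the combined magnitude is m = -2.5 * log10(total relative flux).
m1, m2 = 6.22, 7.82
flux = 10 ** (-0.4 * m1) + 10 ** (-0.4 * m2)
combined = -2.5 * math.log10(flux)
print(f"combined magnitude ~ {combined:.2f}")   # ~6.0, close to the quoted 6.02
```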
54 Aurigae
Astronomy
337
48,890,472
https://en.wikipedia.org/wiki/NanKang%20Biotech%20Incubation%20Center
The NanKang Biotech Incubation Center (NBIC; ) is a biotech incubator formed by the SME Foundation of the Ministry of Economic Affairs, ROC and managed by the Development Center for Biotechnology. The incubator focuses on the initiation and development of biotech startups, with an emphasis on the biotech industry including pharmaceuticals, medical devices and applied biotech. NBIC is a model incubator in the biotech industry, supported by the SME Foundation and distinct from university-affiliated incubators. As of 2015, the incubator has 21 Class I client companies, 1 Class II, and 20 Class III. Besides office space, the incubator provides wet laboratories with instruments and facilities.

History

NanKang Biotech Incubation Center (NBIC) is based in Taipei's Nankang Software Park. Opened on 27 August 2004, it occupies 1,200 ping () of space. The land was leased from . The opening ceremony was hosted by Ho Mei-yueh, then Minister of Economic Affairs. Attending guests included Yuan T. Lee, then president of Academia Sinica, and Ma Ying-jeou, then Mayor of Taipei.

Recognition

NBIC has been accredited by the National Business Incubation Association (NBIA) since 2011 as a Soft Landings incubator. A full article reported on the incubator extensively in The Business Incubator, pp. 34–39, Volume 1 Issue 1, June–September 2012.

Parent organization

Development Center for Biotechnology/SMEA

References 2004 establishments in Taiwan Business incubators Biotechnology in Taiwan
NanKang Biotech Incubation Center
Biology
320