id: int64 (range 580 – 79M)
url: string (lengths 31 – 175)
text: string (lengths 9 – 245k)
source: string (lengths 1 – 109)
categories: string (160 classes)
token_count: int64 (range 3 – 51.8k)
16,845,955
https://en.wikipedia.org/wiki/GEFT
RAC/CDC42 exchange factor, also known as GEFT, is a human gene. Interactions GEFT has been shown to interact with RHOA. References Further reading
GEFT
Chemistry
37
70,710,332
https://en.wikipedia.org/wiki/LO%20Pegasi
LO Pegasi is a single star in the northern constellation of Pegasus that has been the subject of numerous scientific studies. LO Pegasi, abbreviated LO Peg, is the variable star designation. It is too faint to be viewed with the naked eye, having an apparent visual magnitude that ranges from 9.04 down to 9.27. Based on parallax measurements, LO Peg is located at a distance of 79 light years from the Sun. It is a member of the young AB Doradus moving group, and is drifting closer with a radial velocity of −23 km/s. This is a K-type main-sequence star with a stellar classification of K3Vke, where the 'k' suffix indicates interstellar absorption lines and 'e' means there are emission lines in the spectrum. It became of interest to astronomers when significant X-ray emission was detected from this star in 1994. R. D. Jeffries and associates reported flare activity based on a rotationally broadened hydrogen α emission line and found that the star varied in brightness. LO Peg is an ultrafast rotator, completing a full rotation every . It is classified as a BY Draconis variable that is magnetically active and has star spots. The combination of non-uniform surface brightness and rotation makes it appear to vary in luminosity. Up to 25.7% of the surface is covered in spots. Long-term changes in periodicity suggest activity cycles, similar to the solar cycle, with periods of approximately 3 and 7.4 years. The element lithium has been detected in its atmosphere; its abundance, in combination with the star's rapid rotation, indicates that this is a young star with an age of no more than a few hundred million years. See also References Further reading K-type main-sequence stars BY Draconis variables Emission-line stars Pegasus (constellation) Durchmusterung objects Gliese and GJ objects 106231 Pegasi, LO
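Since the article quotes a parallax-based distance, a minimal Python sketch (the 79 light-year figure is taken from the text above; the conversion constant is standard) showing how the parsec definition relates distance and parallax:

```python
# Illustrative only: the parsec is defined so that distance (pc) = 1 / parallax (arcsec).
# The ~79 light-year figure is quoted in the article; the implied parallax is derived.

LY_PER_PARSEC = 3.2616          # light years per parsec (standard constant)

d_ly = 79.0                     # quoted distance in light years
d_pc = d_ly / LY_PER_PARSEC     # ~24.2 parsecs
parallax_mas = 1000.0 / d_pc    # parallax in milliarcseconds

print(f"{d_pc:.1f} pc -> parallax of roughly {parallax_mas:.1f} mas")
```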
LO Pegasi
Astronomy
396
62,378,017
https://en.wikipedia.org/wiki/Tong%20Sun
Tong Sun (born 1968) is a Professor of Sensor Engineering and Director of the Research Centre for Photonics and Instrumentation at City, University of London. She was awarded the Royal Academy of Engineering Silver Medal in 2016 and awarded an Order of the British Empire (OBE) in the 2018 Birthday Honours. In 2020 she was elected Fellow of the Royal Academy of Engineering. Early life and education Sun was born in Southern China. By the time she attended primary school, the Cultural Revolution had finished and the educational system had been restored. Sun studied engineering at the Harbin Institute of Technology. Here she worked in the Department of Precision Instrumentation, where she earned a master's degree in 1993 and doctorate in 1998. On her holidays from university, Sun's commute back to her parents' house would last 34 hours. She moved to City, University of London for a second doctorate, during which she researched optical fibres, and graduated in 1999. Research and career After earning her doctorate Sun joined Nanyang Technological University, where she worked as an Assistant Professor until 2001. She moved back to City, University of London in 2001. When she was promoted to Professor in 2008 she became the first woman to be promoted to Professor of engineering at City. She also serves as Director of the Research Centre for Photonics and Instrumentation. Her research involves the development of optical fibre sensors to monitor sensitive equipment in extreme environments. Her research has contributed to several different technologies, including drug detection, corrosion monitoring and combating food spoilage. She has worked with the Home Office and Smiths Group. In 2007 Sun co-founded Sengenia Ltd, a fibre sensing spin-out. She has developed humidity sensors that can withstand challenging environments such as acidic sewers in Sydney and rice stores in China. Sun continues to work with researchers at the Shandong Academy of Sciences on the implementation of optical fibres in the mining industry. In 2017 Sun was awarded the Australian Water Safety Council New South Wales Water Award to trial her sensors in Sydney Water. She is working with AECOM and the Indian Institutes of Technology to enhance the sustainability of cities in India. This research was recognised as one of the most successful projects funded by the UK-India Education Research Initiative. Sun designed a sensor system that could be used to measure strain and temperature in pantographs, the connectors used to link electric trains to overhead power cables. These devices are essential for train function and routine checks can miss important information. The optical sensors developed by Sun can continuously monitor pantograph behaviour during operation. The instrumented pantographs are currently being developed further by Brecknell Willis. In 2018 Sun was awarded a Royal Academy of Engineering Research Chair to work with Brecknell Willis on new railway electrification systems. Sun is working on contactless electrification systems that integrate optical fibre sensors for continuous, in situ all-weather monitoring. The first pantographs entered service trials in 2019 and included Global Positioning System and video equipment. In 2019 they were awarded funding from the Railway Industry Association, Rail Safety and Standards Board and Innovate UK. Sun was shortlisted for the Times Higher Education Research Supervisor of the Year.
Awards and honours Her awards and honours include: 2008 Elected Fellow of the Institution of Engineering and Technology 2016 Royal Academy of Engineering Silver Medal 2018 Order of the British Empire for services to engineering 2020 Elected Fellow of the Royal Academy of Engineering Selected publications Her publications include: References 1968 births Living people Chinese emigrants to England Chinese women engineers Fellows of the Royal Academy of Engineering Female fellows of the Royal Academy of Engineering Fellows of the Institution of Engineering and Technology Sensor manufacturers Academics of City, University of London Alumni of City, University of London Harbin Institute of Technology alumni Officers of the Order of the British Empire
Tong Sun
Engineering
750
1,539,548
https://en.wikipedia.org/wiki/Reversible%20computing
Reversible computing is any model of computation where the computational process, to some extent, is time-reversible. In a model of computation that uses deterministic transitions from one state of the abstract machine to another, a necessary condition for reversibility is that the mapping from states to their successors must be one-to-one. Reversible computing is a form of unconventional computing. Due to the unitarity of quantum mechanics, quantum circuits are reversible, as long as they do not "collapse" the quantum states on which they operate. Reversibility There are two major, closely related types of reversibility that are of particular interest for this purpose: physical reversibility and logical reversibility. A process is said to be physically reversible if it results in no increase in physical entropy; it is isentropic. There is a style of circuit design ideally exhibiting this property that is referred to as charge recovery logic, adiabatic circuits, or adiabatic computing (see Adiabatic process). Although in practice no nonstationary physical process can be exactly physically reversible or isentropic, there is no known limit to the closeness with which we can approach perfect reversibility, in systems that are sufficiently well isolated from interactions with unknown external environments, when the laws of physics describing the system's evolution are precisely known. A motivation for the study of technologies aimed at implementing reversible computing is that they offer what is predicted to be the only potential way to improve the computational energy efficiency (i.e., useful operations performed per unit energy dissipated) of computers beyond the fundamental von Neumann–Landauer limit of kT ln 2 energy dissipated per irreversible bit operation. Although the Landauer limit was millions of times below the energy consumption of computers in the 2000s and thousands of times less in the 2010s, proponents of reversible computing argue that this can be attributed largely to architectural overheads which effectively magnify the impact of Landauer's limit in practical circuit designs, so that it may prove difficult for practical technology to progress very far beyond current levels of energy efficiency if reversible computing principles are not used. Relation to thermodynamics As was first argued by Rolf Landauer while working at IBM, in order for a computational process to be physically reversible, it must also be logically reversible. Landauer's principle is the observation that the oblivious erasure of n bits of known information must always incur a cost of nk ln 2 in thermodynamic entropy. A discrete, deterministic computational process is said to be logically reversible if the transition function that maps old computational states to new ones is a one-to-one function; i.e. the output logical states uniquely determine the input logical states of the computational operation. For computational processes that are nondeterministic (in the sense of being probabilistic or random), the relation between old and new states is not a single-valued function, and the requirement needed to obtain physical reversibility becomes a slightly weaker condition, namely that the size of a given ensemble of possible initial computational states does not decrease, on average, as the computation proceeds forwards.
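To make the von Neumann–Landauer figure concrete, a quick numerical sketch (the choice of 300 K for room temperature is an assumption, not from the article):

```python
# A quick numerical check of the kT ln 2 figure cited above: the minimum
# heat dissipated per irreversible bit operation at room temperature.

import math

k = 1.380649e-23          # Boltzmann constant in J/K (exact SI value)
T = 300.0                 # assumed room temperature in kelvin

E_per_bit = k * T * math.log(2)
print(f"Landauer limit at {T:.0f} K: {E_per_bit:.2e} J per bit")  # ~2.87e-21 J
```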
Physical reversibility Landauer's principle (and indeed, the second law of thermodynamics) can also be understood to be a direct logical consequence of the underlying reversibility of physics, as is reflected in the general Hamiltonian formulation of mechanics, and in the unitary time-evolution operator of quantum mechanics more specifically. The implementation of reversible computing thus amounts to learning how to characterize and control the physical dynamics of mechanisms to carry out desired computational operations so precisely that the experiment accumulates a negligible total amount of uncertainty regarding the complete physical state of the mechanism, per each logic operation that is performed. In other words, we would need to precisely track the state of the active energy that is involved in carrying out computational operations within the machine, and design the machine so that the majority of this energy is recovered in an organized form that can be reused for subsequent operations, rather than being permitted to dissipate into the form of heat. Although achieving this goal presents a significant challenge for the design, manufacturing, and characterization of ultra-precise new physical mechanisms for computing, there is at present no fundamental reason to think that this goal cannot eventually be accomplished, allowing us someday to build computers that generate much less than 1 bit's worth of physical entropy (and dissipate much less than kT ln 2 energy to heat) for each useful logical operation that they carry out internally. Today, the field has a substantial body of academic literature. A wide variety of reversible device concepts, logic gates, electronic circuits, processor architectures, programming languages, and application algorithms have been designed and analyzed by physicists, electrical engineers, and computer scientists. This field of research awaits the detailed development of a high-quality, cost-effective, nearly reversible logic device technology, one that includes highly energy-efficient clocking and synchronization mechanisms, or avoids the need for these through asynchronous design. This sort of solid engineering progress will be needed before the large body of theoretical research on reversible computing can find practical application in enabling real computer technology to circumvent the various near-term barriers to its energy efficiency, including the von Neumann–Landauer bound. Due to the second law of thermodynamics, this bound can only be circumvented by the use of logically reversible computing. Logical reversibility For a computational operation to be logically reversible means that the output (or final state) of the operation can be computed from the input (or initial state), and vice versa. Reversible functions are bijective. This means that reversible gates (and circuits, i.e. compositions of multiple gates) generally have the same number of input bits as output bits (assuming that all input bits are consumed by the operation, and that all input/output states are possible). An inverter (NOT) gate is logically reversible because it can be undone. The NOT gate may, however, not be physically reversible, depending on its implementation. The exclusive or (XOR) gate is irreversible because its two inputs cannot be unambiguously reconstructed from its single output, or alternatively, because information erasure is not reversible. However, a reversible version of the XOR gate—the controlled NOT gate (CNOT)—can be defined by preserving one of the inputs as a second output.
The three-input variant of the CNOT gate is called the Toffoli gate. It preserves two of its inputs, a and b, and replaces the third, c, by c ⊕ (a·b). With c = 0, this gives the AND function, and with a = b = 1 it gives the NOT function (applied to c). Because AND and NOT together form a functionally complete set, the Toffoli gate is universal and can implement any Boolean function (if given enough initialized ancilla bits). Similarly, in the Turing machine model of computation, a reversible Turing machine is one whose transition function is invertible, so that each machine state has at most one predecessor. Yves Lecerf proposed a reversible Turing machine in a 1963 paper but, apparently unaware of Landauer's principle, did not pursue the subject further, devoting most of the rest of his career to ethnolinguistics. In 1973 Charles H. Bennett, at IBM Research, showed that a universal Turing machine could be made both logically and thermodynamically reversible, and therefore able in principle to perform an arbitrarily large number of computation steps per unit of physical energy dissipated, if operated sufficiently slowly. Thermodynamically reversible computers could perform useful computations at useful speed, while dissipating considerably less than kT of energy per logical step. In 1982 Edward Fredkin and Tommaso Toffoli proposed the Billiard ball computer, a mechanism using classical hard spheres to do reversible computations at finite speed with zero dissipation, but requiring perfect initial alignment of the balls' trajectories, and Bennett's review compared these "Brownian" and "ballistic" paradigms for reversible computation. Aside from the motivation of energy-efficient computation, reversible logic gates offered practical improvements in bit-manipulation transforms in cryptography and computer graphics. Since the 1980s, reversible circuits have attracted interest as components of quantum algorithms, and more recently in photonic and nano-computing technologies where some switching devices offer no signal gain. Surveys of reversible circuits, their construction and optimization, as well as recent research challenges, are available. Commercialization London-based Vaire Computing is prototyping a chip in 2025, for release in 2027. See also , on the uncertainty interpretation of the second law of thermodynamics , a variant of reversible cellular automata References Further reading Frank, Michael P. (2017). "The Future of Computing Depends on Making It Reversible" (web) / "Throwing Computing Into Reverse" (print). IEEE Spectrum. 54 (9): 32–37. doi:10.1109/MSPEC.2017.8012237. Perumalla K. S. (2014), Introduction to Reversible Computing, CRC Press. External links Introductory article on reversible computing First International Workshop on reversible computing Publications of Michael P. Frank: Sandia (2015-), FSU (2004-'15), UF (1999-2004), MIT (1996-'99). Internet Archive backup of the "Reversible computing community Wiki" that was administered by Frank Reversible Computation workshop/conference series CCC Workshop on Physics & Engineering Issues in Adiabatic/Reversible Classical Computing Open-source toolkit for reversible circuit design Digital electronics Models of computation Thermodynamics
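As a concrete illustration of the CNOT and Toffoli gates described above, a minimal Python sketch (function names are illustrative, not from any library):

```python
# A minimal sketch (not from the article) of the reversible gates described
# above. Each gate returns as many bits as it takes, and applying it twice
# recovers the input, so the mapping on states is one-to-one.

def cnot(a, c):
    """Controlled NOT: preserves a, replaces c with c XOR a."""
    return a, c ^ a

def toffoli(a, b, c):
    """Toffoli (CCNOT): preserves a and b, replaces c with c XOR (a AND b)."""
    return a, b, c ^ (a & b)

# Each gate is its own inverse, hence logically reversible:
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert toffoli(*toffoli(a, b, c)) == (a, b, c)
            assert cnot(*cnot(a, c)) == (a, c)

# With c = 0 the third output is a AND b; with a = b = 1 it is NOT c:
assert toffoli(1, 1, 0)[2] == 1   # AND of (1, 1)
assert toffoli(1, 1, 1)[2] == 0   # NOT of c = 1
```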
Reversible computing
Physics,Chemistry,Mathematics,Engineering
2,067
64,504,593
https://en.wikipedia.org/wiki/Colloquium%20on%20Violence%20%26%20Religion
The Colloquium on Violence and Religion (COV&R) is an international organization dedicated to “exploring, critiquing, and developing” the mimetic theory proposed by the French historian, literary critic, and anthropological philosopher René Girard. Membership includes scholars of theology, religious studies, literary studies, philosophy, psychology, and other academic fields as well as clergy and other practitioners. Girard's work focused on the sources of human violence in mimetic (unconsciously imitative) desire and the centrality of religion in the formation of culture through the management of violence (the single-victim mechanism or scapegoat effect), but the scope of the Colloquium on Violence & Religion's interest has expanded beyond violence to mimetic desire's positive potential and beyond religion to other disciplines. The Colloquium on Violence & Religion is affiliated with regional organizations around the world devoted to Girard's work, mimetic theory, and peacemaking. History The Colloquium on Violence & Religion began with a meeting in 1990 at Stanford University with theologians James G. Williams, Robert Hamerton-Kelly, and Charles Mabee as its three co-founders. When constituted formally in 1991, it formed a board with Girard as honorary chair; Raymund Schwager, a theologian from the University of Innsbruck, as president; Williams as executive secretary; and Wolfgang Palaver, also a theologian from Innsbruck, as editor of the newsletter. Prominent board members have included James Alison, Eric Gans, and Walter Wink. Publications Michigan State University Press publishes the annual journal of the Colloquium on Violence & Religion, Contagion: Journal of Violence, Mimesis, and Culture (ISSN 1930-1200), and two related series of books: Breakthroughs in Mimetic Theory and Studies in Violence, Mimesis, and Culture. The Colloquium on Violence & Religion also publishes a quarterly online newsletter, The Bulletin of the Colloquium on Violence and Religion. A complete bibliography is included in the fully searchable Index Theologicus database. Annual meeting The Colloquium on Violence & Religion holds an annual summer meeting, usually in July. The location has recently rotated in a three-year cycle between sites in North America, Europe, and the rest of the world. It also meets in conjunction with the annual meeting of the American Academy of Religion in November. Presidents 1991–1995: Raymund Schwager 1995–1999: Cesáreo Bandera 1999–2003: Diana Culbertson 2003–2007: Sandor Goodhart 2007–2011: Wolfgang Palaver 2011–2015: Ann W. Astell 2015–2019: Jeremiah Alberg 2019–2023: Martha Reineke References External links Index Theologicus bibliography of mimetic theory Violence Religion and society Theology
Colloquium on Violence & Religion
Biology
583
43,451,153
https://en.wikipedia.org/wiki/American%20Super%20Computing%20Leadership%20Act
The American Super Computing Leadership Act (H.R. 2495) is a bill that would require the United States Department of Energy to improve and increase its use of high-end computers, especially exascale computing, through an organized research program. The bill was introduced into the United States House of Representatives during the 113th United States Congress. Background There are existing exascale computer research programs in both China and Europe. Provisions of the bill This summary is based largely on the summary provided by the Congressional Research Service, a public domain source. The American Super Computing Leadership Act would amend the Department of Energy High-End Computing Revitalization Act of 2004 with respect to: (1) exascale computing (computing system performance at or near 10^18 floating point operations per second); and (2) a high-end computing system with performance substantially exceeding that of systems commonly available for advanced scientific and engineering applications. The bill would direct the United States Secretary of Energy (DOE) to: (1) coordinate the development of high-end computing systems across DOE; (2) partner with universities, National Laboratories, and industry to ensure the broadest possible application of the technology developed in the program to other challenges in science, engineering, medicine, and industry; and (3) include among the multiple architectures researched, at DOE discretion, any computer technologies that show promise of substantial reductions in power requirements and substantial gains in parallelism of multicore processors, concurrency, memory and storage, bandwidth, and reliability. The bill would repeal authority for establishment of at least one High-End Software Development Center. The bill would direct the Secretary to conduct a coordinated research program to develop exascale computing systems to advance DOE missions. Requires establishment through competitive merit review of two or more DOE National Laboratory-industry-university partnerships to conduct integrated research, development, and engineering of multiple exascale architectures. The bill would require the Secretary to conduct mission-related co-design activities in developing such exascale platforms. Defines "co-design" as the joint development of application algorithms, models, and codes with computer technology architectures and operating systems to maximize effective use of high-end computing systems. The bill would direct the Secretary to develop any advancements in hardware and software technology required to realize fully the potential of an exascale production system in addressing DOE target applications and solving scientific problems involving predictive modeling and simulation and large-scale data analytics and management. Requires DOE also to explore the use of exascale computing technologies to advance a broad range of science and engineering. The bill would direct the Secretary to submit to Congress an integrated strategy and program management plan.
The bill would require the Secretary, before initiating construction or installation of an exascale-class computing facility, to transmit to Congress a separate plan detailing: (1) the proposed facility's cost projections and capabilities to significantly accelerate the development of new energy technologies; (2) technical risks and challenges that must be overcome to achieve successful completion and operation of the facility; and (3) an independent assessment of the scientific and technological advances expected from such a facility relative to those expected from a comparable investment in expanded research and applications at terascale-class and petascale-class computing facilities, including an evaluation of where investments should be made in the system software and algorithms to enable these advances. Procedural history The American Super Computing Leadership Act was introduced into the United States House of Representatives on June 25, 2013 by Rep. Randy Hultgren (R, IL-14). It was referred to the United States House Committee on Science, Space and Technology and the United States House Science Subcommittee on Energy. The bill was scheduled to be voted on under a suspension of the rules on September 8, 2014. Debate and discussion Rep. Hultgren was inspired by a new 33.89-petaflop computer, the Tianhe-2, that was announced in China. Hultgren said that "it's important not to lose sight that the reality was that it was built by China's National University of Defense Technology." The chair of the Department of Cognitive Sciences at Rensselaer Polytechnic Institute, Selmer Bringsjord, said that the United States falling behind in this field would be "devastating" because "if we were to lose our capacity to build preeminently smart machines, that would be a very dark situation, because machines can serve as weapons." Another consequence of the United States falling behind could be a brain drain of the best scientists and engineers in the field to other countries that are doing more advanced work. Rick Stevens testified in support of the bill during a May 22, 2013 hearing. Stevens is the Associate Laboratory Director responsible for Computing, Environment, and Life Sciences research at Argonne National Laboratory. He called high-performance computing "vital to our national interest", arguing that it is "needed by all branches of science and engineering" and is used "by U.S. industry to maintain a competitive edge in the development of new products and services." Aline D. McNaull of the American Institute of Physics reported that members of the United States House Science Subcommittee on Energy "demonstrated bi-partisan enthusiasm for advanced computing technology" during a May 22, 2013 hearing. References External links Library of Congress - Thomas H.R. 2495 beta.congress.gov H.R. 2495 GovTrack.us H.R. 2495 OpenCongress.org H.R. 2495 WashingtonWatch.com H.R. 2495 Proposed legislation of the 113th United States Congress Computer performance
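For a sense of the scale gap at the time, a back-of-envelope comparison using only the figures quoted above (purely illustrative):

```python
# Back-of-envelope scale comparison using figures quoted in the article:
# the bill's exascale target versus the Tianhe-2's reported throughput.

EXASCALE_FLOPS = 1e18        # 10^18 floating point operations per second
TIANHE_2_FLOPS = 33.89e15    # 33.89 petaflops, as quoted above

print(f"Exascale is roughly {EXASCALE_FLOPS / TIANHE_2_FLOPS:.0f}x Tianhe-2")  # ~30x
```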
American Super Computing Leadership Act
Technology
1,139
381,010
https://en.wikipedia.org/wiki/Group%20algebra%20of%20a%20locally%20compact%20group
In functional analysis and related areas of mathematics, the group algebra is any of various constructions to assign to a locally compact group an operator algebra (or more generally a Banach algebra), such that representations of the algebra are related to representations of the group. As such, they are similar to the group ring associated to a discrete group. The algebra Cc(G) of continuous functions with compact support If G is a locally compact Hausdorff group, G carries an essentially unique left-invariant countably additive Borel measure μ called a Haar measure. Using the Haar measure, one can define a convolution operation on the space Cc(G) of complex-valued continuous functions on G with compact support; Cc(G) can then be given any of various norms and the completion will be a group algebra. To define the convolution operation, let f and g be two functions in Cc(G). For t in G, define [f * g](t) = ∫_G f(s) g(s⁻¹t) dμ(s). The fact that f * g is continuous is immediate from the dominated convergence theorem. Also supp(f * g) ⊆ supp(f) · supp(g), where the dot stands for the product in G. Cc(G) also has a natural involution defined by: f*(t) = Δ(t⁻¹) conj(f(t⁻¹)), where conj denotes complex conjugation and Δ is the modular function on G. With this involution, it is a *-algebra. Theorem. With the norm ‖f‖₁ = ∫_G |f(t)| dμ(t), Cc(G) becomes an involutive normed algebra with an approximate identity. The approximate identity can be indexed on a neighborhood basis of the identity consisting of compact sets. Indeed, if V is a compact neighborhood of the identity, let fV be a non-negative continuous function supported in V such that ∫_V fV(t) dμ(t) = 1. Then {fV}V is an approximate identity. A group algebra has an identity, as opposed to just an approximate identity, if and only if the topology on the group is the discrete topology. Note that for discrete groups, Cc(G) is the same thing as the complex group ring C[G]. The importance of the group algebra is that it captures the unitary representation theory of G as shown in the following Theorem. Let G be a locally compact group. If U is a strongly continuous unitary representation of G on a Hilbert space H, then πU(f) = ∫_G f(g) U(g) dμ(g) is a non-degenerate bounded *-representation of the normed algebra Cc(G). The map U ↦ πU is a bijection between the set of strongly continuous unitary representations of G and non-degenerate bounded *-representations of Cc(G). This bijection respects unitary equivalence and strong containment. In particular, U is irreducible if and only if πU is irreducible. Non-degeneracy of a representation π of Cc(G) on a Hilbert space H means that the set of vectors π(f)ξ, for f in Cc(G) and ξ in H, is dense in H. The convolution algebra L1(G) It is a standard theorem of measure theory that the completion of Cc(G) in the L1(G) norm is isomorphic to the space L1(G) of equivalence classes of functions which are integrable with respect to the Haar measure, where, as usual, two functions are regarded as equivalent if and only if they differ only on a set of Haar measure zero. Theorem. L1(G) is a Banach *-algebra with the convolution product and involution defined above and with the L1 norm. L1(G) also has a bounded approximate identity. The group C*-algebra C*(G) Let C[G] be the group ring of a discrete group G. For a locally compact group G, the group C*-algebra C*(G) of G is defined to be the C*-enveloping algebra of L1(G), i.e. the completion of Cc(G) with respect to the largest C*-norm: ‖f‖_C* := sup_π ‖π(f)‖, where π ranges over all non-degenerate *-representations of Cc(G) on Hilbert spaces. When G is discrete, it follows from the triangle inequality that, for any such π, one has ‖π(f)‖ ≤ ‖f‖₁, hence the norm is well-defined.
It follows from the definition that, when G is a discrete group, C*(G) has the following universal property: any *-homomorphism from C[G] to some B(H) (the C*-algebra of bounded operators on some Hilbert space H) factors through the inclusion map C[G] ↪ C*(G). The reduced group C*-algebra Cr*(G) The reduced group C*-algebra Cr*(G) is the completion of Cc(G) with respect to the norm ‖f‖_Cr* := sup { ‖f * g‖₂ : ‖g‖₂ = 1 }, where ‖·‖₂ is the L2 norm. Since the completion of Cc(G) with regard to the L2 norm is a Hilbert space, the Cr* norm is the norm of the bounded operator acting on L2(G) by convolution with f, and thus a C*-norm. Equivalently, Cr*(G) is the C*-algebra generated by the image of the left regular representation on ℓ2(G). In general, Cr*(G) is a quotient of C*(G). The reduced group C*-algebra is isomorphic to the non-reduced group C*-algebra defined above if and only if G is amenable. von Neumann algebras associated to groups The group von Neumann algebra W*(G) of G is the enveloping von Neumann algebra of C*(G). For a discrete group G, we can consider the Hilbert space ℓ2(G) for which G is an orthonormal basis. Since G operates on ℓ2(G) by permuting the basis vectors, we can identify the complex group ring C[G] with a subalgebra of the algebra of bounded operators on ℓ2(G). The weak closure of this subalgebra, NG, is a von Neumann algebra. The center of NG can be described in terms of those elements of G whose conjugacy class is finite. In particular, if the identity element of G is the only group element with that property (that is, G has the infinite conjugacy class property), the center of NG consists only of complex multiples of the identity. NG is isomorphic to the hyperfinite type II1 factor if and only if G is countable, amenable, and has the infinite conjugacy class property. See also Graph algebra Incidence algebra Hecke algebra of a locally compact group Path algebra Groupoid algebra Stereotype algebra Stereotype group algebra Hopf algebra Notes References Algebras C*-algebras von Neumann algebras Unitary representation theory Harmonic analysis Lie groups
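As a concrete instance of the definitions above, a minimal Python sketch for the simplest case, the finite (hence discrete) group Z/nZ, where Cc(G) is just the complex group ring C[G]:

```python
# A minimal sketch (illustrative, not from the article): for the discrete
# group G = Z/nZ with counting measure as Haar measure, the convolution and
# involution on Cc(G) = C[G] reduce to the formulas below. The group is
# abelian, hence unimodular, so the modular function is identically 1.

import numpy as np

n = 5

def convolve(f, g):
    """(f * g)(t) = sum over s of f(s) g(s^-1 t) = sum of f(s) g(t - s) on Z/n."""
    return np.array([sum(f[s] * g[(t - s) % n] for s in range(n)) for t in range(n)])

def involution(f):
    """f*(t) = conj(f(t^-1)) = conj(f(-t)) on Z/n (modular function = 1)."""
    return np.array([np.conj(f[(-t) % n]) for t in range(n)])

rng = np.random.default_rng(0)
f = rng.standard_normal(n) + 1j * rng.standard_normal(n)

delta = np.zeros(n, dtype=complex)
delta[0] = 1.0                                    # point mass at the identity

assert np.allclose(convolve(delta, f), f)         # delta is the unit of C[G]
assert np.allclose(involution(involution(f)), f)  # the involution is its own inverse
```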
Group algebra of a locally compact group
Mathematics
1,346
2,070,189
https://en.wikipedia.org/wiki/Surrey%20Satellite%20Technology
Surrey Satellite Technology Ltd, or SSTL, is a company involved in the manufacture and operation of small satellites. A spin-off company of the University of Surrey, it is presently wholly owned by Airbus Defence and Space. The company grew out of research efforts centred upon amateur radio satellites, known by the UoSAT (University of Surrey Satellite) name or by an OSCAR (Orbital Satellite Carrying Amateur Radio) designation. SSTL was founded in 1985, following successful trials on the use of commercial off-the-shelf (COTS) components on satellites, culminating in the UoSAT-1 test satellite. It funds research projects with the university's Surrey Space Centre, which conducts research into satellite and space topics. In April 2008, the University of Surrey agreed to sell its majority share in the company to European multinational conglomerate EADS Astrium. In August 2008, SSTL opened a US subsidiary, which included both offices and a production site in Denver, Colorado; in 2017, the company decided to discontinue manufacturing activity in the US, winding up this subsidiary. SSTL was awarded the Queen's Award for Technological Achievement in 1998, and the Queen's Awards for Enterprise in 2005. In 2006 SSTL won the Times Higher Education award for outstanding contribution to innovation and technology. In 2009, SSTL ranked 89 out of the 997 companies that took part in the Sunday Times Top 100 companies to work for. In 2020, SSTL started the creation of a telecommunications spacecraft called Lunar Pathfinder for lunar missions. It will be launched in 2025 and used for data transmission to Earth. History Background and early years During the early decades of the Cold War era, access to space was effectively the privilege of a handful of superpowers; by the 1970s, only the most affluent of countries could afford to engage in space programmes due to the extreme complexity and expense involved. Despite the exorbitant costs to produce and launch, early satellites could only offer limited functionality, having no ability to be reprogrammed once in orbit. During the late 1970s, a group of researchers at the University of Surrey, headed by Martin Sweeting, were experimenting with the use of commercial off-the-shelf (COTS) components in satellite construction; if found viable, such techniques would be highly disruptive to the established satellite industry. The team's first satellite, UoSAT-1, was assembled in a small university lab, in a cleanroom fabricated from B&Q materials, integrating printed circuit boards designed by hand on a kitchen table. In 1981, UoSAT-1 was launched with NASA's aid; representing the first modern reprogrammable small satellite, it outlived its planned three-year life by more than five years. Having successfully demonstrated that relatively compact and inexpensive satellites could be rapidly built to perform sophisticated missions, the team decided to take further steps to commercialise their research. During 1985, Surrey Satellite Technology Ltd (SSTL) was founded in Guildford, Surrey, United Kingdom as a spin-off venture from the university. Since its founding, it has steadily grown, having worked with numerous international customers to launch over 70 satellites over the course of three decades. Growth and restructuring In 2002, SSTL moved into remote sensing services with the launch of the Disaster Monitoring Constellation (DMC) and an associated child company, DMC International Imaging.
Some of these satellites also include other imaging payloads and experimental payloads: onboard hardware-based image compression (on BilSAT), a GPS reflectometry experiment and onboard Internet router (on the UK-DMC satellite). The DMC satellites are notable for communicating with their ground stations using the Internet Protocol for payload data transfer and command and control, so extending the Internet into space, and allowing experiments with the Interplanetary Internet to be carried out. Many of the technologies used in the design of the DMC satellites, including Internet Protocol use, were tested in space beforehand on SSTL's earlier UoSAT-12 satellite. During June 2004, American private space company SpaceX arranged to acquire a 10% stake in SSTL from Surrey University; speaking on the purchase, Elon Musk stated: "SSTL is a high-quality company that is probably the world leader in small satellites. We look at this as more a case of similar corporate cultures getting together". The University of Surrey then awarded Musk an honorary doctorate. In April 2008, the University of Surrey agreed to sell its majority share in SSTL, roughly 80% of the company's capital, to European multinational conglomerate EADS Astrium. SSTL has remained an independent entity despite all shares having been purchased by Airbus, the parent company of EADS Astrium. During 2005, SSTL completed construction of GIOVE-A, the first test satellite for Europe's Galileo space navigation system. In 2010 and 2012, the firm was awarded contracts to supply 22 navigation payloads for Galileo, the last of which was delivered during 2016. During 2017, SSTL was awarded a contract to supply a further 12 payloads; this was viewed as a coup in light of the political backdrop surrounding Brexit. During the 2010s, SSTL has been working on various improvements in its satellite technology, such as synthetic-aperture radar (SAR) as well as smaller and lighter units. According to Luis Gomes, SSTL's head of Earth observation, micro-satellites translate to a lower cost of design, construction and launch, albeit with a more frequent failure rate in comparison to larger and more costly units. These features have been marketed towards customers such as the DMC. In summer 2008, Surrey formed an American subsidiary, Surrey Satellite Technology-US, in Douglas County, Colorado, intent on serving US customers in the smallsat market. In June 2017, SSTL announced their intention to close their Colorado satellite manufacturing facility, opting instead to consolidate all of its manufacturing activity in the UK. Sarah Parker, SSTL's managing director, said that the rapid growth of new competing firms in the small satellite sector had changed the marketplace, necessitating reorganisation, which has included the increased use of outsourcing. Satellites Eutelsat Quantum satellite platform, consisting of a central thrust tube housing a bipropellant chemical propulsion system, GEO momentum wheels and gyro. Designed to be reconfigurable via software definitions, enabling it to change roles and functions. Delivered to Airbus in Toulouse during January 2019 for assembly and testing. Quantum is SSTL's first geostationary satellite platform. COSMIC-2/FORMOSAT-7 for National Space Organization (Taiwan) and NOAA (US). Atmospheric limb sounding by GNSS radio occultation, ionospheric research; follow-on mission to COSMIC/FORMOSAT-3.
VESTA-1: a technology demonstration mission for Honeywell that will test a new two-way VHF Data Exchange System (VDES) payload for the exactEarth advanced maritime satellite constellation. Launched 3 December 2018. NovaSAR-1: part-funded by the UK Government, with the S-band SAR payload supplied by Airbus Defence & Space. Incorporates an S-band Synthetic Aperture Radar to help monitor suspicious shipping activity. Launched on 16 September 2018 by ISRO. RemoveDEBRIS: Active Debris Removal (ADR) technology demonstration in 2018 (e.g. capture, deorbiting) representative of an operational scenario during a low-cost mission using novel key technologies. RemoveDebris will deploy a representative small satellite and then recapture and de-orbit it. Launched on 2 April 2018 to the International Space Station, and deployed from the KIBO airlock on the ISS in June 2018. Telesat LEO prototype satellite for Telesat as part of a test and validation phase for an advanced, global LEO satellite constellation. Launched January 2018. CARBONITE-2, an Earth Observation technology demonstration mission owned and operated by SSTL and launched January 2018, which successfully demonstrated video-from-orbit capability. TripleSat: A constellation of 3 Earth observation satellites imaging at 1 m resolution. Image data leased to Chinese company 21AT. Five RapidEye satellite platforms delivered to MDA (MacDonald, Dettwiler and Associates) for the RapidEye Constellation and successfully launched from Baikonur on 29 August 2008. UK-DMC 2 and Deimos-1 were launched on a Dnepr rocket from the Baikonur Cosmodrome on 29 July 2009. NigeriaSat-2 and NX satellites, successfully launched on 17 August 2011. exactView-1, successfully launched on 22 July 2012 on a Soyuz rocket from the Baikonur Cosmodrome. SAPPHIRE: Providing a satellite-based Resident Space Object (RSO) observing service that will provide accurate tracking data on deep space orbiting objects. Sapphire is the Canadian Department of National Defence's first dedicated operational military satellite. Its space-based electro-optical sensor will track man-made space objects in Earth orbits between 6,000 and 40,000 km as part of Canada's continued support of Space Situational Awareness and the U.S. Space Surveillance Network by updating the U.S. Satellite Catalogue that is used by both NORAD and Canada. STRaND-1: Surrey Training, Research and Nanosatellite Development 1, launched in 2013, flies several new technologies for space applications and demonstration, including the use of the Android open-source operating system on a smartphone. See also Aerospace industry in the United Kingdom Comparison of satellite buses UoSAT-1 UoSAT-2 UoSAT-3 UoSAT-4 UoSAT-5 UoSAT-12 Cerise AlSAT-1 UK-DMC UK-DMC 2 UK-DMC 3 BILSAT-1 Deimos-1 Snap-1 nanosatellite FASat-Alfa and Bravo RemoveDEBRIS KazEOSat 2 Navigation Payloads for Europe's Galileo Constellation Between 2010 and 2020 SSTL manufactured and delivered 34 navigation payloads for the deployment phase of Galileo, Europe's satellite navigation system. OHB System AG was the prime contractor and builder of the spacecraft platform and SSTL had full responsibility for the navigation payloads, the brains of Galileo's navigation system.
References External links Surrey Satellite Technology Ltd Aerospace companies of the United Kingdom Aerospace engineering organizations Airbus Defence and Space Spacecraft manufacturers Space programme of the United Kingdom University of Surrey Companies based in Guildford Technology companies established in 1985 1985 establishments in England Amateur radio companies
Surrey Satellite Technology
Engineering
2,128
23,434,128
https://en.wikipedia.org/wiki/C14H10O2
The molecular formula C14H10O2 (molar mass: 210.23 g/mol, exact mass: 210.0681 u) may refer to: Benzil 9,10-Dihydroxyanthracene Molecular formulas
C14H10O2
Physics,Chemistry
67
39,300,614
https://en.wikipedia.org/wiki/Social%20monogamy%20in%20mammalian%20species
Social monogamy in mammals is defined as sexually mature adult organisms living in pairs. While there are many definitions of social monogamy, this social organization can be found in invertebrates, reptiles and amphibians, fish, birds, mammals, and humans. It should not be confused with genetic monogamy, which refers to two individuals who only reproduce with one another. Social monogamy does not describe the sexual interactions or patterns of reproduction between monogamous pairs; rather, it strictly refers to the patterns of their living conditions. Sexual and genetic monogamy, by contrast, describe reproductive patterns. It is possible for a species to be both genetically monogamous and socially monogamous, but it is more likely for species to practice social monogamy and not genetic monogamy. Social monogamy consists of, but is not limited to: sharing the same territory; obtaining food resources; and raising offspring together. A unique characteristic of monogamy is that, unlike in polygamous species, parents share parenting tasks. Even though their tasks are shared, monogamy does not define the degree of paternal investment in the breeding of the young. Only ~3–5% of all mammalian species are socially monogamous, including some species that mate for life and ones that mate for an extended period of time. Monogamy is more common among primates: about 29% of primate species are socially monogamous. Lifelong monogamy is very rare; however, it is exemplified by species such as the Prairie vole (Microtus ochrogaster). A vast majority of monogamous mammals practice serial social monogamy, in which another male or female is accepted into a new partnership in the case of a partner's death. In addition, there are some species that exhibit short-term monogamy, which involves partnership termination while one's partner is still alive; however, it usually lasts for at least one breeding season. Monogamy usually does not occur in groups where there is a high abundance of females, but rather in ones where females occupy small ranges. Socially monogamous mammals live at significantly lower population densities than do solitary species. Additionally, most mammals exhibit male-biased dispersal; however, most monogamous mammalian species display female-biased dispersal. Some socially monogamous species exhibit pair bonds, which occur between two sexually mature organisms, have an affective component, are specific to the individual, last longer than one reproductive cycle, and are quantifiable in the strength or quality of the relationship. A pair bond may (but need not) involve sexual behaviors and/or bi-parental care. A pair bond is not present, however, when the organisms cannot identify one another, when the relationship ends with the death of a mate or with separation directly after mating, when there is no distress upon separation from the mate, or when sociality is lacking. Not all socially monogamous species exhibit pair bonding, but all pair bonding animals practice social monogamy. These characteristics aid in identifying a species as being socially monogamous. At the biological level, social monogamy affects the neurobiology of the organism through hormone pathways such as vasopressin and oxytocin. Vasopressin is related to the distress an organism feels when separated from its mate, while oxytocin is associated with the affective component of the social interactions between mates. These biological factors provide a genetic component on which selection could act to evolve social monogamy in animals.
Types of social monogamy Facultative monogamy Facultative monogamy, or Type I monogamy, occurs when the male is not fully committed to one female, but he chooses to stay with her because there are no other mating opportunities available to him. In this type of monogamy, species rarely spend time with their families, and there is a lack of paternal care towards the offspring. Elephant shrews (Rhynchocyon chrysopygus and Elephantulus rufescens), Agoutis (Dasyprocta punctata), Grey duikers (Sylvicapra grimmia), and Pacaranas (Dinomys branickii) are some of the most common examples of the mammalian species that display Type I monogamy. In addition, these species are characterized as living at low densities over a large expanse of land. Obligate monogamy Obligate monogamy, or Type II monogamy, is practiced by species that live in overlapping territories, where females cannot rear their young without the help of their partners. Species such as Indris (Indri indri), Night monkeys (Aotus trivirgatus), African dormice (Notomys alexis), and Hutias (Capromys melanurus) live as family groups together with several generations of their young. There are several factors that are associated with Type II monogamy: high paternal investment when offspring mature in the family setting delayed sexual maturation observed in juveniles that remain in the family group juveniles contributing greatly to the rearing of their siblings when retained in the family group. Group living One of the key factors of monogamous pairings is group living. Advantages to living in groups include, but are not limited to: Reduced susceptibility to predation: animals such as the common dwarf mongoose (Helogale parvula) and tamarin (such as Saguinus oedipus) may benefit from such group living by having alarm calls in response to an approaching predator. Food acquisition: it is considerably easier for animals to hunt in a group than by themselves. For this reason, mammals such as dwarf mongooses, marmosets and tamarins hunt in groups and share their food among their family members or members of the group. Localization of resources: in some species, such as the Eurasian beaver (Castor fiber), localization of an adequate lodge area (a pond or a stream) is more beneficial in a group setting. This group living arrangement gives beavers a better chance to find a high quality place to live by searching for it as a group rather than individually. These group living advantages, however, do not explain why monogamy, and not polygyny, has evolved in the species mentioned above. Some possible conditions which may account for cases of monogamous behavior in mammalian species may have to do with: scarce resources available on any given territory, so that two or more individuals are needed in order to defend it physical environment conditions so unfavorable that multiple individuals are needed to cope with them early breeding serving as an advantage that is crucial to monogamous species. Evolution of monogamy There are several hypotheses for the evolution of mammalian monogamy that have been extensively studied. While some of these hypotheses apply to a majority of monogamous species, others apply to a very limited number of them. Proximate causes Hormones and Neurotransmitters Vasopressin is a hormone that induces a male Prairie vole to mate with one female, form a pair bond, and exhibit mate-guarding behavior (i.e. increase the degree of monogamous behavior).
The presence of vasopressin receptor 1A (V1aR) in the ventral forebrain is associated with pair bonding, which is necessary for monogamy. Genetic differences in the V1aR gene also play a role in monogamy: voles with long V1aR alleles exhibit more monogamous tendencies by preferring their mate over a stranger of the opposite sex, whereas voles with short V1aR alleles display a lesser degree of partner preference. Vasopressin is responsible for forming attachment between male and female prairie voles. Vasopressin also regulates paternal care. Finally, vasopressin activity results in "postmating aggression" that allows prairie voles to protect their mate. Oxytocin is a hormone that regulates pair bond formation along with vasopressin. Blocking either oxytocin or vasopressin prevents formation of the pair bond but continues to allow for social behavior. Blocking both hormones resulted in no pair bond and reduced sociality. Oxytocin also attenuates the negative effects of cortisol, a hormone related to stress, so that monogamy helps produce positive health effects. Male marmosets that received an oxytocin antagonist showed greater HPA-axis activity in response to a stressor than when treated with a control, showing that the oxytocin associated with the pair bond lessens the physiological responses to stress. Also, marmosets who previously had elevated cortisol levels spent more time in close proximity to their mate than marmosets with previously normal cortisol levels. Dopamine, a neurotransmitter, produces pleasurable effects that reinforce monogamous behavior. Haloperidol, a dopamine antagonist, prevented partner preference but did not disrupt mating, while apomorphine, a dopamine agonist, induced pair bonding without mating, showing that dopamine is necessary for the formation of the pair bond in prairie voles. In addition, mating induced a 33% increase in turnover of dopamine in the nucleus accumbens. While this result was not statistically significant, it may indicate that mating can induce pair bond formation via the dopaminergic reward system. Elevated testosterone levels are associated with decreased paternal behavior, and decreased testosterone levels are associated with decreased rates of infanticide. Experienced marmoset fathers had decreased testosterone levels after exposure to their 2-week-old infant's scent but not their 3-month-old infant's or a stranger infant's, suggesting offspring-specific olfactory signals can regulate testosterone and induce paternal behavior. Ultimate causes Female distribution Female distribution seems to be one of the best predictors of the evolution of monogamy in some species of mammals. It is possible that monogamy evolved due to low female availability or high female dispersion, where males were unable to monopolize more than one mate over a period of time. In species such as Kirk's dik-dik (Madoqua kirkii) and the Rufous elephant shrew (Elephantulus rufescens), biparental care is not very common. These species do, however, exhibit monogamous mating systems, presumably due to high dispersal rates. Komers and Brotherton (1997) indicated that there is a significant correlation between mating systems and grouping patterns in these species. Furthermore, monogamous mating systems and female dispersion are found to be closely related.
Some of the main conclusions on the occurrence of monogamy in mammals include: monogamy occurs when males are unable to monopolize more than one female; monogamy should be more likely if female under-dispersion occurs; female home ranges are larger in monogamous species; and monogamy arises when females are solitary and occupy large ranges. This last phenomenon is not common to all species, but species such as the Japanese serow (Capricornis crispus) exhibit this behavior, for example. Bi-parental care It is believed that bi-parental care had an important role in the evolution of monogamy. Because mammalian females undergo periods of gestation and lactation, they are well adapted to take care of their young for a long period of time, as opposed to their male partners, who do not necessarily contribute to this rearing process. Such differences in parental contribution could be a result of the male's drive to seek other females in order to increase their reproductive success, which may prevent them from spending extra time helping raise their offspring. Helping a female in young rearing could potentially jeopardize a male's fitness and result in the loss of mating opportunities. There are some monogamous species that exhibit this type of care mainly to improve their offspring's survivorship; however, it does not occur in more than 5% of all mammals. Bi-parental care has been extensively studied in the California deermouse (Peromyscus californicus). This species of mouse is known to be strictly monogamous; mates pair for a long period of time, and the level of extra-pair paternity is considerably low. It has been shown that in the event of female removal, it is the male that takes direct care of the offspring and acts as the primary hope for the survival of his young. Females who attempt to raise their young in cases where their mate is removed often do not succeed, due to the high maintenance costs of raising offspring. With the presence of males, the survival of the offspring is much more probable; thus, it is in the best interest of both parents to contribute. This concept also applies to other species, like the Fat-tailed dwarf lemur (Cheirogaleus medius), where females were also not successful at raising their offspring without paternal help. Lastly, in a study performed by Wynne-Edwards (1987), 95% of Campbell's dwarf hamsters (Phodopus campbelli) survived in the presence of both parents, but only 47% survived if the father was removed. There are several key factors that may affect the extent to which males care for their young: Intrinsic ability to aid offspring: the male's ability to exhibit parental care. Sociality: male paternal behavior shaped by permanent group living. There is a closer association between the male and his offspring in small groups that are often composed of individuals that are genetically related. Common examples include mongooses, wolves, and naked mole-rats. High costs to polygyny: some males could evolve to care for their offspring in cases where females were too dispersed over the given territory and the male could not find consistent females to mate with. In those cases, individuals such as elephant shrews and dasyproctids stay within their known territories rather than going beyond their limits to search for another mate, which would be more costly than staying around their adapted territory.
Paternity certainty: There are cases where males care for offspring that they are not genetically related to, especially in groups where cooperative breeding is practiced. However, in some species, males are able to identify their own offspring, especially under threat of infanticide. In these groups, paternity certainty could be a deciding factor for biparental care. Infanticide In primates, it is thought that the risk of infanticide is the primary driver for the evolution of socially monogamous relationships. Primates are unusual in that 25% of all species are socially monogamous; additionally, this trait has evolved separately in every major clade. Primates also experience higher rates of infanticide than most other animals, with infanticide rates as high as 63% in some species. Opie, Atkinson, Dunbar, & Shultz (2013) found strong evidence that male infanticide preceded the evolutionary switch to social monogamy in primates, rather than bi-parental care or female distribution, suggesting that infanticide is the main cause for the evolution of social monogamy in primates. This is consistent with findings that indicate that the percentage of infant loss is significantly lower in monogamous than in polyandrous species. Due to the length of gestation and lactation in female mammals, infanticide, the killing of offspring by adult individuals, is relatively common in this group. Since there is strong male-to-male competition for reproduction in species with this behaviour, infanticide could be an adaptive strategy to enhance fitness if: the male only kills unrelated infants. the male's chance of siring the next offspring is high. the female could benefit from killing another female's offspring by reducing future competition for food or shelter. The rates of infanticide are very low in other monogamous groups of larger mammals. Evolutionary consequences The aforementioned ultimate causes of monogamy in mammals can have phenotypic consequences for the sexual size dimorphism of mammals. In other words, it is thought that in monogamous species males would tend to have a body size similar to or smaller than that of females. This is because males of monogamous species do not compete as strongly with each other, hence investing in greater physical abilities would be costlier for males. By comparison, polygynous species tend to have greater sexual size dimorphism, so we can conclude that sexual size dimorphism is reduced in long-term pair-bonding species. References Monogamy Animal sexuality Mammalian sexuality
Social monogamy in mammalian species
Biology
3,393
60,054,667
https://en.wikipedia.org/wiki/ROSE%20test
The resistivity of solvent extract (ROSE) test is a test for the presence and average concentration of soluble ionic contaminants, for example on a printed circuit board (PCB). It was developed in the early 1970s. Some manufacturers use it as part of Six Sigma processes. Some modern fluxes have low solubility in traditional ROSE solvents such as water and isopropyl alcohol, and therefore require the use of different solvents. References Chemical tests Printed circuit board manufacturing
ROSE test
Chemistry,Engineering
101
41,439,747
https://en.wikipedia.org/wiki/Pseudohypoxia
Pseudohypoxia refers to a condition that mimics hypoxia: oxygen is sufficient, yet mitochondrial respiration is impaired owing to a deficiency of necessary co-enzymes such as NAD+ and TPP. The increased cytosolic ratio of free NADH/NAD+ in cells (more NADH than NAD+) can be caused by diabetic hyperglycemia and by excessive alcohol consumption. Low levels of TPP result from thiamine deficiency. The insufficiency of available NAD+ or TPP produces symptoms similar to hypoxia (lack of oxygen), because these co-enzymes are needed primarily by the Krebs cycle to support oxidative phosphorylation, and NAD+ is also needed, to a lesser extent, in anaerobic glycolysis. Oxidative phosphorylation and glycolysis are vital because these metabolic pathways produce ATP, the molecule that releases the energy cells need to function. As there is not enough NAD+ or TPP for aerobic glycolysis or fatty acid oxidation, anaerobic glycolysis is used excessively, turning glycogen and glucose into pyruvate and then the pyruvate into lactate (fermentation). Fermentation also regenerates a small amount of NAD+ from NADH, but only enough to keep anaerobic glycolysis going. The excessive use of anaerobic glycolysis disrupts the lactate/pyruvate ratio, causing lactic acidosis. The decreased pyruvate inhibits gluconeogenesis and increases the release of fatty acids from adipose tissue. In the liver, the increase in plasma free fatty acids results in increased ketone production (which in excess causes ketoacidosis). The increased plasma free fatty acids, increased acetyl-CoA (accumulating from reduced Krebs cycle function), and increased NADH all contribute to increased fatty acid synthesis within the liver (which in excess causes fatty liver disease). Pseudohypoxia also leads to hyperuricemia, as elevated lactic acid inhibits uric acid secretion by the kidney, and the energy shortage from inhibited oxidative phosphorylation leads to increased turnover of adenosine nucleotides by the myokinase reaction and the purine nucleotide cycle. Research has shown that declining levels of NAD+ during aging cause pseudohypoxia, and that raising nuclear NAD+ in old mice reverses pseudohypoxia and metabolic dysfunction, thus reversing the aging process. It is expected that human NAD trials will begin in 2014. Pseudohypoxia is a feature commonly noted in poorly controlled diabetes. Reactions In poorly controlled diabetes, as insulin is insufficient, glucose cannot enter the cell and remains high in the blood (hyperglycemia). The polyol pathway converts glucose into fructose, which can then enter the cell without requiring insulin. The oxidative damage done to cells in diabetes damages DNA and activates poly (ADP-ribose) polymerases, or PARPs, such as PARP1. Both processes reduce the available NAD+. In ethanol catabolism, ethanol is converted into acetate, consuming NAD+. When alcohol is consumed in small quantities, the NADH/NAD+ ratio remains balanced enough for the acetyl-CoA (converted from acetate) to be used for oxidative phosphorylation. However, even moderate amounts of alcohol (1-2 drinks) result in more NADH than NAD+, which inhibits oxidative phosphorylation. In chronic excessive alcohol consumption, the microsomal ethanol oxidizing system (MEOS) is used in addition to alcohol dehydrogenase.
Diabetes
Polyol pathway
D-glucose + NADPH → Sorbitol + NADP+ (catalyzed by aldose reductase)
Sorbitol + NAD+ → D-fructose + NADH (catalyzed by sorbitol dehydrogenase)
Poly (ADP-ribose) polymerase-1
Protein + NAD+ → Protein + ADP-ribose + nicotinamide (catalyzed by PARP1)
Ethanol catabolism
Alcohol dehydrogenase
Ethanol + NAD+ → Acetaldehyde + NADH + H+ (catalyzed by alcohol dehydrogenase)
Acetaldehyde + NAD+ → Acetate + NADH + H+ (catalyzed by aldehyde dehydrogenase)
MEOS
Ethanol + NADPH + H+ + O2 → Acetaldehyde + NADP+ + 2H2O (catalyzed by CYP2E1)
Acetaldehyde + NAD+ → Acetate + NADH + H+ (catalyzed by aldehyde dehydrogenase)
See also Hypoxia (medical) Hypoxia (disambiguation) - list under Hypoxia (medical) e.g. Intrauterine hypoxia Bioenergetic systems - metabolic pathways of producing ATP Metabolic acidosis References Cell biology Medical signs Geriatrics Senescence
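To make the net cofactor cost of the pathways listed above easier to compare, here is a minimal bookkeeping sketch in Python. It only tallies the NAD+, NADP(H) and NADH appearing in the reactions exactly as written above; the dictionary layout, pathway labels and function name are illustrative choices, not a model of actual metabolic flux.

```python
from collections import Counter

# Net cofactor changes per reaction, taken from the reaction list above.
# Positive = produced, negative = consumed. Bookkeeping illustration only.
PATHWAYS = {
    "polyol pathway": [
        {"NADPH": -1, "NADP+": +1},   # aldose reductase step
        {"NAD+": -1, "NADH": +1},     # sorbitol dehydrogenase step
    ],
    "PARP1": [
        {"NAD+": -1},                 # ADP-ribosylation, releasing nicotinamide
    ],
    "ethanol (alcohol dehydrogenase route)": [
        {"NAD+": -1, "NADH": +1},     # ethanol -> acetaldehyde
        {"NAD+": -1, "NADH": +1},     # acetaldehyde -> acetate
    ],
    "ethanol (MEOS route)": [
        {"NADPH": -1, "NADP+": +1},   # CYP2E1 step
        {"NAD+": -1, "NADH": +1},     # aldehyde dehydrogenase step
    ],
}

def net_cofactor_change(pathway: str) -> Counter:
    """Sum the cofactor changes over every reaction in one pathway."""
    total = Counter()
    for reaction in PATHWAYS[pathway]:
        total.update(reaction)
    return total

for name in PATHWAYS:
    print(f"{name:40s} {dict(net_cofactor_change(name))}")
```

Running it shows, for instance, that the alcohol dehydrogenase route consumes two NAD+ and produces two NADH per ethanol molecule, which is the shift in the NADH/NAD+ ratio that the article attributes to alcohol consumption.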
Pseudohypoxia
Chemistry,Biology
1,075
11,128,659
https://en.wikipedia.org/wiki/Tilletia%20controversa
Tilletia controversa is a fungal plant pathogen. It is a fungus known to cause the smut disease TCK smut in soft white and hard red winter wheats. It stunts the growth of the plants and leaves smut balls in the grain heads. When the grain is milled the smut balls emit a fishy odor that lowers the quality of the flour. TCK smut exists in the western and northwestern United States, but is not considered a major problem. The disease took on policy significance because China applied a zero tolerance on the presence of TCK spores, resulting in a ban from 1974 to 1999 on shipments from the Pacific Northwest. Until the summer of 1996, China accepted shipments of U.S. wheat from the Gulf Coast, and negotiated price discounts with the shippers to cover the cost of decontamination if traces of TCK were found. Then in June 1996, China rejected all cargoes of U.S. wheat with traces of TCK. The November 1999 U.S.-China Agricultural Cooperation Agreement removes the ban and allows imports of U.S. wheat and other grains that meet a specific TCK tolerance level, thus improving the competitiveness of U.S. wheat with Canadian and Australian exports. Symptomology Presents in the dough stage. References External links Fungal plant pathogens and diseases Wheat diseases Ustilaginomycotina Fungi described in 1874 Fungus species
Tilletia controversa
Biology
291
10,603
https://en.wikipedia.org/wiki/Field%20%28mathematics%29
In mathematics, a field is a set on which addition, subtraction, multiplication, and division are defined and behave as the corresponding operations on rational and real numbers. A field is thus a fundamental algebraic structure which is widely used in algebra, number theory, and many other areas of mathematics. The best known fields are the field of rational numbers, the field of real numbers and the field of complex numbers. Many other fields, such as fields of rational functions, algebraic function fields, algebraic number fields, and p-adic fields are commonly used and studied in mathematics, particularly in number theory and algebraic geometry. Most cryptographic protocols rely on finite fields, i.e., fields with finitely many elements. The theory of fields proves that angle trisection and squaring the circle cannot be done with a compass and straightedge. Galois theory, devoted to understanding the symmetries of field extensions, provides an elegant proof of the Abel–Ruffini theorem that general quintic equations cannot be solved in radicals. Fields serve as foundational notions in several mathematical domains. This includes different branches of mathematical analysis, which are based on fields with additional structure. Basic theorems in analysis hinge on the structural properties of the field of real numbers. Most importantly for algebraic purposes, any field may be used as the scalars for a vector space, which is the standard general context for linear algebra. Number fields, the siblings of the field of rational numbers, are studied in depth in number theory. Function fields can help describe properties of geometric objects. Definition Informally, a field is a set, along with two operations defined on that set: an addition operation written as , and a multiplication operation written as , both of which behave similarly as they behave for rational numbers and real numbers, including the existence of an additive inverse for all elements , and of a multiplicative inverse for every nonzero element . This allows one to also consider the so-called inverse operations of subtraction, , and division, , by defining: , . Classic definition Formally, a field is a set together with two binary operations on called addition and multiplication. A binary operation on is a mapping , that is, a correspondence that associates with each ordered pair of elements of a uniquely determined element of . The result of the addition of and is called the sum of and , and is denoted . Similarly, the result of the multiplication of and is called the product of and , and is denoted or . These operations are required to satisfy the following properties, referred to as field axioms. These axioms are required to hold for all elements , , of the field : Associativity of addition and multiplication: , and . Commutativity of addition and multiplication: , and . Additive and multiplicative identity: there exist two distinct elements and in such that and . Additive inverses: for every in , there exists an element in , denoted , called the additive inverse of , such that . Multiplicative inverses: for every in , there exists an element in , denoted by or , called the multiplicative inverse of , such that . Distributivity of multiplication over addition: . 
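The axioms above can be checked mechanically for any finite candidate field. The sketch below, written in Python purely for illustration, does this exhaustively for the integers modulo 5 under the usual addition and multiplication; the choice of modulus and the function names are illustrative assumptions, not part of the definition.

```python
from itertools import product

P = 5                          # arithmetic modulo a prime yields a field (see finite fields below)
F = range(P)

def add(a, b): return (a + b) % P
def mul(a, b): return (a * b) % P

def satisfies_field_axioms() -> bool:
    for a, b, c in product(F, repeat=3):
        if add(add(a, b), c) != add(a, add(b, c)): return False            # associativity of +
        if mul(mul(a, b), c) != mul(a, mul(b, c)): return False            # associativity of *
        if mul(a, add(b, c)) != add(mul(a, b), mul(a, c)): return False    # distributivity
    for a, b in product(F, repeat=2):
        if add(a, b) != add(b, a) or mul(a, b) != mul(b, a): return False  # commutativity
    for a in F:
        if add(a, 0) != a or mul(a, 1) != a: return False                  # identities 0 and 1
        if not any(add(a, x) == 0 for x in F): return False                # additive inverse
        if a != 0 and not any(mul(a, x) == 1 for x in F): return False     # multiplicative inverse
    return True

print(satisfies_field_axioms())    # True; with P = 6 the multiplicative-inverse check fails
```

The same brute-force check applies verbatim to the four-element field described later in the article, once its addition and multiplication tables are substituted for the modular operations.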
An equivalent, and more succinct, definition is: a field has two commutative operations, called addition and multiplication; it is a group under addition with as the additive identity; the nonzero elements form a group under multiplication with as the multiplicative identity; and multiplication distributes over addition. Even more succinctly: a field is a commutative ring where and all nonzero elements are invertible under multiplication. Alternative definition Fields can also be defined in different, but equivalent ways. One can alternatively define a field by four binary operations (addition, subtraction, multiplication, and division) and their required properties. Division by zero is, by definition, excluded. In order to avoid existential quantifiers, fields can be defined by two binary operations (addition and multiplication), two unary operations (yielding the additive and multiplicative inverses respectively), and two nullary operations (the constants and ). These operations are then subject to the conditions above. Avoiding existential quantifiers is important in constructive mathematics and computing. One may equivalently define a field by the same two binary operations, one unary operation (the multiplicative inverse), and two (not necessarily distinct) constants and , since and . Examples Rational numbers Rational numbers have been widely used a long time before the elaboration of the concept of field. They are numbers that can be written as fractions , where and are integers, and . The additive inverse of such a fraction is , and the multiplicative inverse (provided that ) is , which can be seen as follows: The abstractly required field axioms reduce to standard properties of rational numbers. For example, the law of distributivity can be proven as follows: Real and complex numbers The real numbers , with the usual operations of addition and multiplication, also form a field. The complex numbers consist of expressions with real, where is the imaginary unit, i.e., a (non-real) number satisfying . Addition and multiplication of real numbers are defined in such a way that expressions of this type satisfy all field axioms and thus hold for . For example, the distributive law enforces It is immediate that this is again an expression of the above type, and so the complex numbers form a field. Complex numbers can be geometrically represented as points in the plane, with Cartesian coordinates given by the real numbers of their describing expression, or as the arrows from the origin to these points, specified by their length and an angle enclosed with some distinct direction. Addition then corresponds to combining the arrows to the intuitive parallelogram (adding the Cartesian coordinates), and the multiplication is – less intuitively – combining rotating and scaling of the arrows (adding the angles and multiplying the lengths). The fields of real and complex numbers are used throughout mathematics, physics, engineering, statistics, and many other scientific disciplines. Constructible numbers In antiquity, several geometric problems concerned the (in)feasibility of constructing certain numbers with compass and straightedge. For example, it was unknown to the Greeks that it is, in general, impossible to trisect a given angle in this way. These problems can be settled using the field of constructible numbers. Real constructible numbers are, by definition, lengths of line segments that can be constructed from the points 0 and 1 in finitely many steps using only compass and straightedge. 
These numbers, endowed with the field operations of real numbers, restricted to the constructible numbers, form a field, which properly includes the field of rational numbers. The illustration shows the construction of square roots of constructible numbers, not necessarily contained within . Using the labeling in the illustration, construct the segments , , and a semicircle over (center at the midpoint ), which intersects the perpendicular line through in a point , at a distance of exactly from when has length one. Not all real numbers are constructible. It can be shown that is not a constructible number, which implies that it is impossible to construct with compass and straightedge the length of the side of a cube with volume 2, another problem posed by the ancient Greeks. A field with four elements In addition to familiar number systems such as the rationals, there are other, less immediate examples of fields. The following example is a field consisting of four elements called , , , and . The notation is chosen such that plays the role of the additive identity element (denoted 0 in the axioms above), and is the multiplicative identity (denoted in the axioms above). The field axioms can be verified by using some more field theory, or by direct computation. For example, , which equals , as required by the distributivity. This field is called a finite field or Galois field with four elements, and is denoted or . The subset consisting of and (highlighted in red in the tables at the right) is also a field, known as the binary field or . Elementary notions In this section, denotes an arbitrary field and and are arbitrary elements of . Consequences of the definition One has and . In particular, one may deduce the additive inverse of every element as soon as one knows . If then or must be , since, if , then . This means that every field is an integral domain. In addition, the following properties are true for any elements and : if Additive and multiplicative groups of a field The axioms of a field imply that it is an abelian group under addition. This group is called the additive group of the field, and is sometimes denoted by when denoting it simply as could be confusing. Similarly, the nonzero elements of form an abelian group under multiplication, called the multiplicative group, and denoted by or just , or . A field may thus be defined as set equipped with two operations denoted as an addition and a multiplication such that is an abelian group under addition, is an abelian group under multiplication (where 0 is the identity element of the addition), and multiplication is distributive over addition. Some elementary statements about fields can therefore be obtained by applying general facts of groups. For example, the additive and multiplicative inverses and are uniquely determined by . The requirement is imposed by convention to exclude the trivial ring, which consists of a single element; this guides any choice of the axioms that define fields. Every finite subgroup of the multiplicative group of a field is cyclic (see ). Characteristic In addition to the multiplication of two elements of , it is possible to define the product of an arbitrary element of by a positive integer to be the -fold sum (which is an element of .) If there is no positive integer such that , then is said to have characteristic . For example, the field of rational numbers has characteristic 0 since no positive integer is zero. 
Otherwise, if there is a positive integer satisfying this equation, the smallest such positive integer can be shown to be a prime number. It is usually denoted by and the field is said to have characteristic then. For example, the field has characteristic since (in the notation of the above addition table) . If has characteristic , then for all in . This implies that , since all other binomial coefficients appearing in the binomial formula are divisible by . Here, ( factors) is the th power, i.e., the -fold product of the element . Therefore, the Frobenius map is compatible with the addition in (and also with the multiplication), and is therefore a field homomorphism. The existence of this homomorphism makes fields in characteristic quite different from fields of characteristic . Subfields and prime fields A subfield of a field is a subset of that is a field with respect to the field operations of . Equivalently is a subset of that contains , and is closed under addition, multiplication, additive inverse and multiplicative inverse of a nonzero element. This means that , that for all both and are in , and that for all in , both and are in . Field homomorphisms are maps between two fields such that , , and , where and are arbitrary elements of . All field homomorphisms are injective. If is also surjective, it is called an isomorphism (or the fields and are called isomorphic). A field is called a prime field if it has no proper (i.e., strictly smaller) subfields. Any field contains a prime field. If the characteristic of is (a prime number), the prime field is isomorphic to the finite field introduced below. Otherwise the prime field is isomorphic to . Finite fields Finite fields (also called Galois fields) are fields with finitely many elements, whose number is also referred to as the order of the field. The above introductory example is a field with four elements. Its subfield is the smallest field, because by definition a field has at least two distinct elements, and . The simplest finite fields, with prime order, are most directly accessible using modular arithmetic. For a fixed positive integer , arithmetic "modulo " means to work with the numbers The addition and multiplication on this set are done by performing the operation in question in the set of integers, dividing by and taking the remainder as result. This construction yields a field precisely if is a prime number. For example, taking the prime results in the above-mentioned field . For and more generally, for any composite number (i.e., any number which can be expressed as a product of two strictly smaller natural numbers), is not a field: the product of two non-zero elements is zero since in , which, as was explained above, prevents from being a field. The field with elements ( being prime) constructed in this way is usually denoted by . Every finite field has elements, where is prime and . This statement holds since may be viewed as a vector space over its prime field. The dimension of this vector space is necessarily finite, say , which implies the asserted statement. A field with elements can be constructed as the splitting field of the polynomial . Such a splitting field is an extension of in which the polynomial has zeros. This means has as many zeros as possible since the degree of is . For , it can be checked case by case using the above multiplication table that all four elements of satisfy the equation , so they are zeros of . 
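As a small computational companion to the preceding paragraphs, the Python sketch below (an added illustration, not part of the article) shows that arithmetic modulo a composite number has zero divisors, that every nonzero residue modulo a prime is invertible, and that every element of such a prime field is a zero of the polynomial x^q - x.

```python
def zero_divisors(n: int):
    """Pairs of nonzero residues whose product is 0 modulo n."""
    return [(a, b) for a in range(1, n) for b in range(1, n) if (a * b) % n == 0]

def has_all_inverses(p: int) -> bool:
    """True if every nonzero residue mod p has a multiplicative inverse."""
    return all(any((a * x) % p == 1 for x in range(1, p)) for a in range(1, p))

print(zero_divisors(6))      # [(2, 3), (3, 2), (3, 4), (4, 3)]: Z/6Z is not a field
print(has_all_inverses(5))   # True: Z/5Z is the field with five elements
print(all(pow(a, 5, 5) == a for a in range(5)))   # every element of that field is a zero of x^5 - x
```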
By contrast, in , has only two zeros (namely and ), so does not split into linear factors in this smaller field. Elaborating further on basic field-theoretic notions, it can be shown that two finite fields with the same order are isomorphic. It is thus customary to speak of the finite field with elements, denoted by or . History Historically, three algebraic disciplines led to the concept of a field: the question of solving polynomial equations, algebraic number theory, and algebraic geometry. A first step towards the notion of a field was made in 1770 by Joseph-Louis Lagrange, who observed that permuting the zeros of a cubic polynomial in the expression (with being a third root of unity) only yields two values. This way, Lagrange conceptually explained the classical solution method of Scipione del Ferro and François Viète, which proceeds by reducing a cubic equation for an unknown to a quadratic equation for . Together with a similar observation for equations of degree 4, Lagrange thus linked what eventually became the concept of fields and the concept of groups. Vandermonde, also in 1770, and to a fuller extent, Carl Friedrich Gauss, in his Disquisitiones Arithmeticae (1801), studied the equation for a prime and, again using modern language, the resulting cyclic Galois group. Gauss deduced that a regular -gon can be constructed if . Building on Lagrange's work, Paolo Ruffini claimed (1799) that quintic equations (polynomial equations of degree ) cannot be solved algebraically; however, his arguments were flawed. These gaps were filled by Niels Henrik Abel in 1824. Évariste Galois, in 1832, devised necessary and sufficient criteria for a polynomial equation to be algebraically solvable, thus establishing in effect what is known as Galois theory today. Both Abel and Galois worked with what is today called an algebraic number field, but conceived neither an explicit notion of a field, nor of a group. In 1871 Richard Dedekind introduced, for a set of real or complex numbers that is closed under the four arithmetic operations, the German word Körper, which means "body" or "corpus" (to suggest an organically closed entity). The English term "field" was introduced by . In 1881 Leopold Kronecker defined what he called a domain of rationality, which is a field of rational fractions in modern terms. Kronecker's notion did not cover the field of all algebraic numbers (which is a field in Dedekind's sense), but on the other hand was more abstract than Dedekind's in that it made no specific assumption on the nature of the elements of a field. Kronecker interpreted a field such as abstractly as the rational function field . Prior to this, examples of transcendental numbers were known since Joseph Liouville's work in 1844, until Charles Hermite (1873) and Ferdinand von Lindemann (1882) proved the transcendence of and , respectively. The first clear definition of an abstract field is due to . In particular, Heinrich Martin Weber's notion included the field . Giuseppe Veronese (1891) studied the field of formal power series, which led to introduce the field of -adic numbers. synthesized the knowledge of abstract field theory accumulated so far. He axiomatically studied the properties of fields and defined many important field-theoretic concepts. The majority of the theorems mentioned in the sections Galois theory, Constructing fields and Elementary notions can be found in Steinitz's work. linked the notion of orderings in a field, and thus the area of analysis, to purely algebraic properties. 
Emil Artin redeveloped Galois theory from 1928 through 1942, eliminating the dependency on the primitive element theorem. Constructing fields Constructing fields from rings A commutative ring is a set that is equipped with an addition and multiplication operation and satisfies all the axioms of a field, except for the existence of multiplicative inverses . For example, the integers form a commutative ring, but not a field: the reciprocal of an integer is not itself an integer, unless . In the hierarchy of algebraic structures fields can be characterized as the commutative rings in which every nonzero element is a unit (which means every element is invertible). Similarly, fields are the commutative rings with precisely two distinct ideals, and . Fields are also precisely the commutative rings in which is the only prime ideal. Given a commutative ring , there are two ways to construct a field related to , i.e., two ways of modifying such that all nonzero elements become invertible: forming the field of fractions, and forming residue fields. The field of fractions of is , the rationals, while the residue fields of are the finite fields . Field of fractions Given an integral domain , its field of fractions is built with the fractions of two elements of exactly as Q is constructed from the integers. More precisely, the elements of are the fractions where and are in , and . Two fractions and are equal if and only if . The operation on the fractions work exactly as for rational numbers. For example, It is straightforward to show that, if the ring is an integral domain, the set of the fractions form a field. The field of the rational fractions over a field (or an integral domain) is the field of fractions of the polynomial ring . The field of Laurent series over a field is the field of fractions of the ring of formal power series (in which ). Since any Laurent series is a fraction of a power series divided by a power of (as opposed to an arbitrary power series), the representation of fractions is less important in this situation, though. Residue fields In addition to the field of fractions, which embeds injectively into a field, a field can be obtained from a commutative ring by means of a surjective map onto a field . Any field obtained in this way is a quotient , where is a maximal ideal of . If has only one maximal ideal , this field is called the residue field of . The ideal generated by a single polynomial in the polynomial ring (over a field ) is maximal if and only if is irreducible in , i.e., if cannot be expressed as the product of two polynomials in of smaller degree. This yields a field This field contains an element (namely the residue class of ) which satisfies the equation . For example, is obtained from by adjoining the imaginary unit symbol , which satisfies , where . Moreover, is irreducible over , which implies that the map that sends a polynomial to yields an isomorphism Constructing fields within a bigger field Fields can be constructed inside a given bigger container field. Suppose given a field , and a field containing as a subfield. For any element of , there is a smallest subfield of containing and , called the subfield of F generated by and denoted . The passage from to is referred to by adjoining an element to . More generally, for a subset , there is a minimal subfield of containing and , denoted by . The compositum of two subfields and of some field is the smallest subfield of containing both and . 
The compositum can be used to construct the biggest subfield of satisfying a certain property, for example the biggest subfield of , which is, in the language introduced below, algebraic over . Field extensions The notion of a subfield can also be regarded from the opposite point of view, by referring to being a field extension (or just extension) of , denoted by , and read " over ". A basic datum of a field extension is its degree , i.e., the dimension of as an -vector space. It satisfies the formula . Extensions whose degree is finite are referred to as finite extensions. The extensions and are of degree , whereas is an infinite extension. Algebraic extensions A pivotal notion in the study of field extensions are algebraic elements. An element is algebraic over if it is a root of a polynomial with coefficients in , that is, if it satisfies a polynomial equation , with in , and . For example, the imaginary unit in is algebraic over , and even over , since it satisfies the equation . A field extension in which every element of is algebraic over is called an algebraic extension. Any finite extension is necessarily algebraic, as can be deduced from the above multiplicativity formula. The subfield generated by an element , as above, is an algebraic extension of if and only if is an algebraic element. That is to say, if is algebraic, all other elements of are necessarily algebraic as well. Moreover, the degree of the extension , i.e., the dimension of as an -vector space, equals the minimal degree such that there is a polynomial equation involving , as above. If this degree is , then the elements of have the form For example, the field of Gaussian rationals is the subfield of consisting of all numbers of the form where both and are rational numbers: summands of the form (and similarly for higher exponents) do not have to be considered here, since can be simplified to . Transcendence bases The above-mentioned field of rational fractions , where is an indeterminate, is not an algebraic extension of since there is no polynomial equation with coefficients in whose zero is . Elements, such as , which are not algebraic are called transcendental. Informally speaking, the indeterminate and its powers do not interact with elements of . A similar construction can be carried out with a set of indeterminates, instead of just one. Once again, the field extension discussed above is a key example: if is not algebraic (i.e., is not a root of a polynomial with coefficients in ), then is isomorphic to . This isomorphism is obtained by substituting to in rational fractions. A subset of a field is a transcendence basis if it is algebraically independent (do not satisfy any polynomial relations) over and if is an algebraic extension of . Any field extension has a transcendence basis. Thus, field extensions can be split into ones of the form (purely transcendental extensions) and algebraic extensions. Closure operations A field is algebraically closed if it does not have any strictly bigger algebraic extensions or, equivalently, if any polynomial equation , with coefficients , has a solution . By the fundamental theorem of algebra, is algebraically closed, i.e., any polynomial equation with complex coefficients has a complex solution. The rational and the real numbers are not algebraically closed since the equation does not have any rational or real solution. 
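The equation just mentioned does acquire solutions once the imaginary unit is adjoined, and the resulting Gaussian rationals described earlier in this section are concrete enough to compute with directly. The short sketch below, an added illustration using Python's exact Fraction type, represents an element a + bi by the pair (a, b) and exercises the field operations, including the multiplicative inverse 1/(a + bi) = (a - bi)/(a^2 + b^2); the class name and printing format are our own choices.

```python
from fractions import Fraction as Q

class GaussianRational:
    """Element a + b*i of Q(i), with a and b kept as exact rationals."""
    def __init__(self, a, b):
        self.a, self.b = Q(a), Q(b)
    def __add__(self, other):
        return GaussianRational(self.a + other.a, self.b + other.b)
    def __mul__(self, other):
        # (a + bi)(c + di) = (ac - bd) + (ad + bc)i, using i*i = -1
        return GaussianRational(self.a * other.a - self.b * other.b,
                                self.a * other.b + self.b * other.a)
    def inverse(self):
        n = self.a ** 2 + self.b ** 2        # nonzero unless a = b = 0
        return GaussianRational(self.a / n, -self.b / n)
    def __repr__(self):
        return f"{self.a} + ({self.b})i"

i = GaussianRational(0, 1)
print(i * i)                                 # -1 + (0)i: i is a root of x^2 + 1, hence algebraic over Q
z = GaussianRational(Q(2, 3), Q(-1, 5))
print(z * z.inverse())                       # 1 + (0)i: every nonzero element has an inverse
```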
A field containing is called an algebraic closure of if it is algebraic over (roughly speaking, not too big compared to ) and is algebraically closed (big enough to contain solutions of all polynomial equations). By the above, is an algebraic closure of . The situation that the algebraic closure is a finite extension of the field is quite special: by the Artin–Schreier theorem, the degree of this extension is necessarily , and is elementarily equivalent to . Such fields are also known as real closed fields. Any field has an algebraic closure, which is moreover unique up to (non-unique) isomorphism. It is commonly referred to as the algebraic closure and denoted . For example, the algebraic closure of is called the field of algebraic numbers. The field is usually rather implicit since its construction requires the ultrafilter lemma, a set-theoretic axiom that is weaker than the axiom of choice. In this regard, the algebraic closure of , is exceptionally simple. It is the union of the finite fields containing (the ones of order ). For any algebraically closed field of characteristic , the algebraic closure of the field of Laurent series is the field of Puiseux series, obtained by adjoining roots of . Fields with additional structure Since fields are ubiquitous in mathematics and beyond, several refinements of the concept have been adapted to the needs of particular mathematical areas. Ordered fields A field F is called an ordered field if any two elements can be compared, so that and whenever and . For example, the real numbers form an ordered field, with the usual ordering . The Artin–Schreier theorem states that a field can be ordered if and only if it is a formally real field, which means that any quadratic equation only has the solution . The set of all possible orders on a fixed field is isomorphic to the set of ring homomorphisms from the Witt ring of quadratic forms over , to . An Archimedean field is an ordered field such that for each element there exists a finite expression whose value is greater than that element, that is, there are no infinite elements. Equivalently, the field contains no infinitesimals (elements smaller than all rational numbers); or, yet equivalent, the field is isomorphic to a subfield of . An ordered field is Dedekind-complete if all upper bounds, lower bounds (see Dedekind cut) and limits, which should exist, do exist. More formally, each bounded subset of is required to have a least upper bound. Any complete field is necessarily Archimedean, since in any non-Archimedean field there is neither a greatest infinitesimal nor a least positive rational, whence the sequence , every element of which is greater than every infinitesimal, has no limit. Since every proper subfield of the reals also contains such gaps, is the unique complete ordered field, up to isomorphism. Several foundational results in calculus follow directly from this characterization of the reals. The hyperreals form an ordered field that is not Archimedean. It is an extension of the reals obtained by including infinite and infinitesimal numbers. These are larger, respectively smaller than any real number. The hyperreals form the foundational basis of non-standard analysis. Topological fields Another refinement of the notion of a field is a topological field, in which the set is a topological space, such that all operations of the field (addition, multiplication, the maps and ) are continuous maps with respect to the topology of the space. 
The topology of all the fields discussed below is induced from a metric, i.e., a function that measures a distance between any two elements of . The completion of is another field in which, informally speaking, the "gaps" in the original field are filled, if there are any. For example, any irrational number , such as , is a "gap" in the rationals in the sense that it is a real number that can be approximated arbitrarily closely by rational numbers , in the sense that distance of and given by the absolute value is as small as desired. The following table lists some examples of this construction. The fourth column shows an example of a zero sequence, i.e., a sequence whose limit (for ) is zero. The field is used in number theory and -adic analysis. The algebraic closure carries a unique norm extending the one on , but is not complete. The completion of this algebraic closure, however, is algebraically closed. Because of its rough analogy to the complex numbers, it is sometimes called the field of complex p-adic numbers and is denoted by . Local fields The following topological fields are called local fields: finite extensions of (local fields of characteristic zero) finite extensions of , the field of Laurent series over (local fields of characteristic ). These two types of local fields share some fundamental similarities. In this relation, the elements and (referred to as uniformizer) correspond to each other. The first manifestation of this is at an elementary level: the elements of both fields can be expressed as power series in the uniformizer, with coefficients in . (However, since the addition in is done using carrying, which is not the case in , these fields are not isomorphic.) The following facts show that this superficial similarity goes much deeper: Any first-order statement that is true for almost all is also true for almost all . An application of this is the Ax–Kochen theorem describing zeros of homogeneous polynomials in . Tamely ramified extensions of both fields are in bijection to one another. Adjoining arbitrary -power roots of (in ), respectively of (in ), yields (infinite) extensions of these fields known as perfectoid fields. Strikingly, the Galois groups of these two fields are isomorphic, which is the first glimpse of a remarkable parallel between these two fields: Differential fields Differential fields are fields equipped with a derivation, i.e., allow to take derivatives of elements in the field. For example, the field , together with the standard derivative of polynomials forms a differential field. These fields are central to differential Galois theory, a variant of Galois theory dealing with linear differential equations. Galois theory Galois theory studies algebraic extensions of a field by studying the symmetry in the arithmetic operations of addition and multiplication. An important notion in this area is that of finite Galois extensions , which are, by definition, those that are separable and normal. The primitive element theorem shows that finite separable extensions are necessarily simple, i.e., of the form , where is an irreducible polynomial (as above). For such an extension, being normal and separable means that all zeros of are contained in and that has only simple zeros. The latter condition is always satisfied if has characteristic . For a finite Galois extension, the Galois group is the group of field automorphisms of that are trivial on (i.e., the bijections that preserve addition and multiplication and that send elements of to themselves). 
The importance of this group stems from the fundamental theorem of Galois theory, which constructs an explicit one-to-one correspondence between the set of subgroups of and the set of intermediate extensions of the extension . By means of this correspondence, group-theoretic properties translate into facts about fields. For example, if the Galois group of a Galois extension as above is not solvable (cannot be built from abelian groups), then the zeros of cannot be expressed in terms of addition, multiplication, and radicals, i.e., expressions involving . For example, the symmetric groups is not solvable for . Consequently, as can be shown, the zeros of the following polynomials are not expressible by sums, products, and radicals. For the latter polynomial, this fact is known as the Abel–Ruffini theorem: (and ), (where is regarded as a polynomial in , for some indeterminates , is any field, and ). The tensor product of fields is not usually a field. For example, a finite extension of degree is a Galois extension if and only if there is an isomorphism of -algebras . This fact is the beginning of Grothendieck's Galois theory, a far-reaching extension of Galois theory applicable to algebro-geometric objects. Invariants of fields Basic invariants of a field include the characteristic and the transcendence degree of over its prime field. The latter is defined as the maximal number of elements in that are algebraically independent over the prime field. Two algebraically closed fields and are isomorphic precisely if these two data agree. This implies that any two uncountable algebraically closed fields of the same cardinality and the same characteristic are isomorphic. For example, and are isomorphic (but not isomorphic as topological fields). Model theory of fields In model theory, a branch of mathematical logic, two fields and are called elementarily equivalent if every mathematical statement that is true for is also true for and conversely. The mathematical statements in question are required to be first-order sentences (involving , , the addition and multiplication). A typical example, for , an integer, is = "any polynomial of degree in has a zero in " The set of such formulas for all expresses that is algebraically closed. The Lefschetz principle states that is elementarily equivalent to any algebraically closed field of characteristic zero. Moreover, any fixed statement holds in if and only if it holds in any algebraically closed field of sufficiently high characteristic. If is an ultrafilter on a set , and is a field for every in , the ultraproduct of the with respect to is a field. It is denoted by , since it behaves in several ways as a limit of the fields : Łoś's theorem states that any first order statement that holds for all but finitely many , also holds for the ultraproduct. Applied to the above sentence , this shows that there is an isomorphism The Ax–Kochen theorem mentioned above also follows from this and an isomorphism of the ultraproducts (in both cases over all primes ) . In addition, model theory also studies the logical properties of various other types of fields, such as real closed fields or exponential fields (which are equipped with an exponential function ). Absolute Galois group For fields that are not algebraically closed (or not separably closed), the absolute Galois group is fundamentally important: extending the case of finite Galois extensions outlined above, this group governs all finite separable extensions of . 
By elementary means, the group can be shown to be the Prüfer group, the profinite completion of . This statement subsumes the fact that the only algebraic extensions of are the fields for , and that the Galois groups of these finite extensions are given by . A description in terms of generators and relations is also known for the Galois groups of -adic number fields (finite extensions of ). Representations of Galois groups and of related groups such as the Weil group are fundamental in many branches of arithmetic, such as the Langlands program. The cohomological study of such representations is done using Galois cohomology. For example, the Brauer group, which is classically defined as the group of central simple -algebras, can be reinterpreted as a Galois cohomology group, namely . K-theory Milnor K-theory is defined as The norm residue isomorphism theorem, proved around 2000 by Vladimir Voevodsky, relates this to Galois cohomology by means of an isomorphism Algebraic K-theory is related to the group of invertible matrices with coefficients the given field. For example, the process of taking the determinant of an invertible matrix leads to an isomorphism . Matsumoto's theorem shows that agrees with . In higher degrees, K-theory diverges from Milnor K-theory and remains hard to compute in general. Applications Linear algebra and commutative algebra If , then the equation has a unique solution in a field , namely This immediate consequence of the definition of a field is fundamental in linear algebra. For example, it is an essential ingredient of Gaussian elimination and of the proof that any vector space has a basis. The theory of modules (the analogue of vector spaces over rings instead of fields) is much more complicated, because the above equation may have several or no solutions. In particular systems of linear equations over a ring are much more difficult to solve than in the case of fields, even in the specially simple case of the ring of the integers. Finite fields: cryptography and coding theory A widely applied cryptographic routine uses the fact that discrete exponentiation, i.e., computing ( factors, for an integer ) in a (large) finite field can be performed much more efficiently than the discrete logarithm, which is the inverse operation, i.e., determining the solution to an equation . In elliptic curve cryptography, the multiplication in a finite field is replaced by the operation of adding points on an elliptic curve, i.e., the solutions of an equation of the form . Finite fields are also used in coding theory and combinatorics. Geometry: field of functions Functions on a suitable topological space into a field can be added and multiplied pointwise, e.g., the product of two functions is defined by the product of their values within the domain: . This makes these functions a -commutative algebra. For having a field of functions, one must consider algebras of functions that are integral domains. In this case the ratios of two functions, i.e., expressions of the form form a field, called field of functions. This occurs in two main cases. When is a complex manifold . In this case, one considers the algebra of holomorphic functions, i.e., complex differentiable functions. Their ratios form the field of meromorphic functions on . The function field of an algebraic variety (a geometric object defined as the common zeros of polynomial equations) consists of ratios of regular functions, i.e., ratios of polynomial functions on the variety. 
The function field of the -dimensional space over a field is , i.e., the field consisting of ratios of polynomials in indeterminates. The function field of is the same as the one of any open dense subvariety. In other words, the function field is insensitive to replacing by a (slightly) smaller subvariety. The function field is invariant under isomorphism and birational equivalence of varieties. It is therefore an important tool for the study of abstract algebraic varieties and for the classification of algebraic varieties. For example, the dimension, which equals the transcendence degree of , is invariant under birational equivalence. For curves (i.e., the dimension is one), the function field is very close to : if is smooth and proper (the analogue of being compact), can be reconstructed, up to isomorphism, from its field of functions. In higher dimension the function field remembers less, but still decisive information about . The study of function fields and their geometric meaning in higher dimensions is referred to as birational geometry. The minimal model program attempts to identify the simplest (in a certain precise sense) algebraic varieties with a prescribed function field. Number theory: global fields Global fields are in the limelight in algebraic number theory and arithmetic geometry. They are, by definition, number fields (finite extensions of ) or function fields over (finite extensions of ). As for local fields, these two types of fields share several similar features, even though they are of characteristic and positive characteristic, respectively. This function field analogy can help to shape mathematical expectations, often first by understanding questions about function fields, and later treating the number field case. The latter is often more difficult. For example, the Riemann hypothesis concerning the zeros of the Riemann zeta function (open as of 2017) can be regarded as being parallel to the Weil conjectures (proven in 1974 by Pierre Deligne). Cyclotomic fields are among the most intensely studied number fields. They are of the form , where is a primitive th root of unity, i.e., a complex number that satisfies and for all . For being a regular prime, Kummer used cyclotomic fields to prove Fermat's Last Theorem, which asserts the non-existence of rational nonzero solutions to the equation . Local fields are completions of global fields. Ostrowski's theorem asserts that the only completions of , a global field, are the local fields and . Studying arithmetic questions in global fields may sometimes be done by looking at the corresponding questions locally. This technique is called the local–global principle. For example, the Hasse–Minkowski theorem reduces the problem of finding rational solutions of quadratic equations to solving these equations in and , whose solutions can easily be described. Unlike for local fields, the Galois groups of global fields are not known. Inverse Galois theory studies the (unsolved) problem whether any finite group is the Galois group for some number field . Class field theory describes the abelian extensions, i.e., ones with abelian Galois group, or equivalently the abelianized Galois groups of global fields. A classical statement, the Kronecker–Weber theorem, describes the maximal abelian extension of : it is the field obtained by adjoining all primitive th roots of unity. Kronecker's Jugendtraum asks for a similarly explicit description of of general number fields . 
For imaginary quadratic fields, , , the theory of complex multiplication describes using elliptic curves. For general number fields, no such explicit description is known. Related notions In addition to the additional structure that fields may enjoy, fields admit various other related notions. Since in any field , any field has at least two elements. Nonetheless, there is a concept of field with one element, which is suggested to be a limit of the finite fields , as tends to . In addition to division rings, there are various other weaker algebraic structures related to fields such as quasifields, near-fields and semifields. There are also proper classes with field structure, which are sometimes called Fields, with a capital 'F'. The surreal numbers form a Field containing the reals, and would be a field except for the fact that they are a proper class, not a set. The nimbers, a concept from game theory, form such a Field as well. Division rings Dropping one or several axioms in the definition of a field leads to other algebraic structures. As was mentioned above, commutative rings satisfy all field axioms except for the existence of multiplicative inverses. Dropping instead commutativity of multiplication leads to the concept of a division ring or skew field; sometimes associativity is weakened as well. The only division rings that are finite-dimensional -vector spaces are itself, (which is a field), and the quaternions (in which multiplication is non-commutative). This result is known as the Frobenius theorem. The octonions , for which multiplication is neither commutative nor associative, is a normed alternative division algebra, but is not a division ring. This fact was proved using methods of algebraic topology in 1958 by Michel Kervaire, Raoul Bott, and John Milnor. Wedderburn's little theorem states that all finite division rings are fields. Notes Citations References , especially Chapter 13 . See especially Book 3 () and Book 6 (). External links Algebraic structures Abstract algebra
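Referring back to the cryptography paragraph in the Applications section: the asymmetry it describes between discrete exponentiation and the discrete logarithm can be demonstrated on a toy scale. The Python sketch below uses a deliberately small prime; the specific numbers and function name are illustrative assumptions, and real protocols use primes with hundreds of digits, for which the brute-force search shown here is hopeless.

```python
p, g = 2_147_483_647, 5     # a toy prime field F_p and a base g (illustrative parameters only)
k = 1_234_567               # secret exponent

# Discrete exponentiation: fast even for huge exponents (square-and-multiply under the hood).
h = pow(g, k, p)

# Discrete logarithm: recovering an exponent for h by brute force. Noticeably slow already
# at this toy size, and infeasible for cryptographic-size primes.
def brute_force_dlog(h, g, p):
    x, value = 0, 1
    while value != h:
        value = (value * g) % p
        x += 1
    return x

x = brute_force_dlog(h, g, p)
print(pow(g, x, p) == h)    # True: x solves g**x = h (it agrees with k up to the order of g)
```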
Field (mathematics)
Mathematics
9,020
58,622,672
https://en.wikipedia.org/wiki/Aspergillus%20viridinutans
Aspergillus viridinutans is a species of fungus in the genus Aspergillus. The species was first isolated in Frankston, Victoria, Australia, and described in 1954. It belongs to the Fumigati section of Aspergillus. Several fungi from this section produce heat-resistant ascospores, and isolates from this section are frequently obtained from locations where natural fires have previously occurred. A. viridinutans has been identified as a cause of chronic aspergillosis. The mycotoxin viriditoxin was first identified in A. viridinutans. A draft genome sequence of the strain derived from the original species description has been generated. Growth and morphology A. viridinutans can be cultivated on different media, including both Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. The growth morphology of the resulting colonies has been documented in reference photographs. References viridinutans Fungi described in 1954 Fungus species
Aspergillus viridinutans
Biology
221
47,234,522
https://en.wikipedia.org/wiki/Xi%27an%20H-20
The Xi'an H-20 (alternatively Xi'an H-X) is a projected subsonic stealth bomber design of the People's Liberation Army Air Force. It is referred to as a strategic project by the People's Liberation Army, and will be the first dedicated strategic bomber developed by China. The development of a strategic bomber was revealed in September 2016. Design and development In 2016, People's Liberation Army Air Force (PLAAF) general Ma Xiaotian announced that China was developing a new type of long-range bomber on the air force's open day. In 2018, a Chinese military spokesperson confirmed the development was making "great progress". According to the United States Department of Defense, the H-20 is expected to be a flying wing with a range of at least 8,500 km and a payload capacity of at least 10 tonnes; according to the RAND Corporation, an American-funded think tank, it will allow China "to reliably threaten U.S. targets within and beyond the Second Island Chain, to include key U.S. military bases in Guam and Hawaii." The payload is projected to be at least 10 tonnes of conventional or nuclear weapons. Over the years, multiple models and computer-generated pictures have surfaced on the internet, some published by magazines run by state-owned defense companies. Defense analysts have noted several recurring features on these models, including serrated air intakes, cranked-kite wings, and foldable twin tail surfaces that can be switched between being horizontal tailplanes and V-tails. In July 2022, Chinese state media suggested the H-20 was close to taking its maiden flight. In March 2024, during the second session of the 14th National People's Congress, the vice commander of the People's Liberation Army Air Force, Wang Wei, indicated that the H-20 would be revealed "very soon". See also References Citations Sources External links Chinese bomber aircraft H-20 Stealth aircraft Strategic bombers Proposed military aircraft
Xi'an H-20
Engineering
405
38,493,679
https://en.wikipedia.org/wiki/43%20Sagittarii
43 Sagittarii is a single star in the southern constellation of Sagittarius. It has the Bayer designation d Sagittarii, while 43 Sagittarii is the Flamsteed designation. This object is visible to the naked eye as a faint, yellow-hued star with an apparent visual magnitude of 4.88. From parallax measurements, it is estimated to lie around 470 light years away from the Sun. The star is drifting further from the Earth with a heliocentric radial velocity of +15.2 km/s. It is located near the ecliptic and thus is subject to lunar occultations. This is an aging giant/bright giant star with a stellar classification of G8II-III, and is most likely (97% chance) on the horizontal branch. It is around 350 million years old with 3.3 times the mass of the Sun. Having exhausted the supply of hydrogen at its core, the star has expanded to 24 times the Sun's radius and is now generating energy through core helium fusion. It is radiating 277 times the luminosity of the Sun from its swollen photosphere at an effective temperature of 4,813 K. References G-type bright giants G-type giants Horizontal-branch stars Sagittarius (constellation) Sagittarii, d BD-19 5379 Sagittarii, 43 180540 094820 7304
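The radius, effective temperature and luminosity quoted above can be cross-checked against the Stefan–Boltzmann relation L/L_sun = (R/R_sun)^2 (T/T_sun)^4. The short Python sketch below does so; the solar effective temperature of 5772 K is the IAU nominal value and is an assumption added here, not a figure from the article.

```python
# Consistency check of the quoted stellar parameters via the Stefan-Boltzmann relation.
R = 24.0        # radius in solar radii (quoted above)
T = 4813.0      # effective temperature in kelvin (quoted above)
T_SUN = 5772.0  # nominal solar effective temperature (assumed IAU value)

luminosity = R**2 * (T / T_SUN)**4   # luminosity in solar units
print(round(luminosity))             # ~278, in good agreement with the 277 L_sun quoted above
```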
43 Sagittarii
Astronomy
294
50,512,532
https://en.wikipedia.org/wiki/Fixed-point%20ocean%20observatory
A fixed-point ocean observatory is an autonomous ocean-observing system of automatic sensors and samplers that continuously gathers data from the deep sea, the water column and the lower atmosphere, and transmits the data to shore in real or near real time. Infrastructure Fixed-point ocean observatories are typically composed of a cable anchored to the sea floor to which several automatic sensors and samplers are attached. The cable ends with a buoy at the ocean surface that may carry additional sensors. Most observatories have communicating buoys that transmit data to shore and allow the acquisition settings of the sensors to be changed as required. These unmanned platforms can be linked to shore by a cable, transmitting data via an internet connection, or they can transmit data to relay buoys that provide a satellite link to shore. An example of a network of observatories is the Ocean Observatories Initiative. Instrumentation A typical multi-disciplinary observatory is equipped with sensors and instruments to measure physical and biogeochemical variables along the water column. Additionally, the surface buoy can hold several sensors measuring atmospheric parameters at sea level. To measure these variables, ocean observatories are typically equipped with instruments such as: ADCP – Acoustic Doppler current profiler, to measure currents; CTD (conductivity, temperature and depth) sensors, to measure conductivity and thermal variations at a known depth; Hydrophone – to record sounds; Sediment Trap – to quantify the amount of sinking material; Deep sea camera – to capture footage on location; Seismometer – to record earth motion; CO2 analyser – to measure CO2; Dissolved Oxygen sensor – to measure dissolved oxygen; Fluorometers – to measure chlorophyll; Turbidity sensor – to measure turbidity. Purpose Ocean observatories can collect data for different purposes, from scientific research to environmental monitoring for marine operations or governance, for the benefit of the economy and society as a whole. Ocean observatories provide real-time or near real-time data, allowing changes such as geohazards to be detected as they happen. Furthermore, continuous time-series data make it possible to investigate interannual-to-decadal changes, to capture episodic events as well as changes in ocean circulation, water properties, water mass formation and ecosystems, to quantify air-sea fluxes, and to analyse the role of the oceans in the climate. The data collected by the many ocean observatories around the globe on the sub-seafloor, the seafloor, and the water column help improve our knowledge of the ocean, including: Ocean physics and climate change Biodiversity and ecosystem assessment Carbon cycle and ocean acidification Geophysics and geodynamics Moreover, networks of ocean observatories can be used to feed data into global ocean models and to calibrate them, thus allowing the investigation of future changes in ocean circulation and ecosystems. See also VENUS Canada, an ocean observatory operated by Ocean Networks Canada. NEPTUNE Canada, a sister observatory to VENUS, also operated by Ocean Networks Canada. MARS, a similar MBARI cable-based oceanography observatory. SATURN, Science and Technology University Research Network, a coastal margin, or river-to-ocean, testbed observatory for the United States Pacific Northwest, a project of the National Science Foundation Science and Technology Center for Coastal Margin Observation and Prediction.
Ocean development References External links JERICO-NEXT Project (Joint European Research Infrastructure network for Coastal Observatories) European Multidisciplinary Seafloor and water-column Observatory Oceanography
Fixed-point ocean observatory
Physics,Environmental_science
739
16,859,117
https://en.wikipedia.org/wiki/CACNA2D2
Voltage-dependent calcium channel subunit alpha2delta-2 is a protein that in humans is encoded by the CACNA2D2 gene. This gene encodes a member of the alpha-2/delta subunit family, a protein in the voltage-dependent calcium channel complex. Calcium channels mediate the influx of calcium ions into the cell upon membrane depolarization and consist of a complex of alpha-1, alpha-2/delta, beta, and gamma subunits in a 1:1:1:1 ratio. Various versions of each of these subunits exist, either expressed from similar genes or the result of alternative splicing. Research on a highly similar protein in rabbit suggests that this protein is cleaved into alpha-2 and delta subunits. Alternate transcriptional splice variants of this gene, encoding different isoforms, have been characterized. CACNA2D2-containing channels are blocked by amiodarone. See also Voltage-dependent calcium channel References Further reading External links Ion channels
CACNA2D2
Chemistry
208
49,342,572
https://en.wikipedia.org/wiki/Group%20actions%20in%20computational%20anatomy
Group actions are central to Riemannian geometry and the definition of orbits (control theory). The orbits of computational anatomy consist of anatomical shapes and medical images; the anatomical shapes are submanifolds of differential geometry consisting of points, curves, surfaces and subvolumes. This generalizes the ideas of the more familiar orbits of linear algebra, which are linear vector spaces. Medical images are scalar and tensor images from medical imaging. The group actions are used to define models of human shape which accommodate variation. These orbits are deformable templates as originally formulated more abstractly in pattern theory. The orbit model of computational anatomy The central model of human anatomy in computational anatomy is an orbit under a group action, a classic formulation from differential geometry. The orbit is called the space of shapes and forms. The space of shapes is denoted m ∈ M, with the group g ∈ G with law of composition ∘; the action of the group on shapes is denoted g · m, where the action of the group is defined to satisfy (g ∘ g′) · m = g · (g′ · m). The orbit of the template m_temp becomes the space of all shapes, M = {m = g · m_temp : g ∈ G}. Several group actions in computational anatomy The central group in CA defined on volumes in R^3 is the diffeomorphism group: mappings φ = (φ1, φ2, φ3) with 3 components, with law of composition of functions φ ∘ φ′(x) = φ(φ′(x)) and inverse φ⁻¹ satisfying φ ∘ φ⁻¹(x) = φ(φ⁻¹(x)) = x. Submanifolds: organs, subcortical structures, charts, and immersions For sub-manifolds X ⊂ R^3, parametrized by a chart or immersion m(u), u ∈ U, the diffeomorphic action is the flow of the position: φ · m(u) = φ(m(u)), u ∈ U. Scalar images such as MRI, CT, PET Most popular are scalar images, I(x), x ∈ R^3, with action on the right via the inverse: φ · I(x) = I ∘ φ⁻¹(x). (A small numerical sketch of this scalar action appears below.)
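To make the scalar-image action concrete, here is a minimal numerical sketch in Python. The disk image, the sinusoidal deformation and all parameter values are illustrative assumptions rather than anything from the computational anatomy literature; the deformation is specified directly as a sampled inverse map, so the pull-back I ∘ φ⁻¹ is a single interpolation step.

# Minimal sketch of the right action on a scalar image: (phi . I)(x) = I(phi^{-1}(x)).
import numpy as np
from scipy.ndimage import map_coordinates

n = 128
yy, xx = np.mgrid[0:n, 0:n].astype(float)

# Illustrative template image I: a bright disk on a dark background.
I = ((xx - n / 2) ** 2 + (yy - n / 2) ** 2 < (n / 4) ** 2).astype(float)

# Illustrative smooth deformation, written directly as phi^{-1}:
# phi^{-1}(x, y) = (x - a*sin(2*pi*y/n), y); for small a this is the inverse
# of a diffeomorphism of the image domain (up to boundary effects).
a = 5.0
inv_x = xx - a * np.sin(2 * np.pi * yy / n)
inv_y = yy

# Pull back intensities: sample I at phi^{-1} of every grid point.
phi_dot_I = map_coordinates(I, [inv_y, inv_x], order=1, mode='nearest')
print(phi_dot_I.shape)  # (128, 128): the deformed image on the original grid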
Oriented tangents on curves, eigenvectors of tensor matrices Many different imaging modalities are being used with various actions. For images I(x) taking values in R^3, such as oriented tangents, the tangent-type action is φ · I = ((Dφ) I) ∘ φ⁻¹, with Dφ the Jacobian matrix of φ. Tensor matrices Cao et al. examined actions for mapping MRI images measured via diffusion tensor imaging and represented via their principal eigenvector. For tensor fields given by a positively oriented orthonormal basis (e1(x), e2(x), e3(x)) of R^3, termed a frame, with vector cross product denoted e1 × e2, the frame deforms as follows: e1 deforms as a tangent, e3 deforms like a normal to the plane generated by e1 and e2, and e2 is uniquely constrained by the basis being positive and orthonormal. For non-negative symmetric matrices M(x), an action would become φ · M = ((Dφ) M (Dφ)^T) ∘ φ⁻¹. For mapping MRI DTI images (tensors), the eigenvalues are preserved, with the diffeomorphism rotating the eigenvectors. Given eigenelements {λ1 ≥ λ2 ≥ λ3, e1, e2, e3}, the action becomes φ · M = (λ1 ê1 ê1^T + λ2 ê2 ê2^T + λ3 ê3 ê3^T) ∘ φ⁻¹, where ê1 = Dφ e1 / ‖Dφ e1‖, ê2 is the transport of e2 re-orthonormalized against ê1, and ê3 = ê1 × ê2. Orientation Distribution Function and High Angular Resolution HARDI The orientation distribution function (ODF) characterizes the angular profile of the diffusion probability density function of water molecules and can be reconstructed from High Angular Resolution Diffusion Imaging (HARDI). The ODF is a probability density function defined on a unit sphere, S². In the field of information geometry, the space of ODFs forms a Riemannian manifold with the Fisher-Rao metric. For the purpose of LDDMM ODF mapping, the square-root representation is chosen because it is one of the most efficient representations found to date as the various Riemannian operations, such as geodesics, exponential maps, and logarithm maps, are available in closed form. In the following, denote the square-root ODF (√ODF) as ψ(s), where ψ(s) is non-negative to ensure uniqueness and ∫_{s∈S²} ψ²(s) ds = 1. Denote the diffeomorphic transformation as φ. The group action of the diffeomorphism on ψ, φ · ψ, needs to guarantee the non-negativity and the unit-norm constraint ∫_{s∈S²} (φ · ψ)²(s) ds = 1. Based on the derivation in the cited work, this group action is defined as (φ · ψ)(s) = √( det(D_{φ⁻¹}φ)⁻¹ / ‖(D_{φ⁻¹}φ)⁻¹ s‖³ ) ψ( (D_{φ⁻¹}φ)⁻¹ s / ‖(D_{φ⁻¹}φ)⁻¹ s‖ ), where D_{φ⁻¹}φ denotes the Jacobian of φ evaluated at φ⁻¹. References Computational anatomy Geometry Fluid mechanics Theory of probability distributions Neural engineering Biomedical engineering
Group actions in computational anatomy
Physics,Mathematics,Engineering,Biology
751
31,594,746
https://en.wikipedia.org/wiki/Bis%28trimethylsilyl%29acetylene
Bis(trimethylsilyl)acetylene (BTMSA) is an organosilicon compound with the formula Me3SiC≡CSiMe3 (Me = methyl). It is a crystalline solid that melts slightly above room temperature and is soluble in organic solvents. This compound is used as a surrogate for acetylene. BTMSA is prepared by treating acetylene with butyllithium followed by addition of trimethylsilyl chloride (Me = CH3, Bu = C4H9): HC≡CH + 2 BuLi → LiC≡CLi + 2 BuH LiC≡CLi + 2 Me3SiCl → Me3SiC≡CSiMe3 + 2 LiCl Applications BTMSA is used as a nucleophile in Friedel-Crafts type acylations and alkylations and as a precursor to lithium trimethylsilylacetylide. The TMS groups can be removed with tetra-n-butylammonium fluoride (TBAF) and replaced with protons. BTMSA is also a useful reagent in cycloaddition reactions. Illustrating its versatility, BTMSA was used in a concise total synthesis of (±)-estrone. A key step in this synthesis was the formation of the steroidal skeleton, catalyzed by CpCo(CO)2. BTMSA also serves as a ligand in organometallic chemistry. For example, it forms stable adducts with metallocenes: Cp2TiCl2 + Mg + Me3SiC≡CSiMe3 → Cp2Ti[(CSiMe3)2] + MgCl2 References Alkyne derivatives Trimethylsilyl compounds
Bis(trimethylsilyl)acetylene
Chemistry
365
1,037,163
https://en.wikipedia.org/wiki/Whispering%20gallery
A whispering gallery is usually a circular, hemispherical, elliptical or ellipsoidal enclosure, often beneath a dome or a vault, in which whispers can be heard clearly in other parts of the gallery. Such galleries can also be set up using two parabolic dishes. Sometimes the phenomenon is detected in caves. Theory A whispering gallery is most simply constructed in the form of a circular wall, and allows whispered communication from any part of the internal side of the circumference to any other part. The sound is carried by waves, known as whispering-gallery waves, that travel around the circumference clinging to the walls, an effect that was discovered in the whispering gallery of St Paul's Cathedral in London. The extent to which the sound travels at St Paul's can also be judged by clapping in the gallery, which produces four echoes. Other historical examples are the Gol Gumbaz mausoleum in Bijapur, India and the Echo Wall of the Temple of Heaven in Beijing. A hemispherical enclosure will also guide whispering-gallery waves. The waves carry the words so that others will be able to hear them from the opposite side of the gallery. The gallery may also be in the form of an ellipse or ellipsoid, with an accessible point at each focus. In this case, when a visitor stands at one focus and whispers, the line of sound emanating from this focus reflects directly to the focus at the other end of the gallery, where the whispers may be heard (a numerical sketch of this focal reflection property is given below). In a similar way, two large concave parabolic dishes, serving as acoustic mirrors, may be erected facing each other in a room or outdoors to serve as a whispering gallery, a common feature of science museums. Egg-shaped galleries, such as the Golghar Granary at Bankipore, and irregularly shaped smooth-walled galleries in the form of caves, such as the Ear of Dionysius in Syracuse, also exist. Examples India The Gol Gumbaz in Bijapur, India. The Golghar Granary in Bankipore, India. The Victoria Memorial in Kolkata. United Kingdom St Paul's Cathedral in London is the place where whispering-gallery waves were first discovered by Lord Rayleigh. Gloucester Cathedral has a whispering gallery. The Berkeley Wetherspoons Bristol has a whispering gallery. United States Grand Central Terminal in New York City: a landing amid the Oyster Bar ramps, in front of the Oyster Bar restaurant Statuary Hall in the United States Capitol. Salt Lake Tabernacle in Salt Lake City, Utah Centennial fountain in front of Green Library at Stanford University in California Gates Circle, Buffalo, New York The Whispering Arch in St. Louis Union Station Charles Stover Bench, Central Park, New York, New York Waldo Hutchins Bench, Central Park, New York, New York Other parts of the world The Echo Wall in the Temple of Heaven in Beijing. Basilica of St. John Lateran, Rome. The Salle des Caryatides in the Louvre, Paris, France. Ear of Dionysius cave in Syracuse, Sicily. Banco dos Namorados (Lovers' bench) in Santiago de Compostela, Spain. In science The term whispering gallery has been borrowed in the physical sciences to describe other forms of whispering-gallery waves such as light or matter waves. See also Acoustic mirror Parabolic loudspeaker Room acoustics References External links Ear of Dionysius: visiting information, videos and sounds of this cave. Grand Central Station: visiting information, videos and sounds of the whispering gallery. St Paul's Cathedral: visiting information, videos and sounds of the whispering gallery. Acoustics Rooms
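The focal property of the elliptical gallery can be checked numerically. Below is a minimal sketch (the ellipse axes and sample points are arbitrary illustrative choices, not measurements of any real gallery): a ray leaving one focus and reflecting specularly off the wall is verified to head straight for the other focus.

# Numerical check of the elliptical whispering-gallery property: a ray from
# one focus, specularly reflected at the wall, passes through the other focus.
import numpy as np

a, b = 5.0, 3.0                          # illustrative semi-axes
c = np.sqrt(a**2 - b**2)                 # distance from center to each focus
f1, f2 = np.array([-c, 0.0]), np.array([c, 0.0])

for t in np.linspace(0.1, 6.2, 12):      # sample points around the wall
    p = np.array([a * np.cos(t), b * np.sin(t)])
    n = np.array([p[0] / a**2, p[1] / b**2])     # normal = gradient of ellipse
    n /= np.linalg.norm(n)
    d = (p - f1) / np.linalg.norm(p - f1)        # incoming direction from focus 1
    r = d - 2 * np.dot(d, n) * n                 # specular reflection at the wall
    toward_f2 = (f2 - p) / np.linalg.norm(f2 - p)
    assert np.allclose(r, toward_f2, atol=1e-9)  # reflected ray aims at focus 2
print("every reflected ray from focus 1 passes through focus 2")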
Whispering gallery
Physics,Engineering
746
10,939,045
https://en.wikipedia.org/wiki/Carboxypeptidase%20E
Carboxypeptidase E (CPE), also known as carboxypeptidase H (CPH) and enkephalin convertase, is an enzyme that in humans is encoded by the CPE gene. This enzyme catalyzes the release of C-terminal arginine or lysine residues from polypeptides. CPE is involved in the biosynthesis of most neuropeptides and peptide hormones. The production of neuropeptides and peptide hormones typically requires two sets of enzymes that cleave the peptide precursors, which are small proteins. First, proprotein convertases cut the precursor at specific sites to generate intermediates containing C-terminal basic residues (lysine and/or arginine). These intermediates are then cleaved by CPE to remove the basic residues. For some peptides, additional processing steps, such as C-terminal amidation, are subsequently required to generate the bioactive peptide, although for many peptides the action of the proprotein convertases and CPE is sufficient to produce the bioactive peptide. Tissue distribution Carboxypeptidase E is found in the brain and throughout the neuroendocrine system, including the endocrine pancreas, pituitary, and adrenal gland chromaffin cells. Within cells, carboxypeptidase E is present in the secretory granules along with its peptide substrates and products. Carboxypeptidase E is a glycoprotein that exists in both membrane-associated and soluble forms. The membrane binding is due to an amphiphilic α-helix within the C-terminal region of the protein. Species distribution Carboxypeptidase E is found in all species of vertebrates that have been examined, and is also present in many other organisms that have been studied (nematode, sea slug). Carboxypeptidase E is not found in the fruit fly (Drosophila), and another enzyme (presumably carboxypeptidase D) fills in for carboxypeptidase E in this organism. In humans, CPE is encoded by the CPE gene. Function Carboxypeptidase E functions in the production of nearly all neuropeptides and peptide hormones. The enzyme acts as an exopeptidase to activate neuropeptides. It does so by cleaving off basic C-terminal amino acids, producing the active form of the peptide. Products of carboxypeptidase E include insulin, the enkephalins, vasopressin, oxytocin, and most other neuroendocrine peptide hormones and neuropeptides. It has been proposed that membrane-associated carboxypeptidase E acts as a sorting signal for regulated secretory proteins in the trans-Golgi network of the pituitary and in secretory granules; regulated secretory proteins are mostly hormones and neuropeptides. However, this role for carboxypeptidase E remains controversial, and evidence shows that this enzyme is not necessary for the sorting of regulated secretory proteins. Clinical significance Mice with mutant carboxypeptidase E, Cpefat, display endocrine disorders like obesity and infertility. In some strains of mice, the fat mutation also causes hyperproinsulinemia in adult males, but this is not found in all strains. The obesity and infertility in the Cpefat mice develop with age; young mice (<8 weeks of age) are fertile and have normal body weight. Peptide processing in Cpefat mice is impaired, with a large accumulation of peptides with C-terminal lysine and/or arginine extensions. Levels of the mature forms of peptides are generally reduced in these mice, but not eliminated. It is thought that a related enzyme (carboxypeptidase D) also contributes to neuropeptide processing and gives rise to the mature peptides in the Cpefat mice. 
Mutations in the CPE gene are not common within the human population, but have been identified. One patient with extreme obesity (Body Mass Index >50) was found to have a mutation that deleted nearly the entire CPE gene. This patient had intellectual disability (inability to read or write) and had abnormal glucose homeostasis, similar to mice lacking CPE activity. In obesity, high levels of circulating free fatty acids have been reported to cause a decrease in the amount of carboxypeptidase E protein in pancreatic beta-cells, leading to beta-cell dysfunction (hyperproinsulinemia) and increased beta-cell apoptosis (via an increase in ER stress). However, because CPE is not a rate-limiting enzyme for the production of most neuropeptides and peptide hormones, it is not clear how relatively modest decreases in CPE activity can cause physiological effects. See also Carboxypeptidase Carboxypeptidase A References Further reading External links The MEROPS online database for peptidases and their inhibitors: M14.005 Proteins EC 3.4.17 Metabolism
Carboxypeptidase E
Chemistry,Biology
1,079
30,380,875
https://en.wikipedia.org/wiki/Datiscoside
Datiscoside is any one of several chemical compounds isolated from certain plants, notably Datisca glomerata. They can be seen as derivatives of the triterpene hydrocarbon cucurbitane, more specifically of cucurbitacin F. They include: Datiscoside B, from D. glomerata Datiscoside D, from D. glomerata Datiscoside H, from D. glomerata References Triterpene glycosides
Datiscoside
Chemistry
110
55,894,376
https://en.wikipedia.org/wiki/Valley%20of%20the%20Boom
Valley of the Boom (stylized as Valley_of_the_BOOM) is an American docudrama television miniseries created by Matthew Carnahan that premiered on January 13, 2019, on National Geographic. The series centers on the 1990s tech boom and bust in Silicon Valley and it stars Bradley Whitford, Steve Zahn, Lamorne Morris, John Karna, Dakota Shapiro, Oliver Cooper, and John Murphy. Premise Valley of the Boom takes a close look at "the culture of speculation, innovation and debauchery that led to the rapid inflation and burst of the 1990s tech bubble. As with its hybrid series Mars, Nat Geo [uses] select doc elements to support the scripted drama to tell the true inside story of the dramatic early days of Silicon Valley." The series features interviews with many of the people depicted in the dramatized portions of the production in addition to other Internet personalities such as Mark Cuban and Arianna Huffington. Notably absent from these interviews are Netscape co-founder and former vice president of technology Marc Andreessen, who declined to be interviewed, and Jamie Zawinski. Although the program is primarily focused on the quick rise and fall of three influential technology companies, namely Netscape, theGlobe.com, and Pixelon, the program also highlights smaller companies of that era, such as sfGirl.com. Cast and characters Main Bradley Whitford as James L. Barksdale Lamorne Morris as Darrin Morris Oliver Cooper as Todd Krizelman John Karna as Marc Andreessen Dakota Shapiro as Stephan Paternot John Murphy as Jim Clark Steve Zahn as Michael Fenne Recurring Raf Rogers as Sean Alvaro Chiara Zanni as Sheila Fred Henderson as Mike Egan Camille Hollett-French as Tara Hernandez Mike Kovac as Balding Ponytail Coder Nick Hunnings as Ed Cespedes Tom Stevens as Phillip Siobhan Williams as Jenn Vincent Dangerfield as Lee Wiskowski Jacob Richter as Dan Goodin Hilary Jardine as Patty Beron Paul Herbert as Paul Ward Carey Feehan as Robert Dunning Donna Benedicto as Kate Guest Keegan Connor Tracy as Rosanne Siino ("Part 1: print ("hello, world")") Luvia Petersen as Mary Meeker ("Part 1: print ("hello, world")") Michael Patrick Denis as Thomas Reardon ("Part 2: pseudocode") Jesse James as Barry Moore ("Part 4: priority inversion") Siobhan Williams as Jenn ("Part 4: priority inversion") Doug Abrahams as Ace Greenberg ("Part 4: priority inversion") David Stuart as Pit Boss ("Part 4: priority inversion") Keegan Connor Tracy as Rosanne Siino ("Part 5: segfault") Rachel Hayward as Joyce ("Part 5: segfault") Tom Stevens as Phillip ("Part 6: fatal error") Episodes Production Development On November 15, 2017, it was announced that National Geographic had given the production a series order consisting of six episodes. Executive producers included Matthew Carnahan, Arianna Huffington, Jason Goldberg, Brant Pinvidic, and David Walpert. Carnahan acted as showrunner for the series and directed as well. David Newsom was co-executive producer and led the non-scripted unit of the production. Joel Ehninger acted in the role of producer. Production companies involved with the series included STXtelevision and Matthew Carnahan Circus Products. On September 24, 2018, it was announced that the series would premiere on January 13, 2019. Casting On March 16, 2018, it was announced that Bradley Whitford, Steve Zahn, Lamorne Morris, John Karna, Dakota Shapiro, and Oliver Cooper had joined the series' main cast. 
Filming Principal photography for the series began on March 26, 2018, in Vancouver, Canada, and was expected to conclude by May 28, 2018. Release Marketing On July 24, 2018, the first trailer for the series was released. Premiere On September 21, 2018, the series held its world premiere during the second annual Tribeca TV Festival in New York City. Following a screening, a conversation took place featuring members of the cast and crew including creator Matthew Carnahan, actors Bradley Whitford, Steve Zahn, Lamorne Morris, and real-life subject Stephan Paternot, founder of theGlobe.com. Distribution The series premiered globally on National Geographic in 171 countries and 45 languages. STXtelevision distributes the series in China. Reception The series was met with a mixed response from critics upon its premiere. On the review aggregation website Rotten Tomatoes, the series holds a 72% approval rating with an average rating of 5.90 out of 10 based on 18 reviews. The website's critical consensus reads, "A visual collage of dot com history, Valley of Boom proves to be just as sprawling and ramshackle as the docuseries' subject." Metacritic, which uses a weighted average, assigned the series a score of 58 out of 100 based on 11 critics, indicating "mixed or average reviews". Notes References External links 2019 American television series debuts 2019 American television series endings Documentary television series about computing American English-language television shows National Geographic (American TV channel) original programming Nerd culture Science docudramas
Valley of the Boom
Technology
1,079
76,434,050
https://en.wikipedia.org/wiki/Dick%20Sandberg
Dick Sandberg (born May 8, 1967 in Söderhamn, Sweden) is a Swedish mechanical engineer and wood scientist at the Norwegian University of Science and Technology (NTNU), and an elected fellow (FIAWS) of the International Academy of Wood Science. He is currently the editor-in-chief of the journal Wood Material Science and Engineering, and during the period 2013-2024 he was a faculty member and chair professor at Luleå University of Technology. Career He received his PhD in mechanical engineering, with specialization in wood technology and processing, from the KTH Royal Institute of Technology in Stockholm in 1998. Afterwards, he worked as a wood specialist and manager in several wood enterprises in Sweden, and later served as a professor in forest products at Linnaeus University in Växjö. In 2015, he was elected as a chaired professor in the Wood Science and Engineering division of Luleå University of Technology, in Skellefteå, where he worked until September 2024. His main research interests include, among others, wood material properties, scanning technology and wood machining, as well as production systems. As of March 2024, he had published more than 300 research works in international journals and conferences. Recognition Since 2009, Sandberg has served as the editor-in-chief of the refereed scientific journal Wood Material Science and Engineering, published by the Taylor & Francis Group. In 2021, in recognition of his research work, he was elected as a fellow of the International Academy of Wood Science. In October 2023, a meta-research study carried out by John Ioannidis et al. at Stanford University, based on Elsevier data for 2022, ranked Dick Sandberg in the top 2% of researchers of all time in wood science (forestry – materials), with a c-index of 2.9702. In 2023, along with wood scientists Alfred Teischinger and Peter Niemz, he edited the Springer reference work "Handbook of Wood Science and Technology". References External links LTU – Wood Science and Engineering ResearchGate Swedish scientists Fellows of the International Academy of Wood Science Wood scientists 1967 births Living people Academic staff of the Norwegian University of Science and Technology
Dick Sandberg
Materials_science
449
376,538
https://en.wikipedia.org/wiki/Truncated%20dodecahedron
In geometry, the truncated dodecahedron is an Archimedean solid. It has 12 regular decagonal faces, 20 regular triangular faces, 60 vertices and 90 edges. Construction The truncated dodecahedron is constructed from a regular dodecahedron by cutting all of its vertices off, a process known as truncation. Alternatively, the truncated dodecahedron can be constructed by expansion: pushing away the edges of a regular dodecahedron, forming the pentagonal faces into decagonal faces, as well as the vertices into triangles. Therefore, it has 32 faces, 90 edges, and 60 vertices. The truncated dodecahedron may also be constructed by using Cartesian coordinates. With an edge length of 2φ − 2, centered at the origin, the vertices are all even permutations of (0, ±1/φ, ±(2 + φ)), (±1/φ, ±φ, ±2φ), (±φ, ±2, ±(φ + 1)), where φ is the golden ratio (a computational check of this coordinate set appears below). Properties The surface area A and the volume V of a truncated dodecahedron of edge length a are: A = 5(√3 + 6√(5 + 2√5)) a² ≈ 100.991 a², V = (5/12)(99 + 47√5) a³ ≈ 85.040 a³. The dihedral angle of a truncated dodecahedron between two decagonal faces is 116.57°, and that between a triangle and a decagon is 142.62°. The truncated dodecahedron is an Archimedean solid, meaning it is a highly symmetric and semi-regular polyhedron, with two or more different regular polygonal faces meeting at each vertex. It has the same symmetry as the regular icosahedron, the icosahedral symmetry. The polygonal faces that meet at every vertex are one equilateral triangle and two regular decagons, and the vertex figure of a truncated dodecahedron is 3.10.10. The dual of the truncated dodecahedron is the triakis icosahedron, a Catalan solid, which shares the same symmetry as the truncated dodecahedron. The truncated dodecahedron is non-chiral, meaning it is congruent to its mirror image. Truncated dodecahedral graph In the mathematical field of graph theory, a truncated dodecahedral graph is the graph of vertices and edges of the truncated dodecahedron, one of the Archimedean solids. It has 60 vertices and 90 edges, and is a cubic Archimedean graph. Related polyhedron The truncated dodecahedron can be used in the polyhedron construction known as augmentation. Examples are the Johnson solids obtained by attaching pentagonal cupolas onto the truncated dodecahedron: the augmented truncated dodecahedron, parabiaugmented truncated dodecahedron, metabiaugmented truncated dodecahedron, and triaugmented truncated dodecahedron. See also Great stellated truncated dodecahedron References Further reading External links Editable printable net of a truncated dodecahedron with interactive 3D view Uniform polyhedra Archimedean solids Truncated tilings
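The vertex and edge counts can be verified directly from the coordinates above. The following is a minimal Python sketch (the helper code is illustrative, not from any reference): it generates all sign choices and the even (cyclic) permutations of the three base triples, deduplicates, and counts minimum-distance pairs.

# Verify the truncated dodecahedron's vertex/edge counts from its coordinates.
from itertools import product
from math import sqrt, dist

phi = (1 + sqrt(5)) / 2  # golden ratio

base = [(0.0, 1 / phi, 2 + phi), (1 / phi, phi, 2 * phi), (phi, 2.0, phi + 1)]

def even_perms(v):
    x, y, z = v
    return [(x, y, z), (y, z, x), (z, x, y)]  # the 3 even permutations

verts = set()
for b in base:
    for signs in product([1, -1], repeat=3):
        signed = tuple(s * c for s, c in zip(signs, b))
        for p in even_perms(signed):
            verts.add(tuple(round(c, 9) for c in p))

verts = sorted(verts)
print(len(verts))                      # 60 vertices

# Edges are the pairs of vertices at the minimum pairwise distance, 2*phi - 2.
d_min = min(dist(u, v) for i, u in enumerate(verts) for v in verts[i + 1:])
edges = sum(1 for i, u in enumerate(verts) for v in verts[i + 1:]
            if abs(dist(u, v) - d_min) < 1e-6)
print(round(d_min, 6), edges)          # ~1.236068 (= 2*phi - 2) and 90 edges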
Truncated dodecahedron
Physics
558
74,350,224
https://en.wikipedia.org/wiki/John%20S.%20Montmollin
John Samuel de Montmollin II (1808 – June 9, 1859) of Savannah, Georgia, was an American slave trader, banker and plantation owner. According to descendants, Montmollin was heavily involved in the organization of the illegal slave transport Wanderer. Montmollin died in a steamboat boiler explosion on the Savannah River in 1859. Biography Montmollin's maternal grandfather was Jonathan Edwards the younger, thus he was a first cousin, once removed, to Aaron Burr; as vice president, Burr stayed at the Montmollin home in 1802 while visiting Savannah. Montmollin married at Savannah, in 1842, Miss Harriet M. Rossignol. In 1848, he was a city marshal of Savannah, where he owned a plantation. Montmollin was president of the Mechanics' Savings Bank of Savannah, which had been organized in 1854, and had capital amounting to in 1857. Beginning in 1856, he funded the construction of a still-extant three-story brick building now known as the John Montmollin Warehouse. The third floor was a slave pen (after the city was occupied by Union troops during the American Civil War the building was turned into a school for the city's African-American children, most of whom had never before had the opportunity to learn how to read or write). In December 1858 Montmollin sought to purchase "one or two gangs of rice field Negros." According to his daughter-in-law, who was interviewed in 1931, Montmollin sought to reopen the transatlantic slave trade and was responsible for organizing the illegal human trafficking transport Wanderer in 1858. John S. Montmollin was one of approximately eleven people killed when a boiler exploded on the Savannah River steamboat John G. Lawton on June 9, 1859. His body was found "imbedded in the marsh, head downwards, to the hips, some seventy to eighty yards from where the explosion occurred, showing it must have been driven very high into the air. A handkerchief, which he had in his hand at the time of the accident, was still tight in his grasp." Montmollin was killed "within a short distance of the spot where his [Wanderer] captives had been incarcerated" on an island in the Savannah River. Following Montmollin's death, his widow found that "her husband died owing debts of more than $30,000" and so in 1863 petitioned a court for permission to sell the estate slaves she had inherited. Permission was granted and she sold 81 slaves in Savannah in April 1863 for . See also List of Georgia slave traders Timeline of Savannah, Georgia Georgia in the American Civil War Nelson C. Trowbridge, another slave trader involved with the Wanderer References 1808 births 1859 deaths 19th-century American criminals 19th-century pirates Accidental deaths in Georgia (U.S. state) American bank presidents American slave owners American mass murderers American pirates American proslavery activists John People from Savannah, Georgia 19th-century American slave traders Deaths from explosion History of slavery in Georgia (U.S. state) 19th-century American planters Wanderer (slave ship)
John S. Montmollin
Chemistry
631
248,582
https://en.wikipedia.org/wiki/Local%20homeomorphism
In mathematics, more specifically topology, a local homeomorphism is a function between topological spaces that, intuitively, preserves local (though not necessarily global) structure. If f : X → Y is a local homeomorphism, X is said to be an étale space over Y. Local homeomorphisms are used in the study of sheaves. Typical examples of local homeomorphisms are covering maps. A topological space X is locally homeomorphic to Y if every point of X has a neighborhood that is homeomorphic to an open subset of Y. For example, a manifold of dimension n is locally homeomorphic to R^n. If there is a local homeomorphism from X to Y, then X is locally homeomorphic to Y, but the converse is not always true. For example, the two-dimensional sphere, being a manifold, is locally homeomorphic to the plane R^2, but there is no local homeomorphism S^2 → R^2. Formal definition A function f : X → Y between two topological spaces is called a local homeomorphism if every point x ∈ X has an open neighborhood U whose image f(U) is open in Y and the restriction f|_U : U → f(U) is a homeomorphism (where the respective subspace topologies are used on U and on f(U)). Examples and sufficient conditions Local homeomorphisms versus homeomorphisms Every homeomorphism is a local homeomorphism. But a local homeomorphism is a homeomorphism if and only if it is bijective. A local homeomorphism need not be a homeomorphism. For example, the function f : R → S^1 defined by f(t) = (cos 2πt, sin 2πt) (so that geometrically, this map wraps the real line around the circle) is a local homeomorphism but not a homeomorphism. The map g : S^1 → S^1 defined by g(z) = z^n, which wraps the circle around itself n times (that is, has winding number n), is a local homeomorphism for all non-zero n, but it is a homeomorphism only when it is bijective (that is, only when n = 1 or n = −1). Generalizing the previous two examples, every covering map is a local homeomorphism; in particular, the universal cover p : C → X of a space X is a local homeomorphism. In certain situations the converse is true. For example: if p : X → Y is a proper local homeomorphism between two Hausdorff spaces and if Y is also locally compact, then p is a covering map. Local homeomorphisms and composition of functions The composition of two local homeomorphisms is a local homeomorphism; explicitly, if f : X → Y and g : Y → Z are local homeomorphisms then the composition g ∘ f : X → Z is also a local homeomorphism. The restriction of a local homeomorphism to any open subset of the domain will again be a local homeomorphism; explicitly, if f : X → Y is a local homeomorphism then its restriction f|_U to any open subset U of X is also a local homeomorphism. If f : X → Y is continuous while both g : Y → Z and g ∘ f : X → Z are local homeomorphisms, then f is also a local homeomorphism. Inclusion maps If S ⊆ X is any subspace (where as usual, S is equipped with the subspace topology induced by X) then the inclusion map ι : S → X is always a topological embedding. But it is a local homeomorphism if and only if S is open in X. The subset S being open in X is essential for the inclusion map to be a local homeomorphism, because the inclusion map of a non-open subset of X never yields a local homeomorphism (since it will not be an open map). The restriction f|_S of a function f : X → Y to a subset S ⊆ X is equal to its composition with the inclusion map ι : S → X; explicitly, f|_S = f ∘ ι. Since the composition of two local homeomorphisms is a local homeomorphism, if f : X → Y and ι : S → X are local homeomorphisms then so is f|_S = f ∘ ι. Thus restrictions of local homeomorphisms to open subsets are local homeomorphisms. Invariance of domain Invariance of domain guarantees that if f : U → R^n is a continuous injective map from an open subset U of R^n, then f(U) is open in R^n and f : U → f(U) is a homeomorphism. 
Consequently, a continuous map f : U → R^n from an open subset U ⊆ R^n will be a local homeomorphism if and only if it is a locally injective map (meaning that every point in U has a neighborhood such that the restriction of f to it is injective). Local homeomorphisms in analysis It is shown in complex analysis that a complex analytic function f : U → C (where U is an open subset of the complex plane C) is a local homeomorphism precisely when the derivative f′(z) is non-zero for all z ∈ U. The function f(z) = z^n on an open disk around 0 is not a local homeomorphism at 0 when n ≥ 2. In that case 0 is a point of "ramification" (intuitively, n sheets come together there). Using the inverse function theorem one can show that a continuously differentiable function f : U → R^n (where U is an open subset of R^n) is a local homeomorphism if the derivative Df(x) is an invertible linear map (invertible square matrix) for every x ∈ U. (The converse is false, as shown by the local homeomorphism f : R → R with f(x) = x^3.) An analogous condition can be formulated for maps between differentiable manifolds. A small numerical illustration of ramification appears below. 
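To see the ramification statement concretely, here is a minimal numerical sketch (the sample points and radii are arbitrary illustrative choices): squaring identifies antipodal points in every neighborhood of 0, so no neighborhood of 0 maps injectively, while near z0 = 1, where the derivative is non-zero, sampled points never collide.

# z -> z**2 is not locally injective at the ramification point 0:
# every neighborhood of 0 contains pairs +w, -w with the same square.
eps = 1e-6
assert complex(eps, 0) ** 2 == complex(-eps, 0) ** 2

# Away from 0 the derivative 2z is non-zero and squaring is injective on a
# small disk: z**2 == w**2 forces z == -w, impossible when Re(z), Re(w) > 0.
import random

random.seed(0)
pts = [complex(1 + random.uniform(-0.1, 0.1), random.uniform(-0.1, 0.1))
       for _ in range(200)]
images = {z * z for z in pts}
assert len(images) == len(pts)  # no collisions: injective near z0 = 1
print("z**2 folds every neighborhood of 0, but is injective near z0 = 1")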
Local homeomorphisms and fibers Suppose π : X → Y is a continuous open surjection between two Hausdorff second-countable spaces where X is a Baire space and Y is a normal space. If every fiber of π is a discrete subspace of X (which is a necessary condition for π to be a local homeomorphism) then π is a local homeomorphism on a dense open subset of X. To clarify this statement's conclusion, let U be the (unique) largest open subset of X such that π|_U is a local homeomorphism. If every fiber of π is a discrete subspace of X then this open set U is necessarily a dense subset of X. In particular, if X ≠ ∅ then U ≠ ∅; a conclusion that may be false without the assumption that π's fibers are discrete (see this footnote for an example). One corollary is that every continuous open surjection between completely metrizable second-countable spaces that has discrete fibers is "almost everywhere" a local homeomorphism (in the topological sense that U is a dense open subset of its domain). For example, the map π : R → [0, ∞) defined by the polynomial π(x) = x^2 is a continuous open surjection with discrete fibers, so this result guarantees that the maximal open subset U is dense in R; with additional effort (using the inverse function theorem for instance), it can be shown that U = R ∖ {0}, which confirms that this set is indeed dense in R. This example also shows that it is possible for U to be a proper dense subset of π's domain. Because every fiber of every non-constant polynomial is finite (and thus a discrete, and even compact, subspace), this example generalizes to such polynomials whenever the mapping induced by it is an open map. Local homeomorphisms and Hausdorffness There exist local homeomorphisms f : X → Y where Y is a Hausdorff space but X is not. Consider for instance the quotient space X = (R ⊔ R)/∼, where the equivalence relation ∼ on the disjoint union of two copies of the reals identifies every negative real of the first copy with the corresponding negative real of the second copy. The two copies of 0 are not identified and they do not have any disjoint neighborhoods, so X is not Hausdorff. One readily checks that the natural map f : X → R is a local homeomorphism. The fiber f⁻¹(y) has two elements if y ≥ 0 and one element if y < 0. Similarly, it is possible to construct a local homeomorphism f : X → Y where X is Hausdorff and Y is not: pick the natural map from R ⊔ R to X with the same equivalence relation ∼ as above. Properties A map is a local homeomorphism if and only if it is continuous, open, and locally injective. In particular, every local homeomorphism is a continuous and open map. A bijective local homeomorphism is therefore a homeomorphism. Whether or not a function f : X → Y is a local homeomorphism depends on its codomain. The image f(X) of a local homeomorphism f : X → Y is necessarily an open subset of its codomain Y, and f : X → f(X) will also be a local homeomorphism (that is, f will continue to be a local homeomorphism when it is considered as the surjective map onto its image, where f(X) has the subspace topology inherited from Y). However, in general it is possible for f : X → f(X) to be a local homeomorphism but for f : X → Y to not be a local homeomorphism (as is the case with the map f : R → R^2 defined by f(x) = (x, 0), for example). A map f : X → Y is a local homeomorphism if and only if f : X → f(X) is a local homeomorphism and f(X) is an open subset of Y. Every fiber of a local homeomorphism f : X → Y is a discrete subspace of its domain X. A local homeomorphism f : X → Y transfers "local" topological properties in both directions: X is locally connected if and only if f(X) is; X is locally path-connected if and only if f(X) is; X is locally compact if and only if f(X) is; X is first-countable if and only if f(X) is. As pointed out above, the Hausdorff property is not local in this sense and need not be preserved by local homeomorphisms. The local homeomorphisms with codomain Y stand in a natural one-to-one correspondence with the sheaves of sets on Y; this correspondence is in fact an equivalence of categories. Furthermore, every continuous map with codomain Y gives rise to a uniquely defined local homeomorphism with codomain Y in a natural way. All of this is explained in detail in the article on sheaves. Generalizations and analogous concepts The idea of a local homeomorphism can be formulated in geometric settings different from that of topological spaces. For differentiable manifolds, we obtain the local diffeomorphisms; for schemes, we have the formally étale morphisms and the étale morphisms; and for toposes, we get the étale geometric morphisms. See also Notes Citations References Theory of continuous functions Functions and mappings General topology
Local homeomorphism
Mathematics
1,975
72,853,865
https://en.wikipedia.org/wiki/Cobalt%20laurate
Cobalt laurate is a metal-organic compound with the chemical formula Co(C11H23COO)2. It is classified as a metallic soap, i.e. a metal derivative of a fatty acid (lauric acid). Synthesis Cobalt laurate can be prepared by the reaction of aqueous solutions of cobalt(II) chloride (CoCl2) with sodium laurate (a balanced equation is sketched below). Physical properties Cobalt laurate forms dark violet crystals. It does not dissolve in water, but is soluble in alcohol. References Laurates Cobalt(II) compounds
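A balanced equation for this preparation, written in the style of the equations used elsewhere in this collection, is the following sketch; it assumes a simple salt metathesis, with laurate abbreviated C11H23COO: CoCl2 + 2 C11H23COONa → Co(C11H23COO)2 + 2 NaCl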
Cobalt laurate
Chemistry
103
44,542,513
https://en.wikipedia.org/wiki/Ethyl%20methyl%20cellulose
Ethyl methyl cellulose is a thickener, vegetable gum, foaming agent and emulsifier. Its E number is E465. Chemically, it is a derivative of cellulose with ethyl and methyl groups attached by ether linkages. It can be prepared by treatment of cellulose with dimethyl sulfate and ethyl chloride in the presence of an alkali. See also Ethyl cellulose Methyl cellulose References Cellulose Food additives Cellulose ethers
Ethyl methyl cellulose
Chemistry
105
53,596,792
https://en.wikipedia.org/wiki/FGLM%20algorithm
FGLM is one of the main algorithms in computer algebra, named after its designers, Faugère, Gianni, Lazard and Mora. They introduced their algorithm in 1993. The input of the algorithm is a Gröbner basis of a zero-dimensional ideal in the ring of polynomials over a field with respect to one monomial order, together with a second monomial order. As its output, it returns a Gröbner basis of the ideal with respect to the second ordering. The algorithm is a fundamental tool in computer algebra and has been implemented in most of the computer algebra systems (a usage sketch appears below). The complexity of FGLM is O(nD^3), where n is the number of variables of the polynomials and D is the degree of the ideal. There are several generalizations of, and various applications for, FGLM. References Computer algebra Commutative algebra Polynomials
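One concrete interface is SymPy, whose Gröbner bases expose an fglm conversion method; the ideal below is an illustrative example, not one from the original paper. The basis is first computed in the cheap graded reverse lexicographic order and then converted to the expensive lexicographic order.

# Usage sketch of FGLM via SymPy; fglm requires a zero-dimensional ideal.
from sympy import groebner, symbols

x, y = symbols('x y')
F = [x**2 - 3*y - x + 1, y**2 - 2*x + y - 1]

# Groebner basis in the grevlex order ...
G = groebner(F, x, y, order='grevlex')

# ... converted to the lex order by the FGLM algorithm.
print(G.fglm('lex'))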
FGLM algorithm
Mathematics,Technology
176
29,482,459
https://en.wikipedia.org/wiki/Discovery%20and%20development%20of%20proton%20pump%20inhibitors
Proton pump inhibitors (PPIs) block the gastric hydrogen potassium ATPase (H+/K+ ATPase) and inhibit gastric acid secretion. These drugs have emerged as the treatment of choice for acid-related diseases, including gastroesophageal reflux disease (GERD) and peptic ulcer disease. PPIs can also bind to other types of proton pumps, such as those that occur in cancer cells, and are finding applications in the reduction of cancer cell acid efflux and the reduction of chemotherapy drug resistance. History Evidence emerged by the end of the 1970s that the newly discovered proton pump (H+/K+ ATPase) in the secretory membrane of the parietal cell was the final step in acid secretion. Literature from anaesthetic screenings drew attention to the potential antiviral compound pyridylthioacetamide, which after further examination shifted the focus to an anti-secretory compound with an unknown mechanism of action called timoprazole. Timoprazole is a pyridylmethylsulfinyl benzimidazole and appealed due to its simple chemical structure and its surprisingly high level of anti-secretory activity. Substituted benzimidazoles were optimized and their antisecretory effects studied on the newly discovered proton pump, aiming at higher pKa values of the pyridine, thereby facilitating accumulation within the parietal cell and increasing the rate of acid-mediated conversion to the active intermediate. As a result of such optimization the first proton pump inhibiting drug, omeprazole, was released on the market. Other PPIs like lansoprazole and pantoprazole would follow in its footsteps, claiming their share of a flourishing market, after their own course of development. Basic structure PPIs can be divided into two groups based on their basic structure. Although all members have a substituted pyridine part, one group has it linked to various benzimidazoles, whereas the other has it linked to a substituted imidazopyridine. All marketed PPIs (omeprazole, lansoprazole, pantoprazole) are in the benzimidazole group. Proton pump inhibitors are prodrugs, and their actual inhibitory form is somewhat controversial. In acidic solution the prodrug rearranges to a sulfenic acid, which can react with one or more cysteines accessible from the luminal surface of the enzyme either directly or after cyclization to a tetracyclic sulfenamide. The sulfenamide is a planar molecule; thus any enantiomer of a PPI loses its stereospecificity upon activation. The effectiveness of these drugs derives from two factors: their target, the H+/K+ ATPase, is responsible for the last step in acid secretion; therefore, their action on acid secretion is independent of the stimulus to acid secretion, whether histamine, acetylcholine, or other yet-to-be-discovered stimulants. In addition, their mechanism of action involves covalent binding of the activated drug to the enzyme, resulting in a duration of action that exceeds their plasma half-life. The gastric ATPase Acid secretion by the human stomach results in a median diurnal pH of 1.4. This very large (>10^6-fold; a luminal pH of 1.4 against a cytoplasmic pH of about 7.4 corresponds to a concentration ratio of 10^(7.4 − 1.4) = 10^6) H+ gradient is generated by the gastric H+/K+ ATPase, which is an ATP-driven proton pump. Hydrolysis of one ATP molecule is used to catalyse the electroneutral exchange of two luminal potassium ions for two cytoplasmic protons through the gastric membrane. Structure The proton pump, H+/K+ ATPase, is an α,β-heterodimeric enzyme. The catalytic α subunit has ten transmembrane segments with a cluster of intramembranal carboxylic amino acids located in the middle of the transmembrane segments TM4, TM5, TM6 and TM8. 
The β subunit has one transmembrane segment with the N terminus in the cytoplasmic region. The extracellular domain of the β subunit contains six or seven N-linked glycosylation sites, which are important for the enzyme's assembly, maturation and sorting. Function The ion transport is accomplished by cyclical conformational changes of the enzyme between its two main reaction states, E1 and E2. The cytoplasmic-open E1 and luminal-open E2 states have high affinity for H+ and K+, respectively. The expulsion of the proton at 160 mM (pH 0.8) concentration results from movement of lysine 791 into the ion binding site in the E2P configuration. Discovery In 1975, timoprazole was found to inhibit acid secretion irrespective of stimulus, extracellular or intracellular. Studies on timoprazole revealed enlargement of the thyroid gland due to inhibition of iodine uptake, as well as atrophy of the thymus gland. A literature search showed that some substituted mercapto-benzimidazoles had no effect on iodine uptake, and introduction of such substituents into timoprazole resulted in an elimination of the toxic effects without reducing the antisecretory effect. A derivative of timoprazole, omeprazole, was discovered in 1979, and was the first of a new class of drugs that control acid secretion in the stomach, the proton pump inhibitors (PPIs). A 5-methoxy substituent was also added to the benzimidazole moiety of omeprazole, which gave the compound much more stability at neutral pH. In 1980, an Investigational New Drug (IND) application was filed, and omeprazole was taken into Phase III human trials in 1982. A new approach for the treatment of acid-related diseases was introduced, and omeprazole was quickly shown to be clinically superior to the histamine H2 receptor antagonists; it was launched in 1988 as Losec in Europe, and in 1990 as Prilosec in the United States. In 1996, Losec became the world's biggest-selling pharmaceutical ever, and by 2004 over 800 million patients had been treated with the drug worldwide. During the 1980s, about 40 other companies entered the PPI area, but few achieved market success: Takeda with lansoprazole, Byk Gulden (now Nycomed) with pantoprazole, and Eisai with rabeprazole, all of which were analogues of omeprazole. Development Pantoprazole The story of pantoprazole's discovery is a good example of the stepwise development of PPIs. The main focus of modification of timoprazole was the benzimidazole part of its structure. Addition of a trifluoromethyl group to the benzimidazole moiety led to a series of very active compounds with varying solution stability. In general, fluoro substituents were found to block metabolism at the point where they were attached. Later the more balanced fluoroalkoxy substituent, instead of the highly lipophilic and strongly electron-withdrawing trifluoromethyl substituent, led to highly active compounds with supposedly longer half-lives and higher solution stability. It was realized that activity was somehow linked to instability in solution, which led to the conclusion that the cyclic sulfenamides, formed in acidic conditions, were the active principle of the PPIs. Finally, it was understood that seemingly small alterations in the backbone of timoprazole led nowhere, and focus had to be centered on the substituents on the backbone. However, the necessary intramolecular rearrangement of the benzimidazole into the sulfenamide posed severe geometric constraints. Optimal compounds would be those that were stable at neutral pH but were quickly activated at low pH. 
A clear-cut design of active inhibitors was still not possible, because in the complex multi-step chemistry the influence of a substituent on each step in the cascade could be different, and therefore not predictable for the overall rate of the prerequisite acid activation. Smith Kline and French, which entered into collaboration with Byk Gulden in mid-1984, greatly assisted in determining criteria for further development. From 1985, the aim was to identify a compound with good stability at neutral pH, sustaining this higher level of stability down to pH 5 but being rapidly activatable at lower pHs, combined with a high level of H+/K+ ATPase inhibition. From the numerous already synthesized and tested compounds that fulfilled these criteria, the most promising candidates were pantoprazole and its salt, pantoprazole sodium. In 1986 pantoprazole sodium sesquihydrate was synthesized, and from 1987 onwards the development of pantoprazole was switched to the sodium salt, which is more stable and has better compatibility with other excipients used in the drug formulation. Pantoprazole was identified after nearly seven years of research and registered for clinical use after a further seven years of development, and finally reached its first market in 1994 in Germany. During the course of the studies on pantoprazole, more than 650 PPIs had been synthesized and evaluated. Pantoprazole met high selection criteria in its development process — especially concerning its favorably low potential for interaction with other drugs. The good solubility of pantoprazole and its very high solution stability allowed it to become the first marketed PPI for intravenous use in critical care patients. Esomeprazole Omeprazole showed inter-individual variability, and therefore a significant number of patients with acid-related disorders required higher or multiple doses to achieve symptom relief and healing. Astra started a new research program in 1987 to identify a new analogue of omeprazole with less interpatient variability. Only one compound proved superior to omeprazole, and that was the (S)-(−)-isomer, esomeprazole, which was developed as the magnesium salt. Esomeprazole magnesium (brand name Nexium) received its first approval in 2000 and provided more pronounced inhibition of acid secretion and less inter-patient variation compared to omeprazole. By 2004, Nexium had already been used to treat over 200 million patients. Benzimidazoles Omeprazole (brand names Losec, Prilosec, Zegerid, Ocid, Lomac, Omepral, Omez, Ultop, Ortanol, Gastrozol) Omeprazole was the first PPI on the market, in 1988. It is a 1:1 racemate drug with the backbone structure of timoprazole, but substituted with two methoxy and two methyl groups. One of the methoxy groups is at position 6 of the benzimidazole and the other at position 4 of the pyridine, and the methyl groups are at positions 3 and 5 of the pyridine. Omeprazole is available as enteric-coated tablets, capsules, chewable tablets, powder for oral suspensions and powder for intravenous injection. Lansoprazole (brand names: Prevacid, Zoton, Inhibitol, Levant, Lupizole, Lancid, Lansoptol, Epicur) Lansoprazole was the second of the PPI drugs to reach the market, being launched in Europe in 1991 and the US in 1995. It has no substitutions at the benzimidazole but two substituents on the pyridine: a methyl group at position 3 and a trifluoroethoxy group at position 4. The drug is a 1:1 racemate of the enantiomers dexlansoprazole and levolansoprazole. 
It is available in gastroresistant capsules and tablets as well as chewable tablets. Pantoprazole (brand names: Protonix, Somac, Pantoloc, Pantozol, Zurcal, Zentro, Pan, Nolpaza, Controloc, Sunpras) Pantoprazole was the third PPI and was introduced to the German market in 1994. It has a difluoroalkoxy side group on the benzimidazole part and two methoxy groups in positions 3 and 4 on the pyridine. Pantoprazole was first prepared in April 1985 by a small group of scale-up chemists. It is a dimethoxy-substituted pyridine bound to a fluoroalkoxy-substituted benzimidazole. Pantoprazole sodium is available as gastroresistant or delayed-release tablets and as lyophilized powder for intravenous use. Rabeprazole (brand names: Zechin, Rabecid, Nzole-D, AcipHex, Pariet, Rabeloc, Zulbex, Ontime, Noflux) Rabeprazole is a novel benzimidazole compound, on the market since 1999 in the USA. It is similar to lansoprazole in having no substituents on its benzimidazole part and a methyl group at site 3 on the pyridine; the only difference is the methoxypropoxy substitution at site 4 instead of the trifluoroethoxy group of lansoprazole. Rabeprazole is marketed as the rabeprazole sodium salt. It is available as enteric-coated tablets. Esomeprazole (brand names: Nexium, Esotrex, Emanera, Neo-Zext) In 2001 esomeprazole was launched in the USA, as a follow-up to omeprazole's patent. Esomeprazole is the (S)-(−)-enantiomer of omeprazole and provides higher bioavailability and improved efficacy, in terms of stomach acid control, over the (R)-(+)-enantiomer of omeprazole. In theory, by using pure esomeprazole the effects on the proton pump will be equal in all patients, eliminating the "poor metabolizer effect" of the racemate omeprazole. It is available as delayed-release capsules or tablets and as esomeprazole sodium for intravenous injection/infusion. Oral esomeprazole preparations are enteric-coated, due to the rapid degradation of the drug in the acidic conditions of the stomach. This is achieved by formulating capsules using the multiple-unit pellet system. Although the (S)-(−)-isomer is more potent in humans, the (R)-(+)-isomer is more potent in tests in rats, while the enantiomers are equipotent in dogs. Dexlansoprazole (brand names: Kapidex, Dexilant) Dexlansoprazole was launched as a follow-up to lansoprazole in 2009. Dexlansoprazole is the (R)-(+)-enantiomer of lansoprazole, marketed as Dexilant. After oral administration of the racemic lansoprazole, the circulating drug is 80% dexlansoprazole. Moreover, both enantiomers have similar effects on the proton pump. Consequently, the main advantage of Dexilant is not the fact that it is an enantiopure substance. The advantage is the pharmaceutical formulation of the drug, which is based on a dual-release technology, with the first quick release producing a blood plasma peak concentration about one hour after application, and the second retarded release producing another peak about four hours later. Imidazopyridines Tenatoprazole Tenatoprazole (TU-199), an imidazopyridine proton pump inhibitor, is a novel compound that has been designed as a new chemical entity with a substantially prolonged plasma half-life (7 hours), but otherwise has similar activity to other PPIs. The difference in the structural backbone of tenatoprazole compared to the benzimidazole PPIs is its imidazo[4,5-b]pyridine moiety, which reduces the rate of metabolism, allowing a longer plasma residence time, but which also decreases the pKa of the fused imidazole N as compared to the current PPIs. 
Tenatoprazole has the same substituents as omeprazole: the methoxy groups at position 6 on the imidazopyridine and at position 4 on the pyridine part, as well as two methyl groups at positions 3 and 5 on the pyridine. The bioavailability of tenatoprazole is double for the (S)-(−)-tenatoprazole sodium salt hydrate form when compared to the free form in dogs. This increased bioavailability is due to differences in the crystal structure and hydrophobic nature of the two forms, and therefore it is more likely to be marketed as the pure (S)-(−)-enantiomer. PPIs binding mode The disulfide binding of the inhibitor takes place in the luminal sector of the H+/K+ ATPase, where 2 mol of inhibitor are bound per 1 mol of active-site H+/K+ ATPase. All PPIs react with cysteine 813 in the loop between TM5 and TM6 on the H+/K+ ATPase, fixing the enzyme in the E2 configuration. Omeprazole reacts with cysteines 813 and 892. Rabeprazole binds to cysteine 813 and both 892 and 321. Lansoprazole reacts with cysteine 813 and cysteine 321, whereas pantoprazole and tenatoprazole react with cysteines 813 and 822. Reaction with cysteine 822 confers a rather special property on the covalently inhibited enzyme, namely irreversibility to reducing agents. The likely first step is binding of the prodrug, protonated on the pyridine of the compound, with cysteine 813. Then the second proton is added with acid transport by the H+/K+ ATPase, and the compound is activated. Recent data suggest the hydrated sulfenic acid to be the reactive species, forming directly from the mono-protonated benzimidazole bound on the surface of the pump. Saturation of the gastric ATPase Even though consumption of food stimulates acid secretion and acid secretion activates PPIs, PPIs cannot inhibit all pumps. About 70% of the pump enzyme is inhibited, as PPIs have a short half-life and not all pump enzymes are activated. It takes about 3 days to reach steady-state inhibition of acid secretion, as a balance is struck between covalent inhibition of active pumps, subsequent stimulation of inactive pumps after the drug has been eliminated from the blood, and de novo synthesis of new pumps. Clinical pharmacology Although the drugs omeprazole, lansoprazole, pantoprazole, and rabeprazole share a common structure and mode of action, each differs somewhat in its clinical pharmacology. Differing pyridine and benzimidazole substituents result in small but potentially significant differences in physical and chemical properties. Direct comparison of pantoprazole sodium with other anti-secretory drugs showed that it was significantly more effective than H2-receptor antagonists and either equivalent or better than other clinically used PPIs. Another study states that rabeprazole undergoes activation over a greater pH range than omeprazole, lansoprazole, and pantoprazole, and converts to the sulphenamide form more rapidly than any of these three drugs. Most oral PPI preparations are enteric-coated, due to the rapid degradation of the drugs in the acidic conditions of the stomach. For example, omeprazole is unstable in acid with a half-life of 2 min at pH 1–3, but is significantly more stable at pH 7 (half-life ca. 20 h). The acid-protective coating prevents conversion to the active principle in the lumen of the stomach, where it would react with any available sulfhydryl group in food and would not penetrate to the lumen of the secretory canaliculus. The oral bioavailability of PPIs is high: 77% for pantoprazole, 80–90% for lansoprazole and 89% for esomeprazole. 
All the PPIs except tenatoprazole are rapidly metabolized in the liver by CYP enzymes, mostly by CYP2C19 and CYP3A4. PPIs are sensitive to CYP enzymes and have different pharmacokinetic profiles. Studies comparing the efficacy of PPIs indicate that esomeprazole and tenatoprazole give stronger acid suppression, maintaining intragastric pH above 4 for a longer period. Studies of the effect of tenatoprazole on acid secretion in in vivo animal models, such as pylorus-ligated rats and acute gastric fistula rats, demonstrated a 2- to 4-fold more potent inhibitory activity compared with omeprazole. A more potent inhibitory activity was also shown in several models of induced gastric lesions. In healthy Asian as well as Caucasian subjects, tenatoprazole exhibited a seven-fold longer half-life than the existing H+/K+ ATPase inhibitors. It is thus hypothesized that a longer half-life results in a more prolonged inhibition of gastric acid secretion, especially during the night. A strong relationship has been reported between the degree and duration of gastric acid inhibition, as measured by monitoring of the 24-hour intragastric pH in pharmacodynamic studies, and the rate of healing and symptom relief. A clinical study showed that the duration of nocturnal acid breakthrough was significantly shorter for 40 mg of tenatoprazole than for 40 mg of esomeprazole, with the conclusion that tenatoprazole was significantly more potent than esomeprazole during the night. However, the therapeutic relevance of this pharmacological advantage deserves further study. PPIs have been used successfully in triple-therapy regimens with clarithromycin and amoxicillin for the eradication of Helicobacter pylori, with no significant difference between different PPI-based regimens. Future research and new generations of PPIs Potassium-competitive acid blockers or acid pump antagonists Although PPIs have revolutionized the treatment of GERD, there is still room for improvement in the speed of onset of acid suppression, in a mode of action that is independent of an acidic environment, and in better inhibition of the proton pump. Therefore, a new class of agents, potassium-competitive acid blockers (P-CABs) or acid pump antagonists (APAs), has been under development in recent years and will most likely be the next generation of drugs that suppress gastric acid secretion. These new agents inhibit, in a reversible and competitive fashion, the final step in gastric acid secretion with respect to K+ binding to the parietal cell gastric H+/K+ ATPase. That is, they block the action of the H+/K+ ATPase by binding to or near the site of the K+ channel. Since the binding is competitive and reversible, these agents have the potential to achieve faster inhibition of acid secretion and a longer duration of action compared to PPIs, resulting in quicker symptom relief and healing. The imidazopyridine-based compound SCH28080 was the prototype of this class, but turned out to be hepatotoxic. Newer agents that are currently in development include CS-526, linaprazan, soraprazan and revaprazan, of which the last has reached clinical trials. Studies remain to determine whether these or other related compounds can become useful. In June 2006, Yuhan obtained approval from the Korean FDA for the use of revaprazan (brand name Revanex) in the treatment of gastritis. Vonoprazan is a newer agent with a faster and longer-lasting action; it was first marketed in Japan, then in Russia, and in 2023 was approved for use in the US. 
It is still being trialed in the UK. See also Digestion Stomach Gastric acid Gastroesophageal reflux disease Hydrogen potassium ATPase Proton pump inhibitor References Proton-pump inhibitors Gastroenterology
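Returning to the half-life comparison above, a rough worked example shows why a seven-fold longer half-life matters for nocturnal coverage. For a drug eliminated by first-order kinetics, the time its plasma concentration stays above an effective threshold grows linearly with the half-life: t_above = t_half × log2(C0/Cmin). The peak-to-threshold ratio used below is an assumed illustrative number, not a value from the cited studies.

```python
import math

def hours_above_threshold(t_half_h: float, peak_to_threshold: float) -> float:
    """Time a first-order-eliminated drug stays above a threshold,
    given its half-life and the peak/threshold concentration ratio."""
    return t_half_h * math.log2(peak_to_threshold)

RATIO = 8.0  # assumed ratio of peak concentration to effective threshold
print(hours_above_threshold(1.0, RATIO))  # ~3 h for a ~1 h half-life benzimidazole PPI
print(hours_above_threshold(7.0, RATIO))  # ~21 h for tenatoprazole's 7 h half-life
```

On these assumptions, plasma coverage stretches from a few daytime hours to most of a 24-hour period, which is the intuition behind the shorter nocturnal acid breakthrough reported for tenatoprazole.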
Discovery and development of proton pump inhibitors
Chemistry,Biology
5,147
72,165,192
https://en.wikipedia.org/wiki/Upside-down%20painting
Most paintings are intended to be hung in a precise orientation, defining an upper part and a lower part. Some paintings are displayed upside down, sometimes by mistake, when the image does not represent an easily recognizable oriented subject and lacks a signature, or by a deliberate decision of the exhibitor. Examples New York City I, an unfinished 1941 version of New York City (a 1942 oil by Piet Mondrian), was hung upside-down in 1945 at the MoMA of New York and since 1980 at the Kunstsammlung Nordrhein-Westfalen. After the mistake was discovered in 2022, the painting's orientation was not corrected, to avoid damage. Le Bateau, a paper-cut by Henri Matisse, depicts a ship reflected in the water. It hung upside down at MoMA for 47 days in 1961. Georgia O'Keeffe's The Lawrence Tree (1929) depicts a tree from its foot. It hung upside down in 1931 and between 1979 and 1989. Her Oriental Poppies hung upside down for 30 years at the Weisman Art Museum of the University of Minnesota. Vincent van Gogh's Long Grass with Butterflies spent two weeks inverted at the National Gallery in London. Salvador Dalí's Four Fishermen's Wives in Cadaquès was upside down at the Metropolitan Museum of New York. Pablo Picasso's 1912 drawing The Fiddler was upside down at the Reina Sofía Museum in Madrid; the representations of the head and the fiddle were confused. Josep Amorós's portrait of Philip V of Spain hangs upside down at the Museu de l'Almodí in Xàtiva, Spain. The king ordered the burning of Xàtiva in 1707, during the War of the Spanish Succession. Georg Baselitz used a painting by Louis-Ferdinand von Rayski, Wermsdorf Woods, as a model in order to paint his first picture with an inverted motif: The Wood On Its Head (1969). By inverting his paintings, the artist is able to emphasize the organisation of colours and form and confront the viewer with the picture's surface rather than the personal content of the image. In this sense, the paintings are empty and not subject to interpretation. Instead, one can only look at them. When both orientations are valid Some works display rotational symmetry or are ambiguous figures that allow both orientations to be meaningful. Giuseppe Arcimboldo painted several works that are still lifes in one orientation and related portraits in the other. See also Spolia (fragments of sculpture and architecture recycled in new buildings) may not be in the original orientation, for ideological or pragmatic reasons. An example is the blocks in the shape of a Medusa head reused as column bases in the Basilica Cistern of Constantinople. Pittura infamante, a genre depicting enemies hanging from their feet. 🔝, a symbol to show the top side of an object. Denny Dent, an artist who sometimes painted upside-down portraits on stage before turning the canvas right-side-up for the audience References Rotation Painting Visual arts exhibitions
Upside-down painting
Physics
599
56,909,364
https://en.wikipedia.org/wiki/Isethionates
Isethionates are esters of long-chain aliphatic carboxylic acids (C8–C18) with isethionic acid (2-hydroxyethanesulfonic acid) or salts thereof, such as ammonium isethionate or sodium isethionate. They are also referred to as acyl isethionates or acyloxyethanesulfonates. Like the taurides, isethionates are a class of particularly mild anionic surfactants which, unlike ordinary soaps, retain their washing-active properties even in hard water. Isethionates are obtained on an industrial scale by reacting mixtures of carboxylic acids with salts of isethionic acid under acid catalysis, e.g. with methanesulfonic acid. The mixtures of carboxylic acids are obtained from the hydrolysis of animal fats (tallow) or vegetable oils, preferably coconut oil, but also palm oil, soybean oil or castor oil. Isethionates are solids which are often mixed with fatty acids (up to 30% by weight) to lower their freezing point. Despite its low water solubility (100 ppm at 25 °C), the lower-priced sodium cocoyl isethionate has found more widespread use than its readily water-soluble ammonium salt (> 25 wt% at 25 °C). To solubilize the sparingly soluble isethionates and taurides, the formation of mixtures with amphoteric surfactants (such as cocamidopropyl betaine) has been proposed. From such mixtures it is possible to prepare clear, transparent aqueous concentrates which are liquid at room temperature. Isethionates are characterized by excellent skin compatibility, excellent foaming (even in hard water), good cleansing properties and a pleasant skin feel. They are non-toxic and readily biodegradable. However, in contrast to the taurides, they are not stable in the long term outside a pH range of 5 to 8. Isethionates are used in solid soaps (so-called syndet bars) and in other personal care products such as lotions, washing and shower gels, shampoos, liquid soaps, shaving creams, and other cosmetic and dermatological preparations. List References Literature Wilfried Umbach (ed.), Kosmetik und Hygiene von Kopf bis Fuß, Wiley-VCH Verlag GmbH & Co. KGaA, 3rd fully revised and expanded edition (27 July 2012), Sulfonic acids Carboxylate esters Surfactants
Isethionates
Chemistry
570
1,042,649
https://en.wikipedia.org/wiki/History%20of%20the%20bicycle
Vehicles that have two wheels and require balancing by the rider date back to the early 19th century. The first means of transport making use of two wheels arranged consecutively, and thus the archetype of the bicycle, was the German draisine, dating back to 1817. The term bicycle was coined in France in the 1860s, and the descriptive title "penny farthing", used to describe an "ordinary bicycle", is a 19th-century term. Earliest unverified bicycle There are several early claims regarding the invention of the bicycle, but many remain unverified. A sketch from around 1500 AD is attributed to Gian Giacomo Caprotti, a pupil of Leonardo da Vinci, but it was described by Hans-Erhard Lessing in 1998 as a purposeful fraud, a description now generally accepted. However, the authenticity of the bicycle sketch is still vigorously maintained by followers of Augusto Marinoni, a lexicographer and philologist, who was entrusted by the Commissione Vinciana of Rome with the transcription of Leonardo's Codex Atlanticus. Later, and equally unverified, is the contention that a certain "Comte de Sivrac" developed a célérifère in 1792, demonstrating it at the Palais-Royal in France. The célérifère supposedly had two wheels set on a rigid wooden frame and no steering, directional control being limited to that attainable by leaning. A rider was said to have sat astride the machine and pushed it along using alternate feet. It is now thought that the two-wheeled célérifère never existed (though there were four-wheelers) and that it was instead a misinterpretation by the well-known French journalist Louis Baudry de Saunier in 1891. In Japan, a pedal-powered tricycle called '陸舟奔車 (Rikushu-honsha)' was described in '新製陸舟奔車之記 (Records of a Newly Made Rikushu-honsha)' (owned by the Hikone Public Library, Hikone, Japan), written in 1732 by 平石久平次時光 (Hiraishi Kuheiji Tokimitsu) (1696–1771), a retainer of the Hikone domain. However, it was not developed further, and the practical use of bicycles in Japan did not occur until modern bicycles were imported from Europe. 19th century 1817 to 1819: The Draisine or Velocipede The first verifiable claim for a practically used bicycle belongs to the German Baron Karl von Drais Sauerbronn, a civil servant to the Grand Duke of Baden in Germany. Drais invented his Laufmaschine (German for "running machine") in 1817, which was called the Draisine (English) or draisienne (French) by the press. Karl von Drais patented this design in 1818; it was the first commercially successful two-wheeled, steerable, human-propelled machine, commonly called a velocipede, and nicknamed hobby-horse or dandy horse. It was initially manufactured in Germany and France. Hans-Erhard Lessing (Drais's biographer) found from circumstantial evidence that Drais's interest in finding an alternative to the horse was prompted by the starvation and death of horses caused by the crop failure of 1816, the Year Without a Summer (following the volcanic eruption of Tambora in 1815). On his first reported ride from Mannheim on June 12, 1817, he covered 13 km (eight miles) in less than an hour. Constructed almost entirely of wood, the draisine weighed 22 kg (48 pounds), had brass bushings within the wheel bearings, iron-shod wheels, a rear-wheel brake and 152 mm (6 inches) of trail on the front wheel for a self-centering caster effect. This design was welcomed by mechanically minded men daring to balance, and several thousand copies were built and used, primarily in Western Europe and in North America. 
Its popularity rapidly faded when, partly due to increasing numbers of accidents, some city authorities began to prohibit its use. However, in 1866 in Paris a Chinese visitor named Bin Chun could still observe foot-pushed velocipedes. The Draisine is regarded as the first bicycle, and Karl von Drais is seen as the "father of the bicycle". The concept was picked up by a number of British cartwrights; the most notable was Denis Johnson of London, who announced in late 1818 that he would sell an improved model. Johnson called his machine a pedestrian curricle or velocipede, but the public preferred nicknames like "hobby-horse", after the children's toy, or, worse still, "dandyhorse", after the foppish men, then called dandies, who often rode them. Johnson's machine was an improvement on Drais's, being notably more elegant: his wooden frame had a serpentine shape instead of Drais's straight one, allowing the use of larger wheels without raising the rider's seat, but it was still the same design. During the summer of 1819, the "hobby-horse", thanks in part to Johnson's marketing skills and better patent protection, became the craze and fashion in London society. The dandies, the Corinthians of the Regency, adopted it, and therefore the poet John Keats referred to it as "the nothing" of the day. Riders wore out their boots surprisingly rapidly, and the fashion ended within the year, after riders on pavements (sidewalks) were fined two pounds. 1820s to 1850s: An Era of 3- and 4-Wheelers The intervening decades of the 1820s–1850s witnessed many developments concerning human-powered vehicles, often using technologies similar to the draisine, even if the idea of a workable two-wheel design, requiring the rider to balance, had been dismissed. These new machines had three wheels (tricycles) or four (quadracycles) and came in a very wide variety of designs, using pedals, treadles, and hand-cranks, but these designs often suffered from high weight and high rolling resistance. However, Willard Sawyer in Dover successfully manufactured a range of treadle-operated 4-wheel vehicles and exported them worldwide in the 1850s. 1830s: The Reported Scottish Inventions The first mechanically propelled two-wheel vehicle is believed by some to have been built by Kirkpatrick Macmillan, a Scottish blacksmith, in 1839. A nephew later claimed that his uncle developed a rear-wheel-drive design using mid-mounted treadles connected by rods to a rear crank, similar to the transmission of a steam locomotive. Proponents associate him with the first recorded instance of a bicycling traffic offense, when a Glasgow newspaper reported in 1842 an accident in which an anonymous "gentleman from Dumfries-shire... bestride a velocipede... of ingenious design" knocked over a pedestrian in the Gorbals and was fined five shillings. However, the evidence connecting this with Macmillan is weak, since it is unlikely that the artisan Macmillan would have been termed a gentleman, nor is the report clear on how many wheels the vehicle had. A similar machine was said to have been produced by Gavin Dalzell of Lesmahagow, circa 1845. There is no record of Dalzell ever having laid claim to inventing the machine. It is believed that he copied the idea, having recognized its potential to help him with his local drapery business, and there is some evidence that he used the contraption to take his wares into the rural community around his home. A replica still exists today in the Riverside Museum in Glasgow. 
The museum holds the honor of exhibiting the oldest bike in existence today. The first documented producer of rod-driven two-wheelers, treadle bicycles, was Thomas McCall, of Kilmarnock, in 1869. The design was inspired by the French front-crank velocipede of the Lallement/Michaux type. 1853 and the invention of the first bicycle with a pedal crank, the "Tretkurbelfahrrad", by Philipp Moritz Fischer Philipp Moritz Fischer, who had used the draisine to get to school from the age of 9, invented the pedal crank in 1853. After years of living all over Europe, he left London to go back to his native town of Schweinfurt, Bavaria, when his first son died at a young age. He built the very first bicycle with pedals in 1853; however, he did not make the invention public. The Tretkurbelfahrrad from 1853 is still preserved and is on public display in the municipal museum in Schweinfurt. 1860s and the Michaux "Velocipede", aka "Boneshaker" The first widespread and commercially successful design was French. An example is at the Canada Science and Technology Museum, in Ottawa, Ontario. Initially developed around 1863, it briefly sparked a fashionable craze during 1868–70. Its design was simpler than the Macmillan bicycle; it used rotary cranks and pedals mounted to the front wheel hub. Pedaling made it easier for riders to propel the machine at speed, but the rotational speed limitation of this design created stability and comfort concerns which would lead to the large front wheel of the "penny farthing". It was difficult to pedal the wheel that was used for steering. The use of metal frames reduced the weight and provided sleeker, more elegant designs, and also allowed mass production. Different braking mechanisms were used depending on the manufacturer. In England, the velocipede earned the name of "bone-shaker" because of its rigid frame and iron-banded wheels that resulted in a "bone-shaking experience for riders". The velocipede's renaissance began in Paris during the late 1860s. Its early history is complex and has been shrouded in some mystery, not least because of conflicting patent claims: all that can be stated for sure is that a French metalworker attached pedals to the front wheel; at present, the earliest year bicycle historians agree on is 1864. The identity of the person who attached the cranks is still an open question at International Cycling History Conferences (ICHC). The claims of Ernest Michaux and of Pierre Lallement, and the lesser claims of rear-pedaling Alexandre Lefebvre, have their supporters within the ICHC community. Bicycle historian David V. Herlihy documents that Lallement claimed to have created the pedal bicycle in Paris in 1863. He had seen someone riding a draisine in 1862, and then came up with the idea of adding pedals to it. It is a fact that he filed the earliest and only patent for a pedal-driven bicycle, in the US in 1866. Lallement's patent drawing shows a machine which looks exactly like Johnson's draisine, but with the pedals and rotary cranks attached to the front wheel hub, and a thin piece of iron over the top of the frame to act as a spring supporting the seat, for a slightly more comfortable ride. By the early 1860s, the blacksmith Pierre Michaux, besides producing parts for the carriage trade, was producing "vélocipède à pédales" on a small scale. The wealthy Olivier brothers Aimé and René were students in Paris at this time, and these shrewd young entrepreneurs adopted the new machine. 
In 1865 they travelled from Paris to Avignon on a velocipede in only eight days. They recognized the potential profitability of producing and selling the new machine. Together with their friend Georges de la Bouglise, they formed a partnership with Pierre Michaux, Michaux et Cie ("Michaux and company"), in 1868, avoiding use of the Olivier family name and staying behind the scenes, lest the venture prove to be a failure. This was the first company which mass-produced bicycles, replacing the early wooden frame with one made of two pieces of cast iron bolted together; otherwise, the early Michaux machines look exactly like Lallement's patent drawing. Together with a mechanic named Gabert in his hometown of Lyon, Aimé Olivier created a diagonal single-piece frame made of wrought iron which was much stronger, and as the first bicycle craze took hold, many other blacksmiths began forming companies to make bicycles using the new design. Velocipedes were expensive, and when customers soon began to complain about the Michaux serpentine cast-iron frames breaking, the Oliviers realized by 1868 that they needed to replace that design with the diagonal one which their competitors were already using; even so, the Michaux company continued to dominate the industry in its first years. On the new macadam-paved boulevards of Paris it was easy riding, although initially the machine still used what was essentially horse-coach technology. It was still called "velocipede" in France, but in the United States, the machine was commonly called the "bone-shaker". Later improvements included solid rubber tires and ball bearings. Lallement had left Paris in July 1865, crossed the Atlantic, settled in Connecticut and patented the velocipede, and the number of associated inventions and patents soared in the US. The popularity of the machine grew on both sides of the Atlantic, and by 1868–69 the velocipede craze was strong in rural areas as well. Even in a relatively small city such as Halifax, Nova Scotia, Canada, there were five velocipede rinks, and riding schools began opening in many major urban centers. Essentially, the velocipede was a stepping stone that created a market for bicycles, which led to the development of more advanced and efficient machines. However, the Franco-Prussian war of 1870 destroyed the velocipede market in France, and the "bone-shaker" enjoyed only a brief period of popularity in the United States, which ended by 1870. There is debate among bicycle historians about why it failed in the United States, but one explanation is that American road surfaces were much worse than European ones, and riding the machine on these roads was simply too difficult. Certainly another factor was that Calvin Witty had purchased Lallement's patent, and his royalty demands soon crippled the industry. The UK was the only place where the bicycle never fell completely out of favour. In 1869, William Van Anden of Poughkeepsie, New York, USA, invented the freewheel for the bicycle. His design placed a ratchet device in the hub of the front wheel (the driven wheel on the 'velocipede' designs of the time), which allowed the rider to propel himself forward without pedaling constantly. Initially, bicycle enthusiasts rejected the idea of a freewheel, believing that it would complicate the machine, which in their view was supposed to remain as simple as possible, without additional mechanisms. 
1870s: the high-wheel bicycle The high-wheel bicycle was the logical extension of the boneshaker, the front wheel enlarging to enable higher speeds (limited by the inside leg measurement of the rider), the rear wheel shrinking and the frame being made lighter. The Frenchman Eugène Meyer is now regarded by the ICHC as the father of the high bicycle, in place of James Starley. Meyer invented the wire-spoke tension wheel in 1869 and produced a classic high bicycle design until the 1880s. James Starley in Coventry added the tangent spokes and the mounting step to his famous bicycle named "Ariel". He is regarded as the father of the British cycling industry. Ball bearings, solid rubber tires and hollow-section steel frames became standard, reducing weight and making the ride much smoother. Depending on the rider's leg length, the front wheel could now have a diameter up to 60 in (1.5 m). Much later, when this type of bicycle was beginning to be replaced by a later design, it came to be referred to as the "ordinary bicycle" (while it was in common use, no such distinguishing adjective was needed, since there was then no other kind), and it was later nicknamed the "penny-farthing" in England (a penny representing the front wheel, and a coin smaller in size and value, the farthing, representing the rear). They were fast, but unsafe. The rider was high up in the air and traveling at a great speed. If he hit a bad spot in the road he could easily be thrown over the front wheel and be seriously injured (two broken wrists were common, in attempts to break a fall) or even killed. "Taking a header" (also known as "coming a cropper") was not at all uncommon. The rider's legs were often caught underneath the handlebars, so falling free of the machine was often not possible. The dangerous nature of these bicycles (as well as Victorian mores) made cycling the preserve of adventurous young men. The risk-averse, such as elderly gentlemen, preferred the more stable tricycles or quadracycles. In addition, women's fashion of the day made the "ordinary" bicycle inaccessible. Queen Victoria owned Starley's "Royal Salvo" tricycle, though there is no evidence she actually rode it. Although French and English inventors modified the velocipede into the high-wheel bicycle, the French were still recovering from the Franco-Prussian war, so English entrepreneurs put the high-wheeler on the English market, and the machine became very popular there, Coventry, Oxford, Birmingham and Manchester being the centers of the English bicycle industry (and of the arms and sewing machine industries, which had the necessary metalworking and engineering skills for bicycle manufacturing, as in Paris and St. Etienne, and in New England). Soon bicycles found their way across the English Channel. By 1875, high-wheel bicycles were becoming popular in France, though ridership expanded slowly. In 1877, Joseph Henry Hughes' provisional patent application was allowed, titled "Improvements in the bearings of bicycles and velocipedes or carriages". Hughes, a Birmingham local, described a ball-bearing race for bicycle and carriage wheels which allowed for initial adjustment of the system to ensure optimal contact between components, and for subsequent adjustments to compensate for wear from use. William Bown, already the successful owner of the Bown Manufacturing Company, persuaded Hughes to sell the rights to this patent to him. 
Having himself patented improvements to sewing machines and horse clippers, Bown also persuaded Hughes to join him on further bearing innovations for the next decade. This turned into the successful Aeolus brand of ball bearings, used in the first ball-race pedals and wheel bearings for bicycles and carriage wheels. In the United States, Bostonians such as Frank Weston started importing bicycles in 1877 and 1878, and Albert Augustus Pope started production of his "Columbia" high-wheelers in 1878, and gained control of nearly all applicable patents, starting with Lallement's 1866 patent. Pope lowered the royalty (licensing fee) previous patent owners had charged, and took his competitors to court over the patents. The courts supported him, and competitors either paid royalties ($10 per bicycle) or were forced out of business. There seems to have been no patent issue in France, where English bicycles still dominated the market. In 1880, G.W. Pressey invented the high-wheeler American Star Bicycle, whose smaller front wheel was designed to decrease the frequency of "headers". By 1884 high-wheelers and tricycles were relatively popular among a small group of upper-middle-class people in all three countries, the largest group being in England. Their use also spread to the rest of the world, chiefly because of the extent of the British Empire. Pope also introduced mechanization and mass production (later copied and adopted by Ford and General Motors), vertically integrated (also later copied and adopted by Ford), advertised aggressively (as much as ten percent of all advertising in U.S. periodicals in 1898 was by bicycle makers), promoted the Good Roads Movement (which had the side benefit of acting as advertising, and of improving sales by providing more places to ride), and litigated on behalf of cyclists. (It would, however, be Western Wheel Works of Chicago which would drastically reduce production costs, and thus prices, by introducing stamping into the production process in place of machining.) In addition, bicycle makers adopted the annual model change (later derided as planned obsolescence, and usually credited to General Motors), which proved very successful. Even so, bicycling remained the province of the urban well-to-do, and mainly men, until the 1890s, and was an example of conspicuous consumption. The safety bicycle and the bike bubble: 1880s and 1890s The development of the safety bicycle was arguably the most important change in the history of the bicycle. It shifted bicycles' use and public perception from being a dangerous toy for sporting young men to being an everyday transport tool for men and women of all ages. Aside from the obvious safety problems, the high-wheeler's direct front-wheel drive limited its top speed. One attempt to solve both problems, with a chain-driven front wheel, was the dwarf bicycle, exemplified by the Kangaroo. Inventors also tried a rear-wheel chain drive. Although Harry John Lawson invented a rear-chain-drive bicycle in 1879 with his "bicyclette", it still had a huge front wheel and a small rear wheel. Detractors called it "The Crocodile", and it failed in the market. John Kemp Starley, James Starley's nephew, produced the first successful "safety bicycle", the "Rover", in 1885, which he never patented. It featured a steerable front wheel that had significant caster, equally sized wheels and a chain drive to the rear wheel. 
Widely imitated, the safety bicycle completely replaced the high-wheeler in North America and Western Europe by 1890. Meanwhile, John Dunlop's reinvention of the pneumatic bicycle tire in 1888 had made for a much smoother ride on paved streets; the previous type was quite smooth-riding when used on the dirt roads common at the time. As with the original velocipede, safety bicycles had been much less comfortable than high-wheelers precisely because of the smaller wheel size, and frames were often buttressed with complicated bicycle suspension spring assemblies. The pneumatic tire made all of these obsolete, and frame designers found a diamond pattern to be the strongest and most efficient design. On 10 October 1899, Isaac R. Johnson, an African-American inventor, lodged his patent for a folding bicycle – the first with a recognisably modern diamond frame, the pattern still used in 21st-century bicycles. The chain drive improved comfort and speed, as the drive was transferred to the non-steering rear wheel and allowed for smooth, relaxed and injury-free pedaling (earlier designs that required pedaling the steering front wheel were difficult to pedal while turning, due to the misalignment of the rotational planes of leg and pedal). With easier pedaling, the rider could more easily turn corners. The pneumatic tire and the diamond frame improved rider comfort but do not form a crucial design or safety feature; a hard rubber tire on a bicycle is just as rideable but is bone-jarring. The frame design allows for lighter weight and simpler construction and maintenance, and hence a lower price. Most likely the first electric bicycle was built in 1897 by Hosea W. Libbey. In the middle of the decade, bicycles were one of the few areas of the economy where sales were growing despite a severe economic depression, leading hundreds of manufacturers to enter the business. This resulted in a downward spiral of market saturation, over-supply and intense price competition, eventually leading to the collapse of many manufacturers as the bicycle bubble burst. 20th century The roadster The ladies' version of the roadster's design was very much in place by the 1890s. It had a step-through frame rather than the diamond frame of the gentlemen's model, so that ladies, with their dresses and skirts, could easily mount and ride their bicycles, and it commonly came with a skirt guard to prevent skirts and dresses becoming entangled in the rear wheel and spokes. As with the gents' roadster, the frame was of steel construction, and the positioning of the frame and handlebars gave the rider a very upright riding position. Though they originally came with front spoon-brakes, technological advancements meant that later models were equipped with the much-improved coaster brakes or rod-actuated rim or drum brakes. The Dutch cycle industry grew rapidly from the 1890s onwards. Since by then it was the British who had the strongest and best-developed market in bike design, Dutch framemakers either copied their designs or imported bikes from England. In 1895, 85 percent of all bikes bought in the Netherlands were from Britain; the vestiges of that influence can still be seen in the solid, gentlemanly shape of a traditional Dutch bike even now. Though the ladies' version of the roadster largely fell out of fashion in England and many other Western nations as the 20th century progressed, it remains popular in the Netherlands; this is why some people refer to bicycles of this design as Dutch bikes. 
In Dutch the name of these bicycles is Omafiets ("grandma's bike"). Popularity in Europe, decline in US Cycling steadily became more important in Europe over the first half of the twentieth century, but it dropped off dramatically in the United States between 1900 and 1910. Automobiles became the preferred means of transportation. Over the 1920s, bicycles gradually came to be considered children's toys, and by 1940 most bicycles in the United States were made for children. In Europe cycling remained an adult activity, and bicycle racing, commuting, and "cyclotouring" were all popular activities. In addition, specialist bicycles for children appeared before 1916. From the early 20th century until after World War II, the roadster constituted most adult bicycles sold in the United Kingdom and in many parts of the British Empire. For many years after the advent of the motorcycle and automobile, they remained a primary means of adult transport. Major manufacturers in England were Raleigh and BSA, though Carlton, Phillips, Triumph, Rudge-Whitworth, Hercules, and Elswick Hopper also made them. Technical innovations Bicycles continued to evolve to suit the varied needs of riders. The derailleur developed in France between 1900 and 1910 among cyclotourists, and was improved over time. Only in the 1930s did European racing organizations allow racers to use gearing; until then they were forced to use a two-speed bicycle, whose rear wheel had a sprocket on either side of the hub. To change gears, the rider had to stop, remove the wheel, flip it around, and remount the wheel. When racers were allowed to use derailleurs, racing times immediately dropped. World War II Although multiple-speed bicycles were widely known by this time, most or all military bicycles used in the Second World War were single-speed. Bicycles were used by paratroopers during the war to help them with transportation, giving rise to the term "bomber bikes" for the bikes that US planes dropped for troops to use. The German Volksgrenadier units each had a battalion of bicycle infantry attached. The Invasion of Poland saw many bicycle-riding scouts in use, with each bicycle company using 196 bicycles and 1 motorcycle. By September 1939, there were 41 bicycle companies mobilized. During the Second Sino-Japanese War, Japan used around 50,000 bicycle troops. The Malayan Campaign saw many bicycles used. The Japanese confiscated bicycles from civilians due to the abundance of bicycles among the civilian population. Japanese bicycle troops were efficient in both speed and carrying capacity, as they could carry more equipment than a normal British soldier could. China and the Flying Pigeon The Flying Pigeon was at the forefront of the bicycle phenomenon in the People's Republic of China. The vehicle was the government-approved form of transport, and the nation became known as zixingche wang guo (自行车王国), the "Kingdom of Bicycles". A bicycle was regarded as one of the three "must-haves" of every citizen, alongside a sewing machine and a watch – essential items in life that also offered a hint of wealth. The Flying Pigeon bicycle became a symbol of an egalitarian social system that promised little comfort but a reliable ride through life. Throughout the 1960s and 1970s, the logo became synonymous with almost all bicycles in the country. 
The Flying Pigeon became the single most popular mechanized vehicle on the planet, becoming so ubiquitous that Deng Xiaoping, the post-Mao leader who launched China's economic reforms in the 1970s, defined prosperity as "a Flying Pigeon in every household". In the early 1980s, Flying Pigeon was the country's biggest bike manufacturer, selling 3 million cycles in 1986. Its 20-kilo black single-speed models were popular with workers; there was a waiting list of several years to get one, and even then buyers needed good guanxi (connections) in addition to the purchase cost, which was about four months' wages for most workers. North America: Cruiser vs. racer At mid-century there were two predominant bicycle styles for recreational cyclists in North America. Heavyweight cruiser bicycles, preferred by the typical (hobby) cyclist and featuring balloon tires, pedal-driven "coaster" brakes and only one gear, were popular for their durability, comfort, streamlined appearance, and a significant array of accessories (lights, bells, springer forks, speedometers, etc.). Lighter cycles, with hand brakes, narrower tires, and a three-speed hub gearing system, often imported from England, first became popular in the United States in the late 1950s. These comfortable, practical bicycles usually offered generator-powered headlamps, safety reflectors, kickstands, and frame-mounted tire pumps. In the United Kingdom, as in the rest of Europe, cycling was seen as less of a hobby, and lightweight but durable bikes had been preferred for decades. In the United States, the sports roadster was imported after World War II, and was known as the "English racer". It quickly became popular with adult cyclists seeking an alternative to the traditional youth-oriented cruiser bicycle. While the English racer was no racing bike, it was faster and better for climbing hills than the cruiser, thanks to its lighter weight, tall wheels, narrow tires, and internally geared rear hub. In the late 1950s, U.S. manufacturers such as Schwinn began producing their own "lightweight" versions of the English racer. In the late 1960s, Americans' increasing consciousness of the value of exercise, and later of the advantages of energy-efficient transportation, led to the American bike boom of the 1970s. Annual U.S. sales of adult bicycles doubled between 1960 and 1970, and doubled again between 1971 and 1975, the peak years of the adult cycling boom in the United States, eventually reaching nearly 17 million units. Most of these sales were to new cyclists, who overwhelmingly preferred models imitating popular European derailleur-equipped racing bikes, variously called sports models, sport/tourers, or simply ten-speeds, to the older roadsters with hub gears, which had remained much the same as they had been since the 1930s. These lighter bicycles, long used by serious cyclists and by racers, featured dropped handlebars, narrow tires, derailleur gears, five to fifteen speeds, and a narrow "racing" type saddle. By 1980, racing and sport/touring derailleur bikes dominated the market in North America. The fatbike was invented for off-road use in 1980. Europe In Britain, the utility roadster declined noticeably in popularity during the early 1970s, as a boom in recreational cycling caused manufacturers to concentrate on lightweight, affordable derailleur sport bikes, actually slightly modified versions of the racing bicycle of the era. In the early 1980s, the Swedish company Itera invented a new type of bicycle, made entirely of plastic. 
It was a commercial failure. In the 1980s, UK cyclists began to shift from road-only bicycles to all-terrain models such as the mountain bike. The mountain bike's sturdy frame and load-carrying ability gave it additional versatility as a utility bike, usurping the role previously filled by the roadster. By 1990, the roadster was almost dead; while annual UK bicycle sales reached an all-time record of 2.8 million, almost all of them were mountain and road/sport models. BMX bikes BMX bikes are specially designed bicycles that usually have 16- to 24-inch wheels (the norm being the 20-inch wheel). They originated in the state of California in the early 1970s, when teenagers imitated their motocross heroes on their bicycles. Children were racing standard road bikes off-road, around purpose-built tracks in the Netherlands. The 1971 motorcycle racing documentary On Any Sunday is generally credited with inspiring the movement nationally in the US. In the opening scene, kids are shown riding their Schwinn Sting-Rays off-road. It was not until the middle of the decade that the sport achieved critical mass, and manufacturers began creating bicycles designed specially for the sport. It has grown into an international sport with several different disciplines such as Freestyle, Racing, Street, and Flatland. Mountain bikes In 1981, the first mass-produced mountain bike appeared, intended for use off-pavement over a variety of surfaces. It was an immediate success, and examples flew off retailers' shelves during the 1980s, their popularity spurred by the novelty of all-terrain cycling and the increasing desire of urban dwellers to escape their surroundings via mountain biking and other extreme sports. These cycles featured sturdier frames, wider tires with large knobs for increased traction, a more upright seating position (to allow better visibility and shifting of body weight), and, increasingly, various front and rear suspension designs. By 2000, mountain bike sales had far outstripped those of racing, sport/racer, and touring bicycles. 21st century The 21st century has seen a continued application of technology to bicycles (which started in the 20th century): in designing them, building them, and using them. Bicycle frames and components continue to get lighter and more aerodynamic without sacrificing strength, largely through the use of computer-aided design, finite element analysis, and computational fluid dynamics. Recent discoveries about bicycle stability have been facilitated by computer simulations. Once designed, new technology is applied to manufacturing, such as hydroforming and automated carbon fiber layup. Finally, electronic gadgetry has expanded from just cyclocomputers to now include cycling power meters and electronic gear-shifting systems. Hybrid and commuter bicycles In recent years, bicycle designs have trended towards increased specialization, as the number of casual, recreational and commuter cyclists has grown. For these groups, the industry responded with the hybrid bicycle, sometimes marketed as a city bike, cross bike, or commuter bike. Hybrid bicycles combine elements of road racing and mountain bikes, though the term is applied to a wide variety of bicycle types. Hybrid bicycles and commuter bicycles can range from fast and light racing-type bicycles with flat bars and other minimal concessions to casual use, to wider-tired bikes designed primarily for comfort, load-carrying, and increased versatility over a range of different road surfaces. 
Enclosed hub gears have become popular again – now with up to 8, 11 or 14 gears – for such bicycles due to ease of maintenance and improved technology. Recumbent bicycle The recumbent bicycle was invented in 1893. In 1934, the Union Cycliste Internationale banned recumbent bicycles from all forms of officially sanctioned racing, at the behest of the conventional bicycle industry, after the relatively little-known Francis Faure beat world champion Henri Lemoine and broke Oscar Egg's hour record by half a mile while riding Charles Mochet's Velocar. Some authors assert that this resulted in the stagnation of the upright racing bike's frame geometry, which has remained essentially unchanged for 70 years. This stagnation finally started to reverse with the formation of the International Human Powered Vehicle Association, which holds races for "banned" classes of bicycle. Sam Whittingham set a human-powered speed record of 132 km/h (82 mph) on level ground in a faired recumbent streamliner in 2009 at Battle Mountain. While historically most bike frames have been steel, recent designs, particularly of high-end racing bikes, have made extensive use of carbon and aluminum frames. Recent years have also seen a resurgence of interest in balloon-tire cruiser bicycles for their low-tech comfort, reliability, and style. In addition to influences derived from the evolution of American bicycling trends, European, Asian and African cyclists have also continued to use traditional roadster bicycles, as their rugged design, enclosed chainguards, and dependable hub gearing make them ideal for commuting and utility cycling duty. See also Bicycling and feminism Bike boom, also known as "bicycle craze", a name used for several periods in cycling history Cyclability Hour record Timeline of transportation technology Electric bicycle References Further reading Bijker, Wiebe E. (1995). Of Bicycles, Bakelites, and Bulbs: Toward a Theory of Sociotechnical Change. Cambridge, Massachusetts: MIT Press. Cycle History vol. 1–24, Proceedings of the International Cycling History Conference (ICHC), 1990–2014. Friss, Evan. The Cycling City: Bicycles and Urban America in the 1890s (University of Chicago Press, 2015). x, 267 pp. Tony Hadland & Hans-Erhard Lessing: Bicycle Design – An Illustrated History. The MIT Press, Cambridge (USA), 2014. David Gordon Wilson: Bicycling Science, 3rd ed., 2004. David V. Herlihy: Bicycle – The History, 2004. Hans-Erhard Lessing: Automobilität – Karl Drais und die unglaublichen Anfänge, 2003 (in German). Pryor Dodge: The Bicycle, 1996 (French ed. 1996; German eds. 1997, 2002, 2007). How I Saved The British Empire. Reminiscences of a Bicycling Tour of Great Britain in the Year 1901, a novel released by Ailemo Books in July 2015. Author Michael Waldock. Library of Congress: 2015909543. External links International Cycling History Conference (ICHC) Karl-Drais memorial Karl Drais seen by ADFC Mannheim – Focus on events in Mannheim, being the place of his invention. A 3-page Drais biography is available in more than 15 languages. Menotomy Vintage Bicycles – Antique bicycle photos, features, price guide and research tools. Metz Bicycle Museum in Freehold, NJ Myths and Milestones in Bicycle Evolution by William Hudson (accessed 2005-11-17) A Quick History of Bicycles from the Pedaling History Bicycle Museum (accessed 2005-01-06) Bicyclette of Harry John Lawson VeloPress has published dozens of books on the history of cycling and the bicycle. The Wheelmen organization History of technology
History of the bicycle
Technology
8,032
24,661,031
https://en.wikipedia.org/wiki/Hygrophorus%20subalpinus
Hygrophorus subalpinus, commonly known as the subalpine waxycap, is a species of white snowbank fungus in the family Hygrophoraceae. Found in the mountains of western North America, it grows on the ground under conifers, usually near snowbanks. Description The cap of H. subalpinus is typically in diameter, with a convex shape that becomes flattened in age; sometimes it develops a central umbo (a rounded elevation resembling a nipple). The cap is sticky and white, and the cap margin often has fragments of the veil adhering. The flesh is soft, thick and white. The gills, which are attached decurrently to the stipe (running down its length), are narrow, packed closely together, and white. The stipe is white, long and thick at the apex; when young the base of the stipe is bulbous, but as it grows it thins and becomes almost the same width as at the top of the stem. A membranous annulus is present, placed low on the stipe. The mushroom has virtually no taste. Microscopic characteristics The spores are white in deposit; microscopically, they are ellipsoid and smooth, with dimensions of 8–10 by 4.5–5 μm. There are no cystidia present in the gills of this species, and clamp connections are present on the hyphae. Edibility Hygrophorus subalpinus is said to be edible, but bland. David Arora notes that it "does not have the greatest texture and flavor". One guide recommends it as a substitute for bamboo shoots. Habitat and distribution The fruit bodies of H. subalpinus grow in large clusters under conifers, often near snowbanks, and typically at high elevations, such as on mountains. The species usually appears after the snow in the area has receded, sometimes growing partly underground. It is found in North America, from the Rocky Mountains to the Pacific Northwest. Similar species The external appearance of Hygrophorus ponderatus resembles that of H. subalpinus, but the former species has a sticky or slimy cap surface, a veil that appears to be made of fibers (rather than a membrane), and narrower gills. Russula brevipes is also similar. See also List of Hygrophorus species References External links Edible fungi Fungi described in 1941 Fungi of North America Snowbank fungi subalpinus Fungus species
Hygrophorus subalpinus
Biology
525
5,867,217
https://en.wikipedia.org/wiki/Archie%27s%20law
In petrophysics, Archie's law is a purely empirical law relating the measured electrical conductivity of a porous rock to its porosity and fluid saturation. It is named after Gus Archie (1907–1978) and laid the foundation for modern well log interpretation, as it relates borehole electrical conductivity measurements to hydrocarbon saturations. Statement of the law The in-situ electrical conductivity ($\sigma$) of a fluid-saturated, porous rock is described as $\sigma = \frac{1}{a}\,\sigma_w\,\phi^m\,S_w^n$, where $\phi$ denotes the porosity, $\sigma_w$ represents the electrical conductivity of the aqueous solution (fluid or liquid phase), $S_w$ is the water saturation, or more generally the fluid saturation, of the pores, $m$ is the cementation exponent of the rock (usually in the range 1.8–2.0 for sandstones), $n$ is the saturation exponent (usually close to 2), and $a$ is the tortuosity factor. This relationship attempts to describe ion flow (mostly sodium and chloride) in clean, consolidated sands with varying intergranular porosity. Electrical conduction is assumed to be exclusively performed by ions dissolved in the pore-filling fluid. Electrical conduction is considered to be absent in the rock grains of the solid phase or in organic fluids other than water (oil, hydrocarbon, gas). Reformulated for resistivity measurements The electrical resistivity, the inverse of the electrical conductivity ($R = 1/\sigma$), is expressed as $R_t = \frac{a\,R_w}{\phi^m\,S_w^n}$, with $R_t$ for the total fluid-saturated rock resistivity, and $R_w$ for the resistivity of the fluid itself (w meaning water or an aqueous solution containing dissolved salts with ions bearing electricity in solution). The factor $F = \frac{a}{\phi^m}$ is also called the formation factor, where $R_t$ (index $t$ standing for total) is the resistivity of the rock saturated with the fluid and $R_w$ (index $w$ standing for water) is the resistivity of the fluid inside the porosity of the rock. The porosity being fully saturated with the fluid (often water, $S_w = 1$), $R_t = F\,R_w$. In case the fluid filling the porosity is a mixture of water and hydrocarbon (petroleum, oil, gas), a resistivity index ($I$) can be defined: $I = \frac{R_t}{R_0} = S_w^{-n}$, where $R_0$ is the resistivity of the rock saturated with water only. Parameters Cementation exponent, m The cementation exponent $m$ models how much the pore network increases the resistivity, as the rock itself is assumed to be non-conductive. If the pore network were to be modelled as a set of parallel capillary tubes, a cross-section area average of the rock's resistivity would yield a porosity dependence equivalent to a cementation exponent of 1. However, the tortuosity of the rock increases this to a higher number than 1. This relates the cementation exponent to the permeability of the rock: increasing permeability decreases the cementation exponent. The exponent has been observed near 1.3 for unconsolidated sands, and is believed to increase with cementation. Common values for this cementation exponent for consolidated sandstones are $1.8 < m < 2.0$. In carbonate rocks, the cementation exponent shows higher variance due to strong diagenetic affinity and complex pore structures. Values between 1.7 and 4.1 have been observed. The cementation exponent is usually assumed not to be dependent on temperature. Saturation exponent, n The saturation exponent $n$ is usually fixed to values close to 2. The saturation exponent models the dependency on the presence of non-conductive fluid (hydrocarbons) in the pore space, and is related to the wettability of the rock. Water-wet rocks will, for low water saturation values, maintain a continuous film along the pore walls, making the rock conductive. 
Oil-wet rocks will have discontinuous droplets of water within the pore space, making the rock less conductive. Tortuosity factor, a The constant $a$, called the tortuosity factor, cementation intercept, lithology factor or lithology coefficient, is sometimes used. It is meant to correct for variation in compaction, pore structure and grain size. The parameter $a$ is related to the path length of the current flow. Its value lies in the range 0.5 to 1.5, and it may be different in different reservoirs. However, a typical value to start with for a sandstone reservoir might be 0.6, which can then be tuned during the log data matching process with other sources of data, such as core. Measuring the exponents In petrophysics, the only reliable source for the numerical value of both exponents is experiments on sand plugs from cored wells. The fluid electrical conductivity can be measured directly on produced fluid (groundwater) samples. Alternatively, the fluid electrical conductivity and the cementation exponent can also be inferred from downhole electrical conductivity measurements across fluid-saturated intervals. For fully fluid-saturated intervals ($S_w = 1$, and taking $a = 1$) Archie's law can be written $\log_{10} \sigma = \log_{10} \sigma_w + m\,\log_{10} \phi$. Hence, plotting the logarithm of the measured in-situ electrical conductivity against the logarithm of the measured in-situ porosity (Pickett plot), according to Archie's law a straight-line relationship is expected, with slope equal to the cementation exponent $m$ and intercept equal to the logarithm of the in-situ fluid electrical conductivity. Sands with clay/shaly sands Archie's law postulates that the rock matrix is non-conductive. For sandstone with clay minerals, this assumption is no longer true in general, due to the clay's structure and cation exchange capacity. The Waxman–Smits equation is one model that tries to correct for this. See also Birch's law Byerlee's law References Geophysics Equations Well logging
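As a worked illustration of the formulas above, here is a minimal numerical sketch in Python. All input values (resistivities, porosities and the exponents a, m, n) are generic, assumed example numbers, not data from any particular well; NumPy is used only for the straight-line fit of the synthetic Pickett plot.

```python
import numpy as np

def water_saturation(r_t, r_w, phi, a=1.0, m=2.0, n=2.0):
    """Invert Archie's law for water saturation:
    S_w = (a * R_w / (phi**m * R_t)) ** (1/n)."""
    return (a * r_w / (phi ** m * r_t)) ** (1.0 / n)

# Assumed example log readings: deep resistivity 20 ohm-m, brine 0.05 ohm-m,
# porosity 20%; Archie gives S_w = 0.25, i.e. a likely hydrocarbon-bearing zone.
print(water_saturation(r_t=20.0, r_w=0.05, phi=0.20))

# Pickett-plot idea (a = 1): for fully water-saturated intervals,
# log10(sigma) = log10(sigma_w) + m * log10(phi), so a straight-line fit of
# log-conductivity against log-porosity recovers m (slope) and sigma_w (intercept).
phi = np.array([0.10, 0.15, 0.20, 0.25, 0.30])
sigma_w = 5.0    # assumed brine conductivity, S/m
m_true = 1.9     # assumed cementation exponent
sigma_t = sigma_w * phi ** m_true
m_fit, intercept = np.polyfit(np.log10(phi), np.log10(sigma_t), 1)
print(m_fit, 10 ** intercept)   # recovers approximately 1.9 and 5.0
```

The second half is exactly the Pickett-plot procedure described under "Measuring the exponents": on log-log axes the synthetic points fall on a straight line whose slope is the cementation exponent.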
Archie's law
Physics,Mathematics,Engineering
1,177
48,432,265
https://en.wikipedia.org/wiki/Leccinum%20subrobustum
Leccinum subrobustum is a species of bolete fungus in the family Boletaceae. It was described as new to science in 1968 by mycologists Alexander H. Smith, Harry Delbert Thiers, and Roy Watling. See also List of Leccinum species List of North American boletes References subrobustum Fungi described in 1968 Fungi of North America Taxa named by Alexander H. Smith Taxa named by Harry Delbert Thiers Taxa named by Roy Watling Fungus species
Leccinum subrobustum
Biology
104
47,548,981
https://en.wikipedia.org/wiki/Alexander%20Barvinok
Alexander I. Barvinok (born March 27, 1963) is a professor of mathematics at the University of Michigan. Barvinok received his Ph.D. from St. Petersburg State University in 1988 under the supervision of Anatoly Moiseevich Vershik. In 1999, Barvinok received the Presidential Early Career Award for Scientists and Engineers (PECASE) from President Bill Clinton. Barvinok gave an invited talk at the 2006 International Congress of Mathematicians in Madrid. In 2012, Barvinok became a Fellow of the American Mathematical Society. In 2023, Barvinok left the American Mathematical Society by refusing to renew his membership in protest of its non-opposition to "DEI statements" and "compelled language", referencing his experiences in the Soviet Union. References Living people Fellows of the American Mathematical Society 20th-century American mathematicians 21st-century American mathematicians Russian mathematicians University of Michigan faculty Combinatorialists Recipients of the Presidential Early Career Award for Scientists and Engineers 1963 births
Alexander Barvinok
Mathematics
201
11,318,393
https://en.wikipedia.org/wiki/Halogeton%20sativus
Halogeton sativus is a species of flowering plant in the family Amaranthaceae. It is native to Spain, Morocco and Algeria. Rich in salts, it was cultivated in the past to produce soda ash for glass-making. References Amaranthaceae Halophytes
Halogeton sativus
Chemistry
59
1,338,096
https://en.wikipedia.org/wiki/Tilth
Tilth is a physical condition of soil, especially in relation to its suitability for planting or growing a crop. Factors that determine tilth include the formation and stability of aggregated soil particles, moisture content, degree of aeration, soil biota, and rate of water infiltration and drainage. Tilth can change rapidly, depending on environmental factors such as changes in moisture, tillage and soil amendments. The objective of tillage (mechanical manipulation of the soil) is to improve tilth, thereby increasing crop production; in the long term, however, conventional tillage, especially plowing, often has the opposite effect, causing the soil carbon sponge to oxidize, break down and become compacted. Soil with good tilth is spongy, with large pore spaces for air infiltration and water movement. Roots grow only where the soil tilth allows for adequate levels of soil oxygen. Such soil also holds a reasonable supply of water and nutrients. Tillage, organic matter amendments, fertilization and irrigation can each improve tilth, but when used excessively can have the opposite effect. Crop rotation and cover crops can rebuild the soil carbon sponge and positively affect tilth. A combined approach can produce the greatest improvement. Aggregation Good tilth reflects a balance between soil-aggregate tensile strength and friability: a stable mixture of aggregated soil particles that can be readily broken up by shallow, non-abrasive tilling. A high tensile strength results in large cemented clods of compacted soil with low friability. Proper management of agricultural soils can positively affect soil aggregation and improve tilth quality. Aggregation is positively associated with tilth. With finer-textured soils, aggregates may in turn be made up of smaller aggregates. Aggregation implies substantial pores between individual aggregates. Aggregation is important in the subsoil, the layer below tillage. Such aggregates involve larger (2- to 6-inch) blocks of soil that are more angular and not as distinctive. These aggregates are less affected by biological activity than the tillage layer. Subsurface aggregates are important for root growth deep into the profile. Deep roots allow greater access to moisture, which helps in drought periods. Subsoil aggregates can also be compacted, mainly by heavy equipment on wet soil. Another significant source of subsoil compaction is the practice of plowing with tractor wheels in the open furrow. Pore size Soil that is well aggregated has a range of pore sizes. Each pore size plays a role in soil's physical functioning. Large pores drain rapidly and are needed for good air exchange during wet periods, preventing the oxygen deficiency that can drown plants and increase pest problems. Oxygen-deficient wet soils increase denitrification – the conversion of nitrogen to gaseous forms. In degraded soil, large pores are compressed into small ones. Small pores are critical for water retention and help a crop endure dry periods with minimal yield loss. Management Soil tilth is naturally maintained by the interaction of plant roots with the soil biota. Short-lived tilth can be obtained through mechanical and biological manipulation. Tillage In 2021, the globally tilled soil volume was estimated at 1840 km3/yr. This value exceeds by two orders of magnitude the global total of all engineering earthworks. For comparison, the global rate of natural soil bioturbation by plant roots and earthworms was estimated at 960 km3/yr.
Mechanical soil cultivation practices, including primary tillage (moldboard or chisel plowing) followed by secondary tillage (disking, harrowing, etc.), break up and aerate soil. Mechanical traffic and intensive tilling methods have a negative impact on soil aggregates, friability, soil porosity, and soil bulk density. When soils become degraded and compacted, such tillage practices are often deemed necessary. The tilth created by tillage, however, tends to be unstable, because aggregation obtained through physical manipulation of the soil is short-lived, especially after years of intensive tillage. The compaction of soil aggregates can also decrease soil biota due to the low levels of oxygen in the topsoil. The resulting high soil bulk density lowers water infiltration from rainfall or conventional irrigation (surface, sprinkler, center-pivot); in turn, these processes erode and dissolve small soil particles and organic matter. The consequences cyclically require more tilling and intervention; thus tillage practices can disrupt the biological mechanisms that stabilize soil structure, the soil carbon sponge and tilth quality. Biological Good tilth is best obtained as the result of natural soil-building processes, provided by the activity of plant roots, microorganisms, earthworms and other beneficial organisms. Such stable aggregates break apart during tillage/planting and readily provide good tilth. Soil biota and organic matter work in unison to bind soil aggregates and establish a natural soil stability – a soil carbon sponge. Plant root exudates feed bacteria that emit extracellular polysaccharides (EPS), and feed the growth of fungal hyphae, to form a soil carbon sponge with the dispersed clay particles. These active tilth-forming processes contribute to the formation and stabilization of soil structure. The resulting soil structure reduces tensile strength and soil bulk density while still forming soil aggregates through abiotic/biotic binding mechanisms that resist breakdown during water saturation. Networks of fungal hyphae, together with EPS and rhizodeposition, can enmesh soil particles, improving aggregate stability. However, these organic materials are themselves subject to biological degradation, requiring active amendments with organic material and minimal mechanical tillage. Tilth quality is heavily dependent on these natural binding processes between biotic microorganisms and abiotic soil particles, as well as the necessary input of organic matter. All constituents in this binding network must be supplied or managed in agriculture to ensure the sustainability of their presence through growing seasons. Rotation Crop rotation can help restore tilth in compacted soils. Two processes contribute to this gain. First, accelerated organic matter decomposition from tillage ends under the sod crop. Another way to achieve this is via no-till farming. Second, grass and legume sods develop extensive root systems that continually grow and die off. The dead roots supply a source of active organic matter, which feeds soil organisms that create aggregation – the soil carbon sponge. Beneficial organisms need continual supplies of organic matter to sustain themselves; they deposit the digested materials on soil aggregates and thereby stabilize them.
Also, the living roots and symbiotic microorganisms (for example, mycorrhizal fungi) can exude organic materials that nourish soil organisms and help with aggregation. Grass and legume sod crops therefore deposit more organic matter in the soil than most other crops. Some annual rotation crops, such as buckwheat, also have dense, fibrous root systems and can improve tilth. Crop mixtures with different rooting systems can be beneficial. For example, red clover seeded into winter wheat provides additional roots and a more protein-rich soil organic matter. Other rotation crops are more valuable for improving subsoils. Perennial crops, such as alfalfa, have strong, deep, penetrating tap roots that can push through hard layers, especially during wet periods when the soil is soft. These deep roots establish pathways for water and future plant roots, and produce soil organic matter. Crop rotation can extend the period of active growth compared to conventional row crops, leaving more organic material behind. For example, in a corn–soybean rotation, active growth occurs 32% of the time, while a dry bean–winter wheat–corn rotation is active 72% of the time. Crops such as rye, wheat, oat, barley, pea and cool-season grasses grow actively in the late fall and early spring when other crops are inactive. They are beneficial both as rotation and cover crops, although intensive tillage can negate their effects. Soil types The soil management practices required to maintain soil tilth are a function of the type of soil. Sandy and gravelly soils are naturally deficient in small pores and are therefore drought prone, whereas loams and clays can retain and thus supply crops with more water. Coarse-textured, sandy soils Sandy soil has a lower capacity to hold water and nutrients. Water is applied more frequently in smaller amounts to avoid leaching that carries nutrients below the root zone. Routine application of organic matter increases sandy soil's ability to hold water and nutrients by 10 times or more. Fine-textured, clay soils Clay soils lack large pores, restricting both water and air movement. During irrigation or rain events, the limited large pore space in fine-textured soils quickly fills with water, reducing soil oxygen levels. In addition to routine application of organic matter, microorganisms and earthworms provide crucial assistance to soil tilth. As microorganisms decompose the organic matter, soil particles bind together into larger aggregates, increasing large pore space. Clay soils are more subject to soil compaction, which reduces large pore spaces. Gravelly and decomposed granite soils Such soils natively have little tilth, especially once they have been disturbed. Adding organic matter up to 25% by volume can help compensate. For example, if tilling to a depth of eight inches, add two inches of organic materials. See also Effective microorganisms Korean natural farming Natural farming Permaculture References Soil biology Soil chemistry Soil improvers Soil science
Tilth
Chemistry,Biology
1,994
2,867,172
https://en.wikipedia.org/wiki/Xi%20Aquilae
Xi Aquilae (ξ Aquilae, abbreviated Xi Aql, ξ Aql), officially named Libertas, is a red-clump giant star located approximately 186 light-years from the Sun in the equatorial constellation of Aquila. As of 2008, an extrasolar planet (designated Xi Aquilae b, later named Fortitudo) has been confirmed in orbit around the star. Nomenclature ξ Aquilae (Latinised to Xi Aquilae) is the star's Bayer designation. Following its discovery the planet was designated Xi Aquilae b. In July 2014 the International Astronomical Union launched NameExoWorlds, a process for giving proper names to certain exoplanets and their host stars. The process involved public nomination and voting for the new names. In December 2015, the IAU announced the winning names were Libertas for this star and Fortitudo for its planet. The winning names were those submitted by Libertyer, a student club at Hosei University in Tokyo, Japan. The names originally proposed were the English words 'Liberty' and 'Fortitude', but to comply with the IAU's rules they were modified to the Latin versions of the same words, and so the final names became 'Libertas' and 'Fortitudo' respectively. 'Aquila' is Latin for 'eagle', a popular symbol of liberty and embodiment of fortitude—emotional and mental strength in the face of adversity. In 2016, the IAU organized a Working Group on Star Names (WGSN) to catalog and standardize proper names for stars. In its first bulletin of July 2016, the WGSN explicitly recognized the names of exoplanets and their host stars approved by the Executive Committee Working Group Public Naming of Planets and Planetary Satellites, including the names of stars adopted during the 2015 NameExoWorlds campaign. This star is now so entered in the IAU Catalog of Star Names. Properties This star has an apparent visual magnitude of 4.722, which, according to the Bortle Dark-Sky scale, is bright enough to be viewed with the naked eye from dark suburban skies. The orbital motion of the Earth causes this star to undergo an annual parallax shift of 17.51 milliarcseconds. From this measurement, the distance to this star can be determined, yielding an estimate of approximately 186 light-years with an error of 1 light year. The magnitude of the star is diminished by 0.09 magnitudes by the extinction caused by interstellar gas and dust. The spectrum of this star is considered a standard example of the stellar classification G9.5 IIIb, where the G9.5 means that it belongs to the category of G-type stars, while the luminosity class of IIIb indicates that, at an estimated age of nearly one billion years, it is an evolved star that has reached the giant stage. It is in the red clump, meaning it is generating energy through the fusion of helium into carbon at its core. Xi Aquilae has an estimated 174% of the Sun's mass. Its size has been measured using interferometry at the Navy Precision Optical Interferometer, which yields a radius ten times that of the Sun. It is radiating 58.5 times the Sun's luminosity; its effective temperature gives it the golden-hued glow of a G-type star. The possibility of a binary stellar companion can be ruled out based upon observations with the CHARA array. Planetary system In 2008, the presence of a planetary companion was announced, based upon Doppler spectroscopy results from the Okayama Astrophysical Observatory. This object, designated as Xi Aquilae b, has at least 2.8 Jupiter masses and is orbiting at an estimated 0.68 astronomical unit from the star with a period of 136.75 days.
Any planets that once orbited interior to this object may have been consumed as the star entered the red-giant stage and expanded in radius. Later, in 2024, astrometric measurements placed an upper limit on the planet's mass based on Gaia astrometry. References External links HR 7595 Image Xi Aquilae wikisky.org G-type giants Horizontal-branch stars Planetary systems with one confirmed planet Aquila (constellation) Libertas Aquilae, Xi Durchmusterung objects Aquilae, 59 188310 097938 7595
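The distance quoted in the Properties section follows directly from the measured parallax via the standard relation d [pc] = 1/p [arcsec]. A minimal sketch of the arithmetic, using the values given above (illustrative only):

# Parallax-to-distance conversion: d[pc] = 1 / parallax[arcsec].
parallax_mas = 17.51                  # annual parallax in milliarcseconds
distance_pc = 1000.0 / parallax_mas   # distance in parsecs (~57.1 pc)
distance_ly = distance_pc * 3.26156   # 1 parsec = 3.26156 light-years

print(round(distance_pc, 1), "pc =", round(distance_ly), "ly")  # ~186 ly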
Xi Aquilae
Astronomy
921
40,123,041
https://en.wikipedia.org/wiki/Kifunensine
Kifunensine is an alkaloid originally isolated from Kitasatosporia kifunense, an actinobacterium (formerly called an actinomycete). It is a neutral, stable compound. Kifunensine is a potent inhibitor of the mannosidase I enzyme and is primarily used in cell culture to make high mannose glycoproteins. Inside a cell, it prevents endoplasmic reticulum mannosidase I (ERM1) from trimming mannose residues from precursor glycoproteins. Kifunensine shows no inhibitory action against mannosidase II or the endoplasmic reticulum alpha-mannosidase, and it weakly inhibits arylmannosidase. When incorporated in cell culture media, kifunensine has shown no significant impact on cell growth or glycoprotein production yield. Kifunensine has shown potential for treatment of sarcoglycanopathies and lysosomal storage disorders. History Kifunensine was first isolated by Iwami et al. in 1987, and described as a new type of immunoactive substance. It was originally prepared by culturing the actinobacterium Kitasatosporia kifunense in a suitable medium at 25–33 °C for several days, followed by extraction of the alkaloid. The structure of kifunensine was published in 1989 by Kayakiri et al. Enzyme inhibition Kifunensine is a potent inhibitor of the mannosidase I enzyme. It is 50 to 100 times more potent than deoxymannojirimycin – an alkaloid with a similar structure. Kifunensine inhibits human endoplasmic reticulum α-1,2-mannosidase I and Golgi Class I mannosidases IA, IB and IC with Ki values of 130 and 23 nM, respectively. Being a neutral molecule (cf other mannosidase inhibitors such as deoxymannojirimycin), it can permeate inside cells. Once inside a cell, kifunensine blocks endoplasmic reticulum (ER) mannosidase I (ERM1). This blocks processing of glycoproteins in the ER, to leave them with glycoforms with mainly nine mannose residues attached to two N-acetylglucosamine residues (Man9GlcNAc2). The addition of 5–20 μM kifunensine to mammalian cell culture media is sufficient to achieve complete mannosidase I inhibition. Kifunensine does not inhibit mannosidase II or the endoplasmic reticulum alpha-mannosidase. It weakly inhibits arylmannosidase. Synthesis Kayakiri et al. published a synthesis of kifunensine from D-glucose in 1990 and a synthesis of 8-epi-kifunensine in 1991. A synthesis of kifunensine and some analogues, from L-ascorbic acid, was published by Hering et al. in 2005. Kifunensine is now made by GlycoSyn in a commercial process from N-acetylmannosamine in eight steps via a patented process. Uses Production of high mannose glycoproteins in cell culture Kifunensine's inhibitory action has led to its use in the preparation of high mannose glycoproteins by culture of transformed mammalian cells. It is easier to modify the glycosylation of a glycoprotein by using a culture media ingredient with an existing transformed cell line than by generating a new cell line, especially if many cell lines or leads are being screened. Therapeutic uses Kifunensine's use as a therapeutic is currently being researched in several conditions that benefit from its ability to inhibit mannosidase I. Sarcoglycanopathies Sarcoglycanopathies are autosomal recessive muscular disorders of the Limb–girdle Muscular Dystrophy (LMGD) group. Four forms, LGMD 2C, 2D, 2E and 2F have been identified, which result from defects in the γ-, α-, β- and δ-sarcoglycan genes. There are fewer than 1,500 patients with sarcoglycanopathy in the European Union. 
In cell-based assays and in an animal model, kifunensine was found to be particularly suited to addressing LGMD 2D (R77C substitution), which has been diagnosed in patients in Europe, Africa, Japan and Brazil. Kifunensine was granted orphan drug status for the treatment of each of γ-, α-, β- and δ-sarcoglycanopathy by the European Medicines Agency in October 2011. A patent for the treatment of sarcoglycanopathies is held by Genethon. Claim 11 relates to the use of kifunensine as an inhibitor of the endoplasmic reticulum associated degradation (ERAD) pathway, particularly of mannosidase I. The development of kifunensine was put on hold due to side effects that need further analysis. Lysosomal storage disorders In the lysosomal storage disorders Gaucher's disease and Tay–Sachs disease, endoplasmic reticulum-associated degradation (ERAD) prevents the native folding of mutated lysosomal enzymes in a patient's fibroblasts. Kifunensine, given in very low concentration (50 nM), inhibits the endoplasmic reticulum mannosidase I and interferes with early substrate recognition, prolonged ER retention and substrate folding. It did not cause irremediably misfolded proteins to accumulate or induce apoptosis in the cells. In addition, the combination of ERAD inhibition using kifunensine with proteostasis modulation (MG-132 = Z-Leu-Leu-Leu-al) to enhance the cellular folding capacity resulted in the synergistic rescue of mutated enzymes. A patent held by William Marsh Rice University makes the following claims: Claim 9: a method comprising administering to a subject a therapeutically effective amount of at least one inhibitor of ER-associated degradation. Claim 10: the method of claim 9 wherein the subject has Gaucher's disease or Tay–Sachs disease. Claim 11: the method of claim 9 wherein the inhibitor is eeyarestatin I or kifunensine. References Alkaloids Primary alcohols Cyclohexanols
Kifunensine
Chemistry
1,394
36,126,852
https://en.wikipedia.org/wiki/Feature%20hashing
In machine learning, feature hashing, also known as the hashing trick (by analogy to the kernel trick), is a fast and space-efficient way of vectorizing features, i.e. turning arbitrary features into indices in a vector or matrix. It works by applying a hash function to the features and using their hash values as indices directly (after a modulo operation), rather than looking the indices up in an associative array. In addition to its use for encoding non-numeric values, feature hashing can also be used for dimensionality reduction. This trick is often attributed to Weinberger et al. (2009), but there exists a much earlier description of this method published by John Moody in 1989. Motivation Motivating example In a typical document classification task, the input to the machine learning algorithm (both during learning and classification) is free text. From this, a bag of words (BOW) representation is constructed: the individual tokens are extracted and counted, and each distinct token in the training set defines a feature (independent variable) of each of the documents in both the training and test sets. Machine learning algorithms, however, are typically defined in terms of numerical vectors. Therefore, the bags of words for a set of documents are regarded as a term-document matrix where each row is a single document, and each column is a single feature/word; the entry in row i and column j of such a matrix captures the frequency (or weight) of the j-th term of the vocabulary in document i. (An alternative convention swaps the rows and columns of the matrix, but this difference is immaterial.) Typically, these vectors are extremely sparse—according to Zipf's law. The common approach is to construct, at learning time or prior to that, a dictionary representation of the vocabulary of the training set, and use that to map words to indices. Hash tables and tries are common candidates for dictionary implementation. E.g., the three documents

John likes to watch movies.
Mary likes movies too.
John also likes football.

can be converted, using the dictionary

{John: 1, likes: 2, to: 3, watch: 4, movies: 5, Mary: 6, too: 7, also: 8, football: 9}

to the term-document matrix

(1 1 1 1 1 0 0 0 0)
(0 1 0 0 1 1 1 0 0)
(1 1 0 0 0 0 0 1 1)

(Punctuation was removed, as is usual in document classification and clustering.) The problem with this process is that such dictionaries take up a large amount of storage space and grow in size as the training set grows. On the contrary, if the vocabulary is kept fixed and not increased with a growing training set, an adversary may try to invent new words or misspellings that are not in the stored vocabulary so as to circumvent a machine learned filter. To address this challenge, Yahoo! Research attempted to use feature hashing for their spam filters. Note that the hashing trick is not limited to text classification and similar tasks at the document level, but can be applied to any problem that involves large (perhaps unbounded) numbers of features. Mathematical motivation Mathematically, a token is an element t in a finite (or countably infinite) set T. Suppose we only need to process a finite corpus; then we can put all tokens appearing in the corpus into T, meaning that T is finite. However, suppose we want to process all possible words made of the English letters; then T is countably infinite. Most neural networks can only operate on real vector inputs, so we must construct a "dictionary" function φ : T → R^n. When T is finite, of size |T| = m, then we can use one-hot encoding to map it into R^m. First, arbitrarily enumerate T = {t_1, t_2, ..., t_m}, then define φ(t_i) = e_i. In other words, we assign a unique index i to each token, then map the token with index i to the unit basis vector e_i.
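As an illustrative sketch (not part of the original article), the dictionary-based construction just described can be written in a few lines of Python; the insertion-order, 1-based indexing below is one hypothetical but common convention:

from collections import Counter

docs = [
    "John likes to watch movies",
    "Mary likes movies too",
    "John also likes football",
]

# Build the dictionary: each new token gets the next free (1-based) index.
vocab = {}
for doc in docs:
    for tok in doc.split():
        vocab.setdefault(tok, len(vocab) + 1)

# Term-document matrix: one row per document, one column per vocabulary entry.
matrix = []
for doc in docs:
    counts = Counter(doc.split())
    matrix.append([counts[tok] for tok in vocab])

print(vocab)   # {'John': 1, 'likes': 2, 'to': 3, ...}
for row in matrix:
    print(row)

Printing vocab and matrix reproduces the dictionary and term-document matrix shown above; note how the dictionary must grow as new documents introduce new tokens, which is exactly the weakness the hashing trick removes.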
One-hot encoding is easy to interpret, but it requires one to maintain the arbitrary enumeration of T. Given a token t ∈ T, to compute φ(t), we must find out the index i of the token t. Thus, to implement φ efficiently, we need a fast-to-compute bijection h : T → {1, ..., m}; then we have φ(t) = e_{h(t)}. In fact, we can relax the requirement slightly: it suffices to have a fast-to-compute injection h : T → {1, 2, ...}, then use φ(t) = e_{h(t)}. In practice, there is no simple way to construct an efficient injection h. However, we do not need a strict injection, but only an approximate injection. That is, when t ≠ t', we should probably have h(t) ≠ h(t'), so that probably φ(t) ≠ φ(t'). At this point, we have just specified that h should be a hashing function. Thus we reach the idea of feature hashing. Algorithms Feature hashing (Weinberger et al. 2009) The basic feature hashing algorithm presented in (Weinberger et al. 2009) is defined as follows. First, one specifies two hash functions: the kernel hash h : T → {1, ..., n}, and the sign hash ξ : T → {−1, +1}. Next, one defines the feature hashing function:

φ : T → R^n,   φ(t) = ξ(t) e_{h(t)}.

Finally, extend this feature hashing function to strings of tokens by

φ : T* → R^n,   φ(t_1, ..., t_k) = Σ_{j=1..k} φ(t_j),

where T* is the set of all finite strings consisting of tokens in T. Equivalently,

φ(t_1, ..., t_k) = Σ_{j=1..k} ξ(t_j) e_{h(t_j)}.

Geometric properties We want to say something about the geometric property of φ, but T, by itself, is just a set of tokens; we cannot impose a geometric structure on it except the discrete topology, which is generated by the discrete metric. To make it nicer, we lift it to R^T (the space of real-valued functions on T), and lift φ from T to R^T by linear extension:

φ((x_t)_{t ∈ T}) = Σ_{t ∈ T} x_t ξ(t) e_{h(t)}.

There is an infinite sum there, which must be handled at once. There are essentially only two ways to handle infinities. One may impose a metric, then take its completion, to allow well-behaved infinite sums, or one may demand that nothing is actually infinite, only potentially so. Here, we go for the potential-infinity way, by restricting R^T to contain only vectors with finite support: for x ∈ R^T, only finitely many entries of x are nonzero. Define an inner product on R^T in the obvious way:

⟨x, x'⟩ = Σ_{t ∈ T} x_t x'_t.

As a side note, if T is infinite, then the inner product space R^T is not complete. Taking its completion would get us to a Hilbert space, which allows well-behaved infinite sums. Now we have an inner product space, with enough structure to describe the geometry of the feature hashing function φ. First, we can see why h is called a "kernel hash": it allows us to define a kernel K_h by

K_h(t, t') = ⟨e_{h(t)}, e_{h(t')}⟩.

In the language of the "kernel trick", K_h is the kernel generated by the "feature map" t ↦ e_{h(t)}. Note that this is not the feature map we were using, which is t ↦ ξ(t) e_{h(t)}. In fact, we have been using another kernel K_{h,ξ}, defined by

K_{h,ξ}(t, t') = ⟨ξ(t) e_{h(t)}, ξ(t') e_{h(t')}⟩.

The benefit of augmenting the kernel hash h with the binary hash ξ is the following theorem, which states that φ is an isometry "on average":

E_ξ[⟨φ(x), φ(x')⟩] = ⟨x, x'⟩ for all finite-support x, x' ∈ R^T.

The above statement and proof interpret the binary hash function ξ not as a deterministic function of type T → {−1, +1}, but as a random binary vector with unbiased entries, meaning that Pr[ξ(t) = +1] = Pr[ξ(t) = −1] = 1/2 for any t ∈ T. This is a good intuitive picture, though not rigorous. For a rigorous statement and proof, see the references. Pseudocode implementation Instead of maintaining a dictionary, a feature vectorizer that uses the hashing trick can build a vector of a pre-defined length by applying a hash function h to the features (e.g., words), then using the hash values directly as feature indices and updating the resulting vector at those indices. Here, we assume that feature actually means feature vector.

function hashing_vectorizer(features : array of string, N : integer):
    x := new vector[N]
    for f in features:
        h := hash(f)
        x[h mod N] += 1
    return x

Thus, if our feature vector is ["cat", "dog", "cat"] and the hash function is h(f) = 1 if f is "cat" and h(f) = 2 if f is "dog".
Let us take the output feature vector dimension (N) to be 4. Then the output will be [0, 2, 1, 0]. It has been suggested that a second, single-bit output hash function ξ be used to determine the sign of the update value, to counter the effect of hash collisions. If such a hash function is used, the algorithm becomes

function hashing_vectorizer(features : array of string, N : integer):
    x := new vector[N]
    for f in features:
        h := hash(f)
        idx := h mod N
        if ξ(f) == 1:
            x[idx] += 1
        else:
            x[idx] -= 1
    return x

The above pseudocode actually converts each sample into a vector. An optimized version would instead only generate a stream of (index, value) pairs and let the learning and prediction algorithms consume such streams; a linear model can then be implemented as a single hash table representing the coefficient vector. Extensions and variations Learned feature hashing Feature hashing generally suffers from hash collision, which means that there exist pairs of different tokens with the same hash: t ≠ t' with h(t) = h(t'). A machine learning model trained on feature-hashed words would then have difficulty distinguishing t and t', essentially because the shared hashed feature is polysemic. If t' is rare, then performance degradation is small, as the model could always just ignore the rare case and pretend all occurrences of the shared feature mean t. However, if both are common, then the degradation can be serious. To handle this, one can train supervised hashing functions that avoid mapping common tokens to the same feature vectors. Applications and practical performance Ganchev and Dredze showed that in text classification applications with random hash functions and several tens of thousands of columns in the output vectors, feature hashing need not have an adverse effect on classification performance, even without the signed hash function. Weinberger et al. (2009) applied their version of feature hashing to multi-task learning, and in particular, spam filtering, where the input features are pairs (user, feature) so that a single parameter vector captured per-user spam filters as well as a global filter for several hundred thousand users, and found that the accuracy of the filter went up. Chen et al. (2015) combined the idea of feature hashing and sparse matrix to construct "virtual matrices": large matrices with small storage requirements. The idea is to treat a matrix as a dictionary, with keys in {1, ..., n} × {1, ..., m} and values in R. Then, as usual in hashed dictionaries, one can use a hash function h : N × N → {1, ..., N}, and thus represent an n × m matrix as a vector in R^N, no matter how big n × m is. With virtual matrices, they constructed HashedNets, which are large neural networks taking only small amounts of storage. Implementations Implementations of the hashing trick are present in: Apache Mahout Gensim scikit-learn sofia-ml Vowpal Wabbit Apache Spark R TensorFlow Dask-ML See also References External links Hashing Representations for Machine Learning on John Langford's website What is the "hashing trick"? - MetaOptimize Q+A Hashing Machine learning Articles with example pseudocode
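For concreteness, the signed pseudocode above can be rendered as runnable Python. This is a sketch only: the choice of MD5 from Python's hashlib as the underlying hash, and the use of one digest byte as the single-bit sign hash, are illustrative assumptions, not part of the algorithm as published.

import hashlib

def hashing_vectorizer(features, N):
    """Signed feature hashing: returns an N-dimensional vector of counts."""
    x = [0] * N
    for f in features:
        digest = hashlib.md5(f.encode("utf-8")).digest()
        h = int.from_bytes(digest[:8], "little")  # kernel hash
        sign = 1 if digest[8] % 2 == 0 else -1    # single-bit sign hash
        x[h % N] += sign
    return x

print(hashing_vectorizer(["cat", "dog", "cat"], 4))

Because the sign is itself a hash of the token, two colliding tokens have an even chance of cancelling rather than compounding, which is the collision-countering effect the signed variant is designed to provide.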
Feature hashing
Engineering
2,171
35,402,084
https://en.wikipedia.org/wiki/Lexicographically%20minimal%20string%20rotation
In computer science, the lexicographically minimal string rotation or lexicographically least circular substring is the problem of finding the rotation of a string possessing the lowest lexicographical order of all such rotations. For example, the lexicographically minimal rotation of "bbaaccaadd" would be "aaccaaddbb". It is possible for a string to have multiple lexicographically minimal rotations, but for most applications this does not matter as the rotations must be equivalent. Finding the lexicographically minimal rotation is useful as a way of normalizing strings. If the strings represent potentially isomorphic structures such as graphs, normalizing in this way allows for simple equality checking. A common implementation trick when dealing with circular strings is to concatenate the string to itself instead of having to perform modular arithmetic on the string indices. Algorithms The Naive Algorithm The naive algorithm for finding the lexicographically minimal rotation of a string is to iterate through successive rotations while keeping track of the most lexicographically minimal rotation encountered. If the string is of length n, this algorithm runs in O(n^2) time in the worst case. Booth's Algorithm An efficient algorithm was proposed by Booth (1980). The algorithm uses a modified preprocessing function from the Knuth–Morris–Pratt string search algorithm. The failure function for the string is computed as normal, but the string is rotated during the computation, so some indices must be computed more than once as they wrap around. Once all indices of the failure function have been successfully computed without the string rotating again, the minimal lexicographical rotation is known to be found and its starting index is returned. The correctness of the algorithm is somewhat difficult to understand, but it is easy to implement.

def least_rotation(s: str) -> int:
    """Booth's lexicographically minimal string rotation algorithm."""
    n = len(s)
    f = [-1] * (2 * n)
    k = 0
    for j in range(1, 2 * n):
        i = f[j - k - 1]
        while i != -1 and s[j % n] != s[(k + i + 1) % n]:
            if s[j % n] < s[(k + i + 1) % n]:
                k = j - i - 1
            i = f[i]
        if i == -1 and s[j % n] != s[(k + i + 1) % n]:
            if s[j % n] < s[(k + i + 1) % n]:
                k = j
            f[j - k] = -1
        else:
            f[j - k] = i + 1
    return k

Of interest is that removing all lines of code which modify the value of k results in the original Knuth–Morris–Pratt preprocessing function, as k (representing the rotation) will remain zero. Booth's algorithm runs in O(n) time, where n is the length of the string. The algorithm performs O(n) comparisons in the worst case, and requires O(n) auxiliary memory to hold the failure function table. Shiloach's Fast Canonization Algorithm Shiloach (1981) proposed an algorithm improving on Booth's result in terms of performance. It was observed that if there are q equivalent lexicographically minimal rotations of a string of length n, then the string must consist of q equal substrings of length n/q. The algorithm requires only O(n) comparisons and constant auxiliary space in the worst case. The algorithm is divided into two phases. The first phase is a quick sieve which rules out indices that are obviously not starting locations for the lexicographically minimal rotation. The second phase then finds the lexicographically minimal rotation start index from the indices which remain.
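For contrast with the linear-time methods above, the naive algorithm described at the start of this section can be written directly, using the concatenation trick mentioned in the introduction (a sketch, not from the original article):

def least_rotation_naive(s: str) -> str:
    """Return the minimal rotation by checking all len(s) rotations: O(n^2)."""
    doubled = s + s  # concatenation trick: rotation i is doubled[i:i+len(s)]
    return min(doubled[i:i + len(s)] for i in range(len(s)))

assert least_rotation_naive("bbaaccaadd") == "aaccaaddbb"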
Duval's Lyndon Factorization Algorithm Duval (1983) proposed an efficient algorithm involving the factorization of the string into its component Lyndon words, which runs in linear time with a constant memory requirement. Variants Shiloach (1979) proposed an algorithm to efficiently compare two circular strings for equality without a normalization requirement. An additional application which arises from the algorithm is the fast generation of certain chemical structures without repetitions. See also Lyndon word Knuth–Morris–Pratt algorithm References Problems on strings Lexicography Articles with example code
Lexicographically minimal string rotation
Mathematics
892
14,346,153
https://en.wikipedia.org/wiki/Steuart%20Pringle
Lieutenant General Sir Steuart Robert Pringle (21 July 1928 – 18 April 2013) was a Scottish Royal Marines officer who served as Commandant General Royal Marines from 1981 to 1985. He was seriously injured by an IRA car bomb in 1981, in which he lost his right leg. He was styled as the 10th Baronet of Stichill from 1961 to 2016, when a court accepted DNA evidence that established he was not the biological grandson of the 8th baronet. His cousin Murray Pringle inherited the baronetcy instead of Sir Steuart's eldest son and expected heir. Early life and education Pringle was born in Dover on 21 July 1928, the only child of Sir Norman Hamilton Pringle of Stichill, 9th Baronet (1903–1961), and his first wife, Winifred Olive Curran (died 1975). He was educated at Sherborne School. Military career Pringle joined the Royal Marines in 1946. He was appointed commanding officer of 45 Commando in 1971 and had a tour at Headquarters Commando Forces from 1974 in which role he was promoted from lieutenant colonel to colonel. Promoted to major-general on 1 February 1978 (local major-general from 20 February 1978), he then became Major General Commando Forces. Pringle went on to be chief of staff to the Commandant General Royal Marines in 1979 and Commandant General Royal Marines in 1981. On 17 October 1981, he was injured by an IRA car bomb attached to his red Volkswagen car outside his home in Dulwich, South London as he went to take his pet black Labrador, Bella, to the park for a run. One of the first questions he asked was, "How's my dog?". Bella was unscathed but Pringle lost his right leg in the incident and badly injured his left. As Commandant General of the Royal Marines, he was seen welcoming the Commandos home following the Falklands War. He was named BBC Pebble Mill Man of the Year for his "outstanding achievement and bravery". He later returned to duties, and retired in June 1984. Later life In retirement he became chairman and Chief Executive of the Chatham Historic Dockyard Trust. He died in London on 18 April 2013. Honours Pringle was appointed a Knight Commander of the Order of the Bath (KCB) in the 1982 Birthday Honours. He was awarded an Honorary DSc of City University London in 1982 and an Honorary LLD of Exeter University in 1994. He was also an Honorary Admiral of the Texas Navy. Personal life In 1953, Sir Steuart married Jacqueline Marie Gladwell, only daughter of Wilfrid Hubert Gladwell. They had two sons and two daughters. His eldest son, Simon, had been the heir apparent to the baronetcy. DNA case Norman Hamilton Pringle and his son Sir Steuart were recognised as the 9th and 10th Pringle Baronets of Nova Scotia, respectively, during their lifetimes; however, questions had been raised in the family as to whether Norman was the biological child of Sir Norman Robert Pringle, 8th Baronet (1871–1919). The 8th Baronet had married Florence Madge Vaughan on 16 October 1902 but she gave birth to Norman only seven months later, on 13 May 1903, leading to questions of legitimacy that were not resolved until more than a century later. In 2009, Sir Steuart agreed to DNA testing for a project launched by his first cousin Murray Pringle (born 1941), an accountant who was attempting to restore a clan chief to Clan Pringle, which has been an armigerous clan since 1737. 
The results indicated that Sir Steuart's paternal DNA was not consistent with that of other Pringles, but Murray heeded advice that the issue of the legitimate claimant to the baronetcy should not be contested during Sir Steuart's lifetime. After he died in 2013, both Simon (Sir Steuart's eldest son) and Murray attempted to claim the baronetcy. In 2016, a court agreed Murray Pringle was the rightful heir to the baronetcy instead of his first cousin once removed Simon, as DNA evidence demonstrated that Sir Steuart's father was not the biological son of Sir Norman Pringle, 8th Baronet. There were two younger sons – Ronald Steuart (1905–1968; Murray Pringle's father), and James Drummond (1906–1960). Norman Hamilton was proven with a "high degree of probability" to be fathered by someone outside the Pringle clan, and Sir Steuart and his father were removed posthumously from the Official Roll of the Baronetage. Murray Pringle was declared the 10th Baronet and his father the de jure 9th Baronet. However, as a Knight Commander of the Order of the Bath, Sir Steuart was still styled as Sir. References 1928 births 2013 deaths Baronets in the Baronetage of Nova Scotia British amputees British military personnel of the Cyprus Emergency British military personnel of the Indonesia–Malaysia confrontation British military personnel of the Malayan Emergency British military personnel of the Suez Crisis Car bomb victims Explosion survivors Knights Commander of the Order of the Bath People educated at Sherborne School People of The Troubles (Northern Ireland) Royal Marines lieutenant generals Military personnel from Kent 20th-century Royal Marines personnel
Steuart Pringle
Chemistry
1,064
24,703,939
https://en.wikipedia.org/wiki/Utah%20oil%20sands
In the United States a large supply of oil sands is found in eastern Utah. These deposits of bitumen or heavy crude oil could yield about 12 to 19 billion barrels from a number of prominent sites. History Since the early 1900s the oil sand deposits have been extracted mainly for use in road pavement. Later, in the 1970s, oil companies began to experiment with the deposits in the hope of exploiting them commercially. These experiments ended in the late 1980s, when the technologies being used were judged inefficient and too expensive. Recently, oil companies have again become interested in Utah's oil sands. Now that conventional oil is becoming harder to find, oil sands have become an alternative fuel source. Production sites Utah's oil sands are made up of several different deposits, all consisting of different amounts of heavy or crude oil. These sites are mostly found on public lands. They are mainly close together and many are found within the Uintah Basin of Utah, which is a section of the Colorado Plateaus province. Some of these sites include Sunnyside, P.R. Spring, Asphalt Ridge, Hill Creek, Circle Ridge, Circle Cliffs, White Rocks, and the Tar Sand Triangle, the largest deposit. Tar Sand Triangle The Tar Sand Triangle is located in southeastern Utah, between the Dirty Devil and Colorado Rivers in Wayne and Garfield Counties. The Tar Sand Triangle is the largest deposit of oil sands in the United States known today. It contains about 6.3 billion barrels of heavy oil, but is thought to have originally held more. At one point the Tar Sand Triangle could have held 16 billion barrels of heavy oil, almost as much as all of Utah's deposits contain today. See also Oil Sands Athabasca Oil Sands History of the petroleum industry in the United States Utah Oil Sands Joint Venture References Bituminous sands Oil fields in Utah
Utah oil sands
Chemistry
380
34,197
https://en.wikipedia.org/wiki/X-ray
An X-ray (also known in many languages as Röntgen radiation) is a form of high-energy electromagnetic radiation with a wavelength shorter than those of ultraviolet rays and longer than those of gamma rays. Roughly, X-rays have a wavelength ranging from 10 nanometers to 10 picometers, corresponding to frequencies in the range of 30 petahertz to 30 exahertz (3×10^16 Hz to 3×10^19 Hz) and photon energies in the range of 100 eV to 100 keV, respectively. X-rays were discovered in 1895 by the German scientist Wilhelm Conrad Röntgen, who named the phenomenon X-radiation to signify an unknown type of radiation. X-rays can penetrate many solid substances such as construction materials and living tissue, so X-ray radiography is widely used in medical diagnostics (e.g., checking for broken bones) and material science (e.g., identification of some chemical elements and detecting weak points in construction materials). However, X-rays are ionizing radiation and exposure can be hazardous to health, causing DNA damage, cancer and, at higher intensities, burns and radiation sickness. Their generation and use are strictly controlled by public health authorities. History Pre-Röntgen observations and research X-rays were originally noticed in science as a type of unidentified radiation emanating from discharge tubes by experimenters investigating cathode rays produced by such tubes, which are energetic electron beams that were first observed in 1869. Early researchers noticed effects that were attributable to them in many of the early Crookes tubes (invented around 1875). Crookes tubes created free electrons by ionization of the residual air in the tube by a high DC voltage of anywhere between a few kilovolts and 100 kV. This voltage accelerated the electrons coming from the cathode to a high enough velocity that they created X-rays when they struck the anode or the glass wall of the tube. The earliest experimenter thought to have (unknowingly) produced X-rays was William Morgan. In 1785, he presented a paper to the Royal Society of London describing the effects of passing electrical currents through a partially evacuated glass tube, producing a glow created by X-rays. This work was further explored by Humphry Davy and his assistant Michael Faraday. Starting in 1888, Philipp Lenard conducted experiments to see whether cathode rays could pass out of the Crookes tube into the air. He built a Crookes tube with a "window" at the end made of thin aluminium, facing the cathode so the cathode rays would strike it (later called a "Lenard tube"). He found that something came through that would expose photographic plates and cause fluorescence. He measured the penetrating power of these rays through various materials. It has been suggested that at least some of these "Lenard rays" were actually X-rays. Helmholtz formulated mathematical equations for X-rays. He postulated a dispersion theory before Röntgen made his discovery and announcement. He based it on the electromagnetic theory of light. However, he did not work with actual X-rays. In early 1890, photographer William Jennings and associate professor of the University of Pennsylvania Arthur W. Goodspeed were making photographs of coins with electric sparks. On 22 February, after the end of their experiments, two coins were left on a stack of photographic plates before Goodspeed demonstrated to Jennings the operation of Crookes tubes. While developing the plates, Jennings noticed disks of unknown origin on some of the plates, but nobody could explain them, and they moved on.
Only in 1896 did they realize that they had accidentally made an X-ray photograph (they did not claim a discovery). Also in 1890, Röntgen's assistant Ludwig Zehnder noticed a flash of light from a fluorescent screen immediately before the covered tube he was switching on punctured. When Stanford University physics professor Fernando Sanford conducted his "electric photography" experiments in 1891–1893 by photographing coins in the light of electric sparks, like Jennings and Goodspeed, he may have unknowingly generated and detected X-rays. His letter of 6 January 1893 to the Physical Review was duly published, and an article entitled Without Lens or Light, Photographs Taken With Plate and Object in Darkness appeared in the San Francisco Examiner. In 1894, Nikola Tesla noticed damaged film in his lab that seemed to be associated with Crookes tube experiments and began investigating this invisible, radiant energy. After Röntgen identified the X-ray, Tesla began making X-ray images of his own using high voltages and tubes of his own design, as well as Crookes tubes. Discovery by Röntgen On 8 November 1895, German physics professor Wilhelm Röntgen stumbled on X-rays while experimenting with Lenard tubes and Crookes tubes and began studying them. He wrote an initial report "On a new kind of ray: A preliminary communication" and on 28 December 1895, submitted it to Würzburg's Physical-Medical Society journal. This was the first paper written on X-rays. Röntgen referred to the radiation as "X", to indicate that it was an unknown type of radiation. Some early texts refer to them as Chi-rays, having interpreted "X" as the uppercase Greek letter Chi, Χ. There are conflicting accounts of his discovery because Röntgen had his lab notes burned after his death, but this is a likely reconstruction by his biographers: Röntgen was investigating cathode rays from a Crookes tube which he had wrapped in black cardboard so that the visible light from the tube would not interfere, using a fluorescent screen painted with barium platinocyanide. He noticed a faint green glow from the screen a short distance away. Röntgen realized some invisible rays coming from the tube were passing through the cardboard to make the screen glow. He found they could also pass through books and papers on his desk. Röntgen threw himself into investigating these unknown rays systematically. Two months after his initial discovery, he published his paper. Röntgen discovered their medical use when he made a picture of his wife's hand on a photographic plate formed due to X-rays. The photograph of his wife's hand was the first photograph of a human body part using X-rays. When she saw the picture, she said "I have seen my death." The discovery of X-rays generated significant interest. Röntgen's biographer Otto Glasser estimated that, in 1896 alone, as many as 49 essays and 1044 articles about the new rays were published. This was probably a conservative estimate, if one considers that nearly every paper around the world extensively reported about the new discovery, with a magazine such as Science dedicating as many as 23 articles to it in that year alone. Sensationalist reactions to the new discovery included publications linking the new kind of rays to occult and paranormal theories, such as telepathy. The name X-rays stuck, although (over Röntgen's great objections) many of his colleagues suggested calling them Röntgen rays.
They are still referred to as such in many languages, including German, Hungarian, Ukrainian, Danish, Polish, Czech, Bulgarian, Swedish, Finnish, Portuguese, Estonian, Slovak, Slovenian, Turkish, Russian, Latvian, Lithuanian, Albanian, Japanese, Dutch, Georgian, Hebrew, Icelandic, and Norwegian. Röntgen received the first Nobel Prize in Physics for his discovery. Advances in radiology Röntgen immediately noticed X-rays could have medical applications. Along with his 28 December Physical-Medical Society submission, he sent a letter to physicians he knew around Europe (1 January 1896). News (and the creation of "shadowgrams") spread rapidly with Scottish electrical engineer Alan Archibald Campbell-Swinton being the first after Röntgen to create an X-ray photograph (of a hand). Through February, there were 46 experimenters taking up the technique in North America alone. The first use of X-rays under clinical conditions was by John Hall-Edwards in Birmingham, England on 11 January 1896, when he radiographed a needle stuck in the hand of an associate. On 14 February 1896, Hall-Edwards was also the first to use X-rays in a surgical operation. In early 1896, several weeks after Röntgen's discovery, Ivan Romanovich Tarkhanov irradiated frogs and insects with X-rays, concluding that the rays "not only photograph, but also affect the living function". At around the same time, the zoological illustrator James Green began to use X-rays to examine fragile specimens. George Albert Boulenger first mentioned this work in a paper he delivered before the Zoological Society of London in May 1896. The book Sciagraphs of British Batrachians and Reptiles (sciagraph is an obsolete name for an X-ray photograph), by Green and James H. Gardiner, with a foreword by Boulenger, was published in 1897. The first medical X-ray made in the United States was obtained using a discharge tube of Ivan Puluj's design. In January 1896, on reading of Röntgen's discovery, Frank Austin of Dartmouth College tested all of the discharge tubes in the physics laboratory and found that only the Puluj tube produced X-rays. This was a result of Puluj's inclusion of an oblique "target" of mica, used for holding samples of fluorescent material, within the tube. On 3 February 1896, Gilman Frost, professor of medicine at the college, and his brother Edwin Frost, professor of physics, exposed the wrist of Eddie McCarthy, whom Gilman had treated some weeks earlier for a fracture, to the X-rays and collected the resulting image of the broken bone on gelatin photographic plates obtained from Howard Langill, a local photographer also interested in Röntgen's work. Many experimenters, including Röntgen himself in his original experiments, came up with methods to view X-ray images "live" using some form of luminescent screen. Röntgen used a screen coated with barium platinocyanide. On 5 February 1896, live imaging devices were developed by both Italian scientist Enrico Salvioni (his "cryptoscope") and William Francis Magie of Princeton University (his "Skiascope"), both using barium platinocyanide. American inventor Thomas Edison started research soon after Röntgen's discovery and investigated materials' ability to fluoresce when exposed to X-rays, finding that calcium tungstate was the most effective substance. In May 1896, he developed the first mass-produced live imaging device, his "Vitascope", later called the fluoroscope, which became the standard for medical X-ray examinations. 
Edison dropped X-ray research around 1903, before the death of Clarence Madison Dally, one of his glassblowers. Dally had a habit of testing X-ray tubes on his own hands, developing a cancer in them so tenacious that both arms were amputated in a futile attempt to save his life; in 1904, he became the first known death attributed to X-ray exposure. During the time the fluoroscope was being developed, Serbian American physicist Mihajlo Pupin, using a calcium tungstate screen developed by Edison, found that using a fluorescent screen decreased the exposure time it took to create an X-ray for medical imaging from an hour to a few minutes. In 1901, U.S. President William McKinley was shot twice in an assassination attempt while attending the Pan American Exposition in Buffalo, New York. While one bullet only grazed his sternum, another had lodged somewhere deep inside his abdomen and could not be found. A worried McKinley aide sent word to inventor Thomas Edison to rush an X-ray machine to Buffalo to find the stray bullet. It arrived but was not used. While the shooting itself had not been lethal, gangrene had developed along the path of the bullet, and McKinley died of septic shock due to bacterial infection six days later. Hazards discovered With the widespread experimentation with X-rays after their discovery in 1895 by scientists, physicians, and inventors came many stories of burns, hair loss, and worse in technical journals of the time. In February 1896, Professor John Daniel and William Lofland Dudley of Vanderbilt University reported hair loss after Dudley was X-rayed. A child who had been shot in the head was brought to the Vanderbilt laboratory in 1896. Before trying to find the bullet, an experiment was attempted, for which Dudley "with his characteristic devotion to science" volunteered. Daniel reported that 21 days after taking a picture of Dudley's skull (with an exposure time of one hour), he noticed a bald spot on the part of his head nearest the X-ray tube: "A plate holder with the plates towards the side of the skull was fastened and a coin placed between the skull and the head. The tube was fastened at the other side at a distance of one-half-inch from the hair." Beyond burns, hair loss, and cancer, X-rays have been linked to infertility in males, depending on the radiation dose. In August 1896, H. D. Hawks, a graduate of Columbia College, suffered severe hand and chest burns from an X-ray demonstration. It was reported in Electrical Review and led to many other reports of problems associated with X-rays being sent in to the publication. Many experimenters, including Elihu Thomson at Edison's lab, William J. Morton, and Nikola Tesla, also reported burns. Elihu Thomson deliberately exposed a finger to an X-ray tube over a period of time and suffered pain, swelling, and blistering. Other effects were sometimes blamed for the damage, including ultraviolet rays and (according to Tesla) ozone. Many physicians claimed there were no effects from X-ray exposure at all. On 3 August 1905, in San Francisco, California, Elizabeth Fleischman, an American X-ray pioneer, died from complications as a result of her work with X-rays. Hall-Edwards developed a cancer (then called X-ray dermatitis) sufficiently advanced by 1904 to cause him to write papers and give public addresses on the dangers of X-rays. His left arm had to be amputated at the elbow in 1908, and four fingers on his right arm soon thereafter, leaving only a thumb. He died of cancer in 1926.
His left hand is kept at Birmingham University. 20th century and beyond The many applications of X-rays immediately generated enormous interest. Workshops began making specialized versions of Crookes tubes for generating X-rays, and these first-generation cold cathode or Crookes X-ray tubes were used until about 1920. A typical early 20th-century medical X-ray system consisted of a Ruhmkorff coil connected to a cold cathode Crookes X-ray tube. A spark gap was typically connected to the high voltage side in parallel to the tube and used for diagnostic purposes. The spark gap allowed detecting the polarity of the sparks, measuring voltage by the length of the sparks (thus determining the "hardness" of the vacuum of the tube), and it provided a load in the event the X-ray tube was disconnected. To detect the hardness of the tube, the spark gap was initially opened to the widest setting. While the coil was operating, the operator reduced the gap until sparks began to appear. A tube in which sparking began at a relatively short gap was considered soft (low vacuum) and suitable for thin body parts such as hands and arms. A longer spark indicated the tube was suitable for shoulders and knees. A still longer spark would indicate a higher vacuum suitable for imaging the abdomen of larger individuals. Since the spark gap was connected in parallel to the tube, the spark gap had to be opened until sparking ceased before the tube could be operated for imaging. Exposure time for photographic plates was around half a minute for a hand to a couple of minutes for a thorax. The plates might include a small addition of fluorescent salt to reduce exposure times. Crookes tubes were unreliable. They had to contain a small quantity of gas (invariably air), as a current will not flow in such a tube if it is fully evacuated. However, as time passed, the X-rays caused the glass to absorb the gas, causing the tube to generate "harder" X-rays until it soon stopped operating. Larger and more frequently used tubes were provided with devices for restoring the air, known as "softeners". These often took the form of a small side tube that contained a small piece of mica, a mineral that traps relatively large quantities of air within its structure. A small electrical heater heated the mica, causing it to release a small amount of air, thus restoring the tube's efficiency. However, the mica had a limited life, and the restoration process was difficult to control. In 1904, John Ambrose Fleming invented the thermionic diode, the first kind of vacuum tube. This used a hot cathode that caused an electric current to flow in a vacuum. This idea was quickly applied to X-ray tubes, and hence heated-cathode X-ray tubes, called "Coolidge tubes", completely replaced the troublesome cold cathode tubes by about 1920. In about 1906, the physicist Charles Barkla discovered that X-rays could be scattered by gases, and that each element had a characteristic X-ray spectrum. He won the 1917 Nobel Prize in Physics for this discovery. In 1912, Max von Laue, Paul Knipping, and Walter Friedrich first observed the diffraction of X-rays by crystals. This discovery, along with the early work of Paul Peter Ewald, William Henry Bragg, and William Lawrence Bragg, gave birth to the field of X-ray crystallography. In 1913, Henry Moseley performed crystallography experiments with X-rays emanating from various metals and formulated Moseley's law, which relates the frequency of the X-rays to the atomic number of the metal. The Coolidge X-ray tube was invented the same year by William D.
Coolidge. It made possible the continuous emission of X-rays. Modern X-ray tubes are based on this design, often employing rotating targets, which allow significantly higher heat dissipation than static targets and thus permit the higher X-ray output needed in high-powered applications such as rotational CT scanners. The use of X-rays for medical purposes (which developed into the field of radiation therapy) was pioneered by Major John Hall-Edwards in Birmingham, England. In 1908, as noted above, he had to have his left arm amputated because of the spread of X-ray dermatitis. Medical science also used the motion picture to study human physiology. In 1913, a motion picture was made in Detroit showing a hard-boiled egg inside a human stomach. This early X-ray movie was recorded at a rate of one still image every four seconds. Dr Lewis Gregory Cole of New York was a pioneer of the technique, which he called "serial radiography". In 1918, X-rays were used in association with motion picture cameras to capture the human skeleton in motion. In 1920, the technique was used to record the movements of the tongue and teeth in the study of languages by the Institute of Phonetics in England. In 1914, Marie Curie developed radiological cars to support soldiers injured in World War I. The cars allowed rapid X-ray imaging of wounded soldiers so battlefield surgeons could operate more quickly and accurately. From the early 1920s through to the 1950s, X-ray machines were developed to assist in the fitting of shoes and were sold to commercial shoe stores. Concerns regarding the impact of frequent or poorly controlled use were expressed in the 1950s, leading to the practice's eventual end that decade. The X-ray microscope was developed during the 1950s. The Chandra X-ray Observatory, launched on 23 July 1999, has allowed the exploration of the very violent processes in the universe that produce X-rays. Unlike visible light, which gives a relatively stable view of the universe, the X-ray universe is unstable. It features stars being torn apart by black holes, galactic collisions, novae, and neutron stars that build up layers of plasma and then explode into space. An X-ray laser device was proposed as part of the Reagan Administration's Strategic Defense Initiative in the 1980s, but the only test of the device (a sort of laser "blaster", or death ray, powered by a thermonuclear explosion) gave inconclusive results. For technical and political reasons, the overall project (including the X-ray laser) was defunded (though it was later revived by the second Bush Administration as National Missile Defense, using different technologies). Phase-contrast X-ray imaging refers to a variety of techniques that use phase information of an X-ray beam to form the image. Due to its good sensitivity to density differences, it is especially useful for imaging soft tissues. It has become an important method for visualizing cellular and histological structures in a wide range of biological and medical studies. There are several technologies being used for X-ray phase-contrast imaging, all using different principles to convert phase variations in the X-rays emerging from an object into intensity variations. These include propagation-based phase contrast, Talbot interferometry, refraction-enhanced imaging, and X-ray interferometry. These methods provide higher contrast than normal absorption-based X-ray imaging, making it possible to distinguish details of nearly identical density from one another.
A disadvantage is that these methods require more sophisticated equipment, such as synchrotron or microfocus X-ray sources, X-ray optics, and high-resolution X-ray detectors. Energy ranges Soft and hard X-rays X-rays with high photon energies above 5–10 keV (below 0.2–0.1 nm wavelength) are called hard X-rays, while those with lower energy (and longer wavelength) are called soft X-rays. The intermediate range, with photon energies of several keV, is often referred to as tender X-rays. Due to their penetrating ability, hard X-rays are widely used to image the inside of objects (e.g. in medical radiography and airport security). The term X-ray is metonymically used to refer to a radiographic image produced using this method, in addition to the method itself. Since the wavelengths of hard X-rays are similar to the size of atoms, they are also useful for determining crystal structures by X-ray crystallography. By contrast, soft X-rays are easily absorbed in air; the attenuation length of 600 eV (~2 nm) X-rays in water is less than 1 micrometer. Gamma rays There is no consensus for a definition distinguishing between X-rays and gamma rays. One common practice is to distinguish between the two types of radiation based on their source: X-rays are emitted by electrons, while gamma rays are emitted by the atomic nucleus. This definition has several problems: other processes can also generate these high-energy photons, or sometimes the method of generation is not known. One common alternative is to distinguish X- and gamma radiation on the basis of wavelength (or, equivalently, frequency or photon energy), with radiation shorter than some arbitrary wavelength, such as 10⁻¹¹ m (0.1 Å), defined as gamma radiation. This criterion assigns a photon to an unambiguous category, but is only possible if the wavelength is known. (Some measurement techniques do not distinguish between detected wavelengths.) However, these two definitions often coincide, since the electromagnetic radiation emitted by X-ray tubes generally has a longer wavelength and lower photon energy than the radiation emitted by radioactive nuclei. Occasionally, one term or the other is used in specific contexts due to historical precedent, based on measurement (detection) technique, or based on their intended use rather than their wavelength or source. Thus, gamma-rays generated for medical and industrial uses, for example radiotherapy, in the ranges of 6–20 MeV, can in this context also be referred to as X-rays. Properties X-ray photons carry enough energy to ionize atoms and disrupt molecular bonds. This makes X-rays a type of ionizing radiation, and therefore harmful to living tissue. A very high radiation dose over a short period of time causes burns and radiation sickness, while lower doses can give an increased risk of radiation-induced cancer. In medical imaging, this increased cancer risk is generally greatly outweighed by the benefits of the examination. The ionizing capability of X-rays can be used in cancer treatment to kill malignant cells using radiation therapy. It is also used for material characterization using X-ray spectroscopy. Hard X-rays can traverse relatively thick objects without being much absorbed or scattered. For this reason, X-rays are widely used to image the inside of visually opaque objects. The most often seen applications are in medical radiography and airport security scanners, but similar techniques are also important in industry (e.g. industrial radiography and industrial CT scanning) and research (e.g.
small animal CT). The penetration depth varies by several orders of magnitude over the X-ray spectrum. This allows the photon energy to be adjusted for the application so as to give sufficient transmission through the object and at the same time provide good contrast in the image. X-rays have much shorter wavelengths than visible light, which makes it possible to probe structures much smaller than can be seen using a normal microscope. This property is used in X-ray microscopy to acquire high-resolution images, and also in X-ray crystallography to determine the positions of atoms in crystals. Interaction with matter X-rays interact with matter in three main ways: photoabsorption, Compton scattering, and Rayleigh scattering. The strength of these interactions depends on the energy of the X-rays and the elemental composition of the material, but not much on chemical properties, since the X-ray photon energy is much higher than chemical binding energies. Photoabsorption or photoelectric absorption is the dominant interaction mechanism in the soft X-ray regime and for the lower hard X-ray energies. At higher energies, Compton scattering dominates. Photoelectric absorption The probability of a photoelectric absorption per unit mass is approximately proportional to Z³/E³, where Z is the atomic number and E is the energy of the incident photon. This rule is not valid close to inner shell electron binding energies, where there are abrupt changes in interaction probability, the so-called absorption edges. However, the general trend of high absorption coefficients, and thus short penetration depths, for low photon energies and high atomic numbers is very strong. For soft tissue, photoabsorption dominates up to about 26 keV photon energy, where Compton scattering takes over. For higher atomic number substances, this limit is higher. The high amount of calcium (Z = 20) in bones, together with their high density, is what makes them show up so clearly on medical radiographs. A photoabsorbed photon transfers all its energy to the electron with which it interacts, thus ionizing the atom to which the electron was bound and producing a photoelectron that is likely to ionize more atoms in its path. An outer electron will fill the vacant electron position and produce either a characteristic X-ray or an Auger electron. These effects can be used for elemental detection through X-ray spectroscopy or Auger electron spectroscopy. Compton scattering Compton scattering is the predominant interaction between X-rays and soft tissue in medical imaging. Compton scattering is an inelastic scattering of the X-ray photon by an outer shell electron. Part of the energy of the photon is transferred to the scattering electron, thereby ionizing the atom and increasing the wavelength of the X-ray. The scattered photon can go in any direction, but a direction similar to the original direction is more likely, especially for high-energy X-rays. The probability for different scattering angles is described by the Klein–Nishina formula. The transferred energy can be obtained directly from the scattering angle through the conservation of energy and momentum. Rayleigh scattering Rayleigh scattering is the dominant elastic scattering mechanism in the X-ray regime. Elastic forward scattering gives rise to the refractive index, which for X-rays is only slightly below 1. Production Whenever charged particles (electrons or ions) of sufficient energy hit a material, X-rays are produced.
Production by electrons X-rays can be generated by an X-ray tube, a vacuum tube that uses a high voltage to accelerate the electrons released by a hot cathode to a high velocity. The high-velocity electrons collide with a metal target, the anode, creating the X-rays. In medical X-ray tubes the target is usually tungsten or a more crack-resistant alloy of rhenium (5%) and tungsten (95%), but sometimes molybdenum for more specialized applications, such as when softer X-rays are needed, as in mammography. In crystallography, a copper target is most common, with cobalt often being used when fluorescence from iron content in the sample might otherwise present a problem. When even lower energies are needed, as in X-ray photoelectron spectroscopy, the Kα X-rays from an aluminium or magnesium target are often used. The maximum energy of the produced X-ray photon is limited by the energy of the incident electron, which is equal to the voltage on the tube times the electron charge, so an 80 kV tube cannot create X-rays with an energy greater than 80 keV. When the electrons hit the target, X-rays are created by two different atomic processes: Characteristic X-ray emission (X-ray electroluminescence): If the electron has enough energy, it can knock an orbital electron out of the inner electron shell of the target atom. After that, electrons from higher energy levels fill the vacancies, and X-ray photons are emitted. This process produces an emission spectrum of X-rays at a few discrete frequencies, sometimes referred to as spectral lines. Usually, these are transitions from the upper shells to the K shell (called K lines), to the L shell (called L lines), and so on. If the transition is from 2p to 1s, it is called Kα, while if it is from 3p to 1s it is Kβ. The frequencies of these lines depend on the material of the target and are therefore called characteristic lines. The Kα line usually has greater intensity than the Kβ one and is more desirable in diffraction experiments, so the Kβ line is often removed with a filter. The filter is usually made of a metal whose atomic number is one less than that of the anode material (e.g. a Ni filter for a Cu anode or a Nb filter for a Mo anode). Bremsstrahlung: This is radiation given off by the electrons as they are scattered by the strong electric field near the nuclei. These X-rays have a continuous spectrum. The frequency of Bremsstrahlung is limited by the energy of the incident electrons. So, the resulting output of a tube consists of a continuous Bremsstrahlung spectrum falling off to zero at the tube voltage, plus several spikes at the characteristic lines. The voltages used in diagnostic X-ray tubes range from roughly 20 kV to 150 kV, and thus the highest energies of the X-ray photons range from roughly 20 keV to 150 keV. Both of these X-ray production processes are inefficient, with only about one percent of the electrical energy used by the tube converted into X-rays; most of the electric power consumed by the tube is released as waste heat. To produce a usable flux of X-rays, the X-ray tube must be designed to dissipate this excess heat. A specialized source of X-rays which is becoming widely used in research is synchrotron radiation, which is generated by particle accelerators. Its unique features are X-ray outputs many orders of magnitude greater than those of X-ray tubes, wide X-ray spectra, excellent collimation, and linear polarization.
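The tube-voltage limit described above lends itself to a quick worked calculation. The following minimal Python sketch (illustrative only; the constant is a rounded CODATA value) computes the maximum photon energy and the corresponding minimum bremsstrahlung wavelength, the Duane–Hunt limit, for a given tube voltage:

HC_EV_NM = 1239.84  # h*c in eV*nm (rounded CODATA value)

def max_photon_energy_keV(tube_voltage_kV: float) -> float:
    # An electron accelerated through V kilovolts carries V keV of kinetic
    # energy, which caps the energy of any photon it can radiate.
    return tube_voltage_kV

def min_wavelength_nm(tube_voltage_kV: float) -> float:
    # Duane-Hunt limit: lambda_min = h*c / E_max
    return HC_EV_NM / (tube_voltage_kV * 1000.0)

for kv in (20, 80, 150):  # roughly the diagnostic range quoted above
    print(f"{kv} kV: E_max = {max_photon_energy_keV(kv):.0f} keV, "
          f"lambda_min = {min_wavelength_nm(kv):.4f} nm")

For an 80 kV tube this gives 80 keV and about 0.0155 nm, consistent with the hard X-ray wavelengths quoted under Energy ranges.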
Short nanosecond bursts of X-rays peaking at 15 keV in energy may be reliably produced by peeling pressure-sensitive adhesive tape from its backing in a moderate vacuum. This is likely to be the result of recombination of electrical charges produced by triboelectric charging. The intensity of X-ray triboluminescence is sufficient for it to be used as a source for X-ray imaging. Production by fast positive ions X-rays can also be produced by fast protons or other positive ions. Proton-induced X-ray emission, or particle-induced X-ray emission (PIXE), is widely used as an analytical procedure. For high energies, the production cross section is proportional to Z₁²Z₂⁻⁴, where Z₁ refers to the atomic number of the ion and Z₂ to that of the target atom. Overviews of these cross sections are given in the literature. Production in lightning and laboratory discharges X-rays are also produced in lightning, accompanying terrestrial gamma-ray flashes. The underlying mechanism is the acceleration of electrons in lightning-related electric fields and the subsequent production of photons through Bremsstrahlung. This produces photons with energies ranging from a few keV to several tens of MeV. In laboratory discharges with a gap of approximately 1 meter and a peak voltage of 1 MV, X-rays with a characteristic energy of 160 keV are observed. A possible explanation is the encounter of two streamers and the production of high-energy run-away electrons; however, microscopic simulations have shown that the duration of electric field enhancement between two streamers is too short to produce a significant number of run-away electrons. Recently, it has been proposed that air perturbations in the vicinity of streamers can facilitate the production of run-away electrons and hence of X-rays from discharges. Detectors X-ray detectors vary in shape and function depending on their purpose. Imaging detectors, such as those used for radiography, were originally based on photographic plates and later photographic film, but have now mostly been replaced by various digital detector types such as image plates and flat panel detectors. For radiation protection, direct exposure hazard is often evaluated using ionization chambers, while dosimeters are used to measure the radiation dose a person has been exposed to. X-ray spectra can be measured either by energy-dispersive or wavelength-dispersive spectrometers. For X-ray diffraction applications, such as X-ray crystallography, hybrid photon counting detectors are widely used. Medical uses Since Röntgen's discovery that X-rays can identify bone structures, X-rays have been used for medical imaging. The first medical use came less than a month after his paper on the subject. Up to 2010, five billion medical imaging examinations had been conducted worldwide. Radiation exposure from medical imaging in 2006 made up about 50% of total ionizing radiation exposure in the United States. Projectional radiographs Projectional radiography is the practice of producing two-dimensional images using X-ray radiation. Bones contain a high concentration of calcium, which, due to its relatively high atomic number, absorbs X-rays efficiently. This reduces the amount of X-rays reaching the detector in the shadow of the bones, making them clearly visible on the radiograph. The lungs and trapped gas also show up clearly because of lower absorption compared to tissue, while differences between tissue types are harder to see.
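The Z³/E³ rule quoted under Photoelectric absorption is enough to make the bone–tissue contrast just described quantitative, at least to first order. Below is a minimal Python sketch; the effective atomic numbers are commonly quoted textbook approximations assumed here for illustration, not values from this article:

# Rule-of-thumb photoelectric absorption per unit mass: ~ Z^3 / E^3.
Z_EFF_BONE, Z_EFF_SOFT_TISSUE = 13.8, 7.4  # assumed effective atomic numbers

def photoabsorption_weight(z_eff: float, energy_keV: float) -> float:
    # Proportionality only; absolute units cancel in the ratios below.
    return z_eff ** 3 / energy_keV ** 3

ratio = (photoabsorption_weight(Z_EFF_BONE, 30.0)
         / photoabsorption_weight(Z_EFF_SOFT_TISSUE, 30.0))
print(f"bone absorbs ~{ratio:.1f}x more per unit mass")  # ~6.5x at any energy

# The 1/E^3 factor is why lowering the photon energy raises absorption:
print(photoabsorption_weight(7.4, 15.0) / photoabsorption_weight(7.4, 30.0))  # 8.0

The ratio of roughly 6.5 (before bone's higher density, which raises the contrast further) is what makes the skeleton stand out, and the factor-of-eight gain from halving the photon energy is why beam energy must be matched to the body part being imaged.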
Projectional radiographs are useful in the detection of pathology of the skeletal system as well as for detecting some disease processes in soft tissue. Some notable examples are the very common chest X-ray, which can be used to identify lung diseases such as pneumonia, lung cancer, or pulmonary edema, and the abdominal X-ray, which can detect bowel (or intestinal) obstruction, free air (from visceral perforations), and free fluid (in ascites). X-rays may also be used to detect pathology such as gallstones (which are rarely radiopaque) or kidney stones, which are often (but not always) visible. Traditional plain X-rays are less useful in the imaging of soft tissues such as the brain or muscle. One area where projectional radiographs are used extensively is in evaluating how an orthopedic implant, such as a knee, hip or shoulder replacement, is situated in the body with respect to the surrounding bone. This can be assessed in two dimensions from plain radiographs, or it can be assessed in three dimensions if a technique called '2D to 3D registration' is used. This technique purportedly negates projection errors associated with evaluating implant position from plain radiographs. Dental radiography is commonly used in the diagnosis of common oral problems, such as cavities. In medical diagnostic applications, the low-energy (soft) X-rays are unwanted, since they are totally absorbed by the body, increasing the radiation dose without contributing to the image. Hence, a thin metal sheet, often of aluminium, called an X-ray filter, is usually placed over the window of the X-ray tube, absorbing the low-energy part of the spectrum. This is called hardening the beam, since it shifts the center of the spectrum towards higher-energy (or harder) X-rays. To generate an image of the cardiovascular system, including the arteries and veins (angiography), an initial image is taken of the anatomical region of interest. A second image is then taken of the same region after an iodinated contrast agent has been injected into the blood vessels within this area. These two images are then digitally subtracted, leaving an image of only the iodinated contrast outlining the blood vessels. The radiologist or surgeon then compares the image obtained to normal anatomical images to determine whether there is any damage or blockage of the vessel. Computed tomography Computed tomography (CT scanning) is a medical imaging modality where tomographic images or slices of specific areas of the body are obtained from a large series of two-dimensional X-ray images taken in different directions. These cross-sectional images can be combined into a three-dimensional image of the inside of the body. CT scans are a quicker and more cost-effective imaging modality that can be used for diagnostic and therapeutic purposes in various medical disciplines. Fluoroscopy Fluoroscopy is an imaging technique commonly used by physicians or radiation therapists to obtain real-time moving images of the internal structures of a patient through the use of a fluoroscope. In its simplest form, a fluoroscope consists of an X-ray source and a fluorescent screen, between which a patient is placed. However, modern fluoroscopes couple the screen to an X-ray image intensifier and a CCD video camera, allowing the images to be recorded and played on a monitor. This method may use a contrast material.
Examples include cardiac catheterization (to examine for coronary artery blockages), embolization procedures (to stop bleeding, as during hemorrhoidal artery embolization), and the barium swallow (to examine for esophageal disorders and swallowing disorders). Modern fluoroscopy increasingly uses short pulses of X-rays rather than a continuous beam, which effectively lowers radiation exposure for both the patient and the operator. Radiotherapy The use of X-rays as a treatment is known as radiation therapy and is largely used for the management (including palliation) of cancer; it requires higher radiation doses than those received for imaging alone. Lower-energy X-ray beams are used for treating skin cancers, while higher-energy beams are used for treating cancers within the body, such as those of the brain, lung, prostate, and breast. Adverse effects X-rays are a form of ionizing radiation and are classified as a carcinogen by both the World Health Organization's International Agency for Research on Cancer and the U.S. government. Diagnostic X-rays (primarily from CT scans, due to the large dose used) increase the risk of developmental problems and cancer in those exposed. It is estimated that 0.4% of current cancers in the United States are due to computed tomography (CT scans) performed in the past, and that this may increase to as high as 1.5–2% with 2007 rates of CT usage. Experimental and epidemiological data currently do not support the proposition that there is a threshold dose of radiation below which there is no increased risk of cancer; however, this is under increasing doubt, and cancer risk may begin at exposures of around 1,100 mGy. It is estimated that the additional radiation from diagnostic X-rays will increase the average person's cumulative risk of getting cancer by age 75 by 0.6–3.0%. The amount of absorbed radiation depends upon the type of X-ray test and the body part involved. CT and fluoroscopy entail higher doses of radiation than do plain X-rays. To place the increased risk in perspective, a plain chest X-ray exposes a person to the same amount of radiation that people receive from background sources (depending upon location) over 10 days, while exposure from a dental X-ray is approximately equivalent to 1 day of environmental background radiation. Each such X-ray would add less than 1 per 1,000,000 to the lifetime cancer risk. An abdominal or chest CT would be the equivalent of 2–3 years of background radiation to the whole body, or 4–5 years to the abdomen or chest, increasing the lifetime cancer risk by between 1 per 1,000 and 1 per 10,000. This is compared to the roughly 40% chance of a US citizen developing cancer during their lifetime. For instance, the effective dose to the torso from a CT scan of the chest is about 5 mSv, and the absorbed dose is about 14 mGy. A head CT scan (1.5 mSv, 64 mGy) that is performed once with and once without contrast agent would be equivalent to 40 years of background radiation to the head. Accurate estimation of effective doses due to CT is difficult, with an estimation uncertainty range of about ±19% to ±32% for adult head scans, depending upon the method used. The risk of radiation is greater to a fetus, so in pregnant patients the benefits of the investigation (X-ray) should be balanced against the potential hazards to the fetus. Even a single scan during the nine months of pregnancy can be harmful to the fetus, so pregnant women are usually given ultrasounds as their diagnostic imaging instead, because ultrasound does not use radiation.
Excessive radiation exposure could also have harmful effects on the fetus or on the reproductive organs of the mother. In the US, there are an estimated 62 million CT scans performed annually, including more than 4 million on children. Avoiding unnecessary X-rays (especially CT scans) reduces radiation dose and any associated cancer risk. Medical X-rays are a significant source of human-made radiation exposure. In 1987, they accounted for 58% of exposure from human-made sources in the United States. Since human-made sources accounted for only 18% of the total radiation exposure, most of which came from natural sources (82%), medical X-rays accounted for only 10% of total American radiation exposure; medical procedures as a whole (including nuclear medicine) accounted for 14% of total radiation exposure. By 2006, however, medical procedures in the United States were contributing much more ionizing radiation than was the case in the early 1980s. In 2006, medical exposure constituted nearly half of the total radiation exposure of the U.S. population from all sources. The increase is traceable to the growth in the use of medical imaging procedures, in particular computed tomography (CT), and to the growth in the use of nuclear medicine. Dosage due to dental X-rays varies significantly depending on the procedure and the technology (film or digital): a single dental X-ray results in an exposure of 5 to 40 μSv, and a full-mouth series of X-rays may result in an exposure of up to 60 (digital) to 180 (film) μSv, for a yearly average of up to 400 μSv. Financial incentives have been shown to have a significant impact on X-ray use, with doctors who are paid a separate fee for each X-ray providing more X-rays. Early photon tomography (EPT), along with other techniques, is (as of 2015) being researched as a potential alternative to X-rays for imaging applications. Other uses Other notable uses of X-rays include: X-ray crystallography, in which the pattern produced by the diffraction of X-rays through the closely spaced lattice of atoms in a crystal is recorded and then analysed to reveal the nature of that lattice. A related technique, fiber diffraction, was used by Rosalind Franklin to discover the double helical structure of DNA. X-ray astronomy, an observational branch of astronomy which deals with the study of X-ray emission from celestial objects. X-ray microscopic analysis, which uses electromagnetic radiation in the soft X-ray band to produce images of very small objects. X-ray fluorescence, a technique in which X-rays are generated within a specimen and detected. The outgoing energy of the X-ray can be used to identify the composition of the sample. Industrial radiography, which uses X-rays for inspection of industrial parts, particularly welds. Radiography of cultural objects, most often X-rays of paintings to reveal underdrawing, pentimenti (alterations in the course of painting or by later restorers), and sometimes previous paintings on the support. Many pigments, such as lead white, show well in radiographs. X-ray spectromicroscopy has been used to analyse the reactions of pigments in paintings, for example in analysing colour degradation in the paintings of van Gogh. Authentication and quality control of packaged items. Industrial CT (computed tomography), a process that uses X-ray equipment to produce three-dimensional representations of components both externally and internally.
This is accomplished through computer processing of projection images of the scanned object in many directions. Airport security luggage scanners use X-rays for inspecting the interior of luggage for security threats before loading on aircraft. Truck scanners, used by border agencies and domestic police departments, use X-rays for inspecting the interior of trucks. X-ray art and fine art photography, artistic use of X-rays, for example the works by Stane Jagodič. X-ray hair removal, a method popular in the 1920s but now banned by the FDA. Shoe-fitting fluoroscopes were popularized in the 1920s, banned in the US in the 1960s, in the UK in the 1970s, and later in continental Europe. Roentgen stereophotogrammetry is used to track the movement of bones based on the implantation of markers. X-ray photoelectron spectroscopy is a chemical analysis technique relying on the photoelectric effect, usually employed in surface science. Radiation implosion is the use of high-energy X-rays generated from a fission explosion (an A-bomb) to compress nuclear fuel to the point of fusion ignition (an H-bomb). Visibility While generally considered invisible to the human eye, in special circumstances X-rays can be visible. Brandes, in an experiment a short time after Röntgen's landmark 1895 paper, reported that after dark adaptation and placing his eye close to an X-ray tube, he saw a faint "blue-gray" glow which seemed to originate within the eye itself. Upon hearing this, Röntgen reviewed his record books and found he too had seen the effect. When placing an X-ray tube on the opposite side of a wooden door, Röntgen had noted the same blue glow, seeming to emanate from the eye itself, but thought his observations to be spurious because he only saw the effect when he used one type of tube. Later he realized that the tube which had created the effect was the only one powerful enough to make the glow plainly visible, and the experiment was thereafter readily repeatable. The knowledge that X-rays are actually faintly visible to the dark-adapted naked eye has largely been forgotten today; this is probably due to the desire not to repeat what would now be seen as a recklessly dangerous and potentially harmful experiment with ionizing radiation. It is not known what exact mechanism in the eye produces the visibility: it could be due to conventional detection (excitation of rhodopsin molecules in the retina), direct excitation of retinal nerve cells, or secondary detection via, for instance, X-ray induction of phosphorescence in the eyeball with conventional retinal detection of the secondarily produced visible light. Though X-rays are otherwise invisible, it is possible to see the ionization of the air molecules if the intensity of the X-ray beam is high enough. The beamline from the wiggler at the European Synchrotron Radiation Facility is one example of such high intensity. Units of measure and exposure The measure of X-rays' ionizing ability is called the exposure: The coulomb per kilogram (C/kg) is the SI unit of ionizing radiation exposure, and it is the amount of radiation required to create one coulomb of charge of each polarity in one kilogram of matter. The roentgen (R) is an obsolete traditional unit of exposure, which represented the amount of radiation required to create one electrostatic unit of charge of each polarity in one cubic centimeter of dry air. 1 roentgen = 2.58×10⁻⁴ C/kg.
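Since the legacy and SI units in this section differ only by fixed factors, the conversions reduce to one-line multiplications. Below is a minimal Python sketch (the rad and rem factors anticipate the absorbed-dose and equivalent-dose units defined in the next paragraphs):

# Fixed conversion factors between legacy and SI radiation units.
R_TO_C_PER_KG = 2.58e-4  # exposure: 1 roentgen = 2.58e-4 C/kg (by definition)
RAD_TO_GY = 0.01         # absorbed dose: 100 rad = 1 gray
REM_TO_SV = 0.01         # equivalent dose: 100 rem = 1 sievert

def roentgen_to_c_per_kg(roentgen: float) -> float:
    return roentgen * R_TO_C_PER_KG

def rad_to_gray(rad: float) -> float:
    return rad * RAD_TO_GY

def rem_to_sievert(rem: float) -> float:
    return rem * REM_TO_SV

print(roentgen_to_c_per_kg(1.0))                   # 0.000258 C/kg
print(rad_to_gray(100.0), rem_to_sievert(100.0))   # 1.0 Gy, 1.0 Sv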
However, the effect of ionizing radiation on matter (especially living tissue) is more closely related to the amount of energy deposited into it than to the charge generated. This measure of energy absorbed is called the absorbed dose: The gray (Gy), which has units of joules per kilogram (J/kg), is the SI unit of absorbed dose, and it is the amount of radiation required to deposit one joule of energy in one kilogram of any kind of matter. The rad is the (obsolete) corresponding traditional unit, equal to 10 millijoules of energy deposited per kilogram. 100 rad = 1 gray. The equivalent dose is the measure of the biological effect of radiation on human tissue. For X-rays it is equal to the absorbed dose. The Roentgen equivalent man (rem) is the traditional unit of equivalent dose. For X-rays it is equal to the rad or, in other words, 10 millijoules of energy deposited per kilogram. 100 rem = 1 Sv. The sievert (Sv) is the SI unit of equivalent dose, and also of effective dose. For X-rays the equivalent dose in sieverts is numerically equal to the absorbed dose in grays (1 Sv = 1 Gy); the effective dose of X-rays, however, is usually not numerically equal to the gray. See also References External links Röntgen's discovery of X-rays (PDF; English translation) Oakley, P. A., Ehsani, N., & Harrison, D. E. (2020). 5 Reasons Why Scoliosis X-Rays Are Not Harmful. Dose-Response. https://doi.org/10.1177/1559325820957797 1895 in Germany 1895 in science Electromagnetic spectrum IARC Group 1 carcinogens Ionizing radiation Medical physics Radiography Wilhelm Röntgen
X-ray
Physics
10,597
20,700,895
https://en.wikipedia.org/wiki/Resistance%20distance%20%28mechanics%29
Mechanics
Resistance distance (mechanics)
Physics,Engineering
3
41,018
https://en.wikipedia.org/wiki/Demand%20load
In telecommunications, the term demand load can have the following meanings: (1) In general, the total power required by a facility. The demand load is the sum of the operational load (including any tactical load) and nonoperational demand loads. It is determined by applying the proper demand factor to each of the connected loads and a diversity factor to the sum total, as illustrated in the sketch below. (2) At a communications center, the power required by all automatic switching, synchronous, and terminal equipment (operated simultaneously on-line or in standby), control and keying equipment, plus the lighting, ventilation, and air-conditioning equipment required to maintain full continuity of communications. (3) The power required for ventilating equipment, shop lighting, and other support items that may be operated simultaneously with the technical load. (4) The sum of the technical demand and nontechnical demand loads of an operating facility. References Telecommunications engineering
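The calculation in the first definition can be made concrete with a short Python sketch. All load names and factor values below are invented for illustration, and the convention assumed is a diversity factor of at least 1 that divides the factored sum (conventions vary between references):

# Hypothetical connected loads (kW) and their demand factors.
connected_loads_kw = {"switching": 40.0, "terminal_equipment": 25.0,
                      "lighting": 10.0, "air_conditioning": 30.0}
demand_factors = {"switching": 0.9, "terminal_equipment": 0.8,
                  "lighting": 1.0, "air_conditioning": 0.7}

def demand_load_kw(loads, factors, diversity_factor):
    # Apply each connected load's demand factor, then apply the
    # diversity factor to the sum total, as the definition describes.
    subtotal = sum(kw * factors[name] for name, kw in loads.items())
    return subtotal / diversity_factor  # assumed convention: factor >= 1

print(f"{demand_load_kw(connected_loads_kw, demand_factors, 1.15):.1f} kW")  # 75.7 kW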
Demand load
Engineering
177
36,172,654
https://en.wikipedia.org/wiki/Uranium%20hexachloride
Uranium hexachloride (UCl₆) is an inorganic chemical compound of uranium in the +6 oxidation state. UCl₆ is a metal halide composed of uranium and chlorine. It is a multi-luminescent dark green crystalline solid with a vapor pressure between 1 and 3 mmHg at 373.15 K. UCl₆ is stable in a vacuum, dry air, nitrogen, and helium at room temperature. It is soluble in carbon tetrachloride (CCl₄). Compared to the other uranium halides, little is known about UCl₆. Structure and bonding Uranium hexachloride has an octahedral geometry, with point group Oh. Its lattice (dimensions: 10.95 ± 0.02 Å x 6.03 ± 0.01 Å) is hexagonal in shape with three molecules per cell; the average theoretical U-Cl bond is 2.472 Å long (the experimental U-Cl length found by X-ray diffraction is 2.42 Å), and the distance between two adjacent chlorine atoms is 3.65 Å. Chemical properties Uranium hexachloride is a highly hygroscopic compound and decomposes readily when exposed to ordinary atmospheric conditions; it should therefore be handled in either a vacuum apparatus or a dry box. Thermal decomposition UCl₆ is stable up to temperatures between 120 °C and 150 °C. Decomposition of the solid is accompanied by a phase transition from one crystal form to another, more stable form, while decomposition of the gaseous compound yields a lower uranium chloride. The activation energy for this reaction is about 40 kcal per mole. Solubility UCl₆ is not a very soluble compound. It dissolves to give a brown solution, and it is slightly soluble in isobutyl bromide and in fluorocarbon solvents. Reaction with hydrogen fluoride Reacting UCl₆ with purified anhydrous liquid hydrogen fluoride (HF) at room temperature converts it to the corresponding uranium fluoride. Synthesis Uranium hexachloride can be synthesized from the reaction of uranium trioxide (UO₃) with a mixture of a liquid chlorinating agent and hot chlorine (Cl₂); the yield can be increased if the reaction is carried out in the presence of an excess of the chlorinating agent. The UO₃ is first converted to a lower uranium chloride, which in turn reacts with the excess chlorine to form UCl₆. The reaction requires a substantial amount of heat to take place; the temperature range is from 65 °C to 170 °C depending on the amount of reactant (ideal temperature 100–125 °C). The reaction is carried out in a closed gas-tight vessel (for example a glovebox) that can withstand the pressure that builds up. This metal hexahalide can also be synthesized by blowing chlorine gas over a sublimed lower uranium chloride at 350 °C. References Uranium(VI) compounds Chlorides Actinide halides
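The lattice data quoted under Structure and bonding are enough for a back-of-the-envelope density estimate. The Python sketch below assumes the stated hexagonal cell (a = 10.95 Å, c = 6.03 Å) with three formula units per cell; the cell-volume formula is the standard one for hexagonal lattices:

import math

AVOGADRO = 6.02214076e23                   # mol^-1
MOLAR_MASS_UCL6 = 238.02891 + 6 * 35.453   # g/mol; U + 6 Cl, ~450.7

a_cm, c_cm = 10.95e-8, 6.03e-8             # 1 angstrom = 1e-8 cm
# Hexagonal cell volume: V = a^2 * c * sin(120 deg)
cell_volume_cm3 = a_cm**2 * c_cm * math.sin(math.radians(120))

z = 3                                      # molecules per cell, as stated
density_g_cm3 = z * MOLAR_MASS_UCL6 / (AVOGADRO * cell_volume_cm3)
print(f"~{density_g_cm3:.2f} g/cm^3")      # roughly 3.6 g/cm^3

A density of roughly 3.6 g/cm³ is the kind of value expected for a heavy actinide halide, which serves as a sanity check on the quoted cell dimensions.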
Uranium hexachloride
Chemistry
570
19,385,671
https://en.wikipedia.org/wiki/Rectum
The rectum (plural: rectums or recta) is the final straight portion of the large intestine in humans and some other mammals, and the gut in others. Before expulsion through the anus or cloaca, the rectum stores the feces temporarily. The adult human rectum is about 12 cm (4.7 in) long, and begins at the rectosigmoid junction (the end of the sigmoid colon) at the level of the third sacral vertebra or the sacral promontory, depending upon what definition is used. Its diameter is similar to that of the sigmoid colon at its commencement, but it is dilated near its termination, forming the rectal ampulla. It terminates at the level of the anorectal ring (the level of the puborectalis sling) or the dentate line, again depending upon which definition is used. In humans, the rectum is followed by the anal canal, which is about 4 cm (1.6 in) long, before the gastrointestinal tract terminates at the anal verge. The word rectum comes from the Latin rēctum intestīnum, meaning straight intestine. Structure The human rectum is a part of the lower gastrointestinal tract. The rectum is a continuation of the sigmoid colon, and connects to the anus. The rectum follows the shape of the sacrum and ends in an expanded section called an ampulla, where feces is stored before its release via the anal canal. An ampulla (Latin for "flask") is a cavity, or the dilated end of a duct, shaped like a Roman ampulla. The rectum joins with the sigmoid colon at the level of S3, and joins with the anal canal as it passes through the pelvic floor muscles. Unlike other portions of the colon, the rectum does not have distinct taeniae coli. The taeniae blend with one another in the sigmoid colon five centimeters above the rectum, becoming a singular longitudinal muscle that surrounds the rectum on all sides for its entire length. Blood supply and drainage The blood supply of the rectum changes between the top and bottom portions. The top two thirds is supplied by the superior rectal artery. The lower third is supplied by the middle and inferior rectal arteries. The superior rectal artery is a single artery that is a continuation of the inferior mesenteric artery, beginning where that artery crosses the pelvic brim. It enters the mesorectum at the level of S3, and then splits into two branches, which run at the lateral back part of the rectum, and then the sides of the rectum. These then end in branches in the submucosa, which join (anastomose) with branches of the middle and inferior rectal arteries. Microanatomy The microanatomy of the wall of the rectum is similar to the rest of the gastrointestinal tract; namely, it possesses a mucosa with a lining of a single layer of column-shaped cells with mucus-secreting goblet cells interspersed, resting on a lamina propria, with a layer of smooth muscle called the muscularis mucosa. This sits on an underlying submucosa of connective tissue, surrounded by a muscularis propria of two bands of muscle, an inner circular band and an outer longitudinal one. There is a higher concentration of goblet cells in the rectal mucosa than in other parts of the gastrointestinal tract. The lining of the rectum changes sharply at the line where the rectum meets the anus. Here, the lining changes from the column-shaped cells of the rectum to multiple layers of flat cells. Function The rectum acts as a temporary storage site for feces. The rectum receives fecal material from the descending colon, transmitted through regular muscle contractions called peristalsis.
As the rectal walls expand due to the materials filling them from within, stretch receptors from the nervous system located in the rectal walls stimulate the desire to pass feces, a process called defecation. The internal and external anal sphincters, together with the resting contraction of the puborectalis, prevent leakage of feces (fecal incontinence). As the rectum becomes more distended, the sphincters relax and a reflex expulsion of the contents of the rectum occurs. Expulsion occurs through contractions of the muscles of the rectum. The urge to voluntarily defecate occurs after rectal pressure increases beyond 18 mmHg; reflex expulsion occurs at 55 mmHg. In voluntary defecation, in addition to contraction of the rectal muscles and relaxation of the external anal sphincter, contraction of the abdominal muscles and relaxation of the puborectalis muscle occur. This acts to make the angle between the rectum and anus straighter, and facilitates defecation. Clinical significance Examination For the diagnosis of certain ailments, a rectal exam may be done; conditions assessed this way include faecal impaction, prostatic cancer and benign prostatic hypertrophy in men, faecal incontinence, and internal haemorrhoids. Forms of medical imaging used to examine the rectum include CT scans and MRI scans. An ultrasound probe may be inserted into the rectum to view nearby structures such as the prostate. Colonoscopy and sigmoidoscopy are forms of endoscopy that use a guided camera to directly view the rectum. The instruments may have the ability to take biopsies if needed, for the diagnosis of diseases such as cancer. A proctoscope is another instrument used to visualise the rectum. Body temperature can also be taken in the rectum. Rectal temperature can be taken by inserting a medical thermometer a short distance into the rectum via the anus. A mercury thermometer should be inserted for 3 to 5 minutes; a digital thermometer should remain inserted until it beeps. Normal rectal temperature is slightly higher than oral (mouth) temperature, which in turn is slightly higher than axillary (armpit) temperature. The availability of less invasive temperature-taking methods, including tympanic (ear) and forehead thermometers, has reduced the use of this method. Route of administration Some medications are also administered via the rectum. By definition, suppositories are inserted, and enemas are injected, into the rectum. Medications might be given via the rectum to relieve constipation, to treat conditions near the rectum, such as fissures or haemorrhoids, or to give medications that are systemically active when taking them by mouth is not possible. People tend to dislike medications administered by this route because of cultural issues and discomfort, as well as issues that may affect how well the medication works, such as leakage. Constipation One cause of constipation is faecal impaction in the rectum, in which a dry, hard stool forms. Constipation is most commonly due to dietary and lifestyle factors such as inadequate hydration, immobility, and lack of dietary fibre, although there are many potential causes. Such causes may include obstruction because of narrowing, local disease (such as Crohn's disease, fissures or haemorrhoids), diseases affecting the neurological control of the bowel or slowing bowel transit time (including spinal cord injury and multiple sclerosis), use of medications such as opioids, and conditions such as diabetes mellitus, as well as severe illness. High calcium levels and low thyroid activity may also cause constipation.
Testing may be carried out to investigate the cause. This may include blood tests such as biochemistry, calcium levels, and thyroid function tests. A digital rectal examination may be performed to see if there is stool in the rectum, and whether there is an obstruction. When symptoms such as weight loss, bleeding through the rectum, or pain are present, additional investigations such as a CT scan may be ordered. If constipation persists despite simple treatments, testing may also include anal manometry to measure pressures in the anus and rectum, electrophysiological studies, and magnetic resonance proctography. In general, however, constipation is treated by improving factors such as hydration, exercise, and dietary fibre. Laxatives may be used. Constipation that persists may require enemas or suppositories. Sometimes, the use of the fingers or hand (manual evacuation) is required. Although peristalsis in the colon delivers material to the rectum, laxatives such as bisacodyl or senna that induce peristalsis in the large bowel do not appear to initiate peristalsis in the rectum. They induce a sensation of rectal fullness and contraction that frequently leads to defecation, but without the distinct waves of activity characteristic of peristalsis. Inflammation Proctitis is inflammation of the anus and the rectum. Ulcerative colitis is a form of inflammatory bowel disease that causes ulcers affecting the rectum; it may be episodic over a person's lifetime and may cause blood to be visible in the stool. The cause is unknown. Cancer Rectal cancer is a subgroup of colorectal cancer specific to the rectum. Other diseases Other diseases of the rectum include: Rectal prolapse, referring to the prolapse of the rectum into the anus or external area. This is commonly caused by a weakened pelvic floor after childbirth. In the context of mesenteric ischemia, the upper rectum is sometimes referred to as Sudeck's point and is of clinical importance as a watershed region between the inferior mesenteric artery circulation and the internal iliac artery circulation via the middle rectal artery, and is thus prone to ischemia. Sudeck's point is often referred to along with Griffith's point at the splenic flexure as a watershed region. Society and culture Sexual stimulation Due to the proximity of the anterior wall of the rectum to the vagina in females or to the prostate in males, and the shared nerves thereof, the rectum is an erogenous zone, and its stimulation or penetration can result in sexual arousal. History Etymology English rectum is derived from the Latin intestinum rectum 'straight gut', a calque of Ancient Greek ἀπευθυσμένον ἔντερον, derived from ἀπευθύνειν, 'to make straight', and ἔντερον, 'gut', attested in the writings of the Greek physician Galen. During his anatomic investigations on animal corpses, Galen observed the rectum to be straight, whereas in humans it is curved. The expressions ἀπευθυσμένον ἔντερον and intestinum rectum are therefore not appropriate descriptions of the rectum in humans. Apeuthysmenon is the Latinization of ἀπευθυσμένον, and euthyenteron has a similar meaning (εὐθύς, 'straight'). Much of the knowledge of the anatomy of the rectum comes from detailed descriptions provided by Andreas Vesalius in 1543. See also Gastrointestinal tract Murphy drip Pectinate line Rectal prolapse Rectal thermometry References Sources External links Digestive system Anatomical terminology
Rectum
Biology
2,380
33,676,217
https://en.wikipedia.org/wiki/Sensory-motor%20coupling
Sensory-motor coupling is the coupling or integration of the sensory system and motor system. Sensorimotor integration is not a static process. For a given stimulus, there is no single motor command: "Neural responses at almost every stage of a sensorimotor pathway are modified at short and long timescales by biophysical and synaptic processes, recurrent and feedback connections, and learning, as well as many other internal and external variables". Overview The integration of the sensory and motor systems allows an animal to take sensory information and use it to make useful motor actions. Additionally, outputs from the motor system can be used to modify the sensory system's response to future stimuli. To be useful, sensory-motor integration must be a flexible process, because the properties of the world and of the body change over time. Flexible sensorimotor integration allows an animal to correct for errors and remains useful in multiple situations. To produce the desired flexibility, it is probable that nervous systems employ internal models and efference copies. Transform sensory coordinates to motor coordinates Prior to movement, an animal's current sensory state is used to generate a motor command. To generate a motor command, first, the current sensory state is compared to the desired or target state. Then, the nervous system transforms the sensory coordinates into the motor system's coordinates, and the motor system generates the necessary commands to move the muscles so that the target state is reached. Efference copy An important aspect of sensorimotor integration is the efference copy. The efference copy is a copy of a motor command that is used in internal models to predict what the new sensory state will be after the motor command has been completed. The efference copy can be used by the nervous system to distinguish self-generated environmental changes, compare an expected response to what actually occurs in the environment, and increase the rate at which a command can be issued by predicting an organism's state prior to receiving sensory input. Internal model An internal model is a theoretical model used by a nervous system to predict the environmental changes that result from a motor action. The assumption is that the nervous system has an internal representation of how a motor apparatus, the part of the body that will be moved, behaves in an environment. Internal models can be classified as either forward models or inverse models. Forward model A forward model is a model used by the nervous system to predict the new state of the motor apparatus and the sensory stimuli that result from a motion. The forward model takes the efference copy as an input and outputs the expected sensory changes. Forward models offer several advantages to an organism. Advantages: The estimated future state can be used to coordinate movement before sensory feedback is returned. The output of a forward model can be used to differentiate between self-generated stimuli and non-self-generated stimuli. The estimated sensory feedback can be used to alter an animal's perception related to self-generated motion. The difference between the expected sensory state and the sensory feedback can be used to correct errors in movement and in the model itself. Inverse model An inverse model behaves in the opposite way to a forward model.
Inverse models are used by nervous systems to estimate either the motor command that caused a change in sensory information or the motor command that will reach the target state. Examples Gaze stabilization During flight, it is important for a fly to maintain a level gaze; however, it is possible for a fly to rotate. The rotation is detected visually as a rotation of the environment, termed optical flow. The optical flow input is then converted into a motor command to the fly's neck muscles so that the fly will maintain a level gaze. This reflex is diminished in a stationary fly compared to when it is flying or walking. Singing crickets Male crickets sing by rubbing their forewings together. The sounds produced are loud enough to reduce the cricket's auditory system's response to other sounds. This desensitization is caused by the hyperpolarization of the Omega 1 neuron (ON1), an auditory interneuron, due to activation by auditory stimulation. To reduce self-desensitization, the cricket's thoracic central pattern generator sends a corollary discharge, an efference copy that is used to inhibit an organism's response to self-generated stimuli, to the auditory system. The corollary discharge is used to inhibit the auditory system's response to the cricket's own song and prevent desensitization. This inhibition allows the cricket to remain responsive to external sounds such as a competing male's song. Speech Sensorimotor integration is involved in the development, production, and perception of speech. Speech development Two key elements of speech development are babbling and audition. The linking of a motor action to a heard sound is thought to be learned. One reason for this is that deaf infants do not babble canonically. Another is that an infant's perception is known to be affected by its babbling. One model of speech development proposes that the sounds produced by babbling are compared to the sounds of the language used around the infant, and that the association of a motor command to a sound is learned. Speech production Audition plays a critical role in the production and maintenance of speech. As an example, people who experience adult-onset deafness become less able to produce accurate speech. This decline occurs because they lack auditory feedback. Another example is the acquisition of a new accent as a result of living in an area with a different accent. These changes can be explained through the use of a forward model. In this forward model, the motor cortex sends a motor command to the vocal tract and an efference copy to the internal model of the vocal tract. The internal model predicts what sounds will be produced. This prediction is used to check that the motor command will produce the goal sound so that corrections may be made. The internal model's estimate is also compared to the produced sound to generate an error estimate. The error estimate is used to correct the internal model. The updated internal model will then be used to generate future motor commands. Speech perception Sensorimotor integration is not critical to the perception of speech; however, it does perform a modulatory function. This is supported by the fact that people who either have impaired speech production or lack the ability to speak are still capable of perceiving speech. Furthermore, experiments in which motor areas related to speech were stimulated altered, but did not prevent, the perception of speech. Patient R.W. Patient R.W.
was a man who suffered damage to his parietal and occipital lobes, areas of the brain related to processing visual information, due to a stroke. As a result of his stroke, he experienced vertigo when he tried to track a moving object with his eyes. The vertigo was caused by his brain interpreting the world as moving. In normal people, the world is not perceived as moving when tracking an object, despite the fact that the image of the world moves across the retina as the eye moves. The reason for this is that the brain predicts the movement of the world across the retina as a consequence of moving the eyes. R.W., however, was unable to make this prediction. Disorders Parkinson's Patients with Parkinson's disease often show symptoms of bradykinesia and hypometria. These patients are more dependent on external cues than on proprioception and kinesthesia when compared to other people. In fact, studies using external vibrations to create proprioceptive errors in movement show that Parkinson's patients perform better than healthy people. Patients have also been shown to underestimate the movement of a limb when it was moved by researchers. Additionally, studies on somatosensory evoked potentials have provided evidence that the motor problems are likely related to an inability to properly process sensory information, not to a problem in generating that information. Huntington's Huntington's patients often have trouble with motor control. In both quinolinic models and patients, it has been shown that people with Huntington's have abnormal sensory input. Additionally, patients have been shown to have a decrease in the inhibition of the startle reflex. This decrease indicates a problem with proper sensorimotor integration. The "various problems in integrating sensory information explain why patients with HD are unable to control voluntary movements accurately." Dystonia Dystonia is another motor disorder that presents sensorimotor integration abnormalities. There are multiple pieces of evidence indicating that focal dystonia is related to improper linking or processing of afferent sensory information in the motor regions of the brain. For example, dystonia can be partially relieved through the use of a sensory trick. A sensory trick is the application of a stimulus to an area near the location affected by dystonia that provides relief. Positron emission tomography studies have shown that the activity in both the supplementary motor area and the primary motor cortex is reduced by the sensory trick. More research is necessary on sensorimotor integration dysfunction as it relates to non-focal dystonia. Restless leg syndrome Restless leg syndrome (RLS) is a sensorimotor disorder. People with RLS are plagued by feelings of discomfort and an urge to move the legs. These symptoms occur most frequently at rest. Research has shown that the motor cortex has increased excitability in RLS patients compared to healthy people. Somatosensory evoked potentials from the stimulation of both the posterior tibial nerve and the median nerve are normal. The normal SEPs indicate that RLS is related to abnormal sensorimotor integration. In 2010, Vincenzo Rizzo et al. provided evidence that RLS sufferers have lower than normal short-latency afferent inhibition (SAI), the inhibition of the motor cortex by afferent sensory signals. The decrease in SAI indicates the presence of abnormal sensory-motor integration in RLS patients.
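The forward-model error loop described under Internal model, and used in the speech-production example, can be summarized in a few lines. The following Python sketch is deliberately minimal and not a model from the cited literature: the one-dimensional linear mapping and the delta-rule update are illustrative assumptions:

class ForwardModel:
    """Toy forward model: predicts the sensory consequence of a motor
    command from its efference copy, and learns from prediction errors."""

    def __init__(self, gain: float = 0.5):
        self.gain = gain  # assumed linear command-to-sensation mapping

    def predict(self, efference_copy: float) -> float:
        return self.gain * efference_copy  # expected sensory change

    def update(self, efference_copy: float, actual_feedback: float,
               lr: float = 0.1) -> float:
        error = actual_feedback - self.predict(efference_copy)
        self.gain += lr * error * efference_copy  # correct the internal model
        return error  # near-zero error = stimulus explained as self-generated

model = ForwardModel()
for _ in range(50):
    # Suppose the true mapping is sensation = 1.0 * command.
    err = model.update(efference_copy=1.0, actual_feedback=1.0)
print(f"learned gain ~ {model.gain:.2f}, residual error ~ {err:.3f}")

A residual error near zero is the signature of a self-generated stimulus, which is how an efference copy lets the nervous system separate the consequences of its own actions from external events.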
See also Motor control Motor learning Motor goal Motor coordination Multisensory integration Sensory processing References Motor cognition Neurology Motor control Nervous system Sensory systems
Sensory-motor coupling
Biology
2,039
75,391,593
https://en.wikipedia.org/wiki/Volenrelaxin
Volenrelaxin (LY3540378) is a long-acting, synthetic analogue of relaxin developed by Eli Lilly and Company to treat heart failure. References Drugs developed by Eli Lilly and Company
Volenrelaxin
Chemistry
45
17,570
https://en.wikipedia.org/wiki/Linear%20equation
In mathematics, a linear equation is an equation that may be put in the form a1x1 + … + anxn + b = 0, where x1, …, xn are the variables (or unknowns), and b, a1, …, an are the coefficients, which are often real numbers. The coefficients may be considered as parameters of the equation and may be arbitrary expressions, provided they do not contain any of the variables. To yield a meaningful equation, the coefficients a1, …, an are required to not all be zero. Alternatively, a linear equation can be obtained by equating to zero a linear polynomial over some field, from which the coefficients are taken. The solutions of such an equation are the values that, when substituted for the unknowns, make the equality true. In the case of just one variable, there is exactly one solution (provided that a1 ≠ 0). Often, the term linear equation refers implicitly to this particular case, in which the variable is sensibly called the unknown. In the case of two variables, each solution may be interpreted as the Cartesian coordinates of a point of the Euclidean plane. The solutions of a linear equation form a line in the Euclidean plane, and, conversely, every line can be viewed as the set of all solutions of a linear equation in two variables. This is the origin of the term linear for describing this type of equation. More generally, the solutions of a linear equation in n variables form a hyperplane (a subspace of dimension n − 1) in the Euclidean space of dimension n. Linear equations occur frequently in all mathematics and their applications in physics and engineering, partly because non-linear systems are often well approximated by linear equations. This article considers the case of a single equation with coefficients from the field of real numbers, for which one studies the real solutions. All of its content applies to complex solutions and, more generally, to linear equations with coefficients and solutions in any field. For the case of several simultaneous linear equations, see system of linear equations. One variable A linear equation in one variable x can be written as ax + b = 0 with a ≠ 0. The solution is x = −b/a. Two variables A linear equation in two variables x and y can be written as ax + by + c = 0, where a and b are not both 0. If a and b are real numbers, it has infinitely many solutions. Linear function If b ≠ 0, the equation ax + by + c = 0 is a linear equation in the single variable y for every value of x. It has therefore a unique solution for y, which is given by y = −(a/b)x − c/b. This defines a function. The graph of this function is a line with slope −a/b and y-intercept −c/b. The functions whose graph is a line are generally called linear functions in the context of calculus. However, in linear algebra, a linear function is a function that maps a sum to the sum of the images of the summands. So, for this definition, the above function is linear only when c = 0, that is when the line passes through the origin. To avoid confusion, the functions whose graph is an arbitrary line are often called affine functions, and the linear functions such that f(0) = 0 are often called linear maps. Geometric interpretation Each solution (x, y) of a linear equation ax + by + c = 0 may be viewed as the Cartesian coordinates of a point in the Euclidean plane. With this interpretation, all solutions of the equation form a line, provided that a and b are not both zero. Conversely, every line is the set of all solutions of a linear equation. The phrase "linear equation" takes its origin in this correspondence between lines and equations: a linear equation in two variables is an equation whose solutions form a line. If b ≠ 0, the line is the graph of the function of x that has been defined in the preceding section.
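A short numerical illustration of the one-variable solution and the slope–intercept relations above (the helper names are ours):

```python
def solve_one_variable(a, b):
    """Solve ax + b = 0, assuming a != 0."""
    if a == 0:
        raise ValueError("a must be nonzero for a linear equation")
    return -b / a

def slope_and_intercept(a, b, c):
    """Slope -a/b and y-intercept -c/b of ax + by + c = 0, assuming b != 0."""
    if b == 0:
        raise ValueError("vertical line; not the graph of a function of x")
    return -a / b, -c / b

print(solve_one_variable(2, -6))      # 3.0, since 2x - 6 = 0
print(slope_and_intercept(2, -1, 4))  # (2.0, 4.0): 2x - y + 4 = 0 is y = 2x + 4
```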
If b = 0, the line is a vertical line (that is a line parallel to the y-axis) of equation x = −c/a, which is not the graph of a function of x. Similarly, if a ≠ 0, the line is the graph of a function of y, and, if a = 0, one has a horizontal line of equation y = −c/b. Equation of a line There are various ways of defining a line. In the following subsections, a linear equation of the line is given in each case. Slope–intercept form or Gradient-intercept form A non-vertical line can be defined by its slope m, and its y-intercept y0 (the y coordinate of its intersection with the y-axis). In this case, its linear equation can be written y = mx + y0. If, moreover, the line is not horizontal, it can be defined by its slope and its x-intercept x0. In this case, its equation can be written y = m(x − x0), or, equivalently, y = mx − mx0. These forms rely on the habit of considering a nonvertical line as the graph of a function. For a line given by an equation ax + by + c = 0, these forms can be easily deduced from the relations m = −a/b, y0 = −c/b, x0 = −c/a. Point–slope form or Point-gradient form A non-vertical line can be defined by its slope m, and the coordinates (x1, y1) of any point of the line. In this case, a linear equation of the line is y − y1 = m(x − x1), or y = m(x − x1) + y1. This equation can also be written y − y1 = ((y2 − y1)/(x2 − x1))(x − x1) for emphasizing that the slope of a line can be computed from the coordinates of any two points. Intercept form A line that is not parallel to an axis and does not pass through the origin cuts the axes into two different points. The intercept values x0 and y0 of these two points are nonzero, and an equation of the line is x/x0 + y/y0 = 1. (It is easy to verify that the line defined by this equation has x0 and y0 as intercept values). Two-point form Given two different points (x1, y1) and (x2, y2), there is exactly one line that passes through them. There are several ways to write a linear equation of this line. If x1 ≠ x2, the slope of the line is (y2 − y1)/(x2 − x1). Thus, a point-slope form is y − y1 = ((y2 − y1)/(x2 − x1))(x − x1). By clearing denominators, one gets the equation (y2 − y1)(x − x1) − (x2 − x1)(y − y1) = 0, which is valid also when x1 = x2 (for verifying this, it suffices to verify that the two given points satisfy the equation). This form is not symmetric in the two given points, but a symmetric form can be obtained by regrouping the constant terms: (y2 − y1)x − (x2 − x1)y − (x1y2 − x2y1) = 0 (exchanging the two points changes the sign of the left-hand side of the equation). Determinant form The two-point form of the equation of a line can be expressed simply in terms of a determinant. There are two common ways for that. The equation (y2 − y1)(x − x1) − (x2 − x1)(y − y1) = 0 is the result of expanding the determinant in the equation | x − x1, y − y1 ; x2 − x1, y2 − y1 | = 0. The equation (y2 − y1)x − (x2 − x1)y − (x1y2 − x2y1) = 0 can be obtained by expanding with respect to its first row the determinant in the equation | x, y, 1 ; x1, y1, 1 ; x2, y2, 1 | = 0. Besides being very simple and mnemonic, this form has the advantage of being a special case of the more general equation of a hyperplane passing through n points in a space of dimension n − 1. These equations rely on the condition of linear dependence of points in a projective space. More than two variables A linear equation with more than two variables may always be assumed to have the form a1x1 + a2x2 + … + anxn + b = 0. The coefficient b, often denoted a0, is called the constant term (sometimes the absolute term in old books). Depending on the context, the term coefficient can be reserved for the ai with i > 0. When dealing with three variables, it is common to use x, y and z instead of indexed variables. A solution of such an equation is an n-tuple such that substituting each element of the tuple for the corresponding variable transforms the equation into a true equality. For an equation to be meaningful, the coefficient of at least one variable must be non-zero.
If every variable has a zero coefficient, then, as mentioned for one variable, the equation is either inconsistent (for b ≠ 0) as having no solution, or all n-tuples are solutions. The n-tuples that are solutions of a linear equation in x1, …, xn are the Cartesian coordinates of the points of an (n − 1)-dimensional hyperplane in a Euclidean space (or affine space if the coefficients are complex numbers or belong to any field) of dimension n. In the case of three variables, this hyperplane is a plane. If a linear equation is given with aj ≠ 0, then the equation can be solved for xj, yielding xj = −b/aj − Σ(i ≠ j) (ai/aj)xi. If the coefficients are real numbers, this defines a real-valued function of n − 1 real variables. See also Linear equation over a ring Algebraic equation Line coordinates Linear inequality Nonlinear equation Notes References External links Elementary algebra Equations
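The two-point and determinant forms given above can also be checked numerically. In this sketch (helper name ours), the line through two points is built from the symmetric two-point form, and the 3×3 determinant is verified to vanish for a third point on the line:

```python
import numpy as np

def line_through(p1, p2):
    """Coefficients (a, b, c) of ax + by + c = 0 through two points, from
    the symmetric form (y2 - y1)x - (x2 - x1)y - (x1*y2 - x2*y1) = 0."""
    (x1, y1), (x2, y2) = p1, p2
    return y2 - y1, -(x2 - x1), -(x1 * y2 - x2 * y1)

p1, p2 = (1.0, 2.0), (3.0, 8.0)
a, b, c = line_through(p1, p2)
for x, y in (p1, p2):
    assert abs(a * x + b * y + c) < 1e-12   # both points satisfy the equation

# Determinant form: the determinant is zero exactly when (x, y) lies on the
# line through the two points; the midpoint (2, 5) is such a point.
det = np.linalg.det(np.array([[2.0, 5.0, 1.0],
                              [1.0, 2.0, 1.0],
                              [3.0, 8.0, 1.0]]))
assert abs(det) < 1e-9
print(a, b, c)   # 6.0 -2.0 -2.0, i.e. 6x - 2y - 2 = 0, or y = 3x - 1
```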
Linear equation
Mathematics
1,561
45,631,247
https://en.wikipedia.org/wiki/Lentinula%20aciculospora
Lentinula aciculospora is a species of agaric fungus in the family Omphalotaceae. Described as new to science in 2001, it is known only from Costa Rica, where it grows on oak wood. Fruitbodies are similar in external appearance to other members of the genus Lentinula (including shiitake), but L. aciculospora can be distinguished from those species microscopically by its distinctive elongated, cylindrical spores. References External links Fungi described in 2001 Fungi of Central America Taxa named by Ron Petersen Fungus species
Lentinula aciculospora
Biology
115
6,601,335
https://en.wikipedia.org/wiki/Partial%20specific%20volume
The partial specific volume expresses the variation of the extensive volume of a mixture with respect to the composition of the masses. It is the partial derivative of volume with respect to the mass of the component of interest: V = Σi mi·v̄i, where v̄i is the partial specific volume of component i, defined as v̄i = (∂V/∂mi) at constant temperature, pressure and masses of the other components mj (j ≠ i). The PSV is usually measured in millilitres (mL) per gram (g); proteins > 30 kDa can be assumed to have a partial specific volume of 0.708 mL/g. Experimental determination is possible by measuring the natural frequency of a U-shaped tube filled successively with air, buffer and protein solution. Properties The weighted sum of partial specific volumes of a mixture or solution is the inverse of the density of the mixture, namely the specific volume of the mixture: 1/ρ = Σi wi·v̄i, where wi is the mass fraction of component i. See also Partial molar property Apparent molar property References Mass density
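A minimal numerical illustration of the density property above; the composition and the water value used here are assumed, illustrative numbers:

```python
# Density of a mixture from mass fractions and partial specific volumes,
# via 1/rho = sum_i w_i * v_i from the Properties section.

def mixture_density(mass_fractions, partial_specific_volumes):
    specific_volume = sum(w * v for w, v in
                          zip(mass_fractions, partial_specific_volumes))
    return 1.0 / specific_volume                  # g/mL

# 1% large protein (0.708 mL/g, as quoted above) in water (~1.002 mL/g)
print(round(mixture_density([0.01, 0.99], [0.708, 1.002]), 4))  # ~1.0009 g/mL
```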
Partial specific volume
Physics,Chemistry,Biology
163
56,197,671
https://en.wikipedia.org/wiki/Tolylfluanid
Tolylfluanid is an organic chemical compound that is used as an active ingredient in fungicides and wood preservatives. Synthesis The synthesis of tolylfluanid begins with the reaction of dimethylamine and sulfuryl chloride. The product further reacts with p-toluidine and dichlorofluoromethanesulfenyl chloride to yield the final product. Use Tolylfluanid is used on fruit and ornamental plants against gray mold (Botrytis), against late blight on tomatoes and against powdery mildew on cucumbers. Environmental behavior Tolylfluanid hydrolyzes slowly in acidic conditions. The half-life is shorter when the pH is high; at pH = 7, it is at least 2 days. In aerobic media (pH = 7.7-8.0), tolylfluanid hydrolytically and microbially decomposes to N,N-dimethyl-N-(4-methylphenyl) sulfamide (DMST) and dimethylsulfamide. After 14 days, tolylfluanid is generally considered to have degraded. The half-life of DMST is 50-70 days. Absorption, metabolism and excretion Tolylfluanid is rapidly and almost completely absorbed in the gastrointestinal tract. The highest concentrations are found in the blood, lungs, liver, kidneys, spleen and thyroid gland. 99% is excreted in the urine within two days, although there is some accumulation in the thyroid gland. References External links EPA Factsheet Tolyfluanid (PubChem @ NIH) Sulfamides Organochlorides Organofluorides Fungicides 4-Tolyl compounds
Tolylfluanid
Biology
377
4,248,537
https://en.wikipedia.org/wiki/Mumbai%20Refinery%20%28HPCL%29
The HPCL Mumbai refinery, one of the most complex refineries in the country, is constructed on an area of 321 acres. This versatile refinery, which is the first of India's modern refineries, symbolizes the country's industrial strength and progress in the oil industry. Mumbai Refinery has grown over the years as the main hub of petroleum products. The refinery has reached its present level through several upgradation and restructuring processes. History The Mumbai Refinery was commissioned by Esso Standard in 1954, with an installed capacity of 1.25 million tonnes per year. The lube refinery, Lube India Ltd, was commissioned in 1969 with a capacity of 165 thousand tonnes per year of Lube Oil Base Stock (LOBS) production. Crude processing capacity increased to 3.5 million tonnes per year during 1969. In 1974, the Government of India took over Esso and Lube India under the Esso (Acquisition of Undertakings in India) Act 1974 and formed HPCL. Expansion of the fuels block was carried out by installing new 2 million tonnes per year crude units in 1985. A second expansion of the Lube Refinery took place to increase its capacity to 335 thousand tonnes per year, so far the largest in India. The installed capacity of the refinery was later enhanced to 6.5 million tonnes per year. The current installed capacity of the refinery is 9.5 million tonnes per year. References Buildings and structures in Mumbai Oil refineries in India 1954 establishments in Bombay State Economy of Mumbai Energy in Maharashtra Hindustan Petroleum Hindustan Petroleum buildings and structures
Mumbai Refinery (HPCL)
Chemistry
318
18,268,930
https://en.wikipedia.org/wiki/AguaClara
AguaClara Cornell is an engineering-based project team within Cornell University's College of Engineering that designs sustainable water treatment plants using open source technology. The program's mission is to uphold and protect “the fundamental human right to access safe drinking water. We are committed to the ongoing development of resilient, gravity-powered drinking water and wastewater treatment technologies.” AguaClara plants are unique among municipal-scale facilities in that they have no electrical or complex mechanical components and instead operate through hydraulic processes driven by gravity. The AguaClara Cornell program provides undergraduate and graduate students the opportunity to enhance their education through hands-on experience working on projects with real applications. In 2012, the National Academy of Engineering showcased AguaClara as one of the 29 engineering programs at US colleges that effectively incorporate real-world experiences in their curriculum. In 2017, a non-profit organization, AguaClara Reach, was formed with the continued mission of bringing clean drinking water on tap to communities around the world. AguaClara Reach works with AguaClara Cornell to pilot the latest open-source innovations developed in the lab, while sharing lessons learned from the field to drive further research. In Honduras, the implementation partner is Agua Para el Pueblo (Water for the People), an NGO working in Honduras that manages the construction of, and technical support for, AguaClara plants. AguaClara Reach partners with Gram Vikas in India to build Hydrodosers. The Hydrodoser, an AguaClara technology, is a modular, easy-to-install unit that, on its own, can be used to dose chlorine to disinfect water that has no more than 5 NTU of turbidity, which is typical of well water. History AguaClara was formed in 2005 by Cornell University senior lecturer Monroe Weber-Shirk, who volunteered in Central American refugee camps during the 1980s. Weber-Shirk used the connections he developed through his volunteer work to partner with Jacabo Nuñez, the director of Agua para el Pueblo, to find the answer to a crucial question: What can we do to treat the dirty water that we are providing to rural communities? In 2005, he founded the AguaClara program to address the need for sustainable municipal-scale water treatment in resource-poor communities. The first AguaClara plant was built in 2006 in Ojojona to serve a population of 2000 people. Since 2005, Agua Para el Pueblo has commissioned eighteen drinking water treatment facilities implementing AguaClara technology across Honduras. Upon request of local communities in neighboring Nicaragua, an additional two facilities were commissioned in that country in 2017. In 2017, with the founding of AguaClara Reach, the project team appended Cornell to its name to distinguish it from its non-profit counterpart. Design tool AguaClara Cornell has developed an automated design tool that allows interested parties to input basic design parameters such as flow rate into a simple frontend and receive customized designs via email in five minutes or less. The user frontend communicates with the AguaClara server to populate MathCad scripts that calculate design parameters for input into AutoCAD scripts, which produce the final design. The design algorithms can be continuously improved and any changes will be immediately implemented the next time a design is requested.
The AguaClara design tool applies an economy of scale to water treatment design, in that there are almost no marginal costs to produce an additional design. This is significant considering that the World Health Organization estimates the global unmet demand for improved water at approximately 844 million people, including 100 million using surface water sources that would be viable for treatment with AguaClara technology. Plants AguaClara designs gravity-powered water treatment plants that require no electricity and are constructed by its implementation partners. The plants use hydraulic flocculators and high-flow vertical-flow sedimentation tanks to remove turbidity from surface waters. La 34, or "La treinta y cuatro," once a numbered plantation run by United Fruit, is the first site of an AguaClara plant. Construction on the La 34 plant began in December 2004, and the plant was inaugurated in August 2005. The plant serves a population of 2000 with a design flow of 285 LPM. Marcala Construction of the Marcala plant began in the Fall of 2007 and was completed in June 2008. The plant was upgraded in May 2011 to a flow rate of 3200 LPM. Cuatro Comunidades In the Fall of 2008, the AguaClara team designed a water treatment plant with shallower tanks that does not need an elevated platform for the plant operator. The full-scale pilot facility for this new design was built for the four communities of Los Bayos, Rio Frio, Aldea Bonito and Las Jaguas. Construction was completed in March 2009. Sponsors The Sanjuan Fund Ken Brown '74 & Elizabeth Sanjuan Rotary Clubs Cornell University School of Civil & Environmental Engineering Cornell University College of Engineering Engineers for a Sustainable World National Rural Water Association EPA P3 Award Student design competition for sustainability Kaplan Family Distinguished Faculty Fellowships (CU Public Service) Awards and recognition 2012 NAE "Infusing World Experiences into Engineering Education" 2011 Intel Environment Tech Award See also Water purification Cornell University Notes and references This article incorporates text from the old AguaClara website and the new AguaClara website, licensed under a Creative Commons Attribution-Share Alike 3.0 United States License. External links Water treatment Industrial buildings in Honduras Cornell University student organizations 2005 establishments in New York (state) Non-profit organizations based in New York (state) Organizations established in 2005
AguaClara
Chemistry,Engineering,Environmental_science
1,156
67,243,270
https://en.wikipedia.org/wiki/DHL%20MoonBox
DHL MoonBox was a memento box that was launched to the Moon on Astrobotic Technology's Peregrine lunar lander in 2024. 151 MoonBox capsules, also known as "Moonpods", were made by DHL, each containing items intended to be shipped to the lunar surface. The capsules measured up to 1 inch wide and 2 inches high (2.5 by 5.1 cm), and contained items from the USA, UK, Canada, Nepal, Germany and Belgium. The items included stories written by children and a rock from Mount Everest. DHL also included a data stick which contained 100,000 images from those who responded to its "Who do you love to the moon and back?" campaign. Landing of the Peregrine on the Moon was later abandoned due to a propellant leak. It re-entered Earth's atmosphere and was destroyed on 18 January 2024. The payloads in the MoonBox include: References External links Astrobotic - MoonBox Message artifacts Peregrine Payloads
DHL MoonBox
Astronomy
216
47,801,453
https://en.wikipedia.org/wiki/Spectral%20correlation%20density
The spectral correlation density (SCD), sometimes also called the cyclic spectral density or spectral correlation function, is a function that describes the cross-spectral density of all pairs of frequency-shifted versions of a time-series. The spectral correlation density applies only to cyclostationary processes because stationary processes do not exhibit spectral correlation. Spectral correlation has been used both in signal detection and signal classification. The spectral correlation density is closely related to each of the bilinear time-frequency distributions, but is not considered one of Cohen's class of distributions. Definition The cyclic auto-correlation function of a time-series x(t) is calculated as follows:

Rx^α(τ) = lim(T→∞) (1/T) ∫(−T/2 to T/2) x(t + τ/2) x*(t − τ/2) e^(−j2παt) dt,

where (*) denotes complex conjugation. By the Wiener–Khinchin theorem, the spectral correlation density is then:

Sx^α(f) = ∫(−∞ to ∞) Rx^α(τ) e^(−j2πfτ) dτ.

Estimation methods The SCD is estimated in the digital domain with an arbitrary resolution in frequency and time. There are several estimation methods currently used in practice to efficiently estimate the spectral correlation for use in real-time analysis of signals due to its high computational complexity. Some of the more popular ones are the FFT Accumulation Method (FAM) and the Strip-Spectral Correlation Algorithm. A fast-spectral-correlation (FSC) algorithm has recently been introduced. FFT accumulation method (FAM) This section describes the steps to compute the SCD on a computer. With MATLAB or the NumPy library in Python, the steps are rather simple to implement. The FFT accumulation method (FAM) is a digital approach to calculating the SCD. Its input is a large block of IQ samples, and the output is a complex-valued image, the SCD. Let the signal, or block of IQ samples, be a complex-valued tensor, or multidimensional array, x of shape (N,), where each element is an IQ sample. The first step of the FAM is to break x into a matrix of frames of size Np with overlap:

X[p, n] = x[pL + n],

where L is the separation between window beginnings. Overlap is achieved when L < Np. X is a tensor of shape (P, Np), where P depends on how many frames were able to fit in x. Next a windowing function w of shape (Np,), like the Hamming window, is applied to each row in X:

XW[p, n] = w[n] X[p, n],

where the product is element-wise multiplication. Next the FFT is taken on each row in XW:

XFT[p, k] = Σn XW[p, n] e^(−j2πkn/Np).

XFT is commonly known as the waterfall plot, or spectrogram. The next step in the FAM is for the phase to be corrected for the delay of the FFTed frames:

XD[p, k] = XFT[p, k] e^(−j2πkpL/Np),

where k runs over the digital frequencies of the FFTs. Next the FFTs are autocorrelated to create a tensor of shape (P, Np, Np):

XSCF[p, j, k] = XD[p, j] XD*[p, k],

where (*) denotes the complex conjugate. In other terms, if we let ap be the p-th row of XD viewed as a column vector of shape (Np, 1), we can rewrite this as XSCF[p] = ap·ap^H, where H denotes the Hermitian (conjugate transpose) of a matrix. The next step is to take the FFT of XSCF along the first (frame) axis:

XSCD[q, j, k] = Σp XSCF[p, j, k] e^(−j2πpq/P).

XSCD is the full SCD, but in the shape of a 3-dimensional tensor. What we aim for is a 2-dimensional tensor (a matrix or image) where each entry corresponds to a particular frequency f and cyclic frequency α. For each entry of XSCD, the corresponding frequency can be arranged in a tensor F and the corresponding cyclic frequency in a tensor A. Here, f and α are normalized frequencies: f = (fj + fk)/2 and α = (fj − fk) + q/(P·L), with fj = j/Np and fk = k/Np. Now the SCD image can be arranged in the form of a matrix with zeros where there are no values for a particular pair (f, α), and entries from XSCD where valid as per F and A. Estimating the SCD by skipping the second FFT The full SCD is rather large and computationally complex to produce, mostly due to the second round of FFTs.
Fortunately, from XSCF an estimate of the SCD can be calculated by averaging over the frames instead of taking the second FFT:

Ŝ[j, k] = (1/P) Σp XSCF[p, j, k].

For even less computational complexity, we can compute Ŝ as the frame average of the conjugate products, because averaging all values in an FFT window before or after an FFT is equivalent. Note that Ŝ will look like a 45-degree rotated version of the true SCD. References Further reading Napolitano, Antonio (2012). Generalizations of Cyclostationary Signal Processing: Spectral Analysis and Applications. John Wiley & Sons. Pace, Phillip E. (2004). Detecting and Classifying Low Probability of Intercept Radar. Artech House. Signal processing Time–frequency analysis
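To make the pipeline concrete, here is a compact NumPy sketch of the FAM steps described above (framing, windowing, FFT, phase correction, conjugate products, second FFT). It is an illustration, not a reference implementation: the function name, the default Np and L, and the toy BPSK test signal are our assumptions, and the final mapping of the 3-D tensor onto the (f, α) plane is omitted for brevity.

```python
import numpy as np

def fam_scd(x, Np=64, L=16):
    """FFT accumulation method, following the steps above.
    Returns the 3-D SCD tensor of shape (P, Np, Np)."""
    P = (len(x) - Np) // L + 1
    idx = L * np.arange(P)[:, None] + np.arange(Np)[None, :]
    frames = x[idx] * np.hamming(Np)                    # frame + window
    X = np.fft.fftshift(np.fft.fft(frames, axis=1), axes=1)
    k = np.arange(-Np // 2, Np // 2)                    # shifted bin indices
    p = np.arange(P)
    X = X * np.exp(-2j * np.pi * k[None, :] * p[:, None] * L / Np)  # phase fix
    prod = X[:, :, None] * np.conj(X[:, None, :])       # conjugate products
    return np.fft.fftshift(np.fft.fft(prod, axis=0), axes=0)  # FFT over frames

rng = np.random.default_rng(0)
bits = rng.choice([-1.0, 1.0], size=256)
x = np.repeat(bits, 8) + 0.1 * rng.standard_normal(2048)  # BPSK, 8 samples/bit
scd = fam_scd(x.astype(complex), Np=64, L=16)
print(scd.shape)   # (125, 64, 64)
```

Averaging `prod` over its first axis instead of taking the final FFT gives the cheaper estimate Ŝ discussed above.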
Spectral correlation density
Physics,Technology,Engineering
866
28,308,849
https://en.wikipedia.org/wiki/Suillus%20placidus
Suillus placidus is a species of fungus in the genus Suillus. It is an edible pored mushroom found in European and North American coniferous forests, growing in association with several species of pine of the subgenus Strobus. Description The cap of Suillus placidus is hemispherical when young, later becoming convex. It is ivory white in colour and very slimy, growing to 10 cm in diameter. The stem is slender, ringless and ivory white with grey granular dots or blotches near the top. The soft flesh is yellowish white with a mild taste. The spores are ochre. Ecology Suillus placidus is found in Asia, Europe and North America, occurring exclusively alongside species of five-needled pine of the subgenus Strobus. The ectomycorrhizal association is beneficial for both fungus and tree, and is a form of symbiosis. In Asia, it is known to occur in the Russian Far East with Siberian pine (Pinus sibirica), Siberian dwarf pine (P. pumila) and Korean pine (P. koraiensis). It has also been reported in China. It is rarely seen in Europe, where it is known to form ectomycorrhizal associations with Swiss pine (Pinus cembra) and introduced eastern white pine (P. strobus). In north-eastern North America, its range coincides with that of the native eastern white pine (P. strobus). The fungus fruits in summer and autumn, with fruiting bodies occurring singly or in small groups. Edibility Suillus placidus is reportedly edible, but of mediocre quality. See also List of North American boletes References External links placidus Fungi of Europe Edible fungi Fungus species
Suillus placidus
Biology
377
11,523,749
https://en.wikipedia.org/wiki/Spinplasmonics
Spinplasmonics is a field of nanotechnology combining spintronics and plasmonics. The field was pioneered by Professor Abdulhakem Elezzabi at the University of Alberta in Canada. In a simple spinplasmonic device, light waves couple to electron spin states in a metallic structure. The most elementary spinplasmonic device consists of a bilayer structure made from magnetic and nonmagnetic metals. It is the nanometer-scale interface between such metals that gives rise to an electron spin phenomenon. The plasmonic current is generated by optical excitation and its properties are manipulated by applying a weak magnetic field. Electrons with a specific spin state can cross the interfacial barrier, but those with a different spin state are impeded. Essentially, switching operations are performed with the electron's spin and then sent out as a light signal. Spinplasmonic devices potentially have the advantages of high speed, miniaturization, low power consumption, and multifunctionality. On a length scale that is less than a single magnetic domain size, the interaction between atomic spins realigns the magnetic moments. Unlike semiconductor-based devices, smaller spinplasmonic devices are expected to be more efficient in transporting the spin-polarized electron current. See also Plasmon Spintronics Spin pumping Spin transfer List of emerging technologies References A. Y. Elezzabi. (December 2007). "The dawn of spinplasmonics". Nano Today 2 (6), p. 48. Further reading Press release from the University of Alberta Spinplasmonics: a new route for active plasmonics Spintronics Plasmonics
Spinplasmonics
Physics,Chemistry,Materials_science
348
8,651,021
https://en.wikipedia.org/wiki/Micellar%20liquid%20chromatography
Micellar liquid chromatography (MLC) is a form of reversed phase liquid chromatography that uses aqueous micellar solutions as the mobile phase. Theory The use of micelles in high performance liquid chromatography was first introduced by Armstrong and Henry in 1980. The technique is used mainly to enhance retention and selectivity of various solutes that would otherwise be inseparable or poorly resolved. Micellar liquid chromatography (MLC) has been used in a variety of applications including separation of mixtures of charged and neutral solutes, direct injection of serum and other physiological fluids, analysis of pharmaceutical compounds, separation of enantiomers, analysis of inorganic organometallics, and a host of others. One of the main drawbacks of the technique is the reduced efficiency that is caused by the micelles. Despite the sometimes poor efficiency, MLC is a better choice than ion-exchange LC or ion-pairing LC for separation of charged molecules and mixtures of charged and neutral species. Some of the aspects discussed below are the theory of MLC, the use of models in predicting its retentive characteristics, the effect of micelles on efficiency and selectivity, and general applications of MLC. Reverse phase high-performance liquid chromatography (RP-HPLC) involves a non-polar stationary phase, often a hydrocarbon chain, and a polar mobile or liquid phase. The mobile phase generally consists of an aqueous portion with an organic addition, such as methanol or acetonitrile. When a solution of analytes is injected into the system, the components begin to partition out of the mobile phase and interact with the stationary phase. Each component interacts with the stationary phase in a different manner depending upon its polarity and hydrophobicity. In reverse phase HPLC, the solute with the greatest polarity will interact less with the stationary phase and spend more time in the mobile phase. As the polarity of the components decreases, the time spent in the column increases. Thus, a separation of components is achieved based on polarity. The addition of micelles to the mobile phase introduces a third phase into which the solutes may partition. Micelles Micelles are composed of surfactant, or detergent, monomers with a hydrophobic moiety, or tail, on one end, and a hydrophilic moiety, or head group, on the other. The polar head group may be anionic, cationic, zwitterionic, or non-ionic. When the concentration of a surfactant in solution reaches its critical micelle concentration (CMC), it forms micelles, which are aggregates of the monomers. The CMC is different for each surfactant, as is the number of monomers which make up the micelle, termed the aggregation number (AN). Table 1 lists some common detergents used to form micelles along with their CMC and AN where available. Many of the characteristics of micelles differ from those of bulk solvents. For example, the micelles are, by nature, spatially heterogeneous with a hydrocarbon, nearly anhydrous core and a highly solvated, polar head group. They have a high surface-to-volume ratio due to their small size and generally spherical shape. Their surrounding environment (pH, ionic strength, buffer ion, presence of a co-solvent, and temperature) has an influence on their size, shape, critical micelle concentration, aggregation number and other properties. Another important property of micelles is the Krafft point, the temperature at which the solubility of the surfactant is equal to its CMC.
For HPLC applications involving micelles, it is best to choose a surfactant with a low Krafft point and CMC. A high CMC would require a high concentration of surfactant, which would increase the viscosity of the mobile phase, an undesirable condition. Additionally, a Krafft point should be well below room temperature to avoid having to apply heat to the mobile phase. To avoid potential interference with absorption detectors, a surfactant should also have a small molar absorptivity at the chosen wavelength of analysis. Light scattering should not be a concern due to the small size, a few nanometers, of the micelle. The effect of organic additives on micellar properties is another important consideration. A small amount of organic solvent is often added to the mobile phase to help improve efficiency and to improve separations of compounds. Care needs to be taken when determining how much organic to add. Too high a concentration of the organic may cause the micelle to disperse, as it relies on hydrophobic effects for its formation. The maximum concentration of organic depends on the organic solvent itself, and on the micelle. This information is generally not known precisely, but a generally accepted practice is to keep the volume percentage of organic below 15–20%. Research Fischer and Jandera studied the effect of changing the concentration of methanol on CMC values for three commonly used surfactants. Two cationic surfactants, hexadecyltrimethylammonium bromide (CTAB) and N-(α-carbethoxypentadecyl)trimethylammonium bromide (Septonex), and one anionic surfactant, sodium dodecyl sulphate (SDS), were chosen for the experiment. Generally speaking, the CMC increased as the concentration of methanol increased. It was then concluded that the distribution of the surfactant between the bulk mobile phase and the micellar phase shifts toward the bulk as the methanol concentration increases. For CTAB, the rise in CMC is greatest from 0–10% methanol, and is nearly constant from 10–20%. Above 20% methanol, the micelles disaggregate and do not exist. For SDS, the CMC values remain unaffected below 10% methanol, but begin to increase as the methanol concentration is further increased. Disaggregation occurs above 30% methanol. Finally, for Septonex, only a slight increase in CMC is observed up to 20%, with disaggregation occurring above 25%. As has been asserted, the mobile phase in MLC consists of micelles in an aqueous solvent, usually with a small amount of organic modifier added to complete the mobile phase. A typical reverse phase alkyl-bonded stationary phase is used. The first discussion of the thermodynamics involved in the retention mechanism was published by Armstrong and Nome in 1981. In MLC, there are three partition coefficients which must be taken into account. The solute will partition between the water and the stationary phase (KSW), the water and the micelles (KMW), and the micelles and the stationary phase (KSM). Armstrong and Nome derived an equation describing the partition coefficients in terms of the retention factor, formerly called the capacity factor, k′. In HPLC, the capacity factor represents the molar ratio of the solute in the stationary phase to the mobile phase. The capacity factor is easily measured from the retention times of the compound and of an unretained compound. The equation rewritten by Guermouche et al.
is presented here:

1/k′ = [n · (KMW − 1)/(f · KSW)] · CM + 1/(f · KSW)

Where:
k′ is the capacity factor of the solute
KSW is the partition coefficient of the solute between the stationary phase and the water
KMW is the partition coefficient of the solute between the micelles and the water
f is the phase volume ratio (stationary phase volume/mobile phase volume)
n is the molar volume of the surfactant
CM is the concentration of the micelle in the mobile phase (total surfactant concentration − critical micelle concentration)

A plot of 1/k′ versus CM gives a straight line in which KSW can be calculated from the intercept and KMW can be obtained from the ratio of the slope to the intercept. Finally, KSM can be obtained from the ratio of the other two partition coefficients:

KSM = KSW/KMW

As can be observed from Figure 1, KMW is independent of any effects from the stationary phase, assuming the same micellar mobile phase. The validity of the retention mechanism proposed by Armstrong and Nome has been successfully and repeatedly confirmed experimentally. However, some variations and alternate theories have also been proposed. Jandera and Fischer developed equations to describe the dependence of retention behavior on the change in micellar concentrations. They found that the retention of most compounds tested decreased with increasing concentrations of micelles. From this, it can be surmised that the compounds associate with the micelles as they spend less time associated with the stationary phase. Foley proposed a similar retentive model to that of Armstrong and Nome, which was a general model for secondary chemical equilibria in liquid chromatography. While this model was developed in a previous reference and could be used for any secondary chemical equilibria such as acid-base equilibria and ion-pairing, Foley further refined the model for MLC. When an equilibrant (X), in this case surfactant, is added to the mobile phase, a secondary equilibrium is created in which an analyte will exist as free analyte (A) and complexed with the equilibrant (AX). The two forms will be retained by the stationary phase to different extents, thus allowing the retention to be varied by adjusting the concentration of equilibrant (micelles). The resulting equation solved for the capacity factor in terms of partition coefficients is much the same as that of Armstrong and Nome:

1/k′ = (KSM/k′S) · [M] + 1/k′S

Where:
k′ is the capacity factor of the complexed solute and the free solute
k′S is the capacity factor of the free solute
KSM is the partition coefficient of the solute between the stationary phase and the micelle
[M] may be either the concentration of surfactant or the concentration of micelle

Foley used the above equation to determine the solute-micelle association constants and free solute retention factors for a variety of solutes with different surfactants and stationary phases. From this data, it is possible to predict the type and optimum surfactant concentrations needed for a given solute or solutes. Foley has not been the only researcher interested in determining the solute-micelle association constants. A review article by Marina and Garcia with 53 references discusses the usefulness of obtaining solute-micelle association constants. The association constants for two solutes can be used to help understand the retention mechanism. The separation factor of two solutes, α, can be expressed as KSM1/KSM2.
If the experimental α coincides with the ratio of the two solute-micelle partition coefficients, it can be assumed that their retention occurs through a direct transfer from the micellar phase to the stationary phase. In addition, calculation of α would allow for prediction of separation selectivity before the analysis is performed, provided the two coefficients are known. The desire to predict retention behavior and selectivity has led to the development of several mathematical models. Changes in pH, surfactant concentration, and concentration of organic modifier play a significant role in determining the chromatographic separation. Often one or more of these parameters need to be optimized to achieve the desired separation, yet the optimum parameters must take all three variables into account simultaneously. The review by Garcia-Alvarez-Coque et al. mentioned several successful models for varying scenarios, a few of which will be mentioned here. The classic models by Armstrong and Nome and Foley are used to describe the general cases. Foley's model applies to many cases and has been experimentally verified for ionic, neutral, polar and nonpolar solutes; anionic, cationic, and non-ionic surfactants; and C8, C18, and cyano stationary phases. The model begins to deviate for highly and lowly retained solutes. Highly retained solutes may become irreversibly bound to the stationary phase, whereas lowly retained solutes may elute in the column void volume. Other models proposed by Arunyanart and Cline-Love and Rodgers and Khaledi describe the effect of pH on the retention of weak acids and bases. These authors derived equations relating pH and micellar concentration to retention. As the pH varies, sigmoidal behavior is observed for the retention of acidic and basic species. This model has been shown to accurately predict retention behavior. Still other models predict behavior in hybrid micellar systems using equations or modeling behavior based on controlled experimentation. Additionally, models accounting for the simultaneous effect of pH, micelle and organic concentration have been suggested. These models allow for further enhancement of the optimization of the separation of weak acids and bases. One research group, Rukhadze et al., derived a first-order linear relationship describing the influence of micelle and organic concentration, and pH, on the selectivity and resolution of seven barbiturates. The researchers discovered that a second-order mathematical equation would more precisely fit the data. The derivations and experimental details are beyond the scope of this discussion. The model was successful in predicting the experimental conditions necessary to achieve a separation for compounds which are traditionally difficult to resolve. Jandera, Fischer, and Effenberger approached the modeling problem in yet another way. The model used was based on lipophilicity and polarity indices of solutes. The lipophilicity index relates a given solute to a hypothetical number of carbon atoms in an alkyl chain. It is based on, and depends on, a given calibration series determined experimentally. The lipophilicity index should be independent of the stationary phase and organic modifier concentration. The polarity index is a measure of the polarity of the solute-solvent interactions. It depends strongly on the organic solvent, and somewhat on the polar groups present in the stationary phase. Twenty-three compounds were analyzed with varying mobile phases and compared to the lipophilicity and polarity indices.
The results showed that the model could be applied to MLC, but better predictive behavior was found with concentrations of surfactant below the CMC (sub-micellar). A final type of model based on molecular properties of a solute is a branch of quantitative structure-activity relationships (QSAR). QSAR studies attempt to correlate biological activity of drugs, or a class of drugs, with structures. The normally accepted means of uptake for a drug, or its metabolite, is through partitioning into lipid bilayers. The descriptor most often used in QSAR to determine the hydrophobicity of a compound is the octanol-water partition coefficient, log P. MLC provides an attractive and practical alternative to QSAR. When micelles are added to a mobile phase, many similarities exist between the micellar mobile phase/stationary phase and the biological membrane/water interface. In MLC, the stationary phase becomes modified by the adsorption of surfactant monomers which are structurally similar to the membranous hydrocarbon chains in the biological model. Additionally, the hydrophilic/hydrophobic interactions of the micelles are similar to those in the polar regions of a membrane. Thus, the development of quantitative retention–activity relationships (QRAR) has become widespread. Escuder-Gilabert et al. tested three different QRAR retention models on ionic compounds. Several classes of compounds were tested including catecholamines, local anesthetics, diuretics, and amino acids. The best model relating log k and log P was found to be one in which the total molar charge of a compound at a given pH is included as a variable. This model proved to give fairly accurate predictions of log P (R > 0.9). Other studies have been performed which develop predictive QRAR models for tricyclic antidepressants and barbiturates. Efficiency The main limitation in the use of MLC is the reduction in efficiency (peak broadening) that is observed when purely aqueous micellar mobile phases are used. Several explanations for the poor efficiency have been theorized. Poor wetting of the stationary phase by the micellar aqueous mobile phase, slow mass transfer between the micelles and the stationary phase, and poor mass transfer within the stationary phase have all been postulated as possible causes. To enhance efficiency, the most common approaches have been the addition of small amounts of isopropyl alcohol and an increase in temperature. A review by Berthod studied the combined theories presented above and applied the Knox equation to independently determine the cause of the reduced efficiency. The Knox equation is commonly used in HPLC to describe the different contributions to overall band broadening of a solute. The Knox equation is expressed as:

h = A·ν^(1/3) + B/ν + C·ν

Where:
h = the reduced plate height (plate height/stationary phase particle diameter)
ν = the reduced mobile phase linear velocity (velocity times stationary phase particle diameter/solute diffusion coefficient in the mobile phase)
A, B, and C are constants related to solute flow anisotropy (eddy diffusion), molecular longitudinal diffusion, and mass transfer properties, respectively.

Berthod's use of the Knox equation to experimentally determine which of the proposed theories was most correct led him to the following conclusions. The flow anisotropy in the micellar phase seems to be much greater than in traditional hydro-organic mobile phases of similar viscosity.
This is likely due to the partial clogging of the stationary phase pores by adsorbed surfactant molecules. Raising the column temperature served to decrease both the viscosity of the mobile phase and the amount of adsorbed surfactant. Both results reduce the A term and the amount of eddy diffusion, and thereby increase efficiency. The increase in the B term, as related to longitudinal diffusion, is associated with the decrease in the solute diffusion coefficient in the mobile phase, DM, due to the presence of the micelles, and an increase in the capacity factor, k′. Again, this is related to surfactant adsorption on the stationary phase causing a dramatic decrease in the solute diffusion coefficient in the stationary phase, DS. Again an increase in temperature, now coupled with an addition of alcohol to the mobile phase, drastically decreases the amount of adsorbed surfactant. In turn, both actions reduce the C term caused by a slow mass transfer from the stationary phase to the mobile phase. Further optimization of efficiency can be gained by reducing the flow rate to one closely matched to that derived from the Knox equation. Overall, the three proposed theories seemed to have contributing effects on the poor efficiency observed, which can be partially countered by the addition of organic modifiers, particularly alcohol, and by increasing the column temperature. Applications Despite the reduced efficiency versus reversed phase HPLC, hundreds of applications have been reported using MLC. One of the most advantageous features is the ability to directly inject physiological fluids. Micelles have the ability to solubilize proteins, which enables MLC to be useful in analyzing untreated biological fluids such as plasma, serum, and urine. Martinez et al. found MLC to be highly useful in analyzing a class of drugs called β-antagonists, so-called beta-blockers, in urine samples. The main advantage of the use of MLC with this type of sample is the great time savings in sample preparation. Alternative methods of analysis including reversed phase HPLC require lengthy extraction and sample work-up procedures before analysis can begin. With MLC, direct injection is often possible, with retention times of less than 15 minutes for the separation of up to nine β-antagonists. Another application compared reversed phase HPLC with MLC for the analysis of desferrioxamine in serum. Desferrioxamine (DFO) is a commonly used drug for removal of excess iron in patients with chronic and acute iron overload. The analysis of DFO along with its chelated complexes, Fe(III) DFO and Al(III) DFO, has proven to be difficult at best in previous attempts. This study found that direct injection of the serum was possible for MLC, versus an ultrafiltration step necessary in HPLC. This analysis proved to have difficulties with the separation of the chelated DFO compounds and with the sensitivity levels for DFO itself when MLC was applied. The researcher found that, in this case, reverse phase HPLC was a better, more sensitive technique despite the time savings in direct injection. Analysis of pharmaceuticals by MLC is also gaining popularity. The selectivity and peak shape of MLC are much enhanced over commonly used ion-pair chromatography. MLC mimics, yet enhances, the selectivity offered by ion-pairing reagents for the separation of active ingredients in pharmaceutical drugs. For basic drugs, MLC reduces the excessive peak tailing frequently observed in ion-pairing.
Hydrophilic drugs, often unretained in conventional HPLC, are retained in MLC due to solubilization into the micelles. Drugs commonly found in cold medications such as acetaminophen, L-ascorbic acid, phenylpropanolamine HCl, tipepidine hibenzate, and chlorpheniramine maleate have been successfully separated with good peak shape using MLC. Additional basic drugs like many narcotics, such as codeine and morphine, have also been successfully separated using MLC. Another novel application of MLC involves the separation and analysis of inorganic compounds, mostly simple ions. This is a relatively new area for MLC, but has seen some promising results. MLC has been observed to provide better selectivity of inorganic ions than ion-exchange or ion-pairing chromatography. While this application is still in the beginning stages of development, the possibilities exist for novel, much enhanced separations of inorganic species. Since the technique was first reported in 1980, micellar liquid chromatography has been used in hundreds of applications. This micelle-controlled technique provides unique opportunities for solving complicated separation problems. Despite the poor efficiency of MLC, it has been successfully used in many applications. The use of MLC in the future appears extremely advantageous in the areas of physiological fluids, pharmaceuticals, and even inorganic ions. The technique has proven to be superior to ion-pairing and ion-exchange for many applications. As new approaches are developed to combat the poor efficiency of MLC, its application is sure to spread and gain more acceptance. References Chromatography
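As a worked illustration of the Armstrong–Nome relation presented in the Theory section above, the snippet below fits 1/k′ against micelle concentration and recovers KSW from the intercept and KMW from the slope-to-intercept ratio. The phase ratio, molar volume, and retention data are synthetic placeholders, not measured values.

```python
import numpy as np

# Fit 1/k' = slope * C_M + intercept, then invert the Armstrong-Nome
# relation: intercept = 1/(f*K_SW), slope = n*(K_MW - 1)/(f*K_SW).

f = 0.2                                       # phase volume ratio (assumed)
n = 0.25                                      # surfactant molar volume (assumed)
C_M = np.array([0.01, 0.02, 0.04, 0.08])      # micelle concentration (M)
k_prime = np.array([8.02, 6.69, 5.03, 3.36])  # synthetic retention factors

slope, intercept = np.polyfit(C_M, 1.0 / k_prime, 1)
K_SW = 1.0 / (f * intercept)                  # from the intercept
K_MW = 1.0 + slope * f * K_SW / n             # from the slope/intercept ratio
K_SM = K_SW / K_MW
print(f"K_SW ~ {K_SW:.0f}, K_MW ~ {K_MW:.0f}, K_SM ~ {K_SM:.2f}")  # ~50, ~100
```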
Micellar liquid chromatography
Chemistry
4,759
22,366,672
https://en.wikipedia.org/wiki/Acoustic%20Doppler%20velocimetry
Acoustic Doppler velocimetry (ADV) is designed to record instantaneous velocity components at a single point with a relatively high frequency. Measurements are performed by measuring the velocity of particles in a remote sampling volume based upon the Doppler shift effect. Probe specs and features The probe head includes one transmitter and between two and four receivers. The remote sampling volume is located typically 5 or 10 cm from the tip of the transmitter, but some studies showed that the distance might change slightly. The sampling volume size is determined by the sampling conditions and manual setup. In a standard configuration, the sampling volume is about a cylinder of water with a diameter of 6 mm and a height of 9 mm, although newer laboratory ADVs may have a smaller sampling volume (e.g. Sontek microADV, Nortek Vectrino+). A typical ADV system equipped with N receivers records simultaneously 4·N values with each sample. That is, for each receiver, a velocity component, a signal strength value, a signal-to-noise ratio (SNR) and a correlation value. The signal strength, SNR and correlation values are used primarily to determine the quality and accuracy of the velocity data, although the signal strength (acoustic backscatter intensity) may be related to the instantaneous suspended sediment concentration with proper calibration. The velocity component is measured along the line connecting the sampling volume to the receiver. The velocity data must be transformed into a Cartesian system of coordinates, and the trigonometric transformation may cause some velocity resolution errors. Although acoustic Doppler velocimetry (ADV) has become a popular technique in laboratory and field applications, several researchers pointed out that the ADV signal outputs include the combined effects of turbulent velocity fluctuations, Doppler noise, signal aliasing, turbulent shear and other disturbances. Evidence includes high levels of noise and spikes in all velocity components. In turbulent flows, the ADV velocity outputs are a combination of Doppler noise, signal aliasing, velocity fluctuations, installation vibrations and other disturbances. The signal may be further affected adversely by velocity shear across the sampling volume and boundary proximity. Lemmin and Lhermitte, Chanson et al., and Blanckaert and Lemmin discussed the inherent Doppler noise of an ADV system. Spikes may be caused by aliasing of the Doppler signal. McLelland and Nicholas explained the physical processes, while Nikora and Goring, Goring and Nikora, and Wahl developed techniques to eliminate aliasing errors called "spikes". These methods were developed for steady flow situations and tested in man-made channels. Not all of them are reliable, and the phase-space thresholding despiking technique appears to be a robust method in steady flows. Simply put, "raw" ADV velocity data are not "true" turbulent velocities and they should never be used without adequate post-processing. Chanson presented a summary of experiences gained during laboratory and field investigations with both Sontek and Nortek ADV systems. References Measurement
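The per-sample quality metrics described above lend themselves to a simple screening pass. The sketch below applies threshold-based screening only — it is not the phase-space despiking of Goring and Nikora, which requires considerably more machinery — and the 70% correlation and 15 dB SNR cut-offs are commonly quoted rules of thumb, assumed here for illustration and to be tuned per instrument and flow.

```python
import numpy as np

def screen_adv(velocity, correlation, snr, corr_min=70.0, snr_min=15.0):
    """Return velocity with low-quality samples replaced by NaN.
    Thresholds are assumptions, not universal values."""
    v = np.asarray(velocity, dtype=float).copy()
    bad = (np.asarray(correlation) < corr_min) | (np.asarray(snr) < snr_min)
    v[bad] = np.nan                     # flag, rather than silently keep
    return v

v = screen_adv([0.31, 0.29, 1.85, 0.30],   # m/s, one obvious spike
               [92, 88, 41, 90],           # correlation (%)
               [22, 21, 9, 23])            # SNR (dB)
print(v)                                   # [0.31 0.29  nan 0.30]
```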
Acoustic Doppler velocimetry
Physics,Mathematics
633
14,583,171
https://en.wikipedia.org/wiki/LGR6
Leucine-rich repeat-containing G-protein coupled receptor 6 is a protein that in humans is encoded by the LGR6 gene. Along with the other G-protein coupled receptors LGR4 and LGR5, LGR6 is a Wnt signaling pathway mediator. LGR6 also acts as an epithelial stem cell marker in squamous cell carcinoma in mice in vivo. This gene encodes a member of the leucine-rich repeat-containing subgroup of the G protein-coupled 7-transmembrane protein superfamily. The encoded protein is a glycoprotein hormone receptor with a large N-terminal extracellular domain that contains leucine-rich repeats important for the formation of a horseshoe-shaped interaction motif for ligand binding. Alternative splicing of this gene results in multiple transcript variants. References Further reading G protein-coupled receptors
LGR6
Chemistry
183
486,341
https://en.wikipedia.org/wiki/Surface-wave-sustained%20discharge
A surface-wave-sustained discharge is a plasma that is excited by the propagation of electromagnetic surface waves. Surface wave plasma sources can be divided into two groups depending upon whether the plasma generates part of its own waveguide by ionisation or not. The former is called a self-guided plasma. The surface wave mode allows the generation of uniform high-frequency-excited plasmas in volumes whose lateral dimensions extend over several wavelengths of the electromagnetic wave, e.g. for microwaves of 2.45 GHz in vacuum the wavelength amounts to 12.2 cm. Theory For a long time, microwave plasma sources without a magnetic field were not considered suitable for the generation of high density plasmas. Electromagnetic waves cannot propagate in over-dense plasmas. The wave is reflected at the plasma surface due to the skin effect and becomes an evanescent wave. Its penetration depth corresponds to the skin depth δ, which can be approximated by δ ≈ c/ωpe, where c is the speed of light in vacuum and ωpe is the electron plasma frequency. The non-vanishing penetration depth of an evanescent wave opens an alternative way of heating a plasma: instead of traversing the plasma, the conductivity of the plasma enables the wave to propagate along the plasma surface. The wave energy is then transferred to the plasma by an evanescent wave which enters the plasma perpendicular to its surface and decays exponentially with the skin depth. This transfer mechanism allows the generation of over-dense plasmas with electron densities beyond the critical density. Design Surface-wave-sustained plasmas (SWP) can be operated in a large variety of recipient geometries. The pressure range accessible for surface-wave-excited plasmas depends on the process gas and the diameter of the recipient. The larger the chamber diameter, the lower the minimal pressure necessary for the SWP mode. Analogously, the maximal pressure where a stable SWP can be operated decreases with increasing diameter. The numerical modelling of SWPs is quite involved. The plasma is created by the electromagnetic wave, but it also reflects and guides this same wave. Therefore, a truly self-consistent description is necessary. References Waves in plasmas Surface waves
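A quick numerical check of these quantities, using the collisionless skin-depth approximation above and the standard critical-density formula (both are idealizations; collisions and geometry modify the real values):

```python
import numpy as np

# Skin depth delta ~ c / omega_pe and critical density
# n_c = eps0 * m_e * omega^2 / e^2, all in SI units.
c, eps0 = 2.998e8, 8.854e-12
m_e, e = 9.109e-31, 1.602e-19

f_wave = 2.45e9                          # microwave frequency (Hz)
omega = 2 * np.pi * f_wave
n_c = eps0 * m_e * omega**2 / e**2       # critical density (m^-3), ~7.4e16

n_e = 10 * n_c                           # an over-dense plasma, 10x critical
omega_pe = np.sqrt(n_e * e**2 / (eps0 * m_e))
delta = c / omega_pe                     # skin depth (m), ~6 mm here
print(f"n_c = {n_c:.2e} m^-3, skin depth = {delta * 1e3:.1f} mm")
```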
Surface-wave-sustained discharge
Physics
419
7,635,915
https://en.wikipedia.org/wiki/Mabetex%20Group
Mabetex Group is a civil engineering and construction company founded in 1991 by Behgjet Pacolli. The company, headquartered in Lugano, Switzerland, specialises in the construction and renovation of large buildings. Mabetex has carried out works on a turnkey basis, such as the restoration of historical buildings, the construction and planning of administrative and public buildings, as well as industrial plants and urban projects. Mabetex Group is best known for its renovation of the Kremlin. Corporate structure Mabetex Group is the parent company of several businesses founded and owned by Behgjet Pacolli. The core company within the Mabetex Group is the civil engineering bureau Mabco Constructions SA, formerly known as Mabetex Project Engineering SA, located in Lugano. Mabetex specialises in large-scale civil engineering which comprises construction and renovation of big buildings and building complexes. The Group has had many projects in former Soviet Union states. Other firms within the Mabetex Group specialise in insurance and public broadcasting. Mabco Constructions SA had, in 1997, an annual revenue of CHF 630 million; in 1996, the entire Mabetex Group grossed around CHF 1.5 billion. By 2016, the revenue had increased to around CHF 1.61 billion. In total, the Mabetex Group has about 14,000 employees, of which around 3,500 work in Kosovo. The company has more than 12 branch offices. History Mabetex was founded in 1991 in Lugano, Switzerland by Behgjet Pacolli, who has since been serving as the president of Mabetex Group. The company rose in size through contracting with the public sector in post-Soviet Russia throughout the 1990s. Mabetex's reputation as a "can-do" contractor solidified with the renovation of the Kremlin, which the company finished in 1996. Around that time, Mabetex also began constructing most of the new buildings in Nur-Sultan (former Astana). After the Kosovo War, Mabetex initiated a foundation for the redevelopment of Kosovo in July 1999. Mabetex reconstructed buildings and schools that had been destroyed in the war. In 2002, Mabetex sent first-aid goods to refugee camps in Albania and helped other institutions in Italy and Switzerland to send goods using Mabetex's trucks. In the early 2000s, Mabetex withdrew from the Russian market and has since been focussing mainly on the Kazakh construction market. Main Projects Russia In the early 1990s, the Mabetex Group began working in the Russian city of Yakutsk. From 1994 until 1998, Mabetex was commissioned to renovate the State Duma, the Russian Opera House, the Kremlin, and the White House in Moscow, the official home of the Russian government. Several floors of the building had been severely damaged in October 1993. Mabetex was asked to completely restore both the exterior and interior of the building. In total, Mabetex earned USD 492 million for the renovation work. The renovation of the Kremlin alone cost USD 335 million. Kazakhstan Mabetex has been working in Kazakhstan, where it played an important role in the construction of the capital Astana. The company has, as of 2009, built almost 40% of the buildings in Nur-Sultan, on more than 1,000,000 m2 of land. Among them were the new Ak Orda Presidential Palace, located on the left bank of the Ishim River, the ministry of foreign affairs, the concert and theatre hall, the Opera house, a hospital, the Saryarka Velotreck ice-hockey stadium, and the main terminal of the Nursultan Nazarbayev International Airport. Switzerland Mabetex has been active in Switzerland since 1991.
The group's first project was the Kazakh embassy building, located in a residential area of Geneva. The building houses several offices and conference rooms. Among other works, Mabetex has completed the Swiss Diamond Hotel, a luxury five-star hotel located in Lugano. Furthermore, Mabetex has reconstructed the five-star Hotel Fluela in Davos and completed the new Romantica Residence building complex in Melide-Lugano. Kosovo and Albania The Mabetex Group in Kosovo was involved in the reconstruction of the Parliament building. The work carried out included the renovation of the interior and the reconstruction of the exterior with glass facades. In 2021, the group won an EUR 104 million tender for the construction of Albania's new international airport in Vlora. Elsewhere Italy In Italy, the Mabetex Group was responsible for the study and design work for the refurbishment of the La Fenice theatre in Venice after it burned down. Uzbekistan In Tashkent, the capital of Uzbekistan, the group carried out the construction project for the City Hall. Gallery References External links Official Website Construction and civil engineering companies of Switzerland Swiss companies established in 1991 Construction and civil engineering companies
Mabetex Group
Engineering
996
47,869,553
https://en.wikipedia.org/wiki/CloudLinux%20OS
CloudLinux OS is a commercial Linux distribution marketed to shared hosting providers. It is developed by software company CloudLinux, Inc. CloudLinux OS is based on the CentOS operating system; it uses the OpenVZ kernel and the rpm package manager. Overview CloudLinux OS provides a modified kernel based on the OpenVZ kernel. The main feature is the Lightweight Virtual Environment (LVE) – a separate environment with its own CPU, memory, IO, IOPS, number of processes and other limits. Switching to CloudLinux OS is performed by a provided cldeploy script which installs its kernel, switches yum repositories and installs basic packages to allow LVE to work. After installation the server requires rebooting to load the newly installed kernel. CloudLinux OS doesn’t modify existing packages, so it is possible to boot the previous kernel in the regular way. Creation of AlmaLinux CloudLinux released the first beta for AlmaLinux OS, a free operating system intended as a substitute for CentOS, on February 1, 2021. On March 30, 2021, the same day as the first stable release, CloudLinux transferred the responsibility for development and governance of the project to the AlmaLinux OS Foundation. CloudLinux has promised $1 million in annual funding to the project, but does not own the project anymore. External links References Linux distributions RPM-based Linux distributions Web hosting
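LVE itself enforces these per-tenant limits inside the modified kernel. Purely as an illustrative analogue of the idea of capping each tenant's resources, the Python sketch below applies process-level limits from userspace with the standard resource module; the limit names and values are invented for the example and are not CloudLinux settings, and this is not how LVE is implemented.

```python
import resource
import subprocess

# Illustrative per-tenant caps, loosely mirroring the kinds of limits LVE
# applies (memory, number of processes). The values here are made up.
TENANT_LIMITS = {
    "address_space_bytes": 512 * 1024 * 1024,  # 512 MiB of virtual memory
    "max_processes": 50,                       # cap on processes for the user
}

def apply_limits():
    """Apply the caps in the child process just before it starts (Unix only)."""
    mem = TENANT_LIMITS["address_space_bytes"]
    nproc = TENANT_LIMITS["max_processes"]
    resource.setrlimit(resource.RLIMIT_AS, (mem, mem))
    resource.setrlimit(resource.RLIMIT_NPROC, (nproc, nproc))

def run_tenant_command(cmd):
    """Run a tenant workload under the caps; returns its exit code."""
    return subprocess.run(cmd, preexec_fn=apply_limits).returncode

if __name__ == "__main__":
    print(run_tenant_command(["python3", "-c", "print('capped process ran')"]))
```

On a real CloudLinux server the equivalent limits are managed through the LVE machinery in the kernel rather than through per-process rlimits like these.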
CloudLinux OS
Technology
295
12,148,549
https://en.wikipedia.org/wiki/C3H5ClO
The molecular formula C3H5ClO (molar mass: 92.52 g/mol, exact mass: 92.0029 u) may refer to: Chloroacetone, a colourless liquid with a pungent odour Epichlorohydrin, an organochlorine compound and an epoxide
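The quoted molar mass and exact (monoisotopic) mass can be reproduced from standard atomic masses; the short check below uses commonly tabulated values, treated here as given constants.

```python
# Standard average atomic masses (g/mol) and principal-isotope masses (u).
AVERAGE = {"C": 12.011, "H": 1.008, "Cl": 35.45, "O": 15.999}
MONOISOTOPIC = {"C": 12.0, "H": 1.00783, "Cl": 34.96885, "O": 15.99491}

FORMULA = {"C": 3, "H": 5, "Cl": 1, "O": 1}  # C3H5ClO

molar_mass = sum(n * AVERAGE[el] for el, n in FORMULA.items())
exact_mass = sum(n * MONOISOTOPIC[el] for el, n in FORMULA.items())

print(f"molar mass ≈ {molar_mass:.2f} g/mol")  # ≈ 92.52 g/mol
print(f"exact mass ≈ {exact_mass:.4f} u")      # ≈ 92.0029 u
```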
C3H5ClO
Chemistry
85
3,967,032
https://en.wikipedia.org/wiki/Photosynthetic%20reaction%20centre
A photosynthetic reaction center is a complex of several proteins, biological pigments, and other co-factors that together execute the primary energy conversion reactions of photosynthesis. Molecular excitations, either originating directly from sunlight or transferred as excitation energy via light-harvesting antenna systems, give rise to electron transfer reactions along the path of a series of protein-bound co-factors. These co-factors are light-absorbing molecules (also named chromophores or pigments) such as chlorophyll and pheophytin, as well as quinones. The energy of the photon is used to excite an electron of a pigment. The free energy created is then used, via a chain of nearby electron acceptors, for a transfer of hydrogen atoms (as protons and electrons) from H2O or hydrogen sulfide towards carbon dioxide, eventually producing glucose. These electron transfer steps ultimately result in the conversion of the energy of photons to chemical energy. Transforming light energy into charge separation Reaction centers are present in all green plants, algae, and many bacteria. A variety in light-harvesting complexes exist across the photosynthetic species. Green plants and algae have two different types of reaction centers that are part of larger supercomplexes known as P700 in Photosystem I and P680 in Photosystem II. The structures of these supercomplexes are large, involving multiple light-harvesting complexes. The reaction center found in Rhodopseudomonas bacteria is currently best understood, since it was the first reaction center of known structure and has fewer polypeptide chains than the examples in green plants. A reaction center is laid out in such a way that it captures the energy of a photon using pigment molecules and turns it into a usable form. Once the light energy has been absorbed directly by the pigment molecules, or passed to them by resonance transfer from a surrounding light-harvesting complex, they release electrons into an electron transport chain and pass energy to a hydrogen donor such as H2O to extract electrons and protons from it. In green plants, the electron transport chain has many electron acceptors including pheophytin, quinone, plastoquinone, cytochrome bf, and ferredoxin, which result finally in the reduced molecule NADPH, while the energy used to split water results in the release of oxygen. The passage of the electron through the electron transport chain also results in the pumping of protons (hydrogen ions) from the chloroplast's stroma and into the lumen, resulting in a proton gradient across the thylakoid membrane that can be used to synthesize ATP using the ATP synthase molecule. Both the ATP and NADPH are used in the Calvin cycle to fix carbon dioxide into triose sugars. Classification Two classes of reaction centres are recognized. Type I, found in green-sulfur bacteria, Heliobacteria, and plant/cyanobacterial PS-I, use iron sulfur clusters as electron acceptors. Type II, found in chloroflexus, purple bacteria, and plant/cyanobacterial PS-II, use quinones. Not only do all members inside each class share common ancestry, but the two classes also, by means of common structure, appear related. Cyanobacteria, the precursor to chloroplasts found in green plants, have both photosystems with both types of reaction centers. Combining the two systems allows for producing oxygen. In purple bacteria (type II) This section deals with the type II system found in purple bacteria. 
Structure The bacterial photosynthetic reaction center has been an important model to understand the structure and chemistry of the biological process of capturing light energy. In the 1960s, Roderick Clayton was the first to purify the reaction center complex from purple bacteria. However, the first crystal structure (upper image at right) was determined in 1984 by Hartmut Michel, Johann Deisenhofer and Robert Huber for which they shared the Nobel Prize in 1988. This was also significant for being the first 3D crystal structure of any membrane protein complex. Four different subunits were found to be important for the function of the photosynthetic reaction center. The L and M subunits, shown in blue and purple in the image of the structure, both span the lipid bilayer of the plasma membrane. They are structurally similar to one another, both having 5 transmembrane alpha helices. Four bacteriochlorophyll b (BChl-b) molecules, two bacteriopheophytin b molecules (BPh) molecules, two quinones (QA and QB), and a ferrous ion are associated with the L and M subunits. The H subunit, shown in gold, lies on the cytoplasmic side of the plasma membrane. A cytochrome subunit, not shown here, contains four c-type hemes and is located on the periplasmic surface (outer) of the membrane. The latter sub-unit is not a general structural motif in photosynthetic bacteria. The L and M subunits bind the functional and light-interacting cofactors, shown here in green. Reaction centers from different bacterial species may contain slightly altered bacterio-chlorophyll and bacterio-pheophytin chromophores as functional co-factors. These alterations cause shifts in the colour of light that can be absorbed. The reaction center contains two pigments that serve to collect and transfer the energy from photon absorption: BChl and Bph. BChl roughly resembles the chlorophyll molecule found in green plants, but, due to minor structural differences, its peak absorption wavelength is shifted into the infrared, with wavelengths as long as 1000 nm. Bph has the same structure as BChl, but the central magnesium ion is replaced by two protons. This alteration causes both an absorbance maximum shift and a lowered redox potential. Mechanism The process starts when light is absorbed by two BChl molecules that lie near the periplasmic side of the membrane. This pair of chlorophyll molecules, often called the "special pair", absorbs photons at 870 nm or 960 nm, depending on the species and, thus, is called P870 (for Rhodobacter sphaeroides) or P960 (for Blastochloris viridis), with P standing for "pigment"). Once P absorbs a photon, it ejects an electron, which is transferred through another molecule of Bchl to the BPh in the L subunit. This initial charge separation yields a positive charge on P and a negative charge on the BPh. This process takes place in 10 picoseconds (10−11 seconds). The charges on the P+ and the BPh− could undergo charge recombination in this state, which would waste the energy and convert it into heat. Several factors of the reaction center structure serve to prevent this. First, the transfer of an electron from BPh− to P960+ is relatively slow compared to two other redox reactions in the reaction center. The faster reactions involve the transfer of an electron from BPh− (BPh− is oxidized to BPh) to the electron acceptor quinone (QA), and the transfer of an electron to P960+ (P960+ is reduced to P960) from a heme in the cytochrome subunit above the reaction center. 
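The energies involved in this initial charge separation can be estimated from the special pair's absorption wavelengths using E = hc/λ. The back-of-the-envelope calculation below uses standard physical constants and the 870 nm and 960 nm values quoted above; it is only an illustration of the magnitudes.

```python
# Photon energy E = h*c / wavelength, for the special-pair absorption maxima.
h = 6.62607015e-34    # Planck constant, J*s
c = 2.99792458e8      # speed of light, m/s
eV = 1.602176634e-19  # joules per electronvolt
N_A = 6.02214076e23   # Avogadro constant, 1/mol

for name, nm in [("P870 (Rhodobacter sphaeroides)", 870),
                 ("P960 (Blastochloris viridis)", 960)]:
    E_joule = h * c / (nm * 1e-9)
    print(f"{name}: {E_joule / eV:.2f} eV per photon, "
          f"{E_joule * N_A / 1000:.0f} kJ per mole of photons")
# Prints roughly 1.43 eV (about 138 kJ/mol) at 870 nm
# and roughly 1.29 eV (about 125 kJ/mol) at 960 nm.
```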
The high-energy electron that resides on the tightly bound quinone molecule QA is transferred to an exchangeable quinone molecule QB. This molecule is loosely associated with the protein and is fairly easy to detach. Two electrons are required to fully reduce QB to QH2, taking up two protons from the cytoplasm in the process. The reduced quinone QH2 diffuses through the membrane to another protein complex (cytochrome bc1-complex) where it is oxidized. In the process the reducing power of the QH2 is used to pump protons across the membrane to the periplasmic space. The electrons from the cytochrome bc1-complex are then transferred through a soluble cytochrome c intermediate, called cytochrome c2, in the periplasm to the cytochrome subunit. In Cyanobacteria and plants Cyanobacteria, the precursor to chloroplasts found in green plants, have both photosystems with both types of reaction centers. Combining the two systems allows for producing oxygen. Oxygenic photosynthesis In 1772, the chemist Joseph Priestley carried out a series of experiments relating to the gases involved in respiration and combustion. In his first experiment, he lit a candle and placed it under an upturned jar. After a short period of time, the candle burned out. He carried out a similar experiment with a mouse in the confined space of the burning candle. He found that the mouse died a short time after the candle had been extinguished. However, he could revivify the foul air by placing green plants in the area and exposing them to light. Priestley's observations were some of the first experiments that demonstrated the activity of a photosynthetic reaction center. In 1779, Jan Ingenhousz carried out more than 500 experiments spread out over 4 months in an attempt to understand what was really going on. He wrote up his discoveries in a book entitled Experiments upon Vegetables. Ingenhousz took green plants and immersed them in water inside a transparent tank. He observed many bubbles rising from the surface of the leaves whenever the plants were exposed to light. Ingenhousz collected the gas that was given off by the plants and performed several different tests in attempt to determine what the gas was. The test that finally revealed the identity of the gas was placing a smouldering taper into the gas sample and having it relight. This test proved it was oxygen, or, as Joseph Priestley had called it, 'de-phlogisticated air'. In 1932, Robert Emerson and his student, William Arnold, used a repetitive flash technique to precisely measure small quantities of oxygen evolved by chlorophyll in the algae Chlorella. Their experiment proved the existence of a photosynthetic unit. Gaffron and Wohl later interpreted the experiment and realized that the light absorbed by the photosynthetic unit was transferred. This reaction occurs at the reaction center of Photosystem II and takes place in cyanobacteria, algae and green plants. Photosystem II Photosystem II is the photosystem that generates the two electrons that will eventually reduce NADP+ in ferredoxin-NADP-reductase. Photosystem II is present on the thylakoid membranes inside chloroplasts, the site of photosynthesis in green plants. The structure of Photosystem II is remarkably similar to the bacterial reaction center, and it is theorized that they share a common ancestor. The core of Photosystem II consists of two subunits referred to as D1 and D2. These two subunits are similar to the L and M subunits present in the bacterial reaction center. 
Photosystem II differs from the bacterial reaction center in that it has many additional subunits that bind additional chlorophylls to increase efficiency. The overall reaction catalyzed by Photosystem II is: 2Q + 2H2O + hν → O2 + 2QH2 Q represents the oxidized form of plastoquinone while QH2 represents its reduced form. This process of reducing quinone is comparable to that which takes place in the bacterial reaction center. Photosystem II obtains electrons by oxidizing water in a process called photolysis. Molecular oxygen is a byproduct of this process, and it is this reaction that supplies the atmosphere with oxygen. The fact that the oxygen from green plants originated from water was first deduced by the Canadian-born American biochemist Martin David Kamen. He used a stable isotope of oxygen, 18O, to trace the path of the oxygen from water to gaseous molecular oxygen. This reaction is catalyzed by a reactive center in Photosystem II containing four manganese ions. The reaction begins with the excitation of a pair of chlorophyll molecules similar to those in the bacterial reaction center. Due to the presence of chlorophyll a, as opposed to bacteriochlorophyll, Photosystem II absorbs light at a shorter wavelength. The pair of chlorophyll molecules at the reaction center are often referred to as P680. When the photon has been absorbed, the resulting high-energy electron is transferred to a nearby pheophytin molecule. This is above and to the right of the pair on the diagram and is coloured grey. The electron travels from the pheophytin molecule through two plastoquinone molecules, the first tightly bound, the second loosely bound. The tightly bound molecule is shown above the pheophytin molecule and is colored red. The loosely bound molecule is to the left of this and is also colored red. This flow of electrons is similar to that of the bacterial reaction center. Two electrons are required to fully reduce the loosely bound plastoquinone molecule to QH2 as well as the uptake of two protons. The difference between Photosystem II and the bacterial reaction center is the source of the electron that neutralizes the pair of chlorophyll a molecules. In the bacterial reaction center, the electron is obtained from a reduced compound haem group in a cytochrome subunit or from a water-soluble cytochrome-c protein. Every time the P680 absorbs a photon, it gives off an electron to pheophytin, gaining a positive charge. After this photoinduced charge separation, P680+ is a very strong oxidant of high energy. It passes its energy to water molecules that are bound at the manganese center directly below the pair and extracts an electron from them. This center, below and to the left of the pair in the diagram, contains four manganese ions, a calcium ion, a chloride ion, and a tyrosine residue. Manganese is adept at these reactions because it is capable of existing in four oxidation states: Mn2+, Mn3+, Mn4+ and Mn5+. Manganese also forms strong bonds with oxygen-containing molecules such as water. The process of oxidizing two molecules of water to form an oxygen molecule requires four electrons. The water molecules that are oxidized in the manganese center are the source of the electrons that reduce the two molecules of Q to QH2. To date, this water splitting catalytic center has not been reproduced by any man-made catalyst. Photosystem I After the electron has left Photosystem II it is transferred to a cytochrome b6f complex and then to plastocyanin, a blue copper protein and electron carrier. 
The plastocyanin complex carries the electron that will neutralize the pair in the next reaction center, Photosystem I. As with Photosystem II and the bacterial reaction center, a pair of chlorophyll a molecules initiates photoinduced charge separation. This pair is referred to as P700, where 700 is a reference to the wavelength at which the chlorophyll molecules absorb light maximally. The P700 lies in the center of the protein. Once photoinduced charge separation has been initiated, the electron travels down a pathway through a chlorophyll α molecule situated directly above the P700, through a quinone molecule situated directly above that, through three 4Fe-4S clusters, and finally to an interchangeable ferredoxin complex. Ferredoxin is a soluble protein containing a 2Fe-2S cluster coordinated by four cysteine residues. The positive charge on the high-energy P700+ is neutralized by the transfer of an electron from plastocyanin, which receives energy eventually used to convert QH2 back to Q. Thus the overall reaction catalyzed by Photosystem I is: Pc(Cu+) + Fd[ox] + hν → Pc(Cu2+) + Fd[red] The cooperation between Photosystems I and II creates an electron and proton flow from H2O to NADP+, producing NADPH needed for glucose synthesis. This pathway is called the 'Z-scheme' because the redox diagram from H2O to NADP+ via P680 and P700 resembles the letter Z. See also Dioxygen in biological reactions (oxygen in biological processes) Light-harvesting complex Photosynthesis Photosystem Phycobilisome Photosynthetic reaction center protein family References External links Light reactions Photosynthesis Integral membrane proteins
Photosynthetic reaction centre
Chemistry,Biology
3,524
76,998,906
https://en.wikipedia.org/wiki/Beijing%20Institute%20of%20Tracking%20and%20Telemetry%20Technology
The Beijing Institute of Tracking and Telecommunications Technology (BITTT) is a research institution of the Aerospace Force of the People's Liberation Army. The head office is located in the Beijing Space City in the north of Haidian district. The head of the institute, Dong Guangliang (董光亮), has also served since September 2015 as the Technical Director of the Control and Communication Systems of the Manned Space Program of the PRC. The BITTT is an observer member of the Consultative Committee for Space Data Systems. History The "Research Institute for Orbit Tracking and Communication Technology" was founded in May 1965 in connection with Project 651, the program to build and launch a Chinese satellite that began in January 1965. As part of this project, the institute was initially responsible for planning the radar facilities at the Jiuquan launch site, the facilities of the ground stations of the Chinese Space Control Network, and its headquarters, which was located in Weinan at the time. After China's first satellite, Dongfanghong 1, was successfully launched into space on 24 April 1970, the institute took on a leading role in the further expansion of the space control network, not only as a technical planning office, but also as an intermediary between the individual departments. As of the 2020s, BITTT has established itself as the main systems design and general contracting technical unit in the field of aerospace measurement, control and communication in China, and as the chief designer unit of the two major systems of measurement, control and communication for the landing site and for emergency rescue of China's manned space program. It is an affiliated unit of the Spacecraft Measurement and Control Committee of the Chinese Society of Astronautics, and a Class A design unit of the communication industry approved by the state. As a general contractor, BITTT was also responsible for the design and construction of the Inmarsat Beijing ground station, for the computer systems for the Beijing ground station of the Sinosat communication satellites, and for the computer systems of the satellite control centers of Nigeria and Venezuela. Academics BITTT has also been a teaching institution since 1985, and has had title-granting authority since 2008. Degrees of "Postgraduate Specialist" (专业硕士), roughly at master's level, are granted in the following four programs: Communication and Information Systems Engineering (通信与信息系统) Navigation, steering and control (导航、制导与控制) Signal and Data Processing (信号与信息处理) Applied computer science (计算机应用技术) In a peculiarity of its status as a military unit, there are admission rules requiring a minimum height (1.62 m for men and 1.58 m for women) and excluding short-sightedness, long-sightedness, and color blindness. The institute has 500 scientific and technological personnel, including more than 350 with master's degrees or above, more than 60 with doctoral degrees or above, and more than 180 holding senior researcher, senior engineer, or equivalent technical positions. It is composed of more than 10 laboratories. It has an above-average production of patents and awards. The institute also has a broad set of exchange and cooperation projects with more than 20 countries. Projects involved Beidou Satellite Navigation System Shenzhou program China Moon Program China Mars Program References People's Liberation Army Military units and formations established in the 1960s Space exploration Telemetry Tracking and Data Relay Satellite Chinese scientific instrument makers
Beijing Institute of Tracking and Telemetry Technology
Astronomy
687
1,543,837
https://en.wikipedia.org/wiki/Phase%20response
In signal processing, phase response is the relationship between the phase of a sinusoidal input and the phase of the output signal passing through any device that accepts an input and produces an output signal, such as an amplifier or a filter. Amplifiers, filters, and other devices are often categorized by their amplitude and/or phase response. The amplitude response is the ratio of output amplitude to input amplitude, usually expressed as a function of frequency. Similarly, the phase response is the phase of the output with the input as reference. The input is defined as zero phase. A phase response is not limited to lying between 0° and 360°, as phase can accumulate to arbitrarily large values. See also Group delay and phase delay References Trigonometry Wave mechanics Signal processing
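As a concrete illustration of amplitude and phase response, consider a first-order RC low-pass filter, whose transfer function is H(jω) = 1/(1 + jωRC). The sketch below (component values are arbitrary) evaluates both |H| and arg(H) across frequency.

```python
import numpy as np

R = 1_000.0   # ohms (arbitrary example value)
C = 100e-9    # farads (100 nF, arbitrary example value)
fc = 1.0 / (2 * np.pi * R * C)   # cutoff frequency, about 1.59 kHz here

freqs = np.logspace(1, 6, 6)     # 10 Hz .. 1 MHz
H = 1.0 / (1.0 + 1j * 2 * np.pi * freqs * R * C)

amplitude_db = 20 * np.log10(np.abs(H))   # amplitude response
phase_deg = np.degrees(np.angle(H))       # phase response (output relative to input)

for f, a, p in zip(freqs, amplitude_db, phase_deg):
    print(f"{f:10.0f} Hz: {a:7.2f} dB, phase {p:7.2f} deg")
# At the cutoff frequency the phase response of this filter is -45 degrees and
# the amplitude response is -3 dB; well above cutoff the phase approaches -90 degrees.
```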
Phase response
Physics,Technology,Engineering
148
30,602,470
https://en.wikipedia.org/wiki/Janssen%20Medal%20%28French%20Academy%20of%20Sciences%29
The Janssen Medal is an astrophysics award presented by the French Academy of Sciences to those who have made advances in this area of science. The award was founded in 1886, though the first medal was not awarded until a year later. The commission formed to decide on the first recipient of the medal selected the German physicist Gustav Kirchhoff for his work on the science of spectroscopy. However, Kirchhoff died aged 63 on 17 October 1887, a few months before the award would have been announced. Rather than chose a new recipient for the award, the commission announced at the Academy's session of 26 December 1887 that the inaugural medal would be placed on his grave, in "supreme honour of the memory of this great scholar of Heidelberg". The award had been intended to be biennial, but was awarded in 1888 and again in 1889. A statement in the 1889 volume of Comptes rendus de l'Académie des sciences clarified that the award would be presented annually for the first seven years, and then biennially from 1894 onwards. This award is distinct from the Prix Jules Janssen (created in 1897), an annual award presented by the French Astronomical Society. Both awards are named for the French astronomer Pierre Janssen (1824–1907) (better known as Jules Janssen). Janssen founded the Academy award, and was a member of the inaugural commission. Laureates 1887 – Gustav Kirchhoff (posthumously) 1888 – William Huggins 1889 – Norman Lockyer 1890 – Charles Augustus Young 1891 – Georges Rayet 1892 – Pietro Tacchini 1893 – Samuel Pierpont Langley 1894 – George Ellery Hale 1896 – Henri Deslandres 1898 – Aristarkh Belopolsky 1900 – Edward Emerson Barnard 1902 – Aymar de la Baume Pluvinel 1904 – Aleksey Pavlovitch Hansky 1905 – Gaston Millochau (silver-gilt award) 1906 – Annibale Ricco 1908 – Pierre Puiseux 1910 – William Wallace Campbell 1912 – Alfred Perot 1914 – René Jarry-Desloges 1916 – Charles Fabry 1918 – Stanislas Chevalier 1920 – William Coblentz 1922 – Carl Størmer 1924 – George Willis Ritchey 1926 – Francisco Miranda da Costa Lobo 1928 – William Hammond Wright 1930 – Bernard Ferdinand Lyot 1932 – Alexandre Dauvillier 1934 – Walter Sydney Adams 1936 – Henry Norris Russell 1938 – Bertil Lindblad 1940 – Harlow Shapley 1943 – Lucien Henri d'Azambuja 1944 – Jean Rösch 1946 – Jan Hendrik Oort 1949 – Daniel Chalonge 1952 – André Couder 1955 – Otto Struve 1958 – André Lallemand 1961 – Pol Swings 1964 – Jean-François Denisse 1967 – Bengt Strömgren 1970 – Gérard Wlérick 1973 – Lucienne Devan (silver-gilt award) 1976 – Paul Ledoux 1979 – Jean Delhaye 1982 – Georges Michaud 1985 – Pierre Lacroute 1988 – Lodewijk Woltjer. 1990 – Pierre Charvin 1992 – Henk C. Van de Hulst 1994 – Serge Koutchmy 1999 – Jean-Marie Mariotti 2003 – Gilbert Vedrenne 2007 – Bernard Fort 2011 – Francois Mignard 2019 – Eric Hosy The list above is complete up to 2019. See also List of astronomy awards References External links Les Prix Thematiques en Sciences de l'Univers, includes a description of the Janssen Medal (French Academy of Sciences) Article and photograph on the presentation of the 2007 award to Bernard Fort (Paris Institute of Astrophysics) Astronomy prizes French science and technology awards Awards of the French Academy of Sciences 1886 establishments in France Awards established in 1886
Janssen Medal (French Academy of Sciences)
Astronomy,Technology
751
24,110,654
https://en.wikipedia.org/wiki/China%20Atomic%20Energy%20Authority
China Atomic Energy Authority (CAEA) is the regulatory agency that oversees the development of nuclear energy in the People's Republic of China. History The agency was created out of the regulatory functions department of the China National Nuclear Corporation in 1999–2000. Agency structure The Administration Department This department is responsible for logistics and safeguards for the CAEA, and for the management of physical protection for nuclear material and fire protection for nuclear power plants. The System Engineering Department This department administers major nuclear R&D projects and draws up development plans for nuclear power plants and nuclear fuels. It is also responsible for the construction, management and supervision of nuclear projects, and for the routine work of nuclear emergency response. Department of International Cooperation This department is responsible for organizing and coordinating exchange and cooperation with governments and international organizations, licensing nuclear exports and imports, and issuing governmental permits. The General Planning Department This department is responsible for approving the draft plan for nuclear energy, and drawing up the annual plan for nuclear energy development. The Science, Technology and Quality Control Department This department is responsible for organizing preliminary studies on nuclear energy and drawing up nuclear technical standards. See also International Atomic Energy Agency Electricity sector in China References Governmental nuclear organizations Science and technology in the People's Republic of China Government agencies of China Nuclear power in China Ministry of Industry and Information Technology
China Atomic Energy Authority
Engineering
266
1,251,158
https://en.wikipedia.org/wiki/Hokkaido%20wolf
The Hokkaido wolf (Canis lupus hattai), also known as the and in Russia as the Sakhalin wolf, is an extinct subspecies of gray wolf that once inhabited coastal northeast Asia. Its nearest relatives were the wolves of North America rather than Asia. It was exterminated in Hokkaido during the Meiji Restoration period, when American-style agricultural reforms incorporated the use of strychnine-laced baits to kill livestock predators. Some taxonomists believe that it survived up until 1945 on the island of Sakhalin. It was one of two subspecies that were once found in the Japanese archipelago, the other being the Japanese wolf (C. l. hodophilax). Taxonomy and origin The Ezō wolf or Hokkaidō wolf (Canis lupus hattai Kishida, 1931) is an extinct subspecies of the gray wolf (Canis lupus). In 1890, the skulls of Japanese wolves (Canis lupus hodophilax) were compared with those of wolves from Hokkaido in the British Museum. The specimens were noticeably different and explained to be local varieties of the same subspecies. Later, explorers to the Kuril islands of Iturup and Kunashir believed that the wolves they saw there were the Japanese subspecies. In 1889, the wolf became extinct on Hokkaido island. In 1913, Hatta Suburō proposed that the wolf might be related to the Siberian wolf but had no living specimens to undertake further analysis. In 1931, Kishida Kyukishi described a skull from a wolf killed in 1881 and declared it to be a distinct subspecies. In 1935, Pocock examined one of the specimens in the British Museum that had been obtained in 1886 and named it Canis lupus rex because of its large size. Analysis of its mitochondrial DNA showed it to be identical with gray wolf specimens from Canada, Alaska and the US, indicating that the ancestor of the Ezo wolf was genetically related to the ancestor of North American wolves. The coalescence time back to the most recent common ancestor for two Ezo wolf samples was estimated to be 3,100 (between 700 and 5,900) years ago, and the Ezo wolf is estimated to have diverged from North American wolves 9,300 (between 5,700 and 13,700) years ago. These estimates indicate that Ezo wolves colonized Japan more recently than Japanese wolves from the Asian continent during the last glacial period via a land bridge with Sakhalin Island, which existed up to 10,000 years ago. The Tsugaru Strait was 3 km wide during the last glacial period, which prevented Ezo wolves from colonizing Honshu and they likely arrived in Japan less than 14,000 years ago. A more recent study estimates their arrival in Hokkaido less than 10,000 YBP. Stable isotope analysis measures the amount of different isotopes of the same element contained within a specimen. When conducted on the bone of an extinct specimen, it informs researchers about the diet of the specimen. In 2017, radiocarbon dating and an isotopic analysis of bone collagen was conducted Ezo wolf specimens. The radiocarbon dating confirmed that the wolves spanned different time periods dating back as far as 4,000 years ago. The isotopic analysis showed that feeding habits of these wolves were similar to the modern "coastal" British Columbia wolf, with both populations dependent on both marine and terrestrial prey. See further: Evolution of the wolf#Into America and Japan Range Ezo is a Japanese word meaning "foreigner" and referred to the historical lands of the Ainu people to the north of Honshu, which the Japanese named Ezo-chi. 
The Ainu were to be found on Hokkaido, Sakhalin, the Kuril islands, and as far north as the Kamchatka Peninsula. The range of the Ezo wolf was the Hokkaido and Sakhalin islands, Iturup and Kunashir islands just to the east of Hokkaido in the Kuril archipelago, and the Kamchatka Peninsula. It became extinct on Hokkaido island in 1889. It was reported to be surviving in Sakhalin island and perhaps the Kuril Islands in 1945;Mech, L David (1970) "The wolf: the Ecology and Behavior of an Endangered Species", published for the American Museum of Natural History by the Natural History Press, pages 352-3 however, according to the Soviet zoologist Vladimir Heptner it had not been seen on Sakhalin at the beginning of the 20th century, with vagrant specimens of Siberian forest wolf occasionally crossing into the island via the Nevelskoy Strait, though not permanently settling. Information on the animal's presence on the Kuril islands is often contradictory or erroneous. It was tentatively recorded to inhabit Kunashir, Iturup and Paramushir, while wolves reported on Shumshu were later dismissed as feral dogs. A survey undertaken in the mid-1960s could not find a wolf on any of the Kuril islands but did find many feral dogs. Description A study of Ezo wolf morphology showed that it was similar in size to mainland Asian and North American wolves. It stood 70–80 cm at the withers. Soviet zoologist Vladimir Geptner wrote that the wolves (classed under the nomen dubium C. l. altaicus) of Kamchatka (where C. l. hattai's range is supposed to have encompassed) are just as large as C. l. lupus, with light gray fur with dark guard hairs running along the back. Edwin Dun, in his unpublished memoirs, described it in the following terms: History In Ainu culture The Ainu revered the wolf as the deity Horkew Kamuy ("howling god"), in recognition of the animal's similar hunting habits. Wolves were sacrificed in "sending-away" iomante ceremonies, and some Ainu communities, such as those in Tokachi and Hidaka, held origin myths linking the birth of the Ainu to a coupling between a white wolf and a goddess. Ainu hunters would leave portions of their kills for wolves, and it was believed that hunters could share a wolf's kill if they politely cleared their throats in its presence. Because of the wolf's special status in Ainu culture, hunters were forbidden from killing wolves with poison arrows or firearms, and wasting the pelt and meat of a wolf was thought to provoke wolves into killing the hunter responsible. The Ainu did not differentiate wolves from their domestic dogs, and would strive to reproduce wolf traits in their dogs by allowing dogs in heat to roam freely in wolf-inhabited areas in order to produce hybrid offspring. Extinction on Hokkaido island With the onset of the Meiji Restoration in 1868, Emperor Meiji officially ended Japan's long-standing isolationism through the Charter Oath, and sought to modernize Japan's agriculture by replacing its dependence on rice farming with American-style ranching. Ohio rancher Edwin Dun was hired as a scientific adviser in 1873 for the Kaitakushi (Hokkaido Development Agency), and began promoting ranching with state-run experimental farms. As wolf predation was inhibiting the propagation of horses in southeastern Hokkaidō and allegedly causing hardship to Ainu deer hunters, the Meiji government declared wolves as "noxious animals" (yūgai dōbutsu''), entrusting Dun to oversee the animals' extermination. 
Dun began his work at the Niikappu ranch with a mass-poisoning campaign involving the use of strychnine-laced baits. This was supplemented by a bounty system established by the Kaitakushi. References External links Extinct subspecies of Canis lupus Extinct mammals of Asia Extinct animals of Japan Mammal extinctions since 1500 Species made extinct by deliberate extirpation efforts Mammals described in 1931 Taxa named by Kyukichi Kishida Hokkaido Fauna of Sakhalin Fauna of the Russian Far East Fauna of the Kuril Islands
Hokkaido wolf
Biology
1,605
3,759,330
https://en.wikipedia.org/wiki/Primer%20extension
Primer extension is a technique whereby the 5' ends of RNA can be mapped – that is, they can be sequenced and properly identified. Primer extension can be used to determine the start site of transcription (the end site cannot be determined by this method), provided the sequence of the gene is known. This technique requires a radiolabelled primer (usually 20–50 nucleotides in length) which is complementary to a region of the mRNA downstream of (3' to) the transcription start site, typically close to the 5' end so that the extension product is short enough to resolve on a gel. The primer is allowed to anneal to the RNA and reverse transcriptase is used to synthesize cDNA from the RNA until it reaches the 5' end of the RNA. By denaturing the RNA–cDNA hybrid and running the extended primer (cDNA) on an electrophoretic gel, it is possible to determine the transcriptional start site. This is usually done by comparing its position on the gel with a DNA sequencing ladder (e.g. Sanger sequencing), preferably generated with the same primer on the DNA template strand. The exact nucleotide at which transcription starts can be pinpointed by matching the labelled extension product to the ladder band that shares the same migration distance on the gel. Primer extension offers an alternative to a nuclease protection assay (S1 nuclease mapping) for quantifying and mapping RNA transcripts. The hybridization probe for primer extension is a synthesized oligonucleotide, whereas S1 mapping requires isolation of a DNA fragment. Both methods provide information on where an mRNA starts and provide an estimate of the concentration of a transcript by the intensity of the transcript band on the resulting autoradiograph. Unlike S1 mapping, however, primer extension can only be used to locate the 5'-end of an mRNA transcript because the DNA synthesis required for the assay relies on reverse transcriptase (which polymerizes only in the 5' → 3' direction). Primer extension is unaffected by splice sites and is thus preferable in situations where intervening splice sites prevent S1 mapping. Finally, primer extension is more accurate than S1 mapping because the S1 nuclease used in S1 mapping can "nibble off" ends of the RNA-DNA hybrid or fail to degrade the single-stranded regions completely, making a transcript appear either shorter or longer than it is. References https://www.nationaldiagnostics.com/electrophoresis/article/primer-extension Shenk, T. E., C. Rhodes, P. W. J. Rigby, and P. Berg. "Biochemical Procedure for Production of Small Deletions in Simian Virus 40 DNA." Proceedings of the National Academy of Sciences 72.4 (1975): 1392–1396. Print. Molecular genetics
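The logic of reading a start site from the extension product can be sketched computationally. In the toy example below, both the primer and the transcript sequence are entirely hypothetical and chosen only for illustration: the primer-binding site is located on the transcript, and the length of the fully extended cDNA equals the distance from the transcript's 5' end to the downstream edge of the binding site, which is what the gel band reports.

```python
def reverse_complement(seq: str) -> str:
    """Reverse complement of a DNA sequence (the primer is given as DNA)."""
    pairs = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(pairs[base] for base in reversed(seq))

# Hypothetical mRNA written as DNA (T instead of U), 5' -> 3'.
transcript = "GATTACAGGCTTACGATCCGTAGGCTAAGCTTGCAATCCGGA"
# Hypothetical radiolabelled primer, also written 5' -> 3'.
primer = "CCGGATTGCAAGC"

# The primer anneals where the transcript matches its reverse complement.
site = transcript.find(reverse_complement(primer))
if site == -1:
    raise ValueError("primer does not anneal to this transcript")

# Reverse transcriptase extends the primer back to the transcript's 5' end,
# so the product length is the stretch upstream of the binding site (site
# bases, 0-indexed) plus the primer-covered stretch itself.
product_length = site + len(primer)
print(f"expected extension product: {product_length} nucleotides")
# Comparing this band against a sequencing ladder made with the same primer
# pinpoints the transcription start (+1) nucleotide.
```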
Primer extension
Chemistry,Biology
578
3,510,908
https://en.wikipedia.org/wiki/Minkowski%20functional
In mathematics, in the field of functional analysis, a Minkowski functional (after Hermann Minkowski) or gauge function is a function that recovers a notion of distance on a linear space. If is a subset of a real or complex vector space then the or of is defined to be the function valued in the extended real numbers, defined by where the infimum of the empty set is defined to be positive infinity (which is a real number so that would then be real-valued). The set is often assumed/picked to have properties, such as being an absorbing disk in , that guarantee that will be a real-valued seminorm on In fact, every seminorm on is equal to the Minkowski functional (that is, ) of any subset of satisfying (where all three of these sets are necessarily absorbing in and the first and last are also disks). Thus every seminorm (which is a defined by purely algebraic properties) can be associated (non-uniquely) with an absorbing disk (which is a with certain geometric properties) and conversely, every absorbing disk can be associated with its Minkowski functional (which will necessarily be a seminorm). These relationships between seminorms, Minkowski functionals, and absorbing disks is a major reason why Minkowski functionals are studied and used in functional analysis. In particular, through these relationships, Minkowski functionals allow one to "translate" certain properties of a subset of into certain properties of a function on The Minkowski function is always non-negative (meaning ). This property of being nonnegative stands in contrast to other classes of functions, such as sublinear functions and real linear functionals, that do allow negative values. However, might not be real-valued since for any given the value is a real number if and only if is not empty. Consequently, is usually assumed to have properties (such as being absorbing in for instance) that will guarantee that is real-valued. Definition Let be a subset of a real or complex vector space Define the of or the associated with or induced by as being the function valued in the extended real numbers, defined by (recall that the infimum of the empty set is , that is, ). Here, is shorthand for For any if and only if is not empty. The arithmetic operations on can be extended to operate on where for all non-zero real The products and remain undefined. Some conditions making a gauge real-valued In the field of convex analysis, the map taking on the value of is not necessarily an issue. However, in functional analysis is almost always real-valued (that is, to never take on the value of ), which happens if and only if the set is non-empty for every In order for to be real-valued, it suffices for the origin of to belong to the or of in If is absorbing in where recall that this implies that then the origin belongs to the algebraic interior of in and thus is real-valued. Characterizations of when is real-valued are given below. Motivating examples Example 1 Consider a normed vector space with the norm and let be the unit ball in Then for every Thus the Minkowski functional is just the norm on Example 2 Let be a vector space without topology with underlying scalar field Let be any linear functional on (not necessarily continuous). Fix Let be the set and let be the Minkowski functional of Then The function has the following properties: It is : It is : for all scalars It is : Therefore, is a seminorm on with an induced topology. This is characteristic of Minkowski functionals defined via "nice" sets. 
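Written out explicitly, with notation chosen here for concreteness ($X$ for the vector space, $K$ for the subset, $p_K$ for its gauge), the definition described above and the unit-ball example take the following standard form:

```latex
% Gauge (Minkowski functional) of a subset K of a real or complex vector space X:
% p_K maps X into the extended reals [0, +infinity].
p_K(x) \;=\; \inf \{\, r > 0 \;:\; x \in r K \,\},
\qquad \inf \varnothing := +\infty .

% Example 1 (unit ball of a normed space): if K = \{ x \in X : \|x\| \le 1 \},
% then p_K(x) = \|x\| for every x in X, so the gauge recovers the norm.
```

For the "nice" sets referred to above, namely absorbing, convex, balanced subsets, this construction is exactly what produces seminorms on $X$.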
There is a one-to-one correspondence between seminorms and the Minkowski functional given by such sets. What is meant precisely by "nice" is discussed in the section below. Notice that, in contrast to a stronger requirement for a norm, need not imply In the above example, one can take a nonzero from the kernel of Consequently, the resulting topology need not be Hausdorff. Common conditions guaranteeing gauges are seminorms To guarantee that it will henceforth be assumed that In order for to be a seminorm, it suffices for to be a disk (that is, convex and balanced) and absorbing in which are the most common assumption placed on More generally, if is convex and the origin belongs to the algebraic interior of then is a nonnegative sublinear functional on which implies in particular that it is subadditive and positive homogeneous. If is absorbing in then is positive homogeneous, meaning that for all real where If is a nonnegative real-valued function on that is positive homogeneous, then the sets and satisfy and if in addition is absolutely homogeneous then both and are balanced. Gauges of absorbing disks Arguably the most common requirements placed on a set to guarantee that is a seminorm are that be an absorbing disk in Due to how common these assumptions are, the properties of a Minkowski functional when is an absorbing disk will now be investigated. Since all of the results mentioned above made few (if any) assumptions on they can be applied in this special case. Convexity and subadditivity A simple geometric argument that shows convexity of implies subadditivity is as follows. Suppose for the moment that Then for all Since is convex and is also convex. Therefore, By definition of the Minkowski functional But the left hand side is so that Since was arbitrary, it follows that which is the desired inequality. The general case is obtained after the obvious modification. Convexity of together with the initial assumption that the set is nonempty, implies that is absorbing. Balancedness and absolute homogeneity Notice that being balanced implies that Therefore Algebraic properties Let be a real or complex vector space and let be an absorbing disk in is a seminorm on is a norm on if and only if does not contain a non-trivial vector subspace. for any scalar If is an absorbing disk in and then If is a set satisfying then is absorbing in and where is the Minkowski functional associated with that is, it is the gauge of In particular, if is as above and is any seminorm on then if and only if If satisfies then Topological properties Assume that is a (real or complex) topological vector space (TVS) (not necessarily Hausdorff or locally convex) and let be an absorbing disk in Then where is the topological interior and is the topological closure of in Importantly, it was assumed that was continuous nor was it assumed that had any topological properties. Moreover, the Minkowski functional is continuous if and only if is a neighborhood of the origin in If is continuous then Minimal requirements on the set This section will investigate the most general case of the gauge of subset of The more common special case where is assumed to be an absorbing disk in was discussed above. Properties All results in this section may be applied to the case where is an absorbing disk. Throughout, is any subset of The proofs of these basic properties are straightforward exercises so only the proofs of the most important statements are given. 
The proof that a convex subset that satisfies is necessarily absorbing in is straightforward and can be found in the article on absorbing sets. For any real so that taking the infimum of both sides shows that This proves that Minkowski functionals are strictly positive homogeneous. For to be well-defined, it is necessary and sufficient that thus for all and all real if and only if is real-valued. The hypothesis of statement (7) allows us to conclude that for all and all scalars satisfying Every scalar is of the form for some real where and is real if and only if is real. The results in the statement about absolute homogeneity follow immediately from the aforementioned conclusion, from the strict positive homogeneity of and from the positive homogeneity of when is real-valued. Examples If is a non-empty collection of subsets of then for all where Thus for all If is a non-empty collection of subsets of and satisfies then for all The following examples show that the containment could be proper. Example: If and then but which shows that its possible for to be a proper subset of when The next example shows that the containment can be proper when the example may be generalized to any real Assuming that the following example is representative of how it happens that satisfies but Example: Let be non-zero and let so that and From it follows that That follows from observing that for every which contains Thus and However, so that as desired. Positive homogeneity characterizes Minkowski functionals The next theorem shows that Minkowski functionals are those functions that have a certain purely algebraic property that is commonly encountered. If holds for all and real then so that Only (1) implies (3) will be proven because afterwards, the rest of the theorem follows immediately from the basic properties of Minkowski functionals described earlier; properties that will henceforth be used without comment. So assume that is a function such that for all and all real and let For all real so by taking for instance, it follows that either or Let It remains to show that It will now be shown that if or then so that in particular, it will follow that So suppose that or in either case for all real Now if then this implies that that for all real (since ), which implies that as desired. Similarly, if then for all real which implies that as desired. Thus, it will henceforth be assumed that a positive real number and that (importantly, however, the possibility that is or has not yet been ruled out). Recall that just like the function satisfies for all real Since if and only if so assume without loss of generality that and it remains to show that Since which implies that (so in particular, is guaranteed). It remains to show that which recall happens if and only if So assume for the sake of contradiction that and let and be such that where note that implies that Then This theorem can be extended to characterize certain classes of -valued maps (for example, real-valued sublinear functions) in terms of Minkowski functionals. For instance, it can be used to describe how every real homogeneous function (such as linear functionals) can be written in terms of a unique Minkowski functional having a certain property. Characterizing Minkowski functionals on star sets Characterizing Minkowski functionals that are seminorms In this next theorem, which follows immediately from the statements above, is assumed to be absorbing in and instead, it is deduced that is absorbing when is a seminorm. 
It is also not assumed that is balanced (which is a property that is often required to have); in its place is the weaker condition that for all scalars satisfying The common requirement that be convex is also weakened to only requiring that be convex. Positive sublinear functions and Minkowski functionals It may be shown that a real-valued subadditive function on an arbitrary topological vector space is continuous at the origin if and only if it is uniformly continuous, where if in addition is nonnegative, then is continuous if and only if is an open neighborhood in If is subadditive and satisfies then is continuous if and only if its absolute value is continuous. A is a nonnegative homogeneous function that satisfies the triangle inequality. It follows immediately from the results below that for such a function if then Given the Minkowski functional is a sublinear function if and only if it is real-valued and subadditive, which is happens if and only if and is convex. Correspondence between open convex sets and positive continuous sublinear functions Let be an open convex subset of If then let and otherwise let be arbitrary. Let be the Minkowski functional of where this convex open neighborhood of the origin satisfies Then is a continuous sublinear function on since is convex, absorbing, and open (however, is not necessarily a seminorm since it is not necessarily absolutely homogeneous). From the properties of Minkowski functionals, we have from which it follows that and so Since this completes the proof. See also Notes References Further reading F. Simeski, A. M. P. Boelens, and M. Ihme. "Modeling Adsorption in Silica Pores via Minkowski Functionals and Molecular Electrostatic Moments". Energies 13 (22) 5976 (2020). . Convex analysis Functional analysis Hermann Minkowski
Minkowski functional
Mathematics
2,549
42,500,102
https://en.wikipedia.org/wiki/List%20of%20column-oriented%20DBMSes
This article is a list of column-oriented database management system software. Free and open-source software (FOSS) Platform as a Service (PaaS) Amazon Redshift Microsoft Azure Synapse Analytics (formerly Azure SQL Data Warehouse) Google BigQuery Oracle Autonomous Data Warehouse Cloud (ADWC) Snowflake Computing MariaDB SkySQL Actian Avalanche Vertica Accelerator CelerData Proprietary Actian Vector (formerly VectorWise) Actuate Corporation BIRT Analytics ColumnarDB Dimensional Insight Endeca EXASOL EXtremeDB Hydrolix IBM Db2 Infobright KDB kdb+ memSQL Microsoft SQL Server Oracle Database (in-memory option) SAND CDBMS SAP HANA SAP IQ SenSage SQream Teradata Vertica (developed from open source C-Store) Yellowbrick Data References Column oriented Database management systems
List of column-oriented DBMSes
Technology
180
49,182,501
https://en.wikipedia.org/wiki/Music%20technology
Music technology is the study or the use of any device, mechanism, machine or tool by a musician or composer to make or perform music; to compose, notate, playback or record songs or pieces; or to analyze or edit music. History The earliest known applications of technology to music was prehistoric peoples' use of a tool to hand-drill holes in bones to make simple flutes. Ancient Egyptians developed stringed instruments, such as harps, lyres and lutes, which required making thin strings and some type of peg system for adjusting the pitch of the strings. Ancient Egyptians also used wind instruments such as double clarinets and percussion instruments such as cymbals. In ancient Greece, instruments included the double-reed aulos and the lyre. Numerous instruments are referred to in the Bible, including the cornu, pipe, lyre, harp, and bagpipe. During Biblical times, the cornu, flute, horn, pipe organ, pipe, and trumpet were also used. During the Middle Ages, music notation was used to create a written record of the notes of plainchant melodies. During the Renaissance music era (c. 1400–1600), the printing press was invented, allowing for sheet music to be mass-produced (previously having been hand-copied). This helped to spread musical styles more quickly and across a larger area. During the Baroque era (c. 1600–1750), technologies for keyboard instruments developed, which led to improvements in the designs of pipe organs and the harpsichord, and the development of a new keyboard instrument in approximately 1700, the piano. In the Classical era, Beethoven added new instruments to the orchestra such as the piccolo, contrabassoon, trombones, and untuned percussion in his Ninth Symphony. During the Romantic music era (c. 1810–1900), one of the key ways that new compositions became known to the public was by the sales of sheet music, which amateur music lovers would perform at home on their piano or other instruments. In the 19th century, new instruments such as saxophones, euphoniums, Wagner tubas, and cornets were added to the orchestra. Around the turn of the 20th century, with the invention and popularization of the gramophone record (commercialized in 1892), and radio broadcasting (starting on a commercial basis ca. 1919–1920), there was a vast increase in music listening, and it was easier to distribute music to a wider public. The development of sound recording had a major influence on the development of popular music genres because it enabled recordings of songs and bands to be widely distributed. The invention of sound recording also gave rise to a new subgenre of classical music: the Musique concrete style of electronic composition. The invention of multitrack recording enabled pop bands to overdub many layers of instrument tracks and vocals, creating new sounds that would not be possible in a live performance. In the early 20th century, electric technologies such as electromagnetic pickups, amplifiers and loudspeakers were used to develop new electric instruments such as the electric piano (1929), electric guitar (1931), electro-mechanical organ (1934) and electric bass (1935). The 20th-century orchestra gained new instruments and new sounds. Some orchestra pieces used the electric guitar, electric bass or the Theremin. The invention of the miniature transistor in 1947 enabled the creation of a new generation of synthesizers, which were used first in pop music in the 1960s. Unlike prior keyboard instrument technologies, synthesizer keyboards do not have strings, pipes, or metal tines. 
A synthesizer keyboard creates musical sounds using electronic circuitry, or, later, computer chips and software. Synthesizers became popular in the mass market in the early 1980s. With the development of powerful microchips, a number of new electronic or digital music technologies were introduced in the 1980s and subsequent decades, including drum machines and music sequencers. Electronic and digital music technologies are any device, such as a computer, an electronic effects unit or software, that is used by a musician or composer to help make or perform music. The term usually refers to the use of electronic devices, computer hardware and computer software that is used in the performance, playback, composition, sound recording and reproduction, mixing, analysis and editing of music. Mechanical technologies Prehistoric eras Findings from paleolithic archaeology sites suggest that prehistoric people used carving and piercing tools to create instruments. Archeologists have found Paleolithic flutes carved from bones in which lateral holes have been pierced. The disputed Divje Babe flute, a perforated cave bear femur, is at least 40,000 years old. Instruments such as the seven-holed flute and various types of stringed instruments, such as the Ravanahatha, have been recovered from the Indus Valley civilization archaeological sites. India has one of the oldest musical traditions in the world—references to Indian classical music (marga) are found in the Vedas, ancient scriptures of the Hindu tradition. The earliest and largest collection of prehistoric musical instruments was found in China and dates back to between 7000 and 6600 BC. Ancient Egypt In prehistoric Egypt, music and chanting were commonly used in magic and rituals, and small shells were used as whistles. Evidence of Egyptian musical instruments dates to the Predynastic period, when funerary chants played an important role in Egyptian religion and were accompanied by clappers and possibly the flute. The most reliable evidence of instrument technologies dates from the Old Kingdom, when technologies for constructing harps, flutes and double clarinets were developed. Percussion instruments, lyres and lutes were used by the Middle Kingdom. Metal cymbals were used by ancient Egyptians. In the early 21st century, interest in the music of the pharaonic period began to grow, inspired by the research of such foreign-born musicologists as Hans Hickmann. By the early 21st century, Egyptian musicians and musicologists led by the musicology professor Khairy El-Malt at Helwan University in Cairo had begun to reconstruct musical instruments of ancient Egypt, a project that is ongoing. Indus Valley The Indus Valley civilization has sculptures that show old musical instruments, like the seven-holed flute. Various types of stringed instruments and drums have been recovered from Harappa and Mohenjo Daro by excavations carried out by Sir Mortimer Wheeler. References in the Bible According to the Scriptures, Jubal was the father of harpists and organists (Gen. 4:20–21). The harp was among the chief instruments and the favorite of David, and it is referred to more than fifty times in the Bible. It was used at both joyful and mournful ceremonies, and its use was "raised to its highest perfection under David" (1 Sam. 16:23). Lockyer adds that "It was the sweet music of the harp that often dispossessed Saul of his melancholy (1 Sam. 16:14–23; 18:10–11). 
When the Jews were captive in Babylon they hung their harps up and refused to use them while in exile; earlier, the harp had been among the instruments used in the Temple (1 Kgs. 10:12). Another stringed instrument of the harp class, and one also used by the ancient Greeks, was the lyre. A similar instrument was the lute, which had a large pear-shaped body, long neck, and fretted fingerboard with head screws for tuning. Coins displaying musical instruments, the Bar Kochba Revolt coinage, were issued by the Jews during the Second Jewish Revolt against the Roman Empire of 132–135 AD. In addition to those, there was the psaltery, another stringed instrument that is referred to almost thirty times in Scripture. According to Josephus, it had twelve strings and was played with a quill, not with the hand. Another writer suggested that it was like a guitar, but with a flat triangular form and strung from side to side. Among the wind instruments used in the biblical period were the cornet, flute, horn, organ, pipe, and trumpet. There were also silver trumpets and the double oboe. Werner concludes, from the measurements taken of the trumpets on the Arch of Titus in Rome and from coins, that "the trumpets were very high pitched with thin body and shrill sound." He adds that in War of the Sons of Light Against the Sons of Darkness, a manual for military organization and strategy discovered among the Dead Sea Scrolls, these trumpets "appear clearly capable of regulating their pitch pretty accurately, as they are supposed to blow rather complicated signals in unison." Whitcomb writes that the pair of silver trumpets were fashioned according to Mosaic law and were probably among the trophies that the Emperor Titus brought to Rome when he conquered Jerusalem. She adds that on the Arch raised to the victorious Titus, "there is a sculptured relief of these trumpets, showing their ancient form." The flute was commonly used for festal and mourning occasions, according to Whitcomb. "Even the poorest Hebrew was obliged to employ two flute players to perform at his wife's funeral." The shofar (the horn of a ram) is still used for special liturgical purposes such as the Jewish New Year services in orthodox communities. As such, it is not considered a musical instrument but an instrument of theological symbolism that has been intentionally kept to its primitive character. In ancient times it was used for warning of danger, to announce the new moon or beginning of Sabbath, or to announce the death of a notable. "In its strictly ritual usage it carried the cries of the multitude to God," writes Werner. Among the percussion instruments were bells, cymbals, sistrum, tabret, hand drums, and tambourines. The tabret, or timbrel, was a small hand drum used for festive occasions and was considered a woman's instrument. In modern times it was often used by the Salvation Army. According to the Bible, when the children of Israel came out of Egypt and crossed the Red Sea, "Miriam took a timbrel in her hands; and all the women went out after her with timbrels and with dance." Ancient Greece In ancient Greece, musical instruments can be divided into three categories based on how sound is produced: string, wind, and percussion. The following were among the instruments used in the music of ancient Greece: the lyre: a strummed and occasionally plucked string instrument, essentially a hand-held zither built on a tortoise-shell frame, generally with seven or more strings tuned to the notes of one of the modes. 
The lyre was used to accompany others or even oneself for recitation and song. the kithara, also a strummed string instrument, more complicated than the lyre. It had a box-type frame with strings stretched from the cross-bar at the top to the sounding box at the bottom; it was held upright and played with a plectrum. The strings were tunable by adjusting wooden wedges along the cross-bar. the aulos, usually double, consisting of two double-reed (like an oboe) pipes, not joined but generally played with a mouth-band to hold both pipes steadily between the player's lips. Modern reconstructions indicate that they produced a low, clarinet-like sound. There is some confusion about the exact nature of the instrument; alternate descriptions indicate single reeds instead of double reeds. the Pan pipes, also known as panflute and syrinx (Greek συριγξ, so called for the nymph who was changed into a reed in order to hide from Pan), an ancient musical instrument based on the principle of the stopped pipe, consisting of a series of such pipes of gradually increasing length, tuned (by cutting) to a desired scale. Sound is produced by blowing across the top of the open pipe (like blowing across a bottle top). the hydraulis, a keyboard instrument, the forerunner of the modern organ. As the name indicates, the instrument used water to supply a constant flow of pressure to the pipes. Two detailed descriptions have survived: those of Vitruvius and Heron of Alexandria. These descriptions deal primarily with the keyboard mechanism and with the device by which the instrument was supplied with air. A well-preserved model in pottery was found at Carthage in 1885. Essentially, the air to the pipes that produce the sound comes from a wind chest connected by a pipe to a dome; air is pumped in to compress water, and the water rises in the dome, compressing the air and causing a steady supply of air to the pipes. In the Aeneid, Virgil makes numerous references to the trumpet. The lyre, kithara, aulos, hydraulis (water organ) and trumpet all found their way into the music of ancient Rome. Roman Empire The Romans may have borrowed the Greek method of enchiriadic notation to record their music, if they used any notation at all. Four letters (in English notation 'A', 'G', 'F' and 'C') indicated a series of four succeeding tones. Rhythm signs, written above the letters, indicated the duration of each note. Roman art depicts various woodwinds, "brass", percussion and stringed instruments. Roman-style instruments are found in parts of the Empire where they did not originate and indicate that music was among the aspects of Roman culture that spread throughout the provinces. Roman instruments include: The Roman tuba was a long, straight bronze trumpet with a detachable, conical mouthpiece. Extant examples are about 1.3 metres long, and have a cylindrical bore from the mouthpiece to the point where the bell flares abruptly, similar to the modern straight trumpet seen in presentations of 'period music'. Since there were no valves, the tuba was capable only of a single overtone series. In the military, it was used for "bugle calls". The tuba is also depicted in art such as mosaics accompanying games (ludi) and spectacle events. The cornu (Latin "horn") was a long tubular metal wind instrument that curved around the musician's body, shaped rather like an uppercase G. It had a conical bore (like a French horn) and a conical mouthpiece. It may be hard to distinguish from the buccina. 
The cornu was used for military signals and on parade. The cornicen was a military signal officer who translated orders into calls. Like the tuba, the cornu also appears as accompaniment for public events and spectacle entertainments. The tibia (Greek aulos – αὐλός), usually double, had two double-reed (as in a modern oboe) pipes, not joined but generally played with a mouth-band capistrum to hold both pipes steadily between the player's lips. The askaules, a bagpipe. Versions of the modern flute and panpipes. The lyre, borrowed from the Greeks, was not a harp, but instead had a sounding body of wood or a tortoise shell covered with skin, and arms of animal horn or wood, with strings stretched from a cross bar to the sounding body. The cithara was the premier musical instrument of ancient Rome and was played both in popular and elevated forms of music. Larger and heavier than a lyre, the cithara was a loud, sweet and piercing instrument with precision tuning ability. The lute (pandura or monochord) was known by several names among the Greeks and Romans. In construction, the lute differs from the lyre in having fewer strings stretched over a solid neck or fret-board, on which the strings can be stopped to produce graduated notes. Each lute string is thereby capable of producing a greater range of notes than a lyre string. Although long-necked lutes are depicted in art from Mesopotamia as early as 2340–2198 BC, and also occur in Egyptian iconography, the lute in the Greco-Roman world was far less common than the lyre and cithara. The lute of the medieval West is thought to owe more to the Arab oud, from which its name derives (al ʿūd). The hydraulic pipe organ (hydraulis), which worked by water pressure, was "one of the most significant technical and musical achievements of antiquity". Essentially, the air to the pipes that produce the sound comes from a mechanism of a wind-chest connected by a pipe to a dome; air is pumped in to compress water, and the water rises in the dome, compressing the air and causing a steady supply to reach the pipes (also see Pipe organ#History). The hydraulis accompanied gladiator contests and events in the arena, as well as stage performances. Variations of a hinged wooden or metal device called a scabellum were used to beat time. Also, there were various rattles, bells and tambourines. Drum and percussion instruments like timpani and castanets, the Egyptian sistrum, and brazen pans served various musical and other purposes in ancient Rome, including backgrounds for rhythmic dance, celebratory rites like those of the Bacchantes and military uses. The sistrum was a rattle consisting of rings strung across the cross-bars of a metal frame, which was often used for ritual purposes. Cymbala (Lat. plural of cymbalum, from the Greek kymbalon) were small cymbals: metal discs with concave centres and turned rims, used in pairs which were clashed together. Islamic world A number of musical instruments later used in medieval European music were influenced by Arabic musical instruments, including the rebec (an ancestor of the violin) from the rebab and the naker from the naqareh. Many European instruments have roots in earlier Eastern instruments that were adopted from the Islamic world. The Arabic rabāb, also known as the spiked fiddle, is the earliest known bowed string instrument and the ancestor of all European bowed instruments, including the rebec, the Byzantine lyra, and the violin. The plucked and bowed versions of the rebab existed alongside each other. 
The bowed instruments became the rebec or rabel and the plucked instruments became the gittern. Curt Sachs linked this instrument with the mandola, the kopuz and the gambus, and named the bowed version rabāb. The Arabic oud in Islamic music was the direct ancestor of the European lute. The oud is also cited as a precursor to the modern guitar. The guitar has roots in the four-string oud, brought to Iberia by the Moors in the 8th century. A direct ancestor of the modern guitar is the guitarra morisca (Moorish guitar), which was in use in Spain by 1200. By the 14th century, it was simply referred to as a guitar. The origin of automatic musical instruments dates back to the 9th century, when the Persian Banū Mūsā brothers invented a hydropowered organ using exchangeable cylinders with pins, and also an automatic flute-playing machine using steam power. These were the earliest automated mechanical musical instruments. The Banu Musa brothers' automatic flute player was the first programmable musical device, the first music sequencer, and the first example of repetitive music technology, powered by hydraulics. In 1206, the Arab engineer Al-Jazari invented a programmable humanoid automaton band. According to Charles B. Fowler, the automata were a "robot band" which performed "more than fifty facial and body actions during each musical selection." It was also the first programmable drum machine: among the four automaton musicians, two were drummers, and pegs (cams) bumped into little levers that operated the percussion. The drummers could be made to play different rhythms and different drum patterns if the pegs were moved around. Middle Ages During the medieval music era (476 to 1400) the plainchant tunes used for religious songs were primarily monophonic (a single line, unaccompanied melody). In the early centuries of the medieval era, these chants were taught and spread by oral tradition ("by ear"). The earliest Medieval music did not have any kind of notational system for writing down melodies. As Rome tried to standardize the various chants across vast distances of its empire, a form of music notation was needed to write down the melodies. Various signs written above the chant texts, called neumes, were introduced. By the ninth century, neume notation was firmly established as the primary method of musical notation. The next development in musical notation was heighted neumes, in which neumes were carefully placed at different heights in relation to each other. This allowed the neumes to give a rough indication of the size of a given interval as well as the direction. This quickly led to one or two lines, each representing a particular note, being placed on the music with all of the neumes relating back to them. The line or lines acted as a reference point to help the singer gauge which notes were higher or lower. At first, these lines had no particular meaning and instead had a letter placed at the beginning indicating which note was represented. However, the lines indicating middle C and the F a fifth below slowly became most common. The completion of the four-line staff is usually credited to Guido d'Arezzo (c. 1000–1050), one of the most important musical theorists of the Middle Ages. The neumatic notational system, even in its fully developed state, did not clearly define any kind of rhythm for the singing of notes or playing of melodies. 
The development of music notation made it faster and easier to teach melodies to new people, and facilitated the spread of music over long geographic distances. Instruments used to perform medieval music include earlier, less mechanically sophisticated versions of a number of instruments that continue to be used in the 2010s. Medieval instruments include the flute, which was made of wood and could be made as a side-blown or end-blown instrument (it lacked the complex metal keys and airtight pads of 2010s-era metal flutes); the wooden recorder and the related instrument called the gemshorn; and the pan flute (a group of air columns attached together). Medieval music used many plucked string instruments like the lute, mandore, gittern and psaltery. The dulcimer, similar in structure to the psaltery and zither, was originally plucked, but became struck by hammers in the 14th century after the arrival of new technology that made metal strings possible. Bowed strings were used as well. The bowed lyra of the Byzantine Empire was the first recorded European bowed string instrument. The Persian geographer Ibn Khurradadhbih of the 9th century (d. 911) cited the Byzantine lyra as a bowed instrument equivalent to the Arab rabāb and a typical instrument of the Byzantines along with the urghun (organ), shilyani (probably a type of harp or lyre) and the salandj (probably a bagpipe). The hurdy-gurdy was a mechanical violin using a rosined wooden wheel attached to a crank to "bow" its strings. Instruments without sound boxes like the jaw harp were also popular at the time. Early versions of the organ, fiddle (or vielle), and trombone (called the sackbut) existed in the medieval era. Renaissance The Renaissance music era (c. 1400 to 1600) saw the development of many new technologies that affected the performance and distribution of songs and musical pieces. Around 1450, the printing press was invented, which made printed sheet music much less expensive and easier to mass-produce (prior to the invention of the printing press, all notated music was laboriously hand-copied). The increased availability of printed sheet music helped to spread musical styles more quickly and across a larger geographic area. Many instruments originated during the Renaissance; others were variations of, or improvements upon, instruments that had existed previously in the medieval era. Brass instruments in the Renaissance were traditionally played by professionals. Some of the more common brass instruments that were played included: Slide trumpet: Similar to the trombone of today, except that instead of a section of the body sliding, only the mouthpiece and a small part of the body near it are stationary while the rest of the instrument is moved. Cornett: Made of wood and played like the recorder, but blown like a trumpet. Trumpet: Early trumpets from the Renaissance era had no valves, and were limited to the tones present in the overtone series. They were also made in different sizes. Sackbut: A different name for the trombone, which replaced the slide trumpet by the middle of the 15th century. Stringed instruments included: Viol: This instrument, developed in the 15th century, commonly has six strings. It was usually played with a bow. Lyre: Its construction is similar to a small harp, although instead of being plucked, it is strummed with a plectrum. Its strings varied in number among four, seven, and ten, depending on the era. It was played with the right hand, while the left hand silenced the notes that were not desired. 
Newer lyres were modified to be played with a bow. Hurdy-gurdy: Also known as the wheel fiddle; its strings are sounded by a wheel that they pass over. Its functionality can be compared to that of a mechanical violin, in that its bow (wheel) is turned by a crank. Its distinctive sound is mainly because of its "drone strings", which provide a constant pitch similar in their sound to that of bagpipes. Gittern and mandore: these instruments were used throughout Europe and were forerunners of modern instruments, including the mandolin and acoustic guitar. Percussion instruments included: Tambourine: The tambourine is a frame drum equipped with jingles that produce a sound when the drum is struck. Jew's harp: An instrument that produces sound by using the mouth as a resonator, with the player shaping the mouth as if attempting to pronounce different vowels. Woodwind instruments included: Shawm: A typical shawm is keyless and is about a foot long with seven finger holes and a thumb hole. The pipes were also most commonly made of wood and many of them had carvings and decorations on them. It was the most popular double reed instrument of the Renaissance period; it was commonly used in the streets with drums and trumpets because of its brilliant, piercing, and often deafening sound. To play the shawm a person puts the entire reed in their mouth, puffs out their cheeks, and blows into the pipe whilst breathing through their nose. Reed pipe: Made from a single short length of cane with a mouthpiece, four or five finger holes, and a reed fashioned from it. The reed is made by cutting out a small tongue but leaving the base attached. It is the predecessor of the saxophone and the clarinet. Hornpipe: Same as the reed pipe but with a bell at the end. Bagpipe/Bladderpipe: It uses a bag made out of sheep or goat skin that provides air pressure for the pipe; when taking a breath, the player only needs to squeeze the bag tucked underneath their arm to continue the tone. The mouth pipe has a simple round piece of leather hinged on to the bag end of the pipe which acts like a non-return valve. The reed is located inside the long metal mouthpiece, known as a bocal. Panpipe: Designed to have sixteen wooden tubes with a stopper at one end and open at the other. Each tube is a different size (thereby producing a different tone), giving it a range of an octave and a half. The player places their lips against the desired tube and blows across it. Transverse flute: The transverse flute is similar to the modern flute, with a mouth hole near the stoppered end and finger holes along the body. The player blows in the side and holds the flute to the right side. Recorder: It uses a beak-shaped whistle mouthpiece as its main source of sound production. It is usually made with seven finger holes and a thumb hole. Baroque During the Baroque era of music (ca. 1600–1750), technologies for keyboard instruments developed, which led to improvements in the designs of pipe organs and harpsichords, and to the development of the first pianos. During the Baroque period, organ builders developed new types of pipes and reeds that created new tonal colors. Organ builders fashioned new stops that imitated various instruments, such as the viola da gamba. The Baroque period is often thought of as organ building's "golden age," as virtually every important refinement to the instrument was brought to a peak. 
Builders such as Arp Schnitger, Jasper Johannsen, Zacharias Hildebrandt and Gottfried Silbermann constructed instruments that displayed both exquisite craftsmanship and beautiful sound. These organs featured well-balanced mechanical key actions, giving the organist precise control over the pipe speech. Schnitger's organs featured particularly distinctive reed timbres and large Pedal and Rückpositiv divisions. Harpsichord builders in the Southern Netherlands built instruments with two keyboards that could be used for transposition. These Flemish instruments served as the model for Baroque-era harpsichord construction in other nations. In France, the double keyboards were adapted to control different choirs of strings, making a more musically flexible instrument (e.g., the upper manual could be set to a quiet lute stop, while the lower manual could be set to a stop with multiple string choirs, for a louder sound). Instruments from the peak of the French tradition, by makers such as the Blanchet family and Pascal Taskin, are among the most widely admired of all harpsichords and are frequently used as models for the construction of modern instruments. In England, the Kirkman and Shudi firms produced sophisticated harpsichords of great power and sonority. German builders extended the sound repertoire of the instrument by adding sixteen-foot choirs, which added to the lower register, and two-foot choirs, which added to the upper register. The piano was invented during the Baroque era by the expert harpsichord maker Bartolomeo Cristofori (1655–1731) of Padua, Italy, who was employed by Ferdinando de' Medici, Grand Prince of Tuscany. Cristofori invented the piano at some point before 1700. While the clavichord allowed expressive control of volume, with harder key presses creating louder sound and softer presses creating quieter sound, and fairly sustained notes, it was too quiet for large performances. The harpsichord produced a sufficiently loud sound, but offered little expressive control over each note. Pressing a harpsichord key harder or softer had no effect on the instrument's loudness. The piano offered the best of both, combining loudness with dynamic control. Cristofori's great success was solving, with no prior example, the fundamental mechanical problem of piano design: the hammer must strike the string, but not remain in contact with it (as a tangent remains in contact with a clavichord string) because this would damp the sound. Moreover, the hammer must return to its rest position without bouncing violently, and it must be possible to repeat the same note rapidly. Cristofori's piano action was a model for the many approaches to piano actions that followed. Cristofori's early instruments were much louder and had more sustain than the clavichord. Even though the piano was invented around 1700, the harpsichord and pipe organ continued to be widely used in orchestra and chamber music concerts until the end of the 1700s. It took time for the new piano to gain in popularity. By 1800, though, the piano generally was used in place of the harpsichord (although the pipe organ continued to be used in church music such as Masses). Classicism From about 1790 onward, the Mozart-era piano underwent tremendous changes that led to the modern form of the instrument. 
This revolution was in response to a preference by composers and pianists for a more powerful, sustained piano sound, and was made possible by the ongoing Industrial Revolution, with resources such as high-quality steel piano wire for strings and precision casting for the production of iron frames. Over time, the tonal range of the piano was also increased from the five octaves of Mozart's day to the seven-plus-octave range found on modern pianos. Early technological progress owed much to the firm of Broadwood. John Broadwood joined with another Scot, Robert Stodart, and a Dutchman, Americus Backers, to design a piano in the harpsichord case—the origin of the "grand". They achieved this in about 1777. They quickly gained a reputation for the splendour and powerful tone of their instruments, with Broadwood constructing ones that were progressively larger, louder, and more robustly constructed. They sent pianos to both Joseph Haydn and Ludwig van Beethoven, and were the first firm to build pianos with a range of more than five octaves: five octaves and a fifth during the 1790s, six octaves by 1810 (Beethoven used the extra notes in his later works), and seven octaves by 1820. The Viennese makers similarly followed these trends; however, the two schools used different piano actions: Broadwoods were more robust, while Viennese instruments were more sensitive. Beethoven's instrumentation for orchestra added piccolo, contrabassoon, and trombones to the triumphal finale of his Symphony No. 5. A piccolo and a pair of trombones help deliver storm and sunshine in the Sixth. Beethoven's use of piccolo, contrabassoon, trombones, and untuned percussion in his Ninth Symphony expanded the sound of the orchestra. Romanticism During the Romantic music era (c. 1810 to 1900), one of the key ways that new compositions became known to the public was by the sales of sheet music, which amateur music lovers would perform at home on their piano or in chamber music groups, such as string quartets. Saxophones began to appear in some 19th-century orchestra scores. While appearing only as a featured solo instrument in some works, for example Maurice Ravel's orchestration of Modest Mussorgsky's Pictures at an Exhibition and Sergei Rachmaninoff's Symphonic Dances, the saxophone is included in other works, such as Ravel's Boléro and Sergei Prokofiev's Romeo and Juliet Suites 1 and 2. The euphonium is featured in a few late Romantic and 20th-century works, usually playing parts marked "tenor tuba", including Gustav Holst's The Planets and Richard Strauss's Ein Heldenleben. The Wagner tuba, a modified member of the horn family, appears in Richard Wagner's cycle Der Ring des Nibelungen and several other works by Strauss, Béla Bartók, and others; it has a prominent role in Anton Bruckner's Symphony No. 7 in E Major. Cornets appear in Pyotr Ilyich Tchaikovsky's ballet Swan Lake, Claude Debussy's La Mer, and several orchestral works by Hector Berlioz. The piano continued to undergo technological developments in the Romantic era, up until the 1860s. By the 1820s, the center of piano building innovation had shifted to Paris, where the Pleyel firm manufactured pianos used by Frédéric Chopin and the Érard firm manufactured those used by Franz Liszt. In 1821, Sébastien Érard invented the double escapement action, which incorporated a repetition lever (also called the balancier) that permitted repeating a note even if the key had not yet risen to its maximum vertical position. 
This facilitated rapid playing of repeated notes, a musical device exploited by Liszt. When the invention became public, as revised by Henri Herz, the double escapement action gradually became standard in grand pianos, and it is still incorporated into all grand pianos currently produced. Other improvements of the mechanism included the use of felt hammer coverings instead of layered leather or cotton. Felt, which was first introduced by Jean-Henri Pape in 1826, was a more consistent material, permitting wider dynamic ranges as hammer weights and string tension increased. The sostenuto pedal, invented in 1844 by Jean-Louis Boisselot and copied by the Steinway firm in 1874, allowed a wider range of effects. One innovation that helped create the sound of the modern piano was the use of a strong iron frame. Also called the "plate", the iron frame sits atop the soundboard and serves as the primary bulwark against the force of string tension, which can exceed 20 tons in a modern grand. The single-piece cast iron frame was patented in 1825 in Boston by Alpheus Babcock, combining the metal hitch pin plate (1821, claimed by Broadwood on behalf of Samuel Hervé) and resisting bars (Thom and Allen, 1820, but also claimed by Broadwood and Érard). The increased structural integrity of the iron frame allowed the use of thicker, tenser, and more numerous strings. In 1834, the Webster & Horsfal firm of Birmingham brought out a form of piano wire made from cast steel; according to Dolge it was "so superior to the iron wire that the English firm soon had a monopoly." Other important advances included changes to the way the piano is strung, such as the use of a "choir" of three strings rather than two for all but the lowest notes, and the implementation of an over-strung scale, in which the strings are placed in two separate planes, each with its own bridge height. The mechanical action structure of the upright piano was invented in London in 1826 by Robert Wornum, and upright models became the most popular pianos. 20th- and 21st-century music With 20th-century music, there was a vast increase in music listening, as the radio gained popularity and phonographs were used to replay and distribute music. The invention of sound recording and the ability to edit music gave rise to new subgenres of classical music, including the acousmatic and musique concrète schools of electronic composition. Sound recording was also a major influence on the development of popular music genres, because it enabled recordings of songs and bands to be widely distributed. The introduction of the multitrack recording system had a major influence on rock music, because it could do much more than record a band's performance. Using a multitrack system, a band and their music producer could overdub many layers of instrument tracks and vocals, creating new sounds that would not be possible in a live performance. The 20th-century orchestra was far more flexible than its predecessors. In Beethoven's and Felix Mendelssohn's time, the orchestra was composed of a fairly standard core of instruments which was very rarely modified. As time progressed, and as the Romantic period saw changes in accepted instrumentation with composers such as Berlioz and Mahler, the 20th century saw that instrumentation could practically be hand-picked by the composer. Saxophones were used in some 20th-century orchestra scores such as Vaughan Williams' Symphonies No. 
6 and 9 and William Walton's Belshazzar's Feast, and many other works as a member of the orchestral ensemble. In the 2000s, the modern orchestra became standardized with the modern instrumentation that includes a string section, woodwinds, brass instruments, percussion, piano, celeste, and even, for some 20th-century or 21st-century works, electric instruments such as electric guitar, electric bass and/or electronic instruments such as the Theremin or synthesizer. Electric and electro-mechanical Electric music technology refers to musical instruments and recording devices that use electrical circuits, which are often combined with mechanical technologies. Examples of electric musical instruments include the electro-mechanical electric piano (invented in 1929), the electric guitar (invented in 1931), the electro-mechanical Hammond organ (developed in 1934) and the electric bass (invented in 1935). None of these electric instruments produce a sound that is audible by the performer or audience in a performance setting unless they are connected to instrument amplifiers and loudspeaker cabinets, which make them sound loud enough for performers and the audience to hear. Amplifiers and loudspeakers are separate from the instrument in the case of the electric guitar (which uses a guitar amplifier), the electric bass (which uses a bass amplifier), some electric organs (which use a Leslie speaker or similar cabinet) and some electric pianos. Other electric organs and electric pianos include the amplifier and speaker cabinet within the main housing for the instrument. Electric piano An electric piano is an electric musical instrument which produces sounds when a performer presses the keys of the piano-style musical keyboard. Pressing keys causes mechanical hammers to strike metal strings or tines, leading to vibrations which are converted into electrical signals by magnetic pickups, which are then connected to an instrument amplifier and loudspeaker to make a sound loud enough for the performer and audience to hear. Unlike a synthesizer, the electric piano is not an electronic instrument. Instead, it is an electromechanical instrument. Some early electric pianos used lengths of wire to produce the tone, like a traditional piano. Smaller electric pianos used short slivers of steel, metal tines or short wires to produce the tone. The earliest electric pianos were invented in the late 1920s. Electric guitar An electric guitar is a guitar that uses a pickup to convert the vibration of its strings into electrical impulses. The most common guitar pickup uses the principle of direct electromagnetic induction. The signal generated by an electric guitar is too weak to drive a loudspeaker, so it is amplified before being sent to a loudspeaker. The output of an electric guitar is an electric signal, and the signal can easily be altered by electronic circuits to add "color" to the sound. Often the signal is modified using electronic effects such as reverb and distortion. Invented in 1931, the electric guitar became a necessity as jazz guitarists sought to amplify their sound in the big band format. Hammond organ The Hammond organ is an electric organ, invented by Laurens Hammond and John M. Hanert and first manufactured in 1935. Various models have been produced, most of which use sliding drawbars to create a variety of sounds. Until 1975, Hammond organs generated sound by creating an electric current from rotating a metal tonewheel near an electromagnetic pickup. 
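The relationship between a tonewheel's geometry and the pitch it produces can be illustrated with a short calculation. The following Python sketch is a minimal illustration of the tonewheel principle only; the 32-tooth wheel and its speed are assumed values for the example and are not a claim about Hammond's actual gear train.

# Illustrative sketch of the tonewheel principle, not Hammond's actual design:
# the pitch produced is the number of teeth that pass the pickup each second.
def tonewheel_frequency_hz(teeth: int, revolutions_per_second: float) -> float:
    """Frequency induced in the pickup by a spinning toothed wheel."""
    return teeth * revolutions_per_second

# Hypothetical example: a 32-tooth wheel turning at 13.75 revolutions per
# second produces 32 * 13.75 = 440 Hz, the concert pitch A4.
print(tonewheel_frequency_hz(32, 13.75))  # 440.0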
Around two million Hammond organs have been manufactured, and it has been described as one of the most successful organs. The organ is commonly used with, and associated with, the Leslie speaker. The organ was originally marketed and sold by the Hammond Organ Company to churches as a lower-cost alternative to the wind-driven pipe organ, or instead of a piano. It quickly became popular with professional jazz bandleaders, who found that the room-filling sound of a Hammond organ allowed them to form small bands, such as organ trios, which were less costly than paying an entire big band. Electric bass The electric bass (or bass guitar) was invented in the 1930s, but it did not become commercially successful or widely used until the 1950s. It is a stringed instrument played primarily with the fingers or thumb, by plucking, slapping, popping, strumming, tapping, thumping, or picking with a plectrum, often known as a pick. The bass guitar is similar in appearance and construction to an electric guitar, but with a longer neck and scale length, and four to six strings or courses. The electric bass usually uses metal strings and an electromagnetic pickup which senses the vibrations in the strings. Like the electric guitar, the bass guitar is plugged into an amplifier and speaker for live performances. Electronic or digital Electronic or digital music technology is any device, such as a computer, an electronic effects unit or software, that is used by a musician or composer to help make or perform music. The term usually refers to the use of electronic devices, computer hardware and computer software in the performance, composition, sound recording and reproduction, mixing, analysis and editing of music. Electronic or digital music technology is connected to both artistic and technological creativity. Musicians and music technology experts are constantly striving to devise new forms of expression through music, and they are physically creating new devices and software to enable them to do so. Although in the 2010s the term is most commonly used in reference to modern electronic devices and computer software such as digital audio workstations and Pro Tools digital sound recording software, electronic and digital musical technologies have precursors in the electric music technologies of the early 20th century, such as the electromechanical Hammond organ, which was developed in 1934. In the 2010s, the ontological range of music technology has greatly increased, and it may now be electronic, digital, software-based or indeed even purely conceptual. A synthesizer is an electronic musical instrument that generates electric signals that are converted to sound through instrument amplifiers and loudspeakers or headphones. Synthesizers may either imitate existing sounds (instruments, vocal, natural sounds, etc.), or generate new electronic timbres or sounds that did not exist before. They are often played with an electronic musical keyboard, but they can be controlled via a variety of other input devices, including music sequencers, instrument controllers, fingerboards, guitar synthesizers, wind controllers, and electronic drums. Synthesizers without built-in controllers are often called sound modules, and are controlled using a controller device. References Sources Further reading External links Sound recording Audio electronics Music history Musical instruments
Music technology
Engineering
9,298
61,144,035
https://en.wikipedia.org/wiki/C6428H9912N1694O1987S46
The molecular formula C6428H9912N1694O1987S46 (molar mass: 144190.3 g/mol) may refer to: Adalimumab Infliximab
C6428H9912N1694O1987S46
Chemistry
75
17,545,148
https://en.wikipedia.org/wiki/Instruments%20used%20in%20general%20medicine
Image gallery Notes References Medical equipment
Instruments used in general medicine
Biology
8
7,055,324
https://en.wikipedia.org/wiki/Reproductive%20synchrony
Reproductive synchrony is a term used in evolutionary biology and behavioral ecology. Reproductive synchrony—sometimes termed "ovulatory synchrony"—may manifest itself as "breeding seasonality". Where females undergo regular menstruation, "menstrual synchrony" is another possible term. Reproduction is said to be synchronised when fertile matings across a population are temporally clustered, resulting in multiple conceptions (and consequent births) within a restricted time window. In marine and other aquatic contexts, the phenomenon may be referred to as mass spawning. Mass spawning has been observed and recorded in a large number of phyla, including in coral communities within the Great Barrier Reef. In primates, reproductive synchrony usually takes the form of conception and birth seasonality. The regulatory "clock", in this case, is the sun's position in relation to the tilt of the earth. In nocturnal or partly nocturnal primates—for example, owl monkeys—the periodicity of the moon may also come into play. Synchrony in general is for primates an important variable determining the extent of "paternity skew"—defined as the extent to which fertile matings can be monopolised by a fraction of the population of males. The greater the precision of female reproductive synchrony—the greater the number of ovulating females who must be guarded simultaneously—the harder it is for any dominant male to succeed in monopolising a harem all to himself. This is simply because, by attending to any one fertile female, the male unavoidably leaves the others at liberty to mate with his rivals. The outcome is to distribute paternity more widely across the total male population, reducing paternity skew. Reproductive synchrony can never be perfect. On the other hand, theoretical models predict that group-living species will tend to synchronise wherever females can benefit by maximising the number of males offered chances of paternity, minimising reproductive skew. For example, the cichlid fish V. moorii spawns in the days leading up to each full moon (lunar synchrony), and broods often exhibit multiple paternity. The same models predict that female primates, including evolving humans, will tend to synchronise wherever fitness benefits can be gained by securing access to multiple males. Conversely, group-living females who need to restrict paternity to a single dominant harem-holder should assist him by avoiding synchrony. In the human case, evolving females with increasingly heavy childcare burdens would have done best by resisting attempts at harem-holding by locally dominant males. No human female needs a partner who will get her pregnant only to disappear, abandoning her in favour of his next sexual partner. To any local group of females, the more such philandering can be successfully resisted—and the greater the proportion of previously excluded males who can be included in the breeding system and persuaded to invest effort—the better. Hence scientists would expect reproductive synchrony—whether seasonal, lunar or a combination of the two—to be central to evolving human strategies of reproductive levelling, reducing paternity skew and culminating in the predominantly monogamous egalitarian norms illustrated by extant hunter-gatherers. Divergent climate regimes differentiating Neanderthal reproductive strategies from those of modern Homo sapiens have recently been analysed in these terms. 
See also Lunar effect Lunar phase Mast seeding Menstrual cycle Menstrual synchrony Menstruation Photoperiodism Predator satiation Season of birth References Ethology Periodic phenomena Reproduction Synchronization Theriogenology
Reproductive synchrony
Engineering,Biology
764
11,105,988
https://en.wikipedia.org/wiki/Antwerp%20Water%20Works
The Antwerp Water Works or AWW produces water for the city of Antwerp (Belgium) and its surroundings. The AWW has a revenue of 100 million euro. History Between 1832 and 1892, Antwerp was struck every ten to fifteen years by a major cholera epidemic which each time claimed a few thousand lives and lasted for about two years. In 1866 the cholera epidemic infected about 5000 people, of whom about 3000 died. Between 1861 and 1867 several proposals were made for a water supply for Antwerp. In 1873, under mayor Leopold De Wael, it was decided that a concession should be granted to secure the water supply of the city. On 25 June 1873, a concession of 50 years was granted to the English engineers Joseph Quick of London and John Dick to organize the water supply of Antwerp. Due to a lack of funds and a dispute between the partners, this venture foundered. In 1879, the English engineering company Easton & Anderson took over the yards and the concession. Within two years they succeeded in finishing the work. An exploitation society was established: the Antwerp Waterworks Company Limited, a society under English law which was in charge of the exploitation from 1881 up to 1930. The water was drawn from the Nete river at the bridge of Walem. It was purified according to an original method: an iron filter. In the period from 1881 to 1908 the system was repaired repeatedly, until eventually a new method of filtration was chosen: a combination of rapid and slow sand filtration. This method of filtration is still used today for the treatment of a large part of the raw water, which now comes from the Albert Canal. In 1930, the concession came to an end, as no agreement could be reached with the English owners concerning a new structure in which the municipalities surrounding Antwerp would be included. The city of Antwerp took over the company and founded a mixed intermunicipal company (with private and public participation) in which the English Waterworks kept a minority stake. The remaining shares were in the hands of the city of Antwerp and the surrounding municipalities of Berchem, Boechout, Borgerhout, Deurne, Edegem, Ekeren, Hoboken, Hove, Mortsel, Kontich and Wilrijk. The English withdrew from the company in 1965. In the same year a new production site was established in Oelegem and a new office building in Antwerp. During the dry summer of 1976 it became clear that the reserve capacity needed to be expanded, and in 1982 the reservoir of Broechem was inaugurated. The second concession ended after 53 years, so in 1983 a new concession was granted to the AWW. In 2003 Brabo Industrial Water Solutions (BIWS), a consortium with Ondeo Industrial Solutions, was started to provide water tailored to industry. In 2004 the RI-ANT project started (together with Aquafin), which took over the management and maintenance of the sewerage network of Antwerp. See also EU water policy Public water supply Water purification References Sources AWW AWW History (Dutch) Water treatment facilities Water companies of Belgium Water supply and sanitation in Belgium Companies based in Antwerp Antwerp
Antwerp Water Works
Chemistry
650
77,705,587
https://en.wikipedia.org/wiki/Ramses%20%28spacecraft%29
Ramses, or Rapid Apophis Mission for Space Safety, is a proposed ESA mission to the near-Earth asteroid 99942 Apophis. If approved, it is expected to launch in April 2028 and arrive at Apophis in February 2029, before the asteroid's closest approach to Earth. It will conduct multiple measurements of the asteroid's properties, to study the possible response in case such an asteroid were ever on a collision course with Earth. ESA signed a contract with OHB Italia SpA for preliminary work on the mission in October 2024 and has also unveiled the official mission patch. A funding decision is expected to be made in late 2025 at the ESA Ministerial Council. Ramses will be "an adaptation of Hera", a spacecraft launched in 2024. References European Space Agency space probes Proposed spacecraft Missions to asteroids
Ramses (spacecraft)
Astronomy
171
24,507,888
https://en.wikipedia.org/wiki/Gymnopilus%20satur
Gymnopilus satur is a species of mushroom in the family Hymenogastraceae. See also List of Gymnopilus species External links Gymnopilus satur at Index Fungorum satur Fungus species
Gymnopilus satur
Biology
49
27,935,302
https://en.wikipedia.org/wiki/Lower%20Saxon%20State%20Department%20for%20Waterway%2C%20Coastal%20and%20Nature%20Conservation
The Lower Saxon Department for Water, Coastal and Nature Conservation (NLWKN) is a department of the state of Lower Saxony, with its headquarters in Norden (Ostfriesland), and is responsible to the Minister for the Environment and Climate Protection. Departments NLWKN is structured in relatively independent departments for different services at Norden, Hanover and Lüneburg: Operation and maintenance of state-owned facilities and bodies of water, combating pollutant accidents, based in Norden Planning and construction of water management systems, based in Norden River basin management, state hydrological service, radiological monitoring, based in Norden General administration, finance, human resources, based in Norden Regional nature conservation, based in Hanover State-wide nature conservation, based in Hanover Water management legislation, based in Lüneburg Coastal Research Center, based in Norden, formerly in Norderney Services NLWKN provides the national flood reporting service in the catchment areas of the Weser, Aller and Leine, the national storm surge warning service for the Lower Saxon coast, and current water level data (gauge measurements) for the Weser and Ems. External links Website: NLWKN Organisations based in Lower Saxony Hydraulic engineering Coastal engineering Nature conservation in Germany
Lower Saxon State Department for Waterway, Coastal and Nature Conservation
Physics,Engineering,Environmental_science
288
1,866,599
https://en.wikipedia.org/wiki/Passive%20house
Passive house (German: Passivhaus) is a voluntary standard for energy efficiency in a building that reduces the building's carbon footprint. Conforming to these standards results in ultra-low energy buildings that require less energy for space heating or cooling. A similar standard, MINERGIE-P, is used in Switzerland. Standards are available for residential properties, and several office buildings, schools, kindergartens and a supermarket have also been constructed to the standard. Energy efficiency is not an attachment or supplement to architectural design, but a design process that integrates with architectural design. Although it is generally applied to new buildings, it has also been used for renovations. In 2008, estimates of the number of passive house buildings around the world ranged from 15,000 to 20,000 structures. In 2016, there were approximately 60,000 such certified structures of all types worldwide. The vast majority of passive house structures have been built in German-speaking countries and Scandinavia. History The term passive house has had at least two meanings in the literature. Its earlier meaning, used since the 1970s, was for a low-energy building designed to exploit passive solar technologies and establish a comfortable indoor temperature with a low-energy requirement for heating or cooling. More recently the term has been used to indicate a building that is certified to meet the criteria for the passive house standard, including heating, cooling and primary energy demands in addition to airtightness, thermal comfort requirements and non-heating related energy demands. The passive house standard originated from a conversation in May 1988 between Bo Adamson of Lund University, in Sweden, and Wolfgang Feist of the Institut Wohnen und Umwelt (Institute for Housing and Environment), in Darmstadt, Germany. Their concept was developed through a number of research projects with financial assistance from the German state of Hesse. Many of the early passive house builds were based on research and the experience of North American builders during the 1970s, who—in response to the OPEC oil embargo—sought to build homes that used little to no energy. These designs often utilised expansive solar-gain windows, which used the sun as a heat source. However, superinsulation became a key feature of such efforts, as seen in the Saskatchewan Conservation House in Regina, Saskatchewan (1977) and the Leger House in Pepperell, Massachusetts (1977). The Saskatchewan Conservation House was a project of the Saskatchewan Research Council (SRC) with Harold Orr as its lead engineer. The team independently developed a heat recovery air exchanger, hot water recovery, and a blower-door apparatus to measure building air-tightness. Notably, the house was designed for the extreme −40°C to +40°C (−40°F to 104°F) climate of the Canadian Prairies. The SRC and Leger houses were predated by the Lyngby, Denmark house (1975), developed by the Technical University of Denmark, and several homes were built between 1977 and 1979 based on the Lo-Cal house design (1976) developed by the University of Illinois at Urbana–Champaign. The term passive can be partly attributed to William Shurcliff, an American physicist who contributed to the WWII Manhattan Project and in the 1970s became an advocate for energy-efficient home design. An early book explaining the concepts of passive house construction was The Passive Solar Energy Book by Edward Mazria in 1979. 
First examples Four row houses (terraced houses or town homes) were designed for four private clients by the architectural firm Bott, Ridder and Westermeyer. These first passive house residences were built in Darmstadt in 1990, and occupied the following year. Further implementation and councils In September 1996, the Passivhaus-Institut was founded in Darmstadt to promote and control passive house standards. By 2010 more than 25,000 passive house structures were estimated to have been built. Most are located in Germany and Austria, others in various countries worldwide. In 1996, after the concept had been validated at the Institute in Darmstadt, with space heating at 90% less than that required for a standard new building at the time, the economical passive houses working group was created. This group developed the planning package and initiated the production of the innovative components that had been used, notably the windows and the high-efficiency ventilation systems. Meanwhile, further passive houses were built in Stuttgart (1993), Naumburg, Hesse, Wiesbaden, and Cologne (1997). Products that had been developed according to the passive house standard were further commercialized during and following the European Union sponsored CEPHEUS project, which proved the concept in five European countries in the winter of 2000–2001. The first certified passive house in the United States was built in 2006 near Bemidji, Minnesota, in Camp Waldsee of the German Concordia Language Villages. The first US passive retrofit project, the remodeled craftsman O'Neill house in Sonoma, California, was certified in July 2010. In the United States, passive house design was first implemented by Katrin Klingenberg in 2003 when she built a passive home prototype named "The Smith House" in Urbana, Illinois. Later, she and builder Mike Kernagis co-founded the Ecological Construction Laboratory in 2004 to further explore the feasibility of affordable passive design. This eventually led to the inception of the Passive House Institute United States (PHIUS) in 2007. Since then, PHIUS has released its PHIUS+ 2015 Building Standard and has certified over 1,200 projects across the United States. In 2019, Park Avenue Green, a low-income housing building in New York, was built to passive house standards. The building became the largest certified passive house in North America. Ireland's first passive house was built in 2005 by Tomas O'Leary, a passive house designer and teacher. The house was called 'Out of the Blue'. Upon completion, O'Leary moved into the building. The world's first standardised passive prefabricated house was built in Ireland in 2005 by Scandinavian Homes, a Swedish company that has since built more passive houses in England and Poland. The first certified passive house in Antwerp, Belgium, was built in 2010. In 2011, Heidelberg, Germany, initiated the Bahnstadt project, which was seen as the world's largest passive house building area. A company in Qatar planned the country's first passive house in 2013, also the first in the region. The world's tallest passive house was built in the Bolueta neighborhood in Bilbao, Spain; it was certified as the world's tallest building under the standard in 2018. The $14.5 million, 171-unit development (including a nine-story companion to the high-rise) consists entirely of social housing. 
Gaobeidian, China, hosted the 23rd International Passive House Conference in 2019, and later built the Gaobeidian Railway City apartment complex, which is reported to be "the world's largest passive house project". China has 73 different companies that have started "making windows to the 'passive house' standards." The United Kingdom's first passive house health centre, in Foleshill, was opened in November 2021. Standards While some techniques and technologies were specifically developed for the passive house standard, others, such as superinsulation, already existed, and the concept of passive solar building design dates back to antiquity. Earlier low-energy building standards also existed, notably the German Niedrigenergiehaus (low-energy house) standard, in addition to the demanding energy codes of Sweden and Denmark. International passive house standard The passive house standard requires that the building fulfills the following requirements: Use at most 15 kWh per square metre of floor area per year for heating and cooling as calculated by the Passivhaus Planning Package, or a peak heat load of 10 W per square metre of floor area, based on local climate data. Use at most 120 kWh per square metre of floor area per year of primary energy (for heating, hot water and electricity). Leak air at most 0.6 times the house volume per hour (n50 ≤ 0.6 / hour) at 50 Pa as tested by a blower door, or an equivalent leakage rate per square foot of the surface area of the enclosure. Recommendations The specific heat load for the heating source at design temperature is recommended, but not required, to be less than 10 W/m² (3.17 Btu/(h⋅ft²)). These standards are much stricter than most normal building codes. For comparisons, see the international comparisons section below. National partners within the 'consortium for the Promotion of European Passive Houses' are thought to have some flexibility to adapt these limits locally. Passive house standards in the US - Passive House Standard and PHIUS+ In the US there are two versions of passive house being promoted by two separate entities: the Passive House Institute (PHI) and the Passive House Institute US (PHIUS). PHIUS was originally an affiliate and approved trainer and certifier for the Passive House Institute. In 2011, PHI cancelled its contract with PHIUS for misconduct. PHIUS disputed the claims by PHI and continued working to launch an independent building performance program. In 2015 PHIUS launched its own PHIUS+ standard, which primarily focuses on reducing negative effects of building operations for any type of building. This standard also uses climate data sets to determine specific building performance criteria for different regions. Such information is determined using metrics that represent a space where significant carbon and energy reduction overlap with cost-effectiveness. Overall, the PHIUS database includes more than 1,000 climate data sets for North America. The standard is based on five principles: airtightness, ventilation, waterproofing, heating and cooling, and electrical loads. Within these principles, projects must pass building-specific blower door, ventilation airflow, overall airflow, and electrical load tests; buildings must also achieve other measures such as low-emission materials, renewable energy systems, moisture control, outdoor ventilation, and energy-efficient ventilation and space conditioning equipment. 
All buildings must also pass a quality assurance and quality control test – this is implemented to ensure that the building continues to adhere to the regional criteria set forth by the PHIUS climate data. These tests and analyses of operative conditions are performed by PHIUS raters or verifiers, accredited professionals who are able to perform on-site testing and inspections to ensure that the newly constructed building adheres to the construction plans, the created energy models, and the desired operating conditions. The two standards (passive house and PHIUS+) are distinct: they target different performance metrics and use different energy modeling software and protocols. Construction costs In passive house buildings, the cost savings from replacing the conventional heating system can be used to fund the upgrade of the building envelope and the heat recovery ventilation system. With careful design and increasing competition in the supply of the specifically designed passive house building products, in Germany it is currently possible to construct buildings for the same cost as those built to normal German building standards, as was done with the passive house apartments in Vauban, Freiburg. On average, passive houses are reported to be more expensive upfront than conventional buildings: 5% to 8% in Germany, 8% to 10% in the UK and 5% to 10% in the US. Evaluations have indicated that while it is technically possible, the costs of meeting the passive house standard increase significantly when building in Northern Europe above 60° latitude. European cities at approximately 60° include Helsinki, Finland, and Bergen, Norway. London is at 51°; Moscow is at 55°. Design and construction Achieving the major decrease in heating energy consumption required by the standard involves a shift in approach to building design and construction. Design may be assisted by use of the Passivhaus Planning Package (PHPP), which uses specifically designed computer simulations. Below are the techniques used to achieve the standard. Passive solar design and landscape Passive solar building design and energy-efficient landscaping support passive house energy conservation and can integrate the building into its neighborhood and environment. Following passive solar building techniques, where possible buildings are compact in shape to reduce their surface area; principal windows are oriented towards the equator to maximize passive solar gain. However, the use of solar gain, especially in temperate climate regions, is secondary to minimizing the overall house energy requirements. In climates and regions needing to reduce excessive summer passive solar heat gain, whether from direct or reflected sources, brise soleil, trees, attached pergolas with vines, vertical gardens, green roofs, and other techniques are implemented. Exterior wall color, where the surface allows a choice between reflective and absorptive insolation qualities, depends on the predominant year-round ambient outdoor temperature. The use of deciduous trees and wall-trellised or self-attaching vines can assist in climates not at the temperature extremes. Superinsulation Passive house buildings employ superinsulation to significantly reduce the heat transfer through the walls, roof and floor compared to conventional buildings. A wide range of thermal insulation materials can be used to provide the required high R-values (low U-values, typically in the 0.10 to 0.15 W/(m²·K) range). Special attention is given to eliminating thermal bridges. 
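The effect of superinsulation can be made concrete with the steady-state conduction relation Q = U × A × ΔT. The sketch below is illustrative only: the passive house U-value is taken from the range just quoted, while the wall area, the temperature difference, and the conventional-wall U-value are assumed figures, not from this article.

```python
# A rough illustrative calculation of steady-state conduction loss through a wall.
# Only the 0.12 W/(m2*K) passive house U-value comes from the range quoted above;
# the other figures are assumptions chosen for the example.
def conduction_loss_watts(u_value: float, area_m2: float, delta_t_k: float) -> float:
    """Heat flow in watts through an element of given U-value (W/(m2*K))."""
    return u_value * area_m2 * delta_t_k

wall_area = 120.0   # m2, assumed opaque envelope area
delta_t = 25.0      # K, e.g. 20 C inside vs -5 C outside (assumed)

for label, u in [("passive house wall", 0.12), ("conventional wall", 0.45)]:
    print(f"{label}: {conduction_loss_watts(u, wall_area, delta_t):.0f} W")
# passive house wall: 360 W; conventional wall: 1350 W
```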
Advanced window technology To meet the requirements of the passive house standard, windows are manufactured with exceptionally high R-values (low U-values, typically 0.85 to 0.45 W/(m²·K) for the entire window including the frame). The windows normally combine triple or quadruple-pane insulated glazing (with an appropriate solar heat-gain coefficient, low-emissivity coatings, sealed argon or krypton gas filled inter-pane voids, and 'warm edge' insulating glass spacers) with air-seals and specially developed thermal break window frames. Air tightness Building envelopes under the passive house standard are required to be extremely airtight compared to conventional construction. They are required to meet 0.60 ACH50 (air changes per hour at 50 pascals) based on the building's volume. In order to achieve these metrics, best practice is to test the building air barrier enclosure with a blower door at mid-construction if possible. A passive house is designed so that most of the air exchange with the exterior is done by controlled ventilation through a heat exchanger in order to minimize heat loss (or gain, depending on climate), so uncontrolled air leaks are best avoided. Another reason is that the passive house standard makes extensive use of insulation, which usually requires careful management of moisture and dew points. This is achieved through air barriers, careful sealing of every construction joint in the building envelope, and sealing of all service penetrations. Ventilation Use of passive natural ventilation is an integral component of passive house design where ambient temperature is conducive—either by single-sided or cross ventilation, by a simple opening, or enhanced by the stack effect from smaller ingress and larger egress windows and/or an operable clerestory skylight. When ambient climate is not conducive, mechanical heat recovery ventilation systems with a heat recovery rate of over 80% and high-efficiency electronically commutated motors (ECM) are employed to maintain air quality, and to recover sufficient heat to dispense with a conventional central heating system. Since passively designed buildings are essentially air-tight, the rate of air change can be optimized and carefully controlled at about 0.4 air changes per hour. All ventilation ducts are insulated and sealed against leakage. Some passive house builders promote the use of earth warming tubes. The tubes are typically around in diameter, long at a depth of about . They are buried in the soil to act as earth-to-air heat exchangers and pre-heat (or pre-cool) the intake air for the ventilation system. In cold weather, the warmed air also prevents ice formation in the heat recovery system's heat exchanger. Concerns about this technique have arisen in some climates due to problems with condensation and mold. Space heating In addition to using passive solar gain, passive house buildings make extensive use of their intrinsic heat from internal sources—such as waste heat from lighting, major appliances and other electrical devices (but not dedicated heaters)—as well as body heat from the people and other animals inside the building, because people, on average, each emit about 100 watts of radiated thermal energy. Together with the comprehensive energy conservation measures taken, this means that a conventional central heating system is not necessary, although one is sometimes installed due to clients' skepticism. 
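A back-of-the-envelope sketch shows why internal gains matter. The 100 W per person figure is the one quoted above and the 10 W/m² design heat load is the limit discussed in this article; the floor area, occupancy, and appliance figures are assumptions for illustration only.

```python
# Illustrative only: comparing internal heat gains against the heating load at
# the 10 W/m2 design limit discussed in this article. Floor area, occupancy,
# and the appliance figure are assumptions, not data from the article.
floor_area = 120.0      # m2, assumed treated floor area
occupants = 4           # assumed
body_heat = 100.0       # W per person, figure quoted above
appliance_heat = 300.0  # W, assumed average waste heat from appliances

internal_gains = occupants * body_heat + appliance_heat
design_heat_load = 10.0 * floor_area  # W at the 10 W/m2 design limit

print(f"Internal gains cover {internal_gains:.0f} W of a {design_heat_load:.0f} W load")
```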
Instead, passive houses sometimes have a dual-purpose 800 to 1,500 watt heating and/or cooling element integrated with the supply air duct of the ventilation system, for use during the coldest days. It is fundamental to the design that all the heat required can be transported by the normal low air volume required for ventilation. A maximum air temperature of is applied, to prevent any possible smell of scorching from dust that escapes the filters in the system. Beyond the recovery of heat by the heat recovery ventilation unit, a well-designed passive house in the European climate should not need any supplemental heat source if the heating load is kept under 10 W/m². The passive house standards in Europe set a space heating and cooling energy demand of per year, and peak demand. In addition, the total energy to be used in the building operations including heating, cooling, lighting, equipment, hot water, plug loads, etc. is limited to of treated floor area per year. Traits of passive houses Some have voiced concerns that the passive house standard is not a general approach, as the occupant has to behave in a prescribed way; for example, not opening windows too often. A 2013 study concluded that in general passive houses are less sensitive to such behaviour than anticipated. International comparisons In the United States, a house built to the passive house standard results in a building that requires space heating energy of per heating degree day, compared with about per heating degree day for a similar building built to meet the 2003 Model Energy Efficiency Code. This is between 75 and 95% less energy for space heating and cooling than current new buildings that meet today's US energy efficiency codes. The passive house in the German-language camp of Waldsee, Minnesota, was designed by architect Stephan Tanner of INTEP, LLC, a Minneapolis- and Munich-based consulting company for high performance and sustainable construction. Waldsee BioHaus is modeled on Germany's passive house standard and, when compared to houses of the U.S. LEED standard, shows improvement to the quality of life inside the building while using 85% less energy than a house built to the latter standard. VOLKsHouse 1.0 was the first certified passive house offered and sold in Santa Fe, New Mexico. In the United Kingdom, an average new house built to the passive house standard used 77% less energy for space heating compared to a house built under circa-2006 Building Regulations. In Ireland, a typical house built to passive house standards instead of the 2002 Building Regulations consumed 85% less energy for space heating and cut space-heating related carbon emissions by 94%. Tropical climate needs A certified passive house was built in the hot and humid climate of Lafayette, Louisiana, USA. It uses energy recovery ventilation and an efficient one-ton air-conditioner to provide cooling and dehumidification. See also EnerGuide (Canada) Energy-plus-house Green building History of passive solar building design Home energy rating (USA) House Energy Rating (Aust.) 
List of low-energy building techniques List of pioneering solar buildings Low-energy house National Home Energy Rating (UK) Passive daytime radiative cooling Passive solar Quadruple glazing R-2000 program (Canada) Renewable heat Self-sufficient homes Solar air heat Sustainable refurbishment Zero heating building References Further reading External links Passive House Institute (PHI) (in English) International Passive House Association (iPHA) Passipedia - The Passive House Resource North American Passive House Network Canadian Passive House Institute (CanPHI) Passive House Institute U.S. European Passive Houses Passive House Alliance United States Passive House California New York Passive House Passive House Institute New Zealand Passive House Institute Australia Passivhaus Germany Passive house Illawarra Passive house Accelerator Energy conservation in Germany House types Low-energy building Sustainable building
Passive house
Engineering
4,116
44,655,201
https://en.wikipedia.org/wiki/Melnick%2034
Melnick 34 (abbreviated to Mk34), also called BAT99-116, is a binary Wolf–Rayet star near R136 in the 30 Doradus complex (also known as the Tarantula Nebula) in the Large Magellanic Cloud. Both components are amongst the most massive and most luminous stars known, and the system is the most massive known binary system. Binary Melnick 34 is a binary star with an orbital period of 155 days. It shows high x-ray luminosity characteristic of colliding-wind binaries, and periodic variations in luminosity, spectral absorption, and x-ray brightness. The orbit has been calculated based on spectroscopic observations with the Very Large Telescope. The two components have identical spectral types of WN5h, and the spectral lines of each vary every 155 days, indicating projected orbital motions with speeds of and respectively. The similar orbital speeds show that the two components have similar masses; the secondary has a mass 92% of the primary, assuming an inclination near . The inclination of best matches the orbital properties of the two stars to their observed properties. The primary is designated A and the secondary B. The orbit is moderately eccentric, with a periastron separation of about . Physical characteristics The two components of Mk34 have identical spectral classes of WN5h, having spectra with prominent emission lines of highly ionised helium, nitrogen, and carbon. The h suffix indicates that the spectrum also contains lines of hydrogen, which are not usually seen in Wolf–Rayet spectra. The strength of the helium emission lines in the spectrum shows that the outer layers of the star consist of 35% helium. The WN5 spectral class indicates an extremely high photospheric temperature. Modelling the profiles of several spectral lines gives an effective temperature of for each star. The primary star has a bolometric luminosity of about and a radius of about , while the secondary has a luminosity of about and a radius of about . The masses of the two components inferred from their spectra are about and respectively. The masses determined from the orbit of the stars depend strongly on the inclination of the orbit, which is poorly known. The best match with the observed masses is found for orbits with an inclination near . The emission line spectra of the two stars in the Mk34 system are caused by strong mass loss which produces a dense stellar wind. Both stars have a stellar wind with a velocity of about , causing each star to lose more than the mass of the sun every , a billion times stronger than the sun's wind. Evolution Although Wolf–Rayet stars are typically old stars that have lost their outer layers of hydrogen, some are very young massive stars which still contain hydrogen. Both stars in the Mk34 system are very young, and the helium, carbon, and nitrogen fusion products in their spectra are produced by the strong convection that occurs in massive main sequence stars and by rotational mixing. The stars are rotating at about and respectively. Modelling the evolution of the stars gives ages of about , with current masses of about and respectively, and initial masses of and respectively. These are similar to the masses deduced from observation. The stars are expected to have a hydrogen-burning lifetime of about 2.2 Myr, and are not expected to experience significant mass exchange during their evolution. Both stars should reach core collapse with masses too high to produce a normal supernova. 
Instead they are likely to produce a weak supernova followed by collapse to a black hole, or directly collapse to a black hole with no visible explosion. References Further reading External links ESA/Hubble image Stars in the Large Magellanic Cloud Dorado Large Magellanic Cloud Tarantula Nebula Extragalactic stars Wolf–Rayet stars J05384424-6906058 Spectroscopic binaries
Melnick 34
Astronomy
774
1,752,072
https://en.wikipedia.org/wiki/Universality%20%28dynamical%20systems%29
In statistical mechanics, universality is the observation that there are properties for a large class of systems that are independent of the dynamical details of the system. Systems display universality in a scaling limit, when a large number of interacting parts come together. The modern meaning of the term was introduced by Leo Kadanoff in the 1960s, but a simpler version of the concept was already implicit in the van der Waals equation and in the earlier Landau theory of phase transitions, which did not incorporate scaling correctly. The term is slowly gaining a broader usage in several fields of mathematics, including combinatorics and probability theory, whenever the quantitative features of a structure (such as asymptotic behaviour) can be deduced from a few global parameters appearing in the definition, without requiring knowledge of the details of the system. The renormalization group provides an intuitively appealing, albeit mathematically non-rigorous, explanation of universality. It classifies operators in a statistical field theory into relevant and irrelevant. Relevant operators are those responsible for perturbations to the free energy, the imaginary time Lagrangian, that will affect the continuum limit, and can be seen at long distances. Irrelevant operators are those that only change the short-distance details. The collection of scale-invariant statistical theories defines the universality classes, and the finite-dimensional list of coefficients of relevant operators parametrizes the near-critical behavior. Universality in statistical mechanics The notion of universality originated in the study of phase transitions in statistical mechanics. A phase transition occurs when a material changes its properties in a dramatic way: water, as it is heated, boils and turns into vapor; or a magnet, when heated, loses its magnetism. Phase transitions are characterized by an order parameter, such as the density or the magnetization, that changes as a function of a parameter of the system, such as the temperature. The special value of the parameter at which the system changes its phase is the system's critical point. For systems that exhibit universality, the closer the parameter is to its critical value, the less sensitively the order parameter depends on the details of the system. If the parameter β is critical at the value βc, then the order parameter a will be well approximated by a ∝ |β − βc|^α. The exponent α is a critical exponent of the system. The remarkable discovery made in the second half of the twentieth century was that very different systems had the same critical exponents. In 1975, Mitchell Feigenbaum discovered universality in iterated maps. Examples Universality gets its name because it is seen in a large variety of physical systems. Examples of universality include: Avalanches in piles of sand. The likelihood of an avalanche is in power-law proportion to the size of the avalanche, and avalanches are seen to occur at all size scales. This is termed "self-organized criticality". The formation and propagation of cracks and tears in materials ranging from steel to rock to paper. The variations of the direction of the tear, or the roughness of a fractured surface, are in power-law proportion to the size scale. The electrical breakdown of dielectrics, which resembles cracks and tears. The percolation of fluids through disordered media, such as petroleum through fractured rock beds, or water through filter paper, such as in chromatography. 
Power-law scaling connects the rate of flow to the distribution of fractures. The diffusion of molecules in solution, and the phenomenon of diffusion-limited aggregation. The distribution of rocks of different sizes in an aggregate mixture that is being shaken (with gravity acting on the rocks). The appearance of critical opalescence in fluids near a phase transition. Theoretical overview One of the important developments in materials science in the 1970s and the 1980s was the realization that statistical field theory, similar to quantum field theory, could be used to provide a microscopic theory of universality. The core observation was that, for all of the different systems, the behaviour at a phase transition is described by a continuum field, and that the same statistical field theory will describe different systems. The scaling exponents in all of these systems can be derived from the field theory alone, and are known as critical exponents. The key observation is that near a phase transition or critical point, disturbances occur at all size scales, and thus one should look for an explicitly scale-invariant theory to describe the phenomena, as seems to have been put in a formal theoretical framework first by Pokrovsky and Patashinsky in 1965. Universality is a by-product of the fact that there are relatively few scale-invariant theories. For any one specific physical system, the detailed description may have many scale-dependent parameters and aspects. However, as the phase transition is approached, the scale-dependent parameters play less and less of an important role, and the scale-invariant parts of the physical description dominate. Thus, a simplified, and often exactly solvable, model can be used to approximate the behaviour of these systems near the critical point. Percolation may be modeled by a random electrical resistor network, with electricity flowing from one side of the network to the other. The overall resistance of the network is seen to be described by the average connectivity of the resistors in the network. The formation of tears and cracks may be modeled by a random network of electrical fuses. As the electric current flowing through the network is increased, some fuses may pop, but on the whole, the current is shunted around the problem areas, and uniformly distributed. However, at a certain point (at the phase transition) a cascade failure may occur, where the excess current from one popped fuse overloads the next fuse in turn, until the two sides of the net are completely disconnected and no more current flows. To perform the analysis of such random-network systems, one considers the stochastic space of all possible networks (that is, the canonical ensemble), and performs a summation (integration) over all possible network configurations. As in the previous discussion, each given random configuration is understood to be drawn from the pool of all configurations with some given probability distribution; the role of temperature in the distribution is typically replaced by the average connectivity of the network. The expectation values of operators, such as the rate of flow, the heat capacity, and so on, are obtained by integrating over all possible configurations. This act of integration over all possible configurations is the point of commonality between systems in statistical mechanics and quantum field theory. In particular, the language of the renormalization group may be applied to the discussion of the random network models. 
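As a concrete toy illustration of these ideas (our own sketch, not drawn from the literature above), the following Monte Carlo estimate of the spanning probability in two-dimensional site percolation shows the sharp change in behaviour near the known critical occupation probability, p_c ≈ 0.5927 for the square lattice. The qualitative picture is insensitive to microscopic details such as the lattice size, which is the essence of universality.

```python
# A minimal Monte Carlo sketch of 2D site percolation on a square lattice:
# near p_c ~ 0.5927 the probability that an occupied cluster spans the grid
# changes sharply, largely independently of the microscopic details.
import numpy as np
from scipy.ndimage import label

def spans(grid):
    """True if an occupied cluster connects the top row to the bottom row."""
    labels, _ = label(grid)  # 4-connected cluster labelling
    return bool(set(labels[0][labels[0] > 0]) & set(labels[-1][labels[-1] > 0]))

def spanning_probability(p, size=64, trials=200, rng=np.random.default_rng(0)):
    hits = sum(spans(rng.random((size, size)) < p) for _ in range(trials))
    return hits / trials

for p in (0.55, 0.5927, 0.65):
    print(f"p = {p:.4f}: spanning probability ~ {spanning_probability(p):.2f}")
```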
In the 1990s and 2000s, stronger connections between the statistical models and conformal field theory were uncovered. The study of universality remains a vital area of research. Applications to other fields Like other concepts from statistical mechanics (such as entropy and master equations), universality has proven a useful construct for characterizing distributed systems at a higher level, such as multi-agent systems. The term has been applied to multi-agent simulations, where the system-level behavior exhibited by the system is independent of the degree of complexity of the individual agents, being driven almost entirely by the nature of the constraints governing their interactions. In network dynamics, universality refers to the fact that despite the diversity of nonlinear dynamic models, which differ in many details, the observed behavior of many different systems adheres to a set of universal laws. These laws are independent of the specific details of each system. References Dynamical systems Critical phenomena
Universality (dynamical systems)
Physics,Materials_science,Mathematics
1,521
7,059,260
https://en.wikipedia.org/wiki/Hun%20Mining
Hun Mining, previously known as Genesis Energy Investment Company and Genesis Mining, is an investment company based in Budapest, Hungary. From 2007 to 2010 Genesis Energy Investment Company invested in the photovoltaics market, producing solar panels with thin film technology. In 2010 it sold the solar panel manufacturing and technology units to Denver-based Cogenco International for €15 million so that it could concentrate on mining activities. It was renamed Genesis Mining in the wake of this sale. In late 2011 it changed its name again to Hun Mining. In late 2011 the company sold its mining operation to Davies Corporation, announcing plans to acquire mobile telecommunications company 6GMOBILE. References External links http://genesisenergy.eu/ https://web.archive.org/web/20080202154922/http://www.genesistechnologyfund.vc/html/en_home.php Energy companies of Hungary Solar power in Hungary Photovoltaics manufacturers Hungarian brands
Hun Mining
Engineering
204
4,296,904
https://en.wikipedia.org/wiki/Audio%20noise%20measurement
Audio noise measurement is a process carried out to assess the quality of audio equipment, such as the kind used in recording studios, broadcast engineering, and in-home high fidelity. In audio equipment noise is a low-level hiss or buzz that intrudes on audio output. Every piece of equipment which the recorded signal subsequently passes through will add a certain amount of electronic noise; the process of removing this and other noises is called noise reduction. Origins of noise – the need for weighting Microphones, amplifiers and recording systems all add some electronic noise to the signals passing through them, generally described as hum, buzz or hiss. All buildings have low-level magnetic and electrostatic fields in and around them emanating from mains supply wiring, and these can induce hum into signal paths, typically at 50 Hz or 60 Hz (depending on the country's electrical supply standard) and their low-order harmonics. Shielded cables help to prevent this, and on professional equipment where longer interconnections are common, balanced signal connections (most often with XLR or phone connectors) are usually employed. Hiss is the result of random signals, often arising from the random motion of electrons in transistors and other electronic components, or the random distribution of oxide particles on analog magnetic tape. It is predominantly heard at high frequencies, sounding like steam or compressed air. Attempts to measure noise in audio equipment as RMS voltage, using a simple level meter or voltmeter, do not produce useful results; a special noise-measuring instrument is required. This is because noise contains energy spread over a wide range of frequencies and levels, and different sources of noise have different spectral content. For measurements to allow fair comparison of different systems, they must be made using a measuring instrument that responds in a way that corresponds to how we hear sounds. From this, three requirements follow. Firstly, it is important that frequencies above or below those that can be heard by even the best ears are filtered out and ignored by bandwidth limiting (usually 22 Hz to 22 kHz). Secondly, the measuring instrument should give varying emphasis to different frequency components of the noise in the same way that our ears do, a process referred to as weighting. Thirdly, the rectifier or detector that is used to convert the varying alternating noise signal into a steady positive representation of level should take time to respond fully to brief peaks to the same extent that our ears do; it should have the correct dynamics. The proper measurement of noise, therefore, requires the use of a specified method, with defined measurement bandwidth, weighting curve, and rectifier dynamics. The two main methods defined by current standards are A-weighting and ITU-R 468 (formerly known as CCIR weighting). A-weighting A-weighting uses a weighting curve based on equal-loudness contours that describe our hearing sensitivity to pure tones, but it turns out that the assumption that such contours would be valid for noise components was wrong. While the A-weighting curve peaks by about 2 dB around 2 kHz, it turns out that our sensitivity to noise peaks by some 12.2 dB at 6 kHz. ITU-R 468 weighting When measurements started to be used in reviews of consumer equipment in the late 1960s, it became apparent that they did not always correlate with what was heard. 
In particular, the introduction of Dolby B noise reduction on cassette recorders was found to make them sound a full 10 dB less noisy, yet they did not measure 10 dB better. Various new methods were then devised, including one which used a harsher weighting filter and a quasi-peak rectifier, defined as part of the German DIN 45500 Hi-Fi standard. This standard, no longer in use, attempted to lay down minimum performance requirements in all areas for High Fidelity reproduction. The introduction of FM radio, which also generates predominantly high-frequency hiss, also showed up the unsatisfactory nature of A-weighting, and the BBC Research Department undertook a research project to determine which of several weighting filter and rectifier characteristics gave results that were most in line with the judgment of a panel of listeners, using a wide variety of different types of noise. BBC Research Department Report EL-17 formed the basis of what became known as CCIR recommendation 468, which specified both a new weighting curve and a quasi-peak rectifier. This became the standard of choice for broadcasters worldwide, and it was also adopted by Dolby for measurements on its noise-reduction systems, which were rapidly becoming the standard in cinema sound, as well as in recording studios and the home. Though it better represents what we truly hear, ITU-R 468 noise weighting gives figures that are typically some 11 dB worse than A-weighted, a fact that brought resistance from marketing departments reluctant to put worse specifications on their equipment than the public had been used to. Dolby tried to get around this by introducing a version of its own called CCIR-Dolby, which incorporated a 6 dB shift into the result (and a cheaper average-reading rectifier), but this only confused matters, and was very much disapproved of by the CCIR. With the demise of the CCIR, the 468 standard is now maintained as ITU-R 468 by the International Telecommunication Union, and forms part of many national and international standards, in particular those of the IEC (International Electrotechnical Commission) and the BSI (British Standards Institution). It is the only way to measure noise that allows fair comparisons; and yet the flawed A-weighting has made a comeback in the consumer field recently, for the simple reason that it gives the lower figures that are considered more impressive by marketing departments. Signal-to-noise ratio and dynamic range Audio equipment specifications tend to include the terms signal-to-noise ratio and dynamic range, both of which have multiple definitions, sometimes treated as synonyms. The exact meaning must be specified along with the measurement. Analog Dynamic range used to mean the difference between maximum level and noise level, with maximum level defined as a clipping signal with a specified THD+N. The term has become corrupted by a tendency to refer to the dynamic range of CD players as meaning the noise level on a blank recording with no dither (in other words, just the analog noise content at the output). This is not particularly useful, especially since many CD players incorporate automatic muting in the absence of signal. Since the early 1990s various writers such as Julian Dunn have suggested that dynamic range be measured in the presence of a low-level test signal. Thus, any spurious signals caused by the test signal or distortion will not degrade the signal-to-noise ratio. This also addresses concerns about muting circuits. 
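To make the weighting and ratio arithmetic concrete, here is a rough illustrative sketch (our own, not taken from any cited standard): it applies the analytic IEC A-weighting magnitude response to synthetic white noise in the frequency domain and quotes the result as an A-weighted signal-to-noise ratio against a full-scale sine. The sample rate, noise level, and full-scale reference are assumptions, and a real measurement would also specify the rectifier dynamics discussed above; this sketch uses a plain RMS detector.

```python
# Illustrative A-weighted noise measurement. The analytic A-weighting magnitude
# formula is the standard IEC one; all signal levels here are assumed values.
import numpy as np

def a_weight_db(f):
    """A-weighting gain in dB at frequency f (Hz), per the IEC analytic formula."""
    f = np.asarray(f, dtype=float)
    ra = (12194**2 * f**4) / ((f**2 + 20.6**2)
        * np.sqrt((f**2 + 107.7**2) * (f**2 + 737.9**2))
        * (f**2 + 12194**2))
    return 20 * np.log10(np.maximum(ra, 1e-30)) + 2.00  # 0 dB at 1 kHz

fs = 48_000
noise = np.random.default_rng(0).normal(scale=1e-4, size=fs)  # 1 s of hiss (assumed level)
spectrum = np.fft.rfft(noise)
freqs = np.fft.rfftfreq(noise.size, d=1/fs)
weighted = np.fft.irfft(spectrum * 10**(a_weight_db(freqs) / 20), n=noise.size)

full_scale_rms = 1 / np.sqrt(2)               # RMS of a full-scale (amplitude 1) sine
noise_rms = np.sqrt(np.mean(weighted**2))     # plain RMS, not a quasi-peak detector
print(f"A-weighted SNR: {20 * np.log10(full_scale_rms / noise_rms):.1f} dB")
```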
Digital In 1999, Steven Harris and Clif Sanchez of Cirrus Logic published a white paper titled "Personal Computer Audio Quality Measurements" stating: In 2000 the AES released AES Information Document 6id-2000, which defined dynamic range as "20 times the logarithm of the ratio of the full-scale signal to the r.m.s. noise floor in the presence of signal, expressed in dB FS" with the following note: See also Audio quality measurement Noise Sound level meter ITU-R 468 noise weighting Noise measurement Headroom Weighting filter Equal-loudness contour Fletcher–Munson curves References External links Noise measurement briefing Audio electronics Broadcast engineering Sound measurements
Audio noise measurement
Physics,Mathematics,Engineering
1,532
323,141
https://en.wikipedia.org/wiki/Epigrams%20on%20Programming
"Epigrams on Programming" is an article by Alan Perlis published in 1982, for ACM's SIGPLAN journal. The epigrams are a series of short, programming-language-neutral, humorous statements about computers and programming, which are widely quoted. It first appeared in SIGPLAN Notices 17(9), September 1982. In epigram #54, Perlis coined the term "Turing tarpit", which he defined as a programming language where "everything is possible but nothing of interest is easy." References External links List of quotes (Yale) Full article text -- (including so-called "meta epigrams", numbers 122-130) Magazine articles Association for Computing Machinery
Epigrams on Programming
Technology
152
77,522,420
https://en.wikipedia.org/wiki/Disorders%20of%20diminished%20motivation
Disorders of diminished motivation (DDM) are a group of disorders involving diminished motivation and associated emotions. Many different terms have been used to refer to diminished motivation. Often however, a spectrum is defined encompassing apathy, abulia, and akinetic mutism, with apathy the least severe and akinetic mutism the most extreme. DDM can be caused by psychiatric disorders like depression and schizophrenia, brain injuries, strokes, and neurodegenerative diseases. Damage to the anterior cingulate cortex and to the striatum, which includes the nucleus accumbens and caudate nucleus and is part of the mesolimbic dopamine reward pathway, have been especially associated with DDM. Diminished motivation can also be induced by certain drugs, including antidopaminergic agents like antipsychotics, selective serotonin reuptake inhibitors (SSRIs), and cannabis, among others. DDM can be treated with dopaminergic and other activating medications, such as dopamine reuptake inhibitors, dopamine releasing agents, and dopamine receptor agonists, among others. These kinds of drugs have also been used by healthy people to improve motivation. A limitation of some medications used to increase motivation is development of tolerance to their effects. Definition Disorders of diminished motivation (DDM) is an umbrella term referring to a group of psychiatric and neurological disorders involving diminished capacity for motivation, will, and affect. A multitude of terms have been used to refer to DDM of varying severities and varieties, including apathy, abulia, akinetic mutism, athymhormia, avolition, amotivation, anhedonia, psychomotor retardation, affective flattening, akrasia, and psychic akinesia (auto-activation deficit or loss of psychic self-activation), among others. Other constructs, like fatigue, lethargy, and anergia, also overlap with the concept of DDM. Alogia (poverty of speech) and asociality (lack of social interest) are associated with DDM as well. Often however, a spectrum of DDM is defined encompassing apathy, abulia, and akinetic mutism, with apathy being the mildest form and akinetic mutism being the most severe or extreme form. Akinetic mutism involves alertness but absence of movement and speech due to profound lack of will. People with the condition are indifferent even to biologically relevant stimuli such as pain, hunger, and thirst. Causes Less extreme forms of DDM, for instance apathy or anhedonia, can be a symptom of psychiatric disorders and related conditions, like depression, schizophrenia, or drug withdrawal. More extreme forms of DDM, for instance severe apathy, abulia, or akinetic mutism, can be a result of traumatic brain injury (TBI), stroke, or neurodegenerative diseases like dementia or Parkinson's disease. Reduction in motivation and affect can also be induced by certain drugs, such as dopamine receptor antagonists including D2 receptor antagonists like antipsychotics (e.g., haloperidol) and metoclopramide and D1 receptor antagonists like ecopipam, dopamine-depleting agents like tetrabenazine and reserpine, dopaminergic neurotoxins like 6-hydroxydopamine (6-OHDA) and methamphetamine, serotonergic antidepressants like the selective serotonin reuptake inhibitors (SSRIs) and MAO-A-inhibiting monoamine oxidase inhibitors (MAOIs), and cannabis or cannabinoids (CB1 receptor agonists). Damage to a variety of brain areas has been implicated in DDM. 
However, damage to or reduced functioning of the anterior cingulate cortex (ACC) and striatum have been especially implicated in DDM. The striatum is part of the dopaminergic mesolimbic pathway, which connects the ventral tegmental area (VTA) of the midbrain to the nucleus accumbens (NAc) of the ventral striatum and basal ganglia. Strokes affecting other striatal and basal ganglia structures, like the caudate nucleus of the dorsal striatum, have also been associated with DDM. Treatment DDM, like abulia and akinetic mutism, can be treated with dopaminergic and other activating medications. These include psychostimulants and releasers or reuptake inhibitors of dopamine and/or norepinephrine like amphetamine, methylphenidate, bupropion, modafinil, and atomoxetine; D2-like dopamine receptor agonists like pramipexole, ropinirole, rotigotine, piribedil, bromocriptine, cabergoline, and pergolide; the dopamine precursor levodopa; and MAO-B-selective monoamine oxidase inhibitors (MAOIs) like selegiline and rasagiline, among others. Selegiline is also a catecholaminergic activity enhancer (CAE), and this may additionally or alternatively be involved in its pro-motivational effects. The dopamine D1 receptor appears to have an important role in motivation and reward. Centrally acting dopamine D1-like receptor agonists like tavapadon and razpipadon and D1 receptor positive modulators like mevidalen and glovadalen are under development for medical use, including treatment of Parkinson's disease and notably of dementia-related apathy. Centrally active catechol-O-methyltransferase inhibitors (COMTIs) like tolcapone, which are likewise dopaminergic agents, have been studied in the treatment of psychiatric disorders but not in the treatment of DDM. Genetic variants in catechol-O-methyltransferase (COMT) have been associated with motivation and apathy susceptibility, as well as with reward, mood, and other neuropsychological variables. Besides their use in people with DDM, psychostimulants and related agents have been used non-medically to enhance motivation in healthy people, for instance in academic contexts. This has provoked discussions on the ethics of such uses. A limitation of certain medications used to improve motivation, like psychostimulants, is development of tolerance to their effects. Rapid acute tolerance to amphetamines is believed to be responsible for the dissociation between their relatively short durations of action (~4 hours for main desired effects) and their much longer elimination half-lives (~10 hours) and durations in the body (~2 days). It appears that continually increasing or ascending concentration–time curves are beneficial for prolonging effects, which has resulted in administration multiple times per day and development of delayed- and extended-release formulations. Medication holidays and breaks can be helpful in resetting tolerance. Another possible limitation of amphetamine specifically is dopaminergic neurotoxicity, which might occur even at therapeutic doses. Besides medications, various psychological and physiological processes, including arousal, mood, expectancy effects (e.g., placebo), novelty, psychological stress or urgency, rewarding and aversive stimuli, availability of rewards, addiction, and sleep amount, among others, can also context- and/or stimulus-dependently modulate or enhance brain dopamine signaling and motivation to varying degrees. Relatedly, the psychostimulant effects of amphetamine are greatly potentiated by environmental novelty in animals. 
Related concepts Attention deficit hyperactivity disorder (ADHD) often involves motivational deficits, and the ADHD academic Russell Barkley has referred to the condition as a "motivational deficit disorder" in various publications and presentations. However, ADHD has perhaps more accurately been conceptualized as a disorder of executive function and of directing or allocating attention and motivation rather than a global deficiency in these processes. People with ADHD are often highly motivated towards stimuli that interest them, not uncommonly experiencing a flow-like state called hyperfocus while engaging such stimuli. In any case, as with management of DDM, psychostimulants and other catecholaminergic agents are used in people with ADHD to treat their symptoms, including difficulties with attention, executive control, and motivation, and are clinically effective for such purposes. Amphetamines in the treatment of ADHD appear to have among the largest effect sizes in terms of effectiveness of any interventions (medications or forms of psychotherapy) used in the management of psychiatric disorders generally. DDM (and ADHD) should not be confused with "motivational deficiency disorder" ("MoDeD"; "extreme laziness"), a fake or spoof disease created for humorous purposes in 2006 to raise awareness about disease mongering, overdiagnosis, and medicalization. References Emotions Motivation Neuropsychology Pro-motivational agents Psychopathological syndromes Symptoms and signs of mental disorders Symptoms and signs: Nervous system
Disorders of diminished motivation
Biology
1,923
297,519
https://en.wikipedia.org/wiki/Flipped%20SU%285%29
The Flipped SU(5) model is a grand unified theory (GUT) first contemplated by Stephen Barr in 1982, and by Dimitri Nanopoulos and others in 1984. Ignatios Antoniadis, John Ellis, John Hagelin, and Dimitri Nanopoulos developed the supersymmetric flipped SU(5), derived from the deeper-level superstring. In 2010, efforts to explain the theoretical underpinnings for observed neutrino masses were being developed in the context of supersymmetric flipped SU(5). Flipped SU(5) is not a fully unified model, because the U(1)Y factor of the Standard Model gauge group is within the U(1) factor of the GUT group. The addition of states below Mx in this model, while solving certain threshold correction issues in string theory, makes the model merely descriptive, rather than predictive. The model The flipped SU(5) model states that the gauge group is SU(5) × U(1). Fermions form three families, each consisting of the representations 5̄ for the lepton doublet, L, and the up quarks, uc; 10 for the quark doublet, Q, the down quark, dc, and the right-handed neutrino, νc; and 1 for the charged leptons, ec. This assignment includes three right-handed neutrinos, which have never been observed, but are often postulated to explain the lightness of the observed neutrinos and neutrino oscillations. There is also a 10 and/or 10̄ called the Higgs fields which acquire a VEV, yielding the spontaneous symmetry breaking down to the Standard Model gauge group SU(3) × SU(2) × U(1)Y. The representations transform under this subgroup as reducible representations as follows: 5̄ (uc and l), 10 (q, dc and νc), 1 (ec). Comparison with the standard SU(5) The name "flipped" arose in comparison to the "standard" Georgi–Glashow model, in which uc and ec are assigned to the 10 representation and dc to the 5̄; the flip exchanges uc with dc and νc with ec. In comparison with the standard SU(5), the flipped SU(5) can accomplish the spontaneous symmetry breaking using Higgs fields of dimension 10, while the standard SU(5) typically requires a 24-dimensional Higgs. The sign convention for the U(1) charge varies from article/book to article. The hypercharge Y/2 is a linear combination (sum) of the following: There are also the additional fields 5 and 5̄ containing the electroweak Higgs doublets. Calling the representations 5̄ and 10, for example, is purely a physicist's convention, not a mathematician's convention, where representations are either labelled by Young tableaux or Dynkin diagrams with numbers on their vertices, and is a standard used by GUT theorists. Since the relevant homotopy group is trivial, this model does not predict monopoles. See 't Hooft–Polyakov monopole. Minimal supersymmetric flipped SU(5) Spacetime The superspace extension of Minkowski spacetime Spatial symmetry SUSY over Minkowski spacetime with R-symmetry Gauge symmetry group Global internal symmetry (matter parity) not related to in any way for this particular model Vector superfields Those associated with the gauge symmetry Chiral superfields As complex representations: Superpotential A generic invariant renormalizable superpotential is a (complex) invariant cubic polynomial in the superfields which has an R-charge of 2. It is a linear combination of the following terms: The second column expands each term in index notation (neglecting the proper normalization coefficient). and are the generation indices. The coupling has coefficients which are symmetric in and . In those models without the optional sterile neutrinos, we add the nonrenormalizable couplings instead. These couplings do break the R-symmetry. See also Flipped SO(10) References Grand Unified Theory
Flipped SU(5)
Physics
731
65,790,632
https://en.wikipedia.org/wiki/Call%20setup
In telecommunication, call setup is the process of establishing a virtual circuit across a telecommunications network. Call setup is typically accomplished using a signaling protocol. The term call set-up time has the following meanings: The overall length of time required to establish a circuit-switched call between users. For data communication, the overall length of time required to establish a circuit-switched call between terminals; i.e., the time from the initiation of a call request to the beginning of the call message. Note: Call set-up time is the summation of: (a) call request time—the time from initiation of a calling signal to the delivery to the caller of a proceed-to-select signal; (b) selection time—the time from the delivery of the proceed-to-select signal until all the selection signals have been transmitted; and (c) post selection time—the time from the end of the transmission of the selection signals until the delivery of the call-connected signal to the originating terminal. Success rate In telecommunications, the call setup success rate (CSSR) is the fraction of the attempts to make a call that result in a connection to the dialled number (for various reasons, not all call attempts end with a connection to the dialled number). This fraction is usually measured as a percentage of all call attempts made. In telecommunications a call attempt invokes a call setup procedure, which, if successful, results in a connected call. A call setup procedure may fail due to a number of technical reasons. Such calls are classified as failed call attempts. In many practical cases, this definition needs to be further expanded with a number of detailed specifications describing which calls exactly are counted as successfully set up and which not. This is determined to a great degree by the stage of the call setup procedure at which a call is counted as connected. In modern communications systems, such as cellular (mobile) networks, the call setup procedure may be very complex and the point at which a call is considered successfully connected may be defined in a number of ways, thus influencing the way the call setup success rate is calculated. If a call is connected successfully but the dialled number is busy, the call is counted as successful. Another term, used to denote call attempts that fail during the call setup procedure, is blocked calls. The call setup success rate in conventional (so-called land-line) networks is extremely high, significantly above 99.9%. In mobile communication systems using radio channels the call setup success rate is lower and, for commercial networks, may range between 90% and 98% or higher. The main reasons for unsuccessful call setups in mobile networks are lack of radio coverage (either in the downlink or the uplink), radio interference between different subscribers, imperfections in the functioning of the network (such as failed call setup redirect procedures), overload of the different elements of the network (such as cells), etc. The call setup success rate is one of the key performance indicators (KPIs) used by the network operators to assess the performance of their networks. It is assumed to have direct influence on the customer satisfaction with the service provided by the network and its operator. The call setup success rate is usually included, together with other technical parameters of the network, in a key performance indicator known as service accessibility. 
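As a minimal illustration of the bookkeeping (the log format and outcome names here are hypothetical, not from any telecom standard), a CSSR calculation counts busy outcomes as successful setups and blocked calls as failures, as described above:

```python
# Illustrative CSSR computation over a hypothetical call-attempt log.
# Per the definition above, a "busy" result still counts as a successful setup,
# while calls that fail during setup ("blocked") count as failed attempts.
call_attempts = [
    {"id": 1, "outcome": "connected"},
    {"id": 2, "outcome": "busy"},      # setup succeeded; called party was busy
    {"id": 3, "outcome": "blocked"},   # failed during the setup procedure
    {"id": 4, "outcome": "connected"},
]
successful = sum(a["outcome"] in ("connected", "busy") for a in call_attempts)
cssr = 100 * successful / len(call_attempts)
print(f"CSSR = {cssr:.1f}%")  # CSSR = 75.0%
```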
The operators of telecommunication networks aim at increasing the call setup success rate as much as practical and affordable. In mobile networks this is achieved by improving radio coverage, expanding the capacity of the network and optimising the performance of its elements, all of which may require considerable effort and significant investments on the part of the network operator. See also Clearing (telecommunications) References Communications protocols Computer networking Teletraffic
Call setup
Technology,Engineering
757
23,552,810
https://en.wikipedia.org/wiki/Half-precision%20floating-point%20format
In computing, half precision (sometimes called FP16 or float16) is a binary floating-point computer number format that occupies 16 bits (two bytes in modern computers) in computer memory. It is intended for storage of floating-point values in applications where higher precision is not essential, in particular image processing and neural networks. Almost all modern uses follow the IEEE 754-2008 standard, where the 16-bit base-2 format is referred to as binary16, and the exponent uses 5 bits. This can express values in the range ±65,504, with the minimum value above 1 being 1 + 1/1024. Depending on the computer, half-precision can be over an order of magnitude faster than double precision, e.g. 550 PFLOPS for half-precision vs 37 PFLOPS for double precision on one cloud provider. History Several earlier 16-bit floating point formats have existed including that of Hitachi's HD61810 DSP of 1982 (a 4-bit exponent and a 12-bit mantissa), Thomas J. Scott's WIF of 1991 (5 exponent bits, 10 mantissa bits) and the 3dfx Voodoo Graphics processor of 1995 (same as Hitachi). ILM was searching for an image format that could handle a wide dynamic range, but without the hard drive and memory cost of single or double precision floating point. The hardware-accelerated programmable shading group led by John Airey at SGI (Silicon Graphics) used the s10e5 data type in 1997 as part of the 'bali' design effort. This is described in a SIGGRAPH 2000 paper (see section 4.3) and further documented in US patent 7518615. It was popularized by its use in the open-source OpenEXR image format. Nvidia and Microsoft defined the half datatype in the Cg language, released in early 2002, and implemented it in silicon in the GeForce FX, released in late 2002. However, hardware support for accelerated 16-bit floating point was later dropped by Nvidia before being reintroduced in the Tegra X1 mobile GPU in 2015. The F16C extension in 2012 allows x86 processors to convert half-precision floats to and from single-precision floats with a machine instruction. IEEE 754 half-precision binary floating-point format: binary16 The IEEE 754 standard specifies a binary16 as having the following format: Sign bit: 1 bit Exponent width: 5 bits Significand precision: 11 bits (10 explicitly stored) The format is laid out as follows: The format is assumed to have an implicit lead bit with value 1 unless the exponent field is stored with all zeros. Thus, only 10 bits of the significand appear in the memory format but the total precision is 11 bits. In IEEE 754 parlance, there are 10 bits of significand, but there are 11 bits of significand precision (log₁₀(2¹¹) ≈ 3.311 decimal digits, or 4 digits ± slightly less than 5 units in the last place). Exponent encoding The half-precision binary floating-point exponent is encoded using an offset-binary representation, with the zero offset being 15; also known as exponent bias in the IEEE 754 standard. Emin = 00001₂ − 01111₂ = −14 Emax = 11110₂ − 01111₂ = 15 Exponent bias = 01111₂ = 15 Thus, as defined by the offset binary representation, in order to get the true exponent the offset of 15 has to be subtracted from the stored exponent. The stored exponents 00000₂ and 11111₂ are interpreted specially. The minimum strictly positive (subnormal) value is 2⁻²⁴ ≈ 5.96 × 10⁻⁸. The minimum positive normal value is 2⁻¹⁴ ≈ 6.10 × 10⁻⁵. The maximum representable value is (2 − 2⁻¹⁰) × 2¹⁵ = 65504. 
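A small decoder makes these encoding rules concrete. The following sketch is illustrative (our own, not a reference implementation of the standard); it splits a 16-bit pattern into its fields and applies the bias-15 rules, including the special all-zeros and all-ones exponent values:

```python
# Illustrative decoder for IEEE 754 binary16 bit patterns (not a reference
# implementation): extracts sign, 5-bit exponent, and 10-bit significand,
# then applies the bias-15 exponent rules described above.
def decode_binary16(bits: int) -> float:
    sign = -1.0 if bits >> 15 else 1.0
    exponent = (bits >> 10) & 0x1F          # 5-bit stored exponent
    fraction = bits & 0x3FF                 # 10 stored significand bits
    if exponent == 0:                       # zero or subnormal: no implicit 1
        return sign * fraction * 2.0**-24
    if exponent == 0x1F:                    # all ones: infinity or NaN
        return sign * float("inf") if fraction == 0 else float("nan")
    # normal number: implicit leading 1, true exponent = stored - 15
    return sign * (1 + fraction / 1024) * 2.0**(exponent - 15)

assert decode_binary16(0x3C00) == 1.0        # 0 01111 0000000000
assert decode_binary16(0x7BFF) == 65504.0    # largest finite value
assert decode_binary16(0x0001) == 2.0**-24   # smallest subnormal
```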
Half precision examples These examples are given in bit representation of the floating-point value. This includes the sign bit, (biased) exponent, and significand. By default, 1/3 rounds down like for double precision, because of the odd number of bits in the significand. The bits beyond the rounding point are ... which is less than 1/2 of a unit in the last place. Precision limitations 65520 and larger numbers round to infinity. This is for round-to-even; other rounding strategies will change this cut-off. ARM alternative half-precision ARM processors support (via a floating-point control register bit) an "alternative half-precision" format, which does away with the special case for an exponent value of 31 (11111₂). It is almost identical to the IEEE format, but there is no encoding for infinity or NaNs; instead, an exponent of 31 encodes normalized numbers in the range 65536 to 131008. Uses of half precision Half precision is used in several computer graphics environments to store pixels, including MATLAB, OpenEXR, JPEG XR, GIMP, OpenGL, Vulkan, Cg, Direct3D, and D3DX. The advantage over 8-bit or 16-bit integers is that the increased dynamic range allows for more detail to be preserved in highlights and shadows for images, and avoids gamma correction. The advantage over 32-bit single-precision floating point is that it requires half the storage and bandwidth (at the expense of precision and range). Half precision can be useful for mesh quantization. Mesh data is usually stored using 32-bit single-precision floats for the vertices, however in some situations it is acceptable to reduce the precision to only 16-bit half-precision, requiring only half the storage at the expense of some precision. Mesh quantization can also be done with 8-bit or 16-bit fixed precision depending on the requirements. Hardware and software for machine learning or neural networks tend to use half precision: such applications usually do a large amount of calculation, but don't require a high level of precision. Due to hardware typically not supporting 16-bit half-precision floats, neural networks often use the bfloat16 format, which is the single precision float format truncated to 16 bits. If the hardware has instructions to compute half-precision math, it is often faster than single or double precision. If the system has SIMD instructions that can handle multiple floating-point numbers within one instruction, half precision can be twice as fast by operating on twice as many numbers simultaneously. Support by programming languages Zig provides support for half precision with its f16 type. .NET 5 introduced half precision floating point numbers with the System.Half standard library type. , no .NET language (C#, F#, Visual Basic, and C++/CLI and C++/CX) has literals (e.g. in C#, 1.0f has type System.Single or 1.0m has type System.Decimal) or a keyword for the type. Swift introduced half-precision floating point numbers in Swift 5.3 with the Float16 type. OpenCL also supports half-precision floating point numbers with the half datatype on IEEE 754-2008 half-precision storage format. , Rust is currently working on adding a new f16 type for IEEE half-precision 16-bit floats. Julia provides support for half-precision floating point numbers with the Float16 type. C++ has supported half precision since C++23 via the std::float16_t type. GCC already implements support for it. Hardware support Several versions of the ARM architecture have support for half precision. 
Support for half precision in the x86 instruction set is specified in the F16C instruction set extension, first introduced in 2009 by AMD and fairly broadly adopted by AMD and Intel CPUs by 2012. This was further extended with the AVX-512_FP16 instruction set extension implemented in the Intel Sapphire Rapids processor. On RISC-V, the Zfh and Zfhmin extensions provide hardware support for 16-bit half precision floats. The Zfhmin extension is a minimal alternative to Zfh. On Power ISA, VSX and the not-yet-approved SVP64 extension provide hardware support for 16-bit half-precision floats as of PowerISA v3.1B and later. Support for half precision on IBM Z is part of the Neural-network-processing-assist facility that IBM introduced with Telum. IBM refers to half precision floating point data as NNP-Data-Type 1 (16-bit). See also bfloat16 floating-point format: Alternative 16-bit floating-point format with 8 bits of exponent and 7 bits of mantissa Minifloat: small floating-point formats IEEE 754: IEEE standard for floating-point arithmetic (IEEE 754) ISO/IEC 10967, Language Independent Arithmetic Primitive data type RGBE image format Power Management Bus § Linear11 Floating Point Format References Further reading Khronos Vulkan signed 16-bit floating point format External links Minifloats (in Survey of Floating-Point Formats) OpenEXR site Half precision constants from D3DX OpenGL treatment of half precision Fast Half Float Conversions Analog Devices variant (four-bit exponent) C source code to convert between IEEE double, single, and half precision can be found here Java source code for half-precision floating-point conversion Half precision floating point for one of the extended GCC features Binary arithmetic Floating point types
Half-precision floating-point format
Mathematics
1,988
3,225,450
https://en.wikipedia.org/wiki/Ceramic%20knife
A ceramic knife is a knife with a ceramic blade typically made from zirconium dioxide (ZrO2; also known as zirconia), rather than the steel used for most knives. Ceramic knife blades are usually produced through the dry-pressing and firing of powdered zirconia using solid-state sintering. The blades typically score 8.5 on the Mohs scale of mineral hardness, compared to 4.5 for normal steel, 7.5 to 8 for hardened steel, and 10 for diamond. The resultant blade has a hard edge that stays sharp for much longer than conventional steel blades. However, the blade is brittle, subject to chipping, and will break rather than flex if twisted. The ceramic blade is sharpened by grinding the edges with a diamond-dust-coated grinding wheel. Zirconium oxide Zirconium oxide is used because it exists in several different crystalline forms: zirconia can be monoclinic, tetragonal or cubic. Cooling to the monoclinic phase after sintering causes a large volume change, which often causes stress fractures in pure zirconia. Additives such as magnesia, calcia and yttria are used in manufacturing the knife material to stabilize the high-temperature phases and minimize this volume change. The highest strength and toughness are produced by the addition of 3 mol% yttrium oxide, yielding partially stabilized zirconia. This material consists of a mixture of tetragonal and cubic phases with a bending strength of nearly . Small cracks allow phase transformations to occur, which essentially close the cracks and prevent catastrophic failure, resulting in a relatively tough ceramic material, sometimes known as TTZ (transformation-toughened zirconia). Properties Ceramic knives are substantially harder than steel knives, will not corrode in harsh environments, are non-magnetic, and do not conduct electricity at room temperature. Because of their resistance to strong acid and caustic substances, and their ability to retain a cutting edge longer than steel knives, ceramic knives are suitable for slicing boneless meat, vegetables, fruit and bread. Since ceramics are brittle, blades may break if dropped on a hard surface, although improved manufacturing processes have reduced this risk. They are also unsuitable for chopping through hard foods such as bones or frozen foods, and for applications which require prying, which may cause breaking or chipping. Several brands offer either a black-coloured or a designed blade made through an additional hot isostatic pressing step, which increases toughness. Sharpening and general care Unlike a steel blade that benefits from regular honing and resharpening in order to keep a sharp edge, a much harder ceramic blade will stay sharp and retain its cutting edge for much longer—at least ten times longer according to tests on a particular knife. However, the hardness of the ceramic material also makes it difficult to resharpen. Consequently, although a ceramic knife does not need regular sharpening in the same way as steel, when its blade eventually becomes blunt or chips, specialized sharpening services are required for the ceramic edge. References External links Knives Ceramic engineering Zirconium dioxide
Ceramic knife
Engineering
643