Dataset schema (column name, type, and observed range):
id — int64, values 39 to 79M
url — string, length 31 to 227 characters
text — string, length 6 to 334k characters
source — string, length 1 to 150 characters
categories — list, 1 to 6 items
token_count — int64, values 3 to 71.8k
subcategories — list, 0 to 30 items
7,388,768
https://en.wikipedia.org/wiki/Hilkka%20Rantasepp%C3%A4-Helenius
Hilkka Rantaseppä-Helenius (1925–1975) was a Finnish astronomer. Rantaseppä-Helenius began studying mathematics in hopes of becoming a teacher, but the Finnish astronomer Yrjö Väisälä inspired her to become an astronomer instead. As a farmer's daughter, she was among the few astronomers with the privilege of an observatory on her own land. Rantaseppä-Helenius worked on observing minor planets. She worked as an assistant at Tuorla Observatory from 1956 to 1962, became an observer in 1962 when a vacancy became available, and remained in that post until 1975. In 1963 she was also involved in building the Kevola Observatory, erected by the Tähtitieteellis-optillinen seura (Astronomy-Optical Society) on her own property. Rantaseppä-Helenius died at age 50 in an accident. The Florian asteroid 1530 Rantaseppä was named in her memory. References 1925 births 1975 deaths Finnish astronomers Women astronomers Astronomy-optics society
Hilkka Rantaseppä-Helenius
[ "Astronomy" ]
207
[ "Women astronomers", "Astronomers" ]
7,389,093
https://en.wikipedia.org/wiki/Museum%20Erotica
Museum Erotica was a sex museum in Copenhagen, Denmark, located just off Strøget, Copenhagen's main shopping street. The museum was founded by director/photographer Ole Ege and business manager Kim Clausen. It originally opened in 1992 at Vesterbrogade 31 in Copenhagen. On May 14, 1994, it reopened at Købmagergade 24, where it remained until it closed in March 2009, following the sudden, unexpected death of Kim Clausen in 2008 and the subsequent financial recession. The museum claimed to have had one million visitors. The museum often described itself as having "illustrated some of the sex life of Homo sapiens", which reflected its historical and holistic approach to its exhibitions. The walk through the museum took a visitor through many exhibits in roughly chronological order. A good deal of written commentary in English and Danish explained and augmented many of the items on display. There were extensive exhibitions on the beginnings of erotic photography, a room with Playboy centerfolds and other American pinups, and a special exhibition on Marilyn Monroe, among other things. One of the final displays in Museum Erotica was a small room with a sofa opposite a large wall of small television screens, each showing a different pornographic video. The selections reflected the eclectic nature of the museum's displays. See also List of sex museums References Museums established in 2011 Sex museums Museum Erotica
Museum Erotica
[ "Biology" ]
286
[ "Behavior", "Sexuality stubs", "Sexuality" ]
7,389,351
https://en.wikipedia.org/wiki/MEMS%20electrothermal%20actuator
A MEMS electrothermal actuator is a microelectromechanical device that typically generates motion by thermal expansion. It relies on the equilibrium between the thermal energy produced by an applied electric current and the heat dissipated into the environment or the substrate. Its working principle is based on resistive heating. Fabrication processes for electrothermal actuators include deep X-ray lithography, LIGA (lithography, electroplating, and molding), and deep reactive ion etching (DRIE). These techniques allow for the creation of devices with high aspect ratios. Additionally, these actuators are relatively easy to fabricate and are compatible with standard Integrated Circuit (IC) and MEMS fabrication methods. Electrothermal actuators can be utilized in different kinds of MEMS devices, such as microgrippers, micromirrors, tunable inductors, and resonators. Types of MEMS electrothermal actuators Generally, there are three types of MEMS electrothermal actuators. The first is the asymmetric thermal actuator, also known as the hot-and-cold-arm or U-shaped actuator. Its working principle is based on the unequal thermal expansion of its components. The second type is the symmetric thermal actuator, also known as the chevron or bent-beam actuator. Its operation is based on the total thermal expansion, and its output motion is limited to one direction. The third type of MEMS electrothermal actuator is the bimorph actuator. Its motion relies on the differing coefficients of thermal expansion of the materials used in its fabrication. Asymmetric (hot-and-cold-arm actuator, U-Shaped) An asymmetric MEMS electrothermal actuator, often referred to as a hot-and-cold-arm or U-shaped thermal actuator, consists of a narrow "hot" arm and a wider "cold" arm connected in series in an electrical circuit. When current flows through the actuator, Joule heating occurs, producing more heat in the narrow arm due to its higher electrical resistance and resulting in greater thermal expansion compared to the wide arm. This differential thermal expansion creates a bending moment, causing the actuator to bend towards the cold arm. This design allows for precise actuation and is suitable for various MEMS applications, including micro- and nano-manipulation tools like microgrippers and micropositioners. These tools are essential for tasks such as microassembly, biological cell manipulation, and material characterization, offering advantages such as low driving voltages and easy control. Various microgripper designs have been developed to enhance performance, including different arm widths and lengths, electro-thermo-compliant actuators, three-beam actuators, folded and meander heaters, sandwiched structures, inclined arms, and curved hot arms. These actuators are used in applications requiring precise control of temperature and force, such as handling fragile micro-particles and single-cell manipulation. Additionally, they are employed in switching mechanisms, optical devices, and bi-directional actuators for applications like RF MEMS switches and micro-positioning platforms, providing larger displacement ranges and improved functionality. Symmetric (Chevron, bent beam) The symmetric or Chevron actuator, also known as the V-shape or bent-beam actuator, is a widely used in-plane electrothermal actuator. It features a V-shaped design but can also be found in other shapes. Unlike the differential expansion in hot-and-cold-arm actuators, the Chevron actuator relies on the total thermal expansion for actuation; a simplified geometric model is sketched below.
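The following is a minimal sketch, not from the article, of a rigid-link geometric approximation for a chevron actuator's apex displacement: the two slanted beams are assumed to stay straight and simply lengthen by free thermal expansion while the anchors stay fixed. All numbers are illustrative assumptions (silicon CTE, a 400 K average temperature rise).

```python
import math

def chevron_tip_displacement(L, theta_deg, alpha, dT):
    """Rigid-link estimate of apex displacement for a V-beam (chevron)
    electrothermal actuator.

    Assumes the beams remain straight and lengthen only by free thermal
    expansion; the real device also bends, so this is a rough sketch,
    not the full transcendental deflection model.

    L         -- beam length from anchor to apex (m)
    theta_deg -- pre-bending angle of each beam (degrees)
    alpha     -- coefficient of thermal expansion (1/K)
    dT        -- average temperature rise from Joule heating (K)
    """
    theta = math.radians(theta_deg)
    L_hot = L * (1.0 + alpha * dT)   # thermally expanded beam length
    x = L * math.cos(theta)          # anchor-to-apex horizontal span (fixed)
    y0 = L * math.sin(theta)         # initial apex offset
    y1 = math.sqrt(L_hot**2 - x**2)  # apex offset after expansion
    return y1 - y0

# Example: 200 um silicon beams, 2 degree pre-bend, 400 K temperature rise
d = chevron_tip_displacement(L=200e-6, theta_deg=2.0, alpha=2.6e-6, dT=400.0)
print(f"estimated apex displacement: {d*1e6:.2f} um")
```

Even this crude model shows the geometric amplification at work: a free expansion of only about 0.2 μm yields several micrometres of apex motion, and shrinking the pre-bend angle increases the amplification, consistent with the buckling trade-off described next.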
The Chevron actuator consists of two equal slanted beams connected at an apex and anchored to the substrate, forming a single conduction path. When current passes through the beams, resistive heating causes thermal expansion, pushing the apex forward. A comprehensive deflection model for this actuator involves solving a transcendental function numerically to determine the tip displacement, which is influenced by factors such as beam length, pre-bending angle, and temperature increase. The critical parameters include the beam length, pre-bending angle, and thickness. Smaller inclination angles yield larger displacements but risk out-of-plane buckling and fabrication issues. The stiffness and output force can be increased by stacking multiple beams. Chevron actuators are versatile, being used in MEMS applications like micro-switches, microgrippers, and material characterization tools. They can produce substantial gripping force but with limited lateral displacement. To amplify displacement, mechanical amplifiers are often used. Applications include pick-and-place operations for nanomaterials, biological cell manipulation, and RF MEMS switches, where the actuator's stability and high force are advantageous. Variants like Z-shape and kink actuators offer alternative designs for specific needs, such as larger displacement or easier fabrication. Cascaded Chevron actuators enhance displacement further by connecting multiple stages, albeit with increased buckling risk. Applications include micro-engines and advanced microgrippers. These actuators provide significant advantages over other types due to their rectilinear motion, high output force, and low driving voltage, making them suitable for a wide range of precise, small-scale tasks. Bimorph The bimorph design is a prominent type of electrothermal actuator consisting of two or more layers of different materials with different coefficients of thermal expansion (CTE). When subjected to thermal stimuli, the differential expansion causes the actuator to bend, producing out-of-plane displacement. This makes bimorph actuators ideal for applications where in-plane actuators are unsuitable, offering a broad range of applications. The deflection mechanism depends on material properties, such as Young's modulus and the CTE mismatch, as well as the thickness ratio of the layers and the beam's geometrical parameters. A basic bimorph cantilever consists of two layers: one with a high CTE and another with a low CTE. Joule heating induces more expansion in the high-CTE layer, causing the structure to bend towards the low-CTE layer. The theoretical models for the behavior of bimorph actuators, such as tip deflection and output force, are well established. For a simple two-layer cantilever, the curvature due to thermal expansion mismatch can be calculated using formulas involving the temperature change, CTE, width, thickness, and Young's modulus of each layer. The choice of materials for bimorph actuators is diverse, with metals and polymers commonly used for high-CTE layers, and dielectrics or semiconductors for low-CTE layers. Recent advancements include the use of carbon materials like graphene, which has a negative CTE, and graphene/polymer composites. Although bimorph actuators are typically designed for out-of-plane actuation due to the planar deposition of layers, innovative designs such as the "vertical bimorph" and lateral actuators have been developed to achieve in-plane actuation using techniques like angled electron-beam evaporation and post-CMOS micromachining.
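One such closed-form result is Timoshenko's classical bimetal-strip formula for the curvature of a two-layer cantilever; the sketch below implements it. The formula itself is standard, but the material values in the example (an aluminium-on-silicon beam) are illustrative assumptions, not taken from the article.

```python
def bimorph_curvature(dT, a1, a2, t1, t2, E1, E2):
    """Timoshenko's curvature for a two-layer thermal bimorph.

    dT     -- temperature change (K)
    a1, a2 -- CTEs of layer 1 (bottom) and layer 2 (top) (1/K)
    t1, t2 -- layer thicknesses (m)
    E1, E2 -- Young's moduli (Pa)
    Returns curvature kappa (1/m); the beam bends toward the low-CTE layer.
    """
    m = t1 / t2   # thickness ratio
    n = E1 / E2   # modulus ratio
    h = t1 + t2   # total thickness
    num = 6.0 * (a2 - a1) * dT * (1.0 + m) ** 2
    den = h * (3.0 * (1.0 + m) ** 2 + (1.0 + m * n) * (m * m + 1.0 / (m * n)))
    return num / den

def tip_deflection(kappa, L):
    """Small-deflection tip displacement of a cantilever bent to uniform curvature."""
    return 0.5 * kappa * L * L

# Illustrative aluminium-on-silicon cantilever, 100 K temperature rise
kappa = bimorph_curvature(dT=100.0, a1=2.6e-6, a2=23e-6,  # Si and Al CTEs
                          t1=1e-6, t2=1e-6,               # 1 um each layer
                          E1=170e9, E2=70e9)              # Young's moduli
print(f"tip deflection of a 300 um beam: {tip_deflection(kappa, 300e-6)*1e6:.1f} um")
```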
Bimorph actuators find applications in various fields. In micromanipulation, conventional bimorph actuators are less feasible for in-plane microgrippers, but novel designs like a four-finger microgripper provide stable and reliable gripping by curling upwards when open. In micromirrors, bimorph actuators enable large displacement with low power consumption, ideal for tilting and piston motion in applications like projection displays, optical switches, barcode readers, biomedical imaging, tunable lasers, spectroscopy, and adaptive optics. They are also used in atomic force microscopy (AFM) and scanning probe nanolithography (SPN), offering nanometer-scale resolution imaging and efficient patterning. Additionally, bimorph actuators are utilized in tunable RF devices due to their precise control and actuation capabilities. However, challenges such as shear stress at layer interfaces must be managed to ensure the longevity of bimorph devices. Advantages Electrothermal actuators offer several advantages over other types of actuators, making them valuable components for MEMS. They operate with relatively low driving voltages yet can generate large forces and displacements, either parallel or perpendicular to the substrate. Unlike actuators that rely on electrostatic or magnetic fields, electrothermal actuators are suitable for manipulating biological samples and electronic chips. These actuators are also easy to control, as they do not exhibit significant hysteresis like piezoresistive and shape memory alloy (SMA) actuators. Electrothermal actuators are scalable in size and typically have a more compact structure compared to electrostatic actuators, which use large arrays of comb drives, or electromagnetic and SMA actuators, which are challenging to implement on a small scale. They are versatile in their operating environments, functioning well in air, vacuum, dusty conditions, liquid media, and under the electron beam in scanning electron microscopy (SEM). However, electrothermal actuators generally have low switching speeds due to the large time constants of thermal processes. Despite this, high-frequency thermal actuation has been demonstrated. The method of electrothermal excitation is also attractive for actuation in resonance mode, particularly for microcantilever-based sensing and probing applications. MEMS resonators using this method have shown high-quality factors and wide frequency tuning ranges. Other types of MEMS Actuators Electrostatic — parallel plate or comb drive Piezoelectric Magnetic Thermostatic — linear motion, paraffin wax drive See also MEMS magnetic actuator MEMS electrostatic actuator References Further reading Experimental and numerical study of MEMS electrothermal actuators: Comparing dynamic behavior and heat transfer process in vacuum and non-vacuum environments External links Micro-particles manipulation and sorting Electrothermal actuator simulation on Comsol Actuators Materials science Nanoelectronics Microtechnology
MEMS electrothermal actuator
[ "Physics", "Materials_science", "Engineering" ]
2,098
[ "Applied and interdisciplinary physics", "Microtechnology", "Materials science", "Nanoelectronics", "nan", "Nanotechnology" ]
7,389,796
https://en.wikipedia.org/wiki/HMG-CoA
β-Hydroxy β-methylglutaryl-CoA (HMG-CoA), also known as 3-hydroxy-3-methylglutaryl coenzyme A, is an intermediate in the mevalonate and ketogenesis pathways. It is formed from acetyl-CoA and acetoacetyl-CoA by HMG-CoA synthase. The research of Minor J. Coon and Bimal Kumar Bachhawat in the 1950s at the University of Illinois led to its discovery. HMG-CoA is a metabolic intermediate in the metabolism of the branched-chain amino acids, which include leucine, isoleucine, and valine. Its immediate precursors are β-methylglutaconyl-CoA (MG-CoA) and β-hydroxy β-methylbutyryl-CoA (HMB-CoA). HMG-CoA reductase catalyzes the conversion of HMG-CoA to mevalonic acid, a necessary step in the biosynthesis of cholesterol. Biosynthesis Mevalonate pathway Mevalonate synthesis begins with the beta-ketothiolase-catalyzed Claisen condensation of two molecules of acetyl-CoA to produce acetoacetyl-CoA. The following reaction joins acetyl-CoA and acetoacetyl-CoA to form HMG-CoA, a process catalyzed by HMG-CoA synthase. In the final step of mevalonate biosynthesis, HMG-CoA reductase, an NADPH-dependent oxidoreductase, catalyzes the conversion of HMG-CoA into mevalonate; this is the primary regulatory point in the pathway. Mevalonate serves as the precursor to the isoprenoid groups that are incorporated into a wide variety of end products, including cholesterol in humans. Ketogenesis pathway In ketogenesis, HMG-CoA lyase cleaves HMG-CoA into acetyl-CoA and acetoacetate. See also Steroidogenic enzyme References Thioesters of coenzyme A
HMG-CoA
[ "Chemistry", "Biology" ]
433
[ "Biochemistry stubs", "Biotechnology stubs", "Biochemistry" ]
7,391,012
https://en.wikipedia.org/wiki/Combes%20quinoline%20synthesis
The Combes quinoline synthesis is a chemical reaction that was first reported by Combes in 1888. Further studies and reviews of the Combes quinoline synthesis and its variations have been published by Alyamkina et al., Bergstrom and Franklin, Born, and Johnson and Mathews. The Combes quinoline synthesis is often used to prepare the 2,4-substituted quinoline backbone and is unique in that it uses a β-diketone substrate, which distinguishes it from other quinoline preparation methods, such as the Conrad-Limpach synthesis and the Doebner reaction. It involves the condensation of unsubstituted anilines (1) with β-diketones (2) to form substituted quinolines (4) after an acid-catalyzed ring closure of an intermediate Schiff base (3). Mechanism The reaction mechanism proceeds through three major steps. The first is the protonation of the oxygen on the carbonyl in the β-diketone, which then undergoes a nucleophilic addition reaction with the aniline. An intramolecular proton transfer is followed by an E2 mechanism, which causes a molecule of water to leave. Deprotonation at the nitrogen atom generates a Schiff base, which tautomerizes to form an enamine that is protonated by the acid catalyst, commonly concentrated sulfuric acid (H2SO4). The second major step, which is also the rate-determining step, is the annulation of the molecule. Immediately following the annulation, a proton transfer eliminates the positive formal charge on the nitrogen atom. The alcohol is then protonated, followed by dehydration of the molecule, giving the final substituted quinoline. Regioselectivity The formation of the quinoline product is influenced by the interplay of steric and electronic effects. In one study, Sloop investigated how substituents influence the regioselectivity of the product as well as the rate of reaction during the rate-determining step in a modified Combes pathway that produced trifluoromethylquinolines. Sloop focused specifically on the influence that substituted trifluoromethyl-β-diketones and substituted anilines have on the rate of quinoline formation. One modification to the generic Combes quinoline synthesis was the use of a mixture of polyphosphoric acid (PPA) and various alcohols (Sloop used ethanol in his experiment). The mixture produced a polyphosphoric ester (PPE) catalyst that proved more effective as the dehydrating agent than the concentrated sulfuric acid (H2SO4) commonly used in the Combes quinoline synthesis. Using the modified Combes synthesis, two possible regioisomers were found: 2-CF3- and 4-CF3-quinolines. It was observed that the steric effects of the substituents play a more important role in the electrophilic aromatic annulation step, which is the rate-determining step, than in the initial nucleophilic addition of the aniline to the diketone. It was also observed that increasing the bulk of the R group on the diketone and using methoxy-substituted anilines leads to the formation of 2-CF3-quinolines, whereas if chloro- or fluoroanilines are used, the major product is the 4-CF3 regioisomer. The study concludes that the interplay of steric and electronic effects leads to the preferred formation of 2-CF3-quinolines, which offers guidance on how to steer the Combes quinoline synthesis toward a desired regioisomer.
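As a concrete worked instance of the overall transformation (the standard textbook example, not drawn from the studies above): condensing aniline with pentane-2,4-dione and then closing the ring under acid catalysis gives 2,4-dimethylquinoline, losing one equivalent of water in the Schiff-base formation and a second in the final dehydration:

```latex
\[
\mathrm{C_6H_5NH_2}
\;+\;
\mathrm{CH_3COCH_2COCH_3}
\;\xrightarrow{\;\mathrm{H_2SO_4},\ \Delta\;}\;
\underbrace{\mathrm{C_{11}H_{11}N}}_{\text{2,4-dimethylquinoline}}
\;+\;
2\,\mathrm{H_2O}
\]
```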
Importance of Quinoline Synthesis There are multiple ways to synthesize quinoline, one of which is the Combes quinoline synthesis. The synthesis of quinoline derivatives has been prevalent in biomedical studies due to the efficiency of the synthetic methods and the relatively low cost of producing these compounds, which can also be made at large scale. Quinoline is an important heterocyclic derivative that serves as a building block for many pharmacologically active synthetic compounds. Quinoline and its derivatives are commonly used in antimalarial drugs, fungicides, antibiotics, dyes, and flavoring agents. They also have important roles in other biological compounds that are involved in cardiovascular, anticancer, and anti-inflammatory activities. Additionally, researchers such as Luo Zai-gang et al. have examined the synthesis and use of quinoline derivatives as HIV-1 integrase inhibitors, including how substituent placement on the quinoline derivatives affects their anti-HIV inhibitory activity. See also Conrad-Limpach reaction Doebner reaction Doebner-Miller reaction Skraup synthesis References Further reading Bergstrom, F.W. and Franklin, E.C. Hexaacylic Compounds: Pyridine, Quinoline, and Isoquinoline In Heterocyclic Nitrogen Compounds. California: Department of Chemistry, Stanford University, 1944, 156. Carbon-carbon bond forming reactions Condensation reactions Quinoline forming reactions Name reactions
Combes quinoline synthesis
[ "Chemistry" ]
1,122
[ "Name reactions", "Condensation reactions", "Carbon-carbon bond forming reactions", "Organic reactions" ]
7,391,204
https://en.wikipedia.org/wiki/Transport%20hub
A transport hub is a place where passengers and cargo are exchanged between vehicles and/or between transport modes. Public transport hubs include railway stations, rapid transit stations, bus stops, tram stops, airports, and ferry slips. Freight hubs include classification yards, airports, seaports, and truck terminals, or combinations of these. For private transport by car, the parking lot functions as a unimodal hub. History Historically, an interchange service in the scheduled passenger air transport industry involved a "through plane" flight operated by two or more airlines, in which a single aircraft was used, with the individual airlines operating it with their own flight crews on their respective portions of a direct, no-change-of-plane, multi-stop flight. In the U.S., a number of air carriers including Alaska Airlines, American Airlines, Braniff International Airways, Continental Airlines, Delta Air Lines, Eastern Airlines, Frontier Airlines (1950-1986), Hughes Airwest, National Airlines (1934-1980), Pan Am, Trans World Airlines (TWA), United Airlines and Western Airlines previously operated such cooperative "through plane" interchange flights on domestic and/or international services, with these schedules appearing in their respective system timetables. Delta Air Lines pioneered the hub-and-spoke system for aviation in 1955 from its hub in Atlanta, Georgia, United States, in an effort to compete with Eastern Air Lines. FedEx adopted the hub-and-spoke model for overnight package delivery during the 1970s. When the United States airline industry was deregulated in 1978, Delta's hub-and-spoke paradigm was adopted by several airlines. Many airlines around the world operate hub-and-spoke systems facilitating passenger connections between their respective flights. Public transport Intermodal passenger transport hubs in public transport include bus stations, railway stations and metro stations, while a major transport hub, often multimodal (bus and rail), may be referred to as a transport centre or, in American English, as a transit center. Sections of city streets that are devoted to functioning as transit hubs are referred to as transit malls. In cities with a central station, that station often also functions as a transport hub in addition to being a railway station. Journey planning involving transport hubs is more complicated than direct trips, as journeys will typically require a transfer at the hub. Modern electronic journey planners for public transport have a digital representation of both the stops and transport hubs in a network, allowing them to calculate journeys that include transfers at hubs. Airports Airports have a twofold hub function. First, they concentrate passenger traffic in one place for onward transportation. This makes it important for airports to be connected to the surrounding transport infrastructure, including roads, bus services, and railway and rapid transit systems. Secondly, some airports function as intra-modal hubs for the airlines, or airline hubs. This is a common strategy among network airlines, which fly from only a limited number of airports and will usually make their customers change planes at one of their hubs if they want to travel between two cities the airline does not fly directly between; the route-count arithmetic sketched below shows why this is economical. Airlines have extended the hub-and-spoke model in various ways. One method is to create additional hubs on a regional basis, and to create major routes between the hubs. This reduces the need to travel long distances between nodes that are close together.
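A back-of-the-envelope sketch of that route-count arithmetic (the city counts are hypothetical, chosen only for illustration): serving n cities point-to-point needs n(n-1)/2 routes, while a hub-and-spoke network needs roughly one route per spoke city plus the inter-hub links.

```python
def point_to_point_routes(n_cities: int) -> int:
    # One direct route for every unordered pair of cities: n*(n-1)/2.
    return n_cities * (n_cities - 1) // 2

def hub_and_spoke_routes(n_cities: int, n_hubs: int = 1) -> int:
    # Simplification: each non-hub city links to exactly one hub,
    # and the hubs are fully interconnected with each other.
    spokes = n_cities - n_hubs
    inter_hub = n_hubs * (n_hubs - 1) // 2
    return spokes + inter_hub

for n in (10, 50):
    print(f"{n} cities: point-to-point {point_to_point_routes(n)} routes, "
          f"1 hub {hub_and_spoke_routes(n)}, 3 hubs {hub_and_spoke_routes(n, 3)}")
# 10 cities: 45 direct routes vs 9 (one hub) or 10 (three hubs)
# 50 cities: 1225 direct routes vs 49 (one hub) or 50 (three hubs)
```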
Another method is to use focus cities to implement point-to-point service for high traffic routes, bypassing the hub entirely. Freight There are usually three kinds of freight hubs: sea-road, sea-rail, and road-rail, though they can also be sea-road-rail. With the growth of containerization, intermodal freight transport has become more efficient, often making multiple legs cheaper than through services—increasing the use of hubs. See also Central station Infrastructure security Intermodal journey planner Junction (traffic) Layover Spoke-hub distribution paradigm Transit desert Transit mall References Hub Transit centers
Transport hub
[ "Physics" ]
795
[ "Physical systems", "Transport", "Transportation geography" ]
7,391,406
https://en.wikipedia.org/wiki/C/2006%20M4%20%28SWAN%29
C/2006 M4 is one of several SWAN comets; the others are C/2002 O6, C/2004 H6, C/2004 V13, C/2005 P3, P/2005 T4, C/2009 F6, C/2011 Q4 and C/2012 E2. Comet C/2006 M4 (SWAN) is a non-periodic comet discovered in late June 2006 by Robert D. Matson of Irvine, California and Michael Mattiazzo of Adelaide, South Australia in publicly available images of the Solar and Heliospheric Observatory (SOHO). These images were captured by the Solar Wind ANisotropies (SWAN) Lyman-alpha all-sky camera on board SOHO. The comet was officially announced after a ground-based confirmation by Robert McNaught (Siding Spring Survey) on July 12. Although perihelion was on September 28, 2006, the comet flared dramatically from seventh magnitude to fourth magnitude on October 24, 2006, becoming visible to the naked eye. Comet C/2006 M4 is on a hyperbolic trajectory (with an osculating eccentricity larger than 1) during its passage through the inner Solar System. After leaving the influence of the planets, the eccentricity will drop below 1 and it will remain bound to the Solar System as an Oort cloud comet. Given the extreme orbital eccentricity of this object, different epochs can generate quite different heliocentric unperturbed two-body best-fit solutions to the aphelion distance (maximum distance) of this object. For objects at such high eccentricity, barycentric coordinates are more stable than heliocentric coordinates. Using JPL Horizons, the barycentric orbital elements for epoch 2013-May-14 generate a semi-major axis of about 1300 AU and a period of about 47,000 years (consistent with Kepler's third law: with a in AU and P in years, P ≈ a^(3/2) = 1300^1.5 ≈ 47,000). References External links Orbital simulation from JPL (Java) / Horizons Ephemeris C/2006 M4 (SWAN) @ Seiichi Yoshida (September 21, 2007) C/2006 M4 (SWAN) @ Gary W. Kronk's Cometography Non-periodic comets 20060620
C/2006 M4 (SWAN)
[ "Astronomy" ]
445
[ "Astronomy stubs", "Comet stubs" ]
7,391,667
https://en.wikipedia.org/wiki/Census%20of%20Marine%20Life
The Census of Marine Life was a 10-year, US$650 million scientific initiative involving a global network of researchers in more than 80 nations, engaged to assess and explain the diversity, distribution, and abundance of life in the oceans. The world's first comprehensive Census of Marine Life — past, present, and future — was released in 2010 in London. Initially supported by funding from the Alfred P. Sloan Foundation, the project generated many times that initial investment in additional support and substantially increased the baselines of knowledge in often underexplored ocean realms. It also engaged over 2,700 researchers, many for the first time, in a global collaborative community united in a common goal, and has been described as "one of the largest scientific collaborations ever conducted". Project history According to Jesse Ausubel, Senior Research Associate of the Program for the Human Environment of Rockefeller University and science advisor to the Alfred P. Sloan Foundation, the idea for a "Census of Marine Life" originated in conversations between himself and Dr. J. Frederick Grassle, an oceanographer and benthic ecology professor at Rutgers University, in 1996. Grassle had been urged to talk with Ausubel by former colleagues at the Woods Hole Oceanographic Institution and was at that time unaware that Ausubel was also a program manager at the Alfred P. Sloan Foundation, funder of a number of other large-scale "public good" science-based projects such as the Sloan Digital Sky Survey. Ausubel was instrumental in persuading the Foundation to fund a series of "feasibility workshops" over the period 1997-1998 into how the project might be conducted; one result of these workshops was the broadening of the initial concept from a "Census of the Fishes" into a comprehensive "Census of Marine Life". Results from these workshops, plus associated invited contributions, formed the basis of a special issue of Oceanography magazine in 1999; later that year, a workshop in Washington, D.C. addressed the formation of an Ocean Biogeographic Information System (OBIS) which would serve to collate existing knowledge about the distribution of organisms in the ocean and form the information management component of the Census. The Census began in a formal sense with the announcement in May 2000 of eight grants totaling about US$4 million to create OBIS, as reported in Science magazine, 2 June. Meanwhile, an International Scientific Steering Committee was formed in 1999, which by 2001 envisaged "about half a dozen pilot [field] programs" for the period 2002-2004 which, along with OBIS and another project called "History of Marine Animal Populations" (HMAP), would provide the initial activities of the Census, to be followed by an additional series of field programs in 2005-2007, culminating in an analysis and integration phase in 2008-2010. During the operation of the Census, an additional non-field project was added, the Future of Marine Animal Populations (FMAP), which concentrated on forecasting the future of life in the oceans using modeling and simulation tools. As a general method of working, project proposals would be debated within the Scientific Steering Committee and, if recommended for funding, a formal submission would be made to the Sloan Foundation for funding to support the Principal Investigators (PIs) and a Project Coordinator, meetings of project participants, and additional Synthesis and Education and Outreach activities.
Since Sloan Foundation approval was dependent on promises of contributions from additional sources, and projects were encouraged to bring additional resources on board during their operation, the Foundation funds committed were effectively leveraged many times over to provide a much more substantial program than would otherwise have been possible. As core infrastructure components, the Foundation also supported the Census' International Scientific Steering Committee and Secretariat, the U.S. National Committee, and an Education and Outreach Network to lift the project's visibility and engage other nations and organizations. The Census was ultimately estimated to have cost US$650 million, of which the Sloan Foundation contributed US$75 million, with the remainder supplied by a large number of participating institutions, countries, and national and international organizations in the form of both direct and in-kind contributions. In a retrospective review in 2011, David Penman and co-authors wrote: Census program The Census consisted of three major component themes organized around the questions: What has lived in the oceans? What does live in the oceans? What will live in the oceans? The largest component of the Census involved investigating what currently lives in the world's oceans through 14 field projects. Each sampled the biota in one of six realms of the global oceans using a range of technologies. These projects were as follows: Arctic Ocean: ArcOD (Arctic Ocean Diversity) Antarctic Ocean: CAML (Census of Antarctic Marine Life) Mid-Ocean Ridges: MAR-ECO (Mid-Atlantic Ridge Ecosystem Project) Vents and Seeps: ChEss (Biogeography of Deep-Water Chemosynthetic Ecosystems) Abyssal Plains: CeDAMar (Census of Diversity of Abyssal Marine Life) Seamounts: CenSeam (Global Census of Marine Life on Seamounts) Continental Margins: COMARGE (Continental Margin Ecosystems) Continental Shelves: POST (Pacific Ocean Shelf Tracking Project) Near Shore: NaGISA (Natural Geography in Shore Areas) Coral Reefs: CReefs (Census of Coral Reefs) Regional Ecosystems: GoMA (Gulf of Maine Program) Microbes: ICoMM (International Census of Marine Microbes) Zooplankton: CMarZ (Census of Marine Zooplankton) Top Predators: TOPP (Tagging of Pacific Predators) These field projects were complemented by the three non-field Census projects, namely HMAP, FMAP and OBIS. A series of National and Regional Implementation Committees (NRICs) was also established to advance the involvement of particular countries and regions in Census activities. Towards the end of the project, additional teams were created for education and outreach, and mapping and visualization products, while a "synthesis" group coordinated the final outcomes (publications, etc.). Outcomes During its lifespan, the Census involved some 2,700 scientists from more than 80 countries who spent 9,000 days at sea participating in more than 540 Census-badged expeditions, as well as uncounted nearshore sampling events. In addition to many thousands of records of previously known species, Census scientists found more than 6,000 marine species potentially new to science and had completed formal descriptions of 1,200 of them by 2010.
Census scientists visited many parts of the global ocean to learn more about species ranging in size from the blue whale to minute zooplankton and microbes (bacteria and viruses); sampled from the world's coldest regions to the warm tropics, from deep-sea hydrothermal vents to coastal ecosystems; tracked the movements of fish and interrogated historical records to learn what the ocean used to be like before the influence of humans; and employed forecasting methods to predict what may happen to ocean life in the future. One of the largest scientific collaborations ever conducted, by 2011 the Census had produced over 3,100 scientific papers and many thousands of other information products, with over 30 million species distribution records freely available via OBIS. As well as its tangible scientific legacy, the Census was instrumental in building a global community of researchers, many of whom had never collaborated before they were brought together under the auspices of the Census, and a new approach to collaborative research. As Ian Poiner, outgoing chair of the Census, has said, "The Census changed our views on how things could be done. We shared our problems and we shared our solutions." In their 2011 review of the Census commissioned by the Alfred P. Sloan Foundation, David Penman and co-authors wrote: "[Prior to the Census there was] A fragmented research community: Marine biodiversity researchers had few active coordinated national and international research programs and taxonomic research in particular was underfunded and scattered in disparate organizations... [there was] No culture of collaboration and data sharing: Unlike the oceanographic community, marine biology was characterized by small research projects leading to publications but there was little experience or willingness to openly collaborate and share data... [and in addition there was] No recognized open-access data portal for marine biodiversity data: Unlike the "physical science" oceanographic community, there was no recognized data depository or common standards for sharing marine biodiversity data." As summarizing remarks, Penman et al., writing in 2011, stated: In 2011, the Census Steering Committee received the International Cosmos Prize in recognition of its decade of international ocean research spanning multiple scientific disciplines. Partnerships The Census partnered with the Encyclopedia of Life in creating pages for marine species, and supplied marine material for DNA barcoding in the Barcode of Life project. Google and the Census of Marine Life partnered on Google Earth 5.0. Ocean in Google Earth contains a layer devoted to the Census of Marine Life that allows users to follow scientists from the Census on expeditions and see marine life and features found during the Census. A partnership with the French film company Galatée Films resulted in the production of the film Oceans, released in 2009, featuring footage of over 200 species at more than 50 global locations. See also Notes References Bibliography Ausubel, Jesse H., Crist, Darlene Trew & Waggoner, Paul E. (eds). 2010. First Census of Marine Life 2010: Highlights of a Decade of Discovery. Census of Marine Life. . Available at http://www.coml.org/pressreleases/census2010/PDF/Highlights-2010-Report-Low-Res.pdf Snelgrove, Paul V. R. 2010. Discoveries of the Census of Marine Life: Making Ocean Life Count. Cambridge University Press, 270 pp. (paperback), 9781107000131 (hardback). Penman, David, Pearce, Andrew and Morton, Missy. 2011.
The Census of Marine Life: Review of Lessons Learned. Report to the Alfred P. Sloan Foundation, New York, June 2011. Landcare Research, New Zealand, Contract Report: LC 271. Available at https://www.landcareresearch.co.nz/uploads/public/researchpubs/MarineLifeCensusReview.pdf Further reading McIntyre, Alasdair D. (editor). 2010. Life in the World’s Oceans: Diversity, Distribution, and Abundance. Blackwell Publishing Ltd., 384 pp. - A summary of findings and discoveries by the 17 Census projects Publisher's information Knowlton, Nancy. 2010. Citizens of the Sea: Wondrous Creatures from the Census of Marine Life. National Geographic, 216 pp. - Portraits of about 100 species Publisher's information External links Census of Marine Life home page Paul Snelgrove: A census of the ocean TED, 2010. Census of Marine Life Mapping and Visualization project page PLOS (Public Library of Science) Collections: Census of Marine Life Census of Marine Life: Investigating Marine Life (educational site) Census of Marine Life news releases Marine biology Fisheries databases Biogeography Ecology organizations Zoology Biological censuses
Census of Marine Life
[ "Biology" ]
2,267
[ "Biogeography", "Zoology", "Marine biology" ]
7,391,817
https://en.wikipedia.org/wiki/Participatory%20GIS
Participatory GIS (PGIS) or public participation geographic information system (PPGIS) is a participatory approach to spatial planning and spatial information and communications management. PGIS combines Participatory Learning and Action (PLA) methods with geographic information systems (GIS). PGIS combines a range of geo-spatial information management tools and methods such as sketch maps, participatory 3D modelling (P3DM), aerial photography, satellite imagery, and global positioning system (GPS) data to represent people's spatial knowledge in the form of (virtual or physical) two- or three-dimensional maps used as interactive vehicles for spatial learning, discussion, information exchange, analysis, decision making and advocacy. Participatory GIS implies making geographic technologies available to disadvantaged groups in society in order to enhance their capacity for generating, managing, analysing and communicating spatial information. PGIS practice is geared towards community empowerment through measured, demand-driven, user-friendly and integrated applications of geo-spatial technologies. GIS-based maps and spatial analysis become major conduits in the process. A good PGIS practice is embedded in long-lasting spatial decision-making processes, is flexible, adapts to different socio-cultural and bio-physical environments, depends on multidisciplinary facilitation and skills, and builds essentially on visual language. The practice integrates several tools and methods while often relying on the combination of 'expert' skills with socially differentiated local knowledge. It promotes interactive participation of stakeholders in generating and managing spatial information, and it uses information about specific landscapes to facilitate broadly based decision-making processes that support effective communication and community advocacy. If appropriately utilized, the practice could exert profound impacts on community empowerment, innovation and social change. More importantly, by placing control over access and use of culturally sensitive spatial information in the hands of those who generated it, PGIS practice could protect traditional knowledge and wisdom from external exploitation. PPGIS is meant to bring the academic practices of GIS and mapping to the local level in order to promote knowledge production by local and non-governmental groups. The idea behind PPGIS is empowerment and inclusion of marginalized populations, who have little voice in the public arena, through geographic technology education and participation. PPGIS uses and produces digital maps, satellite imagery, sketch maps, and many other spatial and visual tools to change geographic involvement and awareness on a local level. The term was coined in 1996 at the meetings of the National Center for Geographic Information and Analysis (NCGIA). Debate Attendees at the Mapping for Change International Conference on Participatory Spatial Information Management and Communication identified at least three potential implications of PPGIS; it can: (1) enhance capacity in generating, managing, and communicating spatial information; (2) stimulate innovation; and, ultimately, (3) encourage positive social change. This reflects the rather nebulous definition of PPGIS; the Encyclopedia of GIS describes PPGIS as having a definition problem. There is a wide range of applications for PPGIS.
The potential applications range from community and neighborhood planning and development to environmental and natural resource management. Marginalized groups, be they grassroots organizations or indigenous populations, could benefit from GIS technology. Governments, non-government organizations and non-profit groups are a big force behind many programs. The current extent of PPGIS programs in the US has been evaluated by Sawicki and Peterman, who catalog over 60 PPGIS programs in the United States that aid in "public participation in community decision making by providing local-area data to community groups". The organizations providing these programs are mostly universities, local chambers of commerce, and non-profit foundations. In general, neighborhood empowerment groups can form and gain access to information that is normally very easy for official government and planning offices to obtain; this is far easier than individuals in lower-income neighborhoods working by themselves. There have been several projects where university students helped implement GIS in neighborhoods and communities. Access to information is seen as the doorway to more effective government for everybody and to community empowerment. In a case study of a group in Milwaukee, residents of an inner-city neighborhood became active participants in building a community information system, learning to access public information and to create and analyze new databases derived from their own surveys, all with the purpose of making these residents useful actors in city management and in the formation of public policy. In many cases there are providers of data for community groups, but the groups may not know that such entities exist, so getting the word out would be beneficial. Some of the spatial data that the neighborhood wanted was information on abandoned or boarded-up buildings and homes, vacant lots, and properties that contained garbage, rubbish and debris that contributed to health and safety issues in the area. They also appreciated being able to find landlords that were not keeping up their properties. The university team and the community were able to build databases and make maps that would help them find these areas and perform the spatial analysis that they needed. Community members learned how to use the computer resources, ArcView 1.0, and build a theme or land use map of the surrounding area. They were able to perform spatial queries and analyze neighborhood problems. Some of these problems included finding absentee landlords and finding code violations for the buildings on the maps. Approaches There are two approaches to PPGIS use and application. These two perspectives, top-down and bottom-up, constitute the currently debated schism in PPGIS. Top-down According to Sieber (2006), PPGIS was first envisioned as a means of mapping individuals by many social and economic demographic factors in order to analyze the spatial differences in access to social services. She refers to this kind of PPGIS as top-down, in that it is less hands-on for the public but theoretically serves the public by making adjustments for deficiencies and improvements in public management. Bottom-up A current trend in academic involvement with PPGIS is researching existing programs, or starting programs, in order to collect data on the effectiveness of PPGIS.
Elwood (2006), in The Professional Geographer, discusses in depth the "everyday inclusions, exclusions, and contradictions of Participatory GIS research". The research is being conducted in order to evaluate whether PPGIS is involving the public equally. In contrast to Sieber's top-down PPGIS, this counter method is aptly referred to as bottom-up PPGIS. Its purpose is to work with the public to let them learn the technologies and then produce their own GIS. Public participation GIS is defined by Sieber as the use of geographic information systems to broaden public involvement in policymaking, as well as the value of GIS for promoting the goals of nongovernmental organizations, grassroots groups and community-based organizations. It would seem on the surface that PPGIS in this sense would benefit those in the community or area being represented, but in truth only certain groups or individuals will be able to obtain the technology and use it. Is PPGIS becoming more available to the underprivileged sector of the community? The questions of who benefits, and whether this harms a community or group of individuals, should always be asked. The local, participatory management of urban neighborhoods usually follows on from 'claiming the territory', and has to be made compatible with national or local authority regulations on administering, managing and planning urban territory. PPGIS applied to participatory community/neighborhood planning has been examined by many others. Specific attention has been given to applications such as housing issues or neighborhood revitalization. Spatial databases along with P-mapping are used to maintain a public-records GIS or community land information systems. These are just a few of the uses of GIS in the community. Examples Public participation in decision-making processes works not only to identify areas of common values or variability, but also as an illustrative and instructional tool. One example of effective dialogue and building trust between the community and decision makers comes from pre-planning for development in the United Kingdom. It involves using GIS and multi-criteria decision analysis (MCDA) to make a decision about wind farm siting. This method hinges upon taking all stakeholder perspectives into account to improve the chances of reaching consensus. This also creates a more transparent process and adds weight to the final decision by building upon traditional methods such as public meetings and hearings, surveys, focus groups, and deliberative processes, enabling participants more insight and more informed opinions on environmental issues. Collaborative processes that consider objective and subjective inputs have the potential to efficiently address some of the conflict between development and nature, as they involve a fuller justification by wind farm developers for location, scale, and design. Spatial tools such as the creation of 3D viewsheds offer participants new ways of assessing visual intrusion and making a more informed decision. Higgs et al. make a very telling statement when analyzing the success of this project – "the only way of accommodating people's landscape concerns is to site wind farms in places that people find more acceptable". This implies that developers recognize the validity of citizens' concerns and are willing to compromise in identifying sites where wind farms will be successful not only financially, but also politically and socially.
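A minimal sketch of the weighted-overlay style of GIS-based MCDA described above, assuming criterion layers already normalized to a common raster grid; the layer names, weights, and grid values are hypothetical illustrations, not taken from the cited studies:

```python
import numpy as np

def weighted_overlay(criteria, weights, constraint_mask=None):
    """Weighted linear combination of normalized criterion rasters.

    criteria        -- dict of name -> 2D array scaled 0..1 (1 = most suitable)
    weights         -- dict of name -> relative weight (normalized internally)
    constraint_mask -- optional boolean array; False cells are excluded outright
    Returns a 2D suitability surface scaled 0..1.
    """
    total = sum(weights.values())
    suitability = sum(criteria[name] * (w / total) for name, w in weights.items())
    if constraint_mask is not None:
        suitability = np.where(constraint_mask, suitability, 0.0)
    return suitability

# Hypothetical 4x4 criterion rasters for wind farm siting
rng = np.random.default_rng(seed=1)
criteria = {
    "wind_resource": rng.random((4, 4)),      # higher = stronger wind
    "grid_proximity": rng.random((4, 4)),     # higher = closer to the grid
    "low_visual_impact": rng.random((4, 4)),  # higher = less visible from homes
}
# Weights negotiated with stakeholders, e.g. residents emphasizing visual impact
weights = {"wind_resource": 0.5, "grid_proximity": 0.2, "low_visual_impact": 0.3}
buildable = np.ones((4, 4), dtype=bool)  # hard constraints, e.g. protected areas
print(weighted_overlay(criteria, weights, buildable).round(2))
```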
Such collaborative processes create greater accountability and facilitate the incorporation of stakeholder values to resolve differences and gain public acceptance for vital development projects. In another planning example, Simao et al. analyzed how to create sustainable development options with widespread community support. They determined that stakeholders need to learn the likely outcomes that result from stated preferences, which can be supported through enhanced access to information and incentives to increase public participation. Through a multi-criteria spatial decision support system, stakeholders were able to voice concerns and work on a compromise solution so that the final outcome was accepted by the majority when siting wind farms. This differs from the work of Higgs et al. in that the focus was on allowing users to learn from the collaborative process, interactively and iteratively, about the nature of the problem and their own preferences for the desirable characteristics of a solution. This stimulated the sharing of opinions and discussion of the interests behind preferences. After understanding the problem more fully, participants could discuss alternative solutions and interact with other participants to come to a compromise solution. Similar work has been done to incorporate public participation in spatial planning for transportation system development, and this method of two-way benefits is even beginning to move towards web-based mapping services to further simplify and extend the process into the community. See also Collaborative mapping Counter-mapping Geodesign Geographic information system Neogeography OpenStreetMap Public participation Traditional knowledge GIS Volunteered geographic information Web mapping References Other Sources Public Participation and GIS: Annotated Bibliography Community Mapping, PGIS, PPGIS and P3DM Virtual Library Further reading Beever, L. B. 2002. Addressing Environmental Justice (EJ) through Community Impact Assessment (CIA). Proceedings of the 8th TRB Conference on the Application of Transportation Planning Methods, Corpus Christi, TX, 22–26 April 2001, ed. R. Donnelly and G. Bennett, 388–98. Washington, DC: Transportation Research Board. Chambers, K., Corbett, J., Keller, P., Wood, C. 2004. Indigenous Knowledge, Mapping, and GIS: A Diffusion of Innovation Perspective. Cartographica 39(3). Corbett, J. and Keller, P. 2006. An analytical framework to examine empowerment associated with participatory geographic information systems (PGIS). Cartographica 40(4): 91–102. Elwood, Sarah. 2006. Critical Issues in Participatory GIS: Deconstructions, Reconstructions, and New Research Directions. Transactions in GIS 10:5, 693–708. Hoicka, D. 2002. Connecting the dots. Journal of Housing and Community Development 59 (6): 35–38. Kahila, M., Kyttä, M. 2009. SoftGIS as a Bridge-Builder in Collaborative Urban Planning. In: Geertman, S., Stillwell, J. (eds) Planning Support Systems Best Practice and New Methods. The GeoJournal Library, vol 95. Springer, Dordrecht. https://doi.org/10.1007/978-1-4020-8952-7_19 Kahila-Tani, M., Broberg, A., Kyttä M. & Tyger T. (2016) Let the Citizens Map—Public Participation GIS as a Planning Support System in the Helsinki Master Plan Process, Planning Practice & Research, 31:2, 195-214, DOI: 10.1080/02697459.2015.1104203 Kyem, P. 2004. Of Intractable Conflicts and Participatory GIS Applications; The Search for Consensus Amidst Competing Claims and Institutional Demands. Annals of the Association of American Geographers 94(1): 37–57. Kyem, P. 2001/2004.
Power, participation and inflexible institutions: An examination of the challenges to community empowerment in participatory GIS applications. Cartographica 38(3/4): 5–17. McCall, Michael K., and Peter A. Minang. 2005. Assessing Participatory GIS for Community-Based Natural Resource Management: Claiming Community Forests in Cameroon. Geographical Journal 171.4 : 340-358. Plescia, M., S. Koontz, and S. Laurent. 2001. Community assessment in a vertically integrated health care system. American Journal of Public Health 91 (5): 811–14. Rambaldi G., Kwaku Kyem A. P.; Mbile P.; McCall M. and Weiner D. 2006. Participatory Spatial Information Management and Communication in Developing Countries. EJISDC 25, 1, 1–9 . Rambaldi G, Chambers R., McCall M, And Fox J. 2006. Practical ethics for PGIS practitioners, facilitators, technology intermediaries and researchers. PLA 54:106–113, IIED, London, UK External links Networks Open Forum on Participatory Geographic Information Systems and Technologies - a global network of PGIS/PPGIS practitioners with Spanish, Portuguese and French-speaking chapters. The Aboriginal Mapping Network supports aboriginal and indigenous peoples facing issues such as land claims, treaty negotiations and resource development. Organizations Integrated Approaches to Participatory Development (IAPAD) - Provides information and case studies on Participatory 3-Dimensional Modelling (P3DM) practice. Village Earth - Provides facilitation, consultation and training in for community-based mapping initiatives including mapping of indigenous territories, community census projects, community/government interactions. International Institute for Sustainable Development - Provides online training in community-based mapping. Native Lands works to protect biological and cultural diversity in Latin America, with a focus on Central America and southern Mexico. The Philippine Association for Inter-Cultural Development (PAFID) uses Participatory 3D Modelling, GPS and GIS applications to support Indigenous Cultural Communities throughout the Philippines in claiming for their rights over ancestral domains. The Borneo Project partners with communities and local organizations that document and map ancestral land claims. ERMIS Africa builds capacities among local communities and development practitioners in using Participatory Geo-spatial Information Management Technologies. The Technical Centre for Agricultural and Rural Cooperation ACP-EU (CTA) supports the dissemination of good PGIS practice in ACP countries. Others Community Mapping, PGIS, PPGIS and P3DM Virtual Library Grassroots Mapping National Center for Geographic Information and Analysis (NCGIA) Open Forum on Participatory Geographic Information Systems and Technologies Ecosensus: Participatory Resource Management and Decision Making in the Northern Rupununi River Catchment in Guyana Public Participation and GIS: Annotated Bibliography Ushahidi Maptionnaire Citizen Engagement Platform based on PPGIS technology Participatory democracy Participatory budgeting Geographic information systems Human geography Collaborative mapping Neogeography Applications of geographic information systems Urban planning Group processes
Participatory GIS
[ "Technology", "Engineering", "Environmental_science" ]
3,310
[ "Geographic information systems", "Information systems", "Urban planning", "Environmental social science", "Human geography", "Architecture" ]
7,391,938
https://en.wikipedia.org/wiki/Boletus%20aereus
Boletus aereus, commonly known as the dark cep, bronze bolete, or queen bolete, is a highly prized and much sought-after edible mushroom in the family Boletaceae. The bolete is widely consumed in Spain (Basque Country and Navarre), France, Italy, Greece, and generally throughout the Mediterranean. Described in 1789 by French mycologist Pierre Bulliard, it is closely related to several other European boletes, including B. reticulatus, B. pinophilus, and the popular B. edulis. Some populations in North Africa have in the past been classified as a separate species, B. mamorensis, but have been shown to be phylogenetically conspecific to B. aereus and this taxon is now regarded as a synonym. The fungus predominantly grows in habitats with broad-leaved trees and shrubs, forming symbiotic ectomycorrhizal associations in which the underground roots of these plants are enveloped with sheaths of fungal tissue (hyphae). The cork oak (Quercus suber) is a key host. The fungus produces spore-bearing fruit bodies above ground in summer and autumn. The fruit body has a large dark brown cap, which can reach in diameter. Like other boletes, B. aereus has tubes extending downward from the underside of the cap, rather than gills; spores escape at maturity through the tube openings, or pores. The pore surface of the fruit body is whitish when young, but ages to a greenish-yellow. The squat brown stipe, or stem, is up to 15 cm (6 in) tall and thick and partially covered with a raised network pattern, or reticulation. Taxonomy and phylogeny French mycologist Pierre Bulliard described Boletus aereus in 1789. The species epithet is the Latin adjective aerěus, meaning "made with bronze or copper". His countryman Lucien Quélet transferred the species to the now-obsolete genus Dictyopus in 1886, which resulted in the synonym Dictyopus aereus, while René Maire reclassified it as a subspecies of B. edulis in 1937. In 1940, Manuel Cabral de Rezende-Pinto published the variety B. aereus var. squarrosus from collections made in Brazil, but this taxon is not considered to be taxonomically distinct. In works published before 1987, the binomial name was written fully as Boletus aereus Fr., as the description by Bulliard had been sanctioned (i.e., treated as if conserved against earlier homonyms and competing synonyms) in 1821 by the "father of mycology", Swedish naturalist Elias Magnus Fries. The starting date for all the mycota had been set by general agreement as 1 January 1821, the date of Fries' work. The 1987 edition of the International Code of Botanical Nomenclature revised the rules on the starting date and primary work for names of fungi; names can now be considered valid as far back as 1 May 1753, hence predating publication of Bulliard's work. Moroccan collections under the cork oak (Quercus suber) that were initially regarded as B. aereus, were described as a separate species—Boletus mamorensis—in 1978, on the basis of a rufous chestnut cap and a rooting stipe, or stem, with a reticulation often limited to the top (apex). However, molecular phylogenetic studies by Bryn Dentinger and colleagues in 2010, placed these collections very close to B. aereus, suggesting they are more likely an ecological variant or phenotype, rather than a distinct species. More recent phylogenetic studies by M. Loizides and colleagues in 2019, have confirmed that B. mamorensis is a later synonym of B. aereus, since collections identified as the two taxa could not be genetically separated and nested in the same clade. 
American mycologist Harry Thiers reported Boletus aereus from California in 1975; a taxonomic revision of western North American porcini boletes in 2008 formally established them as a separate species, Boletus regineus. These differ from B. aereus by nature of their more gelatinous cap skin (pileipellis), and belong in a different porcini lineage. Boletus aereus is classified in Boletus section Boletus, alongside close relatives such as B. reticulatus, B. edulis, and B. pinophilus. A genetic study of the four European species found that B. aereus was sister to B. reticulatus. More extensive testing of worldwide taxa revealed that B. aereus was sister to a lineage that had split into B. reticulatus and two lineages that had been classified as B. edulis from southern China and Korea/northern China respectively. Molecular analysis suggests that the B. aereus/mamorensis and B. reticulatus/Chinese B. "edulis" lineages diverged around 6 to 7 million years ago. Common names Bulliard gave Boletus aereus the common name of le bolet bronzé (the bronze bolete) in 1789, noting that it was called the cep noir (black cep) in other countries. It is commonly known as ontto beltza (black fungus) in Basque, porcino nero (black piglet) in Italian, and cèpe bronzé in French. In Greek it is known as vasilikó (the royal one) or kalogeraki (little monk). The English common name is dark cep, while the British Mycological Society also approved the name bronze bolete. Description The cap is hemispherical to convex, reaching in diameter, although specimens of have been found in some cases. Slightly velvety and lobed or dented, it is dark brown, greyish-brown, violet brown, or purple brown, often with copper, golden, or olivaceous patches. The stipe is high by wide, usually shorter than the cap diameter, initially barrel shaped but gradually becoming club shaped and tapering at the base. The stipe is pale brown, chestnut, or reddish brown in colour, covered in a brown or concolorous reticulation. As with other boletes, there are tubes rather than gills on the underside of the cap. The tube openings—known as pores—are small and rounded. Whitish or greyish-white when young, they slowly become yellowish or greenish yellow at maturity, and can turn wine coloured with bruising. The tubes themselves are initially white, later becoming yellowish or olivaceous. The thick flesh is white, exudes a robust and pleasant smell reminiscent of hazelnuts, and has a mild sweet taste. The spores are spindle shaped and measure 10.5–19 by 4–7 μm. The pileipellis is a trichodermium of interwoven septate hyphae, with long cylindrical cells. Similar species Boletus reticulatus is very similar to B. aereus, also occurring during the summer months under broad-leaved trees. It has a paler, often cracked cap and a usually paler stipe covered in a more elaborate and pronounced whitish reticulation, often extending to the stipe base. Boletus pinophilus occurs under conifers, mostly Pinus sylvestris, and has a reddish-brown cap. Microscopically, it can be separated by the more inflated, club- to spindle-shaped hyphal ends of the pileipellis. Boletus edulis occurs later in the season during lower temperatures, mostly under Picea. It has a paler viscid cap, and a paler stipe with an acute white reticulation. Microscopically, it has gelatinised hyphal ends in the pileipellis. Distribution and habitat The distribution and abundance of Boletus aereus vary greatly. 
Found mainly in central and southern Europe as well as north Africa, this species is rare in colder climates such as England. It is classified as a vulnerable species in the Czech Republic and has been placed on a provisional Red List of endangered species of Montenegro. Nevertheless, the fungus can be locally abundant; it is the most common bolete in the woodlands of Madonie Regional Natural Park in northern Sicily. Boletus aereus has been reported from several other island ecosystems across the Mediterranean, such as Corsica, Cyprus, Lesvos, and Naxos. Mushrooms are mostly found during hot spells in summer and autumn, growing in mycorrhizal association with various broad-leaved trees and sclerophyllous shrubs, especially oak (Quercus), beech (Fagus), chestnut (Castanea), strawberry trees (Arbutus), tree heath (Erica), and rockrose (Cistus), showing a preference for acid soils. Roadsides and parks are common habitats. The cork oak in particular is an important symbiont, and the distribution of B. aereus aligns with the tree across Europe and North Africa. The ectomycorrhizae that B. aereus forms with sweet chestnut (Castanea sativa) and downy oak (Quercus pubescens) have been described in detail. They are characterized by a lack of hyphal clamps, a plectenchymatous mantle (made of parallel-orientated hyphae with little branching or overlap), and rhizomorphs with differentiated hyphae. A 2007 field study on four species of boletes revealed little correlation between the abundance of fruit bodies and the presence of their mycelia below ground, even when soil samples were taken from directly beneath the mushroom; the study concluded that the triggers leading to formation of mycorrhizae and production of the fruit bodies appear to be more complex than previously thought. In the past, the fungus was reported in China. However, recent molecular studies show that Asian porcini appear to belong to different species. Edibility and culinary uses A choice edible species, Boletus aereus is highly appreciated in Southern Europe for its culinary qualities, and is considered by many to be gastronomically superior to Boletus edulis. In the vicinity of Borgotaro in the Province of Parma of northern Italy, the four species Boletus edulis, B. aereus, B. reticulatus (formerly known as B. aestivalis), and B. pinophilus have been recognised for their superior taste and officially termed Fungo di Borgotaro. Here, these mushrooms have been collected and exported commercially for centuries. Throughout Spain, it is one of the wild edible fungi most commonly collected for the table, particularly in Aragon, where it is harvested for sale in markets. When collected, the skin of the cap is left intact, and dirt is brushed off the surface. Pores are left unless old and soft. Boletus aereus is especially suited for drying, a process which enhances its flavour and aroma. Like other boletes, the mushrooms can be dried by being sliced and strung separately on twine, then hung close to the ceiling of a kitchen. Alternatively, the mushrooms can be dried by cleaning with a brush (washing is not recommended), and then placed in a wicker basket or bamboo steamer on top of a boiler or hot water tank. Once dry, they are kept in an airtight jar. They are easily reconstituted by soaking in hot, but not boiling, water for about twenty minutes; the water is infused with the mushroom aroma and can be used as stock in subsequent cooking. 
When dried, a small amount of the mushroom can improve the taste of less flavoursome fungi-based dishes. Nutritional value Based on analyses of fruit bodies collected in Portugal, there are 367 kilocalories per 100 grams of bolete (as dry weight). The macronutrient composition of 100 grams of dried bolete includes 17.9 grams of protein, 72.8 grams of carbohydrates, and 0.4 grams of fat. By weight, fresh fruit bodies are about 92% water. The predominant sugar is trehalose (4.7 grams/100 grams dry weight; all following values assume this mass), with lesser amounts of mannitol (1.3 grams). There are 6 grams of tocopherols, the majority of which is gamma-tocopherol (vitamin E), and 3.7 grams of ascorbic acid. References Basque cuisine Edible fungi aereus Fungi of Europe Fungi of North America Fungi of Africa Fungi of China Fungi described in 1789 Taxa named by Jean Baptiste François Pierre Bulliard Fungus species
Boletus aereus
[ "Biology" ]
2,625
[ "Fungi", "Fungus species" ]
7,391,950
https://en.wikipedia.org/wiki/OLT%20%28mobile%20network%29
OLT (Norwegian for Offentlig Landmobil Telefoni, Public Land Mobile Telephony) was the first land mobile telephone network in Norway. It was established December 1, 1966, and remained in service until it was superseded by NMT in 1990. In 1981, there were 30,000 mobile subscribers, which at the time made this network the largest in the world. The network operated in the 160 MHz VHF band, using frequency modulation (FM) on 160–162 MHz for the mobile unit and 168–170 MHz for the base station. Most mobile sets were semi-duplex, but some of the more expensive units were full duplex. Each subscriber was assigned a five-digit phone number. In 1976, the OLT system was extended to include UHF bands, incorporating MTD, and allowing international roaming within Scandinavian countries. External links Norwegian mobile phone history from Norsk Telemuseum (Norwegian) Mobile radio telephone systems
OLT (mobile network)
[ "Technology" ]
196
[ "Mobile telecommunications", "Mobile radio telephone systems" ]
7,392,359
https://en.wikipedia.org/wiki/Arthur%20J.%20Bond
Arthur J. Bond (1939 – December 30, 2012) was the dean of the School of Engineering and Technology at Alabama A&M University in Alabama, United States, and an activist in the cause of increasing black enrollment and retention in engineering and technology. He was a founding member of the National Society of Black Engineers and part of the team that fought for state funding of engineering at Alabama A&M University. Education Bond came to Purdue University in 1957 to study electrical engineering on a National Merit Scholarship and Purdue's Special Merit Scholarship. He described always having been interested in electrical engineering, and ending up at Purdue by luck, recounting that "My high school principal's son was interested in engineering at Purdue... One day they were going for a visit to campus, and they asked me if I wanted to come." After two years, however, he had to drop out due to a softball injury. After he recovered, he joined the army, "because Vietnam was looming on the horizon," he would later recount. Bond returned to Purdue in 1966 and graduated with a bachelor's degree in electrical engineering (BSEE) in 1968, a master's degree in electrical engineering (MSEE) in 1969, and a Ph.D. in 1974. At the time, Bond was the 42nd African-American to earn a PhD in engineering, and only the 12th to earn it in electrical engineering. Student organizing Bond was a student leader at Purdue during the time when the civil rights movement was in full swing. He would become a founding member of Purdue's Black Cultural Center and a founder of the National Society of Black Engineers. At Purdue, Bond led students to demand that Purdue open up its engineering schools to more blacks and women. Frederick L. Hovde, Purdue's president at the time, was sympathetic to the cause. He appointed Bond to a steering committee, which organized the first national effort to increase minority participation in engineering. Bond remembered: "When you would go to class, you would never see another Black student from the day you entered Purdue until you graduated. So we didn't know what other black student was studying engineering." Responding to students' need for a place where minority students could network and study, Purdue provided black students with a house; Bond and his friends would "move in and decorate it and call it a Black Cultural Center," he later said. When two undergraduate black engineering students approached the dean of engineering to create a Black Society of Engineers, the dean agreed and assigned Bond, then a graduate student, to be the group's advisor. In 1971, working from other engineering associations' constitutions, they wrote the constitution for the original chapter of the NSBE. This group would grow into a national organization that is now the National Society of Black Engineers, which as of 2011 had over 30,000 members. Career Upon receiving his doctorate, Bond became an assistant professor of electrical engineering at Purdue for five years, and then an associate professor at Purdue Calumet. He then went to work in industry for RCA, AlliedSignal, and Bendix. In 1989, Bond joined Tuskegee University as head of its department of electrical engineering, where he helped the Electrical Engineering Department regain full accreditation from the Accrediting Board for Engineering and Technology (ABET). In 1992, Bond joined Alabama A&M as Dean of Engineering and Technology. At the time, the land-grant university was involved in the notorious Knight v. 
Alabama lawsuit, in which the plaintiff class, joined by the U.S. Justice Department, argued that the State of Alabama's system of public university funding was a violation of equal rights. The case resulted in a 1995 decree that ordered Alabama to fund engineering at Alabama A&M. The ruling further ordered that whatever level of engineering program was built up in nine years would constitute the required level of funding by the state. As dean, Bond played a pivotal role in meeting the nine-year challenge. A&M's efforts bore fruit in 1997, when it was able to offer the first engineering courses. In 2000, mechanical and electrical engineering at A&M were accredited, with the effective date made retroactive to 1998. Bond retired from Alabama A&M in 1996. Honors 1968: Bachelor of Science in Electrical Engineering, Purdue University. 1969: Master of Science in Electrical Engineering, Purdue University. 1971: Co-Founder of The Society of Black Engineers (now NSBE). 1974: Assistant Professor, Purdue University. 1979: Associate Professor, Purdue University, Calumet Campus. 1984: Principal Engineer, Bendix Engine Control Systems. 1989: Head of Electrical Engineering Department, Tuskegee University. 1992-96: Dean of Engineering and Technology, Alabama A&M University. 1994: Minorities in Engineering Award (formerly Vincent Bendix Award), American Society for Engineering Education. 1994: NACME Reginald Jones Distinguished Service Award for Minorities in Engineering, for service above the call of duty. 2000: Golden Torch Award for Academic Visionary, National Society of Black Engineers. 2000: Outstanding Electrical and Computer Engineer, Purdue University. 2005: Distinguished Engineering Alumni, Purdue University. 2009: Doctor of Engineering (Honoris Causa), Purdue University. 2010: Alabama A&M University named new Engineering & Technology Building Arthur J. Bond Hall. References External links Purdue Distinguished Engineering Alumni web page about Bond 1939 births Alabama A&M University faculty African-American engineers American electrical engineers Purdue University College of Engineering alumni 21st-century African-American people 20th-century African-American people Bendix Corporation people 2012 deaths Tuskegee University faculty Electrical engineering Electrical engineering academics
Arthur J. Bond
[ "Engineering" ]
1,137
[ "Electrical engineering" ]
7,392,872
https://en.wikipedia.org/wiki/Ensemble%20interpretation
The ensemble interpretation of quantum mechanics considers the quantum state description to apply only to an ensemble of similarly prepared systems, rather than supposing that it exhaustively represents an individual physical system. The advocates of the ensemble interpretation of quantum mechanics claim that it is minimalist, making the fewest physical assumptions about the meaning of the standard mathematical formalism. It proposes to take to the fullest extent the statistical interpretation of Max Born, for which he won the Nobel Prize in Physics in 1954. On the face of it, the ensemble interpretation might appear to contradict the doctrine proposed by Niels Bohr, that the wave function describes an individual system or particle, not an ensemble, though he accepted Born's statistical interpretation of quantum mechanics. It is not quite clear exactly what kind of ensemble Bohr intended to exclude, since he did not describe probability in terms of ensembles. The ensemble interpretation is sometimes, especially by its proponents, called "the statistical interpretation", but it seems perhaps different from Born's statistical interpretation. As is the case for "the" Copenhagen interpretation, "the" ensemble interpretation might not be uniquely defined. In one view, the ensemble interpretation may be defined as that advocated by Leslie E. Ballentine, Professor at Simon Fraser University. His interpretation does not attempt to justify, or otherwise derive, or explain quantum mechanics from any deterministic process, or make any other statement about the real nature of quantum phenomena; it intends simply to interpret the wave function. It does not propose to lead to actual results that differ from orthodox interpretations. It makes the statistical operator primary in reading the wave function, deriving the notion of a pure state from that. History In his 1926 paper introducing the concept of quantum scattering theory, Max Born proposed that "the motion of the particle follows the laws of probability, but the probability itself propagates in accord with causal laws", where the causal laws are Schrödinger's equations. As related in his 1954 Nobel Prize in Physics lecture, Born viewed the statistical character of quantum mechanics as an empirical observation with philosophical implications. Einstein maintained consistently that quantum mechanics supplied only a statistical view. In 1936 he wrote: "the ψ function does not in any way describe a condition which could be that of a single system; it relates rather to many systems, to 'an ensemble of systems' in the sense of statistical mechanics." However, Einstein did not provide a detailed study of the ensemble, ultimately because he considered quantum mechanics itself to be incomplete, primarily because it was only an ensemble theory. Einstein believed quantum mechanics was correct in the same sense that thermodynamics is correct, but that it was insufficient as a means of unifying physics. Also in the years around 1936, Karl Popper published philosophical studies countering the work of Heisenberg and Bohr. Popper considered their work essentially subjectivist, unfalsifiable, and thus unscientific. He held that the quantum state represented statistical assertions which have no predictive power for individual particles. Popper described "propensities" as the correct notion of probability for quantum mechanics. Although several other notable physicists championed the ensemble concept, including John C. Slater, Edwin C. 
Kemble, and Dmitry Blokhintsev, Leslie Ballentine's 1970 paper "The statistical interpretation of quantum mechanics" and his textbook have become the main sources. Ballentine followed up with axiomatic development of propensity theory, analysis of decoherence in the ensemble interpretation, and other papers spanning 40 years. States, systems, and ensembles Perhaps the first expression of an ensemble interpretation was that of Max Born. In a 1968 article, he used the German words 'gleicher Haufen', which are often translated into English, in this context, as 'ensemble' or 'assembly'. The atoms in his assembly were uncoupled, meaning that they were an imaginary set of independent atoms that defines its observable statistical properties. Born did not develop a more detailed specification of ensembles to complete his scattering theory work. Although Einstein described quantum mechanics as clearly an ensemble theory, he did not present a formal definition of an ensemble. Einstein sought a theory of individual entities, which he argued was not quantum mechanics. Ballentine distinguishes his particular ensemble interpretation by calling it the Statistical Interpretation. According to Ballentine, the distinguishing difference between many of the Copenhagen-like interpretations (CI) and the Statistical Interpretation (EI) is the following: CI: A pure state provides a complete description of an individual system, e.g. an electron. EI: A pure state describes the statistical properties of an ensemble of identically prepared systems. Ballentine defines a quantum state as an ensemble of similarly prepared systems. For example, the system may be a single electron; then the ensemble will be "the set of all single electrons subjected to the same state preparation technique." He uses the example of a low-intensity electron beam prepared with a narrow range of momenta. Each prepared electron is a system; the ensemble consists of many such systems. Ballentine emphasizes that the meaning of the "Quantum State" or "State Vector" may be described, essentially, by a one-to-one correspondence to the probability distributions of measurement results, not the individual measurement results themselves. A mixed state is a description only of the probabilities of positions, not a description of actual individual positions. A mixed state is a mixture of probabilities of physical states, not a coherent superposition of physical states. Probability; propensity Quantum observations are inherently statistical. For example, the electrons in a low-intensity double slit experiment arrive at random times and seemingly random places and yet eventually show an interference pattern. The theory of quantum mechanics offers only statistical results. Given that we have prepared a system in a state ρ, the theory predicts a result a as a probability distribution: P(a|ρ). Different approaches to probability can be applied to connect the probability distribution of theory to the observed randomness. Popper, Ballentine, Paul Humphreys, and others point to propensity as the correct interpretation of probability in science. Propensity, a form of causality that is weaker than determinism, is the tendency of a physical system to produce a result. Thus the mathematical statement P(a|ρ) = p means the propensity for event a to occur given the physical scenario ρ is p. The physical scenario is viewed as a weakly causal condition. The weak causation invalidates Bayes' theorem and correlation is no longer symmetric. 
As noted by Paul Humphreys, many physical examples show the lack of reciprocal correlation: for example, the propensity for smokers to get lung cancer does not imply that lung cancer has a propensity to cause smoking. Propensity closely matches the application of quantum theory: single event probability can be predicted by theory but only verified by repeated samples in experiment. Popper explicitly developed propensity theory to eliminate subjectivity in quantum mechanics. Preparative and observing devices as origins of quantum randomness An isolated quantum mechanical system, specified by a wave function, evolves in time in a deterministic way according to the Schrödinger equation that is characteristic of the system. Though the wave function can generate probabilities, no randomness or probability is involved in the temporal evolution of the wave function itself. This is agreed, for example, by Born, Dirac, von Neumann, London & Bauer, Messiah, and Feynman & Hibbs. An isolated system is not subject to observation; in quantum theory, this is because observation is an intervention that violates isolation. The system's initial state is defined by the preparative procedure; this is recognized in the ensemble interpretation, as well as in the Copenhagen approach. The system's state as prepared, however, does not entirely fix all properties of the system. The fixing of properties goes only as far as is physically possible, and is not physically exhaustive; it is, however, physically complete in the sense that no physical procedure can make it more detailed. This is stated clearly by Heisenberg in his 1927 paper. It leaves room for further unspecified properties. For example, if the system is prepared with a definite energy, then the quantum mechanical phase of the wave function is left undetermined by the mode of preparation. The ensemble of prepared systems, in a definite pure state, then consists of a set of individual systems, all having one and the same definite energy, but each having a different quantum mechanical phase, regarded as probabilistically random. The wave function, however, does have a definite phase, and thus specification by a wave function is more detailed than specification by state as prepared. The members of the ensemble are logically distinguishable by their distinct phases, though the phases are not defined by the preparative procedure. The wave function can be multiplied by a complex number of unit magnitude without changing the state as defined by the preparative procedure. The preparative state, with unspecified phase, leaves room for the several members of the ensemble to interact in respectively several various ways with other systems. An example is when an individual system is passed to an observing device so as to interact with it. Individual systems with various phases are scattered in various respective directions in the analyzing part of the observing device, in a probabilistic way. In each such direction, a detector is placed, in order to complete the observation. When the system hits the analyzing part of the observing device, which scatters it, the system ceases to be adequately described by its own wave function in isolation. Instead it interacts with the observing device in ways partly determined by the properties of the observing device. In particular, there is in general no phase coherence between system and observing device. This lack of coherence introduces an element of probabilistic randomness to the system–device interaction. 
It is this randomness that is described by the probability calculated by the Born rule. There are two independent originative random processes, one that of preparative phase, the other that of the phase of the observing device. The random process that is actually observed, however, is neither of those originative ones. It is the phase difference between them, a single derived random process. The Born rule describes that derived random process, the observation of a single member of the preparative ensemble. In the ordinary language of classical or Aristotelian scholarship, the preparative ensemble consists of many specimens of a species. The quantum mechanical technical term 'system' refers to a single specimen, a particular object that may be prepared or observed. Such an object, as is generally so for objects, is in a sense a conceptual abstraction, because, according to the Copenhagen approach, it is defined, not in its own right as an actual entity, but by the two macroscopic devices that should prepare and observe it. The random variability of the prepared specimens does not exhaust the randomness of a detected specimen. Further randomness is injected by the quantum randomness of the observing device. It is this further randomness that makes Bohr emphasize that there is randomness in the observation that is not fully described by the randomness of the preparation. This is what Bohr means when he says that the wave function describes "a single system". He is focusing on the phenomenon as a whole, recognizing that the preparative state leaves the phase unfixed, and therefore does not exhaust the properties of the individual system. The phase of the wave function encodes further detail of the properties of the individual system. The interaction with the observing device reveals that further encoded detail. It seems that this point, emphasized by Bohr, is not explicitly recognized by the ensemble interpretation, and this may be what distinguishes the two interpretations. It seems, however, that this point is not explicitly denied by the ensemble interpretation. Einstein perhaps sometimes seemed to interpret the probabilistic "ensemble" as a preparative ensemble, recognizing that the preparative procedure does not exhaustively fix the properties of the system; therefore he said that the theory is "incomplete". Bohr, however, insisted that the physically important probabilistic "ensemble" was the combined prepared-and-observed one. Bohr expressed this by demanding that an actually observed single fact should be a complete "phenomenon", not a system alone, but always with reference to both the preparing and the observing devices. The Einstein–Podolsky–Rosen criterion of "completeness" is clearly and importantly different from Bohr's. Bohr regarded his concept of "phenomenon" as a major contribution that he offered for quantum theoretical understanding. The decisive randomness comes from both preparation and observation, and may be summarized in a single randomness, that of the phase difference between preparative and observing devices. The distinction between these two devices is an important point of agreement between Copenhagen and ensemble interpretations. Though Ballentine claims that Einstein advocated "the ensemble approach", a detached scholar would not necessarily be convinced by that claim of Ballentine. There is room for confusion about how "the ensemble" might be defined. 
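The difference between a coherent superposition and an ensemble of preparations with uncontrolled phase, which runs through the discussion above and the next section, can be made concrete with a small numerical sketch. This is a generic illustration, not a construction taken from Ballentine or Bohr; the two-component state and the uniform distribution of the relative phase are assumptions of the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def projector(phi):
    """Density matrix of the pure state (|0> + e^{i*phi}|1>)/sqrt(2)."""
    psi = np.array([1.0, np.exp(1j * phi)]) / np.sqrt(2)
    return np.outer(psi, psi.conj())

# A single preparation with a definite (if unknown) relative phase: a pure state.
rho_pure = projector(0.3)

# An ensemble of preparations with random relative phases: averaging the
# projectors washes out the off-diagonal coherences, leaving a mixture.
phases = rng.uniform(0, 2 * np.pi, 20_000)
rho_mixed = np.mean([projector(p) for p in phases], axis=0)

print(np.round(rho_pure, 3))   # off-diagonals of magnitude 0.5 (coherent)
print(np.round(rho_mixed, 3))  # approximately diag(0.5, 0.5) (incoherent)
print("purity:", np.trace(rho_pure @ rho_pure).real,
      np.trace(rho_mixed @ rho_mixed).real)  # 1.0 versus about 0.5
```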
"Each photon interferes only with itself" Niels Bohr famously insisted that the wave function refers to a single individual quantum system. He was expressing the idea that Dirac expressed when he famously wrote: "Each photon then interferes only with itself. Interference between different photons never occurs.". Dirac clarified this by writing: "This, of course, is true only provided the two states that are superposed refer to the same beam of light, i.e. all that is known about the position and momentum of a photon in either of these states must be the same for each." Bohr wanted to emphasize that a superposition is different from a mixture. He seemed to think that those who spoke of a "statistical interpretation" were not taking that into account. To create, by a superposition experiment, a new and different pure state, from an original pure beam, one can put absorbers and phase-shifters into some of the sub-beams, so as to alter the composition of the re-constituted superposition. But one cannot do so by mixing a fragment of the original unsplit beam with component split sub-beams. That is because one photon cannot both go into the unsplit fragment and go into the split component sub-beams. Bohr felt that talk in statistical terms might hide this fact. The physics here is that the effect of the randomness contributed by the observing apparatus depends on whether the detector is in the path of a component sub-beam, or in the path of the single superposed beam. This is not explained by the randomness contributed by the preparative device. Measurement and collapse Bras and kets The ensemble interpretation is notable for its relative de-emphasis on the duality and theoretical symmetry between bras and kets. The approach emphasizes the ket as signifying a physical preparation procedure. There is little or no expression of the dual role of the bra as signifying a physical observational procedure. The bra is mostly regarded as a mere mathematical object, without very much physical significance. It is the absence of the physical interpretation of the bra that allows the ensemble approach to by-pass the notion of "collapse". Instead, the density operator expresses the observational side of the ensemble interpretation. It hardly needs saying that this account could be expressed in a dual way, with bras and kets interchanged, mutatis mutandis. In the ensemble approach, the notion of the pure state is conceptually derived by analysis of the density operator, rather than the density operator being conceived as conceptually synthesized from the notion of the pure state. An attraction of the ensemble interpretation is that it appears to dispense with the metaphysical issues associated with reduction of the state vector, Schrödinger cat states, and other issues related to the concepts of multiple simultaneous states. The ensemble interpretation postulates that the wave function only applies to an ensemble of systems as prepared, but not observed. There is no recognition of the notion that a single specimen system could manifest more than one state at a time, as assumed, for example, by Dirac. Hence, the wave function is not envisaged as being physically required to be "reduced". This can be illustrated by an example: Consider a quantum die. 
If this is expressed in Dirac notation, the "state" of the die can be represented by a "wave" function describing the probability of an outcome, given by |ψ⟩ = (1/√6)(|1⟩ + |2⟩ + |3⟩ + |4⟩ + |5⟩ + |6⟩), where the "+" sign of a probabilistic equation is not an addition operator; it is the standard probabilistic Boolean operator OR. The state vector is inherently defined as a probabilistic mathematical object such that the result of a measurement is one outcome OR another outcome. It is clear that on each throw, only one of the states will be observed, but this is not expressed by a bra. Consequently, there appears to be no requirement for a notion of collapse of the wave function/reduction of the state vector, or for the die to physically exist in the summed state. In the ensemble interpretation, wave function collapse would make as much sense as saying that the number of children a couple produced collapsed to 3 from its average value of 2.4. The state function is not taken to be physically real, or to be a literal summation of states. The wave function is taken to be an abstract statistical function, only applicable to the statistics of repeated preparation procedures. The ket does not directly apply to a single particle detection, but only to the statistics of many. This is why the account does not refer to bras, and mentions only kets. Diffraction The ensemble approach differs significantly from the Copenhagen approach in its view of diffraction. The Copenhagen interpretation of diffraction, especially in the viewpoint of Niels Bohr, puts weight on the doctrine of wave–particle duality. In this view, a particle that is diffracted by a diffractive object, such as for example a crystal, is regarded as really and physically behaving like a wave, split into components, more or less corresponding to the peaks of intensity in the diffraction pattern. Though Dirac does not speak of wave–particle duality, he does speak of "conflict" between wave and particle conceptions. He indeed does describe a particle, before it is detected, as being somehow simultaneously and jointly or partly present in the several beams into which the original beam is diffracted. So does Feynman, who speaks of this as "mysterious". The ensemble approach points out that this seems perhaps reasonable for a wave function that describes a single particle, but hardly makes sense for a wave function that describes a system of several particles. The ensemble approach demystifies this situation along the lines advocated by Alfred Landé, accepting Duane's hypothesis. In this view, the particle really and definitely goes into one or other of the beams, according to a probability given by the wave function appropriately interpreted. There is definite quantal transfer of translative momentum between particle and diffractive object. This is recognized also in Heisenberg's 1930 textbook, though usually not recognized as part of the doctrine of the so-called "Copenhagen interpretation". This gives a clear and utterly non-mysterious physical or direct explanation instead of the debated concept of wave function "collapse". It is presented in terms of quantum mechanics by other present-day writers also, for example, Van Vliet. For those who prefer physical clarity rather than mysterianism, this is an advantage of the ensemble approach, though it is not the sole property of the ensemble approach. With a few exceptions, this demystification is not recognized or emphasized in many textbooks and journal articles. 
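Returning to the quantum die discussed above, a short Monte Carlo sketch (illustrative only; the equal amplitudes are those assumed in the example) makes the ensemble reading concrete: each simulated throw yields exactly one face, and the Born-rule weights |c_i|² appear only in the accumulated frequencies:

```python
import numpy as np

rng = np.random.default_rng(1)

# Equal-amplitude "quantum die": c_i = 1/sqrt(6) for faces 1..6.
amplitudes = np.full(6, 1 / np.sqrt(6))
born_probs = np.abs(amplitudes) ** 2          # each equal to 1/6

# Every member of the ensemble gives a single definite outcome ...
throws = rng.choice(np.arange(1, 7), size=60_000, p=born_probs)

# ... and the wave function's content shows up only in the statistics.
freqs = np.bincount(throws, minlength=7)[1:] / throws.size
print(np.round(born_probs, 4))
print(np.round(freqs, 4))   # close to 1/6 = 0.1667 for every face
```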
Criticism David Mermin sees the ensemble interpretation as being motivated by an adherence ("not always acknowledged") to classical principles. "[...] the notion that probabilistic theories must be about ensembles implicitly assumes that probability is about ignorance. (The 'hidden variables' are whatever it is that we are ignorant of.) But in a non-deterministic world probability has nothing to do with incomplete knowledge, and ought not to require an ensemble of systems for its interpretation". However, according to Einstein and others, a key motivation for the ensemble interpretation is not any alleged, implicitly assumed probabilistic ignorance, but the removal of "…unnatural theoretical interpretations…". A specific example is the Schrödinger cat problem, but this concept applies to any system where there is an interpretation that postulates, for example, that an object might exist in two positions at once. Mermin also emphasizes the importance of describing single systems, rather than ensembles. "The second motivation for an ensemble interpretation is the intuition that because quantum mechanics is inherently probabilistic, it only needs to make sense as a theory of ensembles. Whether or not probabilities can be given a sensible meaning for individual systems, this motivation is not compelling. For a theory ought to be able to describe as well as predict the behavior of the world. The fact that physics cannot make deterministic predictions about individual systems does not excuse us from pursuing the goal of being able to describe them as they currently are." Schrödinger's cat The ensemble interpretation states that superpositions are nothing but subensembles of a larger statistical ensemble. That being the case, the state vector would not apply to individual cat experiments, but only to the statistics of many similarly prepared cat experiments. Proponents of this interpretation state that this makes the Schrödinger's cat paradox a trivial non-issue. However, the application of state vectors to individual systems, rather than ensembles, has been claimed to have explanatory benefits, in areas like single-particle twin-slit experiments and quantum computing (see Schrödinger's cat applications). As an avowedly minimalist approach, the ensemble interpretation does not offer any specific alternative explanation for these phenomena. The frequentist probability variation The claim that the wave function approach fails to apply to single particle experiments cannot be taken as a claim that quantum mechanics fails in describing single-particle phenomena. In fact, it gives correct results within the limits of a probabilistic or stochastic theory. Probability always requires a set of multiple data, and thus single-particle experiments are really part of an ensemble: an ensemble of individual experiments that are performed one after the other over time. In particular, the interference fringes seen in the double-slit experiment require repeated trials to be observed. The quantum Zeno effect Leslie Ballentine promoted the ensemble interpretation in his book Quantum Mechanics, A Modern Development. In it, he described what he called the "Watched Pot Experiment". His argument was that, under certain circumstances, a repeatedly measured system, such as an unstable nucleus, would be prevented from decaying by the act of measurement itself. He initially presented this as a kind of reductio ad absurdum of wave function collapse. The effect has been shown to be real. 
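The watched-pot argument can be checked with a few lines of arithmetic. For an idealized two-level system undergoing Rabi oscillation at angular frequency Ω (a textbook simplification, not Ballentine's own calculation), the probability of still being found in the initial state after time t is cos²(Ωt/2); with N equally spaced projective measurements over a total time T, the survival probability becomes [cos²(ΩT/2N)]^N, which tends to 1 as N grows:

```python
import numpy as np

omega, T = 1.0, np.pi     # Rabi frequency and total evolution time; with no
                          # intermediate measurement the system fully decays,
                          # since cos^2(omega*T/2) = 0 at T = pi/omega.

for n in [1, 2, 5, 10, 100, 1000]:
    survival = np.cos(omega * T / (2 * n)) ** (2 * n)
    print(f"N = {n:5d}  survival probability = {survival:.4f}")
# The survival probability climbs toward 1: frequent observation "freezes"
# the evolution, which is the quantum Zeno effect described above.
```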
Ballentine later wrote papers claiming that it could be explained without wave function collapse. See also Atomic electron transition Interpretations of quantum mechanics References External links Quantum mechanics as Wim Muynk sees it Kevin Aylward's account of the ensemble interpretation Detailed ensemble interpretation by Marcel Nooijen Pechenkin, A.A. The early statistical interpretations of quantum mechanics Krüger, T. An attempt to close the Einstein–Podolsky–Rosen debate Duda, J. Four-dimensional understanding of quantum mechanics Ulf Klein's website on the statistical interpretation of quantum theory Mamas, D.L. An intrinsic quantum state interpretation of quantum mechanics Klein, U. From probabilistic mechanics to quantum theory Quantum measurement Interpretations of quantum mechanics
Ensemble interpretation
[ "Physics" ]
4,851
[ "Interpretations of quantum mechanics", "Quantum measurement", "Quantum mechanics" ]
7,392,938
https://en.wikipedia.org/wiki/Trope%20%28music%29
A trope or tropus may refer to a variety of different concepts in medieval, 20th-, and 21st-century music. The term trope derives from the Greek τρόπος (tropos), "a turn, a change", related to the root of the verb τρέπειν (trepein), "to turn, to direct, to alter, to change". The Latinised form of the word is tropus. In music, a trope is the addition of another section, or trope, to a plainchant or section of plainchant, thus making it appropriate to a particular occasion or festival. Medieval music From the 9th century onward, trope refers to additions of new music to pre-existing chants in use in the Western Christian Church. Three types of addition are found in music manuscripts: new melismas without text (mostly unlabelled or called "trope" in manuscripts) addition of a new text to a pre-existing melisma (more often called prosula, prosa, verba or versus) new verse or verses, consisting of both text and music (mostly called trope, but also laudes or versus in manuscripts). The new verses can appear preceding or following the original material, or in between phrases.
O God creator of all things, thou our merciful God eleyson,
we pray to thee, O great king of kings, singing praises together to thee eleyson,
to whom be praise, power, peace and dominion for ever without end eleyson,
O Christ, sole king, O Son coeternal with the kind Father eleyson
who saved mankind, being lost, giving life for death eleyson
lest your pastured sheep should perish, O Jesus, good shepherd eleyson.
Consoler of suppliant spirits below, we beseech thee eleyson,
O Lord, our strength and our salvation for eternity eleyson,
O highest God, grant to us the gifts of eternal life and have mercy upon us eleyson.
The standard Latin-rite ninefold Kyrie is the backbone of this trope. Although the supplicatory format ('eleyson'/'have mercy') has been retained, the Kyrie in this troped format adopts a distinctly Trinitarian cast with a tercet address to the Holy Spirit which is not present in the standard Kyrie. Deus creator omnium is thus a fine example of the literary and doctrinal sophistication of some of the tropes used in the Latin rite and its derived uses in the mediæval period. 20th-century music In certain types of atonal and serial music, a trope is an unordered collection of different pitches, most often of cardinality six (now usually called an unordered hexachord, of which there are two complementary ones in twelve-tone equal temperament). Tropes in this sense were devised and named by Josef Matthias Hauer in connection with his own twelve-tone technique, developed simultaneously with but overshadowed by Arnold Schoenberg's. Hauer discovered the 44 tropes, pairs of complementary hexachords, in 1921, allowing him to classify any of the 479,001,600 twelve-tone melodies into one of 44 types. The primary purpose of the tropes is not analysis (although it can be used for it) but composition. A trope is neither a hexatonic scale nor a chord. Likewise, it is neither a pitch-class set nor an interval-class set. A trope is a framework of contextual interval relations. Therefore, the key information a trope contains is not the set of intervals it consists of (and by no means any set of pitch-classes); it is the relational structure of its intervals. Each trope contains different types of symmetries and significant structural intervallic relations on varying levels, namely within its hexachords, between the two halves of a hexachord, and with relation to whole other tropes. 
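Hauer's count of 44 can be reproduced by brute force, under the usual reading that a trope is an unordered pair of complementary hexachords considered up to a common transposition (that equivalence is an assumption made explicit here, since the text above does not spell it out):

```python
from itertools import combinations

PITCH_CLASSES = frozenset(range(12))

def transpose(hexachord, n):
    """Transpose a set of pitch classes by n semitones (mod 12)."""
    return frozenset((p + n) % 12 for p in hexachord)

# Every hexachord determines a trope together with its complement.
tropes = {
    frozenset({h, PITCH_CLASSES - h})
    for h in map(frozenset, combinations(range(12), 6))
}
print(len(tropes))          # 462 unordered complementary pairs

# Group the pairs that are related by transposing both hexachords together.
def orbit(trope):
    return frozenset(
        frozenset(transpose(h, n) for h in trope) for n in range(12)
    )

trope_classes = {orbit(t) for t in tropes}
print(len(trope_classes))   # 44, matching Hauer's tropes
```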
Based on the knowledge one has about the intervallic properties of a trope, one can make precise statements about any twelve-tone row that can be created from it. A composer can utilize this knowledge in many ways in order to gain full control over the musical material in terms of form, harmony and melody. The hexachords of trope no. 3 are related by inversion. Trope 3 is therefore suitable for the creation of inversional and retrograde inversional structures. Moreover, its primary formative intervals are the minor second and the major third/minor sixth. This trope contains [0,2,6] twice inside its first hexachord (e.g. F–G–B and G–A–C) and [0,4,6] in the second one (e.g. A–C–D and B–D–E). Its multiplications M5 and M7 will result in trope 30 (and vice versa). Trope 3 also allows the creation of an intertwined retrograde transposition by a major second and therefore of trope 17 (e.g., G–A–C–B–F–F–|–E–E–C–D–B–A → Bold pitches represent a hexachord of trope 17). In general, familiarity with the tropes enables a composer to precisely predetermine a whole composition according to almost any structural plan. For instance, an inversional twelve-tone row from this trope 3 (such as G–A–C–B–F–F–D–C–A–B–E–D) that is harmonized by the [3–3–3–3] method as suggested by Hauer will result in an equally inversional sequence of sonorities. This will enable the composer, for example, to write an inversional canon or a mirror fugue easily (see example 1). The symmetry of a twelve-tone row can thus be transferred to a whole composition likewise. Consequently, trope technique allows the integration of a formal concept into both a twelve-tone row and a harmonic matrix—and therefore into a whole musical piece. See also Trope (cantillation), (Yiddish טראָפ), the notation for accentuation and musical reading of the Bible in Jewish religious liturgy References Sources Further reading Dewhitt, Mitzi. 2010. The Meaning of the Musical Tree. [USA]: Xlibris Corp. Hansen, Finn Egeland. 1990. "Tropering: Et kompositionsprincip". In Festskrift Søren Sørensen: 1920. 29 September 1990, edited by Finn Egeland Hansen, Steen Pade, Christian Thodberg, and Arthur Ilfeldt, 185–205. Copenhagen: Fog. Hauer, Josef Matthias. 1948. Knapp, Janet. 1990. "Which Came First, the Chicken or the Egg?: Some Reflections on the Relationship between Conductus and Trope". In Essays in Musicology: A Tribute to Alvin Johnson, edited by Lewis Lockwood and Edward Roesner. [Philadelphia?]: American Musicological Society. Perle, George. 1991. Serial Composition and Atonality: An Introduction to the Music of Schoenberg, Berg, and Webern, sixth edition, revised. Berkeley: University of California Press. Sedivy, Dominik. 2012. Tropentechnik. Ihre Anwendung und ihre Möglichkeiten. Salzburger Stier 5. Würzburg: Königshausen & Neumann. Sengstschmid, Johann. 1980. Zwischen Trope und Zwölftonspiel: J. M. Hauers Zwölftontechnik in ausgewählten Beispielen. Forschungsbeiträge zur Musikwissenschaft 28. Regensburg: G. Bosse. Summers, William John. 2007. "To Trope or Not to Trope?: or, How Was That English Gloria Performed?" In Music in Medieval Europe: Studies in Honour of Bryan Gillingham, edited by Terence Bailey and Alma Santosuosso. Aldershot, England; Burlington, Vermont: Ashgate Publishers. Christian music Formal sections in music analysis Twelve-tone technique
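The multiplication operations mentioned above act on pitch classes as M5: x ↦ 5x (mod 12) and M7: x ↦ 7x (mod 12). A minimal sketch, which makes no attempt to reproduce Hauer's trope numbering, shows that they carry any pair of complementary hexachords to another such pair:

```python
def multiply(hexachord, m):
    """Apply the pitch-class multiplication Mm: x -> m*x mod 12."""
    return frozenset((m * p) % 12 for p in hexachord)

hexachord = frozenset({0, 1, 2, 3, 4, 5})        # an arbitrary hexachord
complement = frozenset(range(12)) - hexachord

for m in (5, 7):
    image = multiply(hexachord, m)
    # M5 and M7 are bijections on the twelve pitch classes, so they map
    # complementary hexachords to complementary hexachords, i.e. tropes
    # to tropes (generally to a different trope).
    assert multiply(complement, m) == frozenset(range(12)) - image
    print(m, sorted(image))
```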
Trope (music)
[ "Technology" ]
1,736
[ "Components", "Formal sections in music analysis" ]
7,393,195
https://en.wikipedia.org/wiki/Cumulative%20hierarchy
In mathematics, specifically set theory, a cumulative hierarchy is a family of sets W_α indexed by ordinals α such that W_α ⊆ W_{α+1} and, if λ is a limit ordinal, then W_λ = ⋃_{β<λ} W_β. Some authors additionally require that W_{α+1} ⊆ P(W_α) or that W_0 is empty. The union W of the sets of a cumulative hierarchy is often used as a model of set theory. The phrase "the cumulative hierarchy" usually refers to the von Neumann universe, which has W_{α+1} = P(W_α). Reflection principle A cumulative hierarchy satisfies a form of the reflection principle: any formula in the language of set theory that holds in the union W of the hierarchy also holds in some stages W_α. Examples The von Neumann universe is built from a cumulative hierarchy V_α. The sets L_α of the constructible universe form a cumulative hierarchy. The Boolean-valued models constructed by forcing are built using a cumulative hierarchy. The well-founded sets in a model of set theory (possibly not satisfying the axiom of foundation) form a cumulative hierarchy whose union satisfies the axiom of foundation. References Set theory
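The definition and the reflection principle reconstructed above can be restated compactly in display form (a standard formulation using the conventional W and V notation, not tied to a specific source):

```latex
% Cumulative hierarchy (W_\alpha), indexed by ordinals:
\[
W_{\alpha} \subseteq W_{\alpha+1}, \qquad
W_{\lambda} = \bigcup_{\beta < \lambda} W_{\beta}
\quad (\lambda \text{ a limit ordinal}).
\]
% The von Neumann universe takes the full power set at successor stages:
\[
V_{0} = \varnothing, \qquad
V_{\alpha+1} = \mathcal{P}(V_{\alpha}), \qquad
V_{\lambda} = \bigcup_{\beta < \lambda} V_{\beta}.
\]
% Reflection: a formula true in the union W = \bigcup_{\alpha} W_{\alpha}
% already holds in some stage of the hierarchy:
\[
W \models \varphi \;\implies\;
\exists \alpha \, \bigl( W_{\alpha} \models \varphi \bigr).
\]
```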
Cumulative hierarchy
[ "Mathematics" ]
191
[ "Mathematical logic", "Set theory" ]
7,393,279
https://en.wikipedia.org/wiki/Soilmec
Soilmec S.p.A. is an Italian manufacturer of construction equipment, established in 1969 in Cesena and belonging to the Trevi Group. Its products are distributed in more than 70 countries worldwide. Soilmec manufactures drilling machinery used in the construction of pile foundations and in the drilling and servicing of oil, gas and water wells. The company has expanded into the manufacture of crawler cranes and tunnel boring machines. The machinery produced by Soilmec is typically white with blue trim, and the name SOILMEC is written in yellow text with a black outline. Gallery See also List of Italian companies External links Soilmec S.p.A website Construction equipment manufacturers of Italy Mining equipment companies Engineering companies of Italy Manufacturing companies established in 1969 Italian companies established in 1969 Italian brands
Soilmec
[ "Engineering" ]
160
[ "Mining equipment", "Mining equipment companies" ]
7,393,485
https://en.wikipedia.org/wiki/Lecithinase
Lecithinase is a type of phospholipase that acts upon lecithin. It can be produced by Clostridium perfringens, Staphylococcus aureus, Pseudomonas aeruginosa or Listeria monocytogenes. C. perfringens alpha toxin (lecithinase) causes myonecrosis and hemolysis. The lecithinase of S. aureus is used in the detection of coagulase-positive strains, because of the strong correlation between lecithinase activity and coagulase activity. References EC 3.1.4
Lecithinase
[ "Chemistry", "Biology" ]
130
[ "Biochemistry stubs", "Biotechnology stubs", "Biochemistry" ]
7,393,754
https://en.wikipedia.org/wiki/Advanced%20CCD%20Imaging%20Spectrometer
The Advanced CCD Imaging Spectrometer (ACIS), formerly the AXAF CCD Imaging Spectrometer, is an instrument built by a team from the Massachusetts Institute of Technology's Center for Space Research and the Pennsylvania State University for the Chandra X-ray Observatory. ACIS is a focal plane instrument that uses an array of charge-coupled devices. It serves as an X-ray integral field spectrograph for Chandra. The instrument is capable of measuring both the position and energy of incoming X-rays. The CCD sensors of ACIS operate at and its filters at . It carries a special heater that allows contamination from Chandra to be baked off; the spacecraft contains lubricants, and the ACIS design took this into account in order to keep its sensors clean. Contamination buildup can reduce the instrument's sensitivity. Radiation in space is another potential danger to the sensor. As of 2014, after 15 years of operation, there was no indication of a limit to the lifetime of ACIS. Another design feature of the instrument was a calibration source that can be used to assess its health. This allows for a measurement of the level of contamination, if present, as well as any degree of charge transfer inefficiency. References External links ACIS website by the Massachusetts Institute of Technology ACIS website by Pennsylvania State University The Chandra Proposers' Observatory Guide by the Smithsonian Astrophysical Observatory Chandra X-ray Observatory Space telescope sensors
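Since ACIS records the position and energy of each detected X-ray, its raw product can be pictured as an event list from which an image and a spectrum are built simultaneously. The sketch below uses invented column names and toy numbers; it is not the actual Chandra data format or calibration pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy event list: detector coordinates (pixels) and photon energy (keV).
n_events = 5_000
x = rng.integers(0, 1024, n_events)
y = rng.integers(0, 1024, n_events)
energy_kev = rng.exponential(scale=2.0, size=n_events) + 0.3

# Sky image: ignore energy and histogram the positions.
image, _, _ = np.histogram2d(x, y, bins=(64, 64), range=((0, 1024), (0, 1024)))

# Spectrum: ignore position and histogram the energies.
spectrum, edges = np.histogram(energy_kev, bins=50, range=(0.3, 10.0))

print(image.shape, image.sum())   # (64, 64) grid of binned counts
print(spectrum[:5], edges[:3])    # counts per energy bin
```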
Advanced CCD Imaging Spectrometer
[ "Astronomy" ]
292
[ "Space telescopes", "Chandra X-ray Observatory", "Space telescope sensors" ]
7,394,916
https://en.wikipedia.org/wiki/Gun%20data%20computer
The gun data computer was a series of artillery computers used by the U.S. Army for coastal artillery, field artillery and anti-aircraft artillery applications. For antiaircraft applications they were used in conjunction with a director computer. Variations M1: This was used by seacoast artillery for major-caliber seacoast guns. It computed continuous firing data for a battery of two guns that were separated by not more than . It utilised the same type of input data furnished by a range section with the then-current (1940) types of position-finding and fire-control equipment. M3: This was used in conjunction with the M9 and M10 directors to compute all required firing data, i.e. azimuth, elevation and fuze time. The computations were made continuously, so that the gun was at all times correctly pointed and the fuze correctly timed for firing at any instant. The computer was mounted in the M13 or M14 director trailer. M4: This was identical to the M3 except for some mechanisms and parts which were altered to allow for different ammunition being used. M8: This was an electronic computer (using vacuum tube technology) built by Bell Labs and used by coast artillery with medium-caliber guns (up to ). It made the following corrections: wind, drift, Earth's rotation, muzzle velocity, air density, height of site and spot corrections. M9: This was identical to the M8 except for some mechanisms and parts which were altered to accommodate anti-aircraft ammunition and guns. M10: A ballistics computer, part of the M38 fire control system, for Skysweeper anti-aircraft guns. M13: A ballistics computer for M48 tanks. M14: A ballistics computer for M103 heavy tanks. M15: A part of the M35 field artillery fire-control system, which included the M1 gunnery officer console and M27 power supply. M16: A ballistics computer for M60A1 tanks. M18: FADAC (field artillery digital automatic computer), an all-transistorized general-purpose digital computer manufactured by Amelco (Teledyne Systems, Inc.) and North American–Autonetics. FADAC was first fielded during 1960, and was the first semiconductor-based digital electronics field-artillery computer. M19: A ballistics computer for M60A2 tanks. M21: A ballistics computer for M60A3 tanks. M23: A mortar ballistics computer. M26: A fire-control computer for AH-1 Cobra helicopters (AH-1F). M31: A mortar ballistics computer. M32: A mortar ballistics computer (handheld). M1: A ballistics computer for M1 Abrams main battle tanks. Systems The Battery Computer System (BCS) AN/GYK-29 was a computer used by the United States Army for computing artillery fire mission data. It replaced the FADAC and was small enough to fit into the HMMWV combat vehicle. The AN/GSG-10 TACFIRE (Tactical Fire) direction system automated field artillery command and control functions. It was composed of computers and remote devices such as the Variable Format Message Entry Device (VFMED), the AN/PSG-2 Digital Message Device (DMD) and the AN/TPQ-36 Firefinder field artillery target acquisition radar system linked by digital communications using existing radio and wire communications equipment. Later it also linked with the BCS which had more advanced targeting algorithms. The last TACFIRE fielding was completed during 1987. Replacement of TACFIRE equipment began during 1994. TACFIRE used the AN/GYK-12, a second-generation mainframe computer developed primarily by Litton Industries for Army divisional field artillery (DIVARTY) units. 
It had two configurations (division and battalion level) housed in mobile command shelters. Field artillery brigades also use the division configuration. Components of the system were identified using acronyms: CPU (Central Processing Unit), IOU (Input/Output Unit), MCMU (Mass Core Memory Unit), DDT (Digital Data Terminal), MTU (Magnetic Tape Unit), PCG (Power Converter Group), ELP (Electronic Line Printer), DPM (Digital Plotter Map), ACC (Artillery Control Console) and RCMU (Remote Control Monitoring Unit). The successor to the TACFIRE system is the Advanced Field Artillery Tactical Data System (AFATDS). The AFATDS is the "Fires XXI" computer system for both tactical and technical fire control. It replaced both BCS (for technical fire solutions) and IFSAS/L-TACFIRE (for tactical fire control) systems in U.S. Field Artillery organizations, as well as in maneuver fire support elements at the battalion level and higher. As of 2009, the U.S. Army was transitioning from a version based on a Sun Microsystems SPARC computer running the Linux kernel to a version based on laptop computers running the Microsoft Windows operating system. Surviving examples One reason for a lack of surviving examples of early units was the use of radium on the dials. As a result, they were classified as hazardous waste and were disposed of by the United States Department of Energy. Currently there is one surviving example of FADAC at the Fort Sill artillery museum. See also Director (military) Fire-control system Kerrison Predictor List of military electronics of the United States Mark I Fire Control Computer – US Navy system for 5-inch guns Numerical control Project Manager Battle Command Rangekeeper References Sources TM 9-2300 Standard Artillery and Fire Control Materiel dated 1944 TM 9-2300 Artillery Materiel and Associated Equipment dated May 1949 ST 9-159 Handbook of Ordnance materiel dated 1968 Gun Data Computers, Coast Artillery Journal March–April 1946, pp. 45–47 External links http://www.globalsecurity.org/military/library/report/1988/MJR.htm http://ed-thelen.org/comp-hist/BRL61.html#TOC modern system https://web.archive.org/web/20110617062042/http://sill-www.army.mil/famag/1960/sep_1960/SEP_1960_PAGES_8_15.pdf https://web.archive.org/web/20040511174351/http://combatindex.com/mil_docs/pdf/hdbk/0700/MIL-HDBK-799.pdf https://web.archive.org/web/20110720002347/https://rdl.train.army.mil/soldierPortal/atia/adlsc/view/public/12288-1/FM/3-22.91/chap1.htm https://web.archive.org/web/20110617062233/http://sill-www.army.mil/famag/1958/FEB_1958/FEB_1958_PAGES_32_35.pdf Bell labs patent http://web.mit.edu/STS.035/www/PDFs/Newell.pdf tacfire BCS components Military electronics of the United States Artillery operation Applications of control engineering Analog computers Ballistics World War II American electronics Fire-control computers of World War II
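The core geometry behind the firing data listed above (elevation and fuze time for a given range) can be sketched with a deliberately simplified, drag-free model. The muzzle velocity and range below are invented figures, and a real gun data computer such as the M3 or M8 layered on exactly the corrections the article names (wind, drift, Earth's rotation, muzzle velocity, air density):

```python
import math

def firing_solution(target_range_m, muzzle_velocity_ms, g=9.81):
    """Low-angle vacuum trajectory: elevation and time of flight.

    Solves R = v^2 * sin(2*theta) / g for theta, then computes the flight
    time, which is what a time fuze would be set to (drag ignored).
    """
    s = g * target_range_m / muzzle_velocity_ms**2
    if s > 1.0:
        raise ValueError("target beyond maximum vacuum range")
    elevation = 0.5 * math.asin(s)                      # radians, low arc
    time_of_flight = 2 * muzzle_velocity_ms * math.sin(elevation) / g
    return math.degrees(elevation), time_of_flight

elev_deg, fuze_s = firing_solution(target_range_m=10_000,
                                   muzzle_velocity_ms=820)
print(f"elevation {elev_deg:.2f} deg, fuze time {fuze_s:.1f} s")
```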
Gun data computer
[ "Physics", "Engineering" ]
1,535
[ "Control engineering", "Applied and interdisciplinary physics", "Ballistics", "Applications of control engineering" ]
7,395,285
https://en.wikipedia.org/wiki/Food%20Technology%20Industrial%20Achievement%20Award
The Food Technology Industrial Achievement Award has been awarded by the Institute of Food Technologists (IFT) since 1959. It is awarded for the development of an outstanding food process or product that represents a significant advance in the application of food technology to food production. The process or product must have been successfully applied in actual commercial operations between six months and seven years before December 1 of the year of nomination. The award is sponsored by Food Technology magazine, and winners receive a plaque from IFT. Winners References List of past winners - Official site Food technology awards Awards established in 1959
Food Technology Industrial Achievement Award
[ "Technology" ]
109
[ "Science and technology awards", "Food technology awards" ]
7,395,352
https://en.wikipedia.org/wiki/Lixiviant
A lixiviant is a chemical used in hydrometallurgy to extract metals from their ores. One of the most famous lixiviants is cyanide, which is used in extracting 90% of mined gold. The combination of cyanide and air converts gold particles into a soluble salt. Once separated from the bulk gangue, the solution is processed in a series of steps to give the metal. Etymology The word derives from lixiviate, meaning to leach or to dissolve out, from the Latin lixivium. A lixiviant assists in rapid and complete leaching, for example during in situ leaching. After leaching, the metal can be recovered from the lixiviant solution in a concentrated form. Further reading References Metallurgical processes
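For the gold-cyanide example above, the overall stoichiometry of dissolution in aerated cyanide solution is commonly summarized by the Elsner equation. This equation is well established in hydrometallurgy, though the article itself does not state it; a standard rendering is:

```latex
% Elsner equation: oxidative dissolution of gold in aerated cyanide solution.
\[
4\,\mathrm{Au} + 8\,\mathrm{NaCN} + \mathrm{O_2} + 2\,\mathrm{H_2O}
  \longrightarrow 4\,\mathrm{Na[Au(CN)_2]} + 4\,\mathrm{NaOH}
\]
```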
Lixiviant
[ "Chemistry", "Materials_science", "Engineering" ]
161
[ "Metallurgical processes", "Materials science stubs", "Materials science", "Metallurgy" ]
7,396,128
https://en.wikipedia.org/wiki/Asset%20turnover
In finance, asset turnover (ATO), total asset turnover, or asset turns is a financial ratio that measures the efficiency of a company's use of its assets in generating sales revenue or sales income for the company. Asset turnover is an activity (efficiency) ratio, part of a group of financial ratios that measure how efficiently a company uses its assets. Asset turnover can be further subdivided into fixed asset turnover, which measures a company's use of its fixed assets to generate revenue, and working capital turnover, which measures a company's use of its working capital (current assets minus current liabilities) to generate revenue. Total asset turnover ratios can be used to calculate return on equity (ROE) figures as part of DuPont analysis. As a financial and activity ratio, and as part of DuPont analysis, asset turnover is a part of company fundamental analysis. Companies with low profit margins tend to have high asset turnover, while those with high profit margins have low asset turnover. Companies in the retail industry tend to have a very high turnover ratio, due mainly to cutthroat and competitive pricing. The ratio is calculated as: Total Asset Turnover = Net Sales / Average Total Assets, where "Sales" is the value of "Net Sales" or "Sales" from the company's income statement, and "Average Total Assets" is the average of the values of "Total assets" from the company's balance sheet at the beginning and the end of the fiscal period. It is calculated by adding up the assets at the beginning of the period and the assets at the end of the period, then dividing that number by two. This method can produce unreliable results for businesses that experience significant intra-year fluctuations. For such businesses, it is advisable to use some other formula for Average Total Assets. Alternatively, "Average Total Assets" can be taken as ending total assets. References Financial ratios
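A minimal sketch of the calculation described above, in Python; the company figures are invented purely for illustration:

```python
def average_total_assets(beginning_assets: float, ending_assets: float) -> float:
    """Simple two-point average of total assets over a fiscal period."""
    return (beginning_assets + ending_assets) / 2

def asset_turnover(net_sales: float, beginning_assets: float, ending_assets: float) -> float:
    """Total asset turnover = net sales / average total assets."""
    return net_sales / average_total_assets(beginning_assets, ending_assets)

# Hypothetical retailer: high turnover, consistent with the low-margin pattern noted above.
print(round(asset_turnover(net_sales=500_000.0,
                           beginning_assets=180_000.0,
                           ending_assets=220_000.0), 2))  # -> 2.5
```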
Asset turnover
[ "Mathematics" ]
360
[ "Financial ratios", "Quantity", "Metrics" ]
7,396,194
https://en.wikipedia.org/wiki/Playground%20slide
Playground slides are found in parks, schools, playgrounds and backyards. The slide is an example of the simple machine known as the inclined plane, which makes moving objects up and down easier, or in this case more fun. The slide may be flat, or half cylindrical or tubular to prevent falls. Slides are usually constructed of plastic, metal, and sometimes concrete. They have a smooth surface called a 'slide bed' that is either straight for the full length or can contain bends. The user, typically a child, climbs to the top of the slide via a ladder or stairs, sits down on the top of the slide, and slides down the chute. In Australia, the playground slide is known as a slide, slippery slide, slipper slide or slippery dip, depending on the region, whereas "sliding board" is used in the Philadelphia area and other parts of the Mid-Atlantic. History The earliest known playground slide was erected in the playground of Washington, D.C.'s "Neighborhood House" sometime between the establishment of the "Neighborhood House" in early 1902 and the publication of an image of the slide on August 1, 1903, in the Evening Star (Washington, DC). The first bamboo slide at Coney Island opened for business in May 1903, so it is unclear which came first, the playground slide or the amusement park slide. Early slides were frequently referred to as "Slide, Kelly, Slide" (after the song of the same name), "Helter Skelter" (after the slide at Coney Island), or "Shoot the Chutes" (after the water slide made famous by "Captain" Paul Boyton). The manufacturer Wicksteed claims that the playground slide was invented by its founder, Charles Wicksteed, and installed in Wicksteed Park in 1922. The discovery of Wicksteed's oldest slide was announced by the company in 2013. However, this claim has been countered by a US patent of 25 July 1916 and by others who point to a rooftop slide in New York City, the nursery slide of the young Tsarevich Alexei at Alexander Palace in Tsarskoye Selo, built around 1910, the 45-foot (13.7 m) slide at the Smith Memorial Playground in Philadelphia, which was installed in 1904 (renovated and reopened in 2005), and the Coney Island slide of around 1905. Indeed, Arthur Leyland's book "Playground Technique and Playcraft", volume 1, originally published in 1909 and revised in 1913, gives full instructions for the construction of a metal playground slide. Types Here is a list of slide styles: A spiral slide is a playground slide that is wrapped around a central pole to form a descending spiral, forming a simple helter skelter. A wavy slide is a slide that has waves in its shape, causing the person sliding to go up and down slightly while descending. A tube slide is simply a slide in the form of a tube. It can also curve or have bumps. A straight slide is a flat slide that just goes down at a slight angle. A roller slide is a slide made of horizontal cylinders which spin underneath the person sliding as they travel down. Amusement-park slides are just larger versions of the playground slide, much higher and with multiple parallel slideways. Participants may be provided with a sack to sit on to reduce friction for higher speeds and to protect clothing. Drop slides are slides with a vertical or nearly vertical drop (nicknamed the death slide or free-fall slide). Water slides are a type of slide that water streams down to create a slippery surface; they are found near water, generally in water parks or pools. Inflatable slides are a type of slide that is continuously blown up by an exterior blower.
The air flow allows the slide to be softer than traditional slides. They are also used on airplanes during emergency evacuations, where they are known as evacuation slides. Ice slides are a type of slide made of ice. There are several other types and styles of slides. Slides can also be sub-classified as either free-standing slides, which stand on their own, or composite slides, which are connected to one or more other pieces of playground equipment. Safety Playground slides are associated with several types of injury. The most obvious is that when a slide is not enclosed and is elevated above the playground surface, users may fall off and incur bumps, bruises, sprains, broken bones, or traumatic head injuries. Some materials, such as metal, may become very hot during warm, sunny weather. Plastic slides can also be vulnerable to melting by arson. Some efforts to keep children safe on slides may do more harm than good. Rather than letting young children play on slides by themselves, some parents seat the children on the adult's lap and go down the slide together. If the child's shoe catches on the edge of the slide, however, this arrangement frequently results in the child's leg being broken. If the child had been permitted to use the slide independently, this injury would not have happened, because when the shoe caught, the child would have stopped sliding rather than being propelled down the slide by the adult's weight. See also Jungle gym (monkey bars) Outdoor playset Swing (seat) Slide (disambiguation) Notes Sources Play (activity) Playground equipment 1922 introductions
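To make concrete the article's point that a slide is an inclined plane, here is a minimal sketch (in Python) of the standard frictional-incline model; the slide dimensions and friction coefficient are invented for illustration, and real slides with bends or bumps would need a more detailed model:

```python
import math

def slide_exit_speed(length_m: float, angle_deg: float, mu: float, g: float = 9.81) -> float:
    """Exit speed (m/s) for a rider starting from rest on a straight incline
    with kinetic friction coefficient mu; returns 0 if friction prevents sliding."""
    theta = math.radians(angle_deg)
    a = g * (math.sin(theta) - mu * math.cos(theta))  # net acceleration along the slide bed
    return math.sqrt(2 * a * length_m) if a > 0 else 0.0

# Hypothetical 3 m slide at 35 degrees; plastic-on-clothing friction guessed at 0.3.
print(round(slide_exit_speed(3.0, 35.0, 0.3), 2))  # roughly 4.4 m/s
```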
Playground slide
[ "Biology" ]
1,084
[ "Play (activity)", "Behavior", "Human behavior" ]
7,396,884
https://en.wikipedia.org/wiki/Subsurface%20lithoautotrophic%20microbial%20ecosystem
Subsurface lithoautotrophic microbial ecosystems, or "SLIMEs" (also abbreviated "SLMEs" or "SLiMEs"), are a type of endolithic ecosystem. They are defined by Edward O. Wilson as "unique assemblages of bacteria and fungi that occupy pores in the interlocking mineral grains of igneous rock beneath Earth's surface." Endolithic systems are still at an early stage of exploration. In some cases their biota can support simple invertebrates; in most, the organisms are unicellular. Near-surface layers of rock may contain blue-green algae, but most energy comes from chemosynthesis based on minerals rather than from sunlight. The limited supply of energy constrains the rates of growth and reproduction. In deeper rock layers, microbes are exposed to high pressures and temperatures. References Further reading Biodiversity Systems ecology Ecosystems
Subsurface lithoautotrophic microbial ecosystem
[ "Biology", "Environmental_science" ]
174
[ "Symbiosis", "Systems ecology", "Ecosystems", "Biodiversity", "Environmental social science" ]
7,399,002
https://en.wikipedia.org/wiki/Trypticase%20soy%20agar
Trypticase soy agar or tryptic soy agar (TSA) is a growth medium for culturing moderately fastidious to non-fastidious bacteria. It is a general-purpose, non-selective medium providing enough nutrients to allow a wide variety of microorganisms to grow. It is used for a wide range of applications, including culture storage, enumeration of cells (counting), isolation of pure cultures, or simply general culture. TSA contains enzymatic digests of casein and soybean meal, which provide amino acids and other nitrogenous substances, making it a nutritious medium for a variety of organisms. Sodium chloride maintains the osmotic equilibrium, while dipotassium phosphate acts as a buffer to maintain pH. Agar, which may be extracted from a number of seaweed species, is used as the gelling agent. One liter of the medium contains: 15 g pancreatic digest of casein, 5 g peptic digest of soybean meal, 5 g sodium chloride, and 15 g agar. Uses The medium may be supplemented with blood to facilitate the growth of more fastidious bacteria, or with antimicrobial agents to permit the selection of particular microbial groups from a mixed microbiota. As with any medium, minor changes may be made to suit specific circumstances. TSA is frequently the base medium of other agar plate types. For example, blood agar plates (BAP) are made by enriching TSA plates with defibrinated sheep blood, and chocolate agar is made through additional cooking of BAP. References Microbiological media
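As a small worked example of the formulation above, a batch of TSA can be scaled linearly from the per-liter quantities; this Python sketch simply performs that arithmetic, using the recipe values listed in the article:

```python
# Per-liter TSA formulation from the article, in grams.
TSA_PER_LITER = {
    "pancreatic digest of casein": 15.0,
    "peptic digest of soybean meal": 5.0,
    "sodium chloride": 5.0,
    "agar": 15.0,
}

def scale_recipe(volume_liters: float) -> dict:
    """Scale the per-liter formulation to an arbitrary batch volume."""
    return {name: grams * volume_liters for name, grams in TSA_PER_LITER.items()}

for ingredient, grams in scale_recipe(0.5).items():  # a 500 mL batch
    print(f"{ingredient}: {grams:.1f} g")
```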
Trypticase soy agar
[ "Biology" ]
322
[ "Microbiological media", "Microbiology equipment" ]
8,887,084
https://en.wikipedia.org/wiki/Function%20series
In calculus, a function series is a series in which each term is a function, rather than just a real or complex number. Examples Examples of function series include ordinary power series, Laurent series, Fourier series, Liouville-Neumann series, formal power series, and Puiseux series. Convergence There exist many types of convergence for a function series, such as uniform convergence, pointwise convergence, and convergence almost everywhere. Each type of convergence corresponds to a different notion of limit for the space of functions being summed, and some (such as uniform convergence) arise from a metric on that space. The Weierstrass M-test is a useful result in studying convergence of function series. See also Function space References Chun Wa Wong (2013) Introduction to Mathematical Physics: Methods & Concepts Oxford University Press p. 655 Mathematical analysis Mathematical series
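A statement of the Weierstrass M-test mentioned above, together with a standard worked example; this is textbook material rather than anything specific to the article:

```latex
% Weierstrass M-test: a uniform bound by a convergent numerical series
% forces uniform (and absolute) convergence of the function series.
\[
\text{If } \sup_{x \in A} |f_n(x)| \le M_n \text{ for all } n
\text{ and } \sum_{n=1}^{\infty} M_n < \infty,
\text{ then } \sum_{n=1}^{\infty} f_n \text{ converges uniformly on } A.
\]
% Example: the series below converges uniformly on all of R, since
% |sin(nx)/n^2| <= 1/n^2 and sum 1/n^2 = pi^2/6 is finite.
\[
\sum_{n=1}^{\infty} \frac{\sin(nx)}{n^{2}},
\qquad
\left|\frac{\sin(nx)}{n^{2}}\right| \le \frac{1}{n^{2}},
\qquad
\sum_{n=1}^{\infty} \frac{1}{n^{2}} = \frac{\pi^{2}}{6} < \infty .
\]
```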
Function series
[ "Mathematics" ]
166
[ "Sequences and series", "Mathematical analysis", "Mathematical structures", "Series (mathematics)", "Calculus" ]
8,887,666
https://en.wikipedia.org/wiki/Carl%20Wagner
Carl Wilhelm Wagner (25 May 1901 – 10 December 1977) was a German physical chemist. He is best known for his pioneering work on solid-state chemistry, where his work on oxidation rate theory, counter diffusion of ions and defect chemistry led to a better understanding of how reactions take place at the atomic level. His life and achievements were honoured in a Solid State Ionics symposium commemorating his 100th birthday in 2001, where he was described as the father of solid-state chemistry. Early life Wagner was born in Leipzig, Germany, the son of Dr Julius Wagner, who was the head of chemistry at the local institute and secretary of the German Bunsen Society of Physical Chemistry. Wagner graduated from the University of Munich and gained his PhD at the University of Leipzig in 1924, supervised by Max Le Blanc, with a dissertation on reaction rates in solutions, "Beiträge zur Kenntnis der Reaktionsgeschwindigkeit in Lösungen". Career Wagner was interested in the measurement of the thermodynamic activities of the components in solid and liquid alloys. He also researched problems of solid-state chemistry, especially the role of defects in ionic crystals in thermodynamic properties, electrical conductivity and diffusion. He became a research fellow at the Bodenstein Institute at the University of Berlin. It was in Berlin that he first became acquainted with Walter H. Schottky, who asked him to co-author a book on thermodynamic problems. Together with Hermann Ulich they published Thermodynamik in 1929, which is still considered a standard reference in the field. In 1930 he was Privatdozent at the University of Jena and published a notable paper with Schottky, "Theorie der geordneten Mischphasen" (Theory of ordered mixed phases). In 1931 he published a paper "Zur Theorie der Gleichrichterwirkung" ("Theory of Rectifier Action") [C. Wagner, Zur Theorie der Gleichrichterwirkung, Phys. Zeitschrift, Vol. 32 (1931), pp 641-645] describing, in the context of copper oxide semiconductors, the basic equations of thermally activated charge carriers and their diffusion in rectifier junctions, which were later described by others such as Davydov [B. Davydov, The rectifying action of semi-conductors, The Technical Physics of the USSR, Vol. 5, No. 2 (1938), pp. 87-95] and Shockley [Shockley, William (1949). "The Theory of p-n Junctions in Semiconductors and p-n Junction Transistors". Bell System Technical Journal. 28 (3): 435–489. doi:10.1002/j.1538-7305.1949.tb03645]. His subsequent published papers led to the new concept of chemical disorder now known as defect chemistry. Wagner spent one year as Visiting Professor of Physical Chemistry at the University of Hamburg in 1933, before moving to the Technische Universität Darmstadt, where he was Professor of Physical Chemistry until 1945. He proposed an important law of oxidation kinetics in 1933. In 1936 he published a crucial paper, "On the mechanism of the formation of ionic crystals of higher order (double salts, spinels, silicates)", introducing the concept of counter-diffusion of cations, which contributed to the understanding of all diffusion-controlled, solid-state reactions. Over a twenty-year period he produced an important body of work relating to bulk transport processes in oxides. Wagner and Schottky proposed the point defect-mediated mechanism of mass transport in solids; Wagner then extended the analysis to electronic defects.
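Wagner's 1933 oxidation work mentioned above is the basis of what is now called the parabolic rate law for oxide scale growth. As a hedged sketch (sign and factor conventions for the rate constant vary between textbooks), the law can be written as:

```latex
% Diffusion-limited scale growth: the growth rate is inversely proportional
% to the current oxide thickness x, giving parabolic growth in time.
\[
\frac{dx}{dt} = \frac{k_p}{x}
\quad\Longrightarrow\quad
x^{2} = 2\,k_p\,t \qquad (x = 0 \text{ at } t = 0),
\]
% where k_p is the parabolic rate constant, determined in Wagner's theory
% by the transport of ions and electrons through the growing oxide layer.
```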
For these works and his subsequent research on local equilibrium, his oxidation rate theory, and the concept of counter diffusion of cations, Wagner is considered by some as the "father of solid state chemistry." At the end of the Second World War, it was anticipated that German universities and research establishments would undergo a long period of rebuilding. Wagner was invited to the United States to become a scientific advisor at Fort Bliss, Texas, with other German scientists as part of Operation Paperclip. He acquired US citizenship at this time. His work on the thermodynamics of fuels used in V2 rockets was continued by Malcolm Hebb, and their techniques are now known as the Hebb-Wagner polarisation method. Wagner was a professor of metallurgy at MIT from 1949 until 1958. He then returned to Germany to take up the position of Director of the Max Planck Institute of Physical Chemistry at Göttingen, which was vacant due to the untimely death of Karl Friedrich Bonhoeffer. In 1961 he produced a paper on the theory of the ageing of precipitates by dissolution-reprecipitation (Ostwald ripening), now known as the Lifshitz-Slyozov-Wagner theory, which helps predict the rate of coarsening in alloys. When NASA tested the theory in space shuttle experiments, they discovered the theory did not work as they initially expected and realised the way engineers had been using it needed to be reconsidered. Legacy Wagner officially retired in 1966 but from 1967 to 1977 was a Scientific Member of the Max Planck Institute in Göttingen, continuing to contribute to publications. Many modern inventions based on solid-state technology and semiconductor fabrication, used in devices such as solar energy conversion, have been developed with the aid of Wagner's theories. Some examples of solid-state electrochemical devices are fuel cells, batteries, sensors and membranes. Wagner died on 10 December 1977 in Göttingen. Honours
1951 - Palladium Medal of the Electrochemical Society
1957 - Willis R. Whitney Award, NACE
1959 - Wilhelm Exner Medal
1961 - Bunsen Medal of the German Bunsen Society
1972 - Honorary member of the German Bunsen Society
1972 - Heyn Medal of the German Society of Metallurgy
1973 - Cavallaro Medal, European Federation of Corrosion
Honorary member of the American Institute of Mining, Metallurgical and Petroleum Engineers
1973 - Honorary member of the Mathematics and Natural Sciences Class of the Austrian Academy of Sciences in Vienna
1973 - Gold Medal of the American Society for Metals
1975 - Honorary Membership of the Japan Institute of Metals
1975 - Corresponding member
See also Electrochemical engineering Diffusion Solid-state ionics Lifshitz–Slyozov–Wagner theory References External links Chemistry Tree: Carl W. Wagner Details 1901 births 1977 deaths Scientists from Leipzig German physical chemists MIT School of Engineering faculty Foreign associates of the National Academy of Sciences Academic staff of Technische Universität Darmstadt Fellows of the Minerals, Metals & Materials Society Solid state chemists Ludwig Maximilian University of Munich alumni Academic staff of the University of Jena 20th-century German chemists Academic staff of Max Planck Society Leipzig University alumni Max Planck Institute directors
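The Lifshitz-Slyozov-Wagner (LSW) coarsening result mentioned above has a standard closed form; a sketch follows (the expression for the rate constant K involves interfacial energy, solubility, diffusivity and temperature, and is omitted here):

```latex
% LSW theory: in the late-stage, diffusion-controlled limit, the mean
% precipitate radius grows with the cube root of time.
\[
\langle r \rangle^{3} - \langle r_0 \rangle^{3} = K\,t,
\qquad\text{so}\qquad
\langle r \rangle \sim (K\,t)^{1/3} \ \text{for large } t,
\]
% where <r_0> is the mean radius at the onset of coarsening and K is a
% temperature-dependent rate constant.
```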
Carl Wagner
[ "Chemistry" ]
1,408
[ "Solid state chemists" ]
8,888,246
https://en.wikipedia.org/wiki/NeSSI
NeSSI (for New Sampling/Sensor Initiative) is a global and open initiative sponsored by the Center for Process Analysis and Control (CPAC) at the University of Washington, in Seattle. The NeSSI initiative was begun to simplify the tasks and reduce the overall costs associated with engineering, installing, and maintaining chemical process analytical systems. Process analytical systems are commonly used by the chemical, oil refining and petrochemical industries to measure and control both chemical composition as well as certain intrinsic physical properties (such as viscosity). The specific objectives of NeSSI are: Increasing the reliability of these systems through the use of increased automation, Shrinking their physical size and energy use by means of miniaturization, Promoting the creation and use of industry standards for process analytical systems, Helping create the infrastructure needed to support the use of the emerging class of robust and selective microAnalytical sensors. To date, NeSSI has served as a forum for the adoption and improvement of an industrial standard which specifies the use of miniature and modular Lego-like flow components. NeSSI has also issued a specification which has been instrumental in spurring the development and commercialization of a plug and play low power communication bus (NeSSI-bus) specifically designed for use with process analytical sample systems in electrically hazardous environments. As part of its development road map, NeSSI has defined the electrical and mechanical interfaces, as well as compiled a list of automated (smart) software features, which are now beginning to be used by microanalytical manufacturers for industrial applications. Background Modern chemical and petrochemical processing plants are complex systems containing many steps (often called unit operations) involved in producing one or more products from various raw materials. In order to control the many processes, for both improved product quality and operational safety, many measurements are made at the different stages of processing. These measurements, either from simple sensors (such as temperature, pressure, flow, etc.) or from sophisticated chemical analyzers (providing composition of one or more components in the chemical stream), are typically used as inputs to process control algorithms to give a "snapshot" of the process operation and to control the process to ensure it is operating efficiently and safely. Traditionally, most of the measurements (with the exception of temperature, pressure and flow) were performed "off-line" by taking a sample from the process and analyzing it in the laboratory. Beginning in the latter of part of the 1930s, a trend aimed at moving the analysis from the laboratory to the process plant began. With the advent of more sophisticated analyzers, this concept known as Process Analytics become much more prevalent in the 1980s and a new discipline called Process Analytical Chemistry (PAC) emerged which combined chemical engineering and analytical chemistry. One of the main driving forces for PAC (See also: PAT) is to remove the bottleneck and time lag associated with sending the samples to the lab and waiting for the analysis results. By moving the analysis to the process, results can be obtained closer to real-time which effectively improves the ability for the control action to correct for process changes (i.e., feedback and feed forward control). 
By far, the most common implementation of PAC (especially for more complex analyzers) utilizes what is known as extractive sampling. This typically involves the continuous (or sometimes periodic) removal of a small portion of sample from a much larger piping system or process vessel. This sample is then conditioned (filtered, pressure regulated, flow controlled, etc.) and introduced to the analyzer, where the chemical composition or the intrinsic physical properties of the process fluids (vapours and liquids) are measured. In industrial plants, the majority of sample systems and their related analyzers are installed in analyzer houses. The hardware (traditionally metal tubing, compression fittings, valves, regulators, rotameters and filters) associated with extractive sampling is collectively referred to as the sampling system. Despite the simple explanation just given, modern sampling systems can be quite large, complex, and expensive. The design features of analytical sample systems have changed little from the time the discipline of process analytics began in Germany through to the present day. An example of an early analyzer and sample system used at the Buna Chemical Works (Schkopau, Germany) is shown in the following photograph. Process analytics remains exceptional in that it is the last outpost of low-level automation (retaining manual adjustments and visible checks) within the process industries. History The rationale for NeSSI originated from focus group meetings held in 1999 at the Center for Process Analytical Chemistry (CPAC), which called for more reliable sampling and analysis for manufacturing processes. Early work with NeSSI was started in July 2000 by Peter van Vuuren (of ExxonMobil Chemical) and Rob Dubois (of Dow Chemical) with the initial aim of adopting new types of modular and miniature hardware which were being addressed in a standard being developed by an ISA (Instrumentation, Systems and Automation Society) technical committee. (Reference 1)
(NeSSI refers to this concept as By-Line analysis) Lay the groundwork for the adoption of an open communication standard(s) for process analytical systems. This includes communication between sample system components such as flow sensors, actuators, and microanalytical sensors, as well as communication to a Distributed Control System (DCS). Comparison of Current Technology vs. NeSSI Technology (Extractive Systems) Technology Development Roadmap The NeSSI Technology Development Roadmap groups the technology into three generations, which are backward compatible. Generation I is a commercial product and proven in numerous industrial and laboratory applications. Generation II products have been proven in the laboratory but have yet to be commercialized. Generation III (microanalytical) is in development. Technical Development Generations Generation I Fluid Components Generation I covers the commercially available mechanical systems associated with the fluid handling components. Generation I has adopted the ANSI/ISA SP76.00.02-2002 miniature, modular mechanical standard. This standard precisely defines inlet and outlet ports and overall dimensions, which allow Lego-like interchangeability of components between different manufacturers. The ANSI/ISA standard is referenced by the International Electrotechnical Commission in publication IEC 62339-1:2006. Currently three manufacturers produce the mechanical mounting system (known as a substrate) which serves as the platform for attaching various components. Since the components are bolted to the surface of the substrate, sealed by O-rings, they are sometimes referred to as surface mount devices. (The semiconductor industry has a related system; however, the sealing is done by metallic seals rather than elastomeric O-rings.) There are currently over 60 different types of surface mount components available from various suppliers who provide valves, filters and regulators as well as pressure and flow sensing devices. Although the platform for mounting various components is common among the manufacturers, the interconnections below the surface are proprietary. The following figure shows three of the common designs. (From left to right) A Swagelok system which uses various lengths of tube connectors set in rigid channels; a CIRCORTech design which uses a single block with assorted flow-tubes; and a Parker Hannifin design which uses various blocks ported together with small connectors which also serve as flow paths. Generation II Connectivity using NeSSI-bus and the SAM The key elements of the NeSSI Generation II Specification are as follows. Adoption of a digital communication bus (NeSSI-bus) that is specifically tailored for process analytics and intended to replace 4-20 mA systems. This bus can handle up to 30 devices. (This bus would be equivalent to a plug-and-play USB bus on a personal computer but with special requirements.) For electrical equipment in hazardous areas, classifying the interior of an enclosure handling hazardous (flammable) fluids (e.g. hydrogen and ethylene) as Division 1/Zone 1 rather than Division/Zone 2. Adopting the use of a safe, low-energy, globally accepted method of electrical protection called intrinsic safety for the NeSSI-bus. Adopting the use of miniature smart/automated electronic devices including sensors (flow, pressure, temperature), on/off and proportional actuators, and enclosure heater controls.
A move away from the use of local indicating devices such as gauges and rotameters in order to reduce labor-intensive manual checking (rounds). A move away from a centralized control (automation) model to a local/field control model, which is represented by a small computing device called the Sensor Actuator Manager (SAM). Adopting the concept of portable, commercially available software smart applets for the purpose of automating specific sample system functions. These applets would be resident in the SAM. Employing an Ethernet network between the SAM, the DCS and the Operator & Maintenance (O & M) user station. (NeSSI refers to this bus as the ANLAN) Introduction of a graphical user interface (GUI) for better visualization of physically compact sampling systems. The first prototype of a multi-node/miniature Generation II system was demonstrated by Siemens Process Analytics in 2006. Siemens has adapted an existing bus system called I2C to operate in an intrinsically safe mode. This work was undertaken once it was determined that existing intrinsically safe digital communication systems such as Foundation Fieldbus and Profibus could not meet the requirements of reduced physical size as well as the lower cost and power draw defined by the NeSSI-bus. Whether or not this bus will go into wide commercial production is unknown at this time. A nonprofit organization, CAN in Automation (CiA), released a 2007 Draft Standard Proposal (DSP-103) that specifies the physical layer of an intrinsically safe bus. [CAN = Controller Area Network] The specification has been developed by members of the CiA organization, among them ABB, Pepperl+Fuchs, Texas Instruments, and Siemens. By using a lower voltage (9.5 V) for its power supply, this bus can provide more current (up to 1,000 mA) to power multiple devices in a hazardous environment. This group has standardized upon the 5-pin M8 pico connector for providing both power and signal to the devices. A commercial implementation of a process analytical system using this bus has yet to be demonstrated. An interim development, called Generation 1.5, uses both conventional 4-20 mA analogue sensors and discrete signals to actuate valves. A Programmable Logic Controller (PLC) is used as the Sensor Actuator Manager (SAM). Generation III - microAnalytical The introduction of new microAnalytical devices to the process industries can be enabled by employing standard physical, electrical and software interfaces. Generation III will allow tighter integration of the sample conditioning and analytical measurement devices. Applications NeSSI is used for process analytical measurements in the petrochemical, chemical and oil refining industries. These measurements may be for quality control of raw material or final product, environmental compliance, safety, energy reduction or process control purposes. Vapour applications may include hydrocarbon feed stocks and intermediates (ethylene, ethane, propylene, etc.), natural gas streams, liquefied petroleum gas (LPG) streams, hydrogen and air gas streams. Liquid systems suitable for use with the Generation I mechanical portion of NeSSI include hydrocarbons such as diesel fuel as well as aqueous streams. Highly viscous fluids and solids are not suitable for use with NeSSI. Very dirty, high-particulate streams need to be filtered. Some liquid service applications may be limited by pressure drops associated with components hooked up in a serial configuration.
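To illustrate the Generation II idea of a Sensor Actuator Manager running local "smart applets", here is a purely conceptual Python sketch; the device names, readings and thresholds are all invented, and no real NeSSI-bus API is implied:

```python
import time

# Hypothetical NeSSI-bus device readings on one substrate (names and values invented).
SENSORS = {"flow_ml_min": lambda: 38.0, "pressure_kpa": lambda: 310.0}

def purge_applet(read, set_valve, low_flow=40.0, max_pressure=350.0):
    """Toy 'smart applet': isolate on overpressure, otherwise restore low flow."""
    if read("pressure_kpa") > max_pressure:
        set_valve("inlet", open_=False)   # isolate the sample system
    elif read("flow_ml_min") < low_flow:
        set_valve("bypass", open_=True)   # open a bypass to restore flow

def sam_loop(cycles=3):
    """Minimal SAM scheduler: poll the sensors and run each applet in turn."""
    def read(name):
        return SENSORS[name]()
    def set_valve(name, open_):
        print(f"valve {name} -> {'open' if open_ else 'closed'}")
    for _ in range(cycles):
        purge_applet(read, set_valve)
        time.sleep(0.1)  # stand-in for the bus polling interval

sam_loop()
```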
NeSSI systems have found applications in areas other than process analytical environments, including microreactor, mini-plant and laboratory environments where small size, unskilled assembly and flexible configuration are important. Role of CPAC The development of NeSSI has been a collaborative effort between industrial end-users, manufacturers who supply the industries, and academic researchers working in the area of process analytics. CPAC continues as the focal point for NeSSI development and sponsor of the NeSSI steering team. CPAC provides a neutral umbrella under which interested companies have been able to meet, discuss needs and issues, and make progress towards defining the future of industrial sampling and analyzer systems. The NeSSI name is trademarked by the University of Washington to ensure that it remains freely associated with the open nature of the initiative: anyone can use the name NeSSI to refer to products or services that are consistent with the specifications and guidelines of NeSSI, as long as they refrain from exclusively tying the name to a proprietary product or service. Criticism, Impact and Summary Criticism of NeSSI mechanical systems has included higher initial cost, inability to troubleshoot at a component level (due to compact/intensive spacing), and the lack of performance data associated with the use of elastomeric seals in long-term installations. From a design perspective, it may be difficult to design a modular, mechanical system which meets the needs of the diverse process applications found in industry. Development of the NeSSI-bus has been an iterative exercise, and it will need the close cooperation of both component and analyzer manufacturers to make their equipment NeSSI-bus compliant. At this time, there are missing elements such as a low-cost, low-power flow sensor capable of providing a continuous reading of sample system flow, as well as a proportional, miniature control valve. The predicted impacts of NeSSI systems are as follows: Adoption of a universally accepted method of protection (intrinsic safety) for sample systems will globalize and harmonize system design and help overcome geographical restrictions currently mandated by various electrical certification/approval bodies such as Factory Mutual (FM), Underwriters Laboratories (UL), ATEX (Europe), GOST (Russian Federation) and the Canadian Standards Association (CSA). Analyzer technical staff will have the capability of accessing the status of all the key indicators of analytical sample system performance, both locally and remotely. Predictive rather than preventive maintenance can be performed, and remote diagnostics and graphical user interfaces become the norm. Analyzer rounds will be eliminated. Analyzer systems will become more reliable and trustworthy. The analyzer technician will have the power to configure a sampling/analytical system as he or she desires using the smart applets. The adjustable wrench and screwdriver will be replaced with software. Molecular management (tighter process control through more analysis of the chemical processes) will become feasible with better, faster, less costly and more abundant analysis. This will help reduce manufacturing energy costs and minimize environmental emissions in the process industries. Since its debut in 2000, the mechanical portion of NeSSI has seen gradual but steady acceptance in industry.
Currently, there are three major commercial suppliers of NeSSI-compliant mechanical systems, along with dozens of components available for mounting on these systems. There is also a growing list of companies implementing NeSSI systems in their manufacturing and pilot-plant facilities. Recently, two of the largest suppliers of process analyzers have committed to supporting NeSSI hardware and the development of the intrinsically safe NeSSI-bus communication in their products. NeSSI is gaining status as a de facto standard for many process sampling system applications. NeSSI (Generation I) acceptance has spread beyond its initial chemical and petrochemical industry roots to find applications in the automotive, food, and pharmaceutical industries, as well as applications as an analytical development system in research laboratories. Generation II electrical systems are now close to commercialization, with the first industrial systems scheduled for operation in 2008. References "ANSI/ISA 76.00.02-2002 Modular Component Interfaces for Surface-Mount Fluid Distribution Components – Part 1: Elastomeric Seals," Instrumentation, Systems, and Automation Society (ISA), Compositional Analyzers Committee, (2002), www.isa.org Dubois, Robert N.; van Vuuren, Peter; Gunnell, Jeffrey J. "NeSSI (New Sampling/Sensor Initiative) Generation II Specification", A Conceptual and Functional Specification Describing the Use of Miniature, Modular Electrical Components for Adaptation to the ANSI/ISA SP76 Substrate in Electrically Hazardous Areas. Center for Process Analytical Chemistry (CPAC), University of Washington, Seattle WA, (2003) External links The CPAC NeSSI pages provide more technical information about NeSSI as well as a complete history of its development through a compendium of papers and talks presented at various meetings, workshops, and conferences since its inception. AVENISENSE: provides NeSSI miniaturized fluid properties sensors and transmitters (liquid and gas) for properties such as viscosity, density, pressure, temperature and molar mass. Chemical engineering Systems analysis
NeSSI
[ "Chemistry", "Engineering" ]
3,631
[ "Chemical engineering", "nan" ]
8,888,314
https://en.wikipedia.org/wiki/National%20Biodefense%20Analysis%20and%20Countermeasures%20Center
The National Biodefense Analysis and Countermeasures Center (NBACC) is a government biodefense research laboratory created by the U.S. Department of Homeland Security (DHS) and located at the sprawling biodefense campus at Fort Detrick in Frederick, MD, USA. The NBACC (pronounced EN-back) is the principal U.S. biodefense research institution engaged in laboratory-based threat assessment and bioforensics. NBACC is an important part of the National Interagency Biodefense Campus (NIBC) also located at Fort Detrick for the US Army, National Institutes of Health and the US Department of Agriculture. Background and mission The NBACC was created as a federal response to the anthrax letter attacks in 2001 as the first national laboratory operating under the Department of Homeland Security. The Department of Homeland Security said the mission of the NBACC is "to provide the scientific basis for the characterization of biological threats and bioforensic analysis to support attribution of their planned or actual use." Part of the NBACC's mission is to conduct realistic tests of the pathogens and tactics that might be used in a bioterrorism attack. It seeks to quantitatively answer questions pertaining to what might happen in a biological attack. This work is carried out by about 180 researchers and support staff and has become more advanced since the NBACC became certified to work with biological select agents and toxins in September 2011. Its work is necessary in the preparation and response to biological threats, which can be handled as they emerge through the NBACC national security biocontainment laboratory. The NBACC is equipped to develop and investigate genetically engineered viruses and bacteria. The NBACC evaluates new and emerging technologies, along with delivery devices that U.S. adversaries might use to disseminate the pathogens. The NBACC coordinates closely with the many departments and agencies in the U.S. government, including the U.S. intelligence community which has assigned advisers to the Center. The NBACC is involved in a partnership with the National Interagency Confederation for Biological Research at Fort Detrick and initiated the "Work for Others" program, which expands the NBACC's capacity to share information with a variety of related federal agencies. Significance of operation In June 2017, Daniel M. Gerstein, senior policy researcher at the RAND Corporation and former acting Under Secretary and Deputy Under Secretary of the DHS's Science and Technology Directorate, said, "In the last 40 to 60 years, hundreds of new diseases have cropped up across the world ... The country and the world have not proven ready to handle a true public health crisis." Describing the NBACC's value to science, Gerstein said, "Back in 2001, we did not have this facility, and it literally took months in order to do the forensics analysis of samples ... they had to outsource ... to 12 different facilities to get everything analyzed back then. Today, the NBACC can do those forensic analyses in a couple days." The risk of a bioterrorism attack continues to grow. International terrorist organizations, such as ISIS, have shown growing desire to access and use biological weapons. A 19-page document providing instructions on the creation of biological weapons was discovered on a laptop obtained from ISIS in 2014. Kenyan authorities stopped an ISIS-affiliated anthrax plot in late 2016. 
In early 2017, South Korea speculated that North Korea was developing biological weapons that could be dispersed through drones. The NBACC and affiliated groups play an important role in protecting the public from these mounting threats. NBACC has already proven capable of responding to these types of threats, having been an important actor in the United States response to the Ebola outbreak of 2014. During the crisis, NBACC scientists carried out experiments to test the duration of the virus' activity when placed on different surfaces, and best practices for sanitation after completing tests. Threats to continued work Under President Trump's budget proposal for fiscal year 2018, the NBACC stands to lose the funding that allows it to continue scientific studies. The NBACC is faced with the threat of a complete shutdown of its facilities by September 2018. Daniel M. Gerstein said the result of the closure of the NBACC would be a "potentially devastating public health concern." The cost of replacing the services provided by the NBACC will be higher than continuing its operations, simply due to the nature of changing the role of the NBACC's facilities, according to Homeland Preparedness News. Congressman John Delaney (D-MD), who represents the Maryland congressional district in which the NBACC resides, said in a statement from May 2017: I am 100 percent opposed to the closing of the National Biodefense Analysis and Countermeasures Center in Frederick and will fight this deeply misguided move by the Trump Administration ... The National Biodefense Analysis and Countermeasures Center is a unique facility that is crucial to our homeland security, intelligence, and anti-terrorism endeavors. This is the lab that protects us against anthrax attacks, ricin attacks and other bioterrorism threats. In May 2017, several faculty at the Johns Hopkins Center for Health Security said the loss of the NBACC would make the impact of a bioterror attack "far more dire." They said, "In the first moments after the attack is identified, we'd want to know the identity of the pathogen used in the attack, whether it could spread from person to person and what drugs and vaccines would work to treat and protect people. But with ... complete elimination [of the NBACC] ... the delay before these and other facts are known would increase, costing many lives." In September 2015, the Battelle National Biodefense Institute (which operates the NBACC) was awarded a 10-year contract for operations and management. At the time, Battelle said the contract would be worth $480 million if fully executed. The proposed Trump budget for 2018, though, would see this award retracted, leading to the end of all scientific operations by March 2018. Facilities NBACC laboratories operate at biosafety levels 2, 3, and 4, providing the optimal level of safety and operational capacity. In particular, the NBACC's biosafety level 4 facilities allow it to research pathogens that have no existing vaccines or treatments. This makes it one of only seven locations in the United States where such work occurs. In June 2006, construction began on a new $128 million facility inside the Ft. Detrick installation. The facility contains two centers: The National Biological Threat Characterization Center (NBTCC), which seeks to identify and prioritize biological threats and our vulnerabilities to those threats through its laboratory threat assessments.
It includes biocontainment suites, including air-handling equipment, security controls, and other supporting features. It is classified as a SCIF, or Sensitive Compartmented Information Facility, meaning that while the majority of research that occurs at the lab is unclassified, some research results must remain classified for security purposes. The National Bioforensic Analysis Center (NBFAC), a forensic testing center equipped to identify and characterize the possible culprit pathogens after an attack has already occurred. Notable accomplishments The NBACC achieved a "Superior" Defense Security Service (DSS) rating in 2012 and 2013 and was recognized for staff volunteerism. It has played a role in over 100 federal law enforcement cases. The NBTCC has unique national biosafety level 3 and 4 aerobiology capabilities, which are necessary to collect crucial data that is used to develop biodefense plans and responses. In 2013, it provided necessary data, which addressed 10 specific biological agent knowledge gaps, to improve hazard, risk, and threat assessments. The data allowed for significant growth in the credibility of hazard and risk assessment modeling of bioterrorism scenarios for a variety of toxin threat agents, including both bacteria and viruses. The NBFAC played a role in more than 45 federal law enforcement investigations of biological crimes in 2013 alone. Also in 2013, it activated unique bioforensic laboratories at biosafety level 3 with accreditation for casework operations. Its processes maintain operational capability to study more than 60 high-priority human, animal, and plant pathogens and toxins. Its sequencing methods are imperative to enable new kinds of studies. It played a central role in developing capabilities to investigate genetically modified and de novo synthetic agents (new complex molecules formed from simple molecules). In January 2018, the Intelligence Advanced Research Projects Activity selected Battelle to create a software platform able to fight the development of synthetic biological threats. Battelle said it would develop the threat assessment platform with collaboration from Ginkgo Bioworks, One Codex and Twist Bioscience. The intended final result is a program that can "merge computational approaches into a software tool that can be applied in real-world scenarios." Controversy and response Questions have been raised by some arms-control and international law experts as to the necessity and advisability of the very high level of security surrounding the NBACC and as to whether it does (or will) place the United States in violation of the 1972 Biological and Toxin Weapons Convention (BWC). (The BWC outlawed developing, stockpiling, acquiring or retaining pathogens "of types and in quantities that have no justification" for peaceful purposes.) NBACC's opponents contended that the facility would operate in a "legal gray zone" and skirt the edges of the BWC, which outlaws production of even small amounts of biological weapons. They contend that a high degree of transparency is needed to reassure Americans (and the rest of the world) of the U.S. government's good intentions. In their view, the U.S. government may find it hard in the future to object to other countries testing genetically engineered pathogens and novel delivery systems when they invoke their own national biodefense requirements. The Bush administration contended that the NBACC is purely defensive and thus its operations are fully legal and in accord with the BWC. 
A guiding principle is that assessing the technical threat of biological pathogens is essential to inform and help develop biodefense policy. Administration officials say that making small amounts of biowarfare pathogens for study is permitted under a broad interpretation of the treaty. Faculty at the NBACC provided their own responses to help answer questions raised by critics. In May 2008, director Patrick Fitch said that research at the laboratory is not conducted to "create threats in order to study them." Maureen McCarthy, former Homeland Security director of research and development, said, "All the programs we do are defensive in nature. Our job is to ensure that the civilian population of the country is protected, and that we know what the threats are." Bernard Courtney, the NBACC's former scientific director, described the laboratory's oversight, noting that frequent independent reviews of particular experiments occur. These reviews are conducted by a group of up to four scientists on a case-by-case basis. Additionally, research at the labs is overseen by the Institutional Biosafety Committee. The CDC also conducts inspections to ensure that labs comply with select agent rules. The Final Environmental Impact Statement, or FEIS, for NBACC facilities states that all research is performed for defense purposes and is conducted in a legal manner, including under the BWC. See also Center for Biosecurity, University of Pittsburgh, PA; directed by Tara O'Toole, founded by D.A. Henderson. Notes and references Bibliography Warrick, Joby, "The Secretive Fight Against Bioterror", The Washington Post; Sunday, July 30, 2006; A01. Hernandez, Nelson, "Huge New Biodefense Lab Is Dedicated At Fort Detrick", Washington Post, October 23, 2008; p. B1. External links DHS/NBACC Website Battelle National Biodefense Institute/NBACC Website PowerPoint/PDF presentation detailing NBACC structure and mission United States Department of Homeland Security Research installations of the United States Army Federally Funded Research and Development Centers 2001 anthrax attacks 2002 establishments in Maryland Biological warfare Biosafety level 4 laboratories Buildings and structures in Frederick County, Maryland Disaster preparedness in the United States Government agencies established in 2002
National Biodefense Analysis and Countermeasures Center
[ "Biology" ]
2,527
[ "Biological warfare" ]
8,888,500
https://en.wikipedia.org/wiki/Electroadhesion
Electroadhesion is the electrostatic effect of astriction between two surfaces subjected to an electric field. Applications include the retention of paper on plotter surfaces, astrictive robotic prehension (electrostatic grippers), electroadhesive displays, etc. Clamping pressures in the range of 0.5 to 1.5 N/cm2 (about 0.7 to 2.2 psi) have been claimed. Currently, the maximum lateral pressure achievable through electroadhesion is 85.6 N/cm2. An electroadhesive pad consists of conductive electrodes placed upon a polymer substrate. When alternating positive and negative charges are induced on adjacent electrodes, the resulting electric field sets up opposite charges on the surface that the pad touches, and thus causes electrostatic adhesion between the electrodes and the induced charges in the touched surface material. Electroadhesion can be loosely divided into two basic forms: that which concerns the prehension of electrically conducting materials, where the simple constitutive law of capacitance holds (D = εE), and that used with electrically insulating subjects, where the fuller electrostatic relation (D = ε0E + P, with P the polarization) applies. In practice, surface irregularities such as waviness, wrinkles, and roughness introduce air gaps. Some models account for these effects by incorporating a layer that represents these air gaps. Recently, electroadhesion has been garnering increasing attention from both academia and industry. It is being proposed for application in various fields, including gripping devices, climbing robots, VR haptics, and variable stiffness mechanisms.
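As a rough illustration of the magnitudes quoted above, the normal clamping pressure of an electroadhesive pad is often estimated with an idealized parallel-plate model. The Python sketch below uses that textbook approximation (P = 1/2 ε0 εr E^2) with invented pad parameters, and it ignores air gaps, fringing fields and surface roughness, all of which reduce real-world adhesion:

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def clamping_pressure(voltage_v: float, dielectric_thickness_m: float, eps_r: float) -> float:
    """Idealized parallel-plate estimate of electroadhesive normal pressure, in Pa."""
    e_field = voltage_v / dielectric_thickness_m       # field across the dielectric, V/m
    return 0.5 * EPS0 * eps_r * e_field ** 2           # electrostatic pressure, Pa

# Hypothetical pad: 1 kV across a 50 micrometre polymer layer with eps_r = 3.
p_pa = clamping_pressure(1000.0, 50e-6, 3.0)
print(f"{p_pa:.0f} Pa = {p_pa / 1e4:.2f} N/cm^2")  # ~5300 Pa, i.e. ~0.53 N/cm^2
```

With these assumed parameters the estimate lands near the lower end of the 0.5 to 1.5 N/cm2 range cited above, which is consistent with the model being an upper-bound idealization.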
Guo J., Bamber T., et al., Optimization and experimental verification of coplanar interdigital electroadhesives, J. Phys. D: Appl. Phys. 49 415304, 2016. Guo J., Bamber T., et al., Investigation of relationship between interfacial electroadhesive force and surface texture, J. Phys. D: Appl. Phys. 49 035303, 2016. Bamber T., Guo J., et al., Visualization methods for understanding the dynamic electroadhesion phenomenon, J. Phys. D: Appl. Phys. 50 205304, 2017. Guo J., Bamber T., et al., Toward Adaptive and Intelligent Electroadhesives for Robotic Material Handling, IEEE Robotics and Automation Letters, vol. 2, no. 2, April 2017. Guo J., Bamber T., et al., Geometric optimisation of electroadhesive actuators based on 3D electrostatic simulation and its experimental verification, IFAC-PapersOnLine, 2016. Guo J., Bamber T., et al., Experimental study of relationship between interfacial electroadhesive force and applied voltage for different substrate materials, Applied Physics Letters, 2017. Guo J., Bamber T., et al., Symmetrical electroadhesives independent of different interfacial surface conditions, Applied Physics Letters, 2017. External links Electroadhesive robotic climbers Electroadhesives for MAV perching Electroadhesives, combined with artificial muscles, for skin-like robotic devices including soft conveyors and crawlers SUSTech AAR Laboratory Electrostatics Robotics engineering
Electroadhesion
[ "Technology", "Engineering" ]
1,173
[ "Computer engineering", "Robotics engineering" ]
8,889,260
https://en.wikipedia.org/wiki/Payment%20card%20industry
The payment card industry (PCI) denotes the debit, credit, prepaid, e-purse, ATM, and POS cards and associated businesses. Overview The payment card industry consists of all the organizations which store, process and transmit cardholder data, most notably for debit cards and credit cards. Security standards are developed by the Payment Card Industry Security Standards Council, which maintains the Payment Card Industry Data Security Standard (PCI DSS) used throughout the industry. Individual card brands establish compliance requirements that are used by service providers and have their own compliance programs. Major card brands include American Express, Discover Financial Services, JCB, Mastercard, RuPay, UnionPay and Visa. Most companies use member banks that connect and accept transactions from the card brands. Not all card brands use member banks; American Express, for example, acts as its own bank. Historically, the United States used a magnetic stripe on the card to process transactions, with security relying on the holder's signature and visual inspection of the card for features such as a hologram. This system began to be superseded by EMV in 2015. EMV is a global standard for inter-operation of integrated circuit cards (IC cards or "chip cards") and IC card capable point of sale (POS) terminals and automated teller machines (ATMs), for authenticating credit and debit card transactions. It has enhanced security features, but is still susceptible to fraud. Payment Card Industry Security Standards Council On 7 September 2006, American Express, Discover Financial Services, Japan Credit Bureau, Mastercard and Visa International formed the Payment Card Industry Security Standards Council (PCI SSC) with the goal of managing the ongoing evolution of the Payment Card Industry Data Security Standard. The council itself claims to be independent of the various card vendors that make up the council. As of 1 August 2014, the PCI SSC website listed 688 "Participating Organizations". Internationally, 61 different financial institutions were noted, including Bank of America, Capital One, JPMorgan Chase, Royal Bank of Scotland, TD Bank and Wells Fargo. A total of 275 merchants were listed, including Amazon, Burger King, Citgo, Dell, Equifax, ExxonMobil, Global Cash Access, Motorola, Microsoft, Southwest Airlines and Walmart. Industry growth MasterCard's Nicole Krieg has noted that the Russian credit card market started in early 2000, when issuers first began launching products. However, credit products became especially popular in Russia in 2005, after new legislation took effect. Immense growth was noted in just eight years, by comparing second-quarter Visa card purchases, which went from $306 million in 2002 to $61.5 billion in 2010. Merchants who accepted Visa cards also increased from 21,000 to 331,000 during the same period. Visa also noted that it had issued 70 million cards, and the Central Bank of the Russian Federation reported that 8.6 million credit cards were on issue. Regional and national payment schemes Interac Association The Interac Association is Canada's national organization linking financial institutions and enterprises that have proprietary networks, to enable communication with each other for the purpose of exchanging electronic financial transactions. The Association was founded in 1984 by the big five banks. Today, there are over 80 members.
The Interac Association is the organization responsible for the development of Canada's national network of two shared electronic financial services: Shared Cash Dispensing (SCD) for cash withdrawals from any ABM not belonging to a cardholder's financial institution, and Interac Direct Payment (IDP) for debit card payments at the point of sale. See also Payment Card Industry Data Security Standard Payment gateway Payment system Payment processor Payment service provider RuPay References External links Payment card industry PCI Security Standards Council, the organization responsible for the development, enhancement, storage, dissemination and implementation of security standards for account data protection. The European Payment Council (EPC) is the decision-making and coordination body of the European banking industry in relation to payments. PCI Security Standards Council Participating Organizations EMV EMVCo, the organization responsible for developing and maintaining the EMV standard Chip and PIN, site run by the UK Payments Administration (UKPA), the UK's central co-ordinating authority for the implementation of EMV Payment cards Information privacy Financial services
Payment card industry
[ "Engineering" ]
881
[ "Cybersecurity engineering", "Information privacy" ]
8,889,938
https://en.wikipedia.org/wiki/Pillai%20prime
In number theory, a Pillai prime is a prime number p for which there is an integer n > 0 such that the factorial of n is one less than a multiple of the prime, but the prime is not one more than a multiple of n. To put it algebraically, \(n! \equiv -1 \pmod{p}\) but \(p \not\equiv 1 \pmod{n}\). The first few Pillai primes are 23, 29, 59, 61, 67, 71, 79, 83, 109, 137, 139, 149, 193, ... Pillai primes are named after the mathematician Subbayya Sivasankaranarayana Pillai, who studied these numbers. Their infinitude has been proven several times, by Subbarao, Erdős, and Hardy & Subbarao. References Pillai prime, PlanetMath: https://planetmath.org/pillaiprime Classes of prime numbers Eponymous numbers in mathematics Factorial and binomial topics
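A brute-force check of the definition can be written directly from the two congruences; the following minimal sketch (an illustration, not part of the original article) recovers the list of small Pillai primes given above.

```python
def is_prime(p):
    """Trial-division primality test, adequate for small p."""
    if p < 2:
        return False
    d = 2
    while d * d <= p:
        if p % d == 0:
            return False
        d += 1
    return True

def is_pillai_prime(p):
    """p is a Pillai prime if some n > 0 satisfies n! = -1 (mod p)
    while p is not 1 (mod n), i.e. n does not divide p - 1."""
    if not is_prime(p):
        return False
    fact = 1
    for n in range(1, p):          # n! = 0 (mod p) once n >= p, so stop there
        fact = fact * n % p
        if fact == p - 1 and (p - 1) % n != 0:
            return True
    return False

print([p for p in range(2, 150) if is_pillai_prime(p)])
# -> [23, 29, 59, 61, 67, 71, 79, 83, 109, 137, 139, 149]
```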
Pillai prime
[ "Mathematics" ]
187
[ "Number theory stubs", "Factorial and binomial topics", "Number theory", "Combinatorics" ]
8,889,979
https://en.wikipedia.org/wiki/Committer
A committer is an individual who is permitted to modify the source code of a software project that will be used in the project's official releases. To contribute source code to most large software projects, one must make modifications and then "commit" those changes to a central version control system, such as Git (or CVS). In open-source software development, the committer role may be used to distinguish commit access, a specific type of responsibility, from other forms of contribution, such as triaging issues or organizing events. Typically, an author submits a software patch containing changes and a committer integrates the patch into the main code base of the project. Commit bit To have a "commit bit" on one's user account means that the user is permitted to contribute source code changes. This dates to the use of a literal binary digit to represent yes-or-no privileges in access control systems of legacy version control and software systems, such as BSD. The commit bit represents the permission to contribute to the shared code of a software project. It can be resigned, or it may be removed due to inactivity in the project, as dormant committer accounts can represent security risks. Common responsibilities Project committers are usually the lead developers of a project and are the ones responsible for the majority of changes. They are seen as trusted, responsible and reliable members of the project's community. Relatedly, committers are usually responsible for the review of patches submitted by members of the community for inclusion into the software. After a successful review, usually consisting of checking conformance to coding standards and ensuring the patch does not introduce any new bugs, the committer will commit that specific patch on behalf of the patch submitter. Becoming a committer The process of becoming a committer can vary across projects, but in general, there are three common ways to do it. Be one of the original developers Be appointed by one of the original developers Be successfully voted in by the community of committers Becoming a committer in an existing project often involves becoming active on the mailing lists as well as supplying patches. After enough involvement, the existing committers can then vote the contributor in as a new committer. This normally happens through an e-mail vote. The XML-SOAP project hosted at Apache.org is an example of this process. References Free software culture and documents Version control
Committer
[ "Engineering" ]
482
[ "Software engineering", "Version control" ]
8,890,014
https://en.wikipedia.org/wiki/Perfect%20totient%20number
In number theory, a perfect totient number is an integer that is equal to the sum of its iterated totients. That is, one applies the totient function to a number n, applies it again to the resulting totient, and so on, until the number 1 is reached, and adds together the resulting sequence of numbers; if the sum equals n, then n is a perfect totient number. Examples For example, there are six positive integers less than 9 and relatively prime to it, so the totient of 9 is 6; there are two numbers less than 6 and relatively prime to it, so the totient of 6 is 2; and there is one number less than 2 and relatively prime to it, so the totient of 2 is 1; and 6 + 2 + 1 = 9, so 9 is a perfect totient number. The first few perfect totient numbers are 3, 9, 15, 27, 39, 81, 111, 183, 243, 255, 327, 363, 471, 729, 2187, 2199, 3063, 4359, 4375, ... . Notation In symbols, one writes \(\varphi^i(n)\) for the iterated totient function, defined by \(\varphi^1(n) = \varphi(n)\) and \(\varphi^{i+1}(n) = \varphi(\varphi^i(n))\). Then if c is the integer such that \(\varphi^c(n) = 2\), one has that n is a perfect totient number if \(n = \sum_{i=1}^{c+1}\varphi^i(n)\). Multiples and powers of three It can be observed that many perfect totient numbers are multiples of 3; in fact, 4375 is the smallest perfect totient number that is not divisible by 3. All powers of 3 are perfect totient numbers, as may be seen by induction using the fact that \(\varphi(3^k) = 2\times 3^{k-1}\). Venkataraman (1975) found another family of perfect totient numbers: if \(p = 4\times 3^k + 1\) is prime, then 3p is a perfect totient number. The values of k leading to perfect totient numbers in this way are 0, 1, 2, 3, 6, 14, 15, 39, 201, 249, 1005, 1254, 1635, ... . More generally if p is a prime number greater than 3, and 3p is a perfect totient number, then p ≡ 1 (mod 4) (Mohan and Suryanarayana 1982). Not all p of this form lead to perfect totient numbers; for instance, 51 is not a perfect totient number. Iannucci et al. (2003) showed that if 9p is a perfect totient number then p is a prime of one of three specific forms listed in their paper. It is not known whether there are any perfect totient numbers of the form \(3^k p\) where p is prime and k > 3. References Integer sequences
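The definition translates directly into code; this short sketch (illustrative, not from the article) computes iterated totients by trial-division factorization and reproduces the list above.

```python
def totient(n):
    """Euler's totient function via trial-division factorization."""
    result, m = n, n
    p = 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p   # multiply result by (1 - 1/p)
        p += 1
    if m > 1:                        # leftover prime factor
        result -= result // m
    return result

def is_perfect_totient(n):
    """True if n equals the sum of its iterated totients down to 1."""
    total, t = 0, n
    while t > 1:
        t = totient(t)
        total += t
    return total == n

print([n for n in range(2, 400) if is_perfect_totient(n)])
# -> [3, 9, 15, 27, 39, 81, 111, 183, 243, 255, 327, 363]
```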
Perfect totient number
[ "Mathematics" ]
551
[ "Sequences and series", "Integer sequences", "Mathematical structures", "Recreational mathematics", "Mathematical objects", "Combinatorics", "Numbers", "Number theory" ]
8,890,699
https://en.wikipedia.org/wiki/Yankee%20dryer
A Yankee dryer is a pressure vessel used in the production of machine glazed (MG) and tissue paper. On the Yankee dryer, the paper goes from approximately 42–45% dryness to just over 89% dryness. In industry, MG cylinders or Yankee dryers are primarily used to remove excess moisture from pulp that is about to be converted into paper. The Yankee cylinder can be equipped with a doctor blade and sprayed with adhesives to make the paper stick. Creping is done by the Yankee's doctor blade scraping the dry paper off the cylinder surface, thereby creping the paper. The crinkle (crepe) is controlled by the strength of the adhesive, the geometry of the doctor blade, the speed difference between the Yankee and the final section of the paper machine, and the paper pulp's characteristics. Configuration Whereas other paper production uses a series of drying cylinders, in tissue production a single cylinder (the Yankee cylinder) dries the paper. This is due to the necessity of creping and is made possible by the low grammage (gsm, grams per square metre) of the paper sheet for tissue products, which is in the range of 14–45 gsm. For the production of the higher grammages in this range, some machines are nevertheless provided with a few (4–10) drying cylinders after the Yankee. In this case one speaks of "wet creping", as the creping done on the Yankee is not performed on fully dried paper, and the drying is completed after the Yankee cylinder. Yankee cylinders are traditionally made of cast iron and have diameters up to 6 m, much larger than conventional drying cylinders. The width is slightly greater than that of the paper: typical tissue paper machine widths are 1.74 m, 2.32 m and 2.70 m and their multiples (nowadays usually double these values). Yankees are therefore very heavy (~100 t) and difficult to cast. For the past few decades, Yankees made of steel have been gaining market share, and new machines are now practically always equipped with a steel Yankee, which is much lighter and easier to produce and transport. Due to the abrasive effect of the creping blade, the surface of the Yankee becomes irregular and rough. Cast iron Yankees therefore have to be ground or polished periodically, at intervals ranging from every six months to several years. This decreases the thickness of the shell and, since the Yankee is a pressure vessel, also the maximum pressure at which it can be operated for paper production. A way to avoid this pressure derating is metallisation, i.e. spraying onto the surface a special chromium–nickel alloy, similar to a highly alloyed stainless steel, which is then ground to the specified crown of the shell surface, leaving a coated thickness of ca. 0.7–1 mm. The metallised surface is much more resistant to abrasion, and only a very mild polishing may be necessary roughly every 2 to 4 years. The thermal conductivity of the metallisation is somewhat lower than that of the original material, so cast iron cylinders are metallised only after several grindings of the shell, in order to gain, through the decrease in thickness, some conductivity which is then lost again with the metallisation. Steel Yankees are instead always metallised from the beginning. The large diameter makes removal of the condensate that forms inside the cylinder difficult; therefore all Yankees use a blow-through steam system, in which the blow-through steam is recompressed by a thermocompressor (ejector).
The inner surface of the Yankee has circular grooves which accommodate small pipes (so-called "straw pipes" or "straws") through which the mixture of steam and condensate is drawn by the pressure difference between the Yankee and the tank collecting the condensate (the separator). The straws are combined in racks that feed typically six collectors, which carry the condensate–steam mixture toward the centre of the cylinder, from where it is taken out through conventional piping to the separator. There the condensate and the blow-through steam separate at a pressure lower than that inside the Yankee. So as not to waste the blow-through steam, a thermocompressor uses motive steam (at roughly twice the Yankee pressure) to raise the pressure of the blow-through steam back to that of the Yankee. Yankee safety is an important issue, and TAPPI has a committee (the Yankee Dryer Safety Committee) dedicated to it. The most acute problems occur with cast iron Yankees, since the material is much more brittle than steel, making the cylinder very sensitive to temperature differences. For example, a Yankee receiving steam while not rotating may be damaged by the temperature difference between the bulk of the shell and the bottom of the cylinder where the condensate collects. Particular care must be taken in case of fire, as direct jets of cold water on the surface of the hot Yankee may damage it. Several Yankee explosions due to these and other causes have unfortunately occurred in the history of tissue production. The use of a chemical coating on the surface of the Yankee is nowadays the rule. The coating is a mixture of usually two or more constituents: 1) a base polymer (adhesive) based on polyamide or epichlorohydrin resins, which covers the surface of the Yankee with a polymer layer in which the creping blade works instead of scratching the iron surface; the base coating usually also has adhesive properties and keeps the paper attached to the Yankee surface until creping; 2) a release agent based on mineral or vegetable oils or waxes, which lubricates and improves the release of the paper from the surface; 3) various modifiers, which may make the coating softer or harder or improve its protective effect on the Yankee. The coating is sprayed together with water through a spray bar with nozzles into the space between the creping blade and the press. The water applied with the coating must evaporate, and the polymer must cure, in the narrow space between this spray bar and the press. For this reason, pre-cured polymers are increasingly used for the base coating. References Papermaking Dryers
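As a small worked example of what the dryness figures quoted above imply (arithmetic only; the 44% and 89% solids fractions are taken from the dryness range given in the article), the evaporation load per kilogram of fibre can be estimated as follows.

```python
# Evaporation load on a Yankee dryer: water removed per kilogram of dry
# fibre as sheet dryness (solids fraction) rises from 44% to 89%.
def water_per_kg_fibre(dryness):
    """kg of water carried per kg of dry fibre at the given solids fraction."""
    return (1.0 - dryness) / dryness

d_in, d_out = 0.44, 0.89
evaporated = water_per_kg_fibre(d_in) - water_per_kg_fibre(d_out)
print(f"{evaporated:.2f} kg water evaporated per kg fibre")  # about 1.15
```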
Yankee dryer
[ "Chemistry", "Engineering" ]
1,327
[ "Dryers", "Chemical equipment" ]
8,890,983
https://en.wikipedia.org/wiki/Nonlinear%20acoustics
Nonlinear acoustics (NLA) is a branch of physics and acoustics dealing with sound waves of sufficiently large amplitudes. Large amplitudes require using full systems of governing equations of fluid dynamics (for sound waves in liquids and gases) and elasticity (for sound waves in solids). These equations are generally nonlinear, and their traditional linearization is no longer possible. The solutions of these equations show that, due to the effects of nonlinearity, sound waves are distorted as they travel. Introduction A sound wave propagates through a material as a localized pressure change. Increasing the pressure of a gas or fluid increases its local temperature. The local speed of sound in a compressible material increases with temperature; as a result, the wave travels faster during the high pressure phase of the oscillation than during the lower pressure phase. This affects the wave's frequency structure; for example, in an initially plane sinusoidal wave of a single frequency, the peaks of the wave travel faster than the troughs, and the pulse becomes cumulatively more like a sawtooth wave. In other words, the wave distorts itself. In doing so, other frequency components are introduced, which can be described by the Fourier series. This phenomenon is characteristic of a nonlinear system, since a linear acoustic system responds only to the driving frequency. This always occurs, but the effects of geometric spreading and of absorption usually overcome the self-distortion, so linear behavior usually prevails and nonlinear acoustic propagation occurs only for very large amplitudes and only near the source. Additionally, waves of different amplitudes will generate different pressure gradients, contributing to the nonlinear effect. Physical analysis The pressure changes within a medium cause the wave energy to transfer to higher harmonics. Since attenuation generally increases with frequency, a countereffect exists that changes the nature of the nonlinear effect over distance. To describe their level of nonlinearity, materials can be given a nonlinearity parameter, \(B/A\). The values of \(A\) and \(B\) are the coefficients of the first and second order terms of the Taylor series expansion of the equation relating the material's pressure to its density. The Taylor series has more terms, and hence more coefficients (C, D, ...), but they are seldom used. Values of the nonlinearity parameter have been tabulated for many fluids and biological media. In a liquid, usually a modified coefficient is used, known as \(\beta = 1 + \frac{B}{2A}\). Mathematical model Governing equations to derive the Westervelt equation Continuity: \(\frac{\partial\rho}{\partial t} + \nabla\cdot(\rho\mathbf{u}) = 0\). Conservation of momentum: \(\rho\left(\frac{\partial\mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\right) + \nabla p = \left(\frac{4\mu}{3} + \mu_B\right)\nabla(\nabla\cdot\mathbf{u})\). With the Taylor perturbation expansion on density, \(\rho = \sum_{i=0}^{\infty}\varepsilon^i\rho_i\), where ε is a small parameter, i.e. the perturbation parameter, the equation of state becomes \(p = p_0 + c_0^2\varepsilon\rho_1 + \frac{B}{2A}\frac{c_0^2}{\rho_0}\varepsilon^2\rho_1^2 + \cdots\). If the second term in the Taylor expansion of pressure is dropped, the viscous wave equation can be derived. If it is kept, the nonlinear term in pressure appears in the Westervelt equation. Westervelt equation The general wave equation that accounts for nonlinearity up to the second order is given by the Westervelt equation \(\nabla^2 p - \frac{1}{c_0^2}\frac{\partial^2 p}{\partial t^2} + \frac{\delta}{c_0^4}\frac{\partial^3 p}{\partial t^3} = -\frac{\beta}{\rho_0 c_0^4}\frac{\partial^2 p^2}{\partial t^2}\), where \(p\) is the sound pressure, \(c_0\) is the small signal sound speed, \(\delta\) is the sound diffusivity, \(\beta\) is the nonlinearity coefficient and \(\rho_0\) is the ambient density. The sound diffusivity is given by \(\delta = \frac{1}{\rho_0}\left(\frac{4\mu}{3} + \mu_B\right) + \frac{\kappa}{\rho_0}\left(\frac{1}{c_v} - \frac{1}{c_p}\right)\), where \(\mu\) is the shear viscosity, \(\mu_B\) the bulk viscosity, \(\kappa\) the thermal conductivity, and \(c_v\) and \(c_p\) the specific heat at constant volume and pressure respectively.
Burgers' equation The Westervelt equation can be simplified to take a one-dimensional form with an assumption of strictly forward propagating waves and the use of a coordinate transformation to a retarded time frame: \(\frac{\partial p}{\partial z} - \frac{\beta}{\rho_0 c_0^3}p\frac{\partial p}{\partial\tau} = \frac{\delta}{2c_0^3}\frac{\partial^2 p}{\partial\tau^2}\), where \(\tau = t - z/c_0\) is retarded time. This corresponds to a viscous Burgers equation, \(\frac{\partial y}{\partial t} + y\frac{\partial y}{\partial x} = d\,\frac{\partial^2 y}{\partial x^2}\), in the pressure field (\(y = p\)), with a mathematical "time variable" \(t = \frac{\beta}{\rho_0 c_0^3}z\), a "space variable" \(x = -\tau\), and a diffusion coefficient \(d = \frac{\delta\rho_0}{2\beta}\). The Burgers equation is the simplest equation that describes the combined effects of nonlinearity and losses on the propagation of progressive waves. KZK equation An augmentation to the Burgers equation that accounts for the combined effects of nonlinearity, diffraction, and absorption in directional sound beams is described by the Khokhlov–Zabolotskaya–Kuznetsov (KZK) equation, named after Rem Khokhlov, Evgenia Zabolotskaya, and V. P. Kuznetsov. Solutions to this equation are generally used to model nonlinear acoustics. If the \(z\) axis is in the direction of the sound beam path and the \((x, y)\) plane is perpendicular to that, the KZK equation can be written \(\frac{\partial^2 p}{\partial z\,\partial\tau} = \frac{c_0}{2}\nabla_\perp^2 p + \frac{\delta}{2c_0^3}\frac{\partial^3 p}{\partial\tau^3} + \frac{\beta}{2\rho_0 c_0^3}\frac{\partial^2 p^2}{\partial\tau^2}\), where \(\nabla_\perp^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}\) is the Laplacian transverse to the beam axis. The equation can be solved for a particular system using a finite difference scheme. Such solutions show how the sound beam distorts as it passes through a nonlinear medium. Common occurrences Sonic boom The nonlinear behavior of the atmosphere leads to change of the wave shape in a sonic boom. Generally, this makes the boom more 'sharp' or sudden, as the high-amplitude peak moves to the wavefront. Acoustic levitation Acoustic levitation would not be possible without nonlinear acoustic phenomena. The nonlinear effects are particularly evident due to the high-powered acoustic waves involved. Ultrasonic waves Because of their relatively high amplitude to wavelength ratio, ultrasonic waves commonly display nonlinear propagation behavior. For example, nonlinear acoustics is a field of interest for medical ultrasonography because it can be exploited to produce better image quality. Musical acoustics The physical behavior of musical instruments is largely nonlinear. Attempts are made to model their sound generation through physical modeling synthesis, emulating their sound from measurements of their nonlinearity. Parametric arrays A parametric array is a nonlinear transduction mechanism that generates narrow, nearly side lobe-free beams of low frequency sound, through the mixing and interaction of high-frequency sound waves. Applications are e.g. in underwater acoustics and audio. See also Cavitation References Acoustics Nonlinear systems
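The progressive steepening of a single-frequency wave into a sawtooth, described in the introduction, can be reproduced numerically from the Burgers equation above. The following is a minimal sketch (not from the source); the grid size, diffusion coefficient, and integration time are illustrative choices.

```python
import numpy as np

# Explicit finite-difference march of the viscous Burgers equation
#   du/dt + u du/dx = d * d2u/dx2
# on a periodic domain, showing a sine wave steepening toward a sawtooth.
N, length, d = 512, 2 * np.pi, 1e-2        # grid points, domain, diffusivity
dx = length / N
x = np.arange(N) * dx
u = np.sin(x)                              # initial single-frequency wave
dt = 0.2 * min(dx / np.abs(u).max(), dx ** 2 / (2 * d))

t = 0.0
while t < 1.5:                             # march to just past shock formation
    up = np.roll(u, -1)                    # u[i+1] with periodic wrap-around
    um = np.roll(u, 1)                     # u[i-1]
    conv = u * (up - um) / (2 * dx)        # centered convection term u du/dx
    diff = d * (up - 2 * u + um) / dx ** 2 # centered diffusion term
    u = u + dt * (diff - conv)
    t += dt

# The spectrum now contains harmonics absent from the initial sine wave.
amps = np.abs(np.fft.rfft(u))[:6]
print(np.round(amps / amps[1], 3))         # relative amplitudes, harmonics 0-5
```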
Nonlinear acoustics
[ "Physics", "Mathematics" ]
1,182
[ "Nonlinear systems", "Classical mechanics", "Acoustics", "Dynamical systems" ]
8,891,109
https://en.wikipedia.org/wiki/Neomura
Neomura (from Ancient Greek neo- "new", and Latin -murus "wall") is a proposed clade of life composed of the two domains Archaea and Eukaryota, named by Thomas Cavalier-Smith in 2002. Its name reflects the hypothesis that both archaea and eukaryotes evolved out of the domain Bacteria, one of the major changes being the replacement of bacterial peptidoglycan cell walls with glycoprotein walls. The neomuran hypothesis is not accepted by most scientific workers; many molecular phylogenies suggest that eukaryotes are most closely related to one group of archaeans and evolved from them, rather than forming a clade with all archaeans, and that archaea and bacteria are sister groups both descended from the last universal common ancestor (LUCA). Other scenarios have been proposed based on competing phylogenies, and the relationship between the three domains of life (Archaea, Bacteria, and Eukaryota) was described in 2021 as "one of Biology's greatest mysteries". Morphology Considered as comprising the Archaea and the Eukaryota, the Neomura are a very diverse group, containing all of the multicellular species, as well as all of the most extremophilic species, but they all share certain molecular characteristics. All neomurans have histones to help with chromosome packaging, and most have introns. All use the molecule methionine as the initiator amino acid for protein synthesis (bacteria use formylmethionine). Finally, all neomurans use several kinds of RNA polymerase, whereas bacteria use only one. Phylogeny There are several hypotheses for the phylogenetic relationships between archaeans and eukaryotes. Three-domain view When Carl Woese first published his three-domain system in 1990, it was believed that the domains Bacteria, Archaea, and Eukaryota were equally old and equally related on the tree of life. However certain evidence began to suggest that Eukaryota and Archaea were more closely related to each other than either was to Bacteria. This evidence included the common use of cholesterols and proteasomes, which are complex molecules not found in most bacteria, leading to the inference that the root of life lay between Bacteria on the one hand, and Archaea and Eukaryota combined on the other, i.e. that there were two primary branches of life subsequent to the LUCA – Bacteria and Neomura (not then called by this name). The "three primary domains" (3D) scenario was one of the two hypotheses considered plausible in a 2010 review of the origin of eukaryotes. Derived clade view In a 2002 paper, and subsequent papers, Thomas Cavalier-Smith and coworkers have promulgated a hypothesis that Neomura is a clade deeply nested within Eubacteria, with Actinomycetota as its sister group. He wrote, "Eukaryotes and archaebacteria form the clade neomura and are sisters, as shown decisively by genes fragmented only in archaebacteria and by many sequence trees. This sisterhood refutes all theories that eukaryotes originated by merging an archaebacterium and an α-proteobacterium, which also fail to account for numerous features shared specifically by eukaryotes and actinobacteria." These include the presence of cholesterols and proteasomes in Actinomycetota as well as in Neomura. Features of this complexity are unlikely to evolve more than once in separate branches, so either there was a horizontal transfer of those two pathways, or Neomura evolved from this particular branch of the bacterial tree.
Two domains view As early as 2010, the major competitor to the three domains scenario for the origin of eukaryotes was the "two domains" (2D) scenario, in which eukaryotes emerged from within the archaea. The discovery of a major group within the Archaea, Lokiarchaeota, to which eukaryotes are more genetically similar than to other archaeans, is not consistent with the Neomura hypothesis. Instead, it supports the hypothesis that eukaryotes emerged from within one group of archaeans. A 2016 study using 16 universally conserved ribosomal proteins supports the two domain view. Its "new view of the tree of life" shows eukaryotes as a small group nested within Archaea, in particular within the TACK superphylum. However, the origin of eukaryotes remains unresolved, and the two domain and three domain scenarios remain viable hypotheses. One domain view An alternative to the placement of Eukaryota within Archaea is that both domains evolved from within Bacteria, which is then the ancestral group. This view is similar to the derived clade view above, but the bacterial group involved is different. The evidence for this phylogeny includes the detection of membrane coat proteins and of processes related to phagocytosis in the bacterial Planctomycetes. Although Archaea and Eukaryota are sisters in this view, their joint sister is a bacterial group called PVC for short (the Planctomycetes-Verrucomicrobia-Chlamydiae superphylum). On this view, the traditional Bacteria taxon is paraphyletic. Eukaryotes were not formed by a symbiotic merger between an archaeon and a bacterium, but by the merger of two bacteria, albeit that one was highly modified. In a 2020 paper, Cavalier-Smith accepted the planctobacterial origins of Archaea and Eukaryota, noting that the evidence was not sufficient to safely distinguish between the two possibilities that eukaryotes are sisters of all archaea or that eukaryotes evolved from filarchaeotes, i.e. within Archaea (the two-domain view above). See also Protocell References Further reading Phylogenetics Taxa named by Thomas Cavalier-Smith
Neomura
[ "Biology" ]
1,268
[ "Bioinformatics", "Phylogenetics", "Taxonomy (biology)" ]
8,891,344
https://en.wikipedia.org/wiki/Gibbs%E2%80%93Duhem%20equation
In thermodynamics, the Gibbs–Duhem equation describes the relationship between changes in chemical potential for components in a thermodynamic system: \(\sum_{i=1}^{I} N_i \, d\mu_i = -S\,dT + V\,dp\), where \(N_i\) is the number of moles of component \(i\), \(d\mu_i\) the infinitesimal increase in chemical potential for this component, \(S\) the entropy, \(T\) the absolute temperature, \(V\) the volume and \(p\) the pressure. \(I\) is the number of different components in the system. This equation shows that in thermodynamics intensive properties are not independent but related, making it a mathematical statement of the state postulate. When pressure and temperature are variable, only \(I - 1\) of the \(I\) components have independent values for chemical potential, and Gibbs' phase rule follows. The Gibbs−Duhem equation cannot be used for small thermodynamic systems due to the influence of surface effects and other microscopic phenomena. The equation is named after Josiah Willard Gibbs and Pierre Duhem. Derivation Deriving the Gibbs–Duhem equation from the fundamental thermodynamic equation is straightforward. The total differential of the extensive Gibbs free energy \(G\) in terms of its natural variables is \(dG = \left(\frac{\partial G}{\partial p}\right)_{T,N} dp + \left(\frac{\partial G}{\partial T}\right)_{p,N} dT + \sum_{i=1}^{I}\left(\frac{\partial G}{\partial N_i}\right)_{p,T,N_{j\ne i}} dN_i\). Since the Gibbs free energy is the Legendre transformation of the internal energy, the derivatives can be replaced by their definitions, transforming the above equation into \(dG = V\,dp - S\,dT + \sum_{i=1}^{I}\mu_i\,dN_i\). The chemical potential is simply another name for the partial molar Gibbs free energy (or the partial Gibbs free energy, depending on whether N is in units of moles or particles). Thus the Gibbs free energy of a system can be calculated by collecting moles together carefully at a specified T, P and at a constant molar ratio composition (so that the chemical potential does not change as the moles are added together), i.e. \(G = \sum_{i=1}^{I}\mu_i N_i\). The total differential of this expression is \(dG = \sum_{i=1}^{I}\mu_i\,dN_i + \sum_{i=1}^{I}N_i\,d\mu_i\). Combining the two expressions for the total differential of the Gibbs free energy gives \(\sum_{i=1}^{I}\mu_i\,dN_i + \sum_{i=1}^{I}N_i\,d\mu_i = V\,dp - S\,dT + \sum_{i=1}^{I}\mu_i\,dN_i\), which simplifies to the Gibbs–Duhem relation \(\sum_{i=1}^{I}N_i\,d\mu_i = V\,dp - S\,dT\). Alternative derivation Another way of deriving the Gibbs–Duhem equation can be found by taking the extensivity of energy into account. Extensivity implies that \(U(\lambda\mathbf{X}) = \lambda U(\mathbf{X})\), where \(\mathbf{X}\) denotes all extensive variables of the internal energy \(U\). The internal energy is thus a first-order homogeneous function. Applying Euler's homogeneous function theorem, one finds the following relation when taking only volume, number of particles, and entropy as extensive variables: \(U = TS - pV + \sum_{i=1}^{I}\mu_i N_i\). Taking the total differential, one finds \(dU = T\,dS + S\,dT - p\,dV - V\,dp + \sum_{i=1}^{I}\mu_i\,dN_i + \sum_{i=1}^{I}N_i\,d\mu_i\). Finally, one can equate this expression to the definition of \(dU\), namely \(dU = T\,dS - p\,dV + \sum_{i=1}^{I}\mu_i\,dN_i\), to find the Gibbs–Duhem equation \(0 = S\,dT - V\,dp + \sum_{i=1}^{I}N_i\,d\mu_i\). Applications By normalizing the above equation by the extent of a system, such as the total number of moles, the Gibbs–Duhem equation provides a relationship between the intensive variables of the system. For a simple system with \(I\) different components, there will be \(I + 1\) independent parameters or "degrees of freedom". For example, if we know a gas cylinder filled with pure nitrogen is at room temperature (298 K) and 25 MPa, we can determine the fluid density (258 kg/m3), enthalpy (272 kJ/kg), entropy (5.07 kJ/kg⋅K) or any other intensive thermodynamic variable. If instead the cylinder contains a nitrogen/oxygen mixture, we require an additional piece of information, usually the ratio of oxygen-to-nitrogen. If multiple phases of matter are present, the chemical potentials across a phase boundary are equal. Combining expressions for the Gibbs–Duhem equation in each phase and assuming the system is in equilibrium (i.e. that the temperature and pressure are constant throughout the system), we recover the Gibbs phase rule. One particularly useful expression arises when considering binary solutions.
At constant P (isobaric) and T (isothermal) it becomes \(\sum_{i=1}^{I}N_i\,d\mu_i = 0\), or, normalizing by the total number of moles in the system, \(\sum_{i=1}^{I}x_i\,d\mu_i = 0\). Substituting in the definition of the activity coefficient \(\gamma_i\), via \(\mu_i = \mu_i^\circ + RT\ln(\gamma_i x_i)\), and using the identity \(x_1 + x_2 = 1\) gives \(x_1\left(\frac{\partial\ln\gamma_1}{\partial x_1}\right)_{P,T} = x_2\left(\frac{\partial\ln\gamma_2}{\partial x_2}\right)_{P,T}\). This equation is instrumental in the calculation of thermodynamically consistent and thus more accurate expressions for the vapor pressure of a fluid mixture from limited experimental data. Ternary and multicomponent solutions and mixtures Lawrence Stamper Darken has shown that the Gibbs–Duhem equation can be applied to the determination of chemical potentials of components from a multicomponent system from experimental data regarding the chemical potential \(\bar G_2\) of only one component (here component 2) at all compositions. He deduced the following relation: \(\bar G_2 = G + (1 - x_2)\left(\frac{\partial G}{\partial x_2}\right)_{\frac{x_1}{x_3}}\), where \(x_i\) are the amount (mole) fractions of the components. Making some rearrangements and dividing by \((1 - x_2)^2\) gives \(\frac{\bar G_2}{(1-x_2)^2} = \frac{G}{(1-x_2)^2} + \frac{1}{1-x_2}\left(\frac{\partial G}{\partial x_2}\right)_{\frac{x_1}{x_3}}\), or \(\frac{\bar G_2}{(1-x_2)^2} = \left[\frac{\partial}{\partial x_2}\left(\frac{G}{1-x_2}\right)\right]_{\frac{x_1}{x_3}}\). The derivative with respect to the mole fraction x2 is taken at constant ratios of amounts (and therefore of mole fractions) of the other components of the solution, representable in a diagram such as a ternary plot. The last equality can be integrated from \(x_2\) to 1; applying L'Hôpital's rule at the upper limit (where both \(G\) and \(1 - x_2\) vanish) gives \(\lim_{x_2\to 1}\frac{G}{1-x_2} = -\left(\frac{\partial G}{\partial x_2}\right)_{x_2\to 1}\). This becomes further \(G = -(1-x_2)\left[\left(\frac{\partial G}{\partial x_2}\right)_{x_2\to 1} + \int_{x_2}^{1}\frac{\bar G_2}{(1-x_2)^2}\,dx_2\right]\). Expressing the mole fractions of components 1 and 3 as functions of the component 2 mole fraction and the binary mole ratio, \(x_1 = \frac{1-x_2}{1+\frac{x_3}{x_1}}\) and \(x_3 = \frac{1-x_2}{1+\frac{x_1}{x_3}}\), and using the sum of partial molar quantities \(G = \sum_i x_i\bar G_i\), gives an expression whose two integration constants can be determined from the binary systems 1–2 and 2–3. These constants can be obtained from the previous equality by putting the complementary mole fraction x3 = 0 for x1, and vice versa. The final expression is given by substitution of these constants into the previous equation. See also Margules activity model Darken's equations Gibbs–Helmholtz equation References External links J. Phys. Chem. Gokcen 1960 A lecture from www.chem.neu.edu A lecture from www.chem.arizona.edu Encyclopædia Britannica entry Chemical thermodynamics Thermodynamic equations
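The isothermal, isobaric form of the relation is what underlies thermodynamic consistency tests of activity-coefficient models. As a minimal numerical sketch (illustrative only; the Margules parameter is an assumed value), the two-suffix Margules model \(\ln\gamma_1 = A x_2^2\), \(\ln\gamma_2 = A x_1^2\) can be checked against the binary Gibbs–Duhem constraint:

```python
import numpy as np

# Consistency check of the isothermal, isobaric Gibbs-Duhem relation
#   x1 * d(ln gamma1)/dx1 + x2 * d(ln gamma2)/dx1 = 0
# for the two-suffix Margules model ln g1 = A*x2^2, ln g2 = A*x1^2.
A = 1.2                                    # illustrative Margules parameter
x1 = np.linspace(0.05, 0.95, 19)
x2 = 1.0 - x1

ln_g1 = A * x2 ** 2
ln_g2 = A * x1 ** 2

d_ln_g1 = np.gradient(ln_g1, x1)           # numerical derivatives w.r.t. x1
d_ln_g2 = np.gradient(ln_g2, x1)

residual = x1 * d_ln_g1 + x2 * d_ln_g2
print(np.abs(residual).max())              # ~0 up to finite-difference error
```

An activity-coefficient model that leaves a large residual here is thermodynamically inconsistent, which is exactly the kind of check the text describes for vapor-pressure correlations built from limited data.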
Gibbs–Duhem equation
[ "Physics", "Chemistry" ]
1,146
[ "Chemical thermodynamics", "Thermodynamic equations", "Equations of physics", "Thermodynamics" ]
8,892,017
https://en.wikipedia.org/wiki/Electric%20guitar%20design
Electric guitar design is a type of industrial design where the looks and efficiency of the shape as well as the acoustical aspects of the guitar are important factors. In the past many guitars have been designed with various odd shapes as well as very practical and convenient solutions to improve the usability of the object. History George Beauchamp is occasionally credited with inventing the electric guitar by designing a lap steel guitar with a pickup, though a lap steel does not have functional frets or a standard guitar-type neck. The earliest "electrified" fretted guitars were hollow-bodied archtop acoustic guitars to which some form of electromagnetic transducer had been attached. The first commercial electrified guitar was the Electro-Spanish Ken Roberts model produced from 1931 to 1936 by Rickenbacker, with one Beauchamp-designed pickup and an early "Vib-rola" hand vibrato created by Doc Kauffman. Early years Paul Tutmarc built and may have offered an electric solid-body guitar as early as 1932, under the brand "Audiovox". Tutmarc is also credited, along with Art Stimpson, as the co-inventor of the magnetic pickup, and as the inventor of the fretted electric bass guitar. Bob Wisner worked for Tutmarc, converting tube radio amplifiers into guitar amplifiers (eventually developing his own amplifier circuits) so Tutmarc's instruments could be sold paired with amplifiers. Tutmarc was unsuccessful in obtaining a patent for his magnetic pickup, as it was too similar to the telephone microphone coil sensor device. Audiovox production was handed over to his son, Bud Tutmarc, who continued building these instruments under the "Bud-Electro" brand until the early 1950s. Bud Tutmarc had been delegated by the senior Tutmarc the task of winding the pickup coils, and he continued producing them for his own guitars. He used horseshoe magnets in a single-coil and later a hum-cancelling dual-coil configuration. When Wisner was hired by Rickenbacher (later Rickenbacker), he may have passed along Tutmarc's magnetic pickup design, which strongly resembles the pickup on their cast aluminum lap steel guitar, nicknamed The Frying Pan or The Pancake Guitar, released in 1933. Another early solid-body electric guitar was built by musician and inventor Les Paul in the early 1940s, working after hours in the Epiphone Guitar factory. His log guitar (a wood post with a neck attached to it and two hollow body halves attached to the sides for appearance only) was patented, and is often considered to be the first of its kind, although it shares nothing of design or hardware in common with the solid-body "Les Paul" model later created by Gibson. Fender In 1950 and 1951, amplifier builder Leo Fender designed the first commercially successful solid-body electric guitar with a single magnetic pickup, which was initially named the "Esquire". The later two-pickup version of the Esquire was called the "Broadcaster". The bolt-on neck was consistent with Leo Fender's belief that the instrument design should be modular to allow cost-effective and consistent manufacture and assembly, as well as simplified repair or replacement. The Broadcaster name was changed to Telecaster because of a legal dispute over the name. In 1954, the Fender Electric Instrument Company introduced the Fender Stratocaster, or "Strat". It was positioned as a deluxe model and offered various product improvements and innovations over the Telecaster, often based upon responses from working musicians.
These innovations included an ash or alder double-cutaway body design, with an integrated vibrato mechanism (called a synchronized tremolo by Fender, thus beginning a confusion of the terms that still continues), three single-coil pickups, and "comfort contours" where the body edges are significantly contoured. Leo Fender is also credited with developing the first commercially successful electric bass, the Fender Precision Bass, introduced in 1951. Gibson The more traditionally designed and styled Gibson solid-body instruments were a contrast to Leo Fender's modular designs and heavily contoured "slab" bodies, with the most notable differentiator being the method of neck attachment and the scale of the neck (Gibson 24.75", Fender 25.5"). Gibson, like many guitar manufacturers, had long offered semi-acoustic guitars with pickups, and previously rejected Les Paul and his "log" electric in the 1940s. In apparent response to the Telecaster, Gibson introduced the first Gibson Les Paul solid body guitar in 1952 (Les Paul was brought in only towards the end of the design process for details of the design and for marketing endorsement). Features of the Les Paul include a solid mahogany body with a carved maple top (much like a violin and earlier Gibson archtop hollow body electric guitars) and contrasting edge binding, two single-coil "soapbar" pickups, a 24¾" scale mahogany neck with a more traditional glued-in "set" neck joint, binding on the edges of the fretboard, and a tilt-back headstock with three machine heads (tuners) to a side. The earliest version had a combination bridge and trapeze-tailpiece design as specified by Les Paul himself, but was largely disliked and discontinued after the first year. Gibson then developed the Tune-o-matic bridge and separate stop tailpiece, an adjustable non-vibrato design still in wide use. By 1957, Gibson had made the final major change to the Les Paul of today: the humbucking pickup, or humbucker. The pickup, invented by Seth Lover, was a dual-coil pickup which featured two windings connected out-of-phase and reverse-wound, in order to cancel the 60-cycle mains hum that plagued single-coil pickups; as a byproduct, the two-coil design also produces a distinctive, more "mellow" tone which appeals to many guitarists. Vox In 1962 Vox introduced the pentagonal Phantom guitar, originally made in England but soon after made by EKO of Italy. It was followed a year later by the teardrop-shaped Mark VI, the prototype of which was used by Brian Jones of The Rolling Stones. Vox guitars also experimented with onboard effects and electronics. The Teardrop won a prize for its design. In the mid 1960s, as the sound of the electric 12-string guitar became popular, Vox introduced the Phantom XII and Mark XII electric 12-string guitars. Vox produced many more traditional 6- and 12-string electric guitars in both England and Italy. It may be noted that the Phantom guitar shape was quite similar to that of the first fretted electric bass guitar, the Audiovox "Electric Bass Fiddle" of 1934. In 1966 Vox introduced the revolutionary but problematic GuitarOrgan, a Phantom VI guitar with internal organ electronics. The instrument's trigger mechanism required a specially-wired plectrum that completed circuit connections to each fret, resulting in a very wide and unwieldy neck. John Lennon was given one in a bid to secure an endorsement, though this never panned out.
According to Up-Tight: the Velvet Underground Story, Brian Jones of the Rolling Stones also tried one; when asked by the Velvets if it "worked", his answer was negative. The instrument never became popular, but it was a precursor to the modern guitar synthesizer. Multiscale/Fanned-Fret Guitars In recent years, guitars and basses with multi-scale or fanned-fret fingerboards have begun to appear. These instruments are intended to offer an advantage over classical fixed-scale guitars and basses by providing more freedom in setting the tension of each string at the design and manufacturing phases. This may result in a more uniform tension of the strings, and may offer timbre and tonal characteristics somewhat different from the usual fixed-scale instruments. Variant designs Materials other than wood have been used. Travis Bean and Kramer built guitars with aluminium necks. The Gittler guitar was a "skeleton" design from the late 1970s, largely stainless steel. In 1979, for the Chicago NAMM trade show, Ibanez built a 76-pound solid-brass guitar, primarily as an attention-getting gimmick but also to demonstrate that while such extreme mass would provide very long note sustain (a characteristic sought by many guitarists), the tonal qualities suffered. Various plastics and composites have been employed. Some hollow-body Danelectro guitars had Masonite body shells. The Ampeg guitars designed by Dan Armstrong pioneered acrylic as a body material. Fiberglass was used by Valco (called "Res-O-Glas") for some models of hollow-body "Airline" guitars sold through Montgomery Ward. Carbon fiber has been used for necks as well as bodies. 1991 saw the introduction of guitar designer Jol Dantzig's first truly workable acoustic-electric hybrid guitar design. The instrument, called the DuoTone, was conceived while Dantzig was at Hamer Guitars. (Dantzig was also the designer of the first 12-string bass.) Adopted by players such as Ty Tabor, Stone Gossard, Elvis Costello and Jeff Tweedy, the DuoTone was a full "duplex" instrument that could switch between acoustic and electric tones. Recently there have been many entries in the hybrid category (capable of both acoustic and electric tones) including the T5 by Taylor, Michael Kelly's "Hybrid," the Parker Fly and the Anderson Crowdster. In the 1990s the band Neptune began building unusual metal guitars incorporating third-bridge options. A predecessor of this type of guitar is the Pencilina. Linda Manzer designed the Pikasso guitar with multiple necks. See also Bolt-on construction Set neck construction Through-neck construction Experimental luthier Experimental musical instrument Leo Fender Doc Kauffman Seth Lover Paul Bigsby Paul Reed Smith Ken Parker Grover Jackson Wayne Charvel John D'Angelico Jimmy D'Aquisto Gary Kramer Travis Bean References Electric guitars Industrial design
Electric guitar design
[ "Engineering" ]
2,046
[ "Industrial design", "Design engineering", "Design" ]
8,892,305
https://en.wikipedia.org/wiki/2002%20Eastern%20Mediterranean%20event
The 2002 Eastern Mediterranean Event was a high-energy upper atmosphere explosion over the Mediterranean Sea, around 34°N 21°E (between Libya and Crete), on June 6, 2002. The explosion, similar in power to a small atomic bomb, has been attributed to a small asteroid that went undetected as it approached Earth. The object disintegrated as a meteor air burst over the sea, and no meteorite fragments were recovered. The event occurred during the 2001–2002 India–Pakistan standoff, and General Simon Worden of the U.S. Air Force expressed concern that, had the upper-atmosphere explosion occurred closer to Pakistan or India, it could have sparked a nuclear war between the two countries. See also Impact event Near-Earth object Potentially hazardous asteroid Vela incident References Explosions in 2002 2002 natural disasters Modern Earth impact events Eastern Mediterranean History of the Mediterranean June 2002 events in Africa 21st-century astronomical events
2002 Eastern Mediterranean event
[ "Astronomy" ]
187
[ "Astronomical events", "21st-century astronomical events" ]
8,892,818
https://en.wikipedia.org/wiki/Home%20range
A home range is the area in which an animal lives and moves on a periodic basis. It is related to the concept of an animal's territory, which is the area that is actively defended. The concept of a home range was introduced by W. H. Burt in 1943. He drew maps showing where the animal had been observed at different times. An associated concept is the utilization distribution, which examines where the animal is likely to be at any given time. Data for mapping a home range used to be gathered by careful observation, but in recent years animals are typically fitted with transmitting collars or similar GPS devices. The simplest way of measuring the home range is to construct the smallest possible convex polygon around the data, but this tends to overestimate the range. The best known methods for constructing utilization distributions are the so-called bivariate Gaussian or normal distribution kernel density methods. More recently, nonparametric methods such as Burgman and Fox's alpha-hull and Getz and Wilmers' local convex hull have been used. Software is available for using both parametric and nonparametric kernel methods. History The concept of the home range can be traced back to a publication in 1943 by W. H. Burt, who constructed maps delineating the spatial extent or outside boundary of an animal's movement during the course of its everyday activities. Associated with the concept of a home range is the concept of a utilization distribution, which takes the form of a two-dimensional probability density function that represents the probability of finding an animal in a defined area within its home range. The home range of an individual animal is typically constructed from a set of location points that have been collected over a period of time, identifying the position in space of an individual at many points in time. Such data are now collected automatically, at regular intervals, using collars placed on individuals that transmit through satellites, mobile cellphone technology, or global positioning system (GPS) technology. Methods of calculation The simplest way to draw the boundaries of a home range from a set of location data is to construct the smallest possible convex polygon around the data. This approach is referred to as the minimum convex polygon (MCP) method, which is still widely employed, but has many drawbacks, including often overestimating the size of home ranges. The best known methods for constructing utilization distributions are the so-called bivariate Gaussian or normal distribution kernel density methods. This group of methods is part of a more general group of parametric kernel methods that employ distributions other than the normal distribution as the kernel elements associated with each point in the set of location data. Recently, the kernel approach to constructing utilization distributions was extended to include a number of nonparametric methods such as Burgman and Fox's alpha-hull method and Getz and Wilmers' local convex hull (LoCoH) method. This latter method has now been extended from a purely fixed-point LoCoH method to fixed-radius and adaptive point/radius LoCoH methods. Although, currently, more software is available to implement parametric than nonparametric methods (because the latter approach is newer), the cited papers by Getz et al. demonstrate that LoCoH methods generally provide more accurate estimates of home range sizes and have better convergence properties as sample size increases than parametric kernel methods.
Home range estimation methods that have been developed since 2005 include LoCoH, Brownian Bridge, Line-based Kernel, GeoEllipse, and Line-Buffer. Computer packages for using parametric and nonparametric kernel methods are available online. In the appendix of a 2017 JMIR article, the home ranges for over 150 different bird species in Manitoba are reported. See also Territoriality Dear enemy recognition References Ethology Ecology terminology
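For the minimum convex polygon method described above, a minimal computational sketch follows (illustrative, not from the cited literature); the simulated relocation fixes, the projected coordinates, and the 95% trimming rule are assumed example choices.

```python
import numpy as np
from scipy.spatial import ConvexHull

# Minimum convex polygon (MCP) home range from relocation fixes.
# Coordinates are assumed to be in a projected planar system (e.g. UTM
# metres), so the hull area comes out in square metres.
rng = np.random.default_rng(0)
fixes = rng.normal(loc=[500_000.0, 4_800_000.0], scale=300.0, size=(200, 2))

hull = ConvexHull(fixes)
print(f"100% MCP: {hull.volume / 10_000:.1f} ha")  # for 2-D points, .volume is the area

# MCP tends to overestimate the range; one common mitigation is the "95% MCP",
# which drops the 5% of fixes farthest from the centroid before hulling.
centroid = fixes.mean(axis=0)
dist = np.linalg.norm(fixes - centroid, axis=1)
core = fixes[dist <= np.quantile(dist, 0.95)]
print(f"95% MCP:  {ConvexHull(core).volume / 10_000:.1f} ha")
```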
Home range
[ "Biology" ]
755
[ "Behavioural sciences", "Ethology", "Behavior", "Ecology terminology" ]
8,892,966
https://en.wikipedia.org/wiki/Hydrocarbon%20dew%20point
The hydrocarbon dew point is the temperature (at a given pressure) at which the hydrocarbon components of any hydrocarbon-rich gas mixture, such as natural gas, will start to condense out of the gaseous phase. It is often also referred to as the HDP or the HCDP. The maximum temperature at which such condensation takes place is called the cricondentherm. The hydrocarbon dew point is a function of the gas composition as well as the pressure. The hydrocarbon dew point is universally used in the natural gas industry as an important quality parameter, stipulated in contractual specifications and enforced throughout the natural gas supply chain, from producers through processing, transmission and distribution companies to final end users. The hydrocarbon dew point of a gas is a different concept from the water dew point, the latter being the temperature (at a given pressure) at which water vapor present in a gas mixture will condense out of the gas. Relation to the term GPM In the United States, the hydrocarbon dew point of processed, pipelined natural gas is related to and characterized by the term GPM, which is the gallons of liquefiable hydrocarbons contained in one thousand standard cubic feet of natural gas at a stated temperature and pressure. When the liquefiable hydrocarbons are characterized as being hexane or higher molecular weight components, they are reported as GPM (C6+). However, the quality of raw produced natural gas is also often characterized by the term GPM, meaning the gallons of liquefiable hydrocarbons contained in one thousand standard cubic feet of the raw natural gas. In such cases, when the liquefiable hydrocarbons in the raw natural gas are characterized as being ethane or higher molecular weight components, they are reported as GPM (C2+). Similarly, when characterized as being propane or higher molecular weight components, they are reported as GPM (C3+). Care must be taken not to confuse the two different definitions of the term GPM. Although GPM is an additional parameter of some value, most pipeline operators and others who process, transport, distribute or use natural gas are primarily interested in the actual HCDP, rather than GPM. Furthermore, GPM and HCDP are not interchangeable and one should be careful not to confuse what each one exactly means. Methods of HCDP determination There are primarily two categories of HCDP determination. One category involves "theoretical" methods, and the other involves "experimental" methods. Theoretical methods The theoretical methods use the component analysis of the gas mixture (usually via gas chromatography, GC) and then use an equation of state (EOS) to calculate what the dew point of the mixture should be at a given pressure. The Peng–Robinson and Soave–Redlich–Kwong equations of state are the most commonly used for determining the HCDP in the natural gas industry. The theoretical methods using GC analysis suffer from four sources of error: The first source of error is the sampling error. Pipelines operate at high pressure. To do an analysis using a field GC, the pressure has to be regulated down to close to atmospheric pressure. In the process of reducing pressure, some of the heavier components may drop out, particularly if the pressure reduction is done in the retrograde region. Therefore, the gas reaching the GC is fundamentally different (usually leaner in the heavy components) from the actual gas in the pipeline.
Alternatively, if a sample bottle is collected for delivery to a laboratory for analysis, significant care must be taken not to introduce any contaminants to the sample, to make sure that the sample bottle represents the actual gas in the pipeline, and to extract the complete sample correctly into the laboratory GC. The second source is the error on the analysis of the gas mix components. A typical field GC will have at best (under ideal conditions and frequent calibration) ~2% (of range) error in the quantity of each gas analyzed. Since the range for most field-GCs for C6 components is 0-1 mol%, there will be about 0.02 mol% uncertainty in the quantity of C6+ components. While this error does not change the heating value by much, it will introduce a significant error in the HCDP determination. Furthermore, since the exact distribution of C6+ components is unknown (the amount of C6, C7, C8, ...), this further introduces additional errors in any HCDP calculations. When using a C6+ GC these errors can be as high as 100 °F or more, depending on the gas mixture and the assumptions made regarding the composition of the C6+ fraction. For "pipeline quality" natural gas, a C9+ GC analysis may reduce the uncertainty, because it eliminates the C6-C8 distribution error. However, independent studies have shown that the cumulative error can still be very significant, in some cases in excess of 30 °C. A laboratory C12+ GC analysis using a Flame Ionization Detector (FID) can reduce the error further. However, using a C12 laboratory system can introduce additional errors, namely sampling error. If the gas has to be collected in a sample bottle and shipped to a laboratory for C12 analysis, sampling errors can be significant. There is also a lag-time error between the time the sample was collected and the time it was analyzed. The third source of errors is calibration errors. All GCs have to be calibrated routinely with a calibration gas representative of the gas under analysis. If the calibration gas is not representative, or calibrations are not routinely performed, errors will be introduced. The fourth source of error relates to the errors embedded in the equation of state model used to calculate the dew point. Different models are prone to varying amounts of error at different pressure regimes and gas mixes. There is sometimes a significant divergence of calculated dew point based solely on the choice of equation of state used. The significant advantage of using the theoretical models is that the HCDP at several pressures (as well as the cricondentherm) can be determined from a single analysis. This provides for operational uses such as determining the phase of the stream flowing through the flow-meter, determining if the sample has been affected by ambient temperature in the sample system, and avoiding amine foaming from liquid hydrocarbons in the amine contactor. However, recent developments in combining experimental methods and software enhancements have eliminated this shortcoming (see combined experimental and theoretical approach below). GC vendors with a product targeting the HCDP analysis include Emerson, ABB, and Thermo Fisher, among other companies. Experimental methods In the "experimental" methods, one actually cools a surface on which gas condenses and then measures the temperature at which the condensation takes place. The experimental methods can be divided into manual and automated systems.
Manual systems, such as the Bureau of Mines dewpoint tester, depend on an operator to manually cool the chilled mirror slowly and to visually detect the onset of condensation. The automated methods use automatic mirror chilling controls and sensors to detect the amount of light reflected by the mirror and detect when condensation occurs through changes in the reflected light. The chilled mirror technique is a first principle measurement. Depending on the specific method used to establish the dew point temperature, some correction calculations may be necessary. As condensation must necessarily have already occurred for it to be detected, the reported temperature is lower than when using theoretical methods. Similar to GC analysis, the experimental method is subject to potential sources of error. The first error is in the detection of condensation. A key component in chilled mirror dew point measurements is the subtlety with which condensate can be detected; in other words, the thinner the film is when detected, the better. A manual chilled mirror device relies on the operator to determine when a mist has formed on the mirror, and, depending on the device, can be highly subjective. It is also not always clear what is condensing: water or hydrocarbons. Because of the low resolution that has traditionally been available, the operator has been prone to underreport the dew point, in other words, to report the dew point temperature as being below what it actually is. This is due to the fact that by the time condensation had accumulated enough to be visible, the dew point had already been reached and passed. The most modern manual devices make possible greatly improved reporting accuracy. There are two manufacturers of manual devices, and each of their devices meets the requirements for dew point measurement apparatus as defined in the ASTM Manual for Hydrocarbon Analysis. However, there are significant differences between the devices – including the optical resolution of the mirror and the method of mirror cooling – depending on the manufacturer. Automated chilled mirror devices provide significantly more repeatable results, but these measurements can be affected by contaminants that may compromise the mirror's surface. In many instances it is important to incorporate an effective filtration system that prepares the gas for analysis. On the other hand, filtration may alter the gas composition slightly and filter elements are subject to clogging and saturation. Advances in technology have led to analyzers that are less affected by contaminants, and certain devices can also measure the dew point of water that may be present in the gas. One recent innovation is the use of spectroscopy to determine the nature of the condensate at the dew point. Another device uses laser interferometry to register extremely tenuous amounts of condensation. It is asserted that these technologies are less affected by interference from contaminants. Another source of error is the speed of the cooling of the mirror and the measurement of the temperature of the mirror when the condensation is detected. This error can be minimized by controlling the cooling speed, or having a fast condensation detection system. Experimental methods only provide an HCDP at the pressure at which the measurement is taken, and cannot provide the cricondentherm or the HCDP at other pressures. As the cricondentherm of natural gas typically occurs at around 27 bar, there are gas preparation systems currently available which adjust input pressure to this value.
However, as pipeline operators often wish to know the HCDP at their current line pressure, the input pressure of many experimental systems can be adjusted by a regulator. The Vympel company offers instruments that can be operated in either manual or automatic mode. Companies that offer an automated chilled mirror system include: Vympel, Ametek, Michell Instruments, ZEGAZ Instruments and Bartec Benke (Model: Hygrophil HCDT). Combined experimental and theoretical approach A recent innovation is to combine the experimental method with the theoretical one. If the composition of the gas is analyzed by a C6+ GC, and a dewpoint is experimentally measured at any pressure, then the experimental dewpoint can be used in combination with the GC analysis to provide a more exact phase diagram. This approach overcomes the main shortcoming of the experimental method, namely that it does not yield the whole phase diagram. An example of this software is provided by Starling Associates. See also Natural-gas processing Natural-gas condensate References https://www.bartec.de/en/products/analyzers-and-measurement-technology/trace-moisture-measurement-for-gases/hygrophil-hcdt/ External links https://www.zegaz.com/blog Pipeline and Gas Journal - Hydrocarbon Dew Point Measurement Using a Gas Chromatograph Emerson Hydrocarbon Dew Point Application Note Natural Gas Processing: The Crucial Link Between Natural Gas Production and Its Transportation Identification of Hydrocarbon Dew Point, Cricondentherm, Cricondenbar and critical points Hydrocarbon Dew-point – A Key Natural Gas Quality Parameter (ISO 6570:2001) Natural gas -- Determination of potential hydrocarbon liquid content Engineering thermodynamics Hydrocarbons Natural gas Threshold temperatures
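A minimal sketch of the combined approach described above: tune the characterization of the lumped C6+ fraction until an equation-of-state dew point routine reproduces the measured value, then reuse the tuned composition to compute the phase envelope at other pressures. The function dew_point_temperature and the composition key are hypothetical placeholders for an EOS library, not any specific software:

```python
def tune_c6plus(composition, p_meas, t_meas, dew_point_temperature,
                mw_lo=86.0, mw_hi=200.0, tol=0.05):
    """Adjust the assumed molecular weight of the lumped C6+ fraction until
    the EOS dew point at the measurement pressure matches the chilled-mirror
    reading, then return the tuned characterization.

    dew_point_temperature(composition, pressure) stands in for any EOS-based
    dew point routine (e.g. Peng-Robinson); it is assumed to be monotonically
    increasing in the C6+ molecular weight.
    """
    for _ in range(60):                    # simple bisection on the C6+ MW
        mw = 0.5 * (mw_lo + mw_hi)
        composition["C6+_mw"] = mw
        t_calc = dew_point_temperature(composition, p_meas)
        if abs(t_calc - t_meas) < tol:
            break
        if t_calc < t_meas:
            mw_lo = mw                     # heavier C6+ raises the dew point
        else:
            mw_hi = mw
    return composition                     # now usable at other pressures
```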
Hydrocarbon dew point
[ "Physics", "Chemistry", "Engineering" ]
2,459
[ "Hydrocarbons", "Physical phenomena", "Phase transitions", "Engineering thermodynamics", "Threshold temperatures", "Organic compounds", "Thermodynamics", "Mechanical engineering" ]
8,893,452
https://en.wikipedia.org/wiki/Wassim%20Almawi
Wassim Y. Almawi is a professor in the School of Pharmacy at the Lebanese American University in Byblos, Lebanon, and an adjunct professor at the Faculty of Sciences, El Manar University in Tunis, Tunisia. This followed his appointment as professor and chairman of the Department of Biochemistry at the Arabian Gulf University in Bahrain from 2000 to 2017. Almawi is also the Chief of the Special and Molecular Diagnostics Laboratory in Bahrain. References Living people Dalhousie University alumni Lebanese American University alumni Academic staff of the American University of Beirut Harvard Medical School people Bahraini academics Academic staff of the Arabian Gulf University Year of birth missing (living people)
Wassim Almawi
[ "Chemistry" ]
126
[ "Biochemistry stubs", "Biochemists", "Biochemist stubs" ]
8,893,870
https://en.wikipedia.org/wiki/NGC%204194
NGC 4194, the Medusa merger, is a galaxy merger in the constellation Ursa Major about away. It was discovered on April 2, 1791 by the German-British astronomer William Herschel. Due to its disturbed appearance, it is object 160 in Halton Arp's 1966 Atlas of Peculiar Galaxies. The morphological classification of NGC 4194 is Im, indicating an irregular form. This galaxy consists of a brighter central region spanning an angular size across, with an accompanying system of loops and arcs. Additional material is thinly spread out to a radius of from the central region. There is a tidal tail and regions undergoing high levels of star formation, making this a starburst galaxy. It is a source of strong infrared and radio emission. These features indicate NGC 4194 is a late-stage galaxy merger. A region of extreme star formation across exists in the center of the Eye of Medusa, the central gas-rich region. Within of the dynamic center of NGC 4194, star formation is occurring at a rate of ·yr−1. The star forming regions in this volume range from 5 to 9 million years in age, with the youngest occurring in areas of the highest star formation rate. As of 2014, no galactic nucleus has been detected based on radio emissions, nor have the respective nuclei of the merging galaxies been located. However, X-ray emission from a black hole in the tidal tail was detected by Chandra in 2009. References Further reading External links Peculiar galaxies Galaxy mergers Luminous infrared galaxies Ursa Major 160 4194 39068 Markarian galaxies 07241
NGC 4194
[ "Astronomy" ]
322
[ "Ursa Major", "Constellations" ]
8,894,085
https://en.wikipedia.org/wiki/EcoCyc
In bioinformatics, EcoCyc is a biological database for the bacterium Escherichia coli K-12. The EcoCyc project performs literature-based curation of the E. coli genome, and of E. coli transcriptional regulation, transporters, and metabolic pathways. EcoCyc contains written summaries of E. coli genes, distilled from over 36,000 scientific articles. EcoCyc is also a description of the genome and cellular networks of E. coli that enables scientists to carry out computational analyses. Data objects in the EcoCyc database describe each E. coli gene and gene product. Database objects also describe molecular interactions, including metabolic pathways, transport events, and the regulation of gene expression. EcoCyc provides several genome-scale visualization tools to aid in the analysis of omics data, such as by painting gene expression or metabolomics data onto the full regulatory network of E. coli. EcoCyc can be accessed through the EcoCyc web site, as a set of downloadable files, and in conjunction with the Pathway Tools software that can be installed locally on Macintosh, PC/Windows, and PC/Linux computers. The downloadable software provides capabilities that go well beyond the web version of EcoCyc. References Biological databases Escherichia coli
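For programmatic access, a minimal sketch of fetching one EcoCyc object over the BioCyc-style XML web service is shown below. The endpoint, parameter name, and the gene identifier are assumptions to be checked against the current web-services documentation, not a confirmed API:

```python
import requests

# Hypothetical example: retrieve an EcoCyc gene record as XML.  The
# identifier "ECOLI:EG11024" is assumed purely for illustration.
URL = "https://websvc.biocyc.org/getxml"

resp = requests.get(URL, params={"id": "ECOLI:EG11024"}, timeout=30)
resp.raise_for_status()
print(resp.text[:500])  # beginning of the XML description of the gene
```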
EcoCyc
[ "Biology" ]
265
[ "Bioinformatics", "Model organisms", "Escherichia coli", "Biological databases" ]
8,894,344
https://en.wikipedia.org/wiki/MetaCyc
The MetaCyc database is one of the largest metabolic pathway and enzyme databases currently available. The data in the database are manually curated from the scientific literature, and cover all domains of life. MetaCyc has extensive information about chemical compounds, reactions, metabolic pathways and enzymes. The data have been curated from more than 58,000 publications. MetaCyc has been designed for multiple types of use. It is often used as an extensive online encyclopedia of metabolism. In addition, MetaCyc is used as a reference data set for computationally predicting the metabolic networks of organisms from their sequenced genomes; it has been used to perform pathway predictions for thousands of organisms, including those in the BioCyc Database Collection. MetaCyc is also used in metabolic engineering and metabolomics research. MetaCyc includes mini reviews for pathways and enzymes that provide background information as well as relevant literature references. It also provides extensive data on individual enzymes, describing their subunit structure, cofactors, activators and inhibitors, substrate specificity, and, when available, kinetic constants. MetaCyc data on metabolites include chemical structures, predicted standard energies of formation, and links to external databases. Reactions in MetaCyc are presented in a visual display that includes the structures of all components. The reactions are balanced and include EC numbers, reaction direction, predicted atom mappings that describe the correspondence between atoms in the reactant compounds and the product compounds, and computed Gibbs free energies. All objects in MetaCyc are clickable and provide easy access to related objects. For example, the page for L-lysine lists all of the reactions in which L-lysine participates, as well as the enzymes that catalyze them and the pathways in which these reactions take place. References Chemical databases Enzyme databases Metabolism
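The computed reaction Gibbs free energies mentioned above follow from the predicted formation energies of the participating compounds. A minimal sketch of that arithmetic, with placeholder values rather than MetaCyc data:

```python
def reaction_delta_g(reactants, products, dgf):
    """Standard Gibbs energy change of a reaction from formation energies:
    dG_rxn = sum over products - sum over reactants, weighted by stoichiometry."""
    dg_products = sum(n * dgf[m] for m, n in products.items())
    dg_reactants = sum(n * dgf[m] for m, n in reactants.items())
    return dg_products - dg_reactants

# Placeholder formation energies (kcal/mol), purely illustrative:
dgf = {"A": -50.0, "B": -30.0, "C": -95.0}
print(reaction_delta_g({"A": 1, "B": 1}, {"C": 1}, dgf))  # -> -15.0
```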
MetaCyc
[ "Chemistry", "Biology" ]
367
[ "Enzyme databases", "Biochemistry databases", "Chemical databases", "Protein classification", "Molecular biology techniques", "Cellular processes", "Biochemistry", "Metabolism" ]
8,894,516
https://en.wikipedia.org/wiki/BioCyc%20database%20collection
The BioCyc database collection is an assortment of organism-specific Pathway/Genome Databases (PGDBs) that provide reference to genome and metabolic pathway information for thousands of organisms. As of July 2023, there were over 20,040 databases within BioCyc. SRI International, based in Menlo Park, California, maintains the BioCyc database family. Categories of Databases Based on the amount of manual curation received, the BioCyc database family is divided into three tiers: Tier 1: Databases which have received at least one year of literature-based manual curation. Currently there are seven databases in Tier 1. Out of the seven, MetaCyc is a major database that contains almost 2500 metabolic pathways from many organisms. The other important Tier 1 database is HumanCyc, which contains around 300 metabolic pathways found in humans. The remaining five databases are EcoCyc (E. coli), AraCyc (Arabidopsis thaliana), YeastCyc (Saccharomyces cerevisiae), LeishCyc (Leishmania major Friedlin) and TrypanoCyc (Trypanosoma brucei). Tier 2: Databases that were computationally predicted but have received moderate manual curation (most with 1–4 months of curation). Tier 2 databases are available for manual curation by scientists who are interested in any particular organism. Tier 2 currently contains 43 different organism databases. Tier 3: Databases that were computationally predicted by PathoLogic and received no manual curation. As with Tier 2, Tier 3 databases are also available for curation by interested scientists. Software tools The resource contains a variety of software tools for searching, visualizing, comparing, and analyzing genome and pathway information. It includes a genome browser, and browsers for metabolic and regulatory networks. The website also includes tools for painting large-scale ("omics") datasets onto metabolic and regulatory networks, and onto the genome. Use in Research Since the BioCyc database family comprises a long list of organism-specific databases covering data at different systems levels of a living system, it has been used in research in a wide variety of contexts. Here, two studies are highlighted which show two different kinds of use, one on a genome scale and the other on identifying specific single nucleotide polymorphisms (SNPs) within a genome. AlgaGEM AlgaGEM is a genome-scale metabolic network model for a compartmentalized algal cell developed by Gomes de Oliveira Dal'Molin et al. based on the Chlamydomonas reinhardtii genome. It has 866 unique ORFs, 1862 metabolites, 2499 gene-enzyme-reaction-association entries, and 1725 unique reactions. One of the pathway databases used for the reconstruction is MetaCyc. SNPs The study by Shimul Chowdhury et al. showed that associations between maternal SNPs and metabolites involved in the homocysteine, folate, and transsulfuration pathways differed between cases with congenital heart defects (CHDs) and controls. The study used HumanCyc to select candidate genes and SNPs. References Biochemistry databases Genome databases Biotechnology Metabolomic databases SRI International
BioCyc database collection
[ "Chemistry", "Biology" ]
659
[ "Biochemistry", "Biochemistry databases", "nan", "Biotechnology" ]
8,894,581
https://en.wikipedia.org/wiki/Sexual%20cannibalism
Sexual cannibalism is when an animal, usually the female, cannibalizes its mate prior to, during, or after copulation. This trait is observed in many arachnid orders, several insect and crustacean clades, gastropods, and some snake species. Several hypotheses to explain this seemingly paradoxical behavior have been proposed, including the adaptive foraging hypothesis, aggressive spillover hypothesis and mistaken identity hypothesis. This behavior is believed to have evolved as a manifestation of sexual conflict, occurring when the reproductive interests of males and females differ. In many species that exhibit sexual cannibalism, the female consumes the male upon detection. Females of cannibalistic species are generally hostile and unwilling to mate; thus many males of these species have developed adaptive behaviors to counteract female aggression. Prevalence Sexual cannibalism occurs among insects, arachnids and amphipods. Sexual cannibalism occurs more often among species with prominent sexual size dimorphism (SSD); extreme SSD likely drives this trait of sexual cannibalism in spiders. It also occurs in some anacondas, especially the green anaconda (Eunectes murinus), where females are larger than the males. Proposed explanations Different hypotheses have been proposed to explain sexual cannibalism, namely adaptive foraging, aggressive spillover, mate choice, and mistaken identity. Adaptive foraging The adaptive foraging hypothesis is a proposed pre-copulatory explanation in which females assess the nutritional value of a male compared to the male's value as a mate. Starving females are usually in poor physical condition and are therefore more likely to cannibalize a male than to mate with him. Among mantises, cannibalism by female Pseudomantis albofimbriata improves fecundity, overall growth, and body condition. A study on the Chinese mantis found that cannibalism occurred in up to 50% of matings. Among spiders, Dolomedes triton females in need of additional energy and nutrients for egg development choose to consume the closest nutritional source, even if this means cannibalizing a potential mate. In Agelenopsis pennsylvanica and Lycosa tarantula, a significant increase in fecundity, egg case size, hatching success, and survivorship of offspring has been observed when hungry females choose to cannibalize smaller males before copulating with larger, genetically superior males. This reproductive success was largely due to the increased energy uptake from cannibalizing males and the investment of that additional energy in the development of larger, higher-quality egg cases. In D. triton, post-copulatory sexual cannibalism was observed in females that had a limited food source; these females copulated with the males and then cannibalized them. The adaptive foraging hypothesis has been criticized because males are considered poor meals when compared to crickets; however, recent findings show that Hogna helluo males have nutrients crickets lack, including various proteins and lipids. In H. helluo, females have a higher-protein diet when cannibalizing males than when consuming only house crickets. Further studies show that Argiope keyserlingi females with high-protein/low-lipid diets resulting from sexual cannibalism may produce eggs of greater egg energy density (yolk investment). Aggressive spillover The aggressive spillover hypothesis suggests that the more aggressive a female is concerning prey, the more likely the female is to cannibalize a potential mate.
The decision of a female to cannibalize a male is not defined by the nutritional value or genetic advantage (courtship dances, male aggressiveness, and large body size) of males but instead depends strictly on her aggressive state. A female's aggression is measured by her latency to attack prey: the shorter the time to attack and consume prey, the higher the aggressiveness level. Females displaying aggressive characteristics tend to grow larger than other females and display continuous cannibalistic behavior. Such behavior may drive away potential mates, reducing chances of mating. Aggressive behavior is less common in an environment that is female-biased, because there is more competition to mate with a male. In these female-dominated environments, such aggressive behavior comes with the risk of scaring away potential mates. Males of the Pisaura mirabilis species feign death to avoid being cannibalized by a female prior to copulation. When males feign death, their success in reproduction depends on the level of aggressiveness the female displays. Research has shown that in the Nephilengys livida species, female aggressiveness had no effect on the likelihood of her cannibalizing a potential mate; male aggressiveness and male-male competition determined which male the female cannibalized. Males with aggressive characteristics were favored and had a higher chance of mating with a female. Mate choice Females exercise mate choice, rejecting unwanted and unfit males by cannibalizing them. Mate choice often correlates size with fitness level; smaller males tend to display a low level of fitness and are therefore eaten more often because of their undesirable traits. Males perform elaborate courtship dances to display fitness and genetic advantage. Female orb-web spiders (Nephilengys livida) tend to cannibalize males displaying less aggressive behavior and mate with males displaying more aggressive behavior, showing a preference for this trait, which, along with a large body size that indicates strong foraging ability, signals high male quality and genetic advantage. Indirect mate choice can be witnessed in fishing spiders, Dolomedes fimbriatus, where females do not discriminate against smaller body size, attacking males of all sizes. Females had lower success rates cannibalizing large males, which managed to escape where smaller males could not. It was shown that males with desirable traits (large body size, high aggression, and long courtship dances) had longer copulation duration than males with undesirable traits. In A. keyserlingi and Nephila edulis, females allow longer copulation duration and a second copulation for smaller males. The gravity hypothesis suggests that some species of spiders may favor smaller body sizes because they enable the males to climb up plants more efficiently and find a mate faster. Also, smaller males may be favored because they hatch and mature faster, giving them a direct advantage in finding and mating with a female. In Leucauge mariana, females will cannibalize males if their sexual performance is poor. Females use palpal inflations to determine sperm count, and if the female deems the sperm count too low she will consume the male. In Latrodectus revivensis, females tend to limit copulation duration for small males and deny them a second copulation, showing a preference for larger body size. Another form of mate choice is the genetic bet-hedging hypothesis, in which a female consumes males to prevent them from exploiting her.
Exploitation by multiple males is not beneficial for a female because it may result in prey theft, web reduction, and reduced foraging time. Sexual cannibalism might have promoted the evolution of some behavioral and morphological traits exhibited by spiders today. Mistaken identity The mistaken identity hypothesis suggests that sexual cannibalism occurs when females fail to identify males that try to court. This hypothesis suggests that a cannibalistic female attacks and consumes the male without any knowledge of his quality as a mate. In pre-copulatory sexual cannibalism, mistaken identity can be seen when a female does not allow the male to perform the courtship dance and engages in attack. There is no conclusive evidence for this hypothesis because scientists struggle to distinguish between mistaken identity and the other hypotheses (aggressive spillover, adaptive foraging, and mate choice). Male adaptive behaviours In some cases, sexual cannibalism may characterize an extreme form of male monogamy, in which the male sacrifices itself to the female. Males may gain reproductive success from being cannibalized by either providing nutrients to the female (indirectly to the offspring), or through enhancing the probability that their sperm is used to fertilize the female's eggs. Although sexual cannibalism is fairly common in spiders, male self-sacrifice has only been reported in six genera of araneoid spiders. However, much of the evidence for male complicity in such cannibalistic behavior may be anecdotal, and has not been replicated in experimental and behavioural studies. Male members of cannibalistic species have adapted different mating tactics as a mechanism for escaping the cannibalistic tendencies of their female counterparts. Current theory suggests antagonistic co-evolution has occurred, where adaptations seen in one sex produce adaptations in the other. Adaptations consist of courtship displays, opportunistic mating tactics, and mate binding. Opportunistic mating The risk of cannibalism becomes greatly reduced when opportunistic mating is practiced. Opportunistic mating has been characterized in numerous orb-weaving spider species, such as Nephila fenestrata, where the male spider waits until the female is feeding or distracted, and then proceeds with copulation; this greatly reduces the chances of cannibalization. This distraction can be facilitated by the male's presentation of nuptial gifts, which provide a distracting meal for the female in order to prolong copulation and increase paternity. Altered sexual approach Multiple methods of sexual approach have appeared in cannibalistic species as a result of sexual cannibalism. The mechanism by which the male approaches the female is imperative for his survival. If the female is unable to detect his presence, the male is less likely to face cannibalization. This is evident in the mantid species Tenodera aridifolia, where the male alters his approach by utilizing the surrounding windy conditions. The male attempts to avoid detection by approaching the female when the wind impairs her ability to hear him. In the praying mantid species Pseudomantis albofimbriata, the males approach the female either from a "slow mounting from the rear" or a "slow approach from the front" position to remain undetected. Mate guarding Sexual cannibalism has impaired the ability of the orb-weaving spider, N. fenestrata, to perform mate guarding.
If a male successfully mates with a female, he then exhibits mate guarding, inhibiting the female from re-mating, thus ensuring his paternity and eliminating sperm competition. Guarding can refer to the blockage of female genital openings to prevent further insertion of a competing male's pedipalps, or physical guarding from potential mates. Guarding can decrease female re-mating by fifty percent. Males that experience genital mutilation sometimes illustrate the "gloves-off" hypothesis, which states that a male's body weight and his endurance are inversely proportional. Thus, when a male's body weight decreases substantially, his endurance increases as a result, allowing him to guard his female mate with increased efficiency. Mate binding Mate binding refers to a pre-copulatory courtship behavior where the male deposits silk onto the abdomen of the female while simultaneously massaging her in order to reduce her aggressive behavior. This action allows for initial and subsequent copulatory bouts. While both chemical and tactile cues are important factors for reducing cannibalistic behaviors, the latter function as a resource to calm the female, as exhibited in the orb-weaver spider species Nephila pilipes. Additional hypotheses suggest that male silk contains pheromones which seduce the female into submission. However, silk deposits are not necessary for successful copulation. The primary factor in successful subsequent copulation lies in the tactile communication between the male and female spider that results in female acceptance of the male. The male mounts the posterior portion of the female's abdomen, rubbing his spinnerets on her abdomen during his attempt at copulation. Mate binding was not necessary for the initiation of copulation in the golden orb-weaving spider, except when the female was resistant to mating. Subsequent copulatory bouts are imperative for the male's ability to copulate due to prolonged sperm transfer, therefore increasing his probability of paternity. Courtship displays Courtship displays in sexually cannibalistic spiders are imperative in order to ensure the female is less aggressive. Additional courtship displays include pre-copulatory dances, such as those observed in the redback spider, and vibrant male coloration morphologies which function as female attraction mechanisms, as seen in the peacock spider, Maratus volans, or in Habronattus pyrrithrix. Nuptial gifts play a vital role in safe copulation for males in some species. Males present meals to the female to facilitate opportunistic mating while the female is distracted. Subsequent improvements in male adaptive mating success include web reduction, as seen in the Western black widow, Latrodectus hesperus. Once mating occurs, the males destroy a large portion of the female's web to discourage the female from future mating, thus reducing polyandry; this has been observed in the Australian redback spider, Latrodectus hasselti. Male-induced cataleptic state In some species of spiders, such as Agelenopsis aperta, the male induces a passive state in the female prior to copulation. It has been hypothesized that the cause of this "quiescent" state is the male's massaging of the female's abdomen, following male vibratory signals on the web. The female enters a passive state, and the male's risk of facing cannibalism is reduced. This state is most likely induced by a volatile male pheromone. The chemical structure of the pheromone utilized by the male A.
aperta is currently unknown; however, physical contact is not necessary for the induced passive state. Eunuch males, or males with partially or fully removed palps, are unable to induce the passive state in females from a distance, but can induce quiescence upon physical contact with the female; this suggests that the pheromone produced is potentially related to sperm production, since the male inserts sperm from his pedipalps, structures which are removed in eunuchs. This adaptation has most likely evolved in response to the overly aggressive nature of female spiders. Copulatory silk wrapping In order to avoid being consumed by the female, some male spiders may utilize their silk to physically bind the female spider. For example, in Pisaurina mira, also known as the nursery web spider, the male wraps the legs of the female in silk prior to and during copulation. While he holds legs III and IV of the female, he uses the silk to bind legs I and II. Because the male spider's legs play a significant role in copulation, P. mira males with longer legs are generally favored over those with shorter legs. In the Paratrechalea genus, males silk-wrap nutritive or worthless gifts to avoid being cannibalized by the female spider. Costs and benefits for males The physiological impacts of cannibalism on male fitness include the male's inability to father any offspring if he is unable to mate with a female. Males of some arachnid species, such as N. plumipes, sire more offspring if the male is cannibalized during or after mating; copulation is prolonged and sperm transfer is increased. In the orb-weaving spider species Argiope aurantia, males prefer a short copulation duration upon the first palp insertion in order to avoid cannibalism. Upon the second insertion, however, the male remains inserted in the female. The male exhibits a "programmed death" to function as a full-body genital plug. This makes it increasingly difficult for the female to remove him from her genital openings, discouraging her from mating with other males. An additional benefit of cannibalization is the idea that a well-fed female is less likely to mate again. If the female has no desire to mate again, the male who has already mated with her has his paternity ensured. Genital mutilation Before or after copulating with females, certain males of spider species in the superfamily Araneoidea become half or full eunuchs with one or both of their pedipalps (male genitals) severed. This behavior is often seen in sexually cannibalistic spiders, causing them to exhibit the "eunuch phenomenon". Due to the chance that they may be eaten during or after copulation, male spiders use genital mutilation to increase their chances of successful mating. The male can increase his chances of paternity if the female's copulatory organs are blocked, which decreases sperm competition and her chances of mating with other males. In one study, females with mating plugs had a 75% lower chance of re-mating. Additionally, if a male successfully severs his pedipalp within the female copulatory duct, the pedipalp can not only serve as a plug but can continue to release sperm to the female spermathecae, again increasing the male's chances of paternity. This is referred to as "remote copulation". Occasionally (in 12% of cases in a 2012 study on Nephilidae spiders) palp severance is only partial due to copulation interruption by sexual cannibalism. Partial palp severance can result in a successful mating plug but not to the extent of full palp severance.
Some males, as in the orb-weaving spider Argiope aurantia, have been found to spontaneously die within fifteen minutes of their second copulation with a female. The male dies while his pedipalps are still intact within the female, as well as still swollen from copulation. In this "programmed death", the male is able to utilize his entire body as a genital plug for the female, making it much more difficult for her to remove him from her copulatory ducts. In other species, males voluntarily self-amputate a pedipalp prior to mating, and thus the mutilation is not driven by sexual cannibalism. This has been hypothesized to be due to an increased fitness advantage of half or full eunuchs. Upon losing a pedipalp, males experience a significant decrease in body weight that provides them with enhanced locomotor abilities and endurance, enabling them to better search for a mate and mate-guard after mating. This is referred to as the "gloves-off" theory. The roles of males and females in genital mutilation can also be reversed. In Cyclosa argenteoalba, males mutilate females' genitals by detaching the female's scape, making it impossible for another male to mate with them. Male self-sacrifice Male reproductive success can be measured by the number of offspring fathered, and monogyny is seen quite often in sexually cannibalistic species. Males are willing to sacrifice themselves, or lose their reproductive organs, in order to ensure their paternity from one mating instance. Whether it is by spontaneous programmed death, or by the male catapulting into the mouth of the female, these self-sacrificing males die in order for prolonged copulation to occur. Males of many of these species cannot replenish sperm stores, and therefore they must exhibit these extreme behaviors in order to ensure sperm transfer and fathered offspring during their one and only mating instance. An example of such behavior can be seen in the redback spider. The males of this species "somersault" into the mouth of the female after copulation has occurred, which has been shown to increase paternity by sixty-five percent when compared to males that are not cannibalized. A majority of males in this species are likely to die in the search for a mate, so the male must sacrifice himself as an offering if it means prolonged copulation and doubled paternity. In many species, cannibalized males can mate longer, thus having longer sperm transfers. Male sexual cannibalism Although females often instigate sexual cannibalism, reversed sexual cannibalism has been observed in the spiders Micaria sociabilis and Allocosa brasiliensis. In a laboratory experiment on M. sociabilis, males preferred to eat older females. This behavior may be interpreted as adaptive foraging, because older females have low reproductive potential and food may be limited. Reversed cannibalism in M. sociabilis may also be influenced by size dimorphism. Males and females are similar in size, and bigger males were more likely to be cannibalistic. In A. brasiliensis, males tend to be cannibalistic between mating seasons, after they have mated, gone out of their burrows to search for food, and left their mates in their burrows. Any females they encounter during this period likely have little reproductive value, so this may also be interpreted as adaptive foraging. It has also been observed in the crab Ovalipes catharus. Reversed sexual cannibalism is also observed in the snake species Malpolon monspessulanus, commonly known as the Montpellier snake.
This behavior may occur due to their opportunistic feeding habits, a lack of available prey, or competition for resources among individuals of the species. As this species exhibits male-biased sexual dimorphism, it is easier for male Montpellier snakes to attack and cannibalize the females. Male cannibalism might also be triggered by female M. monspessulanus refusing to mate. Monogamy Males in these mating systems are generally monogamous, if not bigynous. Since males of these cannibalistic species have adapted to the extreme mating system, and usually mate only once with a polyandrous female, they are considered monogynous. Other factors Sexual dimorphism Sexual dimorphism in size has been proposed as an explanation for the widespread nature of sexual cannibalism across distantly related arthropods. Typically, male birds and mammals are larger as they participate in male-male competition. However, in arthropods this size dimorphism ratio is reversed, with females commonly larger than males. Sexual cannibalism may have led to selection for larger, stronger females in invertebrates. Further research is needed to evaluate this explanation. To date, studies have been done on wolf spiders such as Zyuzicosa (Lycosidae), where the female is much larger than the male. See also Evolutionary arms race Femme fatale Interlocus sexual conflict Sexual conflict Spider cannibalism Traumatic insemination References External links Argiope argentata#Sexual cannibalism Animal cannibalism Mating Natural selection
Sexual cannibalism
[ "Biology" ]
4,699
[ "Animal cannibalism", "Evolutionary processes", "Behavior", "Eating behaviors", "Natural selection", "Ethology", "Mating" ]
8,894,652
https://en.wikipedia.org/wiki/Hypersensitive%20response
Hypersensitive response (HR) is a mechanism used by plants to prevent the spread of infection by microbial pathogens. HR is characterized by the rapid death of cells in the local region surrounding an infection, and it serves to restrict the growth and spread of pathogens to other parts of the plant. It is analogous to the innate immune system found in animals, and commonly precedes a slower systemic (whole plant) response, which ultimately leads to systemic acquired resistance (SAR). HR can be observed in the vast majority of plant species and is induced by a wide range of plant pathogens such as oomycetes, viruses, fungi and even insects. HR is commonly thought of as an effective defence strategy against biotrophic plant pathogens, which require living tissue to gain nutrients. In the case of necrotrophic pathogens, HR might even be beneficial to the pathogen, as they require dead plant cells to obtain nutrients. The situation becomes complicated when considering pathogens such as Phytophthora infestans, which at the initial stages of the infection act as biotrophs but later switch to a necrotrophic lifestyle. It is proposed that in this case HR might be beneficial in the early stages of the infection but not in the later stages. Genetics The first idea of how the hypersensitive response occurs came from Harold Henry Flor's gene-for-gene model. He postulated that for every resistance (R) gene encoded by the plant, there is a corresponding avirulence (Avr) gene encoded by the microbe. The plant is resistant to the pathogen if both the Avr and R genes are present during the plant-pathogen interaction. The genes that are involved in the plant-pathogen interactions tend to evolve at a very rapid rate. Very often, the resistance mediated by R genes is due to them inducing HR, which leads to apoptosis. Most plant R genes encode NOD-like receptor (NLR) proteins. NLR protein domain architecture consists of an NB-ARC domain, which is a nucleotide-binding domain responsible for the conformational changes associated with the activation of the NLR protein. In the inactive form, the NB-ARC domain is bound to adenosine diphosphate (ADP). When a pathogen is sensed, the ADP is exchanged for adenosine triphosphate (ATP) and this induces a conformational change in the NLR protein, which results in HR. At the N-terminus, the NLR has either a Toll-interleukin receptor (TIR) domain (also found in mammalian toll-like receptors) or a coiled-coil (CC) motif. Both TIR and CC domains are implicated in causing cell death during HR. The C-terminus of the NLRs consists of a leucine-rich repeat (LRR) motif, which is involved in sensing the pathogen virulence factors. Mechanism HR is triggered by the plant when it recognizes a pathogen. The identification of a pathogen typically occurs when a virulence gene product, secreted by a pathogen, binds to, or indirectly interacts with, the product of a plant R gene. R genes are highly polymorphic, and many plants produce several different types of R gene products, enabling them to recognize virulence products produced by many different pathogens. In phase one of the HR, the activation of R genes triggers an ion flux, involving an efflux of hydroxide and potassium to the outside of the cells, and an influx of calcium and hydrogen ions into the cells. In phase two, the cells involved in the HR generate an oxidative burst by producing reactive oxygen species (ROS): superoxide anions, hydrogen peroxide, hydroxyl radicals and nitric oxide.
These compounds affect cellular membrane function, in part by inducing lipid peroxidation and by causing lipid damage. The alteration of ion components in the cell and the breakdown of cellular components in the presence of ROS result in the death of affected cells, as well as the formation of local lesions. Reactive oxygen species also trigger the deposition of lignin and callose, as well as the cross-linking of pre-formed hydroxyproline-rich glycoproteins such as P33 to the wall matrix via the tyrosine in the PPPPY motif. These compounds serve to reinforce the walls of cells surrounding the infection, creating a barrier and inhibiting the spread of the infection. Activation of HR also results in disruption of the cytoskeleton and mitochondrial function, as well as metabolic changes, all of which might be implicated in causing cell death. Direct and indirect activation HR can be activated in two main ways: directly and indirectly. Direct binding of the virulence factors to the NLRs can result in the activation of HR. However, this seems to be quite rare. More commonly, the virulence factors target certain cellular proteins that they modify, and this modification is then sensed by NLRs. Indirect recognition seems to be more common, as multiple virulence factors can modify the same cellular protein with the same modifications, thus allowing one receptor to recognize multiple virulence factors. Sometimes, the protein domains targeted by the virulence factors are integrated into the NLRs. An example of this can be observed in plant resistance to the rice blast pathogen, where the RGA5 NLR has a heavy-metal-associated (HMA) domain integrated into its structure, which is targeted by multiple effector proteins. An example of indirect recognition: AvrPphB is a type III effector protein secreted by Pseudomonas syringae. This is a protease which cleaves a cellular kinase called PBS1. The modified kinase is sensed by the RPS5 NLR. The Resistosome Recent structural studies of CC-NLR proteins have suggested that after the virulence factors are sensed, the NLRs assemble into a pentameric structure known as the resistosome. The resistosome seems to have a high affinity for the cellular membrane. When the resistosome is assembled, a helix sticks out from the N-terminus of each NLR, and this creates a pore in the membrane which allows leakage of ions to occur, and thus the cell dies. However, this mechanism is only inferred from the structure, and there are currently no mechanistic studies to support it. It is still not known how the TIR-NLR proteins are activated. Recent research suggests that they require CC-NLR proteins downstream of them, which are then activated to form the resistosomes and induce HR. NLR pairs and networks It is known that NLRs can function individually, but there are also cases where the NLR proteins work in pairs. The pair consists of a sensor NLR and a helper NLR. The sensor NLR is responsible for recognizing the pathogen-secreted effector protein and activating the helper NLR, which then executes the cell death. The genes of both the sensor and the respective helper NLR are usually paired in the genome, and their expression can be controlled by the same promoter. This allows the functional pair, instead of individual components, to be segregated during cell division and also ensures that equal amounts of both NLRs are made in the cell. The receptor pairs work through two main mechanisms: negative regulation or cooperation.
In the negative regulation scenario, the sensor NLR is responsible for negatively regulating the helper NLR and preventing cell death under normal conditions. However, when the effector protein is introduced and recognized by the sensor NLR, the negative regulation of the helper NLR is relieved and HR is induced. In the cooperation mechanism, when the sensor NLR recognizes the effector protein it signals to the helper NLR, thus activating it. Recently, it was discovered that in addition to acting as singletons or pairs, plant NLRs can act in networks. In these networks, there are usually many sensor NLRs paired to relatively few helper NLRs. One example of proteins involved in NLR networks is those belonging to the NRC superclade. It seems that the networks evolved from a duplication event of a genetically linked NLR pair into an unlinked locus, which allowed the new pair to evolve to respond to a new pathogen. This separation seems to provide plasticity to the system, as it allows the sensor NLRs to evolve more rapidly in response to the fast evolution of pathogen effectors, whereas the helper NLR can evolve much more slowly to maintain its ability to induce HR. However, it seems that during evolution new helper NLRs also evolved, presumably because certain sensor NLRs require specific helper NLRs to function optimally. Bioinformatic analysis of plant NLRs has shown that there is a conserved MADA motif at the N-terminus of helper NLRs but not sensor NLRs. Around 20% of all CC-NLRs have the MADA motif, implying the motif's importance for the execution of HR. Regulation Accidental activation of HR through the NLR proteins could cause vast destruction of the plant tissue; thus, the NLRs are kept in an inactive form through tight negative regulation at both the transcriptional and post-translational levels. Under normal conditions, the mRNAs of NLRs are transcribed at very low levels, which results in low levels of the proteins in the cell. The NLRs also require a considerable number of chaperone proteins for their folding. Misfolded proteins are immediately ubiquitinated and degraded by the proteasome. It has been observed that in many cases, if the chaperone proteins involved in NLR biosynthesis are knocked out, HR is abolished and NLR levels are significantly reduced. Intramolecular interactions are also essential for the regulation of HR. The NLR proteins are not linear: the NB-ARC domain is sandwiched in between the LRR and TIR/CC domains. Under normal conditions, there is much more ATP present in the cytoplasm than ADP, and this arrangement of the NLR proteins prevents the spontaneous exchange of ADP for ATP and thus the activation of HR. Only when a virulence factor is sensed is the ADP exchanged for ATP. Mutations in certain components of the plant defence machinery result in HR being activated without the presence of pathogen effector proteins. Some of these mutations are observed in NLR genes and cause these NLR proteins to become auto-active due to disrupted intramolecular regulatory mechanisms. Other mutations causing spontaneous HR are present in proteins involved in ROS production during pathogen invasion. HR is also a temperature-sensitive process, and it has been observed that in many cases plant-pathogen interactions do not induce HR at temperatures above 30 °C, which subsequently leads to increased susceptibility to the pathogen.
The mechanisms behind the influence of temperature on plant resistance to pathogens are not understood in detail; however, research suggests that the NLR protein levels might be important in this regulation. It is also proposed that at higher temperatures the NLR proteins are less likely to form oligomeric complexes, thus inhibiting their ability to induce HR. It has also been shown that HR is dependent on light conditions, which could be linked to the activity of chloroplasts and mainly their ability to generate ROS. Mediators Several enzymes have been shown to be involved in the generation of ROS. For example, copper amine oxidase catalyzes the oxidative deamination of polyamines, especially putrescine, and releases the ROS mediator hydrogen peroxide, along with ammonia. Other enzymes thought to play a role in ROS production include xanthine oxidase, NADPH oxidase, oxalate oxidase, peroxidases, and flavin-containing amine oxidases. In some cases, the cells surrounding the lesion synthesize antimicrobial compounds, including phenolics, phytoalexins, and pathogenesis-related (PR) proteins, including β-glucanases and chitinases. These compounds may act by puncturing bacterial cell walls; or by delaying maturation, disrupting metabolism, or preventing reproduction of the pathogen in question. Studies have suggested that the actual mode and sequence of the dismantling of plant cellular components depends on each individual plant-pathogen interaction, but all HR seems to require the involvement of cysteine proteases. The induction of cell death and the clearance of pathogens also require active protein synthesis, an intact actin cytoskeleton, and the presence of salicylic acid. Pathogen evasion Pathogens have evolved several strategies to suppress plant defense responses. Host processes usually targeted by bacteria include alterations to programmed cell death pathways, inhibition of cell wall-based defenses, and alteration of plant hormone signaling and the expression of defense genes. Systemic immunity Local initiation of HR in response to certain necrotrophic pathogens has been shown to allow plants to develop systemic immunity against the pathogen. Scientists have been trying to exploit the ability of HR to induce systemic resistance in plants in order to create transgenic plants resistant to certain pathogens. Pathogen-inducible promoters have been linked to auto-active NLR genes to induce an HR response only when the pathogen is present and not at any other time. This approach, however, has been mostly unfeasible, as the modification also leads to a substantial reduction in plant yields. Hypersensitive response as a driver for plant speciation It has been noticed in Arabidopsis that sometimes when two different plant lines are crossed together, the offspring show signs of hybrid necrosis. This is due to the parent plants containing incompatible NLRs, which, when expressed together in the same cell, induce spontaneous HR. This observation raised the hypothesis that plant pathogens can lead to the speciation of plants – if plant populations of the same species develop incompatible NLRs in response to different pathogen effectors, this can lead to hybrid necrosis in the F1 offspring, which substantially reduces the fitness of the offspring and gene flow to subsequent generations. Comparison to animal innate immunity Both plants and animals have NLR proteins which seem to have the same biological function – to induce cell death.
The N-termini of plant and animal NLRs vary, but it seems that both have LRR domains at the C-terminus. A big difference between animal and plant NLRs is in what they recognise. Animal NLRs mainly recognise pathogen-associated molecular patterns (PAMPs), while plant NLRs mostly recognise pathogen effector proteins. This makes sense, as NLRs are present inside the cell and plants rarely have intracellular pathogens, except for viruses, which do not have PAMPs as they are rapidly evolving. Animals, on the other hand, do have intracellular pathogens. The vast majority of plant lineages, except for certain algae such as Chlamydomonas, have NLRs. NLRs are also present in many animal species; however, they are not present in, for example, Drosophila melanogaster and other arthropods. Upon recognition of PAMPs by NLRs in animals, the NLRs oligomerise to form a structure known as the inflammasome, which activates pyroptosis. In plants, structural studies have suggested that the NLRs also oligomerise, to form a structure called the resistosome, which also leads to cell death. It seems that in both plants and animals, the formation of the resistosome or the inflammasome, respectively, leads to cell death by forming pores in the membrane. It is inferred from protein structures that in plants the NLRs themselves are responsible for forming pores in the membrane, while in the case of the inflammasome, the pore-forming activity arises from gasdermin D, which is cleaved by caspases as a result of the oligomerisation of the NLRs. Plant cells do not have caspases. See also Plant disease resistance Phytopathogen Plant hormones Systemic acquired resistance Antimicrobial peptide References Immune system Plant physiology Phytopathology
Hypersensitive response
[ "Biology" ]
3,303
[ "Immune system", "Organ systems", "Plant physiology", "Plants" ]
8,895,193
https://en.wikipedia.org/wiki/Bomakellia
Bomakellia kelleri is a species of poorly understood Ediacaran fossil organism represented by only one specimen, discovered in the Ust'-Pinega Formation of the Syuzma River (in Arkhangelsk Oblast, Russia) in rocks dated to 555 million years old. Bomakellia was originally interpreted as an early arthropod. A study by B. M. Waggoner even concluded that the organism was a primitive anomalocarid and erroneously identified the ridges of the supposed cephalon as eyes, making Bomakellia the oldest known animal with vision. This hypothesis, however, has gained neither acceptance nor acknowledgement. A closer examination of the specimen has identified a tetraradial symmetry in the body and a frond-like morphology which closely resembles that of Rangea – the current interpretation of Bomakellia is as a rangeomorph frond, which could possibly mean that it is closely related to the Chinese Paracharnia. See also List of Ediacaran genera Charniidae References Rangeomorpha Charniidae Ediacaran life White Sea fossils Controversial taxa Fossil taxa described in 1985 Species known from a single specimen
Bomakellia
[ "Biology" ]
242
[ "Individual organisms", "Species known from a single specimen" ]
8,895,390
https://en.wikipedia.org/wiki/Poster%20session
A poster presentation, at a congress or conference with an academic or professional focus, is the presentation of research information in the form of a paper poster that conference participants may view. A poster session is an event at which many such posters are presented. Poster sessions are particularly prominent at scientific conferences such as medical and engineering congresses. Academic conference To participate in a poster session, an abstract is submitted to the academic or professional society for consideration and on occasion may include peer-reviewed material qualified for journal publication. Selected poster abstracts are then designated for oral presentation or poster presentation. Quite often, poster content is embargoed from release to the public until the commencement of the poster session. Typically, a separate hall or area of a convention floor is reserved for the poster session, where researchers accompany a paper poster illustrating their research methods and outcomes. Each research project is usually presented on a conference schedule for a period ranging from 10 minutes to several hours. Very large events may feature thousands of poster presentations over a number of days. Presentations usually consist of affixing the research poster to a portable board, with the researcher in attendance answering questions posed by passing colleagues. The poster boards come in standard sizes, and the size of the poster itself varies according to whether the conference organizers decide to have one, two, or more posters on each board face. Posters are often created using a presentation program such as PowerPoint and may be printed on a large format printer. Glossy paper, matte paper (with or without gloss or matte lamination), satin paper, vinyl, and printable fabric are common substrates for poster presentations. On occasion, poster sessions are displayed digitally on large monitors, and this allows for features such as embedded videos, narrations and external links. Collections of digital posters can successfully be viewed on desktop monitors with sufficient resolution and pixel density. This provides a means for academic and professional societies to create digital archives of current and past poster sessions. Organizers of digital poster sessions are challenged with managing the logistics of presenting hundreds, if not thousands, of posters on a limited number of monitors. With the traditional printed poster session, attendees can spend as much, or as little, time at each poster depending on interest, and the posters are continuously on display. They can interact with authors and discuss the research without time constraints. With a digital poster session, presentations are usually timed, with a limited amount of exposure, and follow a set schedule. Academic assessment Poster sessions are used as an alternative to oral presentations as a form of academic assessment. See also Abstract management Academic conference References External links Creating Effective Poster Presentations, North Carolina State University Academic conferences Presentation Research
Poster session
[ "Technology" ]
524
[ "Multimedia", "Presentation" ]
8,895,547
https://en.wikipedia.org/wiki/Interventional%20magnetic%20resonance%20imaging
Interventional magnetic resonance imaging, also interventional MRI or iMRI, is the use of magnetic resonance imaging (MRI) to do interventional radiology procedures. Because of the lack of harmful effects on the patient and the operator, MR is well suited for interventional radiology, where the images produced by an MRI scanner are used to guide a minimally-invasive procedure intraoperatively and/or interactively. Interventional MRI can be used for a variety of specialized procedures. iMRI systems are often used for doing biopsies of lesions, resections of tumors, and guiding thermal ablation of tissue, as well as other procedures. It is commonly used in neurosurgery, where every millimeter of tissue spared in surgery can make a difference for patient recovery. The non-magnetic environment required by the scanner and the strong radiofrequency and quasi-static magnetic fields generated by the scanner hardware require the use of specialized instruments. For example, the use of non-magnetic (e.g. titanium) surgical instruments and MR-compatible patient surveillance accessories, in addition to the MRI scanner itself, increases the cost of iMRI. Often required is the use of an "open bore" magnet, which permits the operating staff better access to patients during the operation. Such open bore magnets are often lower field magnets, typically in the 0.2 tesla range, which decreases their sensitivity and temporal efficiency but also decreases the radio frequency power potentially absorbed by the patient during a protracted operation. Higher field magnet systems are beginning to be deployed in intraoperative imaging suites, which can combine high-field MRI with a surgical suite and even CT in a series of interconnected rooms. Specialty high-field interventional MR devices, such as the IMRIS system, can actually bring a high-field magnet to the patient within the operating theatre, permitting the use of standard surgical tools while the magnet is in an adjoining space. See also Interventional radiology References Interventional Medical monitoring
Interventional magnetic resonance imaging
[ "Chemistry" ]
403
[ "Nuclear magnetic resonance", "Magnetic resonance imaging" ]
8,895,763
https://en.wikipedia.org/wiki/Oren%E2%80%93Nayar%20reflectance%20model
The Oren–Nayar reflectance model, developed by Michael Oren and Shree K. Nayar, is a reflectivity model for diffuse reflection from rough surfaces. It has been shown to accurately predict the appearance of a wide range of natural surfaces, such as concrete, plaster, sand, etc. Introduction Reflectance is a physical property of a material that describes how it reflects incident light. The appearance of various materials is determined to a large extent by their reflectance properties. Most reflectance models can be broadly classified into two categories: diffuse and specular. In computer vision and computer graphics, the diffuse component is often assumed to be Lambertian. A surface that obeys Lambert's Law appears equally bright from all viewing directions. This model for diffuse reflection was proposed by Johann Heinrich Lambert in 1760 and has been perhaps the most widely used reflectance model in computer vision and graphics. For a large number of real-world surfaces, such as concrete, plaster, sand, etc., however, the Lambertian model is an inadequate approximation of the diffuse component. This is primarily because the Lambertian model does not take the roughness of the surface into account. Rough surfaces can be modelled as a set of facets with different slopes, where each facet is a small planar patch. Since photo receptors of the retina and pixels in a camera are both finite-area detectors, substantial macroscopic (much larger than the wavelength of incident light) surface roughness is often projected onto a single detection element, which in turn produces an aggregate brightness value over many facets. Whereas Lambert’s law may hold well when observing a single planar facet, a collection of such facets with different orientations is guaranteed to violate Lambert’s law. The primary reason for this is that the foreshortened facet areas will change for different viewing directions, and thus the surface appearance will be view-dependent. Analysis of this phenomenon has a long history and can be traced back almost a century. Past work has resulted in empirical models designed to fit experimental data as well as theoretical results derived from first principles. Much of this work was motivated by the non-Lambertian reflectance of the moon. The Oren–Nayar reflectance model, developed by Michael Oren and Shree K. Nayar in 1993, predicts reflectance from rough diffuse surfaces for the entire hemisphere of source and sensor directions. The model takes into account complex physical phenomena such as masking, shadowing and interreflections between points on the surface facets. It can be viewed as a generalization of Lambert’s law. Today, it is widely used in computer graphics and animation for rendering rough surfaces. It also has important implications for human vision and computer vision problems, such as shape from shading, photometric stereo, etc. Formulation The surface roughness model used in the derivation of the Oren-Nayar model is the microfacet model, proposed by Torrance and Sparrow, which assumes the surface to be composed of long symmetric V-cavities. Each cavity consists of two planar facets. The roughness of the surface is specified using a probability function for the distribution of facet slopes. In particular, the Gaussian distribution is often used, and thus the variance of the Gaussian distribution, σ², is a measure of the roughness of the surfaces. The standard deviation of the facet slopes (the gradient of the surface elevation), σ, ranges in [0, ∞).
In the Oren–Nayar reflectance model, each facet is assumed to be Lambertian in reflectance. Let θ_i and θ_r be the polar angles of the incident and reflected directions, and φ_i and φ_r the corresponding azimuth angles. If E₀ is the irradiance when the facet is illuminated head-on, the radiance L_r of the light reflected by the faceted surface, according to the Oren-Nayar model, is

L_r = L₁ + L₂

where the direct illumination term L₁ and the term L₂ that describes bounces of light between the facets are defined as follows:

L₁ = (ρ/π) E₀ cos θ_i [C₁ + C₂ cos(φ_i − φ_r) tan β + C₃ (1 − |cos(φ_i − φ_r)|) tan((α + β)/2)]
L₂ = 0.17 (ρ²/π) E₀ cos θ_i (σ²/(σ² + 0.13)) [1 − cos(φ_i − φ_r) (2β/π)²]

where

C₁ = 1 − 0.5 σ²/(σ² + 0.33),
C₂ = 0.45 (σ²/(σ² + 0.09)) sin α if cos(φ_i − φ_r) ≥ 0, and 0.45 (σ²/(σ² + 0.09)) (sin α − (2β/π)³) otherwise,
C₃ = (1/8) (σ²/(σ² + 0.09)) (4αβ/π²)²,

α = max(θ_i, θ_r), β = min(θ_i, θ_r), ρ is the albedo of the surface, and σ is the roughness of the surface. In the case of σ = 0 (i.e., all facets in the same plane), we have C₁ = 1, C₂ = C₃ = 0, and L₂ = 0, and thus the Oren-Nayar model simplifies to the Lambertian model:

L_r = (ρ/π) E₀ cos θ_i

Results Here is a real image of a matte vase illuminated from the viewing direction, along with versions rendered using the Lambertian and Oren-Nayar models. It shows that the Oren-Nayar model predicts the diffuse reflectance for rough surfaces more accurately than the Lambertian model. Here are rendered images of a sphere using the Oren-Nayar model, corresponding to different surface roughnesses (i.e., different values of σ): See also List of common shading algorithms Phong reflection model Gamma correction References External links The official project page for the Oren-Nayar model at Shree Nayar's CAVE research group webpage Scattering, absorption and radiative transfer (optics) Shading
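For rendering, the model described in the Formulation section above is often used in its simplified (qualitative) form, which keeps only the direct term with two roughness-dependent coefficients A and B. A minimal sketch of that form, assuming angles in radians; the function name and sample values are illustrative:

```python
import numpy as np

def oren_nayar_simplified(theta_i, theta_r, phi_i, phi_r, sigma, albedo, E0=1.0):
    """Simplified Oren-Nayar model: reflected radiance for incidence angles
    (theta_i, phi_i), viewing angles (theta_r, phi_r), roughness sigma
    (radians) and surface albedo."""
    s2 = sigma ** 2
    A = 1.0 - 0.5 * s2 / (s2 + 0.33)
    B = 0.45 * s2 / (s2 + 0.09)
    alpha = max(theta_i, theta_r)
    beta = min(theta_i, theta_r)
    return (albedo / np.pi) * E0 * np.cos(theta_i) * (
        A + B * max(0.0, np.cos(phi_i - phi_r)) * np.sin(alpha) * np.tan(beta))

# sigma = 0 gives A = 1, B = 0 and recovers the Lambertian value
print(oren_nayar_simplified(0.5, 0.3, 0.0, 1.0, sigma=0.0, albedo=0.9))
print(oren_nayar_simplified(0.5, 0.3, 0.0, 1.0, sigma=0.35, albedo=0.9))
```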
Oren–Nayar reflectance model
[ "Chemistry" ]
1,012
[ "Scattering", " absorption and radiative transfer (optics)" ]
8,895,933
https://en.wikipedia.org/wiki/ReBoot%3A%20My%20Two%20Bobs
ReBoot: My Two Bobs is a 2001 Canadian television film based on the series ReBoot, which continues the events set in motion by the cliffhanger ending in Daemon Rising. Along with Daemon Rising, the two films are considered the fourth season. It was originally broadcast in Canada as a film, but was later rebroadcast as four individual episodes, titled "My Two Bobs", "Life's a Glitch", "Null-Bot of the Bride", and "Crouching Binome, Hidden Virus". It was released on DVD along with Daemon Rising. Plot At the end of Daemon Rising, Bob and Dot got engaged. To the confusion of everyone, however, a portal then opened from the web, and Ray Tracer and another Bob step through it. Inasmuch as the second Bob looks like the original from Seasons 1 and 2, Dot calls him Bob and calls the Bob who merged with his keytool Glitch Bob. Most of My Two Bobs is taken up by the efforts of Dot, the two Bobs themselves, and the other Mainframers to ascertain which Bob is the original and which is the copy, and to come to terms with the situation in general. Because Bob can reboot and Glitch Bob can't, Bob spends much of the first half of the film bonding with Matrix and the others by helping them win games. After some counseling from Phong and Mouse, Dot decides to marry the new Bob, whereupon Glitch Bob — the nominal original — earnestly attempts to return himself to his original form in order to win Dot back. His efforts ultimately fail and leave him in a catatonic state, covered in a dark, starry crystal that proves to be impenetrable. Dot continues with her wedding plans as Glitch Bob is treated at the Supercomputer. She laments that her father Welman is too nullified to attend, but when the infection in Enzo's icon is transferred to Welman, he becomes intelligible enough to walk her down the aisle in a mechanical suit. Glitch Bob's condition steadily worsens on Dot's wedding day. The impenetrable starry substance covering him gradually dims completely, and the Guardians believe that they have lost him. This moment of crisis prompts all of the other keytools (which had disconnected from the Guardians when they were infected by Daemon) to return to the Supercomputer to separate Glitch from Bob and revive him, before returning to the Guardians. The Guardians discover that this Bob's web-degraded code no longer matches what they have on file, suggesting that he is in fact the copy. Web Bob returns to Mainframe to stop the wedding, but Dot rejects him in favor of the new Bob. Even Glitch seems to leave him for the new Bob, leading everyone to believe that Web Bob is the copy. Just as Web Bob starts to leave in despair, Glitch steals some code from the groom and gives it to Web Bob, which restores his body to its original form. The loss of that code causes the Bob Dot was marrying to shapeshift, revealing a terrible truth: Web Bob was the original, while the new Bob had been Megabyte in disguise. Bob engages Megabyte in a spectacular battle in the church, but Megabyte escapes by disguising himself as a Binome. An investigation reveals that Megabyte has become a Trojan Horse virus, which gives him the power to shapeshift and effectively disguise himself as anyone. It is also revealed that Megabyte had inadvertently stolen part of Bob's Guardian code when he crushed Glitch at the end of Season 2, and he used that code to impersonate him until Glitch returned it to the real Bob during the wedding. Meanwhile, Megabyte starts disguising himself as other Mainframers, including Mike the TV, and reassembles his viral army.
Megabyte eludes capture by using various aliases and a doppelgänger and ultimately infiltrates the war room by taking on the form of Frisket. After suborning various personnel, including Dot's father, and capturing Enzo, Megabyte gains "complete control" of the Principal Office. The film ends with him proclaiming that he will now follow his predatory virus nature; he is no longer out to take over Mainframe again or even the Supercomputer, he just wants revenge on the Mainframers. His last words, which are the final words of the series, are "Prepare yourselves... for the hunt!" Cast Kathleen Barr: Dot Matrix Michael Benyaer: Bob Garry Chalk: Slash Michael Donovan: Mike the TV / Phong Paul Dobson: Matrix (adult Enzo) Christopher Gray: Enzo Matrix Tony Jay: Megabyte Scott McNeil: Hack Shirley Millner: Hexadecimal References External links 2001 television films Canadian animated television films 2001 computer-animated films ReBoot Films about computing Films set in computers Cyberpunk films Mainframe Studios films Films based on television series Television films based on television series 2001 films 2000s Canadian animated films
ReBoot: My Two Bobs
[ "Technology" ]
1,062
[ "Works about computing", "Films about computing" ]
8,896,569
https://en.wikipedia.org/wiki/Inocybe%20aeruginascens
Inocybe aeruginascens is a member of the genus Inocybe, which is widely distributed in Europe. The species was first documented by I. Ferencz in Ócsa, Hungary on June 15, 1965. Description Inocybe aeruginascens is a small mycorrhizal mushroom with a conic to convex cap which becomes plane in age and is often fibrillose near the margin. It is usually less than 5 cm across, has a slightly darker blunt umbo and an incurved margin when young. The cap color varies from buff to light yellow brown, usually with greenish stains which disappear when the mushroom dries. The gills are adnate to nearly free, numerous, colored pale brown, grayish brown, or tobacco brown. The fruit body has greenish tones and bruises blue where damaged. The spores are smooth and ellipsoid, measuring 6–9.5 × 4.5 micrometres and forming a clay brown spore print. The stem is 2–7 cm long, 3 to 8 mm thick, and is of equal width for the whole length, sometimes with some swelling at the base. It is solid, pale grey, becoming bluish green from the bottom up. The stem is fibrous and appears to be covered with fine powder near the top. It has a partial veil which often disappears in age and an unpleasant soapy odor. Distribution and habitat Inocybe aeruginascens is widely distributed in temperate areas and has been reported in central Europe and western North America. It grows in moist sandy soils in a mycorrhizal relationship with poplar, linden, oak and willow trees. Edibility No toxicology information currently exists for Inocybe aeruginascens; however, a minimum of "23 unintentional intoxications" was reported in 1982 by Drewitz and Babos. Unintentional consumption could be due to its similarity to Marasmius oreades. The symptoms of "intoxication" were hallucinogenic, leading Gartz and Drewitz to eventually discover the first source of psilocybin in any Inocybe species. There are no known deaths directly related to consumption; however, its edibility is not yet conclusively established. Biochemistry Inocybe aeruginascens contains the previously known alkaloids psilocybin, psilocin, and baeocystin, as well as a newly discovered indoleamine, 4-phosphoryloxy-N,N,N-trimethyltryptamine. Jochen Gartz named this new substance aeruginascin after the mushroom species. Aeruginascin is the N-trimethyl analogue of psilocybin. Inocybe aeruginascens and Pholiotina cyanopus are the only known natural sources of aeruginascin. See also List of Inocybe species List of Psilocybin mushrooms References External links Extraction and analysis of indole derivatives from fungal biomass New Aspects of the Occurrence, Chemistry, and Cultivation of European Hallucinogen Mushrooms aeruginascens, Inocybe Psychoactive fungi Psychedelic tryptamine carriers Fungi described in 1968 Fungi of Europe Fungus species
Inocybe aeruginascens
[ "Biology" ]
659
[ "Fungi", "Fungus species" ]
8,897,050
https://en.wikipedia.org/wiki/Floor%20vibration
In the design of floor systems in buildings, vibrations caused by walking, dancing, mechanical equipment or other rhythmic excitation may cause an annoyance to the occupants or impede the function of sensitive equipment. A calculation procedure to analyze steel framed floor systems is given in a design guide published by the American Institute of Steel Construction. Higher building occupancy, resulting from increased leasing activity in office buildings, has also been associated with increased floor vibration problems. Modern concrete under-floors often specify a thin (around 3 mm) layer of resilient, sound-deadening material above the structural slab, typically a long-lived foam plastic, since it would be expensive to replace. See also Architectural acoustics External links American Institute of Steel Construction Floor Vibration Analysis Software Building defects Floors
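As a rough illustration of the kind of vibration check such design guides describe, a floor's fundamental natural frequency can be estimated from its midspan deflection under the supported weight. The sketch below assumes the f_n = 0.18·√(g/Δ) relation used in the AISC design guide; the deflection value is a made-up example:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def natural_frequency_hz(midspan_deflection_m):
    """Estimate a floor system's fundamental natural frequency from its
    midspan deflection under the supported weight: f_n = 0.18 * sqrt(g / delta)."""
    return 0.18 * math.sqrt(G / midspan_deflection_m)

# Example: 8 mm of midspan deflection gives roughly a 6.3 Hz floor; walking
# excitation is generally most troublesome for such low-frequency floors.
print(round(natural_frequency_hz(0.008), 1))
```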
Floor vibration
[ "Materials_science", "Engineering" ]
152
[ "Structural engineering", "Floors", "Building defects", "Architecture stubs", "Mechanical failure", "Architecture" ]
8,897,883
https://en.wikipedia.org/wiki/GoTo%20%28telescopes%29
In amateur astronomy, "GoTo" refers to a type of telescope mount and related software that can automatically point a telescope at astronomical objects that the user selects. Both axes of a GoTo mount are driven by a motor and controlled by a computer. It may be either a microprocessor-based integrated controller or an external personal computer. This differs from the single-axis semi-automated tracking of a traditional clock-drive equatorial mount. The user can command the mount to point the telescope to the celestial coordinates that the user inputs, or to objects in a pre-programmed database including ones from the Messier catalogue, the New General Catalogue, and even major Solar System bodies (the Sun, Moon, and planets). Like a standard equatorial mount, equatorial GoTo mounts can track the night sky by driving the right ascension axis. Since both axes are computer controlled, GoTo technology also allows telescope manufacturers to add equatorial tracking to mechanically simpler altazimuth mounts. How a GoTo mount works GoTo mounts are pre-aligned before use. When it is powered on, it may ask for the user's latitude, longitude, time, and date. It can also get this data from a GPS receiver connected to the telescope or built into the telescope mount itself, and the mount controller can have its own real time clock. Alt-azimuth mounts Alt-azimuth GoTo mounts need to be aligned on a known "alignment star", which the user will centre in the eyepiece. From the inputted time and location and the star's altitude and azimuth the telescope mount will know its orientation to the entire sky and can then find any object. For accuracy purposes, a second alignment star, as far away as possible from the first and if possible close to the object to be observed, may be used. This is because the mount might not be level with the ground; this will cause the telescope to accurately point to objects close to the initial alignment star, but less accurately for an object on the other side of the sky. An additional reason for using two alignment stars is that the time and location information entered by the user may not be accurate. For example, a one-degree inaccuracy in the latitude or a 4-minute inaccuracy in the time may result in the telescope pointing a degree away from the user's target. When the user selects an object from the mount's database, the object's altitude and azimuth will be computed from its right ascension and declination. Then, the mount will move the telescope to that altitude and azimuth and track the object so it remains in the field of view despite Earth's rotation. Moving to the location is called slewing. When astrophotography is involved, a further motor has to be used to rotate the camera to match the field of view for long exposure photographs. Equatorial mounts For an equatorial GoTo telescope mount, the user must align the mount by hand with either the north celestial pole or the south celestial pole. Assuming the user is accurate in the alignment, the mount points the telescope to a bright star, asking the user to center it in the eyepiece. Since the star's correct right ascension and declination is already known, the distance from what the user considered to be the celestial pole and the actual pole can be roughly deduced. Using another alignment star can further improve the accuracy of the alignment. After alignment the telescope mount will then know its orientation with respect to the night sky, and can point to any right-ascension and declination coordinates. 
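A minimal sketch of the altitude/azimuth computation described above, assuming the mount already knows the local sidereal time and the observer's latitude; the function name and the sample coordinates are illustrative:

```python
import numpy as np

def radec_to_altaz(ra_hours, dec_deg, lst_hours, lat_deg):
    """Convert an object's equatorial coordinates (right ascension, declination)
    to the altitude and azimuth a GoTo mount slews to.

    ra_hours, lst_hours: right ascension and local sidereal time, in hours.
    dec_deg, lat_deg: declination and observer latitude, in degrees.
    Returns (altitude, azimuth) in degrees, azimuth measured from north.
    """
    hour_angle = np.radians((lst_hours - ra_hours) * 15.0)
    dec, lat = np.radians(dec_deg), np.radians(lat_deg)
    alt = np.arcsin(np.sin(dec) * np.sin(lat)
                    + np.cos(dec) * np.cos(lat) * np.cos(hour_angle))
    az = np.arctan2(-np.cos(dec) * np.sin(hour_angle),
                    np.sin(dec) * np.cos(lat)
                    - np.cos(dec) * np.sin(lat) * np.cos(hour_angle))
    return np.degrees(alt), np.degrees(az) % 360.0

# Illustrative values only: an object at RA 18.6 h, Dec +38.8 deg, seen from
# latitude 52 deg N when the local sidereal time is 20 h.
print(radec_to_altaz(18.6, 38.8, 20.0, 52.0))
```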
When the user selects an object to view, the mount's software looks up the object's right ascension and declination and slews (moves) to those coordinates. To track the object so that it stays in the eyepiece despite Earth's rotation, only the right-ascension axis is moved. Smart telescopes Smart telescopes were introduced to the consumer market in the 2010s. They are self contained astronomical imaging devices that combine a small (50mm to 114mm objective) telescope and GoTo technology with pre-packaged software designed for astrophotography of deep-sky objects. They have no optical eyepiece or provision for use by eye but instead send an image gathered over time via image stacking to the user's smartphone or tablet, which also controls the device through an app. See also List of telescope parts and construction List of telescope types Software Cartes du Ciel Hallo Northern Sky (HN Sky) KStars Stellarium XEphem References External links Aligning an equatorial-mounted GoTo telescope in the northern hemisphere Telescopes Astronomy software
GoTo (telescopes)
[ "Astronomy" ]
947
[ "Astronomy software", "Works about astronomy", "Telescopes", "Astronomical instruments" ]
8,898,012
https://en.wikipedia.org/wiki/Naphtholphthalein
α-Naphtholphthalein (C28H18O4) is a phthalein dye used as a pH indicator with a visual transition from colorless/reddish to greenish blue at pH 7.3–8.7. References PH indicators 1-Naphthols Phthalides Triarylmethane dyes
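A trivial sketch of how the quoted transition range would be applied when reading the indicator; the function name and the sharp cutoffs are a simplification of mine:

```python
def naphtholphthalein_color(ph):
    """Approximate visual color of alpha-naphtholphthalein at a given pH,
    using the 7.3-8.7 transition range quoted above."""
    if ph < 7.3:
        return "colorless to reddish"
    if ph > 8.7:
        return "greenish blue"
    return "transition (intermediate color)"

print(naphtholphthalein_color(7.0))  # acidic side
print(naphtholphthalein_color(9.0))  # basic side
```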
Naphtholphthalein
[ "Chemistry", "Materials_science" ]
70
[ "Titration", "PH indicators", "Chromism", "Chemical tests", "Equilibrium chemistry" ]
8,898,050
https://en.wikipedia.org/wiki/Electron%20tomography
Electron tomography (ET) is a tomography technique for obtaining detailed 3D structures of sub-cellular, macro-molecular, or materials specimens. Electron tomography is an extension of traditional transmission electron microscopy and uses a transmission electron microscope to collect the data. In the process, a beam of electrons is passed through the sample at incremental degrees of rotation around the center of the target sample. This information is collected and used to assemble a three-dimensional image of the target. For biological applications, the typical resolution of ET systems is in the 5–20 nm range, suitable for examining supra-molecular multi-protein structures, although not the secondary and tertiary structure of an individual protein or polypeptide. Recently, atomic resolution in 3D electron tomography reconstructions has been demonstrated. BF-TEM and ADF-STEM tomography In the field of biology, bright-field transmission electron microscopy (BF-TEM) and high-resolution TEM (HRTEM) are the primary imaging methods for tomography tilt series acquisition. However, there are two issues associated with BF-TEM and HRTEM. First, acquiring an interpretable 3-D tomogram requires that the projected image intensities vary monotonically with material thickness. This condition is difficult to guarantee in BF/HRTEM, where image intensities are dominated by phase-contrast with the potential for multiple contrast reversals with thickness, making it difficult to distinguish voids from high-density inclusions. Second, the contrast transfer function of BF-TEM is essentially a high-pass filter – information at low spatial frequencies is significantly suppressed – resulting in an exaggeration of sharp features. However, the technique of annular dark-field scanning transmission electron microscopy (ADF-STEM), which is typically used on material specimens, more effectively suppresses phase and diffraction contrast, providing image intensities that vary with the projected mass-thickness of samples up to micrometres thick for materials with low atomic number. ADF-STEM also acts as a low-pass filter, eliminating the edge-enhancing artifacts common in BF/HRTEM. Thus, provided that the features can be resolved, ADF-STEM tomography can yield a reliable reconstruction of the underlying specimen, which is extremely important for its application in materials science. For 3D imaging, the resolution is traditionally described by the Crowther criterion. In 2010, a 3D resolution of 0.5±0.1×0.5±0.1×0.7±0.2 nm was achieved with single-axis ADF-STEM tomography. Atomic Electron Tomography (AET) Atomic level resolution in 3D electron tomography reconstructions has been demonstrated. Reconstructions of crystal defects such as stacking faults, grain boundaries, dislocations, and twinning in structures have been achieved. This method is relevant to the physical sciences, where cryo-EM techniques cannot always be used to locate the coordinates of individual atoms in disordered materials. AET reconstructions are achieved using the combination of an ADF-STEM tomographic tilt series and iterative algorithms for reconstruction. Currently, algorithms such as the real-space algebraic reconstruction technique (ART) and the fast Fourier transform equal slope tomography (EST) are used to address issues such as image noise, sample drift, and limited data. ADF-STEM tomography has recently been used to directly visualize the atomic structure of screw dislocations in nanoparticles.
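The Crowther criterion mentioned above can be evaluated directly: for an object of diameter D reconstructed from N equally spaced projections, the achievable resolution is d = πD/N. A minimal sketch with illustrative numbers:

```python
import math

def crowther_resolution(diameter, n_projections):
    """Crowther criterion: resolution d = pi * D / N achievable when an
    object of diameter D is reconstructed from N equally spaced projections."""
    return math.pi * diameter / n_projections

# Example: a 100 nm specimen imaged with 70 tilt images supports
# roughly 4.5 nm resolution.
print(round(crowther_resolution(100.0, 70), 1))
```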
AET has also been used to find the 3D coordinates of 3,769 atoms in a tungsten needle with 19 pm precision and 20,000 atoms in a multiply twinned palladium nanoparticle. The combination of AET with electron energy loss spectroscopy (EELS) allows for investigation of electronic states in addition to 3D reconstruction. Challenges to atomic level resolution from electron tomography include the need for better reconstruction algorithms and increased precision of tilt angle required to image defects in non-crystalline samples. Different tilting methods The most popular tilting methods are the single-axis and the dual-axis tilting methods. The geometry of most specimen holders and electron microscopes normally precludes tilting the specimen through a full 180° range, which can lead to artifacts in the 3D reconstruction of the target. Standard single-tilt sample holders have a limited rotation of ±80°, leading to a missing wedge in the reconstruction. A solution is to use needle-shaped samples to allow for full rotation. By using dual-axis tilting, the reconstruction artifacts are reduced compared to single-axis tilting. However, twice as many images need to be taken. Another method of obtaining a tilt-series is the so-called conical tomography method, in which the sample is tilted, and then rotated a complete turn. See also Tomography Tomographic reconstruction 3D reconstruction Cryo-electron tomography Positron emission tomography Crowther criterion X-ray computed tomography tomviz tomography software imod tomography software X-ray diffraction computed tomography References Electron microscopy Multidimensional signal processing Condensed matter physics
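A toy version of the algebraic reconstruction technique (ART) named above, written as a dense Kaczmarz row-action iteration; production tomography codes use sparse projection operators and real tilt-series data, so this only illustrates the update rule:

```python
import numpy as np

def art_reconstruct(A, b, n_sweeps=200, relax=0.5):
    """Solve A @ x = b for the unknown voxel values x, where each row of A
    holds one ray's projection weights and b the measured projection data."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            row = A[i]
            norm2 = row @ row
            if norm2 > 0.0:
                # Kaczmarz update: correct x along this ray's weight vector
                x += relax * (b[i] - row @ x) / norm2 * row
    return x

# Tiny consistency check on a random system (illustrative only)
rng = np.random.default_rng(0)
A = rng.random((40, 10))
x_true = rng.random(10)
x_rec = art_reconstruct(A, A @ x_true)
print(np.linalg.norm(A @ x_rec - A @ x_true))  # residual shrinks toward zero
```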
Electron tomography
[ "Physics", "Chemistry", "Materials_science", "Engineering" ]
1,039
[ "Electron", "Electron microscopy", "Phases of matter", "Materials science", "Condensed matter physics", "Microscopy", "Matter" ]
8,898,329
https://en.wikipedia.org/wiki/Frequency%20scaling
In computer architecture, frequency scaling (also known as frequency ramping) is the technique of increasing a processor's frequency so as to enhance the performance of the system containing the processor in question. Frequency ramping was the dominant force in commodity processor performance increases from the mid-1980s until roughly the end of 2004. The effect of processor frequency on computer speed can be seen by looking at the equation for computer program runtime: runtime = (instructions per program) × (cycles per instruction) × (time per cycle), where instructions per program is the total instructions being executed in a given program, cycles per instruction is a program-dependent, architecture-dependent average value, and time per cycle is by definition the inverse of processor frequency. An increase in frequency thus decreases runtime. However, power consumption in a chip is given by the equation P = C × V² × F, where P is power consumption, C is the capacitance being switched per clock cycle, V is voltage, and F is the processor frequency (cycles per second). Increases in frequency thus increase the amount of power used in a processor. Increasing processor power consumption led ultimately to Intel's May 2004 cancellation of its Tejas and Jayhawk processors, which is generally cited as the end of frequency scaling as the dominant computer architecture paradigm. Moore's Law was still in effect when frequency scaling ended. Despite power issues, transistor densities were still doubling every 18 to 24 months. With the end of frequency scaling, new transistors (which are no longer needed to facilitate frequency scaling) are used to add extra hardware, such as additional cores, to facilitate parallel computing, a technique that is being referred to as parallel scaling. The end of frequency scaling as the dominant cause of processor performance gains has caused an industry-wide shift to parallel computing in the form of multicore processors. See also Dynamic frequency scaling Overclocking Underclocking Voltage scaling References Computer architecture Central processing unit
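A small sketch of the two relationships above; all numbers are illustrative:

```python
def runtime_s(instructions, cpi, freq_hz):
    """runtime = instructions x CPI x (1 / f)."""
    return instructions * cpi / freq_hz

def dynamic_power_w(switched_capacitance_f, voltage_v, freq_hz):
    """P = C x V^2 x F (dynamic power of a CMOS chip)."""
    return switched_capacitance_f * voltage_v ** 2 * freq_hz

print(runtime_s(1e9, 1.2, 2.0e9))         # 1e9 instructions at CPI 1.2, 2 GHz: 0.6 s
print(runtime_s(1e9, 1.2, 4.0e9))         # doubling f halves runtime: 0.3 s
# ...but at fixed voltage the dynamic power doubles with f:
print(dynamic_power_w(1e-9, 1.1, 2.0e9))  # 2.42 W
print(dynamic_power_w(1e-9, 1.1, 4.0e9))  # 4.84 W
```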
Frequency scaling
[ "Technology", "Engineering" ]
381
[ "Computers", "Computer engineering", "Computer architecture" ]
351,417
https://en.wikipedia.org/wiki/Sky%20father
In comparative mythology, sky father is a term for a recurring concept in polytheistic religions of a sky god who is addressed as a "father", often the father of a pantheon, and who is often either a reigning or former King of the Gods. The concept of "sky father" may also be taken to include Sun gods with similar characteristics, such as Ra. The concept is complementary to an "earth mother". "Sky Father" is a direct translation of the Vedic Dyaus Pita; the Greek Zeûs Pater and Roman Jupiter are etymological reflexes of the same Proto-Indo-European deity's name, *Dyēus Ph₂tḗr. While there are numerous parallels adduced from outside of Indo-European mythology, there are exceptions (e.g., in Egyptian mythology, Nut is the sky mother and Geb is the earth father). See also Earth mother God in Abrahamic religions Sky deity Thunder god Worship of heavenly bodies References Father Comparative mythology Mythological archetypes Fatherhood
Sky father
[ "Physics" ]
226
[ "Weather", "Sky and weather deities", "Physical phenomena" ]
351,549
https://en.wikipedia.org/wiki/List%20of%20algebraic%20geometry%20topics
This is a list of algebraic geometry topics, by Wikipedia page. Classical topics in projective geometry Affine space Projective space Projective line, cross-ratio Projective plane Line at infinity Complex projective plane Complex projective space Plane at infinity, hyperplane at infinity Projective frame Projective transformation Fundamental theorem of projective geometry Duality (projective geometry) Real projective plane Real projective space Segre embedding of a product of projective spaces Rational normal curve Algebraic curves Conics, Pascal's theorem, Brianchon's theorem Twisted cubic Elliptic curve, cubic curve Elliptic function, Jacobi's elliptic functions, Weierstrass's elliptic functions Elliptic integral Complex multiplication Weil pairing Hyperelliptic curve Klein quartic Modular curve Modular equation Modular function Modular group Supersingular primes Fermat curve Bézout's theorem Brill–Noether theory Genus (mathematics) Riemann surface Riemann–Hurwitz formula Riemann–Roch theorem Abelian integral Differential of the first kind Jacobian variety Generalized Jacobian Moduli of algebraic curves Hurwitz's theorem on automorphisms of a curve Clifford's theorem on special divisors Gonality of an algebraic curve Weil reciprocity law Algebraic geometry codes Algebraic surfaces Enriques–Kodaira classification List of algebraic surfaces Ruled surface Cubic surface Veronese surface Del Pezzo surface Rational surface Enriques surface K3 surface Hodge index theorem Elliptic surface Surface of general type Zariski surface Algebraic geometry: classical approach Algebraic variety Hypersurface Quadric (algebraic geometry) Dimension of an algebraic variety Hilbert's Nullstellensatz Complete variety Elimination theory Gröbner basis Projective variety Quasiprojective variety Canonical bundle Complete intersection Serre duality Spaltenstein variety Arithmetic genus, geometric genus, irregularity Tangent space, Zariski tangent space Function field of an algebraic variety Ample line bundle Ample vector bundle Linear system of divisors Birational geometry Blowing up Resolution of singularities Rational variety Unirational variety Ruled variety Kodaira dimension Canonical ring Minimal model program Intersection theory Intersection number Chow ring Chern class Serre's multiplicity conjectures Albanese variety Picard group Modular form Moduli space Modular equation J-invariant Algebraic function Algebraic form Addition theorem Invariant theory Symbolic method of invariant theory Geometric invariant theory Toric variety Deformation theory Singular point, non-singular Singularity theory Newton polygon Weil conjectures Complex manifolds Kähler manifold Calabi–Yau manifold Stein manifold Hodge theory Hodge cycle Hodge conjecture Algebraic geometry and analytic geometry Mirror symmetry Algebraic groups Linear algebraic group Additive group Multiplicative group Algebraic torus Reductive group Borel subgroup Radical of an algebraic group Unipotent radical Lie-Kolchin theorem Haboush's theorem (also known as the Mumford conjecture) Group scheme Abelian variety Theta function Grassmannian Flag manifold Weil restriction Differential Galois theory Contemporary foundations Commutative algebra Prime ideal Valuation (algebra) Krull dimension Regular local ring Regular sequence Cohen–Macaulay ring Gorenstein ring Koszul complex Spectrum of a ring Zariski topology Kähler differential Generic flatness Irrelevant ideal Sheaf theory Locally ringed space Coherent sheaf Invertible sheaf Sheaf cohomology Coherent sheaf 
cohomology Hirzebruch–Riemann–Roch theorem Grothendieck–Riemann–Roch theorem Coherent duality Dévissage Schemes Affine scheme Scheme Éléments de géométrie algébrique Grothendieck's Séminaire de géométrie algébrique Fiber product of schemes Flat morphism Smooth scheme Finite morphism Quasi-finite morphism Proper morphism Semistable elliptic curve Grothendieck's relative point of view Hilbert scheme Category theory Grothendieck topology Topos Derived category Descent (category theory) Grothendieck's Galois theory Algebraic stack Gerbe Étale cohomology Motive (algebraic geometry) Motivic cohomology A¹ homotopy theory Homotopical algebra Algebraic geometers Niels Henrik Abel Carl Gustav Jacob Jacobi Jakob Steiner Julius Plücker Arthur Cayley Bernhard Riemann Max Noether William Kingdon Clifford David Hilbert Italian school of algebraic geometry Guido Castelnuovo Federigo Enriques Francesco Severi Solomon Lefschetz Oscar Zariski W. V. D. Hodge Sir Michael Atiyah Kunihiko Kodaira André Weil Jean-Pierre Serre Alexander Grothendieck Friedrich Hirzebruch Igor Shafarevich Heisuke Hironaka Shreeram S. Abhyankar Pierre Samuel C.P. Ramanujam David Mumford Michael Artin Phillip Griffiths Pierre Deligne Yuri Manin Shigefumi Mori Vladimir Drinfeld Vladimir Voevodsky Claire Voisin János Kollár Caucher Birkar Burt Totaro Patrick Brosnan Robin Hartshorne Joe Harris Mathematics-related lists Outlines of mathematics and logic Outlines
List of algebraic geometry topics
[ "Mathematics" ]
1,010
[ "Fields of abstract algebra", "nan", "Algebraic geometry" ]
351,581
https://en.wikipedia.org/wiki/Health%20informatics
Health informatics combines communications, information technology (IT), and health care to enhance patient care and is at the forefront of the medical technological revolution. It can be viewed as a branch of engineering and applied science. The health domain provides an extremely wide variety of problems that can be tackled using computational techniques. Health informatics is a spectrum of multidisciplinary fields that includes study of the design, development, and application of computational innovations to improve health care. The disciplines involved combine healthcare fields with computing fields, in particular computer engineering, software engineering, information engineering, bioinformatics, bio-inspired computing, theoretical computer science, information systems, data science, information technology, autonomic computing, and behavior informatics. In the healthcare industry, health informatics has provided such technological solutions as telemedicine, surgical robots, electronic health records (EHR), Picture Archiving and Communication Systems (PACS), and decision support, artificial intelligence, and machine learning innovations including IBM's Watson and Google's DeepMind platform. In academic institutions, health informatics includes research that focuses on applications of artificial intelligence in healthcare and designing medical devices based on embedded systems. In some countries the term informatics is also used in the context of applying library science to data management in hospitals, where it aims to develop methods and technologies for the acquisition, processing, and study of patient data. The umbrella term biomedical informatics has been proposed. Subject areas Jan van Bemmel has described medical informatics as the theoretical and practical aspects of information processing and communication based on knowledge and experience derived from processes in medicine and health care. The Faculty of Clinical Informatics has identified six high level domains of core competency for clinical informaticians: Health and Wellbeing in Practice Information Technologies and Systems Working with Data and Analytical Methods Enabling Human and Organizational Change Decision Making Leading Informatics Teams and Projects. Tools to support practitioners Clinical informaticians use their knowledge of patient care combined with their understanding of informatics concepts, methods, and health informatics tools to: Assess information and knowledge needs of health care professionals, patients and their families. Characterize, evaluate, and refine clinical processes, Develop, implement, and refine clinical decision support systems, and Lead or participate in the procurement, customization, development, implementation, management, evaluation, and continuous improvement of clinical information systems. Clinicians collaborate with other health care and information technology professionals to develop health informatics tools which promote patient care that is safe, efficient, effective, timely, patient-centered, and equitable. Many clinical informaticists are also computer scientists. Telehealth and telemedicine Telehealth is the distribution of health-related services and information via electronic information and telecommunication technologies. It allows long-distance patient and clinician contact, care, advice, reminders, education, intervention, monitoring, and remote admissions.
Telemedicine is sometimes used as a synonym, or is used in a more limited sense to describe remote clinical services, such as diagnosis and monitoring. Remote monitoring, also known as self-monitoring or testing, enables medical professionals to monitor a patient remotely using various technological devices. This method is primarily used for managing chronic diseases or specific conditions, such as heart disease, diabetes mellitus, or asthma. These services can provide comparable health outcomes to traditional in-person patient encounters, supply greater satisfaction to patients, and may be cost-effective. Telerehabilitation (or e-rehabilitation) is the delivery of rehabilitation services over telecommunications networks and the Internet. Most types of services fall into two categories: clinical assessment (the patient's functional abilities in his or her environment), and clinical therapy. Some fields of rehabilitation practice that have explored telerehabilitation are: neuropsychology, speech-language pathology, audiology, occupational therapy, and physical therapy. Telerehabilitation can deliver therapy to people who cannot travel to a clinic because the patient has a disability or because of travel time. Telerehabilitation also allows experts in rehabilitation to engage in a clinical consultation at a distance. Decision support, artificial intelligence and machine learning in healthcare A pioneer in the use of artificial intelligence in healthcare was American biomedical informatician Edward H. Shortliffe. This field deals with utilization of machine-learning algorithms and artificial intelligence, to emulate human cognition in the analysis, interpretation, and comprehension of complicated medical and healthcare data. Specifically, AI is the ability of computer algorithms to approximate conclusions based solely on input data. AI programs are applied to practices such as diagnosis processes, treatment protocol development, drug development, personalized medicine, and patient monitoring and care. A large part of industry focus of implementation of AI in the healthcare sector is in the clinical decision support systems. As more data is collected, machine learning algorithms adapt and allow for more robust responses and solutions. Numerous companies are exploring the possibilities of the incorporation of big data in the healthcare industry. Many companies investigate the market opportunities through the realms of "data assessment, storage, management, and analysis technologies" which are all crucial parts of the healthcare industry. The following are examples of large companies that have contributed to AI algorithms for use in healthcare: IBM's Watson Oncology is in development at Memorial Sloan Kettering Cancer Center and Cleveland Clinic. IBM is also working with CVS Health on AI applications in chronic disease treatment and with Johnson & Johnson on analysis of scientific papers to find new connections for drug development. In May 2017, IBM and Rensselaer Polytechnic Institute began a joint project entitled Health Empowerment by Analytics, Learning and Semantics (HEALS), to explore using AI technology to enhance healthcare. Microsoft's Hanover project, in partnership with Oregon Health & Science University's Knight Cancer Institute, analyzes medical research to predict the most effective cancer drug treatment options for patients. Other projects include medical image analysis of tumor progression and the development of programmable cells.
Google's DeepMind platform is being used by the UK National Health Service to detect certain health risks through data collected via a mobile app. A second project with the NHS involves analysis of medical images collected from NHS patients to develop computer vision algorithms to detect cancerous tissues. Tencent is working on several medical systems and services. These include AI Medical Innovation System (AIMIS), an AI-powered diagnostic medical imaging service; WeChat Intelligent Healthcare; and Tencent Doctorwork. Intel's venture capital arm Intel Capital recently invested in startup Lumiata which uses AI to identify at-risk patients and develop care options. Kheiron Medical developed deep learning software to detect breast cancers in mammograms. Fractal Analytics has incubated Qure.ai which focuses on using deep learning and AI to improve radiology and speed up the analysis of diagnostic x-rays. Neuralink has come up with a next generation neuroprosthetic which intricately interfaces with thousands of neural pathways in the brain. Their process allows a chip, roughly the size of a quarter, to be inserted in place of a chunk of skull by a precision surgical robot to avoid accidental injury. Digital consultant apps like Babylon Health's GP at Hand, Ada Health, Alibaba Health Doctor You, KareXpert and Your.MD use AI to give medical consultation based on personal medical history and common medical knowledge. Users report their symptoms into the app, which uses speech recognition to compare against a database of illnesses. Babylon then offers a recommended action, taking into account the user's medical history. Entrepreneurs in healthcare have been effectively using seven business model archetypes to take AI solutions to the marketplace. These archetypes depend on the value generated for the target user (e.g. patient focus vs. healthcare provider and payer focus) and value capturing mechanisms (e.g. providing information or connecting stakeholders). IFlytek launched a service robot "Xiao Man", which integrated artificial intelligence technology to identify the registered customer and provide personalized recommendations in medical areas. It also works in the field of medical imaging. Similar robots are also being made by companies such as UBTECH ("Cruzr") and Softbank Robotics ("Pepper"). The Indian startup Haptik recently developed a WhatsApp chatbot which answers questions associated with the deadly coronavirus in India. With the market for AI expanding constantly, large tech companies such as Apple, Google, Amazon, and Baidu all have their own AI research divisions, as well as millions of dollars allocated for acquisition of smaller AI based companies. Many automobile manufacturers are beginning to use machine learning healthcare in their cars as well. Companies such as BMW, GE, Tesla, Toyota, and Volvo all have new research campaigns to find ways of learning a driver's vital statistics to ensure they are awake, paying attention to the road, and not under the influence of substances or in emotional distress. Examples of projects in computational health informatics include the COACH project. Clinical Research Informatics Clinical research informatics (CRI) is a sub-field of health informatics that tries to improve the efficiency of clinical research by using informatics methods.
Some of the problems tackled by CRI are: creation of data warehouses of health care data that can be used for research, support of data collection in clinical trials by the use of electronic data capture systems, streamlining ethical approvals and renewals (in US the responsible entity is the local institutional review board), maintenance of repositories of past clinical trial data (de-identified). CRI is a fairly new branch of informatics and has experienced growing pains, as any up-and-coming field does. One issue CRI faces is the difficulty of getting statisticians and computer system architects to work with the clinical research staff when designing a system; another is the lack of funding to support the development of a new system. Researchers and the informatics team have a difficult time coordinating plans and ideas in order to design a system that is easy to use for the research team yet fits within the system requirements of the computer team. The lack of funding can be a hindrance to the development of CRI. Many organizations that perform research are struggling to get financial support to conduct the research, much less invest that money in an informatics system that will not provide them any more income or improve the outcome of the research (Embi, 2009). The ability to integrate data from multiple clinical trials is an important part of clinical research informatics. Initiatives such as PhenX and the Patient-Reported Outcomes Measurement Information System triggered a general effort to improve secondary use of data collected in past human clinical trials. CDE initiatives, for example, try to allow clinical trial designers to adopt standardized research instruments (electronic case report forms). A parallel effort to standardizing how data is collected are initiatives that offer de-identified patient level clinical study data to be downloaded by researchers who wish to re-use this data. Examples of such platforms are Project Data Sphere, dbGaP, ImmPort or Clinical Study Data Request. Informatics issues in data formats for sharing results (plain CSV files, FDA endorsed formats, such as CDISC Study Data Tabulation Model) are important challenges within the field of clinical research informatics. There are a number of activities within clinical research that CRI supports, including: More efficient and effective data collection and acquisition Improved recruitment into clinical trials Optimal protocol design and efficient management Patient recruitment and management Adverse event reporting Regulatory compliance Data storage, transfer, processing and analysis Repositories of data from completed clinical trials (for secondary analyses) One of the fundamental elements of biomedical and translational research is the use of integrated data repositories. A survey conducted in 2010 defined "integrated data repository" (IDR) as a data warehouse incorporating various sources of clinical data to support queries for a range of research-like functions. Integrated data repositories are complex systems developed to solve a variety of problems ranging from identity management, protection of confidentiality, semantic and syntactic comparability of data from different sources, and most importantly, convenient and flexible queries. Development of the field of clinical informatics led to the creation of large data sets with electronic health record data integrated with other data (such as genomic data).
Types of data repositories include operational data stores (ODSs), clinical data warehouses (CDWs), clinical data marts, and clinical registries. Operational data stores are established for extracting, transferring and loading data before creating a warehouse or data marts. Clinical registry repositories have long been in existence, but their contents are disease specific and sometimes considered archaic. Clinical data stores and clinical data warehouses are considered fast and reliable. Though these large integrated repositories have impacted clinical research significantly, they still face challenges and barriers. One big problem is the requirement for ethical approval by the institutional review board (IRB) for each research analysis meant for publication. Some research resources do not require IRB approval. For example, CDWs with data of deceased patients have been de-identified and IRB approval is not required for their usage. Another challenge is data quality. Methods that adjust for bias (such as using propensity score matching methods) assume that a complete health record is captured. Tools that examine data quality (e.g., point to missing data) help in discovering data quality problems. Translational bioinformatics Translational bioinformatics (TBI) is a relatively new field that surfaced in 2000 when the human genome sequence was released. The commonly used definition of TBI is lengthy and can be found on the AMIA website. In simpler terms, TBI could be defined as the collection of colossal amounts of health related data (biomedical and genomic) and the translation of that data into individually tailored clinical entities. Today, the TBI field is categorized into four major themes that are briefly described below: Clinical big data is a collection of electronic health records that are used for innovations. The evidence-based approach currently practiced in medicine is suggested to be merged with practice-based medicine to achieve better outcomes for patients. As Darren Schutle, CEO of California-based cognitive computing firm Apixio, explains, care can be better fitted to the patient if data can be collected from various medical records, merged, and analyzed. Further, the combination of similar profiles can serve as a basis for personalized medicine, pointing to what works and what does not for a certain condition (Marr, 2016). Genomics in clinical care: Genomic data are used to identify the genes involved in unknown or rare conditions/syndromes. Currently, the most active area for the use of genomics is oncology. Genomic sequencing of a cancer may reveal the reasons for drug sensitivity and resistance during oncological treatment. Omics for drug discovery and repurposing: Drug repurposing is an appealing idea that allows pharmaceutical companies to sell an already approved drug to treat a different condition/disease than the one the drug was initially approved for by the FDA. Observing molecular signatures in disease and comparing them to signatures observed in cells points to the possibility of a drug's ability to cure and/or relieve the symptoms of a disease. Personalized genomic testing: In the US, several companies offer direct-to-consumer (DTC) genetic testing. The company that performs the majority of testing is called 23andMe.
Utilizing genetic testing in health care raises many ethical, legal and social concerns; one of the main questions is whether health care providers are ready to include patient-supplied genomic information while providing care that is unbiased (despite the intimate genomic knowledge) and of high quality. The documented examples of incorporating such information into health care delivery showed both positive and negative impacts on overall health care related outcomes. Medical signal processing An important application of information engineering in medicine is medical signal processing. It refers to the generation, analysis, and use of signals, which could take many forms such as image, sound, electrical, or biological. Medical image computing and imaging informatics Imaging informatics and medical image computing develop computational and mathematical methods for solving problems pertaining to medical images and their use for biomedical research and clinical care. Those fields aim to extract clinically relevant information or knowledge from medical images and the computational analysis of the images. The methods can be grouped into several broad categories: image segmentation, image registration, image-based physiological modeling, and others. Medical robotics A medical robot is a robot used in the medical sciences. They include surgical robots. Most of these are telemanipulators, which use the surgeon's actuators on one side to control the "effector" on the other side. There are the following types of medical robots: Surgical robots: either allow surgical operations to be carried out with better precision than an unaided human surgeon or allow remote surgery where a human surgeon is not physically present with the patient. Rehabilitation robots: facilitate and support the lives of infirm, elderly people, or those with dysfunction of body parts affecting movement. These robots are also used for rehabilitation and related procedures, such as training and therapy. Biorobots: a group of robots designed to imitate the cognition of humans and animals. Telepresence robots: allow off-site medical professionals to move, look around, communicate, and participate from remote locations. Pharmacy automation: robotic systems to dispense oral solids in a retail pharmacy setting or prepare sterile IV admixtures in a hospital pharmacy setting. Companion robot: has the capability to engage emotionally with users, keeping them company and alerting if there is a problem with their health. Disinfection robot: has the capability to disinfect a whole room in mere minutes, generally using pulsed ultraviolet light. They are being used to fight Ebola virus disease. Pathology informatics Pathology informatics is a field that involves the use of information technology, computer systems, and data management to support and enhance the practice of pathology. It encompasses pathology laboratory operations, data analysis, and the interpretation of pathology-related information. Key aspects of pathology informatics include: Laboratory information management systems (LIMS): Implementing and managing computer systems specifically designed for pathology departments. These systems help in tracking and managing patient specimens, results, and other pathology data. Digital pathology: Involves the use of digital technology to create, manage, and analyze pathology images. This includes slide scanning and automated image analysis. Telepathology: Using technology to enable remote pathology consultation and collaboration.
Quality assurance and reporting: Implementing informatics solutions to ensure the quality and accuracy of pathology processes. International history Worldwide use of computer technology in medicine began in the early 1950s with the rise of computers. In 1949, Gustav Wagner established the first professional organization for informatics in Germany. Specialized university departments and informatics training programs began during the 1960s in France, Germany, Belgium and The Netherlands. Medical informatics research units began to appear during the 1970s in Poland and in the U.S. Since then the development of high-quality health informatics research, education and infrastructure has been a goal of the U.S. and the European Union. Early names for health informatics included medical computing, biomedical computing, medical computer science, computer medicine, medical electronic data processing, medical automatic data processing, medical information processing, medical information science, medical software engineering, and medical computer technology. The health informatics community is still growing; it is by no means a mature profession, but work in the UK by the voluntary registration body, the UK Council of Health Informatics Professions, has suggested eight key constituencies within the domain: information management, knowledge management, portfolio/program/project management, ICT, education and research, clinical informatics, health records (service and business-related), and health informatics service management. These constituencies accommodate professionals in and for the NHS, in academia, and in commercial service and solution providers. Since the 1970s the most prominent international coordinating body has been the International Medical Informatics Association (IMIA). History, current state and policy initiatives by region and country Americas Argentina The Argentinian health system is heterogeneous in its function, and because of that, informatics developments are at a heterogeneous stage. Many private health care centers have developed systems, such as the Hospital Aleman of Buenos Aires, or the Hospital Italiano de Buenos Aires, which also has a residency program in health informatics. Brazil The first applications of computers to medicine and health care in Brazil started around 1968, with the installation of the first mainframes in public university hospitals, and the use of programmable calculators in scientific research applications. Minicomputers, such as the IBM 1130, were installed in several universities, and the first applications were developed for them, such as a hospital census at the School of Medicine of Ribeirão Preto and patient master files at the Hospital das Clínicas da Universidade de São Paulo, at the Ribeirão Preto and São Paulo campuses of the University of São Paulo respectively. In the 1970s, several Digital Corporation and Hewlett-Packard minicomputers were acquired for public and Armed Forces hospitals, and more intensively used for intensive-care units, cardiology diagnostics, patient monitoring and other applications. In the early 1980s, with the arrival of cheaper microcomputers, a great upsurge of computer applications in health ensued, and in 1986 the Brazilian Society of Health Informatics was founded, the first Brazilian Congress of Health Informatics was held, and the first Brazilian Journal of Health Informatics was published.
In Brazil, two universities are pioneers in teaching and research in medical informatics: both the University of São Paulo and the Federal University of São Paulo offer well-regarded undergraduate programs in the area, as well as extensive graduate programs (MSc and PhD). In 2015 the Universidade Federal de Ciências da Saúde de Porto Alegre, Rio Grande do Sul, also started to offer an undergraduate program. Canada Health informatics projects in Canada are implemented provincially, with different provinces creating different systems. A national, federally funded, not-for-profit organisation called Canada Health Infoway was created in 2001 to foster the development and adoption of electronic health records across Canada. As of December 31, 2008, there were 276 EHR projects under way in Canadian hospitals, other health-care facilities, pharmacies and laboratories, with an investment value of $1.5 billion from Canada Health Infoway. Provincial and territorial programmes include the following: eHealth Ontario was created as an Ontario provincial government agency in September 2008. It has been plagued by delays, and its CEO was fired over a multimillion-dollar contracts scandal in 2009. Alberta Netcare was created in 2003 by the Government of Alberta. Today the netCARE portal is used daily by thousands of clinicians. It provides access to demographic data, prescribed/dispensed drugs, known allergies/intolerances, immunizations, laboratory test results, diagnostic imaging reports, the diabetes registry and other medical reports. netCARE interface capabilities are being included in electronic medical record products that are being funded by the provincial government. United States Even though the idea of using computers in medicine emerged as technology advanced in the early 20th century, it was not until the 1950s that informatics began to have an effect in the United States. The earliest use of electronic digital computers for medicine was for dental projects in the 1950s at the United States National Bureau of Standards by Robert Ledley. During the mid-1950s, the United States Air Force (USAF) carried out several medical projects on its computers while also encouraging civilian agencies such as the National Academy of Sciences – National Research Council (NAS-NRC) and the National Institutes of Health (NIH) to sponsor such work. In 1959, Ledley and Lee B. Lusted published "Reasoning Foundations of Medical Diagnosis," a widely read article in Science, which introduced computing (especially operations research) techniques to medical workers. Ledley and Lusted's article has remained influential for decades, especially within the field of medical decision making. Guided by Ledley's late-1950s survey of computer use in biology and medicine (carried out for the NAS-NRC), and by his and Lusted's articles, the NIH undertook the first major effort to introduce computers to biology and medicine. This effort, carried out initially by the NIH's Advisory Committee on Computers in Research (ACCR), chaired by Lusted, spent over $40 million between 1960 and 1964 in order to establish dozens of large and small biomedical research centers in the US. One early (1960, non-ACCR) use of computers was to help quantify normal human movement, as a precursor to scientifically measuring deviations from normal, and to the design of prostheses. 
The use of computers (IBM 650, 1620, and 7040) allowed analysis of a large sample size, and of more measurements and subgroups than had previously been practical with mechanical calculators, thus allowing an objective understanding of how human locomotion varies by age and body characteristics. A study co-author was Dean of the Marquette University College of Engineering; this work led to discrete biomedical engineering departments there and elsewhere. The next steps, in the mid-1960s, were the development (sponsored largely by the NIH) of expert systems such as MYCIN and Internist-I. In 1965, the National Library of Medicine started to use MEDLINE and MEDLARS. Around this time, Neil Pappalardo, Curtis Marble, and Robert Greenes developed MUMPS (Massachusetts General Hospital Utility Multi-Programming System) in Octo Barnett's Laboratory of Computer Science at Massachusetts General Hospital in Boston, another center of biomedical computing that received significant support from the NIH. In the 1970s and 1980s it was the most commonly used programming language for clinical applications. The MUMPS operating system was developed to support the MUMPS language specification. A descendant of this system is used today in the United States Veterans Affairs hospital system. The VA has the largest enterprise-wide health information system that includes an electronic medical record, known as the Veterans Health Information Systems and Technology Architecture (VistA). A graphical user interface known as the Computerized Patient Record System (CPRS) allows health care providers to review and update a patient's electronic medical record at any of the VA's over 1,000 health care facilities. During the 1960s, Morris F. Collen, a physician working for Kaiser Permanente's Division of Research, developed computerized systems to automate many aspects of multi-phased health checkups. These systems became the basis of the larger medical databases Kaiser Permanente developed during the 1970s and 1980s. The American Medical Informatics Association presents the Morris F. Collen Award of Excellence for an individual's lifetime achievement in biomedical informatics. In the 1970s a growing number of commercial vendors began to market practice management and electronic medical records systems. Although many products exist, only a small number of health practitioners use fully featured electronic health care records systems. In 1970, Warner V. Slack, MD, and Howard Bleich, MD, co-founded the academic division of clinical informatics (DCI) at Beth Israel Deaconess Medical Center and Harvard Medical School. Warner Slack is a pioneer of the development of the electronic patient medical history, and in 1977 Dr. Bleich created the first user-friendly search engine for the world's biomedical literature. Computerised systems involved in patient care have led to a number of changes. Such changes have led to improvements in electronic health records, which are now capable of sharing medical information among multiple health care stakeholders (Zahabi, Kaber, & Swangnetr, 2015), thereby supporting the flow of patient information through various modalities of care. One opportunity for electronic health records (EHRs) to be used even more effectively is to utilize natural language processing for searching and analyzing notes and text that would otherwise be inaccessible for review. 
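As a concrete illustration of the natural-language-processing opportunity just described, the following minimal Python sketch pulls drug-and-dose mentions out of a free-text note using a hand-built lexicon and a regular expression. The note text, the lexicon, and the pattern are all invented for illustration; production clinical NLP relies on dedicated toolkits and standard terminologies rather than ad-hoc patterns like this.

```python
import re

# Hypothetical clinical note; the text and drug names below are illustrative only.
NOTE = (
    "Patient reports improved glycemic control. "
    "Continue metformin 500 mg twice daily; start lisinopril 10 mg daily. "
    "No known drug allergies."
)

# Tiny hand-built vocabulary standing in for a real drug terminology.
DRUG_LEXICON = {"metformin", "lisinopril", "atorvastatin"}

# Match a lexicon word, optionally followed by a dose such as "500 mg".
pattern = re.compile(
    r"\b(?P<drug>" + "|".join(sorted(DRUG_LEXICON)) + r")\b"
    r"(?:\s+(?P<dose>\d+(?:\.\d+)?)\s*(?P<unit>mg|mcg|g)\b)?",
    re.IGNORECASE,
)

for m in pattern.finditer(NOTE):
    # Prints: "metformin 500 mg" and "lisinopril 10 mg"
    print(m.group("drug").lower(), m.group("dose"), m.group("unit"))
```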
Such tools can be further developed through ongoing collaboration between software developers and end-users of natural language processing tools within electronic health records. Computer use today covers a broad range of functions, including but not limited to physician diagnosis and documentation, patient appointment scheduling, and billing. Many researchers in the field have identified an increase in the quality of health care systems, decreased errors by health care workers, and savings in time and money (Zahabi, Kaber, & Swangnetr, 2015). The system, however, is not perfect and will continue to require improvement. Frequently cited factors of concern involve usability, safety, accessibility, and user-friendliness (Zahabi, Kaber, & Swangnetr, 2015). Homer R. Warner, one of the fathers of medical informatics, founded the Department of Medical Informatics at the University of Utah in 1968. The American Medical Informatics Association (AMIA) has an award named after him for the application of informatics to medicine. A board certification in medical informatics, supported by the American Medical Informatics Association, is offered through the American Board of Preventive Medicine. The American Nurses Credentialing Center offers a board certification in Nursing Informatics. For radiology informatics, the CIIP (Certified Imaging Informatics Professional) certification was created by ABII (The American Board of Imaging Informatics), which was founded by SIIM (the Society for Imaging Informatics in Medicine) and ARRT (the American Registry of Radiologic Technologists) in 2005. The CIIP certification requires documented experience working in imaging informatics and formal testing, and is a time-limited credential requiring renewal every five years. The exam tests for a combination of IT technical knowledge, clinical understanding, and project management experience thought to represent the typical workload of a PACS administrator or other radiology IT clinical support role. Certifications from PARCA (PACS Administrators Registry and Certifications Association) are also recognized. The five PARCA certifications are tiered from entry level to architect level. The American Health Information Management Association offers credentials in medical coding, analytics, and data administration, such as Registered Health Information Administrator and Certified Coding Associate. Certifications are widely requested by employers in health informatics, and overall the demand for certified informatics workers in the United States is outstripping supply. The American Health Information Management Association reports that only 68% of applicants pass certification exams on the first try. In 2017, a consortium of health informatics trainers (composed of MEASURE Evaluation, Public Health Foundation India, University of Pretoria, Kenyatta University, and the University of Ghana) identified the following areas of knowledge as a curriculum for the digital health workforce, especially in low- and middle-income countries: clinical decision support; telehealth; privacy, security, and confidentiality; workflow process improvement; technology, people, and processes; process engineering; quality process improvement and health information technology; computer hardware; software; databases; data warehousing; information networks; information systems; information exchange; data analytics; and usability methods. In 2004, President George W. 
Bush signed Executive Order 13335, creating the Office of the National Coordinator for Health Information Technology (ONCHIT) as a division of the U.S. Department of Health and Human Services (HHS). The mission of this office is the widespread adoption of interoperable electronic health records (EHRs) in the US within 10 years. See quality improvement organizations for more information on federal initiatives in this area. In 2014 the Department of Education approved an advanced health informatics undergraduate program that was submitted by the University of South Alabama. The program is designed to provide specific health informatics education, and is the only program in the country with a health informatics lab. The program is housed in the School of Computing in Shelby Hall, a recently completed $50 million state-of-the-art teaching facility. The University of South Alabama awarded David L. Loeser the first health informatics degree on May 10, 2014. The program was scheduled to have awarded degrees to more than 100 students by 2016. The Certification Commission for Healthcare Information Technology (CCHIT), a private nonprofit group, was funded in 2005 by the U.S. Department of Health and Human Services to develop a set of standards for electronic health records (EHR) and supporting networks, and to certify vendors who meet them. In July 2006, CCHIT released its first list of 22 certified ambulatory EHR products, in two different announcements. Harvard Medical School added a department of biomedical informatics in 2015. The University of Cincinnati, in partnership with Cincinnati Children's Hospital Medical Center, created a biomedical informatics (BMI) graduate certificate program and in 2015 began a BMI PhD program. The joint program allows researchers and students to observe the impact their work has on patient care directly, as discoveries are translated from bench to bedside. The University of California, San Diego, with support from the National Center for Advancing Translational Sciences, provides training programs in biomedical informatics through Doctor of Philosophy and post-doctoral fellowship programs, including a course specifically in health informatics taught by the cancer informaticist Raphael E. Cuomo. Europe European Union The European Commission's preference, as exemplified in the 5th Framework as well as currently pursued pilot projects, is for Free/Libre and Open Source Software (FLOSS) for health care. The European Union's Member States are committed to sharing their best practices and experiences to create a European eHealth Area, thereby improving access to, and the quality of, health care while stimulating growth in a promising new industrial sector. The European eHealth Action Plan plays a fundamental role in the European Union's strategy. Work on this initiative involves a collaborative approach among several parts of the Commission services. The European Institute for Health Records is involved in the promotion of high-quality electronic health record systems in the European Union. UK The broad history of health informatics has been captured in the book UK Health Computing: Recollections and Reflections, Hayes G, Barnett D (Eds.), BCS (May 2008), by those active in the field, predominantly members of BCS Health and its constituent groups. The book describes the path taken as "early development of health informatics was unorganized and idiosyncratic". 
In the early 1950s, it was prompted by those involved in NHS finance, and only in the early 1960s did solutions emerge, including those in pathology (1960), radiotherapy (1962), immunization (1963), and primary care (1968). Many of these solutions, even in the early 1970s, were developed in-house by pioneers in the field to meet their own requirements. In part, this was due to some areas of health services (for example the immunization and vaccination of children) still being provided by Local Authorities. The coalition government proposed broadly to return to the 2010 strategy Equity and Excellence: Liberating the NHS (July 2010), stating: "We will put patients at the heart of the NHS, through an information revolution and greater choice and control", with shared decision-making becoming the norm ("no decision about me without me") and patients having access to the information they want in order to make choices about their care, together with increased control over their own care records. There are different models of health informatics delivery in each of the home countries (England, Scotland, Northern Ireland and Wales), but some bodies like UKCHIP (see below) operate for those 'in and for' all the home countries and beyond. NHS informatics in England was contracted out to several vendors for national health informatics solutions under the National Programme for Information Technology (NPfIT) label in the early to mid-2000s, under the auspices of NHS Connecting for Health (part of the Health and Social Care Information Centre as of 1 April 2013). NPfIT originally divided the country into five regions, with strategic 'systems integration' contracts awarded to one of several Local Service Providers (LSP). The various specific technical solutions were required to connect securely with the NHS 'Spine', a system designed to broker data between different systems and care settings. NPfIT fell significantly behind schedule, and its scope and design were revised in real time, exacerbated by media and political lambasting of the Programme's spend (past and projected) against the proposed budget. In 2010 a consultation was launched as part of the new Conservative/Liberal Democrat coalition government's White Paper "Liberating the NHS". This initiative provided little in the way of innovative thinking, primarily re-stating existing strategies within the proposed new context of the Coalition's vision for the NHS. The degree of computerization in NHS secondary care was quite high before NPfIT, and the programme stalled further development of the installed base – the original NPfIT regional approach provided neither a single, nationwide solution nor local health community agility or autonomy to purchase systems, but instead occupied an awkward middle ground. Almost all general practices in England and Wales are computerized under the GP Systems of Choice programme, and patients have relatively extensive computerized primary care clinical records. System choice is the responsibility of individual general practices, and while there is no single, standardized GP system, the programme sets relatively rigid minimum standards of performance and functionality for vendors to adhere to. Interoperation between primary and secondary care systems is rather primitive. It is hoped that a focus on interworking (for interfacing and integration) standards will stimulate synergy between primary and secondary care in sharing necessary information to support the care of individuals. 
Notable successes to date are in the electronic requesting and viewing of test results, and in some areas GPs have access to digital X-ray images from secondary care systems. In 2019 the GP Systems of Choice framework was replaced by the GP IT Futures framework, which is to be the main vehicle used by clinical commissioning groups to buy services for GPs. This is intended to increase competition in an area dominated by EMIS and TPP. Sixty-nine technology companies offering more than 300 solutions have been accepted onto the new framework. Wales has a dedicated health informatics function that supports NHS Wales in leading on the new integrated digital information services and promoting health informatics as a career. The British Computer Society (BCS) provides four professional registration levels for health and care informatics professionals: Practitioner, Senior Practitioner, Advanced Practitioner, and Leading Practitioner. The Faculty of Clinical Informatics (FCI) is the professional membership society for health and social care professionals in clinical informatics, offering Fellowship, Membership and Associateship. BCS and FCI are member organizations of the Federation for Informatics Professionals in Health and Social Care (FedIP), a collaboration between the leading professional bodies in health and care informatics supporting the development of the informatics professions. The Faculty of Clinical Informatics has produced a Core Competency Framework that describes the wide range of skills needed by practitioners. Netherlands In the Netherlands, health informatics is currently a priority for research and implementation. The Netherlands Federation of University Medical Centres (NFU) has created the Citrienfonds, which includes the programs eHealth and Registration at the Source. The Netherlands also has the national organizations Society for Healthcare Informatics (VMBI) and Nictiz, the national center for standardization and eHealth. Asia and Oceania In Asia and Australia–New Zealand, the regional group called the Asia Pacific Association for Medical Informatics (APAMI) was established in 1994 and now consists of more than 15 member regions in the Asia-Pacific region. Australia The Australasian College of Health Informatics (ACHI) is the professional association for health informatics in the Asia-Pacific region. It represents the interests of a broad range of clinical and non-clinical professionals working within the health informatics sphere through a commitment to quality, standards and ethical practice. ACHI is an academic institutional member of the International Medical Informatics Association (IMIA) and a full member of the Australian Council of Professions. ACHI is a sponsor of the "e-Journal for Health Informatics", an indexed and peer-reviewed professional journal. ACHI has also supported the "Australian Health Informatics Education Council" (AHIEC) since its founding in 2009. Although there are a number of health informatics organizations in Australia, the Health Informatics Society of Australia (HISA) is regarded as the major umbrella group and is a member of the International Medical Informatics Association (IMIA). Nursing informaticians were the driving force behind the formation of HISA, which is now a company limited by guarantee of the members. The membership comes from across the informatics spectrum, from students to corporate affiliates. 
HISA has a number of branches (Queensland, New South Wales, Victoria and Western Australia) as well as special interest groups such as nursing (NIA), pathology, aged and community care, industry and medical imaging (Conrick, 2006). China Over two decades, China made a successful transition from its planned economy to a socialist market economy. Along with this change, China's health care system also underwent significant reform. In 2003, data released by the Ministry of Health of the People's Republic of China (MoH) indicated that national health care expenditure reached RMB 662.33 billion, accounting for about 5.56% of gross domestic product. Before the 1980s, health care costs were covered entirely by the central government's annual budget. Since then, the structure of health care funding has changed gradually: most of the expenditure has come from health insurance schemes and private spending, corresponding to 40% and 45% of total expenditure respectively, while the government's contribution fell to only 10%. By 2004, the MoH's statistical summary recorded 296,492 health care facilities and an average of 2.4 clinical beds per 1,000 people. Along with the development of information technology since the 1990s, health care providers realized that computerized cases and data could generate significant benefits for improving their services, for instance by providing information for directing patient care and identifying the best patient care for specific clinical conditions. Therefore, substantial resources were devoted to building China's own health informatics systems. Most of these resources went to constructing hospital information systems (HIS), aimed at minimizing unnecessary waste and repetition and thereby promoting the efficiency and quality control of health care. By 2004, China had successfully spread HIS through approximately 35–40% of hospitals nationwide. However, the distribution of hospital-owned HIS varies critically: in the east of China over 80% of hospitals had constructed a HIS, while in the northwest the equivalent figure was no more than 20%. Moreover, all of the Centers for Disease Control and Prevention (CDC) above the rural level, approximately 80% of health care organisations above the rural level, and 27% of hospitals above the town level can transmit real-time epidemic reports through the public health information system and analyse infectious diseases using dynamic statistics. China has four tiers in its health care system. The first tier is street health and workplace clinics, which are cheaper than hospitals in terms of medical billing and act as prevention centers. The second tier is district and enterprise hospitals along with specialist clinics, which provide the second level of care. The third tier is provincial and municipal general hospitals and teaching hospitals, which provide the third level of care. In a tier of their own are the national hospitals, which are governed by the Ministry of Health. China has been greatly improving its health informatics since it opened its doors to the outside world and joined the World Trade Organization (WTO). In 2001, it was reported that China had 324,380 medical institutions, the majority of which were clinics. 
The reason for this is that clinics are prevention centers, and Chinese people often prefer traditional Chinese medicine over Western medicine, which usually suffices for minor cases. China has also been improving its higher education in regard to health informatics. At the end of 2002, there were 77 medical universities and medical colleges. There were 48 university medical colleges which offered bachelor's, master's, and doctorate degrees in medicine, and 21 higher medical specialty institutions that offered diploma degrees, so in total there were 147 higher medical and educational institutions. Since joining the WTO, China has been working hard to improve its education system and bring it up to international standards. SARS played a large role in China quickly improving its health care system. In 2003, the SARS outbreak made China hurry to spread HIS (hospital information systems), and more than 80% of hospitals came to have one. China had been comparing itself to Korea's health care system and figuring out how it could better its own system. One study surveyed six hospitals in China that had HIS and found that doctors did not use computers very much, concluding that the systems were used less for clinical practice than for administrative purposes. The survey also asked whether the hospitals had created any websites: only four of them had, with three having a third-party company create the site and one created by hospital staff. In conclusion, all of them agreed or strongly agreed that providing health information on the Internet should be pursued. Information collected at different times, by different participants, or by different systems can frequently lead to problems of misunderstanding, incomparability, or failed exchange. In designing a system with fewer such issues, health care providers realized that certain standards are the basis for sharing information and interoperability, and that a system lacking standards would be a large impediment to improving the corresponding information systems. Given that standardization for health informatics depends on the authorities, standardization efforts must involve government, and the relevant funding and support were critical. In 2003, the Ministry of Health released the Development Lay-out of National Health Informatics (2003–2010), identifying the approach to standardization for health informatics as 'combining adoption of international standards and development of national standards'. In China, standardization was initially facilitated by the development of vocabulary, classification and coding, which is conducive to storing and transmitting information for management at the national level. By 2006, 55 international and domestic standards of vocabulary, classification and coding were in use in hospital information systems. In 2003, the 10th revision of the International Statistical Classification of Diseases and Related Health Problems (ICD-10) and the ICD-10 Clinical Modification (ICD-10-CM) were adopted as standards for diagnostic classification and acute care procedure classification. Simultaneously, the International Classification of Primary Care (ICPC) was translated and tested in China's local applied environment. Another coding standard, named Logical Observation Identifiers Names and Codes (LOINC), was applied to serve as general identifiers for clinical observations in hospitals. 
Personal identifier codes were widely employed in different information systems, covering name, sex, nationality, family relationship, educational level and occupation. However, these codes are inconsistent across different systems, hindering sharing between regions. Considering this large quantity of vocabulary, classification and coding standards across different jurisdictions, health care providers realized that using multiple systems could waste resources and that a non-conflicting, national-level standard was beneficial and necessary. Therefore, in late 2003, the health informatics group in the Ministry of Health launched three projects to address the lack of national health information standards: the Chinese National Health Information Framework and Standardization, the Basic Data Set Standards of Hospital Information System, and the Basic Data Set Standards of Public Health Information System. The objectives of the Chinese National Health Information Framework and Standardization project were to: Establish a national health information framework and identify in what areas standards and guidelines are required Identify the classes, relationships and attributes of the national health information framework Produce a conceptual health data model to cover the scope of the health information framework Create logical data models for specific domains, depicting the logical data entities, the data attributes, and the relationships between the entities according to the conceptual health data model Establish a uniform representation standard for data elements according to the data entities and their attributes in the conceptual and logical data models Circulate the completed health information framework and health data model to the partnership members for review and acceptance Develop a process to maintain and refine the China model and to align with and influence international health data models Comparing China's EHR Standard and ASTM E1384 In 2011, researchers from local universities evaluated the performance of China's Electronic Health Record (EHR) Standard compared with the American Society for Testing and Materials Standard Practice for Content and Structure of Electronic Health Records in the United States (ASTM E1384 Standard, withdrawn in 2017). The deficiencies found are listed below. A lack of support for privacy and security. ISO/TS 18308 specifies that "The EHR must support the ethical and legal use of personal information, in accordance with established privacy principles and frameworks, which may be culturally or jurisdictionally specific" (ISO 18308: Health Informatics-Requirements for an Electronic Health Record Architecture, 2004). However, China's EHR Standard did not achieve any of the fifteen requirements in the subclass of privacy and security. A shortage of support for different types of data and references. Since ICD-9 is the only international coding system referenced by China's standard, other systems, such as SNOMED CT for clinical terminology presentation, remain unfamiliar to Chinese specialists, which could hamper international information sharing. A lack of more generic and extensible lower-level data structures. China's large and complex EHR Standard was constructed for all medical domains. 
However, the specific and frequently changing attributes of clinical data elements, value sets and templates mean that this once-for-all approach cannot lead to practical results. In Hong Kong, a computerized patient record system called the Clinical Management System (CMS) has been developed by the Hospital Authority since 1994. This system has been deployed at all the sites of the authority (40 hospitals and 120 clinics). It is used for up to 2 million transactions daily by 30,000 clinical staff. The comprehensive records of 7 million patients are available on-line in the electronic patient record (ePR), with data integrated from all sites. Since 2004 radiology image viewing has been added to the ePR, with radiography images from any HA site being available as part of the ePR. The Hong Kong Hospital Authority paid particular attention to the governance of clinical systems development, with input from hundreds of clinicians being incorporated through a structured process. The health informatics section in the Hospital Authority has a close relationship with the information technology department and clinicians to develop health care systems for the organization to support the service to all public hospitals and clinics in the region. The Hong Kong Society of Medical Informatics (HKSMI) was established in 1987 to promote the use of information technology in health care. The eHealth Consortium has been formed to bring together clinicians from both the private and public sectors, medical informatics professionals and the IT industry to further promote IT in health care in Hong Kong. India eHCF School of Medical Informatics eHealth-Care Foundation Malaysia Since 2010, the Ministry of Health (MoH) has been working on the Malaysian Health Data Warehouse (MyHDW) project. MyHDW aims to meet the diverse needs of timely health information provision and management, and acts as a platform for the standardization and integration of health data from a variety of sources (Health Informatics Centre, 2013). The Ministry of Health has embarked on introducing electronic hospital information systems (HIS) in several public hospitals, including Putrajaya Hospital, Serdang Hospital and Selayang Hospital. Similarly, under the Ministry of Higher Education, hospitals such as the University of Malaya Medical Centre (UMMC) and the Universiti Kebangsaan Malaysia Medical Centre (UKMMC) are also using HIS for health care delivery. A hospital information system (HIS) is a comprehensive, integrated information system designed to manage the administrative, financial and clinical aspects of a hospital. As an area of medical informatics, the aim of a hospital information system is to achieve the best possible support of patient care and administration by electronic data processing. HIS plays a vital role in planning, initiating, organizing and controlling the operations of the subsystems of the hospital, and thus provides a synergistic organization in the process. New Zealand Health informatics is taught at five New Zealand universities. The most mature and established programme has been offered for over a decade at Otago. Health Informatics New Zealand (HINZ) is the national organization that advocates for health informatics. HINZ organizes a conference every year and also publishes a journal, Healthcare Informatics Review Online. 
Saudi Arabia The Saudi Association for Health Information (SAHI) was established in 2006, working under the direct supervision of King Saud bin Abdulaziz University for Health Sciences, to carry out public activities, develop theoretical and applied knowledge, and provide scientific and applied studies. Russia The Russian health care system is based on the principles of the Soviet health care system, which was oriented toward mass prophylaxis, the prevention of infectious and epidemic diseases, and the vaccination and immunization of the population on a socially protected basis. The current government health care system consists of several directions: Preventive health care Primary health care Specialized medical care Obstetrical and gynecologic medical care Pediatric medical care Surgery Rehabilitation/health resort treatment One of the main issues of the post-Soviet health care system was the absence of a unified system to optimize the work of medical institutions around a single database and structured appointment scheduling, and hence hours-long queues. The efficiency of medical workers was also open to doubt because of paperwork administration and lost paper records. As information systems developed, the IT and health care departments in Moscow agreed on the design of a system that would improve the public services of health care institutions. Tackling the issues of the existing system, the Moscow government ordered the design of a system that would provide simplified electronic booking at public clinics and automate the work of front-line medical workers. The system designed for these purposes was called EMIAS (United Medical Information and Analysis System) and comprises an electronic health record (EHR) together with most other services: it manages patient flow, integrates the outpatient card into the system, and provides consolidated management accounting and personalized lists of medical services. In addition, the system contains information about the availability of medical institutions and of various doctors. The implementation of the system started in 2013 with the organization of one computerized database for all patients in the city, including a front-end for users. EMIAS was implemented in Moscow and the surrounding region, and it is planned that the project will extend to most parts of the country. Law Health informatics law deals with evolving and sometimes complex legal principles as they apply to information technology in health-related fields. It addresses the privacy, ethical and operational issues that invariably arise when electronic tools, information and media are used in health care delivery. Health informatics law also applies to all matters that involve information technology, health care and the exchange of information. It deals with the circumstances under which data and records are shared with other fields or areas that support and enhance patient care. As many health care systems are making an effort to have patient records more readily available via the Internet, it is important that providers implement security standards to ensure that patients' information is safe. They have to be able to assure the confidentiality, integrity, and security of the people, processes, and technology involved. Since there is also the possibility of payments being made through such systems, it is vital that this aspect of their private information be protected through cryptography. 
The use of technology in health care settings has become popular, and this trend is expected to continue. Various health care facilities have instituted different kinds of health information technology systems in the provision of patient care, such as electronic health records (EHRs) and computerized charting. The growing popularity of health information technology systems and the escalation in the amount of health information that can be exchanged and transferred electronically have increased the risk of potential infringement of patients' privacy and confidentiality. This concern triggered the establishment of strict measures by both policymakers and individual facilities to ensure patient privacy and confidentiality. One of the federal laws enacted to safeguard patients' health information (medical records, billing information, treatment plans, etc.) and to guarantee patients' privacy is the Health Insurance Portability and Accountability Act of 1996, or HIPAA. HIPAA gives patients autonomy and control over their own health records. Furthermore, according to the U.S. Department of Health & Human Services (n.d.), this law enables patients to: View their own health records Request a copy of their own medical records Request correction to any incorrect health information Know who has access to their health record Request who can and cannot view/access their health information Health and medical informatics journals Computers and Biomedical Research, first published in 1967, was one of the first journals dedicated to health informatics. Other early journals included Computers and Medicine, published by the American Medical Association; Journal of Clinical Computing, published by Gallagher Printing; Journal of Medical Systems, published by Plenum Press; and MD Computing, published by Springer-Verlag. In 1984, Lippincott published the first nursing-specific journal, Computers in Nursing, which is now known as Computers Informatics Nursing (CIN). As of September 7, 2016, there are roughly 235 informatics journals listed in the National Library of Medicine (NLM) catalog of journals. The Journal Citation Reports for 2018 gives the top three journals in medical informatics as the Journal of Medical Internet Research (impact factor of 4.945), JMIR mHealth and uHealth (4.301) and the Journal of the American Medical Informatics Association (4.292). Competencies, education and certification In the United States, clinical informatics is a subspecialty within several medical specialties. For example, in pathology, the American Board of Pathology offers clinical informatics certification for pathologists who have completed 24 months of related training, and the American Board of Preventive Medicine offers clinical informatics certification within preventive medicine. In October 2011 the American Board of Medical Specialties (ABMS), the organization overseeing the certification of specialist MDs in the United States, announced the creation of an MD-only physician certification in clinical informatics. The first examination for board certification in the subspecialty of clinical informatics was offered in October 2013 by the American Board of Preventive Medicine (ABPM), with 432 passing to become the 2014 inaugural class of Diplomates in clinical informatics. Fellowship programs exist for physicians who wish to become board-certified in clinical informatics. Physicians must have graduated from a medical school in the United States or Canada, or a school located elsewhere that is approved by the ABPM. 
In addition, they must complete a primary residency program such as Internal Medicine (or any of the 24 specialties recognized by the ABMS) and be eligible for licensure to practice medicine in the state where their fellowship program is located. The fellowship program is 24 months in length, with fellows dividing their time among informatics rotations, didactics, research, and clinical work in their primary specialty. See also Related concepts Clinical documentation improvement Continuity of care record (CCR) Diagnosis-related group (DRG) eHealth Health information exchange (HIE) Health information management (HIM) Human resources for health (HRH) information system International Classification of Diseases (ICD) National minimum dataset Neuroinformatics Nosology Nursing documentation Personal health record (PHR) Clinical data standards DICOM Health Metrics Network Health network surveillance HL7 Fast Healthcare Interoperability Resources (FHIR) Integrating the Healthcare Enterprise Omaha System openEHR SNOMED xDT Algorithms Datafly algorithm Governance References Further reading External links
Health informatics
[ "Biology" ]
12,250
[ "Health informatics", "Medical technology" ]
351,583
https://en.wikipedia.org/wiki/Liquid%20helium
Liquid helium is a physical state of helium at very low temperatures at standard atmospheric pressures. Liquid helium may show superfluidity. At standard pressure, the chemical element helium exists in a liquid form only at the extremely low temperature of about −269 °C (approximately 4.2 K). Its boiling point and critical point depend on the isotope of helium present: the common isotope helium-4 or the rare isotope helium-3. These are the only two stable isotopes of helium. See the table below for the values of these physical quantities. The density of liquid helium-4 at its boiling point and a pressure of one atmosphere (101.3 kilopascals) is about 125 grams per liter (0.125 gram per milliliter), or about one-eighth the density of liquid water. Liquefaction Helium was first liquefied on July 10, 1908, by the Dutch physicist Heike Kamerlingh Onnes at the University of Leiden in the Netherlands. At that time, helium-3 was unknown because the mass spectrometer had not yet been invented. In more recent decades, liquid helium has been used as a cryogenic refrigerant (which is used in cryocoolers), and liquid helium is produced commercially for use in superconducting magnets such as those used in magnetic resonance imaging (MRI), nuclear magnetic resonance (NMR), magnetoencephalography (MEG), and experiments in physics, such as low temperature Mössbauer spectroscopy. The Large Hadron Collider contains superconducting magnets that are cooled with 120 tonnes of liquid helium. Liquefied helium-3 Helium-3 atoms are fermions, and at very low temperatures they form two-atom Cooper pairs, which are bosonic and condense into a superfluid. These Cooper pairs are substantially larger than the interatomic separation. Characteristics The temperature required to produce liquid helium is low because of the weakness of the attractions between the helium atoms. These interatomic forces in helium are weak to begin with because helium is a noble gas, but the interatomic attractions are reduced even more by the effects of quantum mechanics. These are significant in helium because of its low atomic mass of about four atomic mass units. The zero-point energy of liquid helium is lower if its atoms are less confined by their neighbors. Hence, liquid helium can decrease its ground-state energy through a naturally occurring increase in its average interatomic distance. At greater distances, however, the effects of the interatomic forces in helium are even weaker. Because of the very weak interatomic forces in helium, the element remains a liquid at atmospheric pressure all the way from its liquefaction point down to absolute zero. At temperatures below their liquefaction points, both helium-4 and helium-3 undergo transitions to superfluids. (See the table below.) Liquid helium can be solidified only at very low temperatures and high pressures. Liquid helium-4 and the rare helium-3 are not completely miscible. Below 0.9 kelvin at their saturated vapor pressure, a mixture of the two isotopes undergoes a phase separation into a normal fluid (mostly helium-3) that floats on a denser superfluid consisting mostly of helium-4. This phase separation happens because the overall mass of liquid helium can reduce its thermodynamic enthalpy by separating. At extremely low temperatures, the superfluid phase, rich in helium-4, can contain up to 6% helium-3 in solution. This makes the small-scale use of the dilution refrigerator possible, which is capable of reaching temperatures of a few millikelvins. Superfluid helium-4 has substantially different properties from ordinary liquid helium. 
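The role of helium's low atomic mass can be made semi-quantitative with a rough particle-in-a-box estimate (a heuristic only, not a quantitative theory of the liquid). The ground-state, or zero-point, energy of a particle of mass $m$ confined to a region of width $L$ is

$$E_0 = \frac{h^2}{8mL^2},$$

so the zero-point energy grows as the mass shrinks and falls as the confinement length grows. This is why the light helium atoms retain large zero-point motion, and why the liquid can lower its ground-state energy by adopting a larger average interatomic spacing, as described above.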
History In 1908, Kamerlingh Onnes succeeded in liquefying a small quantity of helium. In 1923, he provided advice to the Canadian physicist John Cunningham McLennan, who was the first to produce quantities of liquid helium almost on demand. Important early work on the characteristics of liquid helium was done by the Soviet physicist Lev Landau, later extended by the American physicist Richard Feynman. In 1961, Vignos and Fairbank reported the existence of a different phase of solid helium-4, designated the gamma phase. It exists over a narrow range of pressures at temperatures between 1.45 and 1.78 K. Data Gallery See also Cryogenics Expansion ratio Industrial gas Liquid nitrogen Liquid oxygen Liquid hydrogen Liquid air Superfluidity Superfluid helium-3 Superfluid helium-4 Supersolid 2008 Large Hadron Collider liquid helium leak References General Freezing Physics: Heike Kamerlingh Onnes and the Quest for Cold, van Delft, Dirk (2007). Edita – The Publishing House of the Royal Netherlands Academy of Arts and Sciences. External links He-3 and He-4 phase diagrams, etc. Helium-3 phase diagram, etc. Onnes's liquefaction of helium Kamerlingh Onnes's 1908 article, online and analyzed on BibNum [for English analysis, click 'à télécharger'] CERN's cryogenic systems. Helium, liquid Coolants Cryogenics Helium Industrial gases Science and technology in the Netherlands Dutch inventions 1908 in science Superfluidity
Liquid helium
[ "Physics", "Chemistry", "Materials_science" ]
1,083
[ "Physical phenomena", "Phase transitions", "Applied and interdisciplinary physics", "Noble gases", "Phases of matter", "Cryogenics", "Nonmetals", "Superfluidity", "Industrial gases", "Condensed matter physics", "Exotic matter", "Chemical process engineering", "Matter", "Fluid dynamics" ]
351,616
https://en.wikipedia.org/wiki/Chyme
Chyme or chymus (from Greek χυμός khymos, "juice") is the semi-fluid mass of partly digested food that is expelled by the stomach, through the pyloric valve, into the duodenum (the beginning of the small intestine). Chyme results from the mechanical and chemical breakdown of a bolus and consists of partially digested food, water, hydrochloric acid, and various digestive enzymes. Chyme slowly passes through the pyloric sphincter and into the duodenum, where the extraction of nutrients begins. Depending on the quantity and contents of the meal, the stomach will digest the food into chyme in anywhere from 40 minutes to 3 hours. With a pH of approximately 2, chyme emerging from the stomach is very acidic. The duodenum secretes a hormone, cholecystokinin (CCK), which causes the gall bladder to contract, releasing alkaline bile into the duodenum. CCK also causes the release of digestive enzymes from the pancreas. The duodenum is a short section of the small intestine located between the stomach and the rest of the small intestine. The duodenum also produces the hormone secretin to stimulate the pancreatic secretion of large amounts of sodium bicarbonate, which then raises the pH of the chyme to 7. The chyme moves through the jejunum and the ileum, where digestion progresses, and the non-useful portion continues onward into the large intestine. The duodenum is protected by a thick layer of mucus and the neutralizing actions of the sodium bicarbonate and bile. At a pH of 7, the enzymes that were present from the stomach are no longer active. The breakdown of any nutrients still present is carried out by anaerobic bacteria, which at the same time help to package the remains. These bacteria also help synthesize vitamin B and vitamin K, which will be absorbed along with other nutrients. Properties Chyme has a low pH that is countered by the production of bile, which helps in the further digestion of food. Chyme is part liquid and part solid: a thick semifluid mass of partially digested food and digestive secretions that is formed in the stomach and small intestine during digestion. Chyme also contains cells from the mouth and esophagus that slough off from the mechanical action of chewing and swallowing. Path of chyme After hours of mechanical and chemical digestion, food has been reduced into chyme. As particles of food become small enough, they are passed out of the stomach at regular intervals into the small intestine, which stimulates the pancreas to release fluid containing a high concentration of bicarbonate. This fluid neutralizes the gastric juices, which can damage the lining of the intestine and result in a duodenal ulcer. Other secretions from the pancreas, gallbladder, liver, and glands in the intestinal wall help in digestion, as these secretions contain a variety of digestive enzymes and chemicals that assist in the breakdown of complex compounds into those that can be absorbed and used by the body. When food particles are sufficiently reduced in size and chemically broken down, they are absorbed by the intestinal wall and transported to the bloodstream. Some food material is passed from the small intestine to the large intestine. In the large intestine, bacteria break down any proteins and starches in chyme that were not digested fully in the small intestine. When all of the nutrients have been absorbed from chyme, the remaining waste material changes into semisolids that are called feces. The feces pass to the rectum, to be stored until ready to be discharged from the body during defecation. 
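The neutralization described above, in which pancreatic sodium bicarbonate raises the pH of the hydrochloric acid arriving from the stomach, corresponds to the standard acid–base reaction

$$\mathrm{HCl + NaHCO_3 \longrightarrow NaCl + H_2O + CO_2\uparrow},$$

in which the carbonic acid formed as an intermediate decomposes into water and carbon dioxide.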
Uses The chyme of an unweaned calf is the defining ingredient of pajata, a traditional Roman recipe. Chyme is sometimes used in Pinapaitan, a bitter Ilocano stew. See also Vomiting References Digestive system Body fluids
Chyme
[ "Biology" ]
843
[ "Digestive system", "Organ systems" ]
351,767
https://en.wikipedia.org/wiki/Toll%20bridge
A toll bridge is a bridge where a monetary charge (or toll) is required to pass over. Generally the private or public owner, builder and maintainer of the bridge uses the toll to recoup their investment, in much the same way as a toll road. History The practice of collecting tolls on bridges harks back to the days of ferry crossings, where people paid a fee to be ferried across stretches of water. As boats became impractical for carrying large loads, ferry operators looked for new sources of revenue. Having built a bridge, they hoped to recoup their investment by charging tolls for people, animals, vehicles, and goods to cross it. The original London Bridge across the river Thames opened as a toll bridge, but an accumulation of funds by the charitable trust that operated the bridge (Bridge House Estates) saw the charges dropped. Using interest on its capital assets, the trust now owns and runs all seven central London bridges at no cost to taxpayers or users. In the United States, private ownership of toll bridges peaked in the mid-19th century, and by the turn of the 20th century most toll bridges had been taken over by state highway departments. In some instances, a quasi-governmental authority was formed, and toll revenue bonds were issued to raise funds for construction or operation (or both) of the facility. Peters and Kramer observed that "little research has been done to quantify the impact of toll collection on society as a whole" and therefore published a comprehensive analysis of the Total Societal Cost (TSC) associated with toll collection as a means of taxation. TSC is the sum of administrative, compliance, fuel and pollution costs. In 2000 they estimated it to be $56,914,732, or 37.3% of revenue collected. They also found that a user of a toll road is subject to a form of triple taxation, and that toll collection is a very inefficient means of funding the development of highway infrastructure. Nakamura and Kockelman (2002) show that tolls are by nature regressive, shifting the burden of taxation disproportionately to the poor and middle classes. Electronic toll collection, branded under names such as E-ZPass, SunPass, I-Pass, FasTrak, Treo, Good to Go, and 407 ETR, became increasingly prevalent in metropolitan areas in the 21st century. Amy Finkelstein, a public finance economist at MIT, reports that as the fraction of drivers using electronic toll collection increased, toll rates typically increased as well, because people were less aware of how much they were paying in tolls. Electronic tolling proposals that represented the shadow price of electronic toll collection (instead of the TSC) may have misled decision-makers. The general public has additionally endured an increased administrative burden associated with paying toll bills and navigating toll collection companies' online billing systems. Additionally, visitors to a region may incur e-toll tag fees imposed by their rental car company. The Paperwork Reduction Act of 1980 identified and attempted to address a similar problem associated with the government collection of information. Approvals were to be secured by government agencies before promulgating a paper form, website, survey or electronic submission that would impose an information collection burden on the general public. However, the act did not anticipate, and thus did not address, the burden on the public associated with funding infrastructure via electronic toll collection instead of through more traditional forms of taxation. 
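As a back-of-the-envelope consistency check on the Peters and Kramer figures quoted above, a TSC of $56,914,732 equal to 37.3% of revenue implies total toll revenue of roughly

$$\text{revenue} \approx \frac{\$56{,}914{,}732}{0.373} \approx \$152.6 \text{ million},$$

meaning that, by their estimate, more than a third of every toll dollar collected was consumed by the cost of collecting it.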
Removal and continuation of tolls In some instances, tolls have been removed after retirement of the toll revenue bonds issued to raise funds. Examples include the Robert E. Lee Memorial Bridge in Richmond, Virginia, which carries U.S. Route 1 across the James River, and the 4.5-mile-long James River Bridge 80 miles downstream, which carries U.S. Highway 17 across the river of the same name near its mouth at Hampton Roads. In other cases, especially at major facilities such as the Chesapeake Bay Bridge near Annapolis, Maryland, and the George Washington Bridge over the Hudson River between New York City and New Jersey, the continued collection of tolls provides a dedicated source of funds for ongoing maintenance and improvements. Sometimes citizens revolt against toll plazas, as was the case in Jacksonville, Florida. Tolls were in place on four bridges crossing the St. Johns River, including I-95. These tolls paid for the respective bridges as well as many other highway projects. As Jacksonville continued to grow, the tolls created bottlenecks on the roadway. In 1988, Jacksonville voters chose to eliminate all the toll booths and replace the revenue with a ½-cent sales tax increase. In 1989, the toll booths were removed, 36 years after the first toll booth went up. In Scotland, the Scottish Parliament purchased the Skye Bridge from its owners in late 2004, ending the requirement to pay an unpopular and expensive toll to cross to Skye from the mainland. In 2004, the German government cancelled a contract with the "Toll Collect" syndicate after much negative publicity. The term "Toll Collect" became a popular byword among Germans for everything wrong with their national economy. Toll collection It has become increasingly common for a toll bridge to charge a fee in only one direction. This helps reduce traffic congestion in the free direction and generally does not significantly reduce revenue, especially when those travelling in one direction must come back over the same or a different toll bridge. Toll avoidance: shunpiking A practice known as shunpiking evolved, which entails finding another route for the specific purpose of avoiding payment of tolls. In some situations where tolls were increased or felt to be unreasonably high, informal shunpiking by individuals escalated into a form of boycott by regular users, with the goal of applying the financial stress of lost toll revenue to the authority determining the levy. One such example of shunpiking as a form of boycott occurred at the James River Bridge in eastern Virginia. After years of lower-than-anticipated revenues on the narrow, privately funded structure built in 1928, the state of Virginia finally purchased the facility in 1949 and increased the tolls in 1955 without visibly improving the roadway, with the notable exception of a new toll plaza. The increased toll rates incensed the public and business users alike. Joseph W. Luter Jr., head of Smithfield Packing Company, the producer of Smithfield Hams, ordered his truck drivers to take a different route and cross a smaller and cheaper bridge. Tolls continued for 20 more years, and were finally removed from the old bridge in 1976. 
Historic examples of toll bridges England London Bridge The Humber Bridge: Previously the world's longest single-span suspension bridge, the Humber Bridge links the counties of Yorkshire and Lincolnshire near the port city of Kingston upon Hull. Clifton Suspension Bridge, Bristol Wandsworth Bridge: Originally designed by Julian Tolmé in 1873, it was a toll bridge until it was taken into public ownership in 1880 and made toll-free. Ireland Ha'penny Bridge: This cast iron pedestrian bridge was built in 1816 over the River Liffey in Dublin and takes its name from the historical toll amount (a half-penny). North America Ambassador Bridge between Detroit, Michigan, and Windsor, Ontario, Canada; a bridge privately built in 1929. Collins Bridge, longest wooden bridge in the world when opened in 1913, across Biscayne Bay between Miami on the mainland and the barrier island which became Miami Beach, Florida. George Rogers Clark Memorial Bridge crossing the Ohio River between Louisville, Kentucky, and Clarksville, Indiana. Opened as a toll bridge in 1929; tolls removed in 1946. James River Bridge, longest bridge over water in the world when completed in 1928, across the James River between then-Warwick County and Isle of Wight County near Hampton Roads. Overseas Highway between mainland Florida and Key West, Florida. Built on the former alignment of the Key West Extension of the Florida East Coast Railway, it included the Seven Mile Bridge. San Francisco–Oakland Bay Bridge between Oakland and San Francisco. Golden Gate Bridge between San Francisco and Marin County. Mackinac Bridge connecting the Upper Peninsula and Lower Peninsula of Michigan at the Strait of Mackinac, which links Lake Michigan and Lake Huron. Sunshine Skyway Bridge between Tampa, Florida, and St. Petersburg, Florida See also List of toll bridges References Bridges Car costs
Toll bridge
[ "Engineering" ]
1,682
[ "Structural engineering", "Bridges" ]
351,769
https://en.wikipedia.org/wiki/Robertson%E2%80%93Seymour%20theorem
In graph theory, the Robertson–Seymour theorem (also called the graph minor theorem) states that the undirected graphs, partially ordered by the graph minor relationship, form a well-quasi-ordering. Equivalently, every family of graphs that is closed under minors can be defined by a finite set of forbidden minors, in the same way that Wagner's theorem characterizes the planar graphs as being the graphs that do not have the complete graph K5 or the complete bipartite graph K3,3 as minors. The Robertson–Seymour theorem is named after mathematicians Neil Robertson and Paul D. Seymour, who proved it in a series of twenty papers spanning over 500 pages from 1983 to 2004. Before its proof, the statement of the theorem was known as Wagner's conjecture after the German mathematician Klaus Wagner, although Wagner said he never conjectured it. A weaker result for trees is implied by Kruskal's tree theorem, which was conjectured in 1937 by Andrew Vázsonyi and proved in 1960 independently by Joseph Kruskal and S. Tarkowski. Statement A minor of an undirected graph G is any graph that may be obtained from G by a sequence of zero or more contractions of edges of G and deletions of edges and vertices of G. The minor relationship forms a partial order on the set of all distinct finite undirected graphs, as it obeys the three axioms of partial orders: it is reflexive (every graph is a minor of itself), transitive (a minor of a minor of G is itself a minor of G), and antisymmetric (if two graphs G and H are minors of each other, then they must be isomorphic). However, if graphs that are isomorphic may nonetheless be considered as distinct objects, then the minor ordering on graphs forms a preorder, a relation that is reflexive and transitive but not necessarily antisymmetric. A preorder is said to form a well-quasi-ordering if it contains neither an infinite descending chain nor an infinite antichain. For instance, the usual ordering on the non-negative integers is a well-quasi-ordering, but the same ordering on the set of all integers is not, because it contains the infinite descending chain 0, −1, −2, −3... Another example is the set of positive integers ordered by divisibility, which has no infinite descending chains, but where the prime numbers constitute an infinite antichain. The Robertson–Seymour theorem states that finite undirected graphs and graph minors form a well-quasi-ordering. The graph minor relationship does not contain any infinite descending chain, because each contraction or deletion reduces the number of edges and vertices of the graph (a non-negative integer). The nontrivial part of the theorem is that there are no infinite antichains, infinite sets of graphs that are all unrelated to each other by the minor ordering. If S is a set of graphs, and M is a subset of S containing one representative graph for each equivalence class of minimal elements (graphs that belong to S but for which no proper minor belongs to S), then M forms an antichain; therefore, an equivalent way of stating the theorem is that, in any infinite set S of graphs, there must be only a finite number of non-isomorphic minimal elements. Another equivalent form of the theorem is that, in any infinite set S of graphs, there must be a pair of graphs one of which is a minor of the other. 
The statement that every infinite set has finitely many minimal elements implies this form of the theorem, for if there are only finitely many minimal elements, then each of the remaining graphs must belong to a pair of this type with one of the minimal elements. And in the other direction, this form of the theorem implies the statement that there can be no infinite antichains, because an infinite antichain is a set that does not contain any pair related by the minor relation. Forbidden minor characterizations A family F of graphs is said to be closed under the operation of taking minors if every minor of a graph in F also belongs to F. If F is a minor-closed family, then let S be the class of graphs that are not in F (the complement of F). According to the Robertson–Seymour theorem, there exists a finite set H of minimal elements in S. These minimal elements form a forbidden graph characterization of F: the graphs in F are exactly the graphs that do not have any graph in H as a minor. The members of H are called the excluded minors (or forbidden minors, or minor-minimal obstructions) for the family F. For example, the planar graphs are closed under taking minors: contracting an edge in a planar graph, or removing edges or vertices from the graph, cannot destroy its planarity. Therefore, the planar graphs have a forbidden minor characterization, which in this case is given by Wagner's theorem: the set H of minor-minimal nonplanar graphs contains exactly two graphs, the complete graph K5 and the complete bipartite graph K3,3, and the planar graphs are exactly the graphs that do not have a minor in the set {K5, K3,3}. The existence of forbidden minor characterizations for all minor-closed graph families is an equivalent way of stating the Robertson–Seymour theorem. For, suppose that every minor-closed family F has a finite set H of minimal forbidden minors, and let S be any infinite set of graphs. Define F from S as the family of graphs that do not have a minor in S. Then F is minor-closed and has a finite set H of minimal forbidden minors. Let C be the complement of F. S is a subset of C since S and F are disjoint, and H are the minimal graphs in C. Consider a graph G in H. G cannot have a proper minor in S since G is minimal in C. At the same time, G must have a minor in S, since otherwise G would be an element in F. Therefore, G is an element in S, i.e., H is a subset of S, and all other graphs in S have a minor among the graphs in H, so H is the finite set of minimal elements of S. For the other implication, assume that every set of graphs has a finite subset of minimal graphs and let a minor-closed set F be given. We want to find a set H of graphs such that a graph is in F if and only if it does not have a minor in H. Let E be the graphs which are not minors of any graph in F, and let H be the finite set of minimal graphs in E. Now, let an arbitrary graph G be given. Assume first that G is in F. G cannot have a minor in H since G is in F and H is a subset of E. Now assume that G is not in F. Then G is not a minor of any graph in F, since F is minor-closed. Therefore, G is in E, so G has a minor in H. 
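To make the forbidden-minor characterization concrete, here is a brute-force check of the minor relation, usable only for very small graphs. This is an illustrative sketch, not the Robertson–Seymour algorithm; the function name is invented for the example, and the networkx library is assumed to be available. It searches for disjoint connected "branch sets" in g, one per vertex of h, such that every edge of h is realized between the corresponding branch sets, which is exactly the definition of a minor for simple graphs.

```python
from itertools import product
import networkx as nx

def is_minor(h, g):
    """Brute-force minor test: assign each vertex of g to one of h's
    branch sets (or leave it unused), then check that the branch sets
    are nonempty, connected, and realize every edge of h.  Runs in
    O((k+1)^n) time, so it is only feasible for tiny graphs."""
    hv, gv = list(h.nodes), list(g.nodes)
    k = len(hv)
    for assign in product(range(-1, k), repeat=len(gv)):
        branch = [[gv[j] for j, a in enumerate(assign) if a == i]
                  for i in range(k)]
        if any(not b for b in branch):
            continue  # every branch set must be nonempty
        if any(not nx.is_connected(g.subgraph(b)) for b in branch):
            continue  # every branch set must induce a connected subgraph
        if all(any(g.has_edge(x, y)
                   for x in branch[hv.index(u)]
                   for y in branch[hv.index(v)])
               for u, v in h.edges):
            return True
    return False

# A 5-cycle has a triangle (K3) as a minor (contract two edges),
# but no K4 minor, consistent with its treewidth of 2.
c5 = nx.cycle_graph(5)
print(is_minor(nx.complete_graph(3), c5))  # True
print(is_minor(nx.complete_graph(4), c5))  # False
```

Wagner's theorem, discussed above, turns this primitive into a planarity test in principle: a graph is planar exactly when neither K5 nor K3,3 passes such a minor check, although practical planarity algorithms work very differently.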
Examples of minor-closed families The following sets of finite graphs are minor-closed, and therefore (by the Robertson–Seymour theorem) have forbidden minor characterizations: forests, linear forests (disjoint unions of path graphs), pseudoforests, and cactus graphs; planar graphs, outerplanar graphs, apex graphs (formed by adding a single vertex to a planar graph), toroidal graphs, and the graphs that can be embedded on any fixed two-dimensional manifold; graphs that are linklessly embeddable in Euclidean 3-space, and graphs that are knotlessly embeddable in Euclidean 3-space; graphs with a feedback vertex set of size bounded by some fixed constant; graphs with Colin de Verdière graph invariant bounded by some fixed constant; graphs with treewidth, pathwidth, or branchwidth bounded by some fixed constant. Obstruction sets Some examples of finite obstruction sets were already known for specific classes of graphs before the Robertson–Seymour theorem was proved. For example, the obstruction for the set of all forests is the loop graph (or, if one restricts to simple graphs, the cycle with three vertices). This means that a graph is a forest if and only if none of its minors is the loop (or, the cycle with three vertices, respectively). The sole obstruction for the set of paths is the tree with four vertices, one of which has degree 3. In these cases, the obstruction set contains a single element, but in general this is not the case. Wagner's theorem states that a graph is planar if and only if it has neither K5 nor K3,3 as a minor. In other words, the set {K5, K3,3} is an obstruction set for the set of all planar graphs, and in fact the unique minimal obstruction set. A similar theorem states that K4 and K2,3 are the forbidden minors for the set of outerplanar graphs. Although the Robertson–Seymour theorem extends these results to arbitrary minor-closed graph families, it is not a complete substitute for these results, because it does not provide an explicit description of the obstruction set for any family. For example, it tells us that the set of toroidal graphs has a finite obstruction set, but it does not provide any such set. The complete set of forbidden minors for toroidal graphs remains unknown, but it contains at least 17,535 graphs. Polynomial time recognition The Robertson–Seymour theorem has an important consequence in computational complexity, due to the proof by Robertson and Seymour that, for each fixed graph h, there is a polynomial-time algorithm for testing whether a graph has h as a minor. This algorithm's running time is cubic (in the size of the graph to check), though with a constant factor that depends superpolynomially on the size of the minor h. The running time has been improved to quadratic by Kawarabayashi, Kobayashi, and Reed. As a result, for every minor-closed family F, there is a polynomial-time algorithm for testing whether a graph belongs to F: simply check, for each forbidden minor h in F's obstruction set, whether the given graph has h as a minor. However, this method requires a specific finite obstruction set to work, and the theorem does not provide one. The theorem proves that such a set exists, so membership in F can in principle be decided in polynomial time, but without an explicit obstruction set the method cannot be applied in practice: the theorem guarantees a polynomial-time algorithm without providing a concrete one.
Such proofs of polynomiality are non-constructive: they prove polynomiality of problems without providing an explicit polynomial-time algorithm. In many specific cases, checking whether a graph is in a given minor-closed family can be done more efficiently: for example, checking whether a graph is planar can be done in linear time. Fixed-parameter tractability For graph invariants with the property that, for each k, the graphs with invariant at most k are minor-closed, the same method applies. For instance, by this result, treewidth, branchwidth, pathwidth, vertex cover, and the minimum genus of an embedding are all amenable to this approach, and for any fixed k there is a polynomial-time algorithm for testing whether these invariants are at most k, in which the exponent in the running time of the algorithm does not depend on k. A problem with this property, that it can be solved in polynomial time for any fixed k with an exponent that does not depend on k, is known as fixed-parameter tractable. However, this method does not directly provide a single fixed-parameter-tractable algorithm for computing the parameter value for a given graph with unknown k, because of the difficulty of determining the set of forbidden minors. Additionally, the large constant factors involved in these results make them highly impractical. Therefore, the development of explicit fixed-parameter algorithms for these problems, with improved dependence on k, has continued to be an important line of research. Finite form of the graph minor theorem Harvey Friedman, Neil Robertson, and Paul Seymour showed that the following theorem exhibits the independence phenomenon by being unprovable in various formal systems that are much stronger than Peano arithmetic, yet being provable in systems much weaker than ZFC: Theorem: For every positive integer n, there is an integer m so large that if G1, ..., Gm is a sequence of finite undirected graphs, where each Gi has size at most n+i, then Gj ≤ Gk for some j < k. (Here, the size of a graph is the total number of its vertices and edges, and ≤ denotes the minor ordering.) See also Graph structure theorem Notes References External links Graph minor theory Wellfoundedness Theorems in graph theory
Robertson–Seymour theorem
[ "Mathematics" ]
2,689
[ "Mathematical induction", "Wellfoundedness", "Graph theory", "Theorems in graph theory", "Theorems in discrete mathematics", "Mathematical relations", "Order theory", "Graph minor theory" ]
351,798
https://en.wikipedia.org/wiki/Neighbor%20joining
In bioinformatics, neighbor joining is a bottom-up (agglomerative) clustering method for the creation of phylogenetic trees, created by Naruya Saitou and Masatoshi Nei in 1987. Usually based on DNA or protein sequence data, the algorithm requires knowledge of the distance between each pair of taxa (e.g., species or sequences) to create the phylogenetic tree.

The algorithm
Neighbor joining takes a distance matrix, which specifies the distance between each pair of taxa, as input. The algorithm starts with a completely unresolved tree, whose topology corresponds to that of a star network, and iterates over the following steps, until the tree is completely resolved, and all branch lengths are known:
Based on the current distance matrix, calculate a matrix Q (defined below).
Find the pair of distinct taxa i and j (i.e. with i ≠ j) for which Q(i, j) is smallest. Make a new node that joins the taxa i and j, and connect the new node to the central node. For example, in part (B) of the figure at right, node u is created to join f and g.
Calculate the distance from each of the taxa in the pair to this new node.
Calculate the distance from each of the taxa outside of this pair to the new node.
Start the algorithm again, replacing the pair of joined neighbors with the new node and using the distances calculated in the previous step.

The Q-matrix
Based on a distance matrix relating the n taxa, calculate the n × n matrix Q as follows:

Q(i, j) = (n − 2) d(i, j) − Σ_k d(i, k) − Σ_k d(j, k)   (1)

where d(i, j) is the distance between taxa i and j, and each sum runs over k = 1, ..., n.

Distance from the pair members to the new node
For each of the taxa in the pair being joined, use the following formula to calculate the distance to the new node:

δ(f, u) = ½ d(f, g) + 1/(2(n − 2)) [Σ_k d(f, k) − Σ_k d(g, k)]   (2)

and:

δ(g, u) = d(f, g) − δ(f, u)

Taxa f and g are the paired taxa and u is the newly created node. The branches joining f and u and g and u, and their lengths, δ(f, u) and δ(g, u), are part of the tree which is gradually being created; they neither affect nor are affected by later neighbor-joining steps.

Distance of the other taxa from the new node
For each taxon k not considered in the previous step, we calculate the distance to the new node as follows:

d(u, k) = ½ [d(f, k) + d(g, k) − d(f, g)]   (3)

where u is the new node, k is the node which we want to calculate the distance to, and f and g are the members of the pair just joined.

Complexity
Neighbor joining on a set of n taxa requires n − 3 iterations. At each step one has to build and search a Q matrix. Initially the Q matrix is size n × n, then the next step it is (n − 1) × (n − 1), etc. Implementing this in a straightforward way leads to an algorithm with a time complexity of O(n³); implementations exist which use heuristics to do much better than this on average.
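To make equations (1)–(3) concrete before the worked example that follows, here is a minimal Python sketch of a single neighbor-joining iteration. The function name is invented for this illustration and numpy is assumed; a production implementation would use heuristics (as noted above) rather than this direct scan of the full Q matrix at every step.

```python
import numpy as np

def neighbor_joining_step(d, labels):
    """One neighbor-joining iteration on distance matrix d.

    Returns the reduced matrix, the updated labels, and the pair
    joined together with its two branch lengths (equations 1-3)."""
    n = d.shape[0]
    row_sums = d.sum(axis=1)
    # Equation (1): Q(i, j) = (n - 2) d(i, j) - sum_k d(i, k) - sum_k d(j, k)
    q = (n - 2) * d - row_sums[:, None] - row_sums[None, :]
    np.fill_diagonal(q, np.inf)          # ignore i == j
    i, j = np.unravel_index(np.argmin(q), q.shape)
    # Equation (2): branch lengths from the joined taxa to the new node u.
    limb_i = 0.5 * d[i, j] + (row_sums[i] - row_sums[j]) / (2 * (n - 2))
    limb_j = d[i, j] - limb_i
    # Equation (3): distances from every remaining taxon k to u.
    others = [k for k in range(n) if k not in (i, j)]
    new_row = 0.5 * (d[i, others] + d[j, others] - d[i, j])
    # Build the reduced (n - 1) x (n - 1) matrix with u replacing i and j.
    d2 = np.zeros((n - 1, n - 1))
    d2[:-1, :-1] = d[np.ix_(others, others)]
    d2[-1, :-1] = d2[:-1, -1] = new_row
    new_labels = [labels[k] for k in others] + [f"({labels[i]},{labels[j]})"]
    return d2, new_labels, (labels[i], limb_i, labels[j], limb_j)

labels = list("abcde")
d = np.array([[0, 5, 9, 9, 8],
              [5, 0, 10, 10, 9],
              [9, 10, 0, 8, 7],
              [9, 10, 8, 0, 3],
              [8, 9, 7, 3, 0]], dtype=float)
d, labels, join = neighbor_joining_step(d, labels)
print(join)  # ('a', 2.0, 'b', 3.0), matching the worked example below
```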
Example
Let us assume that we have five taxa (a, b, c, d, e) and the following distance matrix D1:

      a   b   c   d   e
 a    0   5   9   9   8
 b    5   0  10  10   9
 c    9  10   0   8   7
 d    9  10   8   0   3
 e    8   9   7   3   0

First step
First joining
We calculate the Q1 values by equation (1). For example:

Q1(a, b) = (5 − 2) d(a, b) − Σ_k d(a, k) − Σ_k d(b, k) = 3 × 5 − 31 − 34 = −50

We obtain the following values for the Q1 matrix (the diagonal elements of the matrix are not used and are omitted here):

      a    b    c    d    e
 a        −50  −38  −34  −34
 b   −50       −38  −34  −34
 c   −38  −38       −40  −40
 d   −34  −34  −40       −48
 e   −34  −34  −40  −48

In the example above, Q1(a, b) = −50. This is the smallest value of Q1, so we join elements a and b.

First branch length estimation
Let u denote the new node. By equation (2), above, the branches joining a and b to u then have lengths:

δ(a, u) = ½ d(a, b) + 1/(2(5 − 2)) [Σ_k d(a, k) − Σ_k d(b, k)] = 2.5 + (31 − 34)/6 = 2
δ(b, u) = d(a, b) − δ(a, u) = 5 − 2 = 3

First distance matrix update
We then proceed to update the initial distance matrix D1 into a new distance matrix D2 (see below), reduced in size by one row and one column because of the joining of a with b into their neighbor u. Using equation (3) above, we compute the distance from u to each of the other nodes besides a and b. In this case, we obtain:

d(u, c) = ½ [d(a, c) + d(b, c) − d(a, b)] = ½ (9 + 10 − 5) = 7
d(u, d) = ½ (9 + 10 − 5) = 7
d(u, e) = ½ (8 + 9 − 5) = 6

The resulting distance matrix D2 is:

      u   c   d   e
 u    0   7   7   6
 c    7   0   8   7
 d    7   8   0   3
 e    6   7   3   0

The distances to u are the newly calculated values, whereas the remaining entries are not affected by the matrix update, as they correspond to distances between elements not involved in the first joining of taxa.

Second step
Second joining
The corresponding Q2 matrix is:

      u    c    d    e
 u        −28  −24  −24
 c   −28       −24  −24
 d   −24  −24       −28
 e   −24  −24  −28

We may choose either to join u and c, or to join d and e; both pairs have the minimal Q2 value of −28, and either choice leads to the same result. For concreteness, let us join u and c and call the new node v.

Second branch length estimation
The lengths of the branches joining u and c to v can be calculated:

δ(u, v) = ½ d(u, c) + 1/(2(4 − 2)) [Σ_k d(u, k) − Σ_k d(c, k)] = 3.5 + (20 − 22)/4 = 3
δ(c, v) = d(u, c) − δ(u, v) = 7 − 3 = 4

The joining of the elements and the branch length calculation help drawing the neighbor joining tree as shown in the figure.

Second distance matrix update
The updated distance matrix D3 for the remaining 3 nodes, v, d, and e, is now computed:

d(v, d) = ½ [d(u, d) + d(c, d) − d(u, c)] = ½ (7 + 8 − 7) = 4
d(v, e) = ½ [d(u, e) + d(c, e) − d(u, c)] = ½ (6 + 7 − 7) = 3

Final step
The tree topology is fully resolved at this point. However, for clarity, we can calculate the Q3 matrix. For example:

Q3(v, d) = (3 − 2) d(v, d) − Σ_k d(v, k) − Σ_k d(d, k) = 4 − 7 − 7 = −10

For concreteness, let us join v and d and call the last node w. The lengths of the three remaining branches can be calculated:

δ(v, w) = ½ d(v, d) + 1/(2(3 − 2)) [Σ_k d(v, k) − Σ_k d(d, k)] = 2 + (7 − 7)/2 = 2
δ(d, w) = d(v, d) − δ(v, w) = 4 − 2 = 2
δ(e, w) = ½ [d(v, e) + d(d, e) − d(v, d)] = ½ (3 + 3 − 4) = 1

The neighbor joining tree is now complete, as shown in the figure.

Conclusion: additive distances
This example represents an idealized case: note that if we move from any taxon to any other along the branches of the tree, and sum the lengths of the branches traversed, the result is equal to the distance between those taxa in the input distance matrix. For example, going from a to e we have δ(a, u) + δ(u, v) + δ(v, w) + δ(e, w) = 2 + 3 + 2 + 1 = 8 = d(a, e). A distance matrix whose distances agree in this way with some tree is said to be 'additive', a property which is rare in practice. Nonetheless it is important to note that, given an additive distance matrix as input, neighbor joining is guaranteed to find the tree whose distances between taxa agree with it.

Neighbor joining as minimum evolution
Neighbor joining may be viewed as a greedy heuristic for the balanced minimum evolution (BME) criterion. For each topology, BME defines the tree length (sum of branch lengths) to be a particular weighted sum of the distances in the distance matrix, with the weights depending on the topology. The BME optimal topology is the one which minimizes this tree length. NJ at each step greedily joins that pair of taxa which will give the greatest decrease in the estimated tree length. This procedure does not guarantee to find the optimum for the BME criterion, although it often does and is usually quite close.

Advantages and disadvantages
The main virtue of NJ is that it is fast as compared to least squares, maximum parsimony and maximum likelihood methods. This makes it practical for analyzing large data sets (hundreds or thousands of taxa) and for bootstrapping, for which purposes other means of analysis (e.g. maximum parsimony, maximum likelihood) may be computationally prohibitive.

Neighbor joining has the property that if the input distance matrix is correct, then the output tree will be correct. Furthermore, the correctness of the output tree topology is guaranteed as long as the distance matrix is 'nearly additive', specifically if each entry in the distance matrix differs from the true distance by less than half of the shortest branch length in the tree. In practice the distance matrix rarely satisfies this condition, but neighbor joining often constructs the correct tree topology anyway.
The correctness of neighbor joining for nearly additive distance matrices implies that it is statistically consistent under many models of evolution; given data of sufficient length, neighbor joining will reconstruct the true tree with high probability. Compared with UPGMA and WPGMA, neighbor joining has the advantage that it does not assume all lineages evolve at the same rate (molecular clock hypothesis). Nevertheless, neighbor joining has been largely superseded by phylogenetic methods that do not rely on distance measures and offer superior accuracy under most conditions. Neighbor joining has the undesirable feature that it often assigns negative lengths to some of the branches. Implementations and variants There are many programs available implementing neighbor joining. Among implementations of canonical NJ (i.e. using the classical NJ optimisation criteria, therefore giving the same results), RapidNJ (started 2003, major update in 2011, still updated in 2023) and NINJA (started 2009, last update 2013) are considered state-of-the-art. They have typical run times proportional to approximately the square of the number of taxa. Variants that deviate from canonical include: BIONJ (1997) and Weighbor (2000), improving on the accuracy by making use of the fact that the shorter distances in the distance matrix are generally better known than the longer distances. The two methods have been extended to run on incomplete distance matrices. "Fast NJ" remembers the best node and is O(n^2) always; "relax NJ" performs a hill-climbing search and retains the worst-case complexity of O(n^3). Rapid NJ is faster than plain relaxed NJ. FastME is an implementation of the closely related balanced minimum evolution (BME) method (see ). It is about as fast as and more accurate than NJ. It starts with a rough tree then improves it using a set of topological moves such as Nearest Neighbor Interchanges (NNI). FastTree is a related method. It works on sequence "profiles" instead of a matrix. It starts with an approximately NJ tree, rearranges it into BME, then rearranges it into approximate maximum-likelihood. See also Nearest neighbor search UPGMA and WPGMA Minimum Evolution References Other sources External links The Neighbor-Joining Method — a tutorial Bioinformatics algorithms Phylogenetics Computational phylogenetics Cluster analysis algorithms
Neighbor joining
[ "Biology" ]
1,825
[ "Genetics techniques", "Computational phylogenetics", "Bioinformatics algorithms", "Taxonomy (biology)", "Bioinformatics", "Phylogenetics" ]
351,806
https://en.wikipedia.org/wiki/Fractionating%20column
A fractionating column or fractional column is equipment used in the distillation of liquid mixtures to separate the mixture into its component parts, or fractions, based on their differences in volatility. Fractionating columns are used in small-scale laboratory distillations as well as large-scale industrial distillations. Laboratory fractionating columns A laboratory fractionating column is a piece of glassware used to separate vaporized mixtures of liquid compounds with close volatility. Most commonly used is either a Vigreux column or a straight column packed with glass beads or metal pieces such as Raschig rings. Fractionating columns help to separate the mixture by allowing the mixed vapors to cool, condense, and vaporize again in accordance with Raoult's law. With each condensation-vaporization cycle, the vapors are enriched in a certain component. A larger surface area allows more cycles, improving separation. This is the rationale for a Vigreux column or a packed fractionating column. Spinning band distillation achieves the same outcome by using a rotating band within the column to force the rising vapors and descending condensate into close contact, achieving equilibrium more quickly. In a typical fractional distillation, a liquid mixture is heated in the distilling flask, and the resulting vapor rises up the fractionating column (see Figure 1). The vapor condenses on glass spurs (known as theoretical trays or theoretical plates) inside the column, and returns to the distilling flask, refluxing the rising distillate vapor. The hottest tray is at the bottom of the column and the coolest tray is at the top. At steady-state conditions, the vapor and liquid on each tray reach an equilibrium. Only the most volatile of the vapors stays in gas form all the way to the top, where it may then proceed through a condenser, which cools the vapor until it condenses into a liquid distillate. The separation may be enhanced by the addition of more trays (to a practical limitation of heat, flow, etc.). Industrial fractionating columns Fractional distillation is one of the unit operations of chemical engineering. Fractionating columns are widely used in chemical process industries where large quantities of liquids have to be distilled. Such industries are petroleum processing, petrochemical production, natural gas processing, coal tar processing, brewing, liquefied air separation, and hydrocarbon solvents production. Fractional distillation finds its widest application in petroleum refineries. In such refineries, the crude oil feedstock is a complex, multicomponent mixture that must be separated. Yields of pure chemical compounds are generally not expected, however, yields of groups of compounds within a relatively small range of boiling points, also called fractions, are expected. This process is the origin of the name fractional distillation or fractionation. Distillation is one of the most common and energy-intensive separation processes. Effectiveness of separation is dependent upon the height and diameter of the column, the ratio of the column's height to diameter, and the material that comprises the distillation column itself. In a typical chemical plant, it accounts for about 40% of the total energy consumption. 
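The stage-by-stage enrichment described for laboratory columns can be quantified for an ideal binary mixture obeying Raoult's law. With a constant relative volatility α, the vapor in equilibrium with liquid of mole fraction x has mole fraction y = αx / (1 + (α − 1)x); iterating this relation one equilibrium stage at a time (the idealized total-reflux picture, in which the condensate of one stage feeds the next) shows how each condensation–vaporization cycle enriches the more volatile component. A minimal sketch; the function name and the α value are illustrative assumptions:

```python
def equilibrium_vapor(x, alpha):
    """Vapor mole fraction in equilibrium with liquid mole fraction x
    for an ideal binary mixture with constant relative volatility alpha
    (a standard consequence of Raoult's law)."""
    return alpha * x / (1 + (alpha - 1) * x)

x = 0.10      # 10% of the more volatile component in the pot
alpha = 2.5   # illustrative relative volatility
for stage in range(1, 6):
    x = equilibrium_vapor(x, alpha)  # each cycle re-equilibrates the condensate
    print(f"after stage {stage}: {x:.2f}")
# Prints roughly 0.22, 0.41, 0.63, 0.81, 0.92: the mole fraction climbs
# toward 1.0, one theoretical plate at a time.
```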
Industrial distillation is typically performed in large, vertical cylindrical columns (as shown in Figure 2) known as "distillation towers" or "distillation columns" with diameters ranging from about 65 centimeters to 6 meters and heights ranging from about 6 meters to 60 meters or more. Industrial distillation towers are usually operated at a continuous steady state. Unless disturbed by changes in feed, heat, ambient temperature, or condensing, the amount of feed being added normally equals the amount of product being removed. The amount of heat entering the column from the reboiler and with the feed must equal the amount of heat removed by the overhead condenser and with the products. The heat entering a distillation column is a crucial operating parameter; the addition of excess or insufficient heat to the column can lead to foaming, weeping, entrainment, or flooding. Figure 3 depicts an industrial fractionating column separating a feed stream into one distillate fraction and one bottoms fraction. However, many industrial fractionating columns have outlets at intervals up the column so that multiple products having different boiling ranges may be withdrawn from a column distilling a multi-component feed stream. The "lightest" products with the lowest boiling points exit from the top of the columns and the "heaviest" products with the highest boiling points exit from the bottom. Industrial fractionating columns use external reflux to achieve better separation of products. Reflux refers to the portion of the condensed overhead liquid product that returns to the upper part of the fractionating column as shown in Figure 3. Inside the column, the downflowing reflux liquid provides cooling and condensation of upflowing vapors, thereby increasing the efficacy of the distillation tower. The more reflux and/or more trays provided, the better the tower's separation of lower boiling materials from higher boiling materials. The design and operation of a fractionating column depends on the composition of the feed as well as the composition of the desired products. Given a simple, binary component feed, analytical methods such as the McCabe–Thiele method or the Fenske equation can be used. For a multi-component feed, simulation models are used for design, operation, and construction. Bubble-cap "trays" or "plates" are one type of physical device used to provide good contact between the upflowing vapor and the downflowing liquid inside an industrial fractionating column. Such trays are shown in Figures 4 and 5. The efficiency of a tray or plate is typically lower than that of a theoretical 100% efficient equilibrium stage. Hence, a fractionating column almost always needs more actual, physical plates than the required number of theoretical vapor–liquid equilibrium stages. In industrial uses, sometimes a packing material is used in the column instead of trays, especially when low pressure drops across the column are required, as when operating under vacuum. This packing material can either be random dumped packing such as Raschig rings or structured sheet metal. Liquids tend to wet the surface of the packing, and the vapors pass across this wetted surface, where mass transfer takes place. Differently shaped packings have different surface areas and void space between packings. Both of these factors affect packing performance.
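For a binary feed with a roughly constant relative volatility, the Fenske equation mentioned above gives the minimum number of equilibrium stages needed at total reflux, a common first sizing estimate alongside the McCabe–Thiele method. A minimal sketch; the function name and the numeric values (roughly benzene–toluene at atmospheric pressure) are illustrative assumptions, not data from this article:

```python
from math import log

def fenske_min_stages(x_dist, x_bot, alpha):
    """Minimum number of equilibrium stages at total reflux (Fenske).

    x_dist: mole fraction of the light component in the distillate
    x_bot:  mole fraction of the light component in the bottoms
    alpha:  average relative volatility of the light to the heavy key
    By the usual convention the count includes the reboiler as a stage.
    """
    return log((x_dist / (1 - x_dist)) * ((1 - x_bot) / x_bot)) / log(alpha)

# Illustrative numbers only: ~95% purity at both ends, alpha ~ 2.4.
print(round(fenske_min_stages(0.95, 0.05, 2.4), 1))  # about 6.7 stages
```

Because real trays fall short of 100% efficiency, as noted above, the actual tray count chosen would be larger than this theoretical minimum.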
See also Azeotropic distillation Batch distillation Continuous distillation Extractive distillation Laboratory glassware Steam distillation Theoretical plate Vacuum distillation Fractional distillation References External links Use of distillation columns in Oil & Gas More drawings of glassware including Vigreux columns Distillation Theory by Ivar J. Halvorsen and Sigurd Skogestad, Norwegian University of Science and Technology, Norway Distillation, An Introduction by Ming Tham, Newcastle University, UK Distillation by the Distillation Group, USA Distillation simulation software Fractional Distillation Explained for High School Students Distillation Chemical equipment Fractionation
Fractionating column
[ "Chemistry", "Engineering" ]
1,501
[ "Fractionation", "Separation processes", "Chemical equipment", "Distillation", "nan" ]
351,853
https://en.wikipedia.org/wiki/Dominated%20convergence%20theorem
In measure theory, Lebesgue's dominated convergence theorem gives a mild sufficient condition under which limits and integrals of a sequence of functions can be interchanged. More technically it says that if a sequence of functions is bounded in absolute value by an integrable function and is almost everywhere pointwise convergent to a function, then the sequence converges in L¹ to its pointwise limit, and in particular the integral of the limit is the limit of the integrals. Its power and utility are two of the primary theoretical advantages of Lebesgue integration over Riemann integration. In addition to its frequent appearance in mathematical analysis and partial differential equations, it is widely used in probability theory, since it gives a sufficient condition for the convergence of expected values of random variables.

Statement
Lebesgue's dominated convergence theorem. Let (fₙ) be a sequence of complex-valued measurable functions on a measure space (S, Σ, μ). Suppose that the sequence converges pointwise to a function f, i.e.

f(x) = lim_{n→∞} fₙ(x)

exists for every x ∈ S. Assume moreover that the sequence is dominated by some integrable function g in the sense that

|fₙ(x)| ≤ g(x)

for all points x ∈ S and all n in the index set. Then f and the fₙ are integrable (in the Lebesgue sense) and

lim_{n→∞} ∫_S fₙ dμ = ∫_S f dμ.

In fact, we have the stronger statement

lim_{n→∞} ∫_S |fₙ − f| dμ = 0.

Remark 1. The statement "g is integrable" means that the measurable function g is Lebesgue integrable; i.e. ∫_S |g| dμ < ∞. It implies the integrability of f, since |f| ≤ g.

Remark 2. The convergence of the sequence and domination by g can be relaxed to hold only μ-almost everywhere, i.e. except possibly on a measurable set Z of μ-measure 0. In fact we can modify the functions fₙ (hence their pointwise limit f) to be 0 on Z without changing the value of the integrals. (If we insist on e.g. defining f as the limit whenever it exists, we may end up with a non-measurable subset within Z where convergence is violated if the measure space is non complete, and so f might not be measurable. However, there is no harm in ignoring the limit inside the null set Z.) We can thus consider the fₙ and f as being defined except for a set of μ-measure 0.

Remark 3. If μ(S) < ∞, the condition that there is a dominating integrable function g can be relaxed to uniform integrability of the sequence (fₙ), see Vitali convergence theorem.

Remark 4. While f is Lebesgue integrable, it is not in general Riemann integrable. For example, order the rationals in [0, 1], and let fₙ be defined on [0, 1] to take the value 1 on the first n rationals and 0 otherwise. Then f is the Dirichlet function on [0, 1], which is not Riemann integrable but is Lebesgue integrable.

Remark 5. The stronger version of the dominated convergence theorem can be reformulated as: if a sequence of measurable complex functions (fₙ) is almost everywhere pointwise convergent to a function f and almost everywhere bounded in absolute value by an integrable function, then fₙ → f in the Banach space L¹(S, μ).

Proof
Without loss of generality, one can assume that f is real, because one can split f into its real and imaginary parts (remember that a sequence of complex numbers converges if and only if both its real and imaginary counterparts converge) and apply the triangle inequality at the end.

Lebesgue's dominated convergence theorem is a special case of the Fatou–Lebesgue theorem. Below, however, is a direct proof that uses Fatou's lemma as the essential tool.

Since f is the pointwise limit of the sequence (fₙ) of measurable functions that are dominated by g, it is also measurable and dominated by g, hence it is integrable. Furthermore, (these will be needed later),

|f − fₙ| ≤ |f| + |fₙ| ≤ 2g

for all n and

limsup_{n→∞} |f − fₙ| = 0.

The second of these is trivially true (by the very definition of f).
Using linearity and monotonicity of the Lebesgue integral,

| ∫_S f dμ − ∫_S fₙ dμ | = | ∫_S (f − fₙ) dμ | ≤ ∫_S |f − fₙ| dμ.

By the reverse Fatou lemma (it is here that we use the fact that |f − fₙ| is bounded above by an integrable function)

limsup_{n→∞} ∫_S |f − fₙ| dμ ≤ ∫_S limsup_{n→∞} |f − fₙ| dμ = 0,

which implies that the limit exists and vanishes, i.e.

lim_{n→∞} ∫_S |f − fₙ| dμ = 0.

Finally, since

lim_{n→∞} | ∫_S f dμ − ∫_S fₙ dμ | ≤ lim_{n→∞} ∫_S |f − fₙ| dμ = 0,

we have that

lim_{n→∞} ∫_S fₙ dμ = ∫_S f dμ.

The theorem now follows.

If the assumptions hold only μ-almost everywhere, then there exists a μ-null set N ∈ Σ such that the functions fₙ 1_{S\N} satisfy the assumptions everywhere on S. Then the function f(x) defined as the pointwise limit of fₙ(x) for x ∈ S \ N and by f(x) = 0 for x ∈ N, is measurable and is the pointwise limit of this modified function sequence. The values of these integrals are not influenced by these changes to the integrands on this μ-null set N, so the theorem continues to hold.

DCT holds even if fₙ converges to f in measure (finite measure) and the dominating function is non-negative almost everywhere.

Discussion of the assumptions
The assumption that the sequence is dominated by some integrable g cannot be dispensed with. This may be seen as follows: define fₙ(x) = n for x in the interval (0, 1/n] and fₙ(x) = 0 otherwise. Any g which dominates the sequence must also dominate the pointwise supremum h = supₙ fₙ. Observe that

∫₀¹ h(x) dx ≥ ∫_{1/m}^1 h(x) dx = Σ_{n=1}^{m−1} ∫_{(1/(n+1), 1/n]} h(x) dx ≥ Σ_{n=1}^{m−1} ∫_{(1/(n+1), 1/n]} n dx = Σ_{n=1}^{m−1} 1/(n+1) → ∞ as m → ∞

by the divergence of the harmonic series. Hence, the monotonicity of the Lebesgue integral tells us that there exists no integrable function which dominates the sequence on [0, 1]. A direct calculation shows that integration and pointwise limit do not commute for this sequence:

∫₀¹ lim_{n→∞} fₙ(x) dx = 0 ≠ 1 = lim_{n→∞} ∫₀¹ fₙ(x) dx,

because the pointwise limit of the sequence is the zero function. Note that the sequence (fₙ) is not even uniformly integrable, hence also the Vitali convergence theorem is not applicable.
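The failure above is easy to verify numerically. A minimal sketch (numpy assumed; the grid size is an arbitrary choice): each fₙ integrates to 1, while the pointwise limit integrates to 0, so no interchange of limit and integral is possible and no integrable dominating function can exist.

```python
import numpy as np

# f_n = n on (0, 1/n], 0 elsewhere.  Analytically each integral is
# n * (1/n) = 1, yet f_n(x) -> 0 for every fixed x in (0, 1].
dx = 1e-6
xs = np.arange(dx, 1.0 + dx / 2, dx)     # one million grid points on (0, 1]
for n in (10, 100, 1000):
    fn = np.where(xs <= 1.0 / n, float(n), 0.0)
    print(n, fn.sum() * dx)              # a Riemann sum: ~1.0 for every n
```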
Bounded convergence theorem
One corollary to the dominated convergence theorem is the bounded convergence theorem, which states that if (fₙ) is a sequence of uniformly bounded complex-valued measurable functions which converges pointwise on a bounded measure space (S, Σ, μ) (i.e. one in which μ(S) is finite) to a function f, then the limit f is an integrable function and

lim_{n→∞} ∫_S fₙ dμ = ∫_S f dμ.

Remark: The pointwise convergence and uniform boundedness of the sequence can be relaxed to hold only μ-almost everywhere, provided the measure space (S, Σ, μ) is complete or f is chosen as a measurable function which agrees μ-almost everywhere with the almost everywhere existing pointwise limit.

Proof
Since the sequence is uniformly bounded, there is a real number M such that |fₙ(x)| ≤ M for all x ∈ S and for all n. Define g(x) = M for all x ∈ S. Then the sequence is dominated by g. Furthermore, g is integrable since it is a constant function on a set of finite measure. Therefore, the result follows from the dominated convergence theorem. If the assumptions hold only μ-almost everywhere, then there exists a μ-null set N ∈ Σ such that the functions fₙ 1_{S\N} satisfy the assumptions everywhere on S.

Dominated convergence in Lp-spaces (corollary)
Let (Ω, 𝒜, μ) be a measure space, 1 ≤ p < ∞ a real number and (fₙ) a sequence of 𝒜-measurable functions fₙ : Ω → ℂ. Assume the sequence converges μ-almost everywhere to an 𝒜-measurable function f, and is dominated by a g ∈ Lᵖ (cf. Lp space), i.e., for every natural number n we have: |fₙ| ≤ g, μ-almost everywhere. Then all fₙ as well as f are in Lᵖ and the sequence (fₙ) converges to f in the sense of Lᵖ, i.e.:

lim_{n→∞} ‖fₙ − f‖ₚ = lim_{n→∞} ( ∫_Ω |fₙ − f|ᵖ dμ )^{1/p} = 0.

Idea of the proof: Apply the original theorem to the function sequence hₙ = |fₙ − f|ᵖ with the dominating function (2g)ᵖ.

Extensions
The dominated convergence theorem applies also to measurable functions with values in a Banach space, with the dominating function still being non-negative and integrable as above. The assumption of convergence almost everywhere can be weakened to require only convergence in measure. The dominated convergence theorem applies also to conditional expectations. See also Convergence of random variables, Convergence in mean Monotone convergence theorem (does not require domination by an integrable function but assumes monotonicity of the sequence instead) Scheffé's lemma Uniform integrability Vitali convergence theorem (a generalization of Lebesgue's dominated convergence theorem) Notes References Theorems in real analysis Theorems in measure theory Probability theorems Articles containing proofs
Dominated convergence theorem
[ "Mathematics" ]
1,610
[ "Theorems in mathematical analysis", "Theorems in measure theory", "Theorems in real analysis", "Theorems in probability theory", "Mathematical problems", "Articles containing proofs", "Mathematical theorems" ]
351,882
https://en.wikipedia.org/wiki/Automotive%20engineering
Automotive engineering, along with aerospace engineering and naval architecture, is a branch of vehicle engineering, incorporating elements of mechanical, electrical, electronic, software, and safety engineering as applied to the design, manufacture and operation of motorcycles, automobiles, and trucks and their respective engineering subsystems. It also includes the modification of vehicles. The manufacturing domain, which deals with the creation and assembly of all the parts of an automobile, is also included. The automotive engineering field is research intensive and involves direct application of mathematical models and formulas. The study of automotive engineering is to design, develop, fabricate, and test vehicles or vehicle components from the concept stage to the production stage. Production, development, and manufacturing are the three major functions in this field. Disciplines Automobile engineering Automobile engineering is a branch of engineering concerned with the manufacture, design, and mechanical systems of automobiles, as well as their operation. It is an introduction to vehicle engineering, which deals with motorcycles, cars, buses, trucks, etc., and it draws on elements of mechanical, electronic, software and safety engineering. Some of the engineering attributes and disciplines that are of importance to the automotive engineer include: Safety engineering: Safety engineering is the assessment of various crash scenarios and their impact on the vehicle occupants. These are tested against very stringent governmental regulations. Some of these requirements include: seat belt and air bag functionality testing, front and side-impact testing, and tests of rollover resistance. Assessments are done with various methods and tools, including computer crash simulation (typically finite element analysis), crash-test dummy, and partial system sled and full vehicle crashes. Fuel economy/emissions: Fuel economy is the measured fuel efficiency of the vehicle in miles per gallon or kilometers per liter. Emissions-testing covers the measurement of vehicle emissions, including hydrocarbons, nitrogen oxides (NOx), carbon monoxide (CO), carbon dioxide (CO2), and evaporative emissions. NVH engineering (noise, vibration, and harshness): NVH involves customer feedback (both tactile [felt] and audible [heard]) concerning a vehicle. While a sound can be interpreted as a rattle, squeal, or hum, a tactile response can be seat vibration or a buzz in the steering wheel. This feedback is generated by components either rubbing, vibrating, or rotating. NVH response can be classified in various ways: powertrain NVH, road noise, wind noise, component noise, and squeak and rattle. Note that there are both good and bad NVH qualities. The NVH engineer works to either eliminate bad NVH or change the "bad NVH" to good (i.e., exhaust tones). Vehicle electronics: Automotive electronics is an increasingly important aspect of automotive engineering. Modern vehicles employ dozens of electronic systems. These systems are responsible for operational controls such as the throttle, brake and steering controls; as well as many comfort-and-convenience systems such as the HVAC, infotainment, and lighting systems. It would not be possible for automobiles to meet modern safety and fuel-economy requirements without electronic controls. Performance: Performance is a measurable and testable value of a vehicle's ability to perform in various conditions.
Performance can be assessed across a wide variety of tasks, but it generally covers how quickly a car can accelerate (e.g. standing start 1/4 mile elapsed time, 0–60 mph, etc.), its top speed, how short and quickly a car can come to a complete stop from a set speed (e.g. 70-0 mph), how much g-force a car can generate without losing grip, recorded lap-times, cornering speed, brake fade, etc. Performance can also reflect the amount of control in inclement weather (snow, ice, rain). Shift quality: Shift quality is the driver's perception of the vehicle's response to an automatic transmission shift event. This is influenced by the powertrain (internal combustion engine, transmission), and the vehicle (driveline, suspension, engine and powertrain mounts, etc.). Shift feel is both a tactile (felt) and audible (heard) response of the vehicle. Shift quality is experienced as various events: transmission shifts are felt as an upshift at acceleration (1–2), or a downshift maneuver in passing (4–2). Shift engagements of the vehicle are also evaluated, as in Park to Reverse, etc. Durability / corrosion engineering: Durability and corrosion engineering is the evaluation testing of a vehicle for its useful life. Tests include mileage accumulation, severe driving conditions, and corrosive salt baths. Drivability: Drivability is the vehicle's response to general driving conditions. Cold starts and stalls, RPM dips, idle response, launch hesitations and stumbles, and performance levels all contribute to the overall drivability of any given vehicle. Cost: The cost of a vehicle program is typically split into the effect on the variable cost of the vehicle, and the up-front tooling and fixed costs associated with developing the vehicle. There are also costs associated with warranty reductions and marketing. Program timing: To some extent programs are timed with respect to the market, and also to the production-schedules of assembly plants. Any new part in the design must support the development and manufacturing schedule of the model. Design for manufacturability (DFM): DFM refers to designing vehicular components in such a way that they are not only feasible to manufacture, but also cost-efficient to produce, while resulting in acceptable quality that meets design specifications and engineering tolerances. This requires coordination between the design engineers and the assembly/manufacturing teams. Quality management: Quality control is an important factor within the production process, as high quality is needed to meet customer requirements and to avoid expensive recall campaigns. The complexity of components involved in the production process requires a combination of different tools and techniques for quality control. Therefore, the International Automotive Task Force (IATF), a group of the world's leading manufacturers and trade organizations, developed the standard ISO/TS 16949. This standard defines the design, development, production, and (when relevant) installation and service requirements. Furthermore, it combines the principles of ISO 9001 with aspects of various regional and national automotive standards such as AVSQ (Italy), EAQF (France), VDA6 (Germany) and QS-9000 (USA). In order to further minimize risks related to product failures and liability claims for automotive electric and electronic systems, the quality discipline of functional safety according to ISO 26262 is applied.
Since the 1950s, the comprehensive business approach total quality management (TQM) has operated to continuously improve the production process of automotive products and components. Some of the companies that have implemented TQM include Ford Motor Company, Motorola and Toyota Motor Company. Job functions Development engineer A development engineer has the responsibility for coordinating delivery of the engineering attributes of a complete automobile (bus, car, truck, van, SUV, motorcycle etc.) as dictated by the automobile manufacturer, governmental regulations, and the customer who buys the product. Much like the systems engineer, the development engineer is concerned with the interactions of all systems in the complete automobile. While there are multiple components and systems in an automobile that have to function as designed, they must also work in harmony with the complete automobile. As an example, the brake system's main function is to provide braking functionality to the automobile. Along with this, it must also provide an acceptable level of: pedal feel (spongy, stiff), brake system "noise" (squeal, shudder, etc.), and interaction with the ABS (anti-lock braking system). Another aspect of the development engineer's job is a trade-off process required to deliver all of the automobile attributes at a certain acceptable level. An example of this is the trade-off between engine performance and fuel economy. While some customers are looking for maximum power from their engine, the automobile is still required to deliver an acceptable level of fuel economy. From the engine's perspective, these are opposing requirements. Engine performance is looking for maximum displacement (bigger, more power), while fuel economy is looking for a smaller displacement engine (ex: 1.4 L vs. 5.4 L). The engine size, however, is not the only contributing factor to fuel economy and automobile performance. Other attributes that involve trade-offs include: automobile weight, aerodynamic drag, transmission gearing, emission control devices, handling/roadholding, ride quality, and tires. The development engineer is also responsible for organizing automobile level testing, validation, and certification. Components and systems are designed and tested individually by the product engineer. The final evaluation is to be conducted at the automobile level to evaluate system-to-system interactions. As an example, the audio system (radio) needs to be evaluated at the automobile level. Interaction with other electronic components can cause interference. Heat dissipation of the system and ergonomic placement of the controls need to be evaluated. Sound quality in all seating positions needs to be provided at acceptable levels.
Design for manufacturability in the automotive world is crucial to ensuring that whatever design is developed in the research and development stage can actually be mass-produced. Once the design is established, the manufacturing engineers take over. They design the machinery and tooling necessary to build the automotive components or vehicle and establish the methods for mass-producing the product. It is the manufacturing engineer's job to increase the efficiency of the automotive plant and to implement lean manufacturing techniques such as Six Sigma and kaizen. Other automotive engineering roles Other automotive engineers include those listed below: Aerodynamics engineers will often give guidance to the styling studio so that the shapes they design are aerodynamic, as well as attractive. Body engineers will also let the studio know if it is feasible to make the panels for their designs. Change control engineers make sure that all of the design and manufacturing changes that occur are organized, managed and implemented. NVH engineers perform sound and vibration testing to prevent loud cabin noises and detectable vibrations, and to improve the sound quality while the vehicle is on the road. The modern automotive product engineering process Studies indicate that a substantial part of the modern vehicle's value comes from intelligent systems, and that these represent most of the current automotive innovation. To facilitate this, the modern automotive engineering process has to handle an increased use of mechatronics. Configuration and performance optimization, system integration, control, component, subsystem and system-level validation of the intelligent systems must become an intrinsic part of the standard vehicle engineering process, just as this is the case for the structural, vibro-acoustic and kinematic design. This requires a vehicle development process that is typically highly simulation-driven. The V-approach One way to effectively deal with the inherent multi-physics and the control systems development that is involved when including intelligent systems, is to adopt the V-Model approach to systems development, as has been widely used in the automotive industry for twenty years or more. In this V-approach, system-level requirements are propagated down the V via subsystems to component design, and the system performance is validated at increasing integration levels. Engineering of mechatronic systems requires the application of two interconnected "V-cycles": one focusing on the multi-physics system engineering (like the mechanical and electrical components of an electrically powered steering system, including sensors and actuators); and the other focusing on the controls engineering, the control logic, the software and realization of the control hardware and embedded software. References Automotive engineering
Automotive engineering
[ "Engineering" ]
2,477
[ "Automotive engineering", "Mechanical engineering by discipline" ]
351,887
https://en.wikipedia.org/wiki/Friendly%20artificial%20intelligence
Friendly artificial intelligence (friendly AI or FAI) is hypothetical artificial general intelligence (AGI) that would have a positive (benign) effect on humanity or at least align with human interests such as fostering the improvement of the human species. It is a part of the ethics of artificial intelligence and is closely related to machine ethics. While machine ethics is concerned with how an artificially intelligent agent should behave, friendly artificial intelligence research is focused on how to practically bring about this behavior and ensuring it is adequately constrained. Etymology and usage The term was coined by Eliezer Yudkowsky, who is best known for popularizing the idea, to discuss superintelligent artificial agents that reliably implement human values. Stuart J. Russell and Peter Norvig's leading artificial intelligence textbook, Artificial Intelligence: A Modern Approach, describes the idea: Yudkowsky (2008) goes into more detail about how to design a Friendly AI. He asserts that friendliness (a desire not to harm humans) should be designed in from the start, but that the designers should recognize both that their own designs may be flawed, and that the robot will learn and evolve over time. Thus the challenge is one of mechanism design—to define a mechanism for evolving AI systems under a system of checks and balances, and to give the systems utility functions that will remain friendly in the face of such changes. "Friendly" is used in this context as technical terminology, and picks out agents that are safe and useful, not necessarily ones that are "friendly" in the colloquial sense. The concept is primarily invoked in the context of discussions of recursively self-improving artificial agents that rapidly explode in intelligence, on the grounds that this hypothetical technology would have a large, rapid, and difficult-to-control impact on human society. Risks of unfriendly AI The roots of concern about artificial intelligence are very old. Kevin LaGrandeur showed that the dangers specific to AI can be seen in ancient literature concerning artificial humanoid servants such as the golem, or the proto-robots of Gerbert of Aurillac and Roger Bacon. In those stories, the extreme intelligence and power of these humanoid creations clash with their status as slaves (which by nature are seen as sub-human), and cause disastrous conflict. By 1942 these themes prompted Isaac Asimov to create the "Three Laws of Robotics"—principles hard-wired into all the robots in his fiction, intended to prevent them from turning on their creators, or allowing them to come to harm. In modern times as the prospect of superintelligent AI looms nearer, philosopher Nick Bostrom has said that superintelligent AI systems with goals that are not aligned with human ethics are intrinsically dangerous unless extreme measures are taken to ensure the safety of humanity. He put it this way: Basically we should assume that a 'superintelligence' would be able to achieve whatever goals it has. Therefore, it is extremely important that the goals we endow it with, and its entire motivation system, is 'human friendly.' In 2008, Eliezer Yudkowsky called for the creation of "friendly AI" to mitigate existential risk from advanced artificial intelligence. He explains: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." 
Steve Omohundro says that a sufficiently advanced AI system will, unless explicitly counteracted, exhibit a number of basic "drives", such as resource acquisition, self-preservation, and continuous self-improvement, because of the intrinsic nature of any goal-driven systems and that these drives will, "without special precautions", cause the AI to exhibit undesired behavior. Alexander Wissner-Gross says that AIs driven to maximize their future freedom of action (or causal path entropy) might be considered friendly if their planning horizon is longer than a certain threshold, and unfriendly if their planning horizon is shorter than that threshold. Luke Muehlhauser, writing for the Machine Intelligence Research Institute, recommends that machine ethics researchers adopt what Bruce Schneier has called the "security mindset": Rather than thinking about how a system will work, imagine how it could fail. For instance, he suggests even an AI that only makes accurate predictions and communicates via a text interface might cause unintended harm. In 2014, Luke Muehlhauser and Nick Bostrom underlined the need for 'friendly AI'; nonetheless, the difficulties in designing a 'friendly' superintelligence, for instance via programming counterfactual moral thinking, are considerable. Coherent extrapolated volition Yudkowsky advances the Coherent Extrapolated Volition (CEV) model. According to him, our coherent extrapolated volition is "our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted". Rather than a Friendly AI being designed directly by human programmers, it is to be designed by a "seed AI" programmed to first study human nature and then produce the AI that humanity would want, given sufficient time and insight, to arrive at a satisfactory answer. The appeal to an objective through contingent human nature (perhaps expressed, for mathematical purposes, in the form of a utility function or other decision-theoretic formalism), as providing the ultimate criterion of "Friendliness", is an answer to the meta-ethical problem of defining an objective morality; extrapolated volition is intended to be what humanity objectively would want, all things considered, but it can only be defined relative to the psychological and cognitive qualities of present-day, unextrapolated humanity. Other approaches Steve Omohundro has proposed a "scaffolding" approach to AI safety, in which one provably safe AI generation helps build the next provably safe generation. Seth Baum argues that the development of safe, socially beneficial artificial intelligence or artificial general intelligence is a function of the social psychology of AI research communities and so can be constrained by extrinsic measures and motivated by intrinsic measures. Intrinsic motivations can be strengthened when messages resonate with AI developers; Baum argues that, in contrast, "existing messages about beneficial AI are not always framed well". Baum advocates for "cooperative relationships, and positive framing of AI researchers" and cautions against characterizing AI researchers as "not want(ing) to pursue beneficial designs". In his book Human Compatible, AI researcher Stuart J. Russell lists three principles to guide the development of beneficial machines. 
Public policy James Barrat, author of Our Final Invention, suggested that "a public-private partnership has to be created to bring A.I.-makers together to share ideas about security—something like the International Atomic Energy Agency, but in partnership with corporations." He urges AI researchers to convene a meeting similar to the Asilomar Conference on Recombinant DNA, which discussed risks of biotechnology. John McGinnis encourages governments to accelerate friendly AI research. Because the goalposts of friendly AI are not necessarily evident, he suggests a model similar to the National Institutes of Health, where "Peer review panels of computer and cognitive scientists would sift through projects and choose those that are designed both to advance AI and assure that such advances would be accompanied by appropriate safeguards." McGinnis feels that peer review is better "than regulation to address technical issues that are not possible to capture through bureaucratic mandates". McGinnis notes that his proposal stands in contrast to that of the Machine Intelligence Research Institute, which generally aims to avoid government involvement in friendly AI. Criticism Some critics believe that both human-level AI and superintelligence are unlikely and that, therefore, friendly AI is unlikely. Writing in The Guardian, Alan Winfield compares human-level artificial intelligence with faster-than-light travel in terms of difficulty, and states that while we need to be "cautious and prepared" given the stakes involved, we "don't need to be obsessing" about the risks of superintelligence. Boyles and Joaquin, on the other hand, argue that Luke Muehlhauser and Nick Bostrom's proposal to create friendly AIs appears to be bleak. This is because Muehlhauser and Bostrom seem to hold the idea that intelligent machines could be programmed to think counterfactually about the moral values that human beings would have had. In an article in AI & Society, Boyles and Joaquin maintain that such AIs would not be that friendly considering the following: first, the infinite number of antecedent counterfactual conditions that would have to be programmed into a machine; second, the difficulty of cashing out the set of moral values—that is, values more ideal than the ones human beings possess at present; and third, the apparent disconnect between the counterfactual antecedents and the ideal value consequent. Some philosophers claim that any truly "rational" agent, whether artificial or human, will naturally be benevolent; in this view, deliberate safeguards designed to produce a friendly AI could be unnecessary or even harmful. Other critics question whether artificial intelligence can be friendly. Adam Keiper and Ari N. Schulman, editors of the technology journal The New Atlantis, say that it will be impossible ever to guarantee "friendly" behavior in AIs because problems of ethical complexity will not yield to software advances or increases in computing power.
They write that the criteria upon which friendly AI theories are based work "only when one has not only great powers of prediction about the likelihood of myriad possible outcomes, but certainty and consensus on how one values the different outcomes." In addition, the inner workings of advanced AI systems may be complex and difficult to interpret, leading to concerns about transparency and accountability. See also Affective computing AI alignment AI effect AI takeover Ambient intelligence Applications of artificial intelligence Artificial intelligence arms race Artificial intelligence systems integration Autonomous agent Embodied agent Emotion recognition Existential risk from artificial general intelligence Hallucination (artificial intelligence) Hybrid intelligent system Intelligence explosion Intelligent agent Intelligent control Machine ethics Machine Intelligence Research Institute OpenAI Regulation of algorithms Roko's basilisk Sentiment analysis Singularitarianism – a moral philosophy advocated by proponents of Friendly AI Suffering risks Technological singularity Three Laws of Robotics References Further reading Yudkowsky, E. (2008). Artificial Intelligence as a Positive and Negative Factor in Global Risk. In Global Catastrophic Risks, Oxford University Press. Discusses artificial intelligence from the perspective of existential risk. In particular, Sections 1–4 give background to the definition of Friendly AI in Section 5. Section 6 gives two classes of mistakes (technical and philosophical) which would both lead to the accidental creation of non-Friendly AIs. Sections 7–13 discuss further related issues. Omohundro, S. (2008). The Basic AI Drives. Appeared in AGI-08 – Proceedings of the First Conference on Artificial General Intelligence. Mason, C. (2008). Human-Level AI Requires Compassionate Intelligence. Appears in AAAI 2008 Workshop on Meta-Reasoning: Thinking About Thinking. Froding, B. and Peterson, M. (2021). Friendly AI. Ethics and Information Technology, Vol. 23, pp. 207–214. External links Ethical Issues in Advanced Artificial Intelligence by Nick Bostrom What is Friendly AI? — A brief description of Friendly AI by the Machine Intelligence Research Institute. Creating Friendly AI 1.0: The Analysis and Design of Benevolent Goal Architectures — A near book-length description from the MIRI Critique of the MIRI Guidelines on Friendly AI — by Bill Hibbard Commentary on MIRI's Guidelines on Friendly AI — by Peter Voss. The Problem with 'Friendly' Artificial Intelligence — On the motives for and impossibility of FAI; by Adam Keiper and Ari N. Schulman. Philosophy of artificial intelligence Singularitarianism Transhumanism Affective computing
Friendly artificial intelligence
[ "Technology", "Engineering", "Biology" ]
2,563
[ "Genetic engineering", "Transhumanism", "Ethics of science and technology" ]
351,899
https://en.wikipedia.org/wiki/VHF%20omnidirectional%20range
The Very High Frequency Omnidirectional Range (VOR) is a type of short-range VHF radio navigation system for aircraft, enabling an aircraft with a VOR receiver to determine its azimuth (also called the radial), referenced to magnetic north, to or from a fixed VOR ground radio beacon. VOR, together with the first DME system of 1950 ("DME(1950)", so designated because it differs from today's DME/N), which provides the slant-range distance, was developed in the United States as part of a U.S. civil/military program for aeronautical navigation aids begun in 1945. Deployment of VOR and DME(1950) by the U.S. CAA (Civil Aeronautics Administration) began in 1949. ICAO standardized VOR and DME(1950) in 1950 in ICAO Annex 10, ed. 1. Frequencies for the use of VOR are standardized in the very high frequency (VHF) band between 108.00 and 117.95 MHz (Annex 10, Chapter 3, Table A). To improve the azimuth accuracy of VOR even under difficult siting conditions, the Doppler VOR (DVOR) was developed in the 1960s. Although VOR is, according to ICAO rules, a primary-means navigation system for commercial and general aviation, (D)VORs are gradually being decommissioned and replaced by DME-DME RNAV (area navigation) and satellite-based navigation systems such as GPS in the early 21st century. In 2000 there were about 3,000 VOR stations operating around the world, including 1,033 in the US, but by 2013 the number in the US had been reduced to 967. The United States is decommissioning approximately half of its VOR stations and other legacy navigation aids as part of a move to performance-based navigation, while still retaining a "Minimum Operational Network" of VOR stations as a backup to GPS. In 2015, the UK planned to reduce the number of stations from 44 to 19 by 2020. A VOR beacon radiates, via two or more antennas, an amplitude-modulated signal and a frequency-modulated subcarrier. By comparing the fixed 30 Hz reference signal with the azimuth-dependent 30 Hz signal, the receiver detects the azimuth from the aircraft to the (D)VOR: the phase difference is indicative of the bearing from the (D)VOR station to the receiver relative to magnetic north. This line of position is called the VOR "radial". While providing the same signal over the air at the VOR receiver antennas, a DVOR uses the Doppler shift to modulate the azimuth-dependent 30 Hz signal in space, by continuously switching the signal among about 25 antenna pairs that form a circle around the central 30 Hz reference antenna. The intersection of radials from two different VOR stations can be used to fix the position of the aircraft, as in earlier radio direction finding (RDF) systems. VOR stations are short-range navigation aids limited to the radio line of sight (RLOS) between the transmitter and the receiver in an aircraft; the achievable Designated Operational Coverage (DOC) depends on the site elevation of the VOR and the altitude of the aircraft (Annex 10, Att. C, Fig. C-13). The prerequisite is that the EIRP provides, in spite of losses (e.g. due to propagation and antenna-pattern lobing), a sufficiently strong signal at the aircraft's VOR antenna for the VOR receiver to process it successfully. Each (D)VOR station broadcasts a VHF composite signal, including the navigation and reference signals mentioned above, a station identifier, and optional additional voice (Annex 10, §3.3.5). The station's identifier is typically a three-letter string in Morse code. Although defined in Annex 10 (§3.3.6), the voice channel is seldom used today, e.g. for recorded advisories like ATIS.
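The phase-comparison principle just described can be sketched in a few lines of code. The following is a minimal illustration, not receiver firmware: it assumes the two 30 Hz tones have already been cleanly demodulated into sample streams, and all names and parameters are invented for the example.

```python
import math

# Minimal sketch: recover the VOR radial from the phase difference between
# the demodulated 30 Hz reference and variable tones. Idealized and
# noise-free; a real receiver does this with phase detectors or a PLL.
SAMPLE_RATE = 3600          # samples/s (120 samples per 30 Hz cycle)
TONE_HZ = 30.0

def radial_from_tones(reference, variable):
    """Phase lag of `variable` behind `reference`, in degrees 0..360.

    Each argument is a list of samples of a 30 Hz sine wave; the lag
    equals the radial on which the receiver lies."""
    def phase(samples):
        # Correlate against quadrature 30 Hz references to get the phase.
        i = sum(s * math.cos(2 * math.pi * TONE_HZ * n / SAMPLE_RATE)
                for n, s in enumerate(samples))
        q = sum(s * math.sin(2 * math.pi * TONE_HZ * n / SAMPLE_RATE)
                for n, s in enumerate(samples))
        return math.atan2(q, i)
    lag = phase(variable) - phase(reference)
    return math.degrees(lag) % 360.0

# Synthetic test: a station observed from its 135-degree radial.
n = SAMPLE_RATE  # one second of samples
ref = [math.sin(2 * math.pi * TONE_HZ * t / SAMPLE_RATE) for t in range(n)]
var = [math.sin(2 * math.pi * TONE_HZ * t / SAMPLE_RATE
                - math.radians(135)) for t in range(n)]
print(radial_from_tones(ref, var))   # ~135.0
```

A real receiver performs the same comparison continuously and must additionally cope with noise, signal fading, and the cone of confusion over the station.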
A VORTAC is a radio-based navigational aid for aircraft pilots consisting of a co-located VHF omnidirectional range (VOR) and a tactical air navigation system (TACAN) beacon. Both types of beacons provide pilots azimuth information, but the VOR system is generally used by civil aircraft and the TACAN system by military aircraft. However, the TACAN distance measuring equipment is also used for civil purposes because civil DME equipment is built to match the military DME specifications. Most VOR installations in the United States are VORTACs. The system was designed and developed by the Cardion Corporation; the Research, Development, Test, and Evaluation (RDT&E) contract was awarded 28 December 1981. Description History VOR was developed from the earlier Visual-Aural Radio Range (VAR) system as part of a U.S. civil/military program for aeronautical navigation aids. In 1949 the U.S. CAA (Civil Aeronautics Administration) put into operation VOR, giving the azimuth/bearing of an aircraft to or from a VOR installation, together with UHF DME(1950), the first ICAO distance measuring equipment standard. In 1950 ICAO standardized VOR and DME(1950) in Annex 10, ed. 1. The VOR was designed to provide 360 courses to and from the station, selectable by the pilot. Early vacuum-tube transmitters with mechanically rotated antennas were widely installed in the 1950s, and began to be replaced with fully solid-state units in the early 1960s; DVORs were gradually introduced. VORs became the major radio navigation system in the 1960s, when they took over from the older radio beacon and four-course (low/medium frequency range) system. Some of the older range stations survived, with the four-course directional features removed, as non-directional low- or medium-frequency radiobeacons (NDBs). A worldwide land-based network of "air highways", known in the US as Victor airways (below 18,000 feet) and "jet routes" (at and above 18,000 feet), was set up linking VORs. An aircraft can follow a specific path from station to station by tuning into the successive stations on the VOR receiver, and then either following the desired course on a Radio Magnetic Indicator, or setting it on a course deviation indicator (CDI) or a horizontal situation indicator (HSI, a more sophisticated version of the VOR indicator) and keeping a course pointer centered on the display. As of 2005, due to advances in technology, many airports were replacing VOR and NDB approaches with RNAV (GNSS) approach procedures; however, receiver and data-update costs are still significant enough that many small general aviation aircraft are not equipped with GNSS equipment certified for primary navigation or approaches. Features VOR signals provide considerably greater accuracy and reliability than NDBs due to a combination of factors. Most significant is that VOR provides a bearing from the station to the aircraft which does not vary with wind or orientation of the aircraft. VHF radio is less vulnerable to diffraction (course bending) around terrain features and coastlines, and phase encoding suffers less interference from thunderstorms. VOR signals offer a predictable accuracy (2 sigma) at 2 NM from a pair of VOR beacons; unaugmented GPS, by comparison, is accurate to better than 13 meters 95% of the time. VOR stations, being VHF, operate on "line of sight": if, on a perfectly clear day, the transmitter cannot be seen from the receiver antenna, or vice versa, the signal will be either imperceptible or unusable. This limits VOR (and DME) range to the horizon—or closer if mountains intervene. Although modern solid-state transmitting equipment requires much less maintenance than the older units, an extensive network of stations, needed to provide reasonable coverage along main air routes, is a significant cost in operating current airway systems. Typically, a VOR station's identifier represents a nearby town, city or airport. For example, the VOR station located on the grounds of John F. Kennedy International Airport has the identifier JFK. Operation VORs are assigned radio channels between 108.0 MHz and 117.95 MHz (with 50 kHz spacing); this is in the very high frequency (VHF) range. The first 4 MHz is shared with the instrument landing system (ILS) band. In the United States, frequencies within the pass band of 108.00 to 111.95 MHz which have an even 100 kHz first digit after the decimal point (108.00, 108.05, 108.20, 108.25, and so on) are reserved for VOR, while frequencies within that band with an odd 100 kHz first digit after the decimal point (108.10, 108.15, 108.30, 108.35, and so on) are reserved for ILS.
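The US allocation rule just described lends itself to a simple check. The sketch below is illustrative only (the function name is invented) and classifies a 50 kHz channel as VOR or ILS:

```python
# Illustrative sketch of the US channel-allocation rule described above:
# in 108.00-111.95 MHz, an even 100 kHz digit means VOR, odd means ILS.
def classify(freq_mhz: float) -> str:
    khz = round(freq_mhz * 1000)
    if not 108_000 <= khz <= 117_950 or khz % 50 != 0:
        return "not a VOR/ILS channel"
    if khz > 111_950:
        return "VOR"                      # upper band is VOR-only
    tenths_digit = (khz // 100) % 10      # first digit after the decimal point
    return "VOR" if tenths_digit % 2 == 0 else "ILS"

for f in (108.00, 108.10, 108.25, 110.30, 113.80):
    print(f, classify(f))
# 108.0 VOR, 108.1 ILS, 108.25 VOR, 110.3 ILS, 113.8 VOR
```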
The VOR encodes azimuth (direction from the station) as the phase relationship between a reference signal and a variable signal. One of them is amplitude modulated and one is frequency modulated. On conventional VORs (CVOR), the 30 Hz reference signal is frequency modulated (FM) on a 9,960 Hz subcarrier. On these VORs, the amplitude modulation is achieved by rotating a slightly directional antenna, exactly in phase with the reference signal, at 30 revolutions per second. Modern installations are Doppler VORs (DVOR), which use a circular array of typically 48 omnidirectional antennas and no moving parts. The active antenna position is moved around the circular array electronically to create a Doppler effect, resulting in frequency modulation; the amplitude modulation is created by making the transmission power of antennas at, for example, the north position lower than at the south position. The roles of amplitude and frequency modulation are thus swapped in this type of VOR. Decoding in the receiving aircraft happens in the same way for both types of VOR: the AM and FM 30 Hz components are detected and then compared to determine the phase angle between them. The VOR signal also contains a modulated continuous wave (MCW) 7 wpm Morse code station identifier, and usually contains an amplitude modulated (AM) voice channel. This information is then fed over an analog or digital interface to one of four common types of indicators: A typical light-airplane VOR indicator is sometimes called an "omni-bearing indicator" (OBI). It consists of a knob to rotate an "omni bearing selector" (OBS), the OBS scale around the outside of the instrument, and a vertical course deviation indicator (CDI) pointer. The OBS is used to set the desired course, and the CDI is centered when the aircraft is on the selected course, or gives left/right steering commands to return to the course. An "ambiguity" (TO-FROM) indicator shows whether following the selected course would take the aircraft to, or away from, the station. The indicator may also include a glideslope pointer for use when receiving full ILS signals. A radio magnetic indicator (RMI) features a course arrow superimposed on a rotating card that shows the aircraft's current heading at the top of the dial.
The "tail" of the course arrow points at the current radial from the station and the "head" of the arrow points at the reciprocal (180° different) course to the station. An RMI may present information from more than one VOR or ADF receiver simultaneously. A horizontal situation indicator (HSI), developed subsequently to the RMI, is considerably more expensive and complex than a standard VOR indicator but combines heading information with the navigation display in a much more user-friendly format, approximating a simplified moving map. An area navigation (RNAV) system is an onboard computer with display and may include an up-to-date navigation database. At least one VOR/DME station is required for the computer to plot aircraft position on a moving map or to display course deviation and distance relative to a waypoint (virtual VOR station). RNAV type systems have also been made to use two VORs or two DMEs to define a waypoint; these are typically referred to by other names such as "distance computing equipment" for the dual-VOR type or "DME-DME" for the type using more than one DME signal. In many cases, VOR stations have co-located distance measuring equipment (DME) or military Tactical Air Navigation (TACAN) – the latter includes both the DME distance feature and a separate TACAN azimuth feature that provides military pilots data similar to the civilian VOR. A co-located VOR and TACAN beacon is called a VORTAC. A VOR co-located only with DME is called a VOR-DME. A VOR radial with a DME distance allows a one-station position fix. Both VOR-DMEs and TACANs share the same DME system. VORTACs and VOR-DMEs use a standardized scheme of VOR frequency to TACAN/DME channel pairing so that a specific VOR frequency is always paired with a specific co-located TACAN or DME channel. On civilian equipment, the VHF frequency is tuned and the appropriate TACAN/DME channel is automatically selected. While the operating principles are different, VORs share some characteristics with the localizer portion of ILS and the same antenna, receiving equipment and indicator is used in the cockpit for both. When a VOR station is selected, the OBS is functional and allows the pilot to select the desired radial to use for navigation. When a localizer frequency is selected, the OBS is not functional and the indicator is driven by a localizer converter, typically built into the receiver or indicator. Service volumes A VOR station serves a volume of airspace called its Service Volume. Some VORs have a relatively small geographic area protected from interference by other stations on the same frequency—called "terminal" or T-VORs. Other stations may have protection out to or more. It is popularly thought that there is a standard difference in power output between T-VORs and other stations, but in fact the stations' power output is set to provide adequate signal strength in the specific site's service volume. In the United States, there are three standard service volumes (SSV): terminal, low, and high (standard service volumes do not apply to published instrument flight rules (IFR) routes). Additionally, two new service volumes – "VOR low" and "VOR high" – were added in 2021, providing expanded coverage above 5,000 feet AGL. This allows aircraft to continue to receive off-route VOR signals despite the reduced number of VOR ground stations provided by the VOR Minimum Operational Network. VORs, airways and the en route structure VOR and the older NDB stations were traditionally used as intersections along airways. 
A typical airway will hop from station to station in straight lines. When flying in a commercial airliner, an observer will notice that the aircraft flies in straight lines occasionally broken by a turn to a new course. These turns are often made as the aircraft passes over a VOR station or at an intersection in the air defined by one or more VORs. Navigational reference points can also be defined by the point at which two radials from different VOR stations intersect, or by a VOR radial and a DME distance. This is the basic form of RNAV and allows navigation to points located away from VOR stations. As RNAV systems have become more common, in particular those based on GPS, more and more airways have been defined by such points, removing the need for some of the expensive ground-based VORs. In many countries there are two separate systems of airways at lower and higher levels: the lower airways (known in the US as Victor airways) and upper air routes (known in the US as jet routes). Most aircraft equipped for instrument flight (IFR) have at least two VOR receivers. As well as providing a backup to the primary receiver, the second receiver allows the pilot to easily follow a radial to or from one VOR station while watching the second receiver to see when a certain radial from another VOR station is crossed, allowing the aircraft's exact position at that moment to be determined, and giving the pilot the option of changing to the new radial if they wish. Future Space-based Global Navigation Satellite Systems (GNSS) such as the Global Positioning System (GPS) are increasingly replacing VOR and other ground-based systems. In 2016, GNSS was mandated as the primary means of navigation for IFR aircraft in Australia. GNSS systems have a lower transmitter cost per customer and provide distance and altitude data. Future satellite navigation systems, such as the European Union's Galileo, and GPS augmentation systems are developing techniques to eventually equal or exceed VOR accuracy. However, low VOR receiver cost, a broad installed base, and commonality of receiver equipment with ILS are likely to extend VOR dominance in aircraft until space receiver cost falls to a comparable level. As of 2008 in the United States, GPS-based approaches outnumbered VOR-based approaches, but VOR-equipped IFR aircraft still outnumbered GPS-equipped IFR aircraft. There is some concern that GNSS navigation is subject to interference or sabotage, leading in many countries to the retention of VOR stations for use as a backup. The VOR signal has the advantage of static mapping to local terrain. The US FAA plans to decommission roughly half of the 967 VOR stations in the US by 2020, retaining a "Minimum Operational Network" to provide coverage to all aircraft more than 5,000 feet above the ground. Most of the decommissioned stations will be east of the Rocky Mountains, where there is more overlap in coverage between them. On July 27, 2016, a final policy statement was released specifying stations to be decommissioned by 2025: a total of 74 stations are to be decommissioned in Phase 1 (2016–2020), and 234 more stations are scheduled to be taken out of service in Phase 2 (2021–2025). In the UK, 19 VOR transmitters are to be kept operational until at least 2020. Those at Cranfield and Dean Cross were decommissioned in 2014, with the remaining 25 to be assessed between 2015 and 2020. Similar efforts are underway in Australia and elsewhere.
In the UK and the United States, DME transmitters are planned to be retained in the near future even after co-located VORs are decommissioned. However, there are long-term plans to decommission DME, TACAN and NDBs. Technical specification The VOR signal encodes a Morse code identifier, optional voice, and a pair of 30 Hz navigation tones; the radial azimuth is equal to the phase angle between the lagging and leading navigation tones. CVOR The conventional signal encodes the station identifier, optional voice, the 30 Hz navigation reference signal, and the isotropic (i.e. omnidirectional) carrier component. The reference signal is frequency modulated (F3) on the 9,960 Hz subcarrier. The navigation variable signal is produced by mechanically or electrically rotating a directional antenna, yielding 30 Hz amplitude (A3) modulation. Receivers in different directions from the station therefore see a different phase alignment between the demodulated F3 and A3 signals. DVOR The Doppler signal encodes the station identifier, optional voice, the 30 Hz navigation variable signal, and the isotropic (i.e. omnidirectional) carrier component. The navigation variable signal is amplitude (A3) modulated. The navigation reference signal is produced by electrically revolving a pair of sideband transmitters around a ring of antennas: the cyclic Doppler blue shift, and corresponding Doppler red shift, as a transmitter closes on and recedes from the receiver results in frequency (F3) modulation. The pairing of transmitters offset equally high and low of the isotropic carrier frequency produces the upper and lower sidebands. Closing and receding equally on opposite sides of the same circle around the isotropic transmitter produces the F3 subcarrier modulation, the revolution radius being chosen so that the tangential speed yields the required subcarrier deviation. The resulting transmitter acceleration (24,000 g) makes mechanical revolution impractical, and halves (gravitational redshift) the frequency change ratio compared to transmitters in free fall. The mathematics describing the operation of a DVOR is far more complex than indicated above; the reference to "electronically rotated" is a vast simplification. The primary complication relates to a process called "blending". Another complication is that the phases of the upper and lower sideband signals have to be locked to each other. The composite signal is detected by the receiver. The electronic operation of detection effectively shifts the carrier down to 0 Hz, folding the signals with frequencies below the carrier on top of the frequencies above the carrier, so that the upper and lower sidebands are summed. If there is a phase shift φ between these two, then the combination will have a relative amplitude of cos(φ/2); if φ were 180°, the aircraft's receiver would not detect any sub-carrier (the A3 signal). "Blending" describes the process by which a sideband signal is switched from one antenna to the next. The switching is not discontinuous: the amplitude of the next antenna rises as the amplitude of the current antenna falls, and when one antenna reaches its peak amplitude, the next and previous antennas have zero amplitude. By radiating from two antennas, the effective phase center becomes a point between the two; thus the phase reference is swept continuously around the ring – not stepped, as would be the case with discontinuous antenna-to-antenna switching. In the electromechanical antenna switching systems employed before solid-state antenna switching systems were introduced, the blending was a by-product of the way the motorized switches worked.
These switches brushed a coaxial cable past 48 or 50 antenna feeds. As the cable moved between two antenna feeds, it would couple signal into both. But blending accentuates another complication of a DVOR. Each antenna in a DVOR uses an omnidirectional antenna, usually an Alford loop (see Andrew Alford). Unfortunately, the sideband antennas are very close together, so that approximately 55% of the energy radiated is absorbed by the adjacent antennas. Half of that is re-radiated, and half is sent back along the antenna feeds of the adjacent antennas. The result is an antenna pattern that is no longer omnidirectional, which causes the effective sideband signal to be amplitude modulated at 60 Hz as far as the aircraft's receiver is concerned. The phase of this modulation can affect the detected phase of the sub-carrier. This effect is called "coupling". Blending complicates this effect, because when two adjacent antennas radiate a signal, they create a composite antenna. Imagine two antennas that are separated by half their wavelength: in the transverse direction the two signals will sum, but in the tangential direction they will cancel. Thus as the signal "moves" from one antenna to the next, the distortion in the antenna pattern will increase and then decrease, with the peak distortion occurring at the midpoint. This creates a half-sinusoidal 1,500 Hz amplitude distortion in the case of a 50-antenna system (1,440 Hz in a 48-antenna system). This distortion is itself amplitude modulated with a 60 Hz amplitude modulation (and some 30 Hz as well), and it can add to or subtract from the above-mentioned 60 Hz distortion depending on the carrier phase. In fact one can add an offset to the carrier phase (relative to the sideband phases) so that the 60 Hz components tend to null one another. There is a 30 Hz component, though, which has some pernicious effects. DVOR designs use all sorts of mechanisms to try to compensate for these effects; the methods chosen are major selling points for each manufacturer, with each extolling the benefits of its technique over its rivals'. Note that ICAO Annex 10 limits the worst-case amplitude modulation of the sub-carrier to 40%; a DVOR that did not employ some technique to compensate for coupling and blending effects would not meet this requirement.
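The figures quoted in this section can be cross-checked with a short calculation. The sketch below assumes the standard ±480 Hz deviation of the 9,960 Hz subcarrier, 30 Hz rotation, and a mid-band 113 MHz carrier (the carrier value is an assumption for illustration); it reproduces the antenna-ring radius implied by the Doppler geometry, the quoted acceleration, and the cos(φ/2) sideband-sum amplitude discussed above.

```python
import math

# Sketch of two DVOR relationships discussed above (values illustrative).
C = 299_792_458.0      # speed of light, m/s
F_CARRIER = 113.0e6    # a mid-band VOR carrier, Hz (assumed)
F_ROT = 30.0           # electronic rotation rate, Hz
DEVIATION = 480.0      # peak FM deviation of the 9,960 Hz subcarrier, Hz

# 1) Revolution radius: peak Doppler shift = f_c * v / c, with tangential
#    speed v = 2*pi*F_ROT*R, so R = DEVIATION * C / (F_CARRIER * 2*pi*F_ROT).
radius = DEVIATION * C / (F_CARRIER * 2 * math.pi * F_ROT)
accel_g = (2 * math.pi * F_ROT) ** 2 * radius / 9.81
print(f"revolution radius ~ {radius:.2f} m, "
      f"centripetal acceleration ~ {accel_g:,.0f} g")   # ~6.8 m, ~24,000 g

# 2) Sideband sum: upper and lower sidebands with relative phase shift phi
#    combine with relative amplitude cos(phi/2); at phi = 180 degrees the
#    subcarrier vanishes at the receiver.
for phi_deg in (0, 90, 180):
    print(f"phi = {phi_deg:3d} deg -> relative amplitude "
          f"{abs(math.cos(math.radians(phi_deg) / 2)):.3f}")
```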
Accuracy and reliability The predicted accuracy of the VOR system is ±1.4°. However, test data indicate that 99.94% of the time a VOR system has less than ±0.35° of error. Internal monitoring of a VOR station will shut it down, or change over to a standby system, if the station error exceeds some limit. A Doppler VOR beacon will typically change over or shut down when the bearing error exceeds 1.0°. National airspace authorities may often set tighter limits; for instance, in Australia, a primary alarm limit may be set as low as ±0.5° on some Doppler VOR beacons. ARINC 711-10 (January 30, 2002) states that receiver accuracy should be within 0.4° with a statistical probability of 95% under various conditions; any receiver compliant with this standard can be expected to perform within these tolerances. All radio navigation beacons are required to monitor their own output. Most have redundant systems, so that the failure of one system will cause automatic change-over to one or more standby systems. The monitoring and redundancy requirements in some instrument landing systems (ILS) can be very strict. The general philosophy followed is that no signal is preferable to a poor signal. VOR beacons monitor themselves by having one or more receiving antennas located away from the beacon. The signals from these antennas are processed to monitor many aspects of the signals; the signals monitored are defined in various US and European standards, of which the principal standard is European Organisation for Civil Aviation Equipment (EUROCAE) Standard ED-52. The five main parameters monitored are the bearing accuracy, the reference and variable signal modulation indices, the signal level, and the presence of notches (caused by individual antenna failures). Note that the signals received by these antennas, in a Doppler VOR beacon, are different from the signals received by an aircraft, because the antennas are close to the transmitter and are affected by proximity effects. For example, the free-space path loss from the nearby sideband antennas will be 1.5 dB different (at 113 MHz and at a distance of 80 m) from that of the signals received from the far-side sideband antennas; for a distant aircraft there will be no measurable difference. Similarly, the peak rate of phase change seen by a receiver is from the tangential antennas: for the aircraft these tangential paths will be almost parallel, but this is not the case for an antenna near the DVOR. The bearing accuracy specification for all VOR beacons is defined in the International Civil Aviation Organization Convention on International Civil Aviation, Annex 10, Volume 1. This document sets the worst-case bearing accuracy performance of a conventional VOR (CVOR) at ±4°; a Doppler VOR (DVOR) is required to be within ±1°. All radio-navigation beacons are checked periodically to ensure that they are performing to the appropriate international and national standards. This includes VOR beacons, distance measuring equipment (DME), instrument landing systems (ILS), and non-directional beacons (NDB). Their performance is measured by aircraft fitted with test equipment. The VOR test procedure is to fly around the beacon in circles at defined distances and altitudes, and also along several radials. These aircraft measure signal strength, the modulation indices of the reference and variable signals, and the bearing error; they will also measure other selected parameters, as requested by local/national airspace authorities. Note that the same procedure is used (often in the same flight test) to check distance measuring equipment (DME). In practice, bearing errors can often exceed those defined in Annex 10 in some directions, usually due to terrain effects, buildings near the VOR, or, in the case of a DVOR, some counterpoise effects. Doppler VOR beacons utilize an elevated groundplane, called a counterpoise, that raises the effective antenna pattern: it creates a strong lobe at an elevation angle of 30° which complements the 0° lobe of the antennas themselves. A counterpoise, though, rarely works exactly as one would hope; for example, its edge can absorb and re-radiate signals from the antennas, and it may tend to do this differently in some directions than others. National airspace authorities will accept these bearing errors when they occur along directions that are not defined air traffic routes. For example, in mountainous areas the VOR may only provide sufficient signal strength and bearing accuracy along one runway approach path. Doppler VOR beacons are inherently more accurate than conventional VORs because they are less affected by reflections from hills and buildings.
The variable signal in a DVOR is the 30 Hz FM signal; in a CVOR it is the 30 Hz AM signal. If the AM signal from a CVOR beacon bounces off a building or hill, the aircraft will see a phase that appears to be at the phase center of the main signal and the reflected signal, and this phase center will move as the beam rotates. In a DVOR beacon, the variable signal, if reflected, will seem to be two FM signals of unequal strengths and different phases. Twice per 30 Hz cycle, the instantaneous deviation of the two signals will be the same, and the phase-locked loop will get (briefly) confused. As the two instantaneous deviations drift apart again, the phase-locked loop will follow the signal with the greater strength, which will be the line-of-sight signal. If the phase separation of the two deviations is small, however, the phase-locked loop will become less likely to lock on to the true signal for a larger percentage of the 30 Hz cycle (this will depend on the bandwidth of the output of the phase comparator in the aircraft). In general, some reflections can cause minor problems, but these are usually about an order of magnitude less severe than in a CVOR beacon. Using a VOR If a pilot wants to approach the VOR station from due east, then the aircraft will have to fly due west to reach the station. The pilot will use the OBS to rotate the compass dial until the number 27 (270°) aligns with the pointer (called the primary index) at the top of the dial. When the aircraft intercepts the 90° radial (due east of the VOR station) the needle will be centered and the To/From indicator will show "To". Notice that the pilot sets the VOR to indicate the reciprocal: the aircraft will follow the 90° radial while the VOR indicates that the course "to" the VOR station is 270°. This is called "proceeding inbound on the 090 radial." The pilot needs only to keep the needle centered to follow the course to the VOR station. If the needle drifts off-center, the aircraft is turned towards the needle until it is centered again. After the aircraft passes over the VOR station, the To/From indicator will indicate "From" and the aircraft is then proceeding outbound on the 270° radial. The CDI needle may oscillate or go to full scale in the "cone of confusion" directly over the station but will re-center once the aircraft has flown a short distance beyond the station. As an example, suppose the heading ring is set with 360° (north) at the primary index, the needle is centered, and the To/From indicator is showing "TO". The VOR is then indicating that the aircraft is on the 360° course (north) to the VOR station (i.e. the aircraft is south of the VOR station). If the To/From indicator were showing "From", it would mean the aircraft was on the 360° radial from the VOR station (i.e. the aircraft is north of the VOR). Note that there is absolutely no indication of which direction the aircraft is flying: the aircraft could be flying due west, and this could be the moment when it crossed the 360° radial.
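The indicator behaviour just described reduces to a small amount of arithmetic on the radial and the OBS setting. The following simplified sketch (an invented helper; no wind, no cone of confusion, and the 90° boundary is shown as FROM although a real indicator flags it as ambiguous) reproduces the TO/FROM and needle logic of the examples above:

```python
# Simplified sketch of VOR indicator logic: given the radial the aircraft
# is on and the OBS course, compute the TO/FROM flag and CDI deflection.
# Positive deflection = needle right; full scale is 5 dots (2 deg/dot).
def vor_indicator(radial_from_station: float, obs_course: float):
    # Signed angle from the selected course to the aircraft's radial,
    # wrapped into (-180, 180].
    diff = (radial_from_station - obs_course + 180) % 360 - 180
    if abs(diff) <= 90:
        flag = "FROM"
        deviation = -diff   # aircraft left of course -> needle right
    else:
        flag = "TO"
        deviation = (radial_from_station - (obs_course + 180) + 180) % 360 - 180
    dots = max(-5.0, min(5.0, deviation / 2))
    return flag, round(dots, 1)

# Aircraft south of the station (on the 180 radial), OBS set to 360:
print(vor_indicator(180, 360))   # ('TO', 0.0) -- fly 360 to the station
# On the 090 radial with OBS 270 selected (inbound from the east):
print(vor_indicator(90, 270))    # ('TO', 0.0)
print(vor_indicator(95, 270))    # ('TO', 2.5) -- needle right, turn right
```

Note that, exactly as in the text, the result depends only on the radial and the OBS setting, never on the aircraft's heading.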
Testing Before using a VOR indicator for the first time, it can be tested and calibrated at an airport with a VOR test facility, or VOT. A VOT differs from a VOR in that it replaces the variable directional signal with another omnidirectional signal, in a sense transmitting a 360° radial in all directions. The NAV receiver is tuned to the VOT frequency, then the OBS is rotated until the needle is centered. If the indicator reads within four degrees of 000 with the FROM flag visible, or 180 with the TO flag visible, it is considered usable for navigation. The FAA requires testing and calibration of a VOR indicator no more than 30 days before any flight under IFR. Intercepting VOR radials There are many methods available to determine what heading to fly to intercept a radial from the station or a course to the station. The most common method involves the acronym T-I-T-P-I-T, which stands for Tune – Identify – Twist – Parallel – Intercept – Track. Each of these steps is important to ensure the aircraft is headed where it is being directed. First, tune the desired VOR frequency into the navigation radio. Second, and most important, identify the correct VOR station by verifying the Morse code heard against the sectional chart. Third, twist the VOR OBS knob to the desired radial (FROM) or course (TO) the station. Fourth, bank the aircraft until the heading indicator matches the radial or course set in the VOR, paralleling the course. Fifth, fly towards the needle: if the needle is to the left, turn left by 30–45°, and vice versa. Last, once the VOR needle is centered, turn the aircraft back to the radial or course heading to track along the radial or course selected. If there is wind, a wind correction angle will be necessary to keep the VOR needle centered. Another method of intercepting a VOR radial more closely aligns with the operation of an HSI (horizontal situation indicator). The first three steps above are the same: tune, identify and twist. At this point, the VOR needle should be displaced to either the left or the right. Looking at the VOR indicator, the numbers on the same side as the needle will always be the headings needed to return the needle back to center. The aircraft heading should then be turned to align itself with one of those shaded headings. If done properly, this method will never produce reverse sensing, and using it will ensure quick understanding of how an HSI works, as the HSI visually shows what the pilot is mentally trying to do. For example, suppose an aircraft is flying a heading of 180° while located on a bearing of 315° from the VOR. After the OBS knob is twisted to 360°, the needle deflects to the right, shading the numbers between 360 and 090. If the aircraft turns to a heading anywhere in this range, the aircraft will intercept the radial. Although the needle deflects to the right, the shortest way of turning to the shaded range is a turn to the left. See also Index of aviation articles Airway (aviation) Direction finding (DF) Distance measuring equipment (DME) Global Positioning System (GPS) Head-up display (HUD) Instrument flight rules (IFR) Instrument landing system (ILS) Non-directional beacon (NDB) Performance-based navigation TACAN Transponder landing system (TLS) Victor airways Wide Area Augmentation System (WAAS) References External links UK Navigation Aids Gallery & Photos Navigation aid search from airnav.com A free online VOR and ADF simulator Latest Air Navigation Aid in Use Here Gives Pilots Wide Choice – newspaper article from 1951 explaining the then new system in depth Aeronautical navigation systems Aircraft instruments Avionics Radio navigation
VHF omnidirectional range
[ "Technology", "Engineering" ]
7,485
[ "Avionics", "Aircraft instruments", "Measuring instruments" ]
351,908
https://en.wikipedia.org/wiki/Almost%20surely
In probability theory, an event is said to happen almost surely (sometimes abbreviated as a.s.) if it happens with probability 1 (with respect to the probability measure). In other words, the set of outcomes on which the event does not occur has probability 0, even though the set might not be empty. The concept is analogous to the concept of "almost everywhere" in measure theory. In probability experiments on a finite sample space with a non-zero probability for each outcome, there is no difference between almost surely and surely (since having a probability of 1 entails including all the sample points); however, this distinction becomes important when the sample space is an infinite set, because an infinite set can have non-empty subsets of probability 0. Some examples of the use of this concept include the strong and uniform versions of the law of large numbers, the continuity of the paths of Brownian motion, and the infinite monkey theorem. The terms almost certainly (a.c.) and almost always (a.a.) are also used. Almost never describes the opposite of almost surely: an event that happens with probability zero happens almost never. Formal definition Let (Ω, F, P) be a probability space. An event E in F happens almost surely if P(E) = 1. Equivalently, E happens almost surely if the probability of E not occurring is zero: P(E^C) = 0. More generally, any set B ⊆ Ω (not necessarily in F) happens almost surely if the complement of B is contained in a null set: a subset N in F such that P(N) = 0. The notion of almost sureness depends on the probability measure P. If it is necessary to emphasize this dependence, it is customary to say that the event occurs P-almost surely, or almost surely (P). Illustrative examples In general, an event can happen "almost surely", even if the probability space in question includes outcomes which do not belong to the event—as the following examples illustrate. Throwing a dart Imagine throwing a dart at a unit square (a square with an area of 1) so that the dart always hits an exact point in the square, in such a way that each point in the square is equally likely to be hit. Since the square has area 1, the probability that the dart will hit any particular subregion of the square is equal to the area of that subregion. For example, the probability that the dart will hit the right half of the square is 0.5, since the right half has area 0.5. Next, consider the event that the dart hits exactly a point in the diagonals of the unit square. Since the area of the diagonals of the square is 0, the probability that the dart will land exactly on a diagonal is 0. That is, the dart will almost never land on a diagonal (equivalently, it will almost surely not land on a diagonal), even though the set of points on the diagonals is not empty, and a point on a diagonal is no less possible than any other point. Tossing a coin repeatedly Consider the case where a (possibly biased) coin is tossed, corresponding to the probability space ({H, T}, 2^{H, T}, P), where the event {H} occurs if a head is flipped, and {T} if a tail is flipped. For this particular coin, it is assumed that the probability of flipping a head is p, for some 0 < p < 1, from which it follows that the complement event, that of flipping a tail, has probability 1 − p. Now, suppose an experiment were conducted where the coin is tossed repeatedly, with outcomes ω = (ω_1, ω_2, ...) and the assumption that each flip's outcome is independent of all the others (i.e., they are independent and identically distributed; i.i.d.). Define the sequence of random variables (X_i) on the coin toss space by X_i(ω) = ω_i, i.e. each X_i records the outcome of the i-th flip. In this case, any infinite sequence of heads and tails is a possible outcome of the experiment. However, any particular infinite sequence of heads and tails has probability 0 of being the exact outcome of the (infinite) experiment. This is because the i.i.d. assumption implies that the probability of flipping all heads over n flips is simply P(X_1 = H, ..., X_n = H) = p^n. Letting n → ∞ yields 0, since p < 1 by assumption. The result is the same no matter how much we bias the coin towards heads, so long as we constrain p to be strictly between 0 and 1. In fact, the same result even holds in non-standard analysis—where infinitesimal probabilities are allowed. Moreover, the event "the sequence of tosses contains at least one T" will also happen almost surely (i.e., with probability 1). But if instead of an infinite number of flips, flipping stops after some finite time, say 1,000,000 flips, then the probability of getting an all-heads sequence, p^1,000,000, would no longer be 0, while the probability of getting at least one tail, 1 − p^1,000,000, would no longer be 1 (i.e., the event is no longer almost sure).
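The all-heads argument can be checked numerically. In the sketch below (parameters arbitrary), the theoretical probability p^n collapses toward zero as n grows, and a Monte Carlo estimate tracks it until the event becomes too rare to observe at all:

```python
import random

# A minimal sketch: empirically checking that the probability of an
# all-heads run of length n vanishes as n grows, even for a coin heavily
# biased towards heads (p = 0.9). Parameters are arbitrary.
p = 0.9            # probability of heads; any 0 < p < 1 gives the same limit
trials = 100_000   # Monte Carlo trials per run length

for n in [10, 100, 1000]:
    all_heads = sum(
        all(random.random() < p for _ in range(n)) for _ in range(trials)
    )
    print(f"n={n:5d}  theory p^n={p**n:.3e}  observed={all_heads/trials:.3e}")
# By n = 1000 the theoretical value is ~1e-46, so the observed frequency is 0.
```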
Asymptotically almost surely In asymptotic analysis, a property is said to hold asymptotically almost surely (a.a.s.) if, over a sequence of sets, the probability converges to 1. This is equivalent to convergence in probability. For instance, in number theory, a large number is asymptotically almost surely composite, by the prime number theorem; and in random graph theory, the statement "G(n, p_n) is connected" (where G(n, p) denotes the graphs on n vertices with edge probability p) is true a.a.s. when, for some ε > 0, p_n > (1 + ε) ln n / n. In number theory, this is referred to as "almost all", as in "almost all numbers are composite". Similarly, in graph theory, this is sometimes referred to as "almost surely". See also Almost Almost everywhere, the corresponding concept in measure theory Convergence of random variables, for "almost sure convergence" With high probability Cromwell's rule, which says that probabilities should almost never be set as zero or one Degenerate distribution, for "almost surely constant" Infinite monkey theorem, a theorem using the aforementioned terms List of mathematical jargon Notes References Probability theory Mathematical terminology
Almost surely
[ "Mathematics" ]
1,216
[ "nan" ]
351,914
https://en.wikipedia.org/wiki/Self-assembly
Self-assembly is a process in which a disordered system of pre-existing components forms an organized structure or pattern as a consequence of specific, local interactions among the components themselves, without external direction. When the constitutive components are molecules, the process is termed molecular self-assembly. Self-assembly can be classified as either static or dynamic. In static self-assembly, the ordered state forms as a system approaches equilibrium, reducing its free energy; in dynamic self-assembly, the ordered state requires a continuous input of energy to be maintained. Patterns of pre-existing components organized by such dissipative interactions are not commonly described as "self-assembled" by scientists in the associated disciplines; these structures are better described as "self-organized", although the terms are often used interchangeably. In chemistry and materials science Self-assembly in the classic sense can be defined as the spontaneous and reversible organization of molecular units into ordered structures by non-covalent interactions. The first property of a self-assembled system that this definition suggests is the spontaneity of the self-assembly process: the interactions responsible for the formation of the self-assembled system act on a strictly local level—in other words, the nanostructure builds itself. Although self-assembly typically occurs between weakly interacting species, this organization may be transferred into strongly bound covalent systems. An example of this may be observed in the self-assembly of polyoxometalates: evidence suggests that such molecules assemble via a dense-phase type mechanism whereby small oxometalate ions first assemble non-covalently in solution, followed by a condensation reaction that covalently binds the assembled units. This process can be aided by the introduction of templating agents to control the formed species. In such a way, highly organized covalent molecules may be formed in a specific manner. A self-assembled nanostructure is an object that appears as a result of ordering and aggregation of individual nano-scale objects guided by some physical principle. A particularly counter-intuitive example of a physical principle that can drive self-assembly is entropy maximization. Though entropy is conventionally associated with disorder, under suitable conditions entropy can drive nano-scale objects to self-assemble into target structures in a controllable way. Another important class of self-assembly is field-directed assembly. An example of this is the phenomenon of electrostatic trapping. In this case an electric field is applied between two metallic nano-electrodes. The particles present in the environment are polarized by the applied electric field, and because of the dipole interaction with the electric field gradient the particles are attracted to the gap between the electrodes. Generalizations of this approach involving other types of fields (e.g., magnetic fields, capillary interactions for particles trapped at interfaces, or elastic interactions for particles suspended in liquid crystals) have also been reported. Regardless of the mechanism driving self-assembly, people take self-assembly approaches to materials synthesis to avoid the problem of having to construct materials one building block at a time. Avoiding one-at-a-time approaches is important because the amount of time required to place building blocks one at a time into a target structure is prohibitively long for structures of macroscopic size.
Once materials of macroscopic size can be self-assembled, those materials can find use in many applications. For example, nano-structures such as nano-vacuum gaps are used for storing energy and nuclear energy conversion. Self-assembled tunable materials are promising candidates for large surface area electrodes in batteries and organic photovoltaic cells, as well as for microfluidic sensors and filters. Distinctive features At this point, one may argue that any chemical reaction driving atoms and molecules to assemble into larger structures, such as precipitation, could fall into the category of self-assembly. However, there are at least three distinctive features that make self-assembly a distinct concept. Order First, the self-assembled structure must have a higher order than the isolated components, be it a shape or a particular task that the self-assembled entity may perform. This is generally not true in chemical reactions, where an ordered state may proceed towards a disordered state depending on thermodynamic parameters. Interactions The second important aspect of self-assembly is the predominant role of weak interactions (e.g. Van der Waals forces, capillary forces, hydrogen bonds, or entropic forces) compared to more "traditional" covalent, ionic, or metallic bonds. These weak interactions are important in materials synthesis for two reasons. First, weak interactions take a prominent place in materials, especially in biological systems. For instance, they determine the physical properties of liquids, the solubility of solids, and the organization of molecules in biological membranes. Second, in addition to the strength of the interactions, interactions with varying degrees of specificity can control self-assembly. Self-assembly that is mediated by DNA pairing interactions constitutes the interactions of the highest specificity that have been used to drive self-assembly. At the other extreme, the least specific interactions are possibly those provided by emergent forces that arise from entropy maximization. Building blocks The third distinctive feature of self-assembly is that the building blocks are not only atoms and molecules, but span a wide range of nano- and mesoscopic structures, with different chemical compositions, functionalities, and shapes. Research into possible three-dimensional shapes of self-assembling micrites examines Platonic solids (regular polyhedra). The term "micrite" was created by DARPA to refer to sub-millimeter sized microrobots, whose self-organizing abilities may be compared with those of slime mold. Recent examples of novel building blocks include polyhedra and patchy particles. Examples also include microparticles with complex geometries, such as hemispheres, dimers, discs, rods, and molecules, as well as multimers. These nanoscale building blocks can in turn be synthesized through conventional chemical routes or by other self-assembly strategies such as directional entropic forces. More recently, inverse design approaches have appeared, in which it is possible to fix a target self-assembled behavior and determine an appropriate building block that will realize that behavior. Thermodynamics and kinetics Self-assembly in microscopic systems usually starts with diffusion, followed by the nucleation of seeds and subsequent growth of the seeds, and ends with Ostwald ripening. The thermodynamic driving free energy can be either enthalpic or entropic, or both. In either the enthalpic or entropic case, self-assembly proceeds through the formation and breaking of bonds, possibly with non-traditional forms of mediation. The kinetics of the self-assembly process is usually related to diffusion: the absorption/adsorption rate often follows a Langmuir adsorption model, which in the diffusion-controlled (relatively dilute) regime can be estimated from Fick's laws of diffusion. The desorption rate is determined by the bond strength of the surface molecules/atoms, with a thermally activated energy barrier. The growth rate is the competition between these two processes.
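The adsorption/desorption competition just described can be illustrated with first-order Langmuir kinetics, dθ/dt = k_a c (1 − θ) − k_d θ, where θ is the surface coverage. The sketch below uses arbitrary rate constants purely for illustration; the coverage relaxes to the Langmuir isotherm value K c / (1 + K c), with K = k_a / k_d.

```python
# Illustrative sketch of Langmuir adsorption kinetics for surface coverage
# theta: d(theta)/dt = k_a * c * (1 - theta) - k_d * theta.
# Rate constants and concentration are arbitrary example values.
k_a, k_d = 2.0, 0.5      # adsorption / desorption rate constants
c = 1.0                  # (dilute) solution concentration
dt, steps = 0.01, 500    # simple forward-Euler integration

theta = 0.0
for step in range(steps + 1):
    if step % 100 == 0:
        print(f"t = {step * dt:4.1f}  coverage = {theta:.3f}")
    theta += dt * (k_a * c * (1 - theta) - k_d * theta)

# Coverage relaxes to the Langmuir isotherm value K*c/(1 + K*c), K = k_a/k_d.
K = k_a / k_d
print("equilibrium prediction:", K * c / (1 + K * c))   # 0.8
```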
Examples Important examples of self-assembly in materials science include the formation of molecular crystals, colloids, lipid bilayers, phase-separated polymers, and self-assembled monolayers. The folding of polypeptide chains into proteins and the folding of nucleic acids into their functional forms are examples of self-assembled biological structures. Recently, a three-dimensional macroporous structure was prepared via self-assembly of a diphenylalanine derivative under cryoconditions; the obtained material may find application in the field of regenerative medicine or in drug delivery systems. P. Chen et al. demonstrated a microscale self-assembly method using the air-liquid interface established by Faraday waves as a template. This self-assembly method can be used for the generation of diverse sets of symmetrical and periodic patterns from microscale materials such as hydrogels, cells, and cell spheroids. Yasuga et al. demonstrated how fluid interfacial energy drives the emergence of three-dimensional periodic structures in micropillar scaffolds. Myllymäki et al. demonstrated the formation of micelles that undergo a change in morphology to fibers and eventually to spheres, all controlled by solvent change. Properties Self-assembly extends the scope of chemistry, aiming at synthesizing products with order and functionality properties, extending chemical bonding to weak interactions, and encompassing the self-assembly of nanoscale building blocks at all length scales. In covalent synthesis and polymerization, the scientist links atoms together in any desired conformation, which does not necessarily have to be the energetically most favoured position; self-assembling molecules, on the other hand, adopt a structure at the thermodynamic minimum, finding the best combination of interactions between subunits but not forming covalent bonds between them. In self-assembling structures, the scientist must predict this minimum, not merely place the atoms in the location desired. Another characteristic common to nearly all self-assembled systems is their thermodynamic stability. For self-assembly to take place without intervention of external forces, the process must lead to a lower Gibbs free energy; thus self-assembled structures are thermodynamically more stable than the single, unassembled components. A direct consequence is the general tendency of self-assembled structures to be relatively free of defects. An example is the formation of two-dimensional superlattices composed of an orderly arrangement of micrometre-sized polymethylmethacrylate (PMMA) spheres, starting from a solution containing the microspheres, in which the solvent is allowed to evaporate slowly under suitable conditions. In this case, the driving force is capillary interaction, which originates from the deformation of the surface of a liquid caused by the presence of floating or submerged particles.
These two properties—weak interactions and thermodynamic stability—can be recalled to rationalise another property often found in self-assembled systems: their sensitivity to perturbations exerted by the external environment. Small fluctuations that alter thermodynamic variables can lead to marked changes in the structure, and may even compromise it, either during or after self-assembly. The weak nature of the interactions is responsible for the flexibility of the architecture and allows for rearrangements of the structure in the direction determined by thermodynamics. If fluctuations bring the thermodynamic variables back to the starting condition, the structure is likely to return to its initial configuration. This leads us to identify one more property of self-assembly, which is generally not observed in materials synthesized by other techniques: reversibility. Self-assembly is a process which is easily influenced by external parameters. This feature can make synthesis rather complex because of the need to control many free parameters. Yet self-assembly has the advantage that a large variety of shapes and functions on many length scales can be obtained. The fundamental condition needed for nanoscale building blocks to self-assemble into an ordered structure is the simultaneous presence of long-range repulsive and short-range attractive forces. By choosing precursors with suitable physicochemical properties, it is possible to exert fine control over the formation processes that produce complex structures. Clearly, the most important tool when it comes to designing a synthesis strategy for a material is knowledge of the chemistry of the building units. For example, it was demonstrated that it was possible to use diblock copolymers with different block reactivities in order to selectively embed maghemite nanoparticles and generate periodic materials with potential use as waveguides. In 2008 it was proposed that every self-assembly process presents a co-assembly, which makes the former term a misnomer. This thesis is built on the concept of mutual ordering of the self-assembling system and its environment. At the macroscopic scale The most common examples of self-assembly at the macroscopic scale can be seen at interfaces between gases and liquids, where molecules can be confined at the nanoscale in the vertical direction and spread over long distances laterally. Examples of self-assembly at gas-liquid interfaces include breath-figures, self-assembled monolayers, droplet clusters, and Langmuir–Blodgett films, while crystallization of fullerene whiskers is an example of macroscopic self-assembly between two liquids. Another remarkable example of macroscopic self-assembly is the formation of thin quasicrystals at an air-liquid interface, which can be built up not only from inorganic but also from organic molecular units. Furthermore, it was reported that Fmoc-protected L-DOPA amino acid (Fmoc-DOPA) can serve as a minimal supramolecular polymer model, displaying a spontaneous structural transition from meta-stable spheres to fibrillar assemblies, to gel-like material, and finally to single crystals. Self-assembly processes can also be observed in systems of macroscopic building blocks. These building blocks can be externally propelled or self-propelled. Since the 1950s, scientists have built self-assembly systems exhibiting centimeter-sized components ranging from passive mechanical parts to mobile robots. For systems at this scale, the component design can be precisely controlled.
For some systems, the components' interaction preferences are programmable. The self-assembly processes can be easily monitored and analyzed by the components themselves or by external observers. In April 2014, a 3D-printed plastic was combined with a "smart material" that self-assembles in water, resulting in "4D printing". Consistent concepts of self-organization and self-assembly People regularly use the terms "self-organization" and "self-assembly" interchangeably. As complex systems science becomes more popular, however, there is a greater need to clearly distinguish the differences between the two mechanisms to understand their significance in physical and biological systems. Both processes explain how collective order develops from "dynamic small-scale interactions". Self-organization is a non-equilibrium process, whereas self-assembly is a spontaneous process that leads toward equilibrium. Self-assembly requires components to remain essentially unchanged throughout the process. Besides the thermodynamic difference between the two, there is also a difference in formation. The first difference is that self-assembly "encodes the global order of the whole" in the initial components, whereas in self-organization this initial encoding is not necessary. Another slight contrast refers to the minimum number of units needed to make an order. Self-organization appears to have a minimum number of units whereas self-assembly does not. The concepts may have particular application in connection with natural selection. Eventually, these patterns may form one theory of pattern formation in nature. See also Assembly theory Crystal engineering Autopoiesis Langmuir–Blodgett film Nanotechnology Pick-and-place machine Programmable matter Self-assembly based manufacturing Self-assembly of nanoparticles References Further reading External links Kuniaki Nagayama, Freeview Video 'Self-Assembly: Nature's Way To Do It', a Royal Institution Lecture by the Vega Science Trust. Paper Molecular Self-Assembly Wiki: C2 Self Assembly from a computer programming perspective. Pelesko, J.A., (2007) Self Assembly: The Science of Things That Put Themselves Together, Chapman & Hall/CRC Press. A brief page on self-assembly at the University of Delaware Self Assembly Structure and Dynamics of Organic Nanostructures Metal organic coordination networks of oligopyridines and Cu on graphite Nanotechnology Cell biology Systems theory Self-organization
Self-assembly
[ "Materials_science", "Mathematics", "Engineering", "Biology" ]
3,189
[ "Self-organization", "Cell biology", "Materials science", "Nanotechnology", "Dynamical systems" ]
351,931
https://en.wikipedia.org/wiki/Explosive%20train
A triggering sequence, also called an explosive train or a firing train, is a sequence of events that culminates in the detonation of explosives. For safety reasons, most widely used high explosives are difficult to detonate. A primary explosive of higher sensitivity is used to trigger a uniform and predictable detonation of the main body of the explosive. Although the primary explosive itself is generally a more sensitive and expensive compound, it is only used in small quantities and in relatively safely packaged forms. By design, low explosives are made highly sensitive (i.e. their Figure of Insensitivity is low) while high explosives are made comparatively insensitive. This not only affords inherent safety to the use of explosives during handling and transport, but also necessitates an explosive triggering sequence, or explosive train. The explosive train essentially consists of an 'initiator', an 'intermediary' and the 'high explosive'. For example, a match will not cause plastic explosive to explode, but it will light a fuse coupled with a blasting cap, which will detonate a primary explosive that will shock a secondary high explosive and cause it to detonate. In this way, even very insensitive explosives may be used; the primary detonates a "booster" charge that then detonates the main charge. Triggering sequences are used in the mining industry for the detonation of ANFO and other cheap, bulk, insensitive explosives that cannot be fired by a blasting cap or similar item alone. Low explosive train An example of a low-explosive train is a rifle cartridge, which consists of: a primer, containing a small amount of primary high explosive, which initiates the explosive train; an igniter, which is initiated by the primer and creates a flame that ignites the propellant; and a propellant, a secondary low explosive that emits a large amount of gas as it deflagrates. High explosive train High-explosive trains can have either a two-step configuration (e.g., a detonator containing primary explosive, and dynamite or another sensitive secondary explosive) or a three-step configuration (e.g., an initiator (detonator, compound cap or NPED), a booster of intermediate explosive, and a main charge of insensitive secondary explosive). Primary components A high explosive train includes three primary high explosive components which are used to initiate explosives: the fuse or fuze, the primer, and the detonator. Detonators are conventionally made from tetryl and fulminates, but can be made of other initiating explosive materials. Secondary components In an explosive train there are two secondary high explosive components: boosters, and bursting charges (also known as the main charge). Examples of explosives used in bursting charges are TNT, Composition B, Ammonal, Semtex, RDX, HMX, ETN, PETN, C-4, and other suitable binary explosives. Tertiary components Examples of main charges are TNT, Composition B, Pentolite, Baratol, Amatol, PLX, HMX, ETN, PETN, and other suitable binary explosives. In some cases, the main charge is so insensitive that using typical primary materials becomes impractical due to the large amount required. Thus, an explosive booster is used to deliver a shockwave sufficient to initiate the main charge so that full detonation occurs. The most significant tertiary material in widespread general use is ANFO, a binary explosive made from ammonium nitrate and fuel oil. References Explosives
Explosive train
[ "Chemistry" ]
706
[ "Explosives", "Explosions" ]
351,945
https://en.wikipedia.org/wiki/Anthropological%20Society%20of%20London
The Anthropological Society of London (ASL) was a short-lived organisation of the 1860s whose founders aimed to furnish scientific evidence for white supremacy, which they construed in terms of polygenism. It was founded in 1863 by Richard Francis Burton and James Hunt. Hunt had previously been the secretary of the Ethnological Society of London, which was founded in 1843. When he founded the breakaway ASL, Hunt claimed that the society had "the object of promoting the study of Anthropology in a strictly scientific manner". Nevertheless, he reminded his audience that, whatever evidence might be uncovered, "we still know that the Races of Europe now have much in their mental and moral nature which the races of Africa have not got." The ASL lasted only eight years: following Hunt's death in 1869 it was absorbed into the Royal Anthropological Institute of Great Britain and Ireland. Prelude to the ASL James Hunt had encountered the disgraced Edinburgh anatomist Robert Knox in 1855. During the trial of the grave robbers Burke and Hare, Knox had been exposed as the anatomist for whom they had procured their victims' bodies. Despite attacks by hostile mobs, he continued as an anatomist, though he never regained such a steady source of dead bodies. He was barred from further teaching by the Royal College of Surgeons of Edinburgh after falsifying records, and was expelled from the Royal Society of Edinburgh. Polygenism versus monogenism The real differences between the two societies ran much deeper. The members of the Ethnological Society were, on the whole, inclined to believe that humans were shaped by their environment; when Charles Darwin published his theory of evolution by natural selection, they supported it. They also advocated monogenism and tended to be politically liberal, especially on matters related to race. Hunt and his closest followers tended to be supporters of polygenism and sceptical of Darwin (though they made him an honorary fellow). They found the Ethnological Society's politics distasteful, and (for example) supported the Confederacy in the American Civil War. The issue that most sharply divided the two groups was the "Negro question." In his opening speech to the society Hunt enunciated a strongly racist view: Whatever may be the conclusion to which our scientific inquiries may lead us, we should always remember, that by whatever means the Negro, for instance, acquired his present physical, mental and moral character, whether he has risen from an ape or descended from a perfect man, we still know that the Races of Europe have now much in their mental and moral nature which the races of Africa have not got. However he was careful to distance himself from the slave trade: A serious charge has been made against the American School of Anthropology, when it is affirmed that their interest in keeping up slavery induced the scientific men of that country to advocate a distinct origin for the African race.... I would therefore express a hope that the objects of this Society will never be prostituted to such an object as the support of the slave trade, with all its abuses. He did this by redefining slavery in such a way that it did not occur in America: Our Bristol and Liverpool merchants, perhaps, helped to benefit the race when they transplanted some of them to America; and our mistaken legislature has done the Negro race much injury by their absurd and unwarrantable attempts to prevent Africa from exporting her worthless or surplus population....
I cannot shut my eyes to the fact that slavery as understood by the ancients does not exist out of Africa and that the highest type of the Negro race is at present to be found in the Confederate States of America. According to noted Darwin biographers Adrian Desmond and James Moore, however, founder James Hunt was a paid agent of the Confederate States of America, as were his friend Henry Hotze and two other council members. Their purpose in founding the society was "to swing London opinion during the [American Civil] war." Hunt and Hotze put pro-slavery pseudoscience into the Anthropological Society library, "bought journalists, printed and distributed thousands of pamphlets,... ran a propaganda weekly in Fleet Street, The Index..." and in general promoted the pro-slavery dogma that black people were a separate species and inherently capable of no higher development than that of enslavement. Cannibal Club Hunt and Burton established the Cannibal Club as a gentleman's club, which drew many members from within the ASL. Merger In 1864, Hunt attempted to persuade the British Association to rename Section E (Geography and Ethnology) to include Anthropology, and in 1865 his attempt to create a new Anthropology sub-section devoted to the study of man was strongly resisted by others. However, with the support of T. H. Huxley, it was created under Biology Section D in 1866, and in 1869 Section E dropped the "Ethnology" part of its title. At the same time, Hunt's position was weakened by an allegation made by one of the members, Hyde Clarke, about the finances of the organisation. Although he managed to satisfy the other members and expel Clarke, the stress seriously affected his health. A merger of the two organisations was already under way before Hunt died prematurely in 1869, and in 1871 they formed the Royal Anthropological Institute of Great Britain and Ireland. Other organisations In 1873, Richard Burton and others founded a breakaway London Anthropological Society, which for several years published a journal, "Anthropologia". Burton said "My motive was to supply travellers with an organ that would rescue their observations from the outer darkness of manuscripts and print their curious information on social and sexual matters out of place in the popular book". There was also an Anthropological Society of London founded in 1836 by John Isaac Hawkins, which had more to do with phrenology. Publications Memoirs read before the Anthropological Society of London, Vol. 1: 1863–4; Vol. 2: 1865–6; Vol. 3: 1867–9. Journal of the Anthropological Society of London, Vol. 7: 1868. Anthropological Review, Vols. 1–8 (Vol. 2: 1864; Vol. 8: 1870). Journal of Anthropology, No. I–III: 1870–1. The Popular Magazine of Anthropology, Vol. 1. Anthropologia, Vol. 1. London: Baillière, Tindall and Cox [1873–1875]. References Further reading Efram Sera-Shriar, 'Observing Human Difference: James Hunt, Thomas Huxley, and Competing Disciplinary Strategies in the 1860s', Annals of Science, 70 (2013), 461-491 1863 establishments in England Anthropology-related professional associations Learned societies of the United Kingdom Organizations established in 1863 Scientific racism White supremacist groups Royal Anthropological Institute of Great Britain and Ireland
Anthropological Society of London
[ "Biology" ]
1,381
[ "Biology theories", "Obsolete biology theories", "Scientific racism" ]
352,026
https://en.wikipedia.org/wiki/Clearance%20Diving%20Branch%20%28RAN%29
The Clearance Diving Branch is the specialist diving unit of the Royal Australian Navy (RAN) whose versatile role covers all spheres of military diving, and includes explosive ordnance disposal and maritime counter-terrorism. The Branch has evolved from traditional maritime diving and explosive ordnance disposal to include a special operations focus. History The RAN has used divers on a regular basis since the 1920s, but it was not until World War II that clearance diving operations came to the fore, with RAN divers working alongside Royal Navy divers to remove naval mines from British waters and from the waters of captured ports on the European mainland, with divers such as Hugh Syme, John Mould, George Gosse and Leon Goldsworthy all highly decorated. RAN divers also performed duties including reconnaissance of amphibious landing sites. The skills learned in the European theatre were brought back to Australia, and used in the war against Japan. After the war, RAN divers were used during the clean-up of defensive mines from Australian and Papua New Guinean waters. The utility of clearance and commando divers demonstrated during and after World War II prompted the Australian Commonwealth Naval Board to establish a clearance diving branch within the RAN in 1951. Divers were initially attached to the Underwater Research and Development Unit. In 1956, they were organised into a separate Mobile Clearance Diving Team. In March 1966, the divers underwent further reorganisation, splitting into two Clearance Diving Teams. Clearance Diving Team 1 (CDT 1) was the operational team assigned to mine clearance and reconnaissance operations throughout the Australia Station, while Clearance Diving Team 2 (CDT 2) was dedicated to mine warfare in the Sydney area, but was not cleared for operations outside this area. In late 1966, Clearance Diving Team 3 (CDT 3) was established specifically for deployment to the Vietnam War, to assist the overworked United States Navy Explosive Ordnance Disposal units and to give RAN clearance diving personnel experience in an operational environment. Sending CDT 1 or CDT 2, in full or in part, would have impacted on the teams' existing commitments, along with the continuity of training and postings. CDT 3 was formed from available personnel; this was sufficient to keep a six-man team on station in Vietnam from early 1967 until early 1971, with six-month deployments. CDT 3 was disbanded at the end of the Vietnam War, but the designation has been reactivated for overseas wartime deployments, including in 1991 for the Gulf War and again in 2003 for the Iraq War. Structure The Clearance Diving Branch consists of: Clearance Diving Team One (AUSCDT1), assigned to the east of Australia and based at HMAS Waterhen in New South Wales; and Clearance Diving Team Four (AUSCDT4), assigned to the west of Australia and based at HMAS Stirling in Western Australia. For overseas operational deployments, the designation of Clearance Diving Team Three (AUSCDT3) is used for a specifically formed team.
The Royal Australian Naval Reserve has eight Reserve Diving Teams (ANRDT) which provide supplementary or surge capability in support of regular CDTs, in addition to localised fleet underwater taskings: Reserve Diving Team Five (ANRDT5) – based at HMAS Waterhen Reserve Diving Team Six (ANRDT6) – based in Melbourne Reserve Diving Team Seven (ANRDT7) – based at HMAS Stirling Reserve Diving Team Eight (ANRDT8) – based in Brisbane Reserve Diving Team Nine (ANRDT9) – based in Adelaide Reserve Diving Team Ten (ANRDT10) – based in Hobart Reserve Diving Team Eleven (ANRDT11) – based in Darwin Reserve Diving Team Twelve (ANRDT12) – based in Cairns Role The Clearance Diving Branch force elements are: 1. Maritime Tactical Operations (MTO): Clandestine beach reconnaissance (including back-of-beach operations up to 2 km inland) Clandestine hydrographic survey of the seabed prior to an amphibious assault Clandestine clearance or demolition of sea mines and/or obstacles Clandestine placing of demolition charges for the purpose of diversion or demonstration (ship/wharf attacks) Clandestine document collection 2. Mine Counter Measures (MCM): Location and disposal of sea mines in shallow waters Rendering safe and recovering enemy mines The search for and disposal of ordnance below the high water mark Clearance of surface ordnance in port or on naval facilities Search for, rendering safe, or disposal of all ordnance in RAN ships and facilities 3. Underwater Battle Damage Repair (UBDR): Surface-supplied breathing apparatus diving Use of underwater tools including welders, explosive nailguns, pneumatic drills and chainsaws 4. Task Group Explosive Ordnance Disposal (TGEOD): Embarking on warships for Operation MANITOU rotations in the Middle East to provide specialist support to boarding parties dealing with improvised explosive devices (IEDs) and explosive ordnance 5. Maritime counter-terrorism explosive ordnance disposal (MCT-EOD): Provide explosive ordnance disposal (EOD) and improvised explosive device disposal (IEDD) mobility support to the Tactical Assault Group at rapid speed to maintain the momentum of a direct assault mission A Clearance Diver may be posted to a Clearance Diving Team, a Huon-class minehunter coastal ship, or a training position in the Australian Defence Force Diving School at HMAS Penguin, and can apply to serve in the Tactical Assault Group-East (TAG-E). Since January 2002, Special Duties Units of Clearance Divers from AUSCDT1 and AUSCDT4 have provided the maritime counter-terrorism element of Tactical Assault Group-East (TAG-E), attached to the Australian Army 2nd Commando Regiment, which became operational on 22 July 2002 to respond to terrorist incidents in the Eastern States of Australia. Clearance Divers need to pass the Army Special Forces Screen Test and then successfully complete specific elements of Commando Reinforcement Training before serving in either the water platoon as an assaulter or in the water sniper team in the sniper platoon. Service in TAG-E is normally 12 to 18 months online before rotating back into the Branch; divers are able to rotate back into TAG-E after 12 to 18 months offline. Selection and training The RAN's diver training program commences with a 5-day Clearance Diver Aptitude Assessment (CDAA), focused on demonstrating in-water confidence, physical endurance, mental resilience and attention, and supported by psychological assessment.
The CDAA is intended to ensure that officer and sailor clearance diver candidates have the right aptitude to commence the lengthy training program. Historically there have been a few variants, including the 10-day clearance diver acceptance test (CDAT), colloquially known as "hell week". During CDAT, candidates began each day at 02:00, and were put through over thirty staged dives designed to test their strength and endurance. CDAT was shortened to a 7-day program, then to the current 5-day program with an increasing emphasis on in-water confidence and endurance. Upon passing the aptitude assessment, students must successfully complete the 60-week Clearance Diver Initial Employment Training course. In 2019, the first females graduated from the Clearance Diver Initial Employment Training course. Clearance Divers who are promoted to Leading Seaman Clearance Diver complete the Intermediate Clearance Diver course, and those promoted to Petty Officer Clearance Diver complete the Advanced Clearance Diver course. Officers complete the Clearance Diving component of the Mine Warfare and Clearance Diving Officers course. The MCT-EOD role requires clearance divers to be familiar with Tactical Assault Group (TAG) specialist insertion techniques, including diving, fast roping and parachuting, so that they can integrate into the unit to provide IED expertise. Operations In the Vietnam War, Clearance Diving Team 3 was awarded the United States Presidential Unit Citation, the United States Navy Unit Commendation twice, and the United States Meritorious Unit Commendation for its mine clearance work: see Non-US recipients of US gallantry awards. Took part in Operation Navy Help. 1991: Performed mine clearance operations for coalition forces during the Gulf War. 1999: In the East Timor independence crisis as part of INTERFET, CDTs clandestinely mapped harbours and beaches in preparation for the arrival of peacekeepers. 2003: In Operation Falconer (the invasion phase of the Iraq War), CDTs were attached to Commander Task Unit 55.4.3, along with US and British partners, tasked with conducting deep- and shallow-water mine countermeasure operations to clear shipping lanes. CDTs notably participated in opening up the port at Umm Qasr. 2003–2009: In Operation Catalyst (post-invasion Iraq), CDTs were attached to Coalition counter-improvised explosive device (IED) task forces. 2008–2013: In Operation Slipper, CDTs deployed explosive ordnance disposal (EOD) technicians in Afghanistan and provided tactical boarding parties for ships combating smugglers and piracy. See also Notes Footnotes Citations References External links Royal Australian Navy Clearance Diver – Defence Jobs Combat diving Frogman operations Armed forces diving Royal Australian Navy Special forces of Australia Military units and formations of the Royal Australian Navy Bomb disposal Recipients of the Meritorious Unit Citation Cold War history of Australia Military Units in Western Australia Military units and formations of Australia in the Vietnam War Military units and formations of the Gulf War Military units and formations of Australia in the Iraq War
Clearance Diving Branch (RAN)
[ "Chemistry" ]
1,838
[ "Explosion protection", "Bomb disposal" ]
352,047
https://en.wikipedia.org/wiki/Laryngoscopy
Laryngoscopy is endoscopy of the larynx, a part of the throat. It is a medical procedure that is used to obtain a view, for example, of the vocal folds and the glottis. Laryngoscopy may be performed to facilitate tracheal intubation during general anaesthesia or cardiopulmonary resuscitation, or for surgical procedures on the larynx or other parts of the upper tracheobronchial tree. Direct laryngoscopy Direct laryngoscopy is usually carried out with the patient lying on their back; the laryngoscope is inserted into the mouth on the right side and flipped to the left to trap and move the tongue out of the line of sight, and, depending on the type of blade used, inserted either anterior or posterior to the epiglottis and then lifted with an upwards and forward motion ("away from you and towards the roof"). This move makes a view of the glottis possible. This procedure is done in an operating theatre with full preparation for resuscitative measures to deal with respiratory distress. There are at least ten different types of laryngoscope used for this procedure, each of which has a specialized use for the otolaryngologist and medical speech pathologist. This procedure is most often employed by anaesthetists for endotracheal intubation under general anaesthesia, but also in direct diagnostic laryngoscopy with biopsy. It is extremely uncomfortable and is not typically performed on conscious patients, or on patients with an intact gag reflex. Indirect laryngoscopy Indirect laryngoscopy is performed whenever the provider visualizes the patient's vocal cords by a means other than obtaining a direct line of sight (e.g. a mirror). For the purpose of intubation, this is facilitated by fiberoptic bronchoscopes, video laryngoscopes, fiberoptic stylets, and mirror- or prism-enhanced laryngoscopes. History Some historians (for example, Morell Mackenzie) credit Benjamin Guy Babington (1794–1866), who called his device the "glottiscope", with the invention of the laryngoscope. Philipp von Bozzini (1773–1809) and Garignard de la Tour were other early physicians to use mouth mirrors to inspect the oropharynx and hypopharynx. In 1854, the vocal pedagogist Manuel García (1805–1906) became the first man to view the functioning glottis and larynx in a living human. García developed a tool that used two mirrors, for which the Sun served as an external light source. Using this device, he was able to observe the function of his own glottic apparatus and the uppermost portion of his trachea. He presented his findings at the Royal Society of London in 1855. All previous observations of the glottis and larynx had been performed under indirect vision (using mirrors) until 23 April 1895, when Alfred Kirstein (1863–1922) of Germany first described direct visualization of the vocal cords. Kirstein performed the first direct laryngoscopy in Berlin, using an esophagoscope he had modified for this purpose; he called this device an autoscope. It is believed that the death in 1888 of Emperor Frederick III motivated Kirstein to develop the autoscope. In 1913, Chevalier Jackson was the first to report a high rate of success for the use of direct laryngoscopy as a means to intubate the trachea. Jackson introduced a new laryngoscope blade that had a light source at the distal tip, rather than the proximal light source used by Kirstein. This new blade incorporated a component that the operator could slide out to allow room for passage of an endotracheal tube or bronchoscope.
That same year, Henry Harrington Janeway (1873–1921) published results he had achieved using another new laryngoscope he had recently developed. An American anesthesiologist practicing at Bellevue Hospital in New York City, Janeway believed that direct intratracheal insufflation of volatile anesthetics would provide improved conditions for surgery of the nose, mouth and throat. With this in mind, he developed a laryngoscope designed for the sole purpose of tracheal intubation. Similar to Jackson's device, Janeway's instrument incorporated a distal light source. Unique, however, was the inclusion of batteries within the handle, a central notch in the blade for maintaining the tracheal tube in the midline of the oropharynx during intubation, and a slight curve to the distal tip of the blade to help guide the tube through the glottis. The success of this design led to its subsequent use in other types of surgery. Janeway was thus instrumental in popularizing the widespread use of direct laryngoscopy and tracheal intubation in the practice of anesthesiology. Applications Helps in intubation during the administration of general anaesthesia or for mechanical ventilation. Detects causes of voice problems, such as breathy voice, hoarse voice, weak voice, or no voice. Detects causes of throat and ear pain. Evaluates difficulty in swallowing: a persistent sensation of a lump in the throat, or mucus with blood. Detects strictures or injury to the throat, or obstructive masses in the airway. Conventional laryngoscope The vast majority of tracheal intubations involve the use of a viewing instrument of one type or another. Since its introduction by Kirstein in 1895, the conventional laryngoscope has been the most popular device used for this purpose. Today, the conventional laryngoscope consists of a handle containing batteries with a light source, and a set of interchangeable blades. Laryngoscope blades Early laryngoscopes used a straight "Magill blade", and this design is still the standard pattern on which veterinary laryngoscopes are based; however, the blade is difficult to control in adult humans and can cause pressure on the vagus nerve, which can cause unexpected cardiac arrhythmias to occur spontaneously in adults. Two basic styles of laryngoscope blade are currently commercially available: the curved blade and the straight blade. The Macintosh blade is the most widely used of the curved laryngoscope blades, while the Miller blade is the most popular style of straight blade. Both Miller and Macintosh laryngoscope blades are available in sizes 0 (neonatal) through 4 (large adult). There are many other styles of curved and straight blades (e.g., Phillips, Robertshaw, Sykes, Wisconsin, Wis-Hipple, etc.) with accessories such as mirrors for enlarging the field of view and even ports for the administration of oxygen. These specialty blades are primarily designed for use by anesthetists, most commonly in the operating room. Additionally, paramedics are trained to use direct laryngoscopy to assist with intubation in the field. The Macintosh blade is positioned in the vallecula, anterior to the epiglottis, lifting it out of the visual pathway, while the Miller blade is positioned posterior to the epiglottis, trapping it while exposing the glottis and vocal folds. Incorrect usage can cause trauma to the front incisors; the correct technique is to displace the chin upwards and forward at the same time, not to use the blade as a lever with the teeth serving as the fulcrum.
The Miller, Wisconsin, Wis-Hipple, and Robertshaw blades are commonly used for infants. It is easier to visualize the glottis using these blades than the Macintosh blade in infants, due to the larger size of the epiglottis relative to that of the glottis. Fiberoptic laryngoscopes Besides the conventional laryngoscopes, many other devices have been developed as alternatives to direct laryngoscopy. These include a number of indirect fiberoptic viewing laryngoscopes such as the flexible fiberoptic bronchoscope. The flexible fiberoptic bronchoscope or rhinoscope can be used for office-based diagnostics or for tracheal intubation. The patient can remain conscious during the procedure, so that the vocal folds can be observed during phonation. Surgical instruments passed through the scope can be used for performing procedures such as biopsies of suspicious masses. These instruments have become indispensable within the otolaryngology, pulmonology and anesthesia communities. Other available fiberoptic devices include the Bullard scope, UpsherScope, and the WuScope. These devices are widely employed for tracheal intubation, especially in the setting of the difficult intubation (see below). Video laryngoscope The conventional direct laryngoscope uses a line of sight provided by a rigid viewing instrument with a light on the blade or intra-oral portion, and requires a direct view of the target larynx; an adequate view is obtained in 80–90% of attempts. The frequent failure of direct laryngoscopy to provide an adequate view for tracheal intubation led to the development of alternative devices such as the lighted stylet, and a number of indirect fiberoptic viewing laryngoscopes, such as the fiberscope, Bullard scope, Upsher scope, and the WuScope. Though these devices can be effective alternatives to direct laryngoscopy, they each have certain limitations, and none of them is effective under all circumstances. One important limitation commonly associated with these devices is fogging of the lens. In an attempt to address some of these limitations, Jon Berall, a New York City internist and emergency medicine physician, designed the camera-screen straight video laryngoscope in 1998. The first true video laryngoscope, the GlideScope, was produced in 1999, and a production version with a 60-degree angle, an onboard heater, and a custom screen was first sold in December 2000. The true video laryngoscope has a camera on the blade with no intervening fiberoptic components. The concept is important because it is simpler to produce, and the resultant images from CMOS cameras are simpler to handle. The integrated camera leads to a series of low-cost variants that are not possible with the hybrid fiberoptic units. GlideScope In 2001, the GlideScope (designed by vascular and general surgeon John Allen Pacey) became the first commercially available video laryngoscope. It incorporates a high-resolution digital camera, connected by a video cable to a high-resolution LCD monitor. It can be used for tracheal intubation to provide controlled mechanical ventilation, as well as for removal of foreign bodies from the airway. The GlideScope owes its superior results to a combination of five key factors: The steep 60-degree angulation of its blade improves the view of the glottis by reducing the requirement for anterior displacement of the tongue. The CMOS APS digital camera is located at the point of angulation of the blade (rather than at the tip). This placement allows the operator to more effectively view the field in front of the camera.
The video camera is recessed for protection from blood and secretions which might otherwise obstruct the view. The video camera has a relatively wide viewing angle of 50 degrees. The heated lens innovation helps to prevent fogging of the lens, which might otherwise obscure the view. Tracheal intubation with the GlideScope can be facilitated by the use of the Verathon Stylet, a rigid stylet that is curved to follow the 60° angulation of the blade. Achieving a 99% intubation success rate with the GlideScope requires the operator to acquire a new skill set with this stylet. In a 2003 study, the authors noted that the GlideScope provided adequate vision of the glottis (Cormack and Lehane grade I-II) even when the oral, pharyngeal and laryngeal axes could not be optimally aligned due to the presence of a cervical collar. Despite this significant limitation, the average time to intubate the trachea with the GlideScope was only 38 seconds. In 2005, the first major clinical study comparing the GlideScope to the conventional laryngoscope was published. In 133 patients in whom both GlideScope and conventional laryngoscopy were performed, excellent or good laryngeal exposure was obtained in 124/133 (93%) of GlideScope laryngoscopy patients, compared with only 98/133 (74%) of patients in whom conventional laryngoscopy was used. Intubation was successful in 128/133 (96%) of GlideScope laryngoscopy patients. These early results suggest that this device may be a useful alternative in the management of difficult tracheal intubation. The Verathon design team later produced the Ranger Video Laryngoscope for a United States Air Force requirement; it is now rolling forward into EMS and military use. The Cobalt series of the GlideScope then introduced a single-use variant that accommodates patients ranging from 1000 grams in weight to the morbidly obese, and is successful in many airway syndromes as well. The GlideScope Ranger is a variant designed for use in pre-hospital airway management, including air, land, and sea applications. This device weighs 1.5 pounds, and is waterproof as well as airworthy to 20,000 feet altitude. The GlideScope Cobalt is a variant that has a reusable video camera with a light-emitting core and a disposable (single-use) external shell for the prevention of cross-infection. In August 2009, the team at Verathon collaborated with Professor John Sakles from the University of Arizona Emergency Department in achieving the world's first tracheal intubation conducted with the assistance of telemedicine technology. During this demonstration, Sakles and the University of Arizona Telemedicine service guided physicians in a rural hospital as they performed a tracheal intubation using the GlideScope. Other video laryngoscopes Several types of video laryngoscopes are also currently available, such as the HEINE visionPRO, the Truview PCD-R (manufactured by Truphatek, Israel), the GlideScope, the McGrath laryngoscope, the Daiken Medical Coopdech C-scope VLP-100, the Storz C-MAC, the Pentax-AWS (or Airway Scope), the Video Macintosh Intubating Laryngoscope System (VMS), the Berci DCI, and the Copilot VL. These laryngoscopes employ a variety of features, such as a monitor on the handle and/or channels to assist in guiding the endotracheal tube into the trachea. The superior performance of video laryngoscopes in airway management where cervical spine injury is possible has raised the question of whether these scopes should supersede direct laryngoscopy in routine airway management.
Further evidence in support of videolaryngoscopy has accumulated over the years, indicating a favourable risk profile for video laryngoscopes over direct laryngoscopes. Other noninvasive intubation devices Other "noninvasive" devices which can be employed to assist in tracheal intubation are the laryngeal mask airway (some types of which may be used as a conduit for endotracheal tube placement), the lighted stylet, and the AirTraq. Due to the widespread availability of such devices, the technique of blind digital intubation of the trachea is rarely practiced today, though it may still be useful in emergency situations under austere conditions such as natural or man-made disasters. Complications Cases of mild or severe injury caused by rough and inexperienced use of laryngoscopes have been reported. These range from minor damage to the soft tissues of the throat, which causes a sore throat after the operation, to major injuries to the larynx and pharynx, which can cause permanent scarring, ulceration and abscesses if left untreated. Additionally, there is a risk of tooth damage. Etymology and pronunciation The word laryngoscopy uses combining forms of laryngo- and -scopy. References External links Airway management Anesthesia Emergency medical procedures Medical equipment Spanish inventions Endoscopy
Laryngoscopy
[ "Biology" ]
3,413
[ "Medical equipment", "Medical technology" ]
352,169
https://en.wikipedia.org/wiki/Sibling
A sibling is a relative who shares at least one parent with another person. A male sibling is a brother, and a female sibling is a sister. A person with no siblings is an only child. While some circumstances can cause siblings to be raised separately (such as foster care), most societies have siblings grow up together. This causes the development of strong emotional bonds, with siblinghood considered a unique type of relationship. The emotional bond between siblings is often complicated and is influenced by factors such as parental treatment, birth order, personality, and personal experiences outside the family. Medically, a full sibling is a first-degree relative and a half sibling is a second-degree relative, as they are related by 50% and 25%, respectively. Definitions The word sibling was reintroduced in 1903 in an article in Biometrika, as a translation for the German Geschwister, having not been used since Middle English, specifically 1425. Siblings or full siblings ([full] sisters or brothers) share the same biological parents. Full siblings are also the most common type of siblings. Twins are siblings who are born from the same pregnancy. Often, twins with a close relationship will develop a twin language from infanthood, a language only shared and understood between the two. Studies corroborate that identical twins appear to display more twin talk than fraternal twins. Twin talk usually ends at about age 3. Twins generally share a greater bond due to growing up together and being the same age. Half-siblings (half-sisters or half-brothers) are people who share one parent. They may share the same mother but different fathers (in which case they are known as uterine siblings or maternal half-siblings), or they may have the same father but different mothers (in which case they are known as agnate siblings or paternal half-siblings; in law, the term consanguine is used in place of agnate). In law (and especially inheritance law), half-siblings have often been accorded treatment unequal to that of full siblings. Old English common law at one time incorporated inequalities into the laws of intestate succession, with half-siblings taking only half as much of their intestate siblings' estates as siblings of full blood. Unequal treatment of this type has been wholly abolished in England, but still exists in Florida. Three-quarter siblings share one parent, while the unshared parents are first-degree relatives to each other; for example, if a man has children with two women who are sisters, or a woman has children with a man and his son. In the first case, the children are half-siblings as well as first cousins; in the second, the children are half-siblings as well as an avuncular pair. They are genetically closer than half-siblings but less genetically close than full siblings, a degree of genetic relationship that is rare in humans and little-studied. One notable example of three-quarter siblings is the family of American aviator Charles Lindbergh, who fathered children with two German sisters, Brigitte and Marietta Hesshaimer. Diblings (a portmanteau of "donor sibling"), also called donor-conceived siblings or donor-sperm siblings, are biologically connected through donated eggs or sperm. Diblings are biologically siblings, though not legally so for the purposes of family rights and inheritance. The anonymity of donation is seen as adding complication to the process of courtship.
Non-blood relations Related through affinity: Stepsiblings (stepbrothers or stepsisters) are the children of one's stepparent from a previous relationship. Adoptive siblings are raised by a person who is the adoptive parent of one and the adoptive or biological parent of the other. Siblings-in-law are the siblings of one's spouse, the spouse of one's sibling, or the spouse of one's spouse's sibling. The spouse of one's spouse's sibling may also be called a co-sibling. Not related: Foster siblings are children who are raised in the same foster home: foster children of one's parent(s), or the children or foster children of one's foster parent. God siblings are the children of one's godfather or godmother, or the godchildren of one's father or mother. Milk siblings are children who have been nursed by the same woman. This relationship exists in cultures with milk kinship and in Islamic law. Cross-siblings are individuals who share one or more half-siblings; if one person has at least one maternal half-sibling and at least one paternal half-sibling, the maternal and paternal half-siblings are cross-siblings to each other. Consanguinity and genetics Consanguinity is the measure of how closely people are related. Genetic relatedness measures how many genes two people share. As all humans share over 99% of the same genes, consanguinity only matters for the small fraction of genes which vary between different people. Inheritance of genes has a random element to it, and these two concepts are different. Consanguinity decreases by half for every generation of reproductive separation through the most recent common ancestor. Siblings are 50% related by consanguinity, as they are separated from each other by two generations (sibling to parent to sibling) and share two parents as common ancestors (2 × (1/2)² = 1/2). A fraternal twin is a sibling and, therefore, is related by 50% consanguinity. Fraternal twins are no more genetically similar than regular siblings. As identical twins come from the same zygote, their most recent common ancestor is each other. They are genetically identical and 100% consanguineous, as they are separated by zero generations ((1/2)⁰ = 1). Twin studies have been conducted by scientists to examine the roles that genetics and environment play in the development of various traits. Such studies examine how often identical twins possess the same behavioral trait and compare it to how often fraternal twins possess the same trait. In other studies, twins raised in separate families are examined, comparing the influence of the family environment on a behavioral trait with the possession of a common trait between identical twins. This kind of study has revealed that for personality traits which are known to be heritable, genetics play a substantial role throughout life and an even larger role during early years. Half-siblings are 25% related by consanguinity, as they share one parent and are separated from each other by two generations ((1/2)² = 1/4). A person may share more than the standard consanguinity with their sibling if their parents are related (the coefficient of inbreeding is greater than zero). Half-siblings can be related as "three-quarter siblings" (related by 3/8) if their unshared parents have a consanguinity of 50%. This means the unshared parents are either siblings, making the half-siblings also cousins, or parent and child, making them also a half-aunt/uncle and niece/nephew pair.
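The path-counting rule behind these figures can be written out directly. The short Python sketch below is illustrative only: it assumes the standard coefficient-of-relationship formula, in which each generational path through a common ancestor contributes (1/2) raised to the number of parent-child links on that path, and the function name is hypothetical.

```python
# Coefficient of relationship by path counting: each path through a common
# ancestor contributes (1/2)**L, where L is the number of parent-child links
# on the path connecting the two individuals through that ancestor.

def relatedness(path_lengths):
    """path_lengths: one entry per path through a common ancestor."""
    return sum(0.5 ** length for length in path_lengths)

# Full siblings: two common ancestors (mother and father), each reached by
# a 2-link path (sibling -> parent -> sibling): 2 * (1/2)**2 = 1/2.
print("full siblings:  ", relatedness([2, 2]))

# Half-siblings: one shared parent, a single 2-link path: (1/2)**2 = 1/4.
print("half-siblings:  ", relatedness([2]))

# Identical twins: zero generational separation: (1/2)**0 = 1.
print("identical twins:", relatedness([0]))

# Three-quarter siblings (e.g. a man has children with two sisters): the
# shared father contributes a 2-link path, and each maternal grandparent
# contributes a 4-link path (child -> mother -> grandparent -> aunt -> child):
# 1/4 + 2 * (1/2)**4 = 3/8.
print("3/4 siblings:   ", relatedness([2, 4, 4]))
```

Running the sketch prints 0.5, 0.25, 1.0, and 0.375, matching the 50%, 25%, 100%, and 3/8 figures quoted above.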
Percentage distribution In practice, full siblings do not share exactly 50% of their DNA, as chromosomal crossover only occurs a limited number of times and, therefore, large chunks of a chromosome are shared or not shared at one time. In fact, the mean DNA fraction shared is 50.28% with a standard deviation of 3.68%, meaning approximately 1/4 of sibling pairs share more than 52.76% of their DNA, while 1/4 share less than 47.80% (the quartiles sit roughly 0.67 standard deviations either side of the mean). In principle, two half-siblings might share no genes at all if they inherited none of the same chromosomes from their shared parent; this is possible for full siblings as well, though even more unlikely. Because homologous chromosomes swap segments during chromosomal crossover in meiosis, however, the odds of this ever actually occurring are practically non-existent. Birth order Birth order is a person's rank by age among his or her siblings. Typically, researchers classify siblings as "eldest", "middle child", and "youngest", or simply distinguish between "first-born" and "later-born" children. Birth order is commonly believed in pop psychology and popular culture to have a profound and lasting effect on psychological development and personality. For example, firstborns are seen as conservative and high-achieving, middle children as natural mediators, and youngest children as charming and outgoing. Despite its lasting presence in the public domain, studies have failed to consistently produce clear, valid, compelling findings; therefore, it has earned the title of a pseudo-psychology amongst the scientific psychological community. History The theorizing and study of birth order can be traced back to Francis Galton's (1822–1911) theory of birth order and eminence and Alfred Adler's (1870–1937) theory of birth order and personality characteristics. Galton In his book English Men of Science: Their Nature and Nurture (1874), Galton noted that prominent composers and scientists are over-represented as first-borns. He theorized three main reasons as to why first-borns are generally more eminent: Primogeniture laws: first-borns have access to their parents' financial resources to continue their education. First-borns are given more responsibility than their younger siblings and are treated more as companions by their parents. First-borns are given more attention and nourishment in families with limited financial resources. Adler First-borns: Fulfil family roles of leadership and authority, and are obedient to protocol and hierarchy. They seek out and prefer order, structure and adherence to norms and rules. They partake in goal-striving behaviour as their lives are centred around achievement and accomplishment themes. They fear the loss of their position at the top of the hierarchy. Middle children: Feel like outcasts of their families, as they lack the primacy of the first child and the "attention-garnering recency" of the youngest. These children often go to great lengths to de-identify themselves from their siblings, in an attempt to make a different and individualized identity for themselves, as they feel they were "squeezed out" of their families. Youngest children: Feel disadvantaged compared to older siblings, are often perceived as less capable or experienced, and are therefore indulged and spoiled. Because of this, they are skilled in coaxing or charming others to do things for them or to provide for them.
This contributes to the image of them being popular and outgoing, as they engage in attention-seeking behaviour to meet their needs. Contemporary findings Today, the flaws and inconsistencies in birth order research undermine its validity. It is very difficult to control solely for factors related to birth order, and therefore most studies produce ambiguous results. Embedded in theories of birth order is a debate of nature versus nurture. It has been disproved that there is something innate in the position one is born into that creates a preset role. Birth order has no genetic basis. The social interaction that occurs as a result of birth order, however, is the most notable factor. Older siblings often become role models of behaviour, and younger siblings become learners and supervisees. Older siblings are at a developmental advantage both cognitively and socially. The role of birth order also depends greatly on, and varies greatly with, family context. Family size, sibling identification, age gap, modeling, parenting techniques, gender, class, race, and temperament are all confounding variables that can influence behaviour, and therefore the perceived behaviour, of specific birth categories. The research on birth order does show stronger correlations in areas such as intelligence and physical features, but these are likely caused by factors other than the actual position of birth. Some research has found that firstborn children have slightly higher IQs on average than later-born children. However, other research finds no such effect. It has been found that first-borns score three points higher compared to second-borns, and that children born earlier in a family are, on average, taller and weigh more than those born later. However, it is impossible to generalize birth order characteristics and apply them universally to all individuals in a subgroup. Contemporary explanations for IQ findings The resource dilution model (Blake, 1981) provides three potential reasons for the higher scoring of older siblings on IQ tests: Parental resources are finite, and first-born children get full and primary access to these resources. As the number of children in a family goes up, more resources must be shared. These parental resources have an important impact on a child's educational success. Confluence model Robert Zajonc proposed that the intellectual environment within a family is ever-changing due to three factors, and is therefore more permissive of first-born children's intellectual advancement: Firstborns do not need to share parental attention and have their parents' complete absorption. More siblings in the family limit the attention devoted to each of them. Firstborns are exposed to more adult language. Later-borns are exposed to the less-mature speech of their older siblings. Firstborns and older siblings must answer questions and explain things to younger siblings, acting as tutors. This advances their cognitive processing of information and their language skills. In 1996, interest in the science behind birth order was re-sparked when Frank Sulloway's book Born To Rebel was published. In this book, Sulloway argues that firstborns are more conscientious, more socially dominant, less agreeable, and less open to new ideas compared to later-borns. While seemingly empirical and academic, with many studies cited throughout, the book is still often criticized as a biased and incomplete account of the whole picture of siblings and birth order.
Because it was published as a popular book rather than through a peer-reviewed venue, the research and theories proposed throughout were not criticized and peer-reviewed by other academics before its release. Literature reviews that have examined many studies and attempted to control for confounding variables tend to find minimal effects for birth order on personality. In her review of the scientific literature, Judith Rich Harris suggests that birth order effects may exist within the context of the family of origin, but that they are not enduring aspects of personality. In practice, systematic birth order research is a challenge because it is difficult to control for all of the variables that are statistically related to birth order. For example, large families are generally lower in socioeconomic status than small families, so third-born children are more likely than first-born children to come from poorer families. Spacing of children, parenting style, and gender are additional variables to consider. Regressive behavior at birth Regressive behaviors are the child's way of demanding the parents' love and attention. The arrival of a new baby is especially stressful for firstborns and for siblings between 3 and 5 years old. Regressive behavior may include demands for a bottle, thumb sucking, requests to wear diapers (even if toilet-trained), or requests to carry a security blanket, and may be accompanied by aggressive behavior, such as handling the baby roughly. All of these symptoms are considered typical and developmentally appropriate for children between the ages of 3 and 5; while some can be prevented, most improve within a few months. The American Academy of Pediatrics suggests that instead of protesting or telling children to act their age, parents should simply grant their requests without becoming upset. The affected children will soon return to their normal routine when they realize that they now have just as important a place in the family as the new sibling. The University of Michigan Health System advises that most occurrences of regressive behavior are mild and to be expected; however, it recommends that parents contact a pediatrician or child psychologist if the older child tries to hurt the baby, if regressive behavior does not improve within 2 or 3 months, or if the parents have other questions or concerns. Rivalry "Sibling rivalry" is a type of competition or animosity among brothers and sisters. It appears to be particularly intense when children are very close in age or of the same gender. Sibling rivalry can involve aggression; however, it is not the same as sibling abuse, where one child victimizes another. Sibling rivalry usually starts right before or after the arrival of the second child. While siblings will still love each other, it is not uncommon for them to bicker and be malicious to each other. Children are sensitive to differences in parental treatment from the age of 1 year, and by 3 years they have a sophisticated grasp of family rules and can evaluate themselves in relation to their siblings. Sibling rivalry often continues throughout childhood and can be very frustrating and stressful for parents. One study found that the 10–15 age group reported the highest level of competition between siblings. Sibling rivalry can continue into adulthood, and sibling relationships can change dramatically over the years. Approximately one-third of adults describe their relationship with siblings as rivalrous or distant.
However, rivalry often lessens over time, and at least 80% of siblings over age 60 enjoy close ties. Each child in a family competes to define who they are as individuals and wants to show that they are separate from their siblings. Sibling rivalry increases when children feel they are getting unequal amounts of their parents' attention, when there is stress in the parents' and children's lives, and when fighting is accepted by the family as a way to resolve conflicts. Sigmund Freud saw the sibling relationship as an extension of the Oedipus complex, where brothers were in competition for their mother's attention and sisters for their father's. Evolutionary psychologists explain sibling rivalry in terms of parental investment and kin selection: a parent is inclined to spread resources equally among all children in the family, but a child wants most of the resources for him or herself. Relationships Jealousy Jealousy is not a single emotion. The basic emotions expressed in jealous interactions are fear, anger, relief, sadness, and anxiety. Jealousy occurs within a social triangle of relationships involving the jealous individual, the parent, and a rival: the relationship between the jealous individual and the parent, the relationship between the parent and the rival, and the relationship between the jealous individual and the rival. Newborn First-borns' attachment to their parents is directly related to their jealous behaviour. In a study by Volling, four classes of children were identified based on their different responses of jealousy to new infant siblings and parent interactions. Regulated Exploration Children: 60% of children fall into this category. These children closely watch their parents interact with their newborn sibling, approach them positively and sometimes join the interaction. They show fewer behaviour problems in the months following the new birth and do not display problematic behaviours during the parent-infant interaction. These children are considered secure, as they act how a child would be expected to act in a familiar home setting with their parents present as secure bases to explore the environment. Approach-Avoidant Children: 30% of children fall into this category. These children observe parent-infant interaction closely but are less likely to approach the infant and the parent. They are anxious about exploring the new environment and tend to seek little comfort from their parents. Anxious-Clingy Children: 6% of children fall into this category. These children have an intense interest in parent-infant interaction and a strong desire to seek proximity and contact with the parent, and sometimes intrude on parent-child interaction. Disruptive Children: 2.7% of children fall into this category. These children are emotionally reactive and aggressive. They have difficulty regulating their negative emotions and may be likely to externalize them as negative behaviour around the newborn. Parental effect Children are more jealous of the interactions between newborns and their mothers than of those between newborns and their fathers. This is logical, as up until the birth of the infant the first-born child had the mother as their primary care-giver all to themselves. Some research has suggested that children display less jealous reactions over father-newborn interactions because fathers tend to punish negative emotion and are less tolerant than mothers of clinginess and visible distress, although this is hard to generalize.
Children whose parents have a better marital relationship are better at regulating their jealous emotions. Children are more likely to express jealousy when their parents are directing their attention to the sibling, as opposed to when the parents are solely interacting with them. Parents who are involved in good marital communication help their children cope adaptively with jealousy. They do this by modelling problem-solving and conflict resolution for their children. Children are also less likely to have jealous feelings when they live in a home in which everyone in the family shares and expresses love and happiness. Implicit theories Implicit theories about relationships are associated with the ways children think of strategies to deal with a new situation. Children can fall into two categories of implicit theorizing. They may be malleable theorists, believing that they can effect change in situations and people. Alternatively, they may be fixed theorists, believing situations and people are not changeable. These implicit beliefs determine both the intensity of their jealous feelings and how long those jealous feelings last. Malleable Theorists display engaging behaviours, like interacting with the parent or sibling in an attempt to improve the situation. They tend to have more intense and longer-lasting feelings of jealousy because they spend more time ruminating on the situation and constructing ways to make it better. Fixed Theorists display non-engaging behaviours, for example retreating to their room, because they believe none of their actions will affect or improve the situation. They tend to have less intense and shorter-lasting feelings of jealousy than malleable theorists. Different ages Older children tend to be less jealous than their younger siblings. This is due to their ability to mentally process the social situation in a way that gives them more positive, empathetic feelings toward their younger sibling. Older children are better able to cope with their jealous feelings toward their younger sibling because they understand the necessary relationship between the parent and the younger sibling. Older children are also better at self-regulating their emotions and are less dependent on their caregivers for external regulation than their younger siblings. Younger siblings' feelings of jealousy are overpowered by feelings of anger. The quality of the relationship between the younger child and the older child is also a factor in jealousy: the better the relationship, the fewer jealous feelings occurred, and vice versa. Conflict Sibling conflict is pervasive and often shrugged off as an accepted part of sibling dynamics. In spite of the broad variety of conflict that siblings are often involved in, sibling conflicts can be grouped into two broader categories. The first category is conflict about equality or fairness. It is not uncommon to see siblings who think that their sibling is favored by their teachers, peers, or especially their parents. In fact, it is not uncommon to see siblings who both think that their parents favor the other sibling. Perceived inequalities in the division of resources, such as who got a larger dessert, also fall into this category of conflict. This form of conflict seems to be more prevalent in the younger sibling. The second category of conflict involves an invasion of a child's perceived personal domain by their sibling.
An example of this type of conflict is when a child enters their sibling's room when they are not welcome, or when a child crosses over onto their sibling's side of the car on a long road trip. These types of fights seem to be more important to older siblings due to their greater desire for independence. Warmth Sibling warmth is a term for the degree of affection and companionship shared by siblings. Sibling warmth appears to shape sibling outcomes: higher sibling warmth is related to better social skills and higher perceived social competence. Even in cases where there is a high level of sibling conflict, if there is also a high level of sibling warmth, then social skills and competence remain unaffected. Negative effects of conflict The saying that people "fight like siblings" shows just how charged sibling conflict can be and how well recognized sibling squabbles are. In spite of how widely acknowledged these squabbles can be, sibling conflict can have several impacts on the sibling pair. It has been shown that increased levels of sibling conflict are related to higher levels of anxiety and depression in siblings, along with lower levels of self-worth and lower levels of academic competence. In addition, sibling warmth is not a protective factor for these negative effects; that is, warmth does not counteract the anxiety, depression, lack of self-worth, or lower academic competence. Sibling conflict is also linked to an increase in riskier behavior, including smoking cigarettes, skipping days of school, contact with the police, and other behaviors, in Caucasian sibling pairs, with the exception of firstborns with younger brothers. For all other pairings, sibling conflict is positively correlated with risky behavior; thus sibling conflict may be a risk factor for behavioral problems. A study on the topic of the fight (invasion of the personal domain or inequality) also shows that the topic may shape the effects of the conflict: sibling conflict over the personal domain was related to lower levels of self-esteem, while sibling conflict over perceived inequalities seemed more related to depressive symptoms. However, the study also showed that greater depressive and anxious symptoms were related to more frequent and more intense sibling conflict. Parental management techniques of conflict Techniques used by parents to manage their children's conflicts include parental non-intervention, child-centered parental intervention strategies, and, more rarely, the encouragement of physical conflict between siblings. Parental non-intervention includes techniques in which the parent ignores the siblings' conflict and lets them work it out between themselves without outside guidance. In some cases, this technique is chosen to avoid situations in which the parent decides which sibling is in the right and may favor one sibling over the other; however, by following this technique, the parent may sacrifice the opportunity to instruct their children on how to deal with conflict. Child-centered parental interventions include techniques in which the parent mediates the argument between the two children and helps them come to an agreement.
Using this technique, parents may help model how the children can deal with conflicts in the future; however, parents should avoid dictating the outcome and should make sure that they mediate the argument by making suggestions, allowing the children to decide the outcome. This may be especially important when some of the children have autism. Techniques in which parents encourage physical aggression between siblings may be chosen to help children deal with aggression in the future; however, this technique does not appear to be effective, as it is linked to greater conflict levels between children. Parental non-intervention is also linked to higher levels of sibling conflict and lower levels of sibling warmth. It appears that child-centered parental interventions have the best effect on siblings' relationships, with a link to greater levels of sibling warmth and lower levels of sibling conflict. Long-term effects of presence Studies on social skill and personality differences between only children and children with siblings suggest that, overall, the presence of a sibling does not have any effect on the child as an adult. Gender roles among children and parents There have always been some differences between siblings, especially different-sex siblings. Often, different-sex siblings may consider things to be unfair because their brother or sister is allowed to do certain things on account of their gender, while they must do something less fun or simply different. McHale and her colleague conducted a longitudinal study using middle-childhood-aged children and observed the way in which the parents contributed to stereotypical attitudes in their kids. In their study, the experimenters analysed two different types of families, one with same-sex siblings and the other with different-sex siblings, as well as the children's birth order. The experiment was conducted using phone interviews, in which the experimenters would ask the children about the activities they had performed throughout their day outside of school. The experimenters found that in homes with mixed-gender children where the father held traditional values, the children also held traditional values and therefore played gender-based roles in the home. In contrast, in homes where the father did not hold traditional values, household chores were divided more equally among his children. If fathers had two male children, the younger male tended to help more with household chores, but as he reached his teenage years he stopped being as helpful around the house. Education, however, may be a confounder affecting both the father's attitude and the siblings' behavior, and the mother's attitudes did not have a noticeable impact. Westermarck effect Anthropologist Edvard Westermarck found that children who are brought up together as siblings are desensitized to sexual attraction to one another later in life. This is known as the Westermarck effect. It can be seen in biological and adoptive families, but also in other situations where children are brought up in close contact, such as the Israeli kibbutz system and the Chinese shim-pua marriage. See also Immediate family List of sibling groups Sibling relationship Sibling estrangement Siblings Day Sladdbarn Step-sibling Multiple birth List of twins Triplet Twin Other symmetric relations Cousin Friend Sibling-in-law Significant other (SO; boyfriend or girlfriend) Spouse References Further reading External links Family Kinship and descent
Sibling
[ "Biology" ]
5,978
[ "Behavior", "Human behavior", "Kinship and descent" ]
352,181
https://en.wikipedia.org/wiki/List%20of%20abstract%20algebra%20topics
Abstract algebra is the subject area of mathematics that studies algebraic structures, such as groups, rings, fields, modules, vector spaces, and algebras. The phrase abstract algebra was coined at the turn of the 20th century to distinguish this area from what was normally referred to as algebra, the study of the rules for manipulating formulae and algebraic expressions involving unknowns and real or complex numbers, often now called elementary algebra. The distinction is rarely made in more recent writings. Basic language Algebraic structures are defined primarily as sets with operations. Algebraic structure Subobjects: subgroup, subring, subalgebra, submodule etc. Binary operation Closure of an operation Associative property Distributive property Commutative property Unary operator Additive inverse, multiplicative inverse, inverse element Identity element Cancellation property Finitary operation Arity Structure preserving maps called homomorphisms are vital in the study of algebraic objects. Homomorphisms Kernels and cokernels Image and coimage Epimorphisms and monomorphisms Isomorphisms Isomorphism theorems There are several basic ways to combine algebraic objects of the same type to produce a third object of the same type. These constructions are used throughout algebra. Direct sum Direct limit Direct product Inverse limit Quotient objects: quotient group, quotient ring, quotient module etc. Tensor product Advanced concepts: Category theory Category of groups Category of abelian groups Category of rings Category of modules (over a fixed ring) Morita equivalence, Morita duality Category of vector spaces Homological algebra Filtration (algebra) Exact sequence Functor Zorn's lemma Semigroups and monoids Semigroup Subsemigroup Free semigroup Green's relations Inverse semigroup (or inversion semigroup
) Krohn–Rhodes theory Semigroup algebra Transformation semigroup Monoid Aperiodic monoid Free monoid Monoid (category theory) Monoid factorisation Syntactic monoid Group theory Structure Group (mathematics) Lagrange's theorem (group theory) Subgroup Coset Normal subgroup Characteristic subgroup Centralizer and normalizer subgroups Derived group Frattini subgroup Fitting subgroup Classification of finite simple groups Sylow theorems Local analysis Constructions Free group Presentation of a group Word problem for groups Quotient group Extension problem Direct sum, direct product Semidirect product Wreath product Types Simple group Finite group Abelian group Torsion subgroup Free abelian group Finitely generated abelian group Rank of an abelian group Cyclic group Locally cyclic group Solvable group Composition series Nilpotent group Divisible group Dedekind group, Hamiltonian group Examples Examples of groups Trivial group Additive group Permutation group Symmetric group Alternating group p-group List of small groups Klein four-group Quaternion group Dihedral group Dicyclic group Automorphism group Point group Circle group Linear group Orthogonal group Applications Group action Conjugacy class Inner automorphism Conjugate closure Stabilizer subgroup Orbit (group theory) Orbit-stabilizer theorem Cayley's theorem Burnside's lemma Burnside's problem Loop group Fundamental group Ring theory General Ring (mathematics) Commutative algebra, Commutative ring Ring theory, Noncommutative ring Algebra over a field Non-associative algebra Relatives to rings: Semiring, Nearring, Rig (algebra) Structure Subring, Subalgebra Center (algebra) Ring ideal Principal ideal Ideal quotient Maximal ideal, minimal ideal Primitive ideal, prime ideal, semiprime ideal Radical of an ideal Jacobson radical Socle of a ring unit (ring theory), Idempotent, Nilpotent, Zero divisor Characteristic (algebra) Ring homomorphism, Algebra homomorphism Ring epimorphism Ring monomorphism Ring isomorphism Skolem–Noether theorem Graded algebra Morita equivalence Brauer group Constructions Direct sum of rings, Product of rings Quotient ring Matrix ring Endomorphism ring Polynomial ring Formal power series Monoid ring, Group ring Localization of a ring Tensor algebra Symmetric algebra, Exterior algebra, Clifford algebra Free algebra Completion (ring theory) Types Field (mathematics), Division ring, division algebra Simple ring, Central simple algebra, Semisimple ring, Semisimple algebra Primitive ring, Semiprimitive ring Prime ring, Semiprime ring, Reduced ring Integral domain, Domain (ring theory) Field of fractions, Integral closure Euclidean domain, Principal ideal domain, Unique factorization domain, Dedekind domain, Prüfer domain Von Neumann regular ring Quasi-Frobenius ring Hereditary ring, Semihereditary ring Local ring, Semi-local ring Discrete valuation ring Regular local ring Cohen–Macaulay ring Gorenstein ring Artinian ring, Noetherian ring Perfect ring, semiperfect ring Baer ring, Rickart ring Lie ring, Lie algebra Ideal (Lie algebra) Jordan algebra Differential algebra Banach algebra Examples Rational number, Real number, Complex number, Quaternions, Octonions Hurwitz quaternion Gaussian integer Theorems and applications Algebraic geometry Hilbert's Nullstellensatz Hilbert's basis theorem Hopkins–Levitzki theorem Krull's principal ideal theorem Levitzky's theorem Galois theory Abel–Ruffini theorem Artin-Wedderburn theorem Jacobson density theorem Wedderburn's little theorem Lasker–Noether theorem Field theory Basic concepts 
Field (mathematics) Subfield (mathematics) Multiplicative group Primitive element (field theory) Field extension Algebraic extension Splitting field Algebraically closed field Algebraic element Algebraic closure Separable extension Separable polynomial Normal extension Galois extension Abelian extension Transcendence degree Field norm Field trace Conjugate element (field theory) Tensor product of fields Types Algebraic number field Global field Local field Finite field Symmetric function Formally real field Real closed field Applications Galois theory Galois group Inverse Galois problem Kummer theory Module theory General Module (mathematics) Bimodule Annihilator (ring theory) Structure Submodule Pure submodule Module homomorphism Essential submodule Superfluous submodule Singular submodule Socle of a module Radical of a module Constructions Free module Quotient module Direct sum, Direct product of modules Direct limit, Inverse limit Localization of a module Completion (ring theory) Types Simple module, Semisimple module Indecomposable module Artinian module, Noetherian module Homological types: Projective module Projective cover Swan's theorem Quillen–Suslin theorem Injective module Injective hull Flat module Flat cover Coherent module Finitely-generated module Finitely-presented module Finitely related module Algebraically compact module Reflexive module Concepts and theorems Composition series Length of a module Structure theorem for finitely generated modules over a principal ideal domain Homological dimension Projective dimension Injective dimension Flat dimension Global dimension Weak global dimension Cohomological dimension Krull dimension Regular sequence (algebra), depth (algebra) Fitting lemma Schur's lemma Nakayama's lemma Krull–Schmidt theorem Steinitz exchange lemma Jordan–Hölder theorem Artin–Rees lemma Schanuel's lemma Morita equivalence Progenerator Representation theory Representation theory Algebra representation Group representation Lie algebra representation Maschke's theorem Schur's lemma Equivariant map Frobenius reciprocity Induced representation Restricted representation Affine representation Projective representation Modular representation theory Quiver (mathematics) Representation theory of Hopf algebras Non-associative systems General Associative property, Associator Heap (mathematics) Magma (algebra) Loop (algebra), Quasigroup Nonassociative ring, Non-associative algebra Universal enveloping algebra Lie algebra (see also list of Lie group topics and list of representation theory topics) Jordan algebra Alternative algebra Power associativity Flexible algebra Examples Cayley–Dickson construction Octonions Sedenions Trigintaduonions Hyperbolic quaternions Virasoro algebra Generalities Algebraic structure Universal algebra Variety (universal algebra) Congruence relation Free object Generating set (universal algebra) Clone (algebra) Kernel of a function Kernel (algebra) Isomorphism class Isomorphism theorem Fundamental theorem on homomorphisms Universal property Filtration (mathematics) Category theory Monoidal category Groupoid Group object Coalgebra Bialgebra Hopf algebra Magma object Torsion (algebra) Computer algebra Symbolic mathematics Finite field arithmetic Gröbner basis Buchberger's algorithm See also List of commutative algebra topics List of homological algebra topics List of linear algebra topics List of algebraic structures Glossary of field theory Glossary of group theory Glossary of ring theory Glossary of tensor theory Mathematics-related lists 
Outlines of mathematics and logic Outlines
List of abstract algebra topics
[ "Mathematics" ]
1,798
[ "Abstract algebra", "nan", "Algebra" ]
352,184
https://en.wikipedia.org/wiki/Tincture
A tincture is typically an extract of plant or animal material dissolved in ethanol (ethyl alcohol). Solvent concentrations of 25–60% are common, but may run as high as 90%. In chemistry, a tincture is a solution that has ethanol as its solvent. In herbal medicine, alcoholic tinctures are made with various ethanol concentrations, which should be at least 20% alcohol for preservation purposes. Other solvents for producing tinctures include vinegar, glycerol (also called glycerine), diethyl ether and propylene glycol, not all of which can be used for internal consumption. Ethanol has the advantage of being an excellent solvent for both acidic and basic (alkaline) constituents. A tincture using glycerine is called a glycerite. Glycerine is generally a poorer solvent than ethanol. Vinegar, being acidic, is a better solvent for obtaining alkaloids but a poorer solvent for acidic components. For individuals who choose not to ingest alcohol, non-alcoholic extracts offer an alternative for preparations meant to be taken internally. Low-volatility substances such as iodine and mercurochrome can also be turned into tinctures. Characteristics Tinctures are often made of a combination of ethyl alcohol and water as solvents, each dissolving constituents that the other cannot, or extracts only weakly. Varying their proportions can also produce different levels of constituents in the final extraction (simple dilution arithmetic is sketched at the end of this article). As an antimicrobial, alcohol also acts as a preservative. A downside of using alcohol as a solvent is that ethanol has a tendency to denature some organic compounds, reducing or destroying their effectiveness. This tendency can also have undesirable effects when extracting botanical constituents, such as polysaccharides. Certain other constituents, common among them proteins, can become irreversibly denatured, or "pickled", by the alcohol. Alcohol can also have damaging effects on some aromatic compounds. Ether and propylene glycol based tinctures are not suitable for internal consumption, although they are used in preparations for external use, such as personal care creams and ointments. Examples Some examples that were formerly common in medicine include: Tincture of benzoin Tincture of cannabis Tincture of cantharides Tincture of castoreum Tincture of ferric citrochloride, a chelate of citric acid and iron(III) chloride Tincture of green soap, which classically contains lavender oil Tincture of guaiac gum Tincture of iodine Tincture of opium (laudanum) Camphorated tincture of opium (paregoric) Tincture of pennyroyal Warburg's tincture ("Tinctura Antiperiodica" or "Antiperiodic Tincture", a 19th-century antipyretic) Examples of spirits include: Spirit of ammonia (spirits of hartshorn) Spirit of camphor Spirit of ether, a solution of diethyl ether in alcohol "Spirit of Mindererus", ammonium acetate in alcohol "Spirit of nitre" is not a spirit in this sense, but an old name for nitric acid (but "sweet spirit of nitre" was ethyl nitrite). Similarly, "spirit(s) of salt" actually meant hydrochloric acid. The concentrated, fuming, 35% acid is still sold under this name in the UK, for use as a drain-cleaning fluid.
"Spirit of vinegar" is an antiquated term for glacial acetic acid "Spirit of vitriol" is an antiquated term for sulfuric acid "Spirit of wine" or "spirits of wine" is an old term for alcohol (especially food grade alcohol derived from the distillation of wine) "Spirit of wood" referred to methanol, often derived from the destructive distillation of wood See also Nalewka, traditional Polish category of alcoholic tincture. Infusion, water or oil based extract with similar historical uses to a tincture. Elixir, pharmaceutical preparation containing an active ingredient that is dissolved in a solution containing some percentage of ethyl alcohol. Extract Klosterfrau Melissengeist Spagyric, fermentation, distillation, and extraction of mineral components from the ash residue of calcinated plants. Topical, categorization of topical skin preparation options Theriac References Dosage forms Drug delivery devices Polysubstance alcoholic drinks
Tincture
[ "Chemistry" ]
926
[ "Pharmacology", "Drug delivery devices" ]
352,267
https://en.wikipedia.org/wiki/Lyapunov%20function
In the theory of ordinary differential equations (ODEs), Lyapunov functions, named after Aleksandr Lyapunov, are scalar functions that may be used to prove the stability of an equilibrium of an ODE. Lyapunov functions (also called Lyapunov's second method for stability) are important to stability theory of dynamical systems and control theory. A similar concept appears in the theory of general state-space Markov chains usually under the name Foster–Lyapunov functions. For certain classes of ODEs, the existence of Lyapunov functions is a necessary and sufficient condition for stability. Whereas there is no general technique for constructing Lyapunov functions for ODEs, in many specific cases the construction of Lyapunov functions is known. For instance, quadratic functions suffice for systems with one state, the solution of a particular linear matrix inequality provides Lyapunov functions for linear systems, and conservation laws can often be used to construct Lyapunov functions for physical systems. Definition A Lyapunov function for an autonomous dynamical system $\dot{y} = g(y)$, $g: \mathbb{R}^n \to \mathbb{R}^n$, with an equilibrium point at $y = 0$ is a scalar function $V: \mathbb{R}^n \to \mathbb{R}$ that is continuous, has continuous first derivatives, is strictly positive for $y \neq 0$, and for which the time derivative $\dot{V} = \nabla V \cdot g$ is non-positive (these conditions are required on some region containing the origin). The (stronger) condition that $-\nabla V \cdot g$ is strictly positive for $y \neq 0$ is sometimes stated as $-\nabla V \cdot g$ is locally positive definite, or $\nabla V \cdot g$ is locally negative definite. Further discussion of the terms arising in the definition Lyapunov functions arise in the study of equilibrium points of dynamical systems. In $\mathbb{R}^n$, an arbitrary autonomous dynamical system can be written as $\dot{y} = g(y)$ for some smooth $g: \mathbb{R}^n \to \mathbb{R}^n$. An equilibrium point is a point $y^*$ such that $g(y^*) = 0$. Given an equilibrium point $y^*$, there always exists a coordinate transformation $x = y - y^*$ such that $\dot{x} = \dot{y} = g(y) = g(x + y^*) = f(x)$, with $f(0) = 0$. Thus, in studying equilibrium points, it is sufficient to assume the equilibrium point occurs at $0$. By the chain rule, for any function $H: \mathbb{R}^n \to \mathbb{R}$, the time derivative of the function evaluated along a solution of the dynamical system is $\dot{H} = \frac{d}{dt} H(x(t)) = \nabla H \cdot \dot{x} = \nabla H \cdot f(x)$. A function $H$ is defined to be locally positive-definite (in the sense of dynamical systems) if both $H(0) = 0$ and there is a neighborhood of the origin, $B$, such that $H(x) > 0$ for all $x \in B \setminus \{0\}$. Basic Lyapunov theorems for autonomous systems Let $x^* = 0$ be an equilibrium point of the autonomous system $\dot{x} = f(x)$, and use the notation $\dot{V}(x)$ to denote the time derivative of the Lyapunov-candidate-function $V$: $\dot{V}(x) = \frac{d}{dt} V(x(t)) = \nabla V \cdot f(x)$. Locally asymptotically stable equilibrium If the equilibrium point is isolated, the Lyapunov-candidate-function $V$ is locally positive definite, and the time derivative of the Lyapunov-candidate-function is locally negative definite, $\dot{V}(x) < 0$ for all $x \in B \setminus \{0\}$ for some neighborhood $B$ of the origin, then the equilibrium is proven to be locally asymptotically stable. Stable equilibrium If $V$ is a Lyapunov function, then the equilibrium is Lyapunov stable. The converse is also true, and was proved by José Luis Massera. Globally asymptotically stable equilibrium If the Lyapunov-candidate-function $V$ is globally positive definite, radially unbounded, the equilibrium isolated and the time derivative of the Lyapunov-candidate-function is globally negative definite, $\dot{V}(x) < 0$ for all $x \neq 0$, then the equilibrium is proven to be globally asymptotically stable. The Lyapunov-candidate function $V(x)$ is radially unbounded if $\|x\| \to \infty \implies V(x) \to \infty$. (This is also referred to as norm-coercivity.) Example Consider the following differential equation on $\mathbb{R}$: $\dot{x} = -x$. Considering that $x^2$ is always positive around the origin, it is a natural candidate to be a Lyapunov function to help us study $\dot{x} = -x$. So let $V(x) = x^2$ on $\mathbb{R}$.
Then, $\dot{V}(x) = V'(x)\,\dot{x} = 2x \cdot (-x) = -2x^2 < 0$ for all $x \neq 0$. This correctly shows that the above differential equation, $\dot{x} = -x$, is asymptotically stable about the origin. Note that using the same Lyapunov candidate one can show that the equilibrium is also globally asymptotically stable (a symbolic check of this example appears at the end of this article). See also Lyapunov stability Ordinary differential equations Control-Lyapunov function Chetaev function Foster's theorem Lyapunov optimization References External links Example of determining the stability of the equilibrium solution of a system of ODEs with a Lyapunov function Stability theory
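The example above can also be checked symbolically. The following sketch uses the sympy library (a tooling choice of this illustration, not something prescribed by the article) to verify both Lyapunov conditions for $V(x) = x^2$:

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = -x          # right-hand side of the ODE x' = -x
V = x**2        # Lyapunov candidate V(x)

# Derivative of V along solutions (chain rule): Vdot = dV/dx * f(x)
Vdot = sp.simplify(sp.diff(V, x) * f)
print(Vdot)  # -2*x**2

# Both conditions hold everywhere except the origin; since V is also
# radially unbounded, the origin is globally asymptotically stable.
print(sp.solve_univariate_inequality(V > 0, x))     # x < 0 or x > 0
print(sp.solve_univariate_inequality(Vdot < 0, x))  # x < 0 or x > 0
```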
Lyapunov function
[ "Mathematics" ]
819
[ "Stability theory", "Dynamical systems" ]
352,296
https://en.wikipedia.org/wiki/Gamma-glutamyltransferase
Gamma-glutamyltransferase (also γ-glutamyltransferase, GGT, gamma-GT, gamma-glutamyl transpeptidase; EC 2.3.2.2) is a transferase (a type of enzyme) that catalyzes the transfer of gamma-glutamyl functional groups from molecules such as glutathione to an acceptor that may be an amino acid, a peptide or water (forming glutamate). GGT plays a key role in the gamma-glutamyl cycle, a pathway for the synthesis and degradation of glutathione as well as drug and xenobiotic detoxification. Other lines of evidence indicate that GGT can also exert a pro-oxidant role, with regulatory effects at various levels in cellular signal transduction and cellular pathophysiology. This transferase is found in many tissues, the most notable one being the liver, and has significance in medicine as a diagnostic marker. Nomenclature The name γ-glutamyltransferase is preferred by the Nomenclature Committee of the International Union of Biochemistry and Molecular Biology. The Expert Panel on Enzymes of the International Federation of Clinical Chemistry also used this name. The older name is gamma-glutamyl transpeptidase (GGTP). Function GGT is present in the cell membranes of many tissues, including the kidneys, bile duct, pancreas, gallbladder, spleen, heart, brain, and seminal vesicles. It is involved in the transfer of amino acids across the cellular membrane and in leukotriene metabolism. It is also involved in glutathione metabolism by transferring the glutamyl moiety to a variety of acceptor molecules including water, certain L-amino acids, and peptides, leaving the cysteine product to preserve intracellular homeostasis of oxidative stress. This general reaction is: (5-L-glutamyl)-peptide + an amino acid ⇌ peptide + 5-L-glutamyl amino acid Biochemistry In prokaryotes and eukaryotes, GGT consists of two polypeptide chains, a heavy and a light subunit, processed from a single-chain precursor by an autocatalytic cleavage. The active site of GGT is known to be located in the light subunit. Co-translational N-glycosylation serves a significant role in the proper autocatalytic cleavage and proper folding of GGT. Single-site mutations at asparagine residues were shown to result in a functionally active yet slightly less thermally stable version of the enzyme in vitro, while knockout of all asparagine residues resulted in an accumulation of the uncleaved, propeptide form of the enzyme. Clinical significance A GGT test is predominantly used as a diagnostic marker for liver disease. Elevated serum GGT activity can be found in diseases of the liver, biliary system, pancreas and kidneys. Latent elevations in GGT are typically seen in patients with chronic viral hepatitis infections, often taking 12 months or more to present. Individual test results should always be interpreted using the reference range from the laboratory that performed the test, though example reference ranges are 15–85 IU/L for men and 5–55 IU/L for women (applied illustratively in the sketch below). GGT is similar to alkaline phosphatase (ALP) in detecting disease of the biliary tract. Indeed, the two markers correlate well, though there are conflicting data about whether GGT has better sensitivity. In general, ALP is still the first test for biliary disease. The main value of GGT is in verifying that ALP elevations are, in fact, due to biliary disease; ALP can also be increased in certain bone diseases, but GGT is not.
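As an illustration of how such sex-specific reference ranges are applied, here is a hypothetical helper using the example ranges quoted above; the function name is invented, and any real interpretation must use the performing laboratory's own ranges.

```python
def flag_ggt(value_iu_l, sex):
    """Flag a serum GGT activity against the example reference
    ranges quoted in the text (15-85 IU/L men, 5-55 IU/L women).

    Illustrative only: results must always be interpreted against
    the reference range of the laboratory that performed the test.
    """
    low, high = (15, 85) if sex == "male" else (5, 55)
    if value_iu_l > high:
        return "elevated"
    if value_iu_l < low:
        return "below range"
    return "within range"

print(flag_ggt(120, "male"))   # elevated
print(flag_ggt(40, "female"))  # within range
```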
Alcohol use GGT is elevated by ingestion of large quantities of alcohol. However, determination of high levels of total serum GGT activity is not specific to alcohol intoxication, and the measurement of selected serum forms of the enzyme offers more specific information. Isolated elevation or disproportionate elevation compared to other liver enzymes (such as alanine transaminase, ALT) can indicate harmful alcohol use or alcoholic liver disease, and can indicate excess alcohol consumption up to 3 or 4 weeks prior to the test. The mechanism for this elevation is unclear. Alcohol might increase GGT production by inducing hepatic microsomal production, or it might cause the leakage of GGT from hepatocytes. Xenobiotics Numerous drugs can raise GGT levels, including phenobarbitone and phenytoin. GGT elevation has also been occasionally reported following nonsteroidal anti-inflammatory drugs (including aspirin), St. John's wort and kava. Cardiovascular disease More recently, slightly elevated serum GGT has also been found to correlate with cardiovascular diseases and is under active investigation as a cardiovascular risk marker. GGT in fact accumulates in atherosclerotic plaques, suggesting a potential role in the pathogenesis of cardiovascular diseases, and circulates in blood in the form of distinct protein aggregates, some of which appear to be related to specific pathologies such as metabolic syndrome, alcohol addiction and chronic liver disease. Elevated levels of GGT can also be due to congestive heart failure. Neoplasms GGT is expressed at high levels in many different tumors. It is known to accelerate tumor growth and to increase resistance to cisplatin in tumors. Examples Human proteins that belong to this family include GGT1, GGT2, GGT6, GGTL3, GGTL4, GGTLA1 and GGTLA4. References External links GGT - Lab Tests Online Chemical pathology EC 2.3.2
Gamma-glutamyltransferase
[ "Chemistry", "Biology" ]
1,211
[ "Biochemistry", "Chemical pathology" ]
352,343
https://en.wikipedia.org/wiki/Outerplanar%20graph
In graph theory, an outerplanar graph is a graph that has a planar drawing for which all vertices belong to the outer face of the drawing. Outerplanar graphs may be characterized (analogously to Wagner's theorem for planar graphs) by the two forbidden minors K4 and K2,3, or by their Colin de Verdière graph invariants. They have Hamiltonian cycles if and only if they are biconnected, in which case the outer face forms the unique Hamiltonian cycle. Every outerplanar graph is 3-colorable, and has degeneracy and treewidth at most 2. The outerplanar graphs are a subset of the planar graphs, the subgraphs of series–parallel graphs, and the circle graphs. The maximal outerplanar graphs, those to which no more edges can be added while preserving outerplanarity, are also chordal graphs and visibility graphs. History Outerplanar graphs were first studied and named by Chartrand and Harary (1967), in connection with the problem of determining the planarity of graphs formed by using a perfect matching to connect two copies of a base graph (for instance, many of the generalized Petersen graphs are formed in this way from two copies of a cycle graph). As they showed, when the base graph is biconnected, a graph constructed in this way is planar if and only if its base graph is outerplanar and the matching forms a dihedral permutation of its outer cycle. Chartrand and Harary also proved an analogue of Kuratowski's theorem for outerplanar graphs, that a graph is outerplanar if and only if it does not contain a subdivision of one of the two graphs K4 or K2,3. Definition and characterizations An outerplanar graph is an undirected graph that can be drawn in the plane without crossings in such a way that all of the vertices belong to the unbounded face of the drawing. That is, no vertex is totally surrounded by edges. Alternatively, a graph G is outerplanar if the graph formed from G by adding a new vertex, with edges connecting it to all the other vertices, is a planar graph. A maximal outerplanar graph is an outerplanar graph that cannot have any additional edges added to it while preserving outerplanarity. Every maximal outerplanar graph with n vertices has exactly 2n − 3 edges, and every bounded face of a maximal outerplanar graph is a triangle. Forbidden graphs Outerplanar graphs have a forbidden graph characterization analogous to Kuratowski's theorem and Wagner's theorem for planar graphs: a graph is outerplanar if and only if it does not contain a subdivision of the complete graph K4 or the complete bipartite graph K2,3. Alternatively, a graph is outerplanar if and only if it does not contain K4 or K2,3 as a minor, a graph obtained from it by deleting and contracting edges. A triangle-free graph is outerplanar if and only if it does not contain a subdivision of K2,3. Colin de Verdière invariant A graph is outerplanar if and only if its Colin de Verdière graph invariant is at most two. The graphs characterized in a similar way by having Colin de Verdière invariant at most one, three, or four are respectively the linear forests, planar graphs, and linklessly embeddable graphs. Properties Biconnectivity and Hamiltonicity An outerplanar graph is biconnected if and only if the outer face of the graph forms a simple cycle without repeated vertices. An outerplanar graph is Hamiltonian if and only if it is biconnected; in this case, the outer face forms the unique Hamiltonian cycle. More generally, the size of the longest cycle in an outerplanar graph is the same as the number of vertices in its largest biconnected component.
For this reason, the problems of finding Hamiltonian cycles and longest cycles in outerplanar graphs may be solved in linear time, in contrast to the NP-completeness of these problems for arbitrary graphs. Every maximal outerplanar graph satisfies a stronger condition than Hamiltonicity: it is node pancyclic, meaning that for every vertex v and every k in the range from three to the number of vertices in the graph, there is a length-k cycle containing v. A cycle of this length may be found by repeatedly removing a triangle that is connected to the rest of the graph by a single edge, such that the removed vertex is not v, until the outer face of the remaining graph has length k. A planar graph is outerplanar if and only if each of its biconnected components is outerplanar. Coloring All loopless outerplanar graphs can be colored using only three colors; this fact features prominently in the simplified proof of Chvátal's art gallery theorem by Fisk (1978). A 3-coloring may be found in linear time by a greedy coloring algorithm that removes any vertex of degree at most two, colors the remaining graph recursively, and then adds back the removed vertex with a color different from the colors of its two neighbors (a sketch of this procedure appears below). According to Vizing's theorem, the chromatic index of any graph (the minimum number of colors needed to color its edges so that no two adjacent edges have the same color) is either the maximum degree of any vertex of the graph or one plus the maximum degree. However, in a connected outerplanar graph, the chromatic index is equal to the maximum degree except when the graph forms a cycle of odd length. An edge coloring with an optimal number of colors can be found in linear time based on a breadth-first traversal of the weak dual tree. Other properties Outerplanar graphs have degeneracy at most two: every subgraph of an outerplanar graph contains a vertex with degree at most two. Outerplanar graphs have treewidth at most two, which implies that many graph optimization problems that are NP-complete for arbitrary graphs may be solved in polynomial time by dynamic programming when the input is outerplanar. More generally, k-outerplanar graphs have treewidth O(k). Every outerplanar graph can be represented as an intersection graph of axis-aligned rectangles in the plane, so outerplanar graphs have boxicity at most two. Related families of graphs Every outerplanar graph is a planar graph. Every outerplanar graph is also a subgraph of a series–parallel graph. However, not all planar series–parallel graphs are outerplanar. The complete bipartite graph K2,3 is planar and series–parallel but not outerplanar. On the other hand, the complete graph K4 is planar but neither series–parallel nor outerplanar. Every forest and every cactus graph are outerplanar. The weak planar dual graph of an embedded outerplanar graph (the graph that has a vertex for every bounded face of the embedding, and an edge for every pair of adjacent bounded faces) is a forest, and the weak planar dual of a Halin graph is an outerplanar graph. A planar graph is outerplanar if and only if its weak dual is a forest, and it is Halin if and only if its weak dual is biconnected and outerplanar. There is a notion of degree of outerplanarity. A 1-outerplanar embedding of a graph is the same as an outerplanar embedding. For k > 1 a planar embedding is said to be k-outerplanar if removing the vertices on the outer face results in a (k − 1)-outerplanar embedding. A graph is k-outerplanar if it has a k-outerplanar embedding.
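The greedy 3-coloring described under Coloring above exploits the fact that outerplanar graphs have degeneracy at most two. The following is an illustrative implementation, not code from any cited source; the function name and example graph are invented for this sketch.

```python
from collections import defaultdict

def three_color_outerplanar(edges):
    """Greedy 3-coloring of a (loopless) outerplanar graph.

    Repeatedly removes a vertex of degree <= 2 among the remaining
    vertices (one always exists, by degeneracy <= 2), then colors
    vertices in reverse removal order: each vertex has at most two
    already-colored neighbors, so three colors suffice. Raises
    StopIteration if the input is not 2-degenerate.
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    order = []
    remaining = set(adj)
    while remaining:
        v = next(u for u in remaining if len(adj[u] & remaining) <= 2)
        order.append(v)
        remaining.remove(v)

    color = {}
    for v in reversed(order):
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(3) if c not in used)
    return color

# Example: a maximal outerplanar graph (a triangulated pentagon).
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2), (0, 3)]
print(three_color_outerplanar(edges))
```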
An outer-1-planar graph, analogously to 1-planar graphs, can be drawn in a disk, with the vertices on the boundary of the disk, and with at most one crossing per edge. Every maximal outerplanar graph is a chordal graph. Every maximal outerplanar graph is the visibility graph of a simple polygon. Maximal outerplanar graphs are also formed as the graphs of polygon triangulations. They are examples of 2-trees, of series–parallel graphs, and of chordal graphs. Every outerplanar graph is a circle graph, the intersection graph of a set of chords of a circle. Notes References External links Outerplanar graphs at the Information System on Graph Classes and Their Inclusions Planar graphs Graph families
Outerplanar graph
[ "Mathematics" ]
1,847
[ "Planes (geometry)", "Planar graphs" ]
352,349
https://en.wikipedia.org/wiki/Fonio
Fonio, also sometimes called findi or acha, is the term for two cultivated grasses in the genus Digitaria that are important crops in parts of West Africa. A nutritious food with a favorable taste, it is a vital food source in many rural areas, especially in the mountains of Fouta Djalon, Guinea, but it is also cultivated in Mali, Burkina Faso, Ivory Coast, Nigeria, and Senegal. The global fonio market was estimated at 721,400 tonnes in 2020. Guinea annually produces the most fonio in the world, accounting for over 75% of the world's production in 2019. The name fonio (borrowed into English from French) is from Wolof foño. In West Africa, the species black fonio (Digitaria iburua) and white fonio (Digitaria exilis) are cultivated; the latter is the economically more important crop. Fonio is a glumaceous monocot belonging to the grass family Poaceae and the genus Digitaria. While hundreds of these crabgrass species exist, only a few of them are produced for their grains. It is a small annual herbaceous plant with an inflorescence containing two or three racemes. The racemes have spikelets grouped in twos, threes, or fours, with a sterile and a fertile flower producing the fonio grain. Fonio has a short growing season and is well adjusted to harsh environments. The size of its root system, which can extend down to more than one meter in depth, is advantageous in periods of drought and helps with its adaptation to poor soils. Once considered a humble and often overlooked grain, commonly known as the "cereal of the poor", fonio is now gaining attention in urban West Africa. Its unique cooking properties and nutritional benefits are sparking renewed interest in this once underrated staple. Types White fonio White fonio, Digitaria exilis, also called "hungry rice" by Europeans, is the most common of a diverse group of wild and domesticated Digitaria species that are harvested in the savannas of West Africa. Fonio has the smallest seeds of all species of millet. It has potential to improve nutrition, boost food security, foster rural development, and support sustainable use of the land. Nutritious, gluten-free, and high in dietary fiber, fonio is one of the world's fastest-growing cereals, reaching maturity in as little as six to eight weeks. The grains are used to make porridge, couscous, bread, and beer. Black fonio Black fonio, D. iburua, also known as iburu, is a similar crop grown in several countries of West Africa, particularly Nigeria, Togo, and Benin. Like white fonio, it is nutritious and fast-growing, and it has the benefit of maturing before other grains, allowing for harvest during the "hungry season." It also contains considerably more protein than D. exilis. Black fonio is mostly cultivated in rural communities and is rarely sold commercially, even in West African cities. Cultivation and processing Climate and attributes Fonio is cultivated across West Africa as a staple crop. Guinea is the biggest producer of fonio with a production of and a cultivated surface area of in 2021, followed by Nigeria () and Mali (). Fonio grows in dry climates without irrigation, and is unlikely to be a successful crop in humid regions. It is planted in light (sandy to stony) soils, and will grow in poor soil. Fonio is cultivated at sea level in Gambia, Sierra Leone and Guinea-Bissau, but it is otherwise mostly cultivated at altitudes ranging between . The growth cycle ranges from 70–130 days, depending on variety. It is mostly grown in areas with an average annual rainfall of .
Fonio plants are medium in height. Indeed, D. exilis can reach a height of , and D. iburua a height of . The ploidy level for the species ranges from diploid (2n) and tetraploid (4n) to hexaploid (6n). Like many other grasses, fonio uses C4 carbon fixation, which makes it drought-tolerant. Ploughing and sowing The ploughing is done by the men by hand, by animal traction or with tractors. The sowing is generally done by hand by the women, depending on the onset of the rainy season. The fonio plant grows quickly; some landraces reach maturity in 8 weeks. It is, however, a weak competitor against weeds at the beginning of its growth, so weeding is important in the first development stages. Harvest Fonio is labor-intensive to harvest and process. In some regions, the mature fonio plants are uprooted, but the most common method is to cut the straws with knives and sickles, which often leads to wounds on the hands. Women then gather the sheaves into cylindrical stacks or horizontal beams to store them and allow them to dry before threshing without overheating. The threshing is then done by trampling on the plants or by beating them with rigid rods or more flexible sticks. The fonio plants are prone to lodging, which makes potential mechanization of the harvest difficult. Dehusking After the threshing, the fonio grains are still in their husk, and the small grains make husk removal difficult and time-consuming. Traditional methods include pounding it in a mortar with sand and then separating the grains and sand, or "popping" it over a flame and then pounding it, which yields a toasted-color grain (a technique used among the Akposso). The invention of a simple fonio husking machine offers an easier mechanical way to dehusk. Gender role Gender roles play a big part in the cultivation of fonio; tasks are distributed differently between men and women. Women do the weeding, the threshing by trampling, the cleaning, as well as the drying and processing, while men do the harvest and the threshing by beating. Women's role in fonio production is predominant: half of the cultivation tasks are done exclusively by women, against 14% for men. The tasks assigned to women require patience and meticulousness, while those assigned to men call for strength. Effect of processing methods on nutrient value Before consumption, fonio grains must be processed using mechanical (dehusking, milling) or thermal (precooking, parboiling, roasting) methods. Depending on the processing method, the nutrient value may be affected. Regarding the macronutrients, the carbohydrate content remains higher when the grains are precooked rather than roasted. The protein content is much lower after milling because the bran that gets removed contains a lot of protein. The highest protein content is achieved when parboiling. The lipid content is increased when roasted and decreased when milled or precooked. Regarding micronutrients, the iron and zinc content remains the highest when parboiled, while milling leads to a loss due to the removal of the bran. Phytate, an anti-nutritional factor that inhibits the absorption of minerals like iron and zinc, is reduced by washing and cooking but is still high enough to inhibit adequate mineral absorption. Generally, parboiled fonio shows the best nutritional composition when compared to the other processing methods. However, parboiling fonio does not redistribute nutrients as efficiently as is the case with parboiled rice.
Additionally, the process of parboiling changes the color of the fonio grains, which is disliked by some consumers. Commercialization outside of Africa Fonio was relatively unknown outside the African continent until recently, when companies in Europe and the United States began to import the grain from West Africa, often citing its ecological and nutritional benefits in their marketing. United States In the United States, Yolélé Foods, led by Senegalese-American chef Pierre Thiam, started importing and selling fonio in 2017. Thiam hopes to introduce Americans to the grain while simultaneously supporting sustainable and traditional agriculture in Burkina Faso, Ghana, Mali and Senegal. What is considered a peasant's food in West Africa is now sold in luxury grocery stores in the United States. However, Thiam positions his project as part of a larger movement to elevate the economic power of African farmers, who for centuries have been suppressed by Western hegemony in the global food system. European Union In December 2018, the European Commission approved commercialization of fonio as a novel food in the European Union, after a submission by the Italian company Obà Food to manufacture and market new food products. These products include fonio pasta, revealing a desire to make fonio more recognizable to the European palate. Since this initial approval, fonio has gradually become more popular and more accessible in Europe. By 2021, the EU was importing 422 metric tonnes (465.2 tons) of fonio, a significant increase from the 172 metric tonnes (189.6 tons) imported in 2016. See also References Further reading Digitaria Millets Grasses of Africa Crops originating from Africa Plant common names
Fonio
[ "Biology" ]
1,931
[ "Plant common names", "Common names of organisms", "Plants" ]
352,353
https://en.wikipedia.org/wiki/Spirit%20level
A spirit level, bubble level, or simply a level, is an instrument designed to indicate whether a surface is horizontal (level) or vertical (plumb). Two basic designs exist: tubular (or linear) and bull's eye (or circular). Different types of spirit levels may be used by carpenters, stonemasons, bricklayers, other building trades workers, surveyors, millwrights and other metalworkers, and in some photographic or videographic work. History The history of the spirit level was discussed in brief in an 1887 article appearing in Scientific American. Melchisédech Thévenot, a French scientist, invented the instrument some time before February 2, 1661. This date can be established from Thevenot's correspondence with scientist Christiaan Huygens. Within a year of this date the inventor circulated details of his invention to others, including Robert Hooke in London and Vincenzo Viviani in Florence. It is occasionally argued that these "bubble levels" did not come into widespread use until the beginning of the 18th century, the earliest surviving examples being from that period, but Adrien Auzout had recommended that the Académie Royale des Sciences take "levels of the Thevenot type" on its expedition to Madagascar in 1666. It is very likely that these levels were in use in France and elsewhere long before the turn of the century. The Fell All-Way precision level, one of the first successful American-made bull's eye levels for machine tool use, was invented by William B. Fell of Rockford, Illinois in 1939. The device was unique in that it could be placed on a machine bed and show tilt on the x-y axes simultaneously, eliminating the need to rotate the level 90 degrees. The level was so accurate that it was restricted from export during World War II. The device set a new standard of .0005 inches per foot resolution (five ten-thousandths of an inch per foot, about 8.6 arc seconds of tilt). Production of the level stopped around 1970, and was restarted in the 1980s by Thomas Butler Technology, also of Rockford, Illinois, but finally ended in the mid-1990s. However, there are still hundreds of the devices in existence. Design and construction Early tubular spirit levels had very slightly curved glass vials with constant inner diameter at each viewing point. These vials are incompletely filled with a liquid, usually a colored spirit or alcohol, leaving a bubble in the tube. They have a slight upward curve, so that the bubble naturally rests in the center, the highest point (the relation between this curvature and the level's sensitivity is sketched below). At slight inclinations the bubble travels away from the marked center position. Where a spirit level must also be usable upside-down or on its side, the curved constant-diameter tube is replaced by an uncurved barrel-shaped tube with a slightly larger diameter in its middle. Alcohols such as ethanol are often used rather than water. Alcohols have low viscosity and surface tension, which allows the bubble to travel the tube quickly and settle accurately with minimal interference from the glass surface. Alcohols also have a much wider liquid temperature range, and will not break the vial as water could due to ice expansion. A colorant such as fluorescein, typically yellow or green, may be added to increase the visibility of the bubble. A variant of the linear spirit level is the bull's eye level: a circular, flat-bottomed device with the liquid under a slightly convex glass face with a circle at the center. It serves to level a surface across a plane, while the tubular level only does so in the direction of the tube.
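The geometry behind the "slight upward curve" can be made concrete: the bubble sits at the top of a circular arc, so a tilt of θ radians moves it along the arc by s = R·θ, where R is the vial's radius of curvature. A minimal sketch follows; the 2 mm graduation spacing and 0.01° sensitivity are illustrative assumptions, not figures from the text.

```python
import math

def vial_radius(graduation_m, sensitivity_deg):
    """Radius of curvature R such that the bubble travels one
    graduation (graduation_m, in meters, along the arc) per
    sensitivity_deg degrees of tilt, from s = R * theta."""
    return graduation_m / math.radians(sensitivity_deg)

# Hypothetical vial: 2 mm graduations, one graduation per 0.01 degree.
print(f"R = {vial_radius(0.002, 0.01):.1f} m")  # R = 11.5 m
```

The gentler the curve (the larger R), the more sensitive the level, which is why precision vials are only very slightly curved.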
Calibration To check the accuracy of a carpenter's type level, a perfectly horizontal surface is not needed. The level is placed on a flat and roughly level surface and the reading on the bubble tube is noted. This reading indicates to what extent the surface is parallel to the horizontal plane, according to the level, which at this stage is of unknown accuracy. The spirit level is then rotated through 180 degrees in the horizontal plane, and another reading is noted. If the level is accurate, it will indicate the same orientation with respect to the horizontal plane. A difference implies that the level is inaccurate. Adjustment of the spirit level is performed by successively rotating the level and moving the bubble tube within its housing to take up roughly half of the discrepancy, until the magnitude of the reading remains constant when the level is flipped. (A short arithmetic sketch of this reversal test is given below.) A similar procedure is applied to more sophisticated instruments such as a surveyor's optical level or a theodolite and is a matter of course each time the instrument is set up. In this latter case, the plane of rotation of the instrument is levelled, along with the spirit level. This is done in two horizontal perpendicular directions. Sensitivity Sensitivity is an important specification for a spirit level, as the device's accuracy depends on its sensitivity. The sensitivity of a level is given as the change of angle or gradient required to move the bubble by unit distance. If the bubble housing has graduated divisions, then the sensitivity is the angle or gradient change that moves the bubble by one of these divisions. Graduations are usually evenly spaced along the vial; on a surveyor's level, the bubble moves one division when the vial is tilted about 0.005 degrees. For a precision machinist level, tilting the vial by one division corresponds to a gradient of 0.0005 inches per foot (roughly 42 micrometres per metre) from the pivot point, referred to by machinists as "5 tenths per foot". This terminology is unique to machinists and indicates a length of 5 tenths of 1 thousandth of an inch. Types There are different types of spirit levels for different uses: Carpenter's level (either wood, aluminium or composite materials) Mason's level Torpedo level Post level Line level Engineer's precision level Electronic level Inclinometer Slip or skid indicator Bull's eye level A spirit level is usually found on the head of combination squares. Carpenter's level A traditional carpenter's spirit level looks like a short plank of wood and often has a wide body to ensure stability, and that the surface is being measured correctly. In the middle of the spirit level is a small window where the bubble and the tube are mounted. Two notches (or rings) designate where the bubble should be if the surface is level. Often an indicator for a 45 degree inclination is included. Line level A line level is a level designed to hang on a builder's string line. The body of the level incorporates small hooks to allow it to attach and hang from the string line. The body is lightweight, so as not to weigh down the string line; it is also small in size, as the string line in effect becomes the body: when the level is hung in the center of the string, each 'leg' of the string line extends the level's plane. Engineer's precision levels An engineer's precision level permits leveling items to greater accuracy than a plain spirit level. They are used to level the foundations or beds of machines, to ensure that the machine can produce workpieces to the accuracy built into it.
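The following is a minimal sketch in Python (not part of the original article) of the arithmetic behind the reversal test described in the Calibration section above. Readings are assumed to be recorded in a fixed geographic direction (e.g. positive means "the bubble indicates the east end is high"); the example readings and the unit (graduations) are hypothetical.

def reversal_test(reading_normal, reading_rotated):
    """Separate true surface tilt from instrument error using two readings
    taken on the same spot, with the level rotated 180 degrees in between."""
    # The true tilt of the surface contributes the same amount to both readings,
    # while the instrument's built-in error flips sign when the level is rotated.
    # This is why adjusting out half the discrepancy calibrates the level.
    surface_tilt = (reading_normal + reading_rotated) / 2
    instrument_error = (reading_normal - reading_rotated) / 2
    return surface_tilt, instrument_error

tilt, error = reversal_test(1.5, 0.5)  # hypothetical readings, in graduations
print(tilt, error)                     # 1.0 graduation of true tilt, 0.5 of error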
Surveyor's leveling instrument Combining a spirit level with an optical telescope results in a tilting level or dumpy level. These leveling instruments are used in surveying to measure height differences over larger distances. A surveyor's leveling instrument has a spirit level mounted on a telescope (perhaps 30 power) with cross-hairs, itself mounted on a tripod. The observer reads height values off two graduated vertical rods, one 'behind' and one 'in front', to obtain the height difference between the ground points on which the rods are resting. Starting from a point with a known elevation and going cross country (successive points being at most a practical sighting distance apart), height differences can be measured cumulatively over long distances and elevations can be calculated. Precise levelling is supposed to give the difference in elevation between two widely separated points correct to within a few millimeters. Alternatives Alternatives include: Reed level Laser line level Water level Today level tools are available in most smartphones by using the device's accelerometer. These mobile apps come with various features and easy designs. New web standards also allow websites to read the orientation of a device. Digital spirit levels are increasingly replacing conventional spirit levels, particularly in civil engineering applications such as traditional building construction and steel structure erection, for on-site angle alignment and leveling tasks. Industry practitioners often refer to these levelling tools as a "construction level", "heavy duty level", "inclinometer", or "protractor". These modern electronic levels are capable of displaying precise numeric angles within 360° with 0.1° to 0.05° accuracy, can be read from a distance with clarity, and are affordably priced due to mass adoption. They provide features that traditional levels are unable to match. Typically, these features enable steel beam frames under construction to be precisely aligned and levelled to the required orientation, which is vital to ensure the stability, strength and rigidity of steel structures on sites. Digital levels embedded with angular MEMS technology effectively improve the productivity and quality of many modern civil structures. Some recent models offer IP65 water resistance and impact resistance for harsh working environments. See also Glossary of levelling terms Horizontal and vertical Inclinometer Plumb bob Theodolite Turn and bank indicator References External links Surveying Geodesy Inclinometers Woodworking measuring instruments
Spirit level
[ "Mathematics", "Engineering" ]
1,901
[ "Applied mathematics", "Civil engineering", "Surveying", "Geodesy" ]
352,354
https://en.wikipedia.org/wiki/Limit%20state%20design
Limit State Design (LSD), also known as Load And Resistance Factor Design (LRFD), refers to a design method used in structural engineering. A limit state is a condition of a structure beyond which it no longer fulfills the relevant design criteria. The condition may refer to a degree of loading or other actions on the structure, while the criteria refer to structural integrity, fitness for use, durability or other design requirements. A structure designed by LSD is proportioned to sustain all actions likely to occur during its design life, and to remain fit for use, with an appropriate level of reliability for each limit state. Building codes based on LSD implicitly define the appropriate levels of reliability by their prescriptions. The method of limit state design, developed in the USSR and based on research led by Professor N.S. Streletski, was introduced in USSR building regulations in 1955. Criteria Limit state design requires the structure to satisfy two principal criteria: the ultimate limit state (ULS) and the serviceability limit state (SLS). Any design process involves a number of assumptions. The loads to which a structure will be subjected must be estimated, sizes of members to check must be chosen and design criteria must be selected. All engineering design criteria have a common goal: that of ensuring a safe structure and ensuring the functionality of the structure. Ultimate limit state (ULS) A clear distinction is made between the ultimate state (US) and the ultimate limit state (ULS). The ultimate state is a physical situation that involves either excessive deformations leading to and approaching collapse of the component under consideration or of the structure as a whole, as relevant, or deformations exceeding pre-agreed values. It involves, of course, considerable inelastic (plastic) behavior of the structural scheme and residual deformations. In contrast, the ULS is not a physical situation but rather an agreed computational condition that must be fulfilled, among other additional criteria, in order to comply with the engineering demands for strength and stability under design loads. A structure is deemed to satisfy the ultimate limit state criterion if all factored bending, shear and tensile or compressive stresses are below the factored resistances calculated for the section under consideration. The factored stresses referred to are found by applying magnification factors to the loads on the section. Reduction factors are applied to determine the various factored resistances of the section. The limit state criteria can also be set in terms of load rather than stress: using this approach the structural element being analysed (i.e. a beam or a column or other load bearing elements, such as walls) is shown to be safe when the "magnified" loads are less than the relevant "reduced" resistances. Complying with the design criteria of the ULS is considered the minimum requirement (among other additional demands) to provide proper structural safety. Serviceability limit state (SLS) In addition to the ULS check mentioned above, a serviceability limit state (SLS) computational check must be performed. To satisfy the serviceability limit state criterion, a structure must remain functional for its intended use subject to routine (everyday) loading, and as such the structure must not cause occupant discomfort under routine conditions. As for the ULS, the SLS is not a physical situation but rather a computational check.
The aim is to prove that under the action of characteristic design loads (un-factored), and/or whilst applying certain (un-factored) magnitudes of imposed deformations, settlements, vibrations, or temperature gradients, etc., the structural behavior complies with, and does not exceed, the SLS design criteria values specified in the relevant standard in force. These criteria involve various stress limits, deformation limits (deflections, rotations and curvature), flexibility (or rigidity) limits, dynamic behavior limits, as well as crack control requirements (crack width) and other arrangements concerned with the durability of the structure, its everyday service level and the human comfort achieved, and its ability to fulfill its everyday functions. In view of non-structural issues, it might also involve limits applied to acoustics and heat transmission that can likewise affect the structural design. This calculation check is performed at a point located at the lower half of the elastic zone, where characteristic (un-factored) actions are applied and the structural behavior is purely elastic. Factor development The load and resistance factors are determined using statistics and a pre-selected probability of failure. Variability in the quality of construction and the consistency of the construction material are accounted for in the factors. Generally, a factor of unity (one) or less is applied to the resistances of the material, and a factor of unity or greater to the loads. Not often used, but in some load cases a factor may be less than unity due to a reduced probability of the combined loads. These factors can differ significantly for different materials or even between differing grades of the same material. Wood and masonry typically have smaller factors than concrete, which in turn has smaller factors than steel. The factors applied to resistance also account for the degree of scientific confidence in the derivation of the values, i.e., smaller values are used when there is little research on the specific type of failure mode. Factors associated with loads are normally independent of the type of material involved, but can be influenced by the type of construction. In determining the specific magnitude of the factors, more deterministic loads (like dead loads: the weight of the structure and permanent attachments like walls, floor treatments, ceiling finishes) are given lower factors (for example 1.4) than highly variable loads like earthquake, wind, or live (occupancy) loads (1.6). Impact loads are typically given higher factors still (say 2.0) in order to account for both their unpredictable magnitudes and the dynamic nature of the loading versus the static nature of most models. While LSD is arguably not philosophically superior to permissible or allowable stress design, it does have the potential to produce a more consistently designed structure, as each element is intended to have the same probability of failure. In practical terms this normally results in a more efficient structure, and as such, it can be argued that LSD is superior from a practical engineering viewpoint.
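To make the factored check concrete, here is a minimal sketch in Python of a ULS verification in the NBCC 1995 format quoted in the next section. All factor values and loads below are illustrative assumptions chosen for the example, not values prescribed by any code.

def uls_check(R, D, L, Q=0.0, T=0.0,
              phi=0.9,       # resistance factor (assumed for illustration)
              psi=0.7,       # load combination factor (assumed)
              gamma=1.0,     # importance factor (assumed)
              alpha_D=1.25,  # dead load factor (assumed)
              alpha_L=1.5,   # live load factor (assumed)
              alpha_Q=1.0,   # earthquake load factor (assumed)
              alpha_T=1.25): # thermal load factor (assumed)
    """Return True if the factored resistance exceeds the factored load effect:
    phi*R > alpha_D*D + psi*gamma*(alpha_L*L + alpha_Q*Q + alpha_T*T)."""
    factored_resistance = phi * R
    factored_load = alpha_D * D + psi * gamma * (alpha_L * L + alpha_Q * Q + alpha_T * T)
    return factored_resistance > factored_load

# Example: a member with 500 kN nominal resistance under 150 kN dead and
# 120 kN live load: 0.9*500 = 450 > 187.5 + 0.7*180 = 313.5, so it passes.
print(uls_check(R=500.0, D=150.0, L=120.0))  # True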
Example treatment of LSD in building codes The following is the treatment of LSD found in the National Building Code of Canada: NBCC 1995 format: φR > αD·D + ψγ(αL·L + αQ·Q + αT·T), where φ = resistance factor, ψ = load combination factor, γ = importance factor, αD = dead load factor, αL = live load factor, αQ = earthquake load factor, and αT = thermal effect (temperature) load factor; D, L, Q and T denote the corresponding dead, live, earthquake and temperature loads. Limit state design has replaced the older concept of permissible stress design in most forms of civil engineering. A notable exception is transportation engineering. Even so, new codes are currently being developed for both geotechnical and transportation engineering which are LSD based. As a result, most modern buildings are designed in accordance with a code based on limit state theory. For example, in Europe, structures are designed to conform with the Eurocodes: steel structures are designed in accordance with EN 1993, and reinforced concrete structures to EN 1992. Australia, Canada, China, France, Indonesia, and New Zealand (among many others) utilise limit state theory in the development of their design codes. In the purest sense, it is now considered inappropriate to discuss safety factors when working with LSD, as there are concerns that this may lead to confusion. It has previously been shown that LRFD and ASD can produce significantly different designs of steel gable frames. There are few situations where ASD produces significantly lighter-weight steel gable frame designs. Additionally, it has been shown that in high snow regions the difference between the methods is more dramatic. In the United States The United States has been particularly slow to adopt limit state design (known as Load and Resistance Factor Design in the US). Design codes and standards are issued by diverse organizations, some of which have adopted limit state design, and others have not. The ACI 318 Building Code Requirements for Structural Concrete uses limit state design. The ANSI/AISC 360 Specification for Structural Steel Buildings, the ANSI/AISI S-100 North American Specification for the Design of Cold Formed Steel Structural Members, and The Aluminum Association's Aluminum Design Manual contain two methods of design side by side: Load and Resistance Factor Design (LRFD), a limit states design implementation, and Allowable Strength Design (ASD), a method where the nominal strength is divided by a safety factor to determine the allowable strength. This allowable strength is required to equal or exceed the required strength for a set of ASD load combinations. ASD is calibrated to give the same structural reliability and component size as the LRFD method at a live to dead load ratio of 3. Consequently, when structures have a live to dead load ratio that differs from 3, ASD produces designs that are either less reliable or less efficient compared to designs resulting from the LRFD method. In contrast, the ANSI/AWWA D100 Welded Carbon Steel Tanks for Water Storage and API 650 Welded Tanks for Oil Storage still use allowable stress design. In Europe In Europe, limit state design is enforced by the Eurocodes. See also Allowable stress design Probabilistic design Seismic performance Structural engineering References Citations Sources Structural engineering Civil engineering
Limit state design
[ "Engineering" ]
1,936
[ "Construction", "Civil engineering", "Structural engineering" ]
352,376
https://en.wikipedia.org/wiki/Cantor%20space
In mathematics, a Cantor space, named for Georg Cantor, is a topological abstraction of the classical Cantor set: a topological space is a Cantor space if it is homeomorphic to the Cantor set. In set theory, the topological space 2^ω is called "the" Cantor space. Examples The Cantor set itself is a Cantor space. But the canonical example of a Cantor space is the countably infinite topological product of the discrete 2-point space {0, 1}. This is usually written as 2^ℕ or 2^ω (where 2 denotes the 2-element set {0, 1} with the discrete topology). A point in 2^ω is an infinite binary sequence, that is, a sequence that assumes only the values 0 or 1. Given such a sequence a0, a1, a2, ..., one can map it to the real number x = Σ_{n≥0} 2a_n/3^(n+1). This mapping gives a homeomorphism from 2^ω onto the Cantor set, demonstrating that 2^ω is indeed a Cantor space. Cantor spaces occur abundantly in real analysis. For example, they exist as subspaces in every perfect, complete metric space. (To see this, note that in such a space, any non-empty perfect set contains two disjoint non-empty perfect subsets of arbitrarily small diameter, and so one can imitate the construction of the usual Cantor set.) Also, every uncountable, separable, completely metrizable space contains Cantor spaces as subspaces. This includes most of the common spaces in real analysis. Characterization A topological characterization of Cantor spaces is given by Brouwer's theorem: any two non-empty compact Hausdorff spaces without isolated points that have countable bases consisting of clopen sets are homeomorphic to each other. The topological property of having a base consisting of clopen sets is sometimes known as "zero-dimensionality". Brouwer's theorem can be restated as: a topological space is a Cantor space if and only if it is non-empty, perfect, compact, totally disconnected, and metrizable. This theorem is also equivalent (via Stone's representation theorem for Boolean algebras) to the fact that any two countable atomless Boolean algebras are isomorphic. Properties As can be expected from Brouwer's theorem, Cantor spaces appear in several forms. But many properties of Cantor spaces can be established using 2^ω, because its construction as a product makes it amenable to analysis. Cantor spaces have the following properties: The cardinality of any Cantor space is 2^ℵ0, that is, the cardinality of the continuum. The product of two (or even any finite or countable number of) Cantor spaces is a Cantor space. Along with the Cantor function, this fact can be used to construct space-filling curves. A (non-empty) Hausdorff topological space is compact metrizable if and only if it is a continuous image of a Cantor space. Let C(X) denote the space of all real-valued, bounded continuous functions on a topological space X. Let K denote a compact metric space, and Δ denote the Cantor set. Then the Cantor set has the following property: C(K) is isometric to a closed subspace of C(Δ). In general, this isometry is not unique, and thus is not properly a universal property in the categorical sense. The group of all homeomorphisms of the Cantor space is simple. See also Space (mathematics) Cantor set Cantor cube References External links Topological spaces Descriptive set theory Georg Cantor Binary sequences
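As a concrete illustration of the mapping above, the following minimal Python sketch (not part of the original article) evaluates the partial sums of x = Σ 2a_n/3^(n+1) for finite prefixes of a binary sequence; the example sequences are arbitrary.

def cantor_image(bits):
    """Partial sum of the series for a finite binary prefix (a_0, ..., a_k).
    Each term contributes the ternary digit 2*a_n in position n+1, so every
    value lands in the middle-thirds Cantor set (up to truncation error)."""
    return sum(2 * a / 3 ** (n + 1) for n, a in enumerate(bits))

print(cantor_image([0]))        # 0.0: the all-zero sequence maps to 0
print(cantor_image([1, 1, 1]))  # 26/27 ~ 0.963: the all-one sequence converges to 1
print(cantor_image([1, 0, 1]))  # 20/27 ~ 0.741: the point with ternary expansion 0.202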
Cantor space
[ "Mathematics" ]
669
[ "Topological spaces", "Topology", "Mathematical structures", "Space (mathematics)" ]
352,398
https://en.wikipedia.org/wiki/Space-filling%20curve
In mathematical analysis, a space-filling curve is a curve whose range reaches every point in a higher dimensional region, typically the unit square (or more generally an n-dimensional unit hypercube). Because Giuseppe Peano (1858–1932) was the first to discover one, space-filling curves in the 2-dimensional plane are sometimes called Peano curves, but that phrase also refers to the Peano curve, the specific example of a space-filling curve found by Peano. The closely related FASS curves (approximately space-Filling, self-Avoiding, Simple, and Self-similar curves) can be thought of as finite approximations of a certain type of space-filling curves. Definition Intuitively, a curve in two or three (or higher) dimensions can be thought of as the path of a continuously moving point. To eliminate the inherent vagueness of this notion, Jordan in 1887 introduced the following rigorous definition, which has since been adopted as the precise description of the notion of a curve: a curve (with endpoints) is a continuous function whose domain is the unit interval [0, 1]. In the most general form, the range of such a function may lie in an arbitrary topological space, but in the most commonly studied cases, the range will lie in a Euclidean space such as the 2-dimensional plane (a planar curve) or the 3-dimensional space (space curve). Sometimes, the curve is identified with the image of the function (the set of all possible values of the function), instead of the function itself. It is also possible to define curves without endpoints to be a continuous function on the real line (or on the open unit interval (0, 1)). History In 1890, Giuseppe Peano discovered a continuous curve, now called the Peano curve, that passes through every point of the unit square. His purpose was to construct a continuous mapping from the unit interval onto the unit square. Peano was motivated by Georg Cantor's earlier counterintuitive result that the infinite number of points in a unit interval has the same cardinality as the infinite number of points in any finite-dimensional manifold, such as the unit square. The problem Peano solved was whether such a mapping could be continuous; i.e., a curve that fills a space. Peano's solution does not set up a continuous one-to-one correspondence between the unit interval and the unit square, and indeed such a correspondence does not exist (see below). It was common to associate the vague notions of thinness and 1-dimensionality to curves; all normally encountered curves were piecewise differentiable (that is, have piecewise continuous derivatives), and such curves cannot fill up the entire unit square. Therefore, Peano's space-filling curve was found to be highly counterintuitive. From Peano's example, it was easy to deduce continuous curves whose ranges contained the n-dimensional hypercube (for any positive integer n). It was also easy to extend Peano's example to continuous curves without endpoints, which filled the entire n-dimensional Euclidean space (where n is 2, 3, or any other positive integer). Most well-known space-filling curves are constructed iteratively as the limit of a sequence of piecewise linear continuous curves, each one more closely approximating the space-filling limit. Peano's ground-breaking article contained no illustrations of his construction, which is defined in terms of ternary expansions and a mirroring operator. But the graphical construction was perfectly clear to him: he made an ornamental tiling showing a picture of the curve in his home in Turin.
Peano's article also ends by observing that the technique can be obviously extended to other odd bases besides base 3. His choice to avoid any appeal to graphical visualization was motivated by a desire for a completely rigorous proof owing nothing to pictures. At that time (the beginning of the foundation of general topology), graphical arguments were still included in proofs, yet were becoming a hindrance to understanding often counterintuitive results. A year later, David Hilbert published in the same journal a variation of Peano's construction. Hilbert's article was the first to include a picture helping to visualize the construction technique, essentially the same as illustrated here. The analytic form of the Hilbert curve, however, is more complicated than Peano's. Outline of the construction of a space-filling curve Let C denote the Cantor space 2^ω. We start with a continuous function h from the Cantor space C onto the entire unit interval [0, 1]. (The restriction of the Cantor function to the Cantor set is an example of such a function.) From it, we get a continuous function H from the topological product C × C onto the entire unit square [0, 1] × [0, 1] by setting H(x, y) = (h(x), h(y)). Since the Cantor set is homeomorphic to its cartesian product with itself, C × C, there is a continuous bijection g from the Cantor set onto C × C. The composition f of H and g is a continuous function mapping the Cantor set onto the entire unit square. (Alternatively, we could use the theorem that every compact metric space is a continuous image of the Cantor set to get the function f.) Finally, one can extend f to a continuous function F whose domain is the entire unit interval [0, 1]. This can be done either by using the Tietze extension theorem on each of the components of f, or by simply extending f "linearly" (that is, on each deleted open interval (a, b) in the construction of the Cantor set, we define the extension part of F on (a, b) to be the line segment within the unit square joining the values f(a) and f(b)). Properties If a curve is not injective, then one can find two intersecting subcurves of the curve, each obtained by considering the images of two disjoint segments from the curve's domain (the unit line segment). The two subcurves intersect if the intersection of the two images is non-empty. One might be tempted to think that the meaning of curves intersecting is that they necessarily cross each other, like the intersection point of two non-parallel lines, from one side to the other. However, two curves (or two subcurves of one curve) may contact one another without crossing, as, for example, a line tangent to a circle does. A non-self-intersecting continuous curve cannot fill the unit square, because that would make the curve a homeomorphism from the unit interval onto the unit square (any continuous bijection from a compact space onto a Hausdorff space is a homeomorphism). But a unit square has no cut-point, and so cannot be homeomorphic to the unit interval, in which all points except the endpoints are cut-points. There exist non-self-intersecting curves of nonzero area, the Osgood curves, but by Netto's theorem they are not space-filling. For the classic Peano and Hilbert space-filling curves, where two subcurves intersect (in the technical sense), there is self-contact without self-crossing. A space-filling curve can be (everywhere) self-crossing if its approximation curves are self-crossing. A space-filling curve's approximations can be self-avoiding, as the figures above illustrate. In 3 dimensions, self-avoiding approximation curves can even contain knots.
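Since the article describes space-filling curves as limits of iteratively constructed piecewise linear approximations, the following minimal Python sketch (not from the article) shows the commonly published bit-manipulation routine for the finite Hilbert curve approximations: it converts a distance d along the order-n approximation into (x, y) cell coordinates on a 2^n by 2^n grid. The function name is ours; orientation conventions vary between formulations.

def hilbert_d_to_xy(order, d):
    """Map curve index d in [0, 4**order) to cell coordinates (x, y)."""
    x = y = 0
    s = 1
    while s < 2 ** order:
        rx = 1 & (d // 2)          # which half of the current 2x2 block
        ry = 1 & (d ^ rx)
        if ry == 0:                # reflect/transpose the block when required
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        d //= 4
        s *= 2
    return x, y

# The order-1 approximation visits the four cells of a 2x2 grid in a "U" shape;
# consecutive indices always land on adjacent cells.
print([hilbert_d_to_xy(1, d) for d in range(4)])  # [(0, 0), (0, 1), (1, 1), (1, 0)]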
Approximation curves remain within a bounded portion of n-dimensional space, but their lengths increase without bound. Space-filling curves are special cases of fractal curves. No differentiable space-filling curve can exist. Roughly speaking, differentiability puts a bound on how fast the curve can turn. Michał Morayne proved that the continuum hypothesis is equivalent to the existence of a Peano curve such that at each point of the real line at least one of its components is differentiable. The Hahn–Mazurkiewicz theorem The Hahn–Mazurkiewicz theorem is the following characterization of spaces that are the continuous image of curves: a non-empty Hausdorff topological space is a continuous image of the unit interval if and only if it is a compact, connected, locally connected, second-countable space. Spaces that are the continuous image of a unit interval are sometimes called Peano spaces. In many formulations of the Hahn–Mazurkiewicz theorem, second-countable is replaced by metrizable. These two formulations are equivalent. In one direction, a compact Hausdorff space is a normal space and, by the Urysohn metrization theorem, second-countable then implies metrizable. Conversely, a compact metric space is second-countable. Kleinian groups There are many natural examples of space-filling, or rather sphere-filling, curves in the theory of doubly degenerate Kleinian groups. For example, Cannon and Thurston showed that the circle at infinity of the universal cover of a fiber of a mapping torus of a pseudo-Anosov map is a sphere-filling curve. (Here the sphere is the sphere at infinity of hyperbolic 3-space.) Integration Wiener pointed out in The Fourier Integral and Certain of its Applications that space-filling curves could be used to reduce Lebesgue integration in higher dimensions to Lebesgue integration in one dimension. See also Dragon curve Gosper curve Hilbert curve Koch curve Moore curve Murray polygon Sierpiński curve Space-filling tree Spatial index Hilbert R-tree Bx-tree Z-order (curve) (Morton order) Cannon–Thurston map Self-avoiding walk List of fractals by Hausdorff dimension Notes References External links Multidimensional Space-Filling Curves Proof of the existence of a bijection at cut-the-knot Java applets: Peano Plane Filling Curves at cut-the-knot Hilbert's and Moore's Plane Filling Curves at cut-the-knot All Peano Plane Filling Curves at cut-the-knot Theory of continuous functions Fractal curves Iterated function system fractals
Space-filling curve
[ "Mathematics" ]
1,980
[ "Theory of continuous functions", "Topology" ]
352,510
https://en.wikipedia.org/wiki/Chloromethyl%20chloroformate
Chloromethyl chloroformate (ClCO2CH2Cl), also known as palite, is a chemical compound that was deployed in gaseous form for chemical warfare during World War I. It is a tearing agent (lachrymator) designed to cause temporary blindness. It is a colorless liquid with a penetrating, irritating odor. Industrially, chloromethyl chloroformate is used to manufacture other chemicals. References Lachrymatory agents Chloroformates
Chloromethyl chloroformate
[ "Chemistry" ]
101
[ "Chemical weapons", "Organic compounds", "Lachrymatory agents", "Organic compound stubs", "Organic chemistry stubs" ]
352,513
https://en.wikipedia.org/wiki/Diphosgene
Diphosgene is an organic chemical compound with the formula ClCO2CCl3. This colorless liquid is a valuable reagent in the synthesis of organic compounds. Diphosgene is related to phosgene and has comparable toxicity, but is more conveniently handled because it is a liquid, whereas phosgene is a gas. Production and uses Diphosgene is prepared by radical chlorination of methyl chloroformate under UV light: Cl-CO-OCH3 + 3 Cl2 —(hν)→ Cl-CO-OCCl3 + 3 HCl Another method is the radical chlorination of methyl formate: H-CO-OCH3 + 4 Cl2 —(hν)→ Cl-CO-OCCl3 + 4 HCl Diphosgene converts to phosgene upon heating or upon catalysis with charcoal. It is thus useful for reactions traditionally relying on phosgene. For example, it converts amines into isocyanates, secondary amines into carbamoyl chlorides, carboxylic acids into acid chlorides, and formamides into isocyanides. Diphosgene serves as a source of two equivalents of phosgene: 2 RNH2 + ClCO2CCl3 → 2 RNCO + 4 HCl With α-amino acids, diphosgene gives the acid chloride-isocyanates, OCNCHRCOCl, or N-carboxy-amino acid anhydrides, depending on the conditions. It hydrolyzes to release HCl in humid air. Diphosgene is used in some laboratory preparations because it is easier to handle than phosgene. Role in warfare Diphosgene was originally developed as a pulmonary agent for chemical warfare, a few months after the first use of phosgene. It was used as a poison gas in artillery shells by Germany during World War I. The first recorded battlefield use was in May 1916. Diphosgene was developed because its vapors could destroy the filters of the gas masks in use at the time. Safety Diphosgene has a relatively high vapor pressure of 10 mm Hg (1.3 kPa) at 20 °C and decomposes to phosgene at around 300 °C. Exposure to diphosgene is similar in hazard to exposure to phosgene. See also Phosgene Triphosgene Carbonyldiimidazole References External links medical care guide. NATO guide, includes treatment advice material safety data sheet (PDF, for phosgene and diphosgene treated as one). MSDS for diphosgene specifically Pulmonary agents Chloroformates Trichloromethyl esters Carbon oxohalides
Diphosgene
[ "Chemistry" ]
578
[ "Pulmonary agents", "Chemical weapons" ]
352,541
https://en.wikipedia.org/wiki/Charged%20particle
In physics, a charged particle is a particle with an electric charge. For example, some elementary particles, such as the electron or the quarks, are charged. Some composite particles, like protons, are charged particles. An ion, such as a molecule or atom with a surplus or deficit of electrons relative to protons, is also a charged particle. A plasma is a collection of charged particles, atomic nuclei and separated electrons, but it can also be a gas containing a significant proportion of charged particles. Charged particles are labeled as either positive (+) or negative (−). The designations are arbitrary: nothing inherent to a positively charged particle makes it "positive", and the same goes for negatively charged particles. Examples Positively charged particles: protons, positrons (antielectrons), positively charged pions, alpha particles, cations. Negatively charged particles: electrons, antiprotons, muons, tauons, negatively charged pions, anions. Particles with zero charge: neutrons, photons, neutrinos, neutral pions, the Z boson, the Higgs boson, neutral atoms. See also Charge carrier – refers to moving charged particles that create an electric current References External links Charged particle motion in E/B Field Charge carriers Particle physics
Charged particle
[ "Physics", "Materials_science" ]
236
[ "Physical phenomena", "Charge carriers", "Electrical phenomena", "Condensed matter physics", "Particle physics" ]
352,603
https://en.wikipedia.org/wiki/Morgan%20le%20Fay
Morgan le Fay (Welsh and Cornish: Morgen; with le Fay being garbled French la Fée, thus meaning 'Morgan the Fairy'), alternatively known as Morgan[n]a, Morgain[a/e], Morgant[e], Morg[a]ne, Morgayn[e], Morgein[e], and Morgue[in] among other names and spellings, is a powerful and ambiguous enchantress from the legend of King Arthur, in which most often she and he are siblings. Early appearances of Morgan in Arthurian literature do not elaborate her character beyond her role as a goddess, a fay, a witch, or a sorceress, generally benevolent and connected to Arthur as his magical saviour and protector. Her prominence increased as the legend of Arthur developed over time, as did her moral ambivalence, and in some texts there is an evolutionary transformation of her into an antagonist, particularly as portrayed in cyclical prose such as the Lancelot-Grail and the Post-Vulgate Cycle. A significant aspect in many of Morgan's medieval and later iterations is the unpredictable duality of her nature, with potential for both good and evil. Her character may have originated from Welsh mythology as well as from other ancient and medieval myths and historical figures. The earliest documented account, by Geoffrey of Monmouth in Vita Merlini (written c. 1150), refers to Morgan in association with the Isle of Apples (Avalon), to which Arthur was carried after having been fatally wounded at the Battle of Camlann, as the leader of the nine magical sisters unrelated to Arthur. Therein, and in the early chivalric romances by Chrétien de Troyes and others, Morgan's chief role is that of a great healer. Several of the numerous and often unnamed fairy-mistress and maiden-temptress characters found throughout the Arthurian romance genre may also be considered appearances of Morgan in her different aspects. Romance authors of the late 12th century established Morgan as Arthur's supernatural elder sister. In the 13th-century prose cycles – and the later works based on them, including the influential Le Morte d'Arthur – she is usually described as the youngest daughter of Arthur's mother Igraine and her first husband Gorlois. Arthur, son of Igraine and Uther Pendragon, is thus Morgan's half-brother, and her full sisters include Mordred's mother, the Queen of Orkney. The young Morgan unhappily marries Urien, with whom she has a son, Yvain. She becomes an apprentice of Merlin, and a capricious and vindictive adversary of some knights of the Round Table, all the while harbouring a special hatred for Arthur's wife Guinevere. In this tradition, she is also sexually active and even predatory, taking numerous lovers that may include Merlin and Accolon, with an unrequited love for Lancelot. In some variants, including in the popular retelling by Malory, Morgan is the greatest enemy of Arthur, scheming to usurp his throne and indirectly becoming an instrument of his death. However, she eventually reconciles with Arthur, retaining her original role of taking him on his final journey to Avalon. Many other medieval and Renaissance tales feature continuations from the aftermath of Camlann where Morgan appears as the immortal queen of Avalon, in both Arthurian and non-Arthurian stories, sometimes alongside Arthur. After a period of being largely absent from contemporary culture, Morgan's character again rose to prominence in the 20th and 21st centuries, appearing in a wide variety of roles and portrayals.
Notably, her modern character is frequently conflated with that of her sister, the Queen of Orkney, thus making Morgan the mother of Arthur's son and nemesis Mordred. Etymology and origins The earliest spelling of the name (found in Geoffrey of Monmouth's Vita Merlini, written c. 1150) is Morgen, which is likely derived from Old Welsh or Old Breton Morgen, meaning 'sea-born' (from Common Brittonic *Mori-genā, the masculine form of which, *Mori-genos, survived in Middle Welsh as Moryen or Morien; a cognate form in Old Irish is Muirgen, the name of a Celtic Christian shapeshifting female saint who was associated with the sea). The name is not to be confused with the unrelated Modern Welsh masculine name Morgan (spelled Morcant in the Old Welsh period). As her epithet "le Fay" (a pseudo-French phrase coined in the 15th century by Thomas Malory, who derived it from the original French descriptive form la fée 'the fairy'; Malory would also use the form "le Fey" alternately with "le Fay") and some of her traits indicate, the figure of Morgan appears to have been a remnant of supernatural females from Celtic mythology, and her main name could be connected to the myths of Morgens (also known as Mari-Morgans or just Morgans), the Welsh and Breton fairy water spirits related to the legend of Princess Dahut (Ahes). Speculatively, beginning with Lucy Allen Paton in 1904, Morgan has been connected with the shapeshifting and multifaced Irish goddess of strife known as the Morrígan ('Great Queen'). Proponents of this theory have included Roger Sherman Loomis, who doubted the Muirgen connection. Further early inspiration for her figure likely came from other Welsh folklore, as well as possibly other works of medieval Irish literature and hagiography, and perhaps historical figures such as Empress Matilda. One of the proposed candidates for the historical Arthur, Artuir mac Áedán, was recorded as having a sister named Maithgen (daughter of Áedán mac Gabráin, a 6th-century king of Dál Riata), whose name also appears as that of a prophetic druid in the Irish legend of Saint Brigid of Kildare. Geoffrey's description of Morgen and her sisters in the Vita Merlini closely resembles the story of the nine Gaulish priestesses of the isle of Sena (now Île de Sein) called Gallisenae (or Gallizenae), as described by the 1st-century Roman geographer Pomponius Mela, strongly suggesting that Pomponius' Description of the World (De situ orbis) was one of Geoffrey's prime sources for at least his own, unique version. Also suggested have been possible influences from other magical women of Irish mythology, such as the mother of the hero Fráech, and elements of the sorceresses or goddesses of classical Greek mythology such as Circe and especially Medea – who, similar to Morgan, are often alternately benevolent and malicious. The idea that the Morgan of medieval romances is a chiefly Greek (rather than Celtic) construction is a relatively recent theory by Carolyne Larrington. Morgan has also often been linked with the supernatural mother Modron, derived from the continental mother goddess figure of Dea Matrona and featured in medieval Welsh literature. Modron appears in Welsh Triad 70 ("Three Blessed Womb-Burdens of the Island of Britain") – in which her children by Urien are named Owain mab Urien (son) and Morfydd (daughter) – and in a later folktale recorded more fully in the manuscript Peniarth 147.
A fictionalised version of the historical king Urien is usually Morgan le Fay's husband in the variations of Arthurian legend informed by continental romances, wherein their son is named Yvain. Furthermore, the historical Urien had a treacherous ally named Morcant Bulc who plotted to assassinate him, much as Morgan attempts to kill Urien. Additionally, Modron is called "daughter of Afallach", a Welsh ancestor figure also known as Avallach or Avalloc, whose name can also be interpreted as a noun meaning 'a place of apples'; in the tale of Owain and Morfydd's conception in Peniarth 147, Modron is called the "daughter of the King of Annwn", a Celtic Otherworld. This evokes Avalon, the marvelous "Isle of Apples" with which Morgan has been associated since her earliest appearances, as well as the Irish legend of the otherworldly woman Niamh, which includes the motif of the apple in connection with the Avalon-like Otherworld isle of Tír na nÓg ("Land of Youth"). As summarised by Will Hasty, "while this is difficult to establish with certainty the relationship between female figures such as these in the Arthurian tradition and the otherworldly goddesses, sprites, and nymphs of Irish and Welsh myths (a relationship is assumed especially in the case of Morgan le Fay), both groups demonstrate similar ambivalent characteristics: they are by turns dangerous and desirable, implicated alternately in fighting, death, sexuality, and fertility." While many works make Morgan specifically human, she almost always keeps her magical powers and often also her otherworldly if not divine attributes and qualities. Some medieval authors refer to her as a fairy queen or even outright a goddess (dea, déesse, gotinne). According to Gerald of Wales in his 12th-century De instructione principis, a noblewoman and close relative of King Arthur named Morganis carried the dead Arthur to her island of Avalon (identified by him as Glastonbury), where he was buried. Writing in the early 13th century in Speculum ecclesiae, Gerald also wrote that "as a result, the fanciful Britons and their bards invented the legend that some kind of a fantastic goddess (dea quaedam phantastica) had removed Arthur's body to the Isle of Avalon, so that she might cure his wounds there," for the purpose of enabling the possibility of King Arthur's messianic return. In his encyclopaedic work, Otia Imperialia, written around the same time and with similar derision for this belief, Gervase of Tilbury calls her Morganda Fatata (Morganda the Fairy). Morgan retains her early role as Arthur's legendary healer throughout later Arthurian tradition. Medieval and Renaissance literature Geoffrey, Chrétien and other early authors Morgan first appears by name in Vita Merlini, written by the Norman-Welsh cleric Geoffrey of Monmouth. Purportedly an account of the life of Merlin, it elaborates some episodes from Geoffrey's more famous earlier work, Historia Regum Britanniae (1136). In Historia, Geoffrey relates how King Arthur, gravely wounded by Mordred at the Battle of Camlann, is taken off to the blessed Isle of Apple Trees (Latin Insula Pomorum), Avalon, to be healed; Avalon (Ynys Afallach in the Welsh versions of Historia) is also mentioned as the place where Arthur's sword Excalibur was forged. (Geoffrey's Arthur does have a sister, whose name is Anna, but whether she is a predecessor to Morgan is unknown.) In Vita Merlini, Geoffrey describes this island in more detail and names Morgen as the chief of the nine magical queen sisters who dwell there, ruling in their own right.
Morgen agrees to take Arthur, delivered to her by Taliesin, to have him revived. She and her sisters are capable of shapeshifting and flying, and (at least seemingly) use their powers only for good. Morgen is also said to be a learned mathematician, and to have taught mathematics and astronomy to her fellow nymph (nymphae) sisters, whose names are listed as Moronoe, Mazoe, Gliten, Glitonea, Gliton, Tyronoe, Thiten (Thitis), and Thiton (Thetis). In the making of this arguably Virgin Mary-type character and her sisters, Geoffrey might have been influenced by the first-century Roman cartographer Pomponius Mela, who described an oracle at the Île de Sein off the coast of Brittany and its nine virgin priestesses, believed by the continental Celtic Gauls to have the power to cure disease and perform various other awesome magic, such as controlling the sea through incantations, foretelling the future, and changing themselves into any animal. In addition, according to a theory postulated by R. S. Loomis, it is possible that Geoffrey was not the original inventor of Morgan, as the character may have already existed in Breton folklore, in hypothetical unrecorded oral stories that featured her as Arthur's fairy saviour, or even also his fairy godmother (her earliest shared supernatural ability being to traverse on or under water). Such stories, told by wandering storytellers (as credited by Gerald of Wales), would then have influenced multiple authors writing independently from each other, especially since Vita Merlini was a relatively little-known text. Geoffrey's description of Morgan is notably very similar to that in Benoît de Sainte-Maure's epic poem Roman de Troie (c. 1155–1160), a story of the ancient Trojan War in which Morgan herself makes an unexplained appearance in this second known text featuring her. As Orvan the Fairy (Orva la fée, likely a corruption of a spelling such as *Morgua in the original text), she there first lustfully loves the Trojan hero Hector and gifts him a wonderful horse, but then pursues him with hate after he rejects her. The abrupt way in which she is used suggests Benoît expected his aristocratic audience to be already familiar with her character. Another such ancient-times appearance of a Morgan character can be found in the much later Perceforest (1330s), within the fourth book, which is set in Britain during Julius Caesar's invasions; there the fairy Morgane lives on the isle of Zeeland and has learned her magic from Zephir. Here, she has a daughter named Morganette and an adoptive son named Passelion, who in turn have a son named Morgan, described as an ancestor of the Lady of the Lake. In Jaufre, an early Occitan-language Arthurian romance dated c. 1180, Morgan seems to appear, without being named other than introducing herself as the "Fairy of Gibel" (fada de Gibel; Gibel was the Arabic name of Sicily's Mount Etna, which also occurs in an Italian version of the Avalon motif in some later works). Here, she is the ruler of an underground kingdom who takes the protagonist knight Jaufre (Griflet) through a fountain to gift him her magic ring of protection. In the romance poem Lanzelet, translated by the end of the 12th century by Ulrich von Zatzikhoven from a now-lost French text, the infant Lancelot is spirited away by a water fairy (merfeine in Old High German) and raised in her paradisal island country of Meidelant ('Land of Maidens').
Ulrich's unnamed fairy queen character might also be related to Geoffrey's Morgen, as well as to the early Breton oral tradition of Morgan's figure, especially as her son there is named Mabuz, similar to the name of Modron's son Mabon ap Modron. In Layamon's Middle English poem The Chronicle of Britain (c. 1215), Arthur is taken to Avalon by two women to be healed there by its most beautiful elven queen, named Argante or Argane; it is possible her name had originally been Margan(te) before it was changed in manuscript transmission. The 12th-century French poet Chrétien de Troyes already mentions her in his first romance, Erec and Enide, completed around 1170. In it, a love of Morgan (Morgue) is Guigomar (Guingomar, Guinguemar), the Lord of the Isle of Avalon and a nephew of King Arthur, a character derivative of Guigemar from the Breton lai Guigemar by Marie de France. Guingamor's own lai links him to the beautiful magical entity known only as the "fairy mistress", who was later identified in Thomas Chestre's Sir Launfal as Dame Tryamour, the daughter of the King of the Celtic Otherworld, who shares many characteristics with Chrétien's Morgan. It has been noted that even Chrétien's earliest mention of Morgan already shows an enmity between her and Queen Guinevere, and although Morgan is represented only in a benign role by Chrétien, she resides in a mysterious place known as the Vale Perilous (which some later authors would say she had created as a place of punishment for unfaithful knights). She is later mentioned in the same poem when Arthur provides the wounded hero Erec with a healing balm made by his sister Morgan. This episode affirms her early role as a healer, in addition to being one of the first instances of Morgan presented as Arthur's sister. Healing remains Morgan's chief ability, but Chrétien also hints at her potential to harm. Chrétien again refers to Morgan as a great healer in his later romance Yvain, the Knight of the Lion, in an episode in which the Lady of Norison restores the maddened hero Yvain to his senses with a magical potion provided by Morgan the Wise (Morgue la sage). Morgan the Wise is female in Chrétien's original, as well as in the Norse version Ivens saga, but male in the English Ywain and Gawain. While the fairy Modron is the mother of Owain mab Urien in the Welsh myth, and Morgan would be assigned this role in the later literature, this first continental association between Yvain (the romances' version of Owain) and Morgan does not imply they are son and mother. The earliest mention of Morgan as Yvain's mother is found in Tyolet, an early 13th-century Breton lai. The Middle Welsh Arthurian tale Geraint son of Erbin, either based on Chrétien's Erec and Enide or derived from a common source, mentions King Arthur's chief physician, named Morgan Tud. It is believed that this character, though considered a male in Geraint, may be derived from Morgan le Fay, though this has been a matter of debate among Arthurian scholars since the 19th century (the epithet Tud may be a Welsh or Breton cognate or borrowing of Old Irish tuath, 'north, left', 'sinister, wicked', also 'fairy (fay), elf'). There, Morgan is called to treat Edern ap Nudd, Knight of the Sparrowhawk, following the latter's defeat at the hands of his adversary Geraint, and is later called on by Arthur to treat Geraint himself.
In the German version of Erec, the 12th-century knight and poet Hartmann von Aue has Erec healed by Guinevere with a special plaster that was given to Arthur by the king's sister, the goddess (gotinne) Feimurgân (Fâmurgân, Fairy Murgan). In this, Hartmann might have been influenced not by Chrétien but rather by an earlier oral tradition from the stories of Breton bards. Hartmann also separated Arthur's sister (that is, Feimurgân) from the fairy mistress of the lord of Avalon (Chrétien's Guigomar), who in his version is named Marguel. In the anonymous First Continuation of Chrétien's Perceval, the Story of the Grail, the fairy lover of its variant of Guigomar (here as Guingamuer) is named Brangepart, and the two have a son, Brangemuer, who became the king of an otherworldly isle "where no mortal lived". In the 13th-century romance Parzival, another German knight-poet, Wolfram von Eschenbach, inverted the name of Hartmann's Fâmurgân to create that of Arthur's fairy ancestor Terdelaschoye de Feimurgân, the wife of Mazadân, where the part "Terdelaschoye" comes from Terre de la Joie, or Land of Joy; the text also mentions the mountain of Fâmorgân. Jean Markale further identified a Morganian figure in Wolfram's ambiguous character of Cundrie the Sorceress (later better known as Kundry) through her plot function as mistress of illusions in an enchanted fairy garden. Speculatively, Loomis and John Matthews further identified other perceived avatars of Morgan as the "Besieged Lady" archetype in various early works associated with the Castle of Maidens motif, often appearing as the (usually unnamed) wife of King Lot and mother of Gawain. These characters include the Queen of Meidenlant in Diu Crône, the lady of Castellum Puellarum in De Ortu Waluuanii, and the nameless heroine of the Breton lai Doon, among others, including some in later works (such as Lady Lufamore of Maydenlande in Sir Perceval of Galles). Loomis also linked her to the eponymous seductress evil queen from The Queen of Scotland, a 19th-century ballad "containing Arthurian material dating back to the year 1200." A recently discovered moralistic manuscript written in Anglo-Norman French is the only known instance of medieval Arthurian literature presented as being composed by Morgan herself. This text, likely of the early 14th century, is purportedly addressed to her court official and tells the story of a knight called Piers the Fierce; it is likely that the author's motive was to draw a satirical moral from the downfall of the English knight Piers Gaveston, 1st Earl of Cornwall. Morgayne is titled in it as "empress of the wilderness, queen of the damsels, lady of the isles, and governor of the waves of the great sea." Morgan (Morganis) is also mentioned in the Draco Normannicus, a 12th-century (c. 1167–1169) Latin chronicle by Étienne de Rouen, which contains a fictitious letter addressed by King Arthur to Henry II of England, written for the political propaganda purpose of having 'Arthur' criticise King Henry for invading the Duchy of Brittany. Notably, it is one of the first known texts that made her a sister to Arthur, as she is in the works of Chrétien and many others after him. French prose cycles Morgan's role was greatly expanded by the unknown authors of the early-13th-century Old French prose romances of the Vulgate Cycle, also known as the Lancelot-Grail cycle, and its subsequent rewrite, the Prose Tristan-influenced Post-Vulgate Cycle.
(Both of these cycles are believed to be at least influenced by the Cistercian religious order, which might explain the texts' demonisation of pagan motifs and increasingly anti-sexual attitudes, although some of these attitudes may arguably be shared with the pre-Christian source material.) Integrating her figure fully into the Arthurian world, they also portray Morgan's ways and deeds as being much more sinister and aggressive than they are in Geoffrey or Chrétien, showing her undergoing a series of transformations in the process of becoming a much more chaotic and unpredictable character. Beginning as an erratic ally of Arthur and a notorious temptress opposed to his wife and some of his knights (especially Lancelot, doubling as her unrequited love interest) in the original stories of the Vulgate Cycle, Morgan's figure eventually often turns into an ambitious and depraved nemesis of King Arthur himself in the Post-Vulgate stories. A common image of Morgan becomes that of a malicious, jealous and cruel sorceress, the source of many intrigues at the royal court of Arthur and elsewhere. In some of the later works, she is also subversively working to take over Arthur's throne through her mostly harmful magic and scheming, including manipulating men. Most of the time, Morgan's magic arts correspond with those of Merlin and the Lady of the Lake, featuring shapeshifting, illusion, and sleeping spells (Richard Kieckhefer connected them with Norse magic). Some scholars even see the figure of the Lady (or Ladies) of the Lake as Morgan's split-off literary double serving as a "benevolent anti-Morgan", especially in the Post-Vulgate tradition: a largely (but not entirely) opposite character created using Morgan's copied traits. Although Morgan is usually depicted in medieval romances as beautiful and seductive, the medieval archetype of the loathly lady is used frequently, as Morgan can, in a contradictory fashion, be described as both beautiful and ugly even within the same narration. Family and upbringing This version of Morgan (usually named Morgane, Morgain or Morgue) first appears in the few surviving verses of the Old French poem Merlin, which later served as the original source for the Vulgate Cycle and consequently also the Post-Vulgate Cycle. It was written c. 1200 by the French knight-poet Robert de Boron, who described her as an illegitimate daughter of Lady Igraine with an initially unnamed Duke of Tintagel, after whose death she is adopted by King Neutres of Garlot. Merlin is the first known work linking Morgan to Igraine and mentioning her learning sorcery after having been sent away for an education. The reader is informed that Morgan was given her moniker 'la fée' ("the fairy") due to her great knowledge. A massive 14th-century prequel to the Arthurian legend, Perceforest, also implies that Arthur's sister was later named after its fée character Morgane from several centuries earlier. In the Huth Merlin version of Merlin, Morgain and Morgue la fee are introduced as two different half-sisters of Arthur who then become merged into one character later in the text. In a popular tradition, Morgan is the youngest of the daughters of Igraine and her husband, a Duke of Cornwall (or Tintagel) who is today best known as Gorlois. Her father dies in battle with the army of the British high king Uther Pendragon in a war over his wife (Morgan's mother) at the same moment that Arthur is conceived by Uther, who has infiltrated Tintagel Castle with the half-demon Merlin's magical aid.
In the poem's prose version and its continuations, she has at least two elder sisters. Various manuscripts list up to five sisters or half-sisters of Arthur, sometimes from different fathers, and some do not mention Morgan being a bastard (step)child. In the best-known version, her sisters are Elaine (Blasine) and the Queen of Orkney sometimes known as Morgause, the latter of whom is the mother of Arthur's knights Gawain, Agravain, Gaheris and Gareth by King Lot, and of the traitor Mordred by Arthur (in some romances the wife of King Lot is called Morcades, a name that R. S. Loomis argued was another name of Morgan). At a young age, Morgan is sent to a convent after Arthur's father Uther marries her mother, who later gives him a son, Arthur (which makes him Morgan's younger half-brother). There, Morgan masters the seven arts and begins her study of magic, going on to specialise in astronomie (astronomy and astrology) and healing; the Prose Merlin describes her as "wonderfully adept" and "working hard all the time." The Vulgate Suite du Merlin narration describes Morgan's unmatched beauty and her various skills and qualities of character. Schism with Guinevere and Arthur Uther (or Arthur himself in the Post-Vulgate) betroths her to his ally, King Urien of Gorre (Gore), a realm described as an Otherworldly northern British kingdom, possibly the historical Rheged (early versions have alternatively named Morgan's husband as Nentres of Garlot, who was later recast as the husband of her sister Elaine). Now a queen but unhappy with her husband, Morgan serves as a lady-in-waiting for the high queen, Arthur's newly married young wife Guinevere. At first, Morgan and the similarly young Guinevere are close friends, even wearing shared near-identical rings. However, everything changes when Morgan is caught by Guinevere in an affair with her lover Guiomar (derived from Chrétien's Guigomar). Usually, Guiomar is depicted as Guinevere's cousin (alternatively, appearing as Gaimar in the German version Lancelot und Ginevra, he is Guinevere's early lover instead of her relative). The high queen intervenes to break off their relationship to prevent the loss of honour (according to some scholarship, possibly also because of Guinevere's perception of Morgan, with her kinship and close relationship with Arthur, as a rival in political power). This incident, introduced in the Prose Merlin and expanded in the Vulgate Lancelot and the Post-Vulgate Suite du Merlin (the Huth Merlin), begins a lifelong feud between Guinevere and Morgan, who leaves the court of Camelot with all her wealth to seek out Merlin and greater powers. The pregnant Morgan later gives birth to Guiomar's son, who is not named in the story but is said to grow up to become a great knight. Morgan then either undertakes or continues her studies of dark magic under Merlin, who is enamoured of her; the details vary widely depending on the telling. In the Prose Merlin, for instance, it is Morgan who finds Merlin, whom she "loves passionately". In the Livre d'Artus, where Morgan's first lover is a knight named Bertolais, it is rather Merlin who goes to live with Morgan and her two ladies for a long time, following his betrayal by Niniane (the Lady of the Lake) with her other lover, just as Morgan wished him to do. In the Post-Vulgate Suite, Morgan had been tutored by Merlin even before her relationship with Guiomar, and later she returns to learn more. They meet at Lot's funeral, during the time when Morgan is pregnant with Yvain. 
After Merlin teaches her so much that she becomes "the wisest woman in the world", Morgan scorns and drives Merlin away by threatening to torture and kill him if he will not leave her alone, which causes him great sorrow out of his "foolish love" (fol amor) for her. In the Vulgate Lancelot, Morgan learns all her magic only from Merlin (and not in the nunnery). In any case, having finished her studies under Merlin, Morgan begins scheming her vengeance as she tries to undermine virtue and achieve Guinevere's downfall whenever she can. While Morgan's antagonistic actions in the Vulgate Cycle are motivated by her "great hatred" (grant hayne) toward Guinevere, in the Post-Vulgate Cycle, where Morgan's explicitly evil nature is directly stated and accented, she also works to destroy Arthur's rule and end his life. The most famous and important of these machinations is introduced in the Post-Vulgate Suite, where she arranges for her devoted lover Accolon to obtain the enchanted sword Excalibur as well as its protective scabbard, which had previously been entrusted to Morgan by Arthur himself, as he trusted her even more than his wife; she replaces the real ones with fakes. In a conspiracy with the villainous lord Damas, Morgan plans for Accolon to use Arthur's own magic items against him in single combat, so that she and her beloved Accolon would become the rulers. As part of her convoluted plan, both Arthur and Accolon are spirited away from their hunt with Urien by a magical boat of twelve damsels. Confident of her coming victory, Morgan also attempts to murder her sleeping husband Urien with his own sword, but in this act she is stopped by their son Yvain (Uwayne), who pardons her when she protests that she has been under the devil's power and promises to abandon her wicked ways. After Arthur nevertheless defeats and mortally wounds Accolon in the duel arranged by Morgan, her former mentor Merlin, still having feelings for her, saves her from Arthur's wrath by enabling her to escape. To avenge Accolon's death, which caused her great sorrow, Morgan again steals the scabbard from the sleeping king. Pursued by Arthur for her betrayal, Morgan throws the scabbard into a lake, before temporarily turning herself and her entourage to stone, the sight of which makes Arthur think they have already been punished by God. That action of Morgan's ultimately causes the death of Arthur, who would otherwise be protected by the scabbard's magic in his final battle. On her way out, Morgan saves a knight of Arthur's named Manassen (Manessen) from certain death when she learns that Accolon was Manessen's cousin, and enables him to kill his captor. In the same narrative, having been banished from Camelot, Morgan then retires to her lands in the magical kingdom of Gorre and then to her castle near the stronghold of Tauroc (possibly in North Wales). However, her treacherous attempts to bring about Arthur's demise in the Suite are repeatedly frustrated by the king's new sorceress advisor Ninianne (the Lady of the Lake). An iconic case of such further, very underhanded plots by Morgan to kill Arthur in the Post-Vulgate occurs when she sends him a supposed offering of peace in the form of a rich mantle cloak, but on Ninianne's advice to Arthur, Morgan's messenger maiden is made to put on the gift first, for "if she dies of it, Morgan will be angrier than at anything else that could happen to her, for she loves her with a very great love." The girl indeed falls dead, and Arthur has her body burned. 
It is possible that this motif was inspired by classical stories, such as how Medea killed her rival for Jason's affection or how Deianira sent a poisoned tunic to Hercules. The reasons for Morgan's hatred of her brother in the Post-Vulgate narrative are never fully explained, other than a "natural" extreme antipathy toward goodness on the part of the evil that she embodies. Lancelot, Tristan and other knights Morgan is often emphasised as promiscuous, even more so than her sister Morgause, as she is "so lustful and wanton that a looser [noble]woman could not have been found." In some versions, she also associates with two other lascivious enchantresses, Queen Sebile (Sedile) and the unnamed Queen of Sorestan. Together, the three "knew so much about magic, they enjoyed one another's company and always rode together and ate and drank together." Sebile and Morgan are particularly close companions, working their magic together, but they tend to fall into petty squabbles due to their rivalries and bad tempers, including a conflict between them when they both seduce Hector de Maris in the late 13th-century Prophéties de Merlin. Their friendship is further tested when a quarrel over a handsome widower named Berengier (captured by Sebile after Morgan kidnapped his child) ends in a violent attack by Sebile that leaves Morgan half-dead; Morgan swears revenge, but their relationship is later restored. After Merlin's entombment by the Lady of the Lake, Morgan and her three enchantresses also try to find and rescue him, but they fail in that task. Morgan's other allies in the Prophéties include the opponents of chivalry such as Mark and Claudas, and she enlists the help of the latter in her failed attempt to eliminate the Lady of the Lake. Morgan uses her skills in her dealings, amorous or otherwise, with several of Arthur's Knights of the Round Table. This applies in particular to the greatest of them all, Lancelot, whom she alternately tries to seduce and to expose as Guinevere's adulterous lover. Her magic aside, Lancelot is always disempowered in his dealings with Morgan, as he could never hurt a woman, which, coupled with her being his king's kin, made the Vulgate's Morgan a perfect foil for Lancelot as "the woman he most feared in the world." As told in the Prose Lancelot, they first meet in her magical domain known as the Val sans Retour (the Vale of No Return), which has served as an enchanted prison for false lovers since she took an unnamed knight as her lover but then discovered his affair with another woman. There, Lancelot frees the 250 unfaithful knights entrapped by Morgan, including her former lover Guiomar, whom she has turned to stone for his infidelity, but Morgan then captures Lancelot himself under her spell, using a magic ring, and keeps him prisoner in the hope that Guinevere will go mad or die of sorrow. She also otherwise torments Guinevere, causing her great distress and making her miserable until the Lady of the Lake gives her a ring that protects her from Morgan's power. From then on, Lancelot becomes Morgan's prime object of sexual desire, but he consistently refuses her obsessive advances due to his great love of Guinevere, even as Morgan repeatedly courts, drugs, enchants or imprisons the knight. Their one-sided relationship (as well as her interactions with Arthur) may evoke that of the goddess Morrígan and the Celtic hero Cú Chulainn. 
One time, she lets the captive Lancelot go to rescue Gawain when he promises to come back (while also giving him the company of the most beautiful of her maidens to do "whatever she could to entice him"), and he keeps his word and does return; she eventually releases him altogether after over a year, when his health falters and he is near death. On another occasion, Lancelot is captured in Cart Castle (Charyot) by Morgan and her fellow magical queens, each of whom tries to make Lancelot her lover; he refuses to choose any of them and escapes with the help of one of their maidservants, Rocedon. Another of Morgan's illicit love subjects is the rescued-but-abducted young Cornish knight Alexander the Orphan (Alisaunder le Orphelin), a cousin of Tristan and an enemy of Mark, from a later addition in the Prose Tristan as well as the Prophéties de Merlin, whom she promises to heal but who vows to castrate himself rather than pleasure her. Nevertheless, Alexander promises to defend her castle of Fair Guard (Belle Garde), where he has been held, for a year and a day, and then dutifully continues to guard it even after the castle is burned down; this eventually leads to his death. Other good knights fancied by Morgan include Alexander's relative Tristan, but her interest in him turns into a burning hatred of him and his true love Isolde after he kills her lover, as introduced in the Prose Tristan. In this story, Morgan's paramours include Huneson the Bald (Hemison in Malory's version), who is mortally wounded when he attacks the great Cornish knight out of jealousy for her attention; the knight soon dies after returning to her, and the anguished Morgan buries him in a grand tomb. In one variation, Morgan then takes revenge: she takes possession of the lance that was used to kill Huneson, enchants it, and sends it to King Mark of Cornwall, her possible lover, who years later uses it to slay Tristan. In the Prose Tristan, wherein Morgan presents herself as Arthur's full sister, she has Lamorak deliver to Arthur's court a magical drinking horn from which no unfaithful lady can drink without spilling, hoping to disgrace Guinevere by revealing her infidelity, but it is Isolde whose adultery is disclosed instead. With the same intent, when Tristan is to be Morgan's champion at a jousting tournament, she also gives him an enchanted shield depicting Arthur, Guinevere and Lancelot to deliver to Camelot. In the Vulgate Queste, after Morgan hosts her nephews Gawain, Mordred and Gaheriet to heal them, Mordred spots the images of Lancelot's passionate love for Guinevere that Lancelot had painted on her castle's walls while he was imprisoned there. Morgan shows them to Gawain and his brothers, encouraging them to take action in the name of loyalty to their king, but they decide not to do so. Later years and Avalon It is said that Morgan concentrates on witchcraft to such a degree that she goes to live in seclusion, in the exile of far-away forests. She learns more spells than any other woman, gains the ability to transform herself into any animal, and people begin to call her Morgan the Goddess (Morgain-la-déesse, Morgue la dieuesse). In the Post-Vulgate version of the Queste del Saint Graal, Lancelot has a vision of Hell where Morgan will still be able to control demons even in the afterlife as they torture Guinevere. 
In one of her castles, Tugan in Garlot, Morgan has hidden a magic book given to her by Merlin, which actually prophesied the deaths of Arthur and Gawain and who would kill them, but no one can read this passage without dying instantly. In the Vulgate La Mort le Roi Artu (The Death of King Arthur, also known as just the Mort Artu), Morgan ceases troubling Arthur and vanishes for a long time, and the king assumes her to be dead. One day, he and Sagramor wander into Morgan's incredibly beautiful castle while lost in a forest, where Arthur is received extremely well and instantly reconciles with his sister. Overjoyed with their reunion, the king allows Morgan to return to Camelot, but she refuses and declares her plan to move to the Isle of Avalon, "where the women live who know all the world's magic," so she can dwell there with these (unspecified) other sorceresses. However, disaster strikes Arthur when the sight of Lancelot's frescoes and Morgan's confession finally convince him of the truth of the rumours of the two's secret love affair (about which he had already been warned by his nephew Agravain). This leads to a great conflict between Arthur and Lancelot, which brings down the fellowship of the Round Table. At the end of the Vulgate Mort Artu, Morgan is the only one who is recognised among the black-hooded ladies who take the dying Arthur to his final rest and possible revival in Avalon. Depending on the manuscript, she is either the leading lady (usually being recognised by Griflet as the one holding Arthur's hand as he enters the boat), a subordinate to another who is unnamed, or neither of them is superior. The latter parts of the Post-Vulgate versions of the Queste and Mort both seem to revert to Morgan's friendly attitude toward Arthur from the end of the Vulgate Cycle, despite the Post-Vulgate's own characterisation of Morgan as thoroughly evil and the earlier fierce hostility between them. When Arthur steps into her boat after Camlann, assuring those present that he is not going to return, she makes no mention of Avalon or of her intentions in taking him away. His supposed grave is later said to be found mysteriously empty but for his helmet. (The Catalan poem La Faula has Morgan explain this by saying the tomb's purpose was to prevent knights from searching for Arthur.) Malory and other medieval English authors Middle English writer Thomas Malory follows Morgan's portrayals from the Old French prose cycles in his seminal late-15th-century compilation Le Morte d'Arthur (The Death of Arthur), though he reduces her role and the detail of her characterisation, in particular either removing or limiting her traditions of healing and prophecy, and making her more consistently and inherently evil than she is in most of his sources, just as he makes Merlin more benevolent. He also diminishes Morgan's conflict with Guinevere, since there is no mention of Guiomar and instead Accolon ("of Gaul") is her first named lover in a much abbreviated version of his story, but he does not clarify Morgan's motivations for her very antagonistic behaviour against Arthur. Overall, up until the war between Arthur and Lancelot and the rebellion of Mordred, it is the evil and chaotic Morgan who remains the main and constant source of direct and indirect threats to the realm. 
In Malory's backstory, Morgan has studied astrology as well as nigremancie (which might actually mean black magic in general rather than "necromancy") in the nunnery where she was raised, before being married to Urien (Uriens) as a young teenager; in this narrative she did not study with Merlin. Unlike Malory's good sorceress Nimue, Morgan deals mostly in "black" rather than "white" magic, employed usually through enchantments and potions. Her powers, however, seem to be inspired by the fairy magic of Celtic folklore rather than by medieval Christian demonology. Morgan is widely feared and hated, so much so that "many knights wished her burnt." She is now the leader of the four (not three) witch queens who capture Lancelot (the others being the Queen of the Northgales, the Queen of Eastland, and the Queen of the Outer Isles). In an episode first introduced by the anonymous writer of the earlier Prose Lancelot, Lancelot rescues Elaine of Corbenic from being trapped in an enchanted boiling bath by Morgan and the Queen of the Northgales, both envious of Elaine's great beauty (echoing Circe's treatment of Scylla). Malory also reused the magic mantle assassination plot from the Huth Merlin in a slightly modified form, with Morgan's damsel being instantly burnt to cinders by its curse when she is forced to put it on. In one of the later episodes, in "The Book of Sir Tristram de Lyons", Morgan plots an elaborate ambush after learning of the death of one of her favourites in a tournament, but Tristan ends up killing or routing thirty of her knights. Malory mentions Arthur's attempts to conquer at least one of her castles, which had originally been his own gift to her, and which he could not retake (apparently due to magical defences). Nevertheless, despite all of their prior hostility towards each other and her numerous designs directed against Arthur personally (and his own promise to take a terrible revenge on her as long as he lives), she is still redeemed and is one of the four grieving enchantress queens (the others being Nimue, marking the end of the conflict between her and Morgan, and two of Morgan's allies, the Queen of the Northgales and the Queen of the Wasteland) who arrive in a black boat to transport the wounded king to Avalon in the end. Unlike in the French and earlier stories on which Le Morte d'Arthur is based, where Morgan and Arthur usually would either have first made peace or would never have fought to begin with, here her change of attitude towards him is sudden and unexplained (similar to the Post-Vulgate). Arthur is last seen in Morgan's lap, with her lament of sorrow referring to him as her "dear brother" (dere brothir), as they disappear from the work's narrative together. In the c. 1400 English poem Alliterative Morte Arthure, Morgan appears in Arthur's dream as Lady Fortune (that is, the goddess Fortuna) with the Wheel of Fortune to warn Arthur prior to his fatal final battle, foretelling his death. She also appears in some other English texts, such as the early-13th-century Anglo-Norman Roman de Waldef, where she is only "name-dropped" as a minor character. The Middle English romance Arthour and Merlin, written around 1270, casts a villainous Morgan in the role of the Lady of the Lake and gives her a brother named Morganor, an illegitimate son of King Urien; her wondrous castle Palaus is built mostly of crystal and glass. 
Conversely, a 14th-century Middle English version of the Vulgate Mort Artu known as the Stanzaic Morte Arthur makes Morgan an unquestionably good sister of Arthur, concerned only about his honour in regard to the affair of Lancelot and Guinevere. Entering her boat (she is not named in the scene, but she addresses him as her brother), Arthur believes he is going to be healed, yet his tomb is later discovered by Bedivere. At the end of the 14th-century Middle English romance Sir Gawain and the Green Knight, one of the best-known Arthurian tales, it is revealed that the entire Green Knight plot has been instigated by Gawain's aunt, the goddess Morgan le Fay (Morgue la Faye, Morgne þe goddes), whose prior mentorship by Merlin is mentioned. Here, she is an ambiguous trickster who takes on the appearance of an elderly woman (contrasting with the beautiful Lady Bertilak, in a role evoking the loathly lady tradition), as a test for Arthur and his knights and to frighten Guinevere to death. Morgan's importance to this particular narrative has been disputed; she has been called a deus ex machina and simply an artistic device to further connect Gawain's episode to the Arthurian legend, but some regard her as a central character and the driving force of the plot. Opinions are also divided regarding Morgan's intentions and whether she succeeds or fails, and whether the story's shapeshifting and enigmatic Morgan might or might not also be Lady Bertilak herself. Other later portrayals in various countries Morgan further turns up frequently throughout the Western European literature of the High and Late Middle Ages, as well as of the Renaissance. She appears in a variety of roles, generally appearing in works related to the literary cycles of Arthur (the Matter of Britain) or Charlemagne (the Matter of France) and written mostly in various Romance languages and dialects, especially still in France but also in Italy, Spain and elsewhere. In the case of Spain, even public edicts dating from the end of the 14th and the beginning of the 15th century tell of the belief in Morgan continuing to enchant and imprison people at Tintagel and in "the Valley of False Trickery". Later standalone romances often feature Morgan as a lover and benefactor of various heroes, and yet she can also be their opponent, especially when abducting those who have turned down her amorous offers or working to separate true lovers. Such texts may also introduce additional offspring or alternate siblings for her, or connect her more closely with the figure of the Lady of the Lake. For instance, the fairy queen Lady Morgan (Dame Morgue, Morgue li fee) shows up in Adam de la Halle's late-13th-century French farce Jeu de la feuillée, in which she visits a contemporary Arras. She arrives accompanied by two of her fay sisters, named Arsile and Maglore, to dispense enchantment gifts to and curses upon several characters, including the author himself, and in the course of the story turns her love from the local mortal (and unfaithful) knight Robert back to her previous lover Hellequin (Hellekin), a demonic prince of Faerie who has been trying to woo her back. Hellequin's character in this case may be connected in some way to Arthur, who like him sometimes also figures as the leader of the Wild Hunt. 
In Thomas III of Saluzzo's Le Chevalier Errant, the fairy Morgan (la fée Morgane) holds the eponymous Wandering Knight captive inside a magnificent castle in her forest realm of Païenie ('Pagania'), until messengers from her brother Arthur arrive with a request to lift her enchantment and let him go, to which she agrees. Loosely drawing from the Vulgate Cycle, the anonymous Old French Li Romans de Claris et Laris, better known simply as Claris and Laris (c. 1270), has its Morgan (Morgane la Faye) as a fairy sister of Arthur as well as a former pupil of the Lady of the Lake, Viviane. Ever lascivious and sexual, Morgan lives in a splendid enchanted castle in the wilderness (identified as Brocéliande in a later manuscript) with twelve other beautiful fairy ladies, including the sorceress Madoine. There, they lure and ensnare many hundreds of young and attractive knights, who then spend the rest of their lives in the palace. A human Morgan is named Dioneta in the 14th-century Welsh fragment known as The Birth of Arthur, where she is a sister of both Gwyar (Morgause) and Gwalchmei (Gawain), as well as of the other sisters Gracia and Graeria, and is sent off by Uther to Avallach (Avalon). The island of Avalon is often described as an otherworldly place ruled by Morgan in other later texts from all over Western Europe, especially those written in Iberia. In the 14th-century French Crusade fantasy Le Bâtard de Bouillon, the island kingdom of Arthur and his fairy sister Morgan the Beautiful is hidden by a cloud in the Red Sea, where it is visited by King Bauduins (Baldwin II of Jerusalem). In his 14th-century Catalan poem La faula, Guillem de Torroella writes about having visited the Enchanted Isle and met Arthur, who has been brought back to life by the fay Morgan (Morgan la feya, Morguan la fea); both of them are now forever young due to the power of the Holy Grail. In the 15th-century Valencian romance Tirant lo Blanc, the noble Queen Morgan searches the world for her missing brother. Finally finding him entranced in Constantinople, Morgan brings Arthur back to his senses by removing Excalibur from his hands, after which they celebrate and leave for Avalon. The Castilian Arderique begins where the Mort Artu ends, that is, with the departure and disappearance of Arthur and his sister Morgaina, described there as a fairy necromancer, after the battle with Mordred. Another Spanish work, Francisco de Enciso Zárate's Florambel de Lucea (1532), features a later appearance of Arthur together with his sister Morgaina, "better known as Morgana the fairy" (fada Morgana), who explains how she saved her brother and gifts Excalibur to the eponymous hero Florambel. In Tristán de Leonis, Morgana offers her love to Tristan. In the rondalla ('folk tale' in Catalan) La fada Morgana, the protagonist Joana ends up marrying Beuteusell, the son of the fairy queen Morgana, after passing his mother's test with his help. In the legend of the Paladins of Charlemagne, she is most associated with one of the Paladins, the Danish folklore hero Ogier the Dane: following his initial epics, when he is 100 years old, the fairy queen Morgan restores him to his youthful form but removes his memory, then takes him to her mystical island palace in Avalon (where Arthur and Gawain are also still alive) to be her lover for 200 years. She later protects him during his adventures in the mortal world as he defends France from Muslim invasion, before his eventual return to Avalon. In some accounts, Ogier begets two sons by her, including Marlyn (Meurvin). 
In the 14th-century pseudo-chronicle Ly Myreur des Histors, written by the French-Belgian author Jean d'Outremeuse, one of their sons is a giant, and they live in a palace made of jewels. In the 13th-century chanson de geste story of another Paladin, Huon of Bordeaux, Morgan is a protector of the eponymous hero and the mother of the fairy king Oberon by none other than Julius Caesar. In the 14th-century Ogier le Danois, a prose redaction of the epic poem Roman d'Ogier, Morgue la Fée lives in her palace in Avalon together with Arthur and Oberon, who both seem to be her brothers. Variants of Ogier's and Huon's stories typically involve Morgan, Arthur, and Oberon (Auberon) all living in a fairyland where time passes much more slowly than in the human world. Such works include the 14th-century French Tristan de Nanteuil and the Chanson de Lion de Bourges, the 15th-century French Mabrien, and John Bourchier's 16th-century English The Boke of Duke Huon of Burdeux, in which Arthur's sister Morgan is the mother not of Oberon but of Merlin. In another French chanson de geste, the early-13th-century La Bataille Loquifer, the fays Morgan (Morgue) and her sister Marsion (Marrion) bring the Saracen hero Renoart (Renouart, Rainouart) to Avalon, where Arthur is the king. Renoart falls in love with Morgan and impregnates her with his illegitimate son named Corbon (Corbans), "a live devil who did nothing but evil." When Renoart jilts her and escapes to rescue his other son Maileffer, Morgan sends her demonic monster servant Kapalu (a character derived from Cath Palug of the Welsh legends) after him; the shipwrecked Renoart ends up luckily rescued by a mermaid. The 14th-century Italian romance titled La Pulzella Gaia (The Merry Maiden) features the titular beautiful young fairy, a daughter of Morgana (the Italian version of Morgan's name, here too a sister of the Lady of the Lake) by Hemison. In her own tale, Morgana's daughter defeats Gawain (Galvano) in her giant serpent form before becoming his lover; she and her fairy army then save Gawain from the jealous Guinevere, who wants Gawain dead after having been spurned by him. She herself is then imprisoned in magical torment in her mother's glass-and-diamond castle of Pela-Orso, because Morgana wanted to force her to marry Tristan. Eventually, Gawain storms the castle after three years of siege and frees her from the cursed dungeon, also capturing her tyrannical mother for the same punishment. The 15th-century Italian compilation of the Arthur and Tristan legends, La Tavola Ritonda (The Round Table), also makes Morgan a sister to the Lady of the Lake as well as to Arthur (about whose fate it says Morgan "brought him away to a little island in the sea; and there he died of his wounds, and the fairy buried him on that island"). It is based on the French prose romances, but here Morgan is a prophetic figure whose main role is to ensure the fulfilment of fate. Her daughter also appears, as Gaia Donzella, in the Tavola Ritonda, where she is kidnapped by the knight Burletta of the Desert (Burletta della Diserta), who intends to rape her, but she is rescued by Lancelot. The Italian Morgana appears in a number of cantari poems of the 14th and 15th centuries. 
Some of these are original new episodes, such as the Cantari di Tristano group's Cantare di Astore e Morgana, in which Morgana heals the wounded Hector de Maris (Astore) but turns him evil, giving him armour made in Hell as well as a magical ship for her revenge plot against Gawain and Arthur himself, and the Cantari del Falso Scudo, which features her evil fairy son, the Knight of the False Shield, who ends up slain by Galahad. Others include Lasencis, a standalone version of the Tavola Ritonda story of the eponymous Corsican knight armed by Morgan with enchanted weapons to avenge his brother killed by Lancelot, and yet another telling of the familiar story of Morgana's good fairy daughter, titled the Ponzela Gaia. Evangelista Fossa combined and retold some of those in his Innamoramento di Galvano (Gawain Falling in Love, c. 1494). Morgan le Fay, or Fata Morgana in Italian, has been particularly associated with Sicily as a location of her enchanted realm in the mythological landscape of medieval Europe (at least since the Norman conquest of southern Italy), and local folklore describes her as living in a magical castle located at or floating over Mount Etna. As such, she has given her name, since the 14th century, to the form of mirage common off the shores of Sicily, the Fata Morgana. References linking Avalon to Sicily can be found in Otia Imperialia (c. 1211) and La faula, as well as in Breton and Provençal literature, for example in the aforementioned Jaufre and La Bataille Loquifer. The 13th-century Chrétien-inspired romance Floriant et Florete locates Morgan's secret mountain castle at Mongibel (also Montgibel or Montegibel, derived from the Arabic name for Etna), where, in the role of a fairy godmother, Morgane, together with two other fays, spirits away and raises Floriant, a son of a murdered Sicilian king and the hero of the story. Floriant, with the help of her magic ship, eventually reunites with Morgane at her castle when he returns there with his wife Florete. The 15th-century French romance Le Chevalier du Papegau (The Knight of the Parrot) gives Morgaine the Fairy of Montgibel (Morgaine, la fée de Montgibel, as she is also known in Floriant et Florete) a sister known as the Lady Without Pride (la Dame sans Orgueil), whom Arthur saves from the evil Knight of the Wasteland (similar to the story in the Tavola Ritonda). Meanwhile, the Fastnachtspiel (Ain Hupsches Vasnacht Spill von Künig Artus), a German retelling of the enchanted horn episode, moves Morgan's Mediterranean island domain to the east of Sicily, referring to her only as the Queen of Cyprus. During the Italian Renaissance, Morgan was primarily featured in relation to the cycle of epic poems of Orlando (based on Roland, the paladin of the historical Charlemagne). In Matteo Maria Boiardo's late-15th-century Orlando Innamorato, fata Morgana (initially appearing as lady Fortune) is a beautiful but wicked fairy enchantress, a sister of King Arthur and a pupil of Merlin. Morgana lives in her paradise-like garden in a crystal cavern under a lake, plotting to eventually destroy the entire world. There, she abducts her favourites until she is thwarted by Orlando, who chases, defeats and captures Morgana, destroying her underwater prison and letting her keep only one of her forced lovers, a knight named Ziliante. 
In Ludovico Ariosto's continuation of this tale, Orlando Furioso (1532), Morgana is revealed as a twin sister of two other sorceresses, the good Logistilla and the evil Alcina; Orlando again defeats Morgana, rescuing Ziliante, who has been turned into a dragon, and forces Morgana to swear by her lord Demogorgon to abandon her plots. The story also features the medieval motif in which a magic horn is used to convince Arthur of the infidelity of his queen (Geneura), here successfully. Bernardo Tasso's L'Amadigi (1560) further introduces Morgana's three daughters: Carvilia, Morganetta, and Nivetta, themselves temptresses of knights. Morgan's other 16th-century appearances include those of Morgue la fée in François Rabelais' French satirical chronicle Les grandes chroniques du grand et énorme géant Gargantua (1532) and of the good Morgana in Erasmo di Valvasone's Italian didactic poem La caccia (1591). In Edmund Spenser's English epic poem The Faerie Queene (1590), Argante (Layamon's name for Morgan) is a lustful giantess queen of the "secret Ile", evoking the Post-Vulgate story of Morgan's kidnapping of Sir Alexander. It also features three other counterpart characters: Acrasia, Duessa, and Malecasta, all representing different themes from Malory's description of Morgan. Morgan might have also inspired the characters of the healer Loosepaine and the fay Oriande in the Scots-language poem Greysteil, possibly originally written in 15th-century England. Modern culture The character of Morgan has become ubiquitous in works of the modern era, spanning fantasy, historical fiction and other genres across various mediums, especially since the mid-20th century. See also King Arthur's family Margot the fairy Medieval female sexuality Medea Notes References Citations Bibliography External links Morgan le Fay at The Camelot Project Arthurian characters Fairy royalty Female characters in literature Female literary villains Fictional astronomers Fictional characters introduced in the 12th century Fictional immortals Fictional Christian nuns Fictional giants Fictional goddesses Fictional kidnappers Fictional mathematicians Fictional prophets Fictional shapeshifters Fictional Welsh people Fictional characters who use magic Family of King Arthur Literary archetypes Matter of France Merlin Mythological princesses Mythological queens Possibly fictional people from Europe Supernatural legends Witches in folklore
Morgan le Fay
[ "Astronomy" ]
13,775
[ "Astronomers", "Fictional astronomers" ]
352,631
https://en.wikipedia.org/wiki/Messier%2061
Messier 61 (also known as M61, NGC 4303, or the Swelling Spiral Galaxy) is an intermediate barred spiral galaxy in the Virgo Cluster of galaxies. It was first discovered by Barnaba Oriani on May 5, 1779, six days before Charles Messier catalogued the same galaxy; Messier had observed it on the same night as Oriani but had at first mistaken it for a comet. Its distance has been estimated to be 45.61 million light years from the Milky Way Galaxy. It is a member of the M61 Group of galaxies, which is a member of the Virgo II Groups, a series of galaxies and galaxy clusters strung out from the southern edge of the Virgo Supercluster. Properties M61 is one of the largest members of the Virgo Cluster, and belongs to a smaller subgroup known as the S Cloud. The morphological classification of SAB(rs)bc indicates a weakly barred spiral (SAB) with the suggestion of a ring structure (rs) and moderately to loosely wound spiral arms. It has an active galactic nucleus and is classified as a starburst galaxy containing a massive nuclear star cluster with an estimated mass of 10^5 solar masses and an age of 4 million years, as well as a candidate central supermassive black hole. It cohabits with an older massive star cluster as well as a likely older starburst. Evidence of significant star formation and active bright nebulae appears across M61's disk. Unlike most late-type spiral galaxies within the Virgo Cluster, M61 shows an unusual abundance of neutral hydrogen (H I). Supernovae Eight supernovae have been observed in M61, making it one of the most prodigious galaxies for such cataclysmic events. These include: SN 1926A (Type II, mag. 14) was discovered by Max Wolf and Karl Wilhelm Reinmuth on 9 May 1926. SN 1961I (Type II, mag. 13) was discovered by Milton Humason on 3 June 1961. SN 1964F (Type II, mag. 14) was discovered by Leonida Rosino on 30 June 1964. SN 1999gn (Type II, mag. 16) was discovered by Alessandro Dimai on 17 December 1999. SN 2006ov (Type II, mag. 14.9) was discovered by Kōichi Itagaki on 24 November 2006. SN 2008in (Type II, mag. 14.9) was discovered by Kōichi Itagaki on 26 December 2008. SN 2014dt (Type Ia-pec, mag. 13.6) was discovered by Kōichi Itagaki on 29 October 2014. SN 2020jfo (Type II, mag. 16) was discovered by the Zwicky Transient Facility on 6 May 2020. Gallery See also List of Messier objects References External links messier.seds.org/m/m061.html Intermediate spiral galaxies Messier 061
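As a rough illustration (a sketch using only the figures quoted above, with interstellar extinction and the uncertainty of the distance estimate ignored), the galaxy's distance and a supernova's apparent magnitude can be combined through the standard distance-modulus relation M = m - 5*log10(d / 10 pc) to estimate an absolute magnitude:

```python
import math

LY_PER_PC = 3.2616            # light years per parsec
d_pc = 45.61e6 / LY_PER_PC    # the article's distance, converted to parsecs

def absolute_magnitude(apparent_mag: float, distance_pc: float) -> float:
    # Distance modulus: M = m - 5*log10(d / 10 pc); extinction ignored.
    return apparent_mag - 5 * math.log10(distance_pc / 10)

# e.g. SN 2014dt, observed at apparent magnitude 13.6:
print(round(absolute_magnitude(13.6, d_pc), 1))  # roughly -17.1
```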
Messier 61
[ "Astronomy" ]
635
[ "Virgo (constellation)", "Constellations" ]
352,709
https://en.wikipedia.org/wiki/Feistel%20cipher
In cryptography, a Feistel cipher (also known as Luby–Rackoff block cipher) is a symmetric structure used in the construction of block ciphers, named after the German-born physicist and cryptographer Horst Feistel, who did pioneering research while working for IBM; it is also commonly known as a Feistel network. A large number of block ciphers use the scheme, including the US Data Encryption Standard, the Soviet/Russian GOST and the more recent Blowfish and Twofish ciphers. In a Feistel cipher, encryption and decryption are very similar operations, and both consist of iteratively running a function called a "round function" a fixed number of times. History Many modern symmetric block ciphers are based on Feistel networks. Feistel networks were first seen commercially in IBM's Lucifer cipher, designed by Horst Feistel and Don Coppersmith in 1973. Feistel networks gained respectability when the U.S. Federal Government adopted the DES (a cipher based on Lucifer, with changes made by the NSA) in 1976. Like other components of the DES, the iterative nature of the Feistel construction makes implementing the cryptosystem in hardware easier (particularly on the hardware available at the time of DES's design). Design A Feistel network uses a round function, a function which takes two inputs (a data block and a subkey) and returns one output of the same size as the data block. In each round, the round function is run on half of the data to be encrypted, and its output is XORed with the other half of the data. This is repeated a fixed number of times, and the final output is the encrypted data. An important advantage of Feistel networks compared to other cipher designs such as substitution–permutation networks is that the entire operation is guaranteed to be invertible (that is, encrypted data can be decrypted), even if the round function is not itself invertible. The round function can be made arbitrarily complicated, since it does not need to be designed to be invertible. Furthermore, the encryption and decryption operations are very similar, even identical in some cases, requiring only a reversal of the key schedule. Therefore, the size of the code or circuitry required to implement such a cipher is nearly halved. Unlike substitution-permutation networks, Feistel networks also do not depend on a substitution box that could cause timing side-channels in software implementations. Theoretical work The structure and properties of Feistel ciphers have been extensively analyzed by cryptographers. Michael Luby and Charles Rackoff analyzed the Feistel cipher construction and proved that if the round function is a cryptographically secure pseudorandom function, with Ki used as the seed, then 3 rounds are sufficient to make the block cipher a pseudorandom permutation, while 4 rounds are sufficient to make it a "strong" pseudorandom permutation (which means that it remains pseudorandom even to an adversary who gets oracle access to its inverse permutation). Because of this very important result of Luby and Rackoff, Feistel ciphers are sometimes called Luby–Rackoff block ciphers. Further theoretical work has generalized the construction somewhat and given more precise bounds for security. Construction details Let F be the round function and let K_0, K_1, ..., K_n be the sub-keys for the rounds 0, 1, ..., n respectively. Then the basic operation is as follows: Split the plaintext block into two equal pieces: (L_0, R_0). For each round i = 0, 1, ..., n, compute L_{i+1} = R_i and R_{i+1} = L_i ⊕ F(R_i, K_i), where ⊕ means XOR. Then the ciphertext is (R_{n+1}, L_{n+1}). 
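A minimal sketch of this construction in Python may make the round structure concrete; it is purely illustrative, not a real cipher (the 8-bit half-blocks, the toy round function, and the sample subkeys are all invented for demonstration). Note that decryption, described next, is the same loop run with the subkeys in reverse order:

```python
def toy_round_function(half: int, subkey: int) -> int:
    # An arbitrary, non-invertible mixing of an 8-bit half-block with a subkey.
    # The Feistel structure stays invertible even though this function is not.
    return ((half * 31 + 7) ^ subkey) & 0xFF

def feistel(left: int, right: int, subkeys) -> tuple:
    # Split the block into halves (L0, R0); for each round i:
    #   L_{i+1} = R_i,  R_{i+1} = L_i XOR F(R_i, K_i)
    for k in subkeys:
        left, right = right, left ^ toy_round_function(right, k)
    # The output is (R_{n+1}, L_{n+1}): the halves are swapped at the end.
    return right, left

keys = [0x3A, 0xC5, 0x7E, 0x12]             # four toy 8-bit subkeys
ct = feistel(0xAB, 0xCD, keys)              # encrypt the block (0xAB, 0xCD)
pt = feistel(ct[0], ct[1], reversed(keys))  # same network, reversed subkey order
assert pt == (0xAB, 0xCD)                   # round-trips back to the plaintext
```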
Decryption of a ciphertext (R_{n+1}, L_{n+1}) is accomplished by computing R_i = L_{i+1} and L_i = R_{i+1} ⊕ F(L_{i+1}, K_i) for i = n, n−1, ..., 0. Then (L_0, R_0) is the plaintext again. The diagram illustrates both encryption and decryption. Note the reversal of the subkey order for decryption; this is the only difference between encryption and decryption. Unbalanced Feistel cipher Unbalanced Feistel ciphers use a modified structure where L_0 and R_0 are not of equal lengths. The Skipjack cipher is an example of such a cipher. The Texas Instruments digital signature transponder uses a proprietary unbalanced Feistel cipher to perform challenge–response authentication. The Thorp shuffle is an extreme case of an unbalanced Feistel cipher in which one side is a single bit. This has better provable security than a balanced Feistel cipher but requires more rounds. Other uses The Feistel construction is also used in cryptographic algorithms other than block ciphers. For example, the optimal asymmetric encryption padding (OAEP) scheme uses a simple Feistel network to randomize ciphertexts in certain asymmetric-key encryption schemes. A generalized Feistel algorithm can be used to create strong permutations on small domains of size not a power of two (see format-preserving encryption). Feistel networks as a design component Whether the entire cipher is a Feistel cipher or not, Feistel-like networks can be used as a component of a cipher's design. For example, MISTY1 is a Feistel cipher using a three-round Feistel network in its round function, Skipjack is a modified Feistel cipher using a Feistel network in its G permutation, and Threefish (part of Skein) is a non-Feistel block cipher that uses a Feistel-like MIX function. List of Feistel ciphers Feistel or modified Feistel: Blowfish Camellia CAST-128 DES FEAL GOST 28147-89 ICE KASUMI LOKI97 Lucifer MAGENTA MARS MISTY1 RC5 Simon TEA Triple DES Twofish XTEA Generalised Feistel: CAST-256 CLEFIA MacGuffin RC2 RC6 Skipjack SMS4 See also Cryptography Stream cipher Substitution–permutation network Lifting scheme for discrete wavelet transform has pretty much the same structure Format-preserving encryption Lai–Massey scheme References Cryptography Feistel ciphers
Feistel cipher
[ "Mathematics", "Engineering" ]
1,243
[ "Applied mathematics", "Cryptography", "Cybersecurity engineering" ]
352,711
https://en.wikipedia.org/wiki/Zoom%20lens
A zoom lens is a system of camera lens elements for which the focal length (and thus angle of view) can be varied, as opposed to a fixed-focal-length (FFL) lens (prime lens). A true zoom lens or optical zoom lens is a type of parfocal lens, one that maintains focus when its focal length changes. Most consumer zoom lenses do not maintain perfect focus, but are still nearly parfocal. Most camera phones that are advertised as having optical zoom actually use a few cameras of different but fixed focal length, combined with digital zoom to make a hybrid system. The convenience of variable focal length comes at the cost of complexity – and some compromises on image quality, weight, dimensions, aperture, autofocus performance, and cost. For example, all zoom lenses suffer from at least slight, if not considerable, loss of image resolution at their maximum aperture, especially at the extremes of their focal length range. This effect is evident in the corners of the image, when displayed in a large format or high resolution. The greater the range of focal length a zoom lens offers, the more exaggerated these compromises must become. Characteristics Zoom lenses are often described by the ratio of their longest to shortest focal lengths. For example, a zoom lens with focal lengths ranging from 100 mm to 400 mm may be described as a 4:1 or "4×" zoom. The term superzoom or hyperzoom is used to describe photographic zoom lenses with very large focal length factors, typically more than 5× and ranging up to 19× in SLR camera lenses and 22× in amateur digital cameras. This ratio can be as high as 300× in professional television camera lenses. As of 2009, photographic zoom lenses beyond about 3× cannot generally produce imaging quality on par with prime lenses. Constant fast aperture zooms (usually f/2.8 or f/2.0) are typically restricted to this zoom range. Quality degradation is less perceptible when recording moving images at low resolution, which is why professional video and TV lenses are able to feature high zoom ratios. High zoom ratio TV lenses are complex, with dozens of optical elements, and are often very heavy. Digital photography can also accommodate algorithms that compensate for optical flaws, both within in-camera processors and in post-production software. Some photographic zoom lenses are long-focus lenses, with focal lengths longer than a normal lens, some are wide-angle lenses (wider than normal), and others cover a range from wide-angle to long-focus. Lenses in the latter group of zoom lenses, sometimes referred to as "normal" zooms, have displaced the fixed focal length lens as the popular one-lens selection on many contemporary cameras. The markings on these lenses usually say W and T for "Wide" and "Telephoto". The telephoto designation is used because the negative diverging lens allows the focal length to be longer than the physical length of the lens assembly (the negative diverging lens acting as the "telephoto group"). Some digital cameras allow cropping and enlarging of a captured image, in order to emulate the effect of a longer focal length zoom lens (narrower angle of view). This is commonly known as digital zoom and produces an image of lower optical resolution than optical zoom. Exactly the same effect can be obtained by using digital image processing software on a computer to crop the digital image and enlarge the cropped area. Many digital cameras have both, combining them by first using the optical, then the digital zoom. 
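The ratio arithmetic above, and the way focal length translates into angle of view, can be illustrated with a short calculation. This is only a sketch: the 43.3 mm full-frame sensor diagonal is an assumed parameter, and the thin-lens, infinity-focus approximation is used.

```python
import math

def zoom_ratio(short_mm: float, long_mm: float) -> float:
    # e.g. a 100-400 mm lens is a 4x ("4:1") zoom.
    return long_mm / short_mm

def diagonal_angle_of_view(focal_mm: float, sensor_diag_mm: float = 43.3) -> float:
    # Thin-lens approximation, focused at infinity; 43.3 mm is the assumed
    # full-frame sensor diagonal. Returns the angle of view in degrees.
    return math.degrees(2 * math.atan(sensor_diag_mm / (2 * focal_mm)))

print(zoom_ratio(100, 400))                   # 4.0
print(round(diagonal_angle_of_view(100), 1))  # about 24.4 degrees
print(round(diagonal_angle_of_view(400), 1))  # about 6.2 degrees
```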
Zoom and superzoom lenses are commonly used with still, video, and motion picture cameras, projectors, some binoculars, microscopes, telescopes, telescopic sights, and other optical instruments. In addition, the afocal part of a zoom lens can be used as a telescope of variable magnification to make an adjustable beam expander. This can be used, for example, to change the size of a laser beam so that the irradiance of the beam can be varied. History Early forms of zoom lenses were used in optical telescopes to provide continuous variation of the magnification of the image, and this was first reported in the proceedings of the Royal Society in 1834. Early patents for telephoto lenses also included movable lens elements which could be adjusted to change the overall focal length of the lens. Lenses of this kind are now called varifocal lenses, since when the focal length is changed, the position of the focal plane also moves, requiring refocusing of the lens after each change. The first true zoom lens, which retained near-sharp focus while the effective focal length of the lens assembly was changed, was patented in 1902 by Clile C. Allen (). An early use of the zoom lens in cinema can be seen in the opening shot of the 1927 movie It, starring Clara Bow. The first industrial production was the Bell and Howell Cooke "Varo" 40–120 mm lens for 35mm movie cameras, introduced in 1932. The most impressive early TV zoom lens was the VAROTAL III from Rank Taylor Hobson of the UK, built in 1953. The Kilfitt 36–82 mm/2.8 Zoomar lens introduced in 1959 was the first varifocal lens in regular production for still 35mm photography. The first modern film zoom lens, the Pan-Cinor, was designed around 1950 by Roger Cuvillier, a French engineer working for SOM-Berthiot. It had an optical compensation zoom system. In 1956, Pierre Angénieux introduced the mechanical compensation system, enabling precise focus while zooming; his 17–68 mm lens for 16mm film was released in 1958. The same year, a prototype of the 35mm version of the Angénieux 4× zoom, the 35–140 mm, was first used by cinematographer Roger Fellous for the production of Julie La Rousse. Angénieux received a 1964 technical award from the Academy of Motion Pictures for the design of the 10-to-1 zoom lenses, including the 12–120 mm for 16mm film cameras and the 25–250 mm for 35mm film cameras. Because of their relative bulk, it was not until 1986 that a zoom lens of sufficiently compact dimensions was designed and finally found its way into a consumer compact (point and shoot) camera, this being the Pentax Zoom 70. Since then, advances in optical lens design, particularly the use of computers for optical ray tracing, have made the design and construction of zoom lenses much easier, and they are now used widely in professional and amateur photography. Design There are many possible designs for zoom lenses, the most complex ones having upwards of thirty individual lens elements and multiple moving parts. Most, however, follow the same basic design. Generally they consist of a number of individual lenses that may be either fixed or slide axially along the body of the lens. While the magnification of a zoom lens changes, it is necessary to compensate for any movement of the focal plane to keep the focused image sharp. This compensation may be done by mechanical means (moving the complete lens assembly while the magnification of the lens changes) or optically (arranging the position of the focal plane to vary as little as possible while the lens is zoomed). 
A simple scheme for a zoom lens divides the assembly into two parts: a focusing lens similar to a standard, fixed-focal-length photographic lens, preceded by an afocal zoom system, an arrangement of fixed and movable lens elements that does not focus the light, but alters the size of a beam of light travelling through it, and thus the overall magnification of the lens system. In this simple optically compensated zoom lens, the afocal system consists of two positive (converging) lenses of equal focal length (lenses L1 and L3) with a negative (diverging) lens (L2) between them, with an absolute focal length less than half that of the positive lenses. Lens L3 is fixed, but lenses L1 and L2 can be moved axially in a particular non-linear relationship. This movement is usually performed by a complex arrangement of gears and cams in the lens housing, although some modern zoom lenses use computer-controlled servos to perform this positioning. While the negative lens L2 moves from the front to the back of the lens, the lens L1 moves forward and then backward in a parabolic arc. In doing so, the overall angular magnification of the system varies, changing the effective focal length of the complete zoom lens. At each of the three points shown, the three-lens system is afocal (neither diverging nor converging the light), and hence does not alter the position of the focal plane of the lens. Between these points, the system is not exactly afocal, but the variation in focal plane position can be small enough (about ±0.01 mm in a well-designed lens) not to make a significant change to the sharpness of the image. An important issue in zoom lens design is the correction of optical aberrations (such as chromatic aberration and, in particular, field curvature) across the whole operating range of the lens; this is considerably harder in a zoom lens than in a fixed lens, which needs only to correct the aberrations for one focal length. This problem was a major reason for the slow uptake of zoom lenses, with early designs being considerably inferior to contemporary fixed lenses and usable only with a narrow range of f-numbers. Modern optical design techniques have enabled the construction of zoom lenses with good aberration correction over widely variable focal lengths and apertures. Whereas lenses used in cinematography and video applications are required to maintain focus while the focal length is changed, there is no such requirement for still photography and for zoom lenses used as projection lenses. Since it is harder to construct a lens that does not change focus with the same image quality as one that does, the latter applications often use lenses that require refocusing once the focal length has changed (and thus strictly speaking are varifocal lenses, not zoom lenses). As most modern still cameras are autofocusing, this is not a problem. Designers of zoom lenses with large zoom ratios often trade one or more aberrations for higher image sharpness. For example, a greater degree of barrel and pincushion distortion is tolerated in lenses that span the focal length range from wide angle to telephoto with a zoom ratio of 10× or more than would be acceptable in a fixed focal length lens or a zoom lens with a lower ratio. Although modern design methods have been continually reducing this problem, barrel distortion of greater than one percent is common in these large-ratio lenses. 
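The afocal behaviour described above can be checked numerically with standard ray-transfer (ABCD) matrices. The sketch below is illustrative only: the focal lengths (+100 mm, -40 mm, +100 mm) and the spacings scanned are invented for demonstration and do not correspond to any real design. For a system matrix [[A, B], [C, D]], the system is afocal when C = 0, and the D entry then gives the angular magnification (a negative sign indicating image inversion):

```python
def lens(f_mm):
    # Ray-transfer matrix of a thin lens of focal length f.
    return ((1.0, 0.0), (-1.0 / f_mm, 1.0))

def gap(d_mm):
    # Ray-transfer matrix of free propagation over distance d.
    return ((1.0, d_mm), (0.0, 1.0))

def mul(m, n):
    # 2x2 matrix product m @ n.
    return ((m[0][0]*n[0][0] + m[0][1]*n[1][0], m[0][0]*n[0][1] + m[0][1]*n[1][1]),
            (m[1][0]*n[0][0] + m[1][1]*n[1][0], m[1][0]*n[0][1] + m[1][1]*n[1][1]))

def system(t1, t2, f1=100.0, f2=-40.0, f3=100.0):
    # Light meets L1 first, so L1's matrix is applied first (rightmost).
    m = lens(f1)
    for part in (gap(t1), lens(f2), gap(t2), lens(f3)):
        m = mul(part, m)
    return m

def afocal_t1(t2):
    # The C entry is affine in t1, so one linear solve finds the spacing
    # of L1 that keeps the three-lens system afocal for a given t2.
    c0 = system(0.0, t2)[1][0]
    c1 = system(1.0, t2)[1][0]
    return -c0 / (c1 - c0)

# As L2 translates (t2 changes), L1 must follow a non-linearly related
# position (t1), and the angular magnification of the afocal system varies.
for t2 in (80.0, 100.0, 110.0):
    t1 = afocal_t1(t2)
    mag = system(t1, t2)[1][1]
    print(f"t2={t2:5.1f}  t1={t1:6.2f}  angular magnification={mag:+.2f}")
```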
Another price paid is that at the extreme telephoto setting of the lens, the effective focal length changes significantly when the lens is focused on closer objects. The apparent focal length can more than halve while the lens is focused from infinity to medium close-up. To a lesser degree, this effect is also seen in fixed focal length lenses that move internal lens elements, rather than the entire lens, to effect changes in magnification. Varifocal lens Many so-called "zoom" lenses, particularly in the case of fixed-lens cameras, are actually varifocal lenses, which gives lens designers more flexibility in optical design trade-offs (focal length range, maximal aperture, size, weight, cost) than true parfocal zoom, and which is practical because of autofocus, and because the camera processor can move the lens to compensate for the change in the position of the focal plane while changing magnification ("zooming"), making operation essentially the same as that of a true parfocal zoom. See also Zooming (filmmaking) Pan tilt zoom camera (PTZ) Professional video camera Zoomar lens Superzoom By focal length Wide-angle lens Normal lens Telephoto lens References Citations Sources Kingslake, R. (1960), "The development of the zoom lens". Journal of the SMPTE 69, 534 Clark, A.D. (1973), Zoom Lenses, Monographs on Applied Optics No. 7. Adam Hildger (London). Malacara, Daniel and Malacara, Zacarias (1994), Handbook of Lens Design. Marcel Dekker, Inc. "What is Inside a Zoom Lens?". Adaptall-2.com. 2005. Audiovisual introductions in 1932 Photographic lenses Telescopes Television terminology
Zoom lens
[ "Astronomy" ]
2,554
[ "Telescopes", "Astronomical instruments" ]
352,714
https://en.wikipedia.org/wiki/Germ%20%28mathematics%29
In mathematics, the notion of a germ of an object in/on a topological space is an equivalence class of that object and others of the same kind that captures their shared local properties. In particular, the objects in question are mostly functions (or maps) and subsets. In specific implementations of this idea, the functions or subsets in question will have some property, such as being analytic or smooth, but in general this is not needed (the functions in question need not even be continuous); it is however necessary that the space on/in which the object is defined is a topological space, in order that the word local has some meaning. Name The name is derived from cereal germ in a continuation of the sheaf metaphor, as a germ is (locally) the "heart" of a function, as it is for a grain. Formal definition Basic definition Given a point x of a topological space X, and two maps f, g : X → Y (where Y is any set), then f and g define the same germ at x if there is a neighbourhood U of x such that restricted to U, f and g are equal; meaning that f(u) = g(u) for all u in U. Similarly, if S and T are any two subsets of X, then they define the same germ at x if there is again a neighbourhood U of x such that S ∩ U = T ∩ U. It is straightforward to see that defining the same germ at x is an equivalence relation (be it on maps or sets), and the equivalence classes are called germs (map-germs, or set-germs accordingly). The equivalence relation is usually written f ∼x g. Given a map f on X, then its germ at x is usually denoted [f]x. Similarly, the germ at x of a set S is written [S]x. Thus, [f]x = {g : g ∼x f}. A map germ at x in X that maps the point x in X to the point y in Y is denoted as f : (X, x) → (Y, y). When using this notation, f is then intended as an entire equivalence class of maps, using the same letter f for any representative map. Notice that two sets are germ-equivalent at x if and only if their characteristic functions are germ-equivalent at x: [S]x = [T]x if and only if [1S]x = [1T]x. More generally Maps need not be defined on all of X, and in particular they don't need to have the same domain. However, if f has domain S and g has domain T, both subsets of X, then f and g are germ equivalent at x in X if first S and T are germ equivalent at x, say S ∩ U = T ∩ U ≠ ∅ for a neighbourhood U of x, and then moreover f|V = g|V for some smaller neighbourhood V with V ⊆ S ∩ U. This is particularly relevant in two settings: f is defined on a subvariety V of X, and f has a pole of some sort at x, so is not even defined at x, as for example a rational function, which would be defined off a subvariety. Basic properties If f and g are germ equivalent at x, then they share all local properties, such as continuity, differentiability etc., so it makes sense to talk about a differentiable or analytic germ, etc. Similarly for subsets: if one representative of a germ is an analytic set then so are all representatives, at least on some neighbourhood of x. Algebraic structures on the target Y are inherited by the set of germs with values in Y. For instance, if the target Y is a group, then it makes sense to multiply germs: to define [f]x[g]x, first take representatives f and g, defined on neighbourhoods U and V respectively, and define [f]x[g]x to be the germ at x of the pointwise product map fg (which is defined on U ∩ V). In the same way, if Y is an abelian group, vector space, or ring, then so is the set of germs. The set of germs at x of maps from X to Y does not have a useful topology, except for the discrete one. It therefore makes little or no sense to talk of a convergent sequence of germs.
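As a concrete illustration of the definition, the following sketch tests germ equivalence numerically. Everything in it (the functions, the point, the sampling radii and tolerance) is invented for the example, and a finite sampling check can only give evidence of agreement on a neighbourhood, not a proof.

import numpy as np

def same_germ(f, g, x0, radii=(1.0, 0.1, 0.01), samples=1001):
    # Germ equivalence at x0 needs only SOME neighbourhood (x0 - r, x0 + r)
    # on which f and g agree; try a few radii and sample each interval.
    for r in radii:
        xs = np.linspace(x0 - r, x0 + r, samples)
        if np.allclose(f(xs), g(xs)):
            return True
    return False

f = np.abs
g = lambda x: x

print(same_germ(f, g, 1.0))   # True:  |x| = x on all of (0, 2), so [f]1 = [g]1
print(same_germ(f, g, 0.0))   # False: f and g differ on every neighbourhood of 0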
However, if X and Y are manifolds, then the spaces of jets (finite order Taylor series at x of map(-germs)) do have topologies as they can be identified with finite-dimensional vector spaces. Relation with sheaves The idea of germs is behind the definition of sheaves and presheaves. A presheaf F of abelian groups on a topological space X assigns an abelian group F(U) to each open set U in X. Typical examples of abelian groups here are: real-valued functions on U, differential forms on U, vector fields on U, holomorphic functions on U (when X is a complex manifold), constant functions on U and differential operators on U. If V ⊆ U then there is a restriction map resVU : F(U) → F(V), satisfying certain compatibility conditions. For a fixed x, one says that elements f ∈ F(U) and g ∈ F(V) are equivalent at x if there is a neighbourhood W ⊆ U ∩ V of x with resWU(f) = resWV(g) (both elements of F(W)). The equivalence classes form the stalk Fx at x of the presheaf F. This equivalence relation is an abstraction of the germ equivalence described above. Interpreting germs through sheaves also gives a general explanation for the presence of algebraic structures on sets of germs. The reason is that formation of stalks preserves finite limits. This implies that if T is a Lawvere theory and a sheaf F is a T-algebra, then any stalk Fx is also a T-algebra. Examples If X and Y have additional structure, it is possible to define subsets of the set of all maps from X to Y or more generally sub-presheaves of a given presheaf F and corresponding germs: some notable examples follow. If X, Y are both topological spaces, the subset C0(X, Y) of continuous functions defines germs of continuous functions. If both X and Y admit a differentiable structure, the subset Ck(X, Y) of k-times continuously differentiable functions, the subset C∞(X, Y) of smooth functions and the subset Cω(X, Y) of analytic functions can be defined (ω here is the ordinal for infinity; this is an abuse of notation, by analogy with Ck and C∞), and then spaces of germs of (finitely) differentiable, smooth, analytic functions can be constructed. If X, Y have a complex structure (for instance, are subsets of complex vector spaces), holomorphic functions between them can be defined, and therefore spaces of germs of holomorphic functions can be constructed. If X, Y have an algebraic structure, then regular (and rational) functions between them can be defined, and germs of regular functions (and likewise rational) can be defined. The germ at positive infinity of a function f : ℝ → Y (or simply the germ of f) is the class of all g : ℝ → Y that agree with f for all sufficiently large x. These germs are used in asymptotic analysis and Hardy fields. Notation The stalk of a sheaf F on a topological space X at a point x of X is commonly denoted by Fx. As a consequence, germs, constituting stalks of sheaves of various kind of functions, borrow this scheme of notation: C0x is the space of germs of continuous functions at x. Ckx for each natural number k is the space of germs of k-times-differentiable functions at x. C∞x is the space of germs of infinitely differentiable ("smooth") functions at x. Cωx is the space of germs of analytic functions at x. Ox is the space of germs of holomorphic functions (in complex geometry), or space of germs of regular functions (in algebraic geometry) at x. For germs of sets and varieties, the notation is not so well established: some notations found in literature include: Vx is the space of germs of analytic varieties at x. When the point x is fixed and known (e.g. when X is a topological vector space and x = 0), it can be dropped in each of the above symbols: also, when X = ℝn, a subscript n before the symbol can be added. As example, nC0, nCk, nC∞, nCω are the spaces of germs shown above when X is an n-dimensional vector space and x = 0.
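The restriction-map formalism lends itself to a small executable model. The toy classes below are inventions of this sketch, not standard library objects: sections over an open interval are plain callables, restriction just shrinks the recorded domain, and stalk equivalence at x is tested as agreement on a small common neighbourhood (again numerically, so as evidence rather than proof).

from dataclasses import dataclass

@dataclass
class Section:
    func: object    # a function defined (at least) on dom
    dom: tuple      # the open interval (a, b) over which it is a section

def restrict(s, sub):
    # res_{sub,dom}: the presheaf restriction map; sub must lie inside s.dom
    a, b = sub
    assert s.dom[0] <= a and b <= s.dom[1]
    return Section(s.func, sub)

def equivalent_at(s, t, x, r=1e-3, samples=101):
    # Stalk equivalence at x: both sections restrict to equal functions on
    # some common neighbourhood W of x (here W = (x - r, x + r)).
    lo, hi = x - r, x + r
    u, v = restrict(s, (lo, hi)), restrict(t, (lo, hi))
    pts = [lo + i * (hi - lo) / (samples - 1) for i in range(samples)]
    return all(abs(u.func(p) - v.func(p)) < 1e-12 for p in pts)

s = Section(lambda x: x * x, (-2.0, 2.0))
t = Section(lambda x: x * abs(x), (-2.0, 2.0))   # equals x*x only for x >= 0
print(equivalent_at(s, t, 1.0))    # True:  the same germ (stalk element) at 1
print(equivalent_at(s, t, -1.0))   # False: different germs at -1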
Applications The key word in the applications of germs is locality: all local properties of a function at a point can be studied by analyzing its germ. They are a generalization of Taylor series, and indeed the Taylor series of a germ (of a differentiable function) is defined: you only need local information to compute derivatives. Germs are useful in determining the properties of dynamical systems near chosen points of their phase space: they are one of the main tools in singularity theory and catastrophe theory. When the topological spaces considered are Riemann surfaces or more generally complex analytic varieties, germs of holomorphic functions on them can be viewed as power series, and thus the set of germs can be considered to be the analytic continuation of an analytic function. Germs can also be used in the definition of tangent vectors in differential geometry. A tangent vector can be viewed as a point-derivation on the algebra of germs at that point. Algebraic properties As noted earlier, sets of germs may have algebraic structures such as being rings. In many situations, rings of germs are not arbitrary rings but instead have quite specific properties. Suppose that X is a space of some sort. It is often the case that, at each x ∈ X, the ring of germs of functions at x is a local ring. This is the case, for example, for continuous functions on a topological space; for k-times differentiable, smooth, or analytic functions on a real manifold (when such functions are defined); for holomorphic functions on a complex manifold; and for regular functions on an algebraic variety. The property that rings of germs are local rings is axiomatized by the theory of locally ringed spaces. The types of local rings that arise, however, depend closely on the theory under consideration. The Weierstrass preparation theorem implies that rings of germs of holomorphic functions are Noetherian rings. It can also be shown that these are regular rings. On the other hand, let C∞0 be the ring of germs at the origin of smooth functions on ℝ. This ring is local but not Noetherian. To see why, observe that the maximal ideal m of this ring consists of all germs that vanish at the origin, and the power mk consists of those germs whose first k − 1 derivatives vanish. If this ring were Noetherian, then the Krull intersection theorem would imply that a smooth function whose Taylor series vanished would be the zero function. But this is false, as can be seen by considering the germ of f(x) = exp(−1/x²) for x ≠ 0, f(0) = 0, all of whose derivatives vanish at the origin. This ring is also not a unique factorization domain. This is because all UFDs satisfy the ascending chain condition on principal ideals, but there is an infinite ascending chain of principal ideals (f) ⊂ (f/x) ⊂ (f/x²) ⊂ ⋯ The inclusions are strict because x is in the maximal ideal m. The ring of germs at the origin of continuous functions on ℝ even has the property that its maximal ideal m satisfies m2 = m. Any germ f ∈ m can be written as f = (sgn(f)·√|f|)·√|f|, where sgn is the sign function. Since |f| (and hence √|f|) vanishes at the origin, this expresses f as the product of two functions in m, whence the conclusion. This is related to the setup of almost ring theory. See also Analytic variety Catastrophe theory Gluing axiom Riemann surface Sheaf Stalk References , chapter I, paragraph 6, subparagraph 10 "Germs at a point". , chapter 2, paragraph 2.1, "Basic Definitions". , chapter 2 "Local Rings of Holomorphic Functions", especially paragraph A "The Elementary Properties of the Local Rings" and paragraph E "Germs of Varieties". Ian R.
Porteous (2001) Geometric Differentiation, page 71, Cambridge University Press. , paragraph 31, "Germi di funzioni differenziabili in un punto di (Germs of differentiable functions at a point of )" (in Italian). External links A research preprint dealing with germs of analytic varieties in an infinite dimensional setting. Topology Sheaf theory
Germ (mathematics)
[ "Physics", "Mathematics" ]
2,486
[ "Mathematical structures", "Sheaf theory", "Topology", "Space", "Category theory", "Geometry", "Spacetime" ]
352,783
https://en.wikipedia.org/wiki/Phylogenesis
Phylogenesis (from Greek φῦλον phylon "tribe" + γένεσις genesis "origin") is the biological process by which a taxon (of any rank) appears. The science that studies these processes is called phylogenetics. The term may be confused with phylogenetics, the application of molecular-analytical methods (i.e. molecular biology and genomics) in the explanation of phylogeny and its research. Phylogenetic relationships are discovered through phylogenetic inference methods that evaluate observed heritable traits, such as DNA sequences or overall morpho-anatomical, ethological, and other characteristics. Phylogeny The result of these analyses is a phylogeny (also known as a phylogenetic tree) – a diagrammatic hypothesis about the history of the evolutionary relationships of a group of organisms. Phylogenetic analyses have become central to understanding biodiversity, evolution, ecological genetics and genomes. Cladistics Cladistics (Greek κλάδος, klados, i.e. "branch") is an approach to biological classification in which organisms are categorized based on shared, derived characteristics that can be traced to a group's most recent common ancestor and are not present in more distant ancestors. Therefore, members of a group are assumed to share a common history and are considered to be closely related. The cladistic method interprets each character state transformation implied by the distribution of shared character states among taxa (or other terminals) as a potential piece of evidence for grouping. The outcome of a cladistic analysis is a cladogram – a tree-shaped diagram (dendrogram) that is interpreted to represent the best hypothesis of phylogenetic relationships. Although traditionally such cladograms were generated largely on the basis of morphological characteristics calculated by hand, genetic sequencing data and computational phylogenetics are now commonly used, and the parsimony criterion has been abandoned by many phylogeneticists in favor of more "sophisticated" (but less parsimonious) evolutionary models of character state transformation. Taxonomy Taxonomy (Greek τάξις, taxis = 'order', 'arrangement' + νόμος, nomos = 'law' or 'science') is the classification, identification and naming of organisms. It is usually richly informed by phylogenetics, but remains a methodologically and logically distinct discipline. The degree to which taxonomies depend on phylogenies (or classification depends on evolutionary development) differs depending on the school of taxonomy: phenetics ignores phylogeny altogether, trying to represent the similarity between organisms instead; cladistics (phylogenetic systematics) tries to reproduce phylogeny in its classification. Ontophylogenesis An extension of phylogenesis to the cellular level by Jean-Jacques Kupiec is known as ontophylogenesis. Similarities and differences Phylogenesis ≠ Phylogeny; Phylogenesis ≠ (≈) Phylogenetics; Phylogenesis ≠ Cladistics; Phylogenetics ≠ Cladistics; Taxonomy ≠ Cladistics. See also Phylogeny Phylogenetics Taxonomy Cladistics Ontogeny Evolution References External links Phylogenetics Evolutionary biology
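The parsimony criterion mentioned under Cladistics above can be made concrete with Fitch's small-parsimony algorithm, which counts the minimum number of character-state changes a fixed tree topology requires to explain the states observed at the tips. The sketch below is purely illustrative: the four taxa, the single binary character, and both tree topologies are invented for the example.

def fitch(tree, states):
    # tree: nested 2-tuples of taxon names; states: taxon -> character state.
    # Returns (possible state set at this node, minimum number of changes).
    if isinstance(tree, str):              # a tip contributes its own state
        return {states[tree]}, 0
    left, right = tree
    ls, lc = fitch(left, states)
    rs, rc = fitch(right, states)
    common = ls & rs
    if common:                             # subtrees can agree: no new change
        return common, lc + rc
    return ls | rs, lc + rc + 1            # otherwise one change is forced

states = {"A": 0, "B": 0, "C": 1, "D": 1}
print(fitch((("A", "B"), ("C", "D")), states))   # ({0, 1}, 1): one change
print(fitch((("A", "C"), ("B", "D")), states))   # ({0, 1}, 2): two changes

Under the parsimony criterion the first topology, needing only one state change for this character, would be preferred as the better hypothesis of relationship.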
Phylogenesis
[ "Biology" ]
635
[ "Evolutionary biology", "Phylogenetics", "Bioinformatics", "Taxonomy (biology)" ]
352,827
https://en.wikipedia.org/wiki/Spin%E2%80%93statistics%20theorem
The spin–statistics theorem proves that the observed relationship between the intrinsic spin of a particle (angular momentum not due to the orbital motion) and the quantum particle statistics of collections of such particles is a consequence of the mathematics of quantum mechanics. In units of the reduced Planck constant ħ, all particles that move in 3 dimensions have either integer spin and obey Bose–Einstein statistics or half-integer spin and obey Fermi–Dirac statistics. Spin-statistics connection All known particles obey either Fermi–Dirac statistics or Bose–Einstein statistics. A particle's intrinsic spin always predicts the statistics of a collection of such particles and conversely: integral-spin particles are bosons with Bose–Einstein statistics, half-integral-spin particles are fermions with Fermi–Dirac statistics. A spin–statistics theorem shows that the mathematical logic of quantum mechanics predicts or explains this physical result. The statistics of indistinguishable particles is among the most fundamental of physical effects. The Pauli exclusion principle that every occupied quantum state contains at most one fermion controls the formation of matter. The basic building blocks of matter such as protons, neutrons, and electrons are all fermions. Conversely, particles such as the photon, which mediate forces between matter particles, are all bosons. A spin–statistics theorem attempts to explain the origin of this fundamental dichotomy. Background Naively, spin, an angular momentum property intrinsic to a particle, would seem unrelated to the statistical properties of a collection of such particles. However, these are indistinguishable particles: any physical prediction relating multiple indistinguishable particles must not change when the particles are exchanged. Quantum states and indistinguishable particles In a quantum system, a physical state is described by a state vector. A pair of distinct state vectors are physically equivalent if they differ only by an overall phase factor, ignoring other interactions. Consequently, a pair of indistinguishable particles has only one physical state. This means that if the positions of the particles are exchanged (i.e., they undergo a permutation), this does not identify a new physical state, but rather one matching the original physical state. In fact, one cannot tell which particle is in which position. While the physical state does not change under the exchange of the particles' positions, it is possible for the state vector to change sign as a result of an exchange. Since this sign change is just an overall phase, this does not affect the physical state. The essential ingredient in proving the spin-statistics relation is relativity, that the physical laws do not change under Lorentz transformations. The field operators transform under Lorentz transformations according to the spin of the particle that they create, by definition. Additionally, the assumption (known as microcausality) that spacelike-separated fields either commute or anticommute can be made only for relativistic theories with a time direction. Otherwise, the notion of being spacelike is meaningless. However, the proof involves looking at a Euclidean version of spacetime, in which the time direction is treated as a spatial one, as will now be explained. Lorentz transformations include 3-dimensional rotations and boosts. A boost transfers to a frame of reference with a different velocity and is mathematically like a rotation into time.
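As a concrete aside before the relativistic argument resumes, the symmetric and antisymmetric two-particle combinations can be built numerically. This is a minimal sketch, not part of any proof of the theorem; the Gaussian orbitals and the grid are invented for the example.

import numpy as np

x = np.linspace(-5.0, 5.0, 2001)
dx = x[1] - x[0]

def orbital(x0):                           # a normalized Gaussian wave packet
    p = np.exp(-(x - x0) ** 2)
    return p / np.sqrt(np.sum(p ** 2) * dx)

psi_a, psi_b = orbital(-1.0), orbital(+1.0)

# Two-particle amplitudes Psi(x1, x2) on the grid, via outer products:
boson   = np.outer(psi_a, psi_b) + np.outer(psi_b, psi_a)   # symmetric
fermion = np.outer(psi_a, psi_b) - np.outer(psi_b, psi_a)   # antisymmetric

print(np.allclose(boson, boson.T))        # True: unchanged under exchange
print(np.allclose(fermion, -fermion.T))   # True: picks up a minus sign

# Pauli exclusion: put both particles in the SAME orbital and the
# antisymmetric amplitude vanishes identically.
print(np.abs(np.outer(psi_a, psi_a) - np.outer(psi_a, psi_a)).max())   # 0.0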
By analytic continuation of the correlation functions of a quantum field theory, the time coordinate may become imaginary, and then boosts become rotations. The new "spacetime" has only spatial directions and is termed Euclidean. Exchange symmetry or permutation symmetry Bosons are particles whose wavefunction is symmetric under such an exchange or permutation, so if we swap the particles, the wavefunction does not change. Fermions are particles whose wavefunction is antisymmetric, so under such a swap the wavefunction gets a minus sign, meaning that the amplitude for two identical fermions to occupy the same state must be zero. This is the Pauli exclusion principle: two identical fermions cannot occupy the same state. This rule does not hold for bosons. In quantum field theory, a state or a wavefunction is described by field operators operating on some basic state called the vacuum. In order for the operators to project out the symmetric or antisymmetric component of the creating wavefunction, they must have the appropriate commutation law. The operator ∫ ψ(x, y) φ(x) φ(y) dx dy (with φ an operator and ψ(x, y) a numerical function with complex values) creates a two-particle state with wavefunction ψ(x, y), and depending on the commutation properties of the fields, either only the antisymmetric parts or the symmetric parts matter. Let us assume that x ≠ y and the two operators take place at the same time; more generally, they may have spacelike separation, as is explained hereafter. If the fields commute, meaning that the following holds: φ(x)φ(y) = φ(y)φ(x), then only the symmetric part of ψ contributes, so that ψ(x, y) = ψ(y, x), and the field will create bosonic particles. On the other hand, if the fields anti-commute, meaning that φ has the property that φ(x)φ(y) = −φ(y)φ(x), then only the antisymmetric part of ψ contributes, so that ψ(x, y) = −ψ(y, x), and the particles will be fermionic. Proofs An elementary explanation for the spin–statistics theorem cannot be given despite the fact that the theorem is so simple to state. In The Feynman Lectures on Physics, Richard Feynman said that this probably means that we do not have a complete understanding of the fundamental principle involved. Numerous notable proofs have been published, with different kinds of limitations and assumptions. They are all "negative proofs", meaning that they establish that integral spin fields cannot result in fermion statistics while half-integral spin fields cannot result in boson statistics. Proofs that avoid using any relativistic quantum field theory mechanism have defects. Many such proofs rely on a claim that ψ(…, xi, …, xj, …) = ±ψ(…, xj, …, xi, …), where the operator exchanging the arguments permutes the coordinates. However, the value on the left-hand side represents the probability of particle 1 at x1, particle 2 at x2, and so on, and is thus quantum-mechanically invalid for indistinguishable particles. The first proof was formulated in 1939 by Markus Fierz, a student of Wolfgang Pauli, and was rederived in a more systematic way by Pauli the following year. In a later summary, Pauli listed three postulates within relativistic quantum field theory as required for these versions of the theorem: Any state with particle occupation has higher energy than the vacuum state. Spatially separated measurements do not disturb each other (they commute). Physical probabilities are positive (the metric of the Hilbert space is positive-definite). Their analysis neglected particle interactions other than commutation/anti-commutation of the state. In 1949 Richard Feynman gave a completely different type of proof based on vacuum polarization, which was later critiqued by Pauli.
Pauli showed that Feynman's proof explicitly relied on the first two postulates he used and implicitly used the third one by first allowing negative probabilities but then rejecting field theory results with probabilities greater than one. A proof by Julian Schwinger in 1950 based on time-reversal invariance followed a proof by Frederik Belinfante in 1940 based on charge-conjugation invariance, leading to a connection to the CPT theorem more fully developed by Pauli in 1955. These proofs were notably difficult to follow. Work on the axiomatization of quantum field theory by Arthur Wightman led to a theorem that stated that the expectation value of the product of two fields, ⟨0|φ(x)φ(y)|0⟩, could be analytically continued to all separations x − y. (The first two postulates of the Pauli-era proofs involve the vacuum state and fields at separate locations.) The new result allowed more rigorous proofs of the spin–statistics theorems by Gerhart Lüders and Bruno Zumino and by Peter Burgoyne. In 1957 Res Jost derived the CPT theorem using the spin–statistics theorem, and Burgoyne's proof of the spin–statistics theorem in 1958 required no constraints on the interactions nor on the form of the field theories. These results are among the most rigorous practical theorems. In spite of these successes, Feynman, in his 1963 undergraduate lecture that discussed the spin–statistics connection, says: "We apologize for the fact that we cannot give you an elementary explanation." Neuenschwander echoed this in 1994, asking whether there was any progress, spurring additional proofs and books. Neuenschwander's 2013 popularization of the spin–statistics connection suggested that simple explanations remain elusive. Experimental tests In 1987 Greenberg and Mohapatra proposed that the spin–statistics theorem could have small violations. With the help of very precise calculations for states of the He atom that violate the Pauli exclusion principle, Deilamian, Gillaspy and Kelleher looked for the 1s2s 1S0 state of He using an atomic-beam spectrometer. The search was unsuccessful with an upper limit of 5×10−6. Relation to representation theory of the Lorentz group The Lorentz group has no non-trivial unitary representations of finite dimension. Thus it seems impossible to construct a Hilbert space in which all states have finite, non-zero spin and positive, Lorentz-invariant norm. This problem is overcome in different ways depending on particle spin–statistics. For a state of integer spin the negative norm states (known as "unphysical polarization") are set to zero, which makes the use of gauge symmetry necessary. For a state of half-integer spin the argument can be circumvented by having fermionic statistics. Quasiparticle anyons in 2 dimensions In 1982, physicist Frank Wilczek published a research paper on the possibility of fractional-spin particles, which he termed anyons from their ability to take on "any" spin. He wrote that they were theoretically predicted to arise in low-dimensional systems where motion is restricted to fewer than three spatial dimensions. Wilczek described their spin statistics as "interpolating continuously between the usual boson and fermion cases". The effect has become the basis for understanding the fractional quantum Hall effect.
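A toy calculation makes Wilczek's "interpolating" description concrete. It is purely illustrative: the phases below are bare complex numbers with no dynamics behind them.

import numpy as np

for theta, name in [(0.0, "boson"), (np.pi, "fermion"), (np.pi / 3, "anyon")]:
    single = np.exp(1j * theta)   # phase acquired by one exchange
    double = single ** 2          # carrying one particle fully around the other
    print(f"{name:7s}  exchange: {complex(single):+.3f}  "
          f"double exchange: {complex(double):+.3f}")

For bosons and fermions the double exchange gives +1, consistent with three dimensions, where winding one particle around another can be continuously undone; in two dimensions it cannot, so any intermediate phase theta is admissible, which is why fractional statistics are confined to planar systems.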
See also Parastatistics Anyonic statistics Braid statistics References Further reading External links A nice nearly-proof at John Baez's home page Animation of the Dirac belt trick with a double belt, showing that belts behave as spin 1/2 particles Animation of a Dirac belt trick variant showing that spin 1/2 particles are fermions Articles containing proofs Particle statistics Physics theorems Quantum field theory Statistical mechanics theorems Theorems in quantum mechanics Theorems in mathematical physics
Spin–statistics theorem
[ "Physics", "Mathematics" ]
2,197
[ "Theorems in dynamical systems", "Theorems in quantum mechanics", "Quantum field theory", "Mathematical theorems", "Equations of physics", "Particle statistics", "Quantum mechanics", "Statistical mechanics theorems", "Theorems in mathematical physics", "Articles containing proofs", "Statistical ...