id int64 (39 – 79M) · url stringlengths (31 – 227) · text stringlengths (6 – 334k) · source stringlengths (1 – 150) · categories listlengths (1 – 6) · token_count int64 (3 – 71.8k) · subcategories listlengths (0 – 30)
13,403,591
https://en.wikipedia.org/wiki/Rate%20of%20response
In behaviorism, rate of response is a ratio between two measurements with different units. Rate of responding is the number of responses per minute, or per some other unit of time. It is usually written as R. Its first major exponent was B.F. Skinner (1938). It is used in the Matching Law. R = number of responses / unit of time = B/t See also Rate of reinforcement References Herrnstein, R.J. (1961). Relative and absolute strength of responses as a function of frequency of reinforcement. Journal of the Experimental Analysis of Behavior, 4, 267–272. Herrnstein, R.J. (1970). On the law of effect. Journal of the Experimental Analysis of Behavior, 13, 243–266. Skinner, B.F. (1938). The behavior of organisms: An experimental analysis.
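As a minimal worked example of the formula above (the numbers are invented for illustration), a short Python sketch:

```python
# Rate of response, R = B / t: B responses over t time units.

def rate_of_response(responses: int, minutes: float) -> float:
    """Return R = B / t in responses per minute."""
    return responses / minutes

# A subject that emits 45 responses in 3 minutes responds at R = 15 per minute.
print(rate_of_response(45, 3.0))  # 15.0
```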
Rate of response
[ "Physics", "Biology" ]
183
[ "Temporal quantities", "Behavior", "Physical quantities", "Temporal rates", "Quantitative analysis of behavior", "Behaviorism" ]
13,404,205
https://en.wikipedia.org/wiki/Hofstadter%20sequence
In mathematics, a Hofstadter sequence is a member of a family of related integer sequences defined by non-linear recurrence relations. Sequences presented in Gödel, Escher, Bach: an Eternal Golden Braid The first Hofstadter sequences were described by Douglas Richard Hofstadter in his book Gödel, Escher, Bach. In order of their presentation in chapter III on figures and background (Figure-Figure sequence) and chapter V on recursive structures and processes (remaining sequences), these sequences are: Hofstadter Figure-Figure sequences The Hofstadter Figure-Figure (R and S) sequences are a pair of complementary integer sequences defined as follows: R(1) = 1, S(1) = 2, and R(n) = R(n − 1) + S(n − 1) for n > 1, with the sequence S(n) defined as a strictly increasing series of positive integers not present in R(n). The first few terms of these sequences are R: 1, 3, 7, 12, 18, 26, 35, 45, 56, 69, 83, 98, 114, 131, 150, 170, 191, 213, 236, 260, ... S: 2, 4, 5, 6, 8, 9, 10, 11, 13, 14, 15, 16, 17, 19, 20, 21, 22, 23, 24, 25, ... Hofstadter G sequence The Hofstadter G sequence is defined as follows: G(0) = 0, and G(n) = n − G(G(n − 1)) for n > 0. The first few terms of this sequence are 0, 1, 1, 2, 3, 3, 4, 4, 5, 6, 6, 7, 8, 8, 9, 9, 10, 11, 11, 12, 12, ... Hofstadter H sequence The Hofstadter H sequence is defined as follows: H(0) = 0, and H(n) = n − H(H(H(n − 1))) for n > 0. The first few terms of this sequence are 0, 1, 1, 2, 3, 4, 4, 5, 5, 6, 7, 7, 8, 9, 10, 10, 11, 12, 13, 13, 14, ... Hofstadter Female and Male sequences The Hofstadter Female (F) and Male (M) sequences are defined as follows: F(0) = 1, M(0) = 0, and for n > 0, F(n) = n − M(F(n − 1)) and M(n) = n − F(M(n − 1)). The first few terms of these sequences are F: 1, 1, 2, 2, 3, 3, 4, 5, 5, 6, 6, 7, 8, 8, 9, 9, 10, 11, 11, 12, 13, ... M: 0, 0, 1, 2, 2, 3, 4, 4, 5, 6, 6, 7, 7, 8, 9, 9, 10, 11, 11, 12, 12, ... Hofstadter Q sequence The Hofstadter Q sequence is defined as follows: Q(1) = Q(2) = 1, and Q(n) = Q(n − Q(n − 1)) + Q(n − Q(n − 2)) for n > 2. The first few terms of the sequence are 1, 1, 2, 3, 3, 4, 5, 5, 6, 6, 6, 8, 8, 8, 10, 9, 10, 11, 11, 12, ... Hofstadter named the terms of the sequence "Q numbers"; thus the Q number of 6 is 4. The presentation of the Q sequence in Hofstadter's book is actually the first known mention of a meta-Fibonacci sequence in the literature. While the terms of the Fibonacci sequence are determined by summing the two preceding terms, the two preceding terms of a Q number determine how far to go back in the Q sequence to find the two terms to be summed. The indices of the summation terms thus depend on the Q sequence itself. Q(1), the first element of the sequence, is never one of the two terms being added to produce a later element; it is involved only within an index in the calculation of Q(3). Although the terms of the Q sequence seem to flow chaotically, like many meta-Fibonacci sequences its terms can be grouped into blocks of successive generations. In the case of the Q sequence, the k-th generation has 2^k members. Furthermore, with g being the generation that a Q number belongs to, the two terms to be summed to calculate the Q number, called its parents, reside for the most part in generation g − 1 and only a few in generation g − 2, but never in an even older generation. Most of these findings are empirical observations, since virtually nothing has been proved about the Q sequence so far. It is specifically unknown whether the sequence is well-defined for all n; that is, whether the sequence "dies" at some point because its generation rule tries to refer to terms which would conceptually sit left of the first term Q(1).
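The recurrences above are compact enough to verify numerically. The following Python sketch (function names are our own, not standard) computes the first terms of the G, H, Female/Male, and Q sequences exactly as defined above; the printed values can be checked against the lists in the text.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def G(n):  # G(0) = 0; G(n) = n - G(G(n-1))
    return 0 if n == 0 else n - G(G(n - 1))

@lru_cache(maxsize=None)
def H(n):  # H(0) = 0; H(n) = n - H(H(H(n-1)))
    return 0 if n == 0 else n - H(H(H(n - 1)))

@lru_cache(maxsize=None)
def F(n):  # Female: F(0) = 1; F(n) = n - M(F(n-1))
    return 1 if n == 0 else n - M(F(n - 1))

@lru_cache(maxsize=None)
def M(n):  # Male: M(0) = 0; M(n) = n - F(M(n-1))
    return 0 if n == 0 else n - F(M(n - 1))

def Q(limit):  # Q(1) = Q(2) = 1; Q(n) = Q(n - Q(n-1)) + Q(n - Q(n-2))
    q = [None, 1, 1]  # pad index 0 so the list is 1-indexed
    for n in range(3, limit + 1):
        q.append(q[n - q[n - 1]] + q[n - q[n - 2]])
    return q[1:]

print([G(n) for n in range(21)])  # 0, 1, 1, 2, 3, 3, 4, 4, 5, 6, 6, ...
print(Q(20))                      # 1, 1, 2, 3, 3, 4, 5, 5, 6, 6, 6, 8, ...
```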
Generalizations of the Q sequence Hofstadter–Huber Qr,s(n) family 20 years after Hofstadter first described the Q sequence, he and Greg Huber used the character Q to name the generalization of the Q sequence toward a family of sequences, and renamed the original Q sequence of his book the U sequence. The original Q sequence is generalized by replacing n − 1 and n − 2 by n − r and n − s, respectively. This leads to the sequence family Qr,s(n) = Qr,s(n − Qr,s(n − r)) + Qr,s(n − Qr,s(n − s)) for n > s, with Qr,s(n) = 1 for 1 ≤ n ≤ s, where s ≥ 2 and r < s. With (r,s) = (1,2), the original Q sequence is a member of this family. So far, only three sequences of the family Qr,s are known, namely the U sequence with (r,s) = (1,2) (which is the original Q sequence); the V sequence with (r,s) = (1,4); and the W sequence with (r,s) = (2,4). Only the V sequence, which does not behave as chaotically as the others, is proven not to "die". Similar to the original Q sequence, virtually nothing has been proved rigorously about the W sequence to date. The first few terms of the V sequence are 1, 1, 1, 1, 2, 3, 4, 5, 5, 6, 6, 7, 8, 8, 9, 9, 10, 11, 11, 11, ... The first few terms of the W sequence are 1, 1, 1, 1, 2, 4, 6, 7, 7, 5, 3, 8, 9, 11, 12, 9, 9, 13, 11, 9, ... For other values (r,s) the sequences sooner or later "die", i.e. there exists an n for which Qr,s(n) is undefined because n − Qr,s(n − r) < 1. Pinn Fi,j(n) family In 1998, Klaus Pinn, a scientist at the University of Münster (Germany) in close communication with Hofstadter, suggested another generalization of Hofstadter's Q sequence, which Pinn called F sequences. The family of Pinn Fi,j sequences is defined as follows: Fi,j(1) = Fi,j(2) = 1, and Fi,j(n) = Fi,j(n − i − Fi,j(n − 1)) + Fi,j(n − j − Fi,j(n − 2)) for n > 2. Thus Pinn introduced additional constants i and j which shift the index of the terms of the summation conceptually to the left (that is, closer to the start of the sequence). Only F sequences with (i,j) = (0,0), (0,1), (1,0), and (1,1), the first of which represents the original Q sequence, appear to be well-defined. Unlike Q(1), the first elements of the Pinn Fi,j(n) sequences are terms of summations in calculating later elements of the sequences when any of the additional constants is 1. The first few terms of the Pinn F0,1 sequence are 1, 1, 2, 2, 3, 4, 4, 4, 5, 6, 6, 7, 8, 8, 8, 8, 9, 10, 10, 11, ... Hofstadter–Conway $10,000 sequence The Hofstadter–Conway $10,000 sequence is defined as follows: a(1) = a(2) = 1, and a(n) = a(a(n − 1)) + a(n − a(n − 1)) for n > 2. The first few terms of this sequence are 1, 1, 2, 2, 3, 4, 4, 4, 5, 6, 7, 7, 8, 8, 8, 8, 9, 10, 11, 12, ... The values a(n)/n converge to 1/2, and this sequence acquired its name because John Horton Conway offered a prize of $10,000 to anyone who could determine its rate of convergence. The prize, since reduced to $1,000, was claimed by Collin Mallows, who proved that |a(n)/n − 1/2| < 1/20 for all n ≥ 1489. In private communication with Klaus Pinn, Hofstadter later claimed that he had found the sequence and its structure about 10–15 years before Conway posed his challenge. References
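The generalized family and the Hofstadter–Conway sequence can be explored the same way. This Python sketch (our own code; the "dies" check is made explicit as described above) reproduces the V sequence and illustrates the convergence of a(n)/n to 1/2:

```python
def Q_rs(r, s, limit):
    """Hofstadter-Huber Q_{r,s}: Q(1..s) = 1;
    Q(n) = Q(n - Q(n-r)) + Q(n - Q(n-s)) for n > s.
    Returns (terms, n_died) where n_died is None if the sequence survives."""
    q = [None] + [1] * s                  # pad index 0 so the list is 1-indexed
    for n in range(s + 1, limit + 1):
        i, j = n - q[n - r], n - q[n - s]
        if i < 1 or j < 1:                # rule refers left of Q(1): sequence dies
            return q[1:], n
        q.append(q[i] + q[j])
    return q[1:], None

V, died = Q_rs(1, 4, 20)
print(V)  # 1, 1, 1, 1, 2, 3, 4, 5, 5, 6, 6, 7, 8, 8, 9, 9, 10, 11, 11, 11

def conway(limit):
    """Hofstadter-Conway: a(1) = a(2) = 1; a(n) = a(a(n-1)) + a(n - a(n-1))."""
    a = [None, 1, 1]
    for n in range(3, limit + 1):
        a.append(a[a[n - 1]] + a[n - a[n - 1]])
    return a

a = conway(10_000)
print(a[10_000] / 10_000)  # close to 1/2, as the convergence result states
```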
Hofstadter sequence
[ "Mathematics" ]
1,787
[ "Sequences and series", "Integer sequences", "Mathematical structures", "Recreational mathematics", "Mathematical objects", "Combinatorics", "Numbers", "Number theory" ]
13,404,388
https://en.wikipedia.org/wiki/Semantics%20of%20Business%20Vocabulary%20and%20Business%20Rules
The Semantics of Business Vocabulary and Business Rules (SBVR) is an adopted standard of the Object Management Group (OMG) intended to be the basis for formal and detailed natural language declarative description of a complex entity, such as a business. SBVR is intended to formalize complex compliance rules, such as operational rules for an enterprise, security policy, standard compliance, or regulatory compliance rules. Such formal vocabularies and rules can be interpreted and used by computer systems. SBVR is an integral part of the OMG's model-driven architecture (MDA). Overview The SBVR standard defines the vocabulary and rules for documenting the semantics of business vocabularies, business facts, and business rules, as well as an XMI schema for the interchange of business vocabularies and business rules among organizations and between software tools. SBVR allows the production of business vocabularies and rules; vocabulary plus rules constitute a shared domain model with the same expressive power as standard ontological languages. SBVR allows multilingual development, since it is based on a separation between symbols and their meaning. SBVR enables making business rules accessible to software tools, including tools that support the business experts in creating, finding, validating, and managing business rules, and tools that support the information technology experts in converting business rules into implementation rules for automated systems. SBVR uses OMG's Meta-Object Facility (MOF) to provide interchange capabilities: MOF/XMI mapping rules make it possible to generate MOF-compliant models and to define an XML schema. SBVR proposes Structured English as one of possibly many notations that can map to the SBVR Metamodel. SBVR and the Knowledge Discovery Metamodel (KDM) are designed as two parts of a unique OMG Technology Stack for software analytics related to existing software systems. KDM defines an ontology related to software artifacts and thus provides an initial formalization of the information related to a software system. SBVR can be further used to formalize complex compliance rules related to the software. Background Business rules represent the primary means by which an organization can direct its business, defining the operative way to reach its objectives and perform its actions. A rule-based approach to managing business and the information used by that business is a way of identifying and articulating the rules which define the structure and control the operation of an enterprise. It represents a new way to think about an enterprise and its rules, in order to enable a complete business representation made by and for business people. Business rules can play an important role in defining business semantics: they can influence or guide behaviours and support policies, responding to environmental situations and events. Semantics of Business Vocabulary and Business Rules (SBVR) is the OMG implementation of the business rules approach. History In June 2003 OMG issued the Business Semantics of Business Rules (BSBR) Request For Proposal, in order to create a standard to allow business people to define the policies and rules by which they run their business in their own language, in terms of the things they deal with in the business, and to capture those rules in a way that is clear, unambiguous and readily translatable into other representations. The SBVR proposal was developed by the Business Rules Team, a consortium organized in August 2003 to respond to the BSBR RFP.
In September 2005, the Business Modeling and Integration Task Force and the Architecture Board of the Object Management Group approved the proposal Semantics of Business Vocabulary and Business Rules (SBVR) to become a final adopted specification in response to the RFP. Later, the SBVR proposal was ratified by the Domain Technical Committee (DTC) and approved by the OMG Board of Directors, and an SBVR finalization task force was launched to convert the proposal into ISO/OMG standard format and perform final editing prior to release as an OMG formal specification. In January 2008, the finalization phase was completed and the Semantics of Business Vocabulary and Business Rules (SBVR), Version 1.0 formal specification was released; it is publicly available at the Catalog of OMG Business Strategy, Business Rules and Business Process Management Specifications web page. Conceptual formalization SBVR is a landmark for the OMG: the first OMG specification to incorporate the formal use of natural language in modeling and the first to provide explicitly a model of formal logic. Based on a fusion of linguistics, logic, and computer science, and two years in preparation, SBVR provides a way to capture specifications in natural language and represent them in formal logic so they can be machine-processed. Methodologies used in software development are typically applied only when a problem is already formulated and well described. The actual difficulty lies in the previous step, that is, describing problems and expected functionalities. Stakeholders involved in software development can express their ideas using a language very close to them, but they are usually not able to formalize these concepts in a clear and unambiguous way. This implies a large effort to interpret and understand the real meanings and concepts hidden among stakeholders' words. Special constraints on syntax or predefined linguistic structures can be used to overcome this problem, enabling natural language to represent well and formally define problems and requirements. The main purpose of natural language modelling is hence to make natural language suitable for conceptual modelling. The focus is on semantic aspects and shared meanings, while syntax is treated from the perspective of mapping to formal logic. Conceptualization and representation play fundamental roles in thinking, communicating, and modeling. There is a triad of 1) concepts in our minds, 2) real-world things conceptualized by concepts, and 3) representations of concepts that we can use to think and communicate about the concept and its corresponding real-world things. (Note that real-world things include both concrete things and representations of those concrete things as records and processes in operational information systems.) A conceptual model is a formal structure representing a possible world, comprising a conceptual schema and a set of facts that instantiate the conceptual schema. The conceptual schema is a combination of concepts and facts of what is possible, necessary, permissible, and obligatory in each possible world. The set of facts instantiates the conceptual schema by assertion to describe one possible world. A rule is a fact that asserts either a logical necessity or an obligation. Obligations are not necessarily satisfied by the facts; necessities are always satisfied. SBVR contains a vocabulary for conceptual modeling and captures expressions based on this vocabulary as formal logic structures.
The SBVR vocabulary allows one to formally specify representations of concepts, definitions, instances, and rules of any knowledge domain in natural language, including tabular forms. These features make SBVR well suited for describing business domains and requirements for business processes and information systems to implement business models. Fact-orientation People communicate in facts; the fact is the unit of communication. The fact-oriented approach enables multidimensional categorization, supports time changeability, provides semantic stability, enables extensibility and reuse, and involves breaking down compound fact types into elementary (atomic) ones. Conceptual formalization describes a business domain, and is composed of 1) a conceptual schema (fact structure) and 2) a population of ground facts. A business domain (universe of discourse) comprises those aspects of the business that are of interest. The schema declares the relevant fact types (kinds of ground fact, e.g. Employee works for Department) and the relevant business rules (typically constraints or derivation rules). A fact is a proposition taken to be true by the business. Population facts are restricted to elementary and existential facts. Constraints can be static or dynamic: a static constraint imposes a restriction on what fact populations are possible or permitted, for each fact population taken individually, e.g. Each Employee was born on at most one Date; a dynamic constraint imposes a restriction on transitions between fact populations, e.g. a person's marital status may change from single to married, but not from divorced to single. Derivation of facts: derivation means either how a fact type may be derived from one or more other fact types, e.g. Person1 is an uncle of Person2 if Person1 is a brother of some Person3 who is a parent of Person2; or how a noun concept (object type) may be defined in terms of other object types and fact types, e.g. Each FemaleAustralian is a Person who was born in Country 'Australia' and has Gender 'Female'. Rule-based approach Rules play a very important role in defining business semantics: they can influence or guide behaviours and support policies, responding to environmental situations and events. This means that rules represent the primary means by which an organization can direct its business, defining the operative way to reach its objectives and perform its actions. The rule-based approach aims to address two different kinds of users: it addresses business communities, in order to provide them with a structured approach, based on a clear set of concepts, used to access and manage business rules; and it addresses IT professionals, in order to provide them with a deep understanding of business rules and to help them in model creation. The rule-based approach also helps bridge the rift that can occur between the data managers and the software designers. The essence of the rule-based conceptual formalization is that rules build on facts, and facts build on concepts as expressed by terms. This mantra is memorable, but a simplification, since in SBVR: meaning is separate from expression; Fact Types (Verb Concepts) are built on Noun Concepts; Noun Concepts are represented by Terms; and Fact Types are represented by Fact Symbols (verb phrases). Rule statements are expressed using either alethic modality or deontic modality and require elements of modal logic as formalization.
SBVR Structural Business Rules use two alethic modal operators: "it is necessary that …" and "it is possible that …". SBVR Operative Business Rules use two deontic modal operators: "it is obligatory that …" and "it is permitted that …". Structural business rules (static constraints) are treated as alethic necessities by default, where each state of the fact model corresponds to a possible world. Pragmatically, the rule is understood to apply to all future states of the fact model, until the rule is revoked or changed. For the model theory, the necessity operator is omitted from the formula. Instead, the rule is merely tagged as a necessity. For compliance with Common Logic, such formulae can be treated as irregular expressions, with the necessity modal operator treated as an uninterpreted symbol. If the rule includes exactly one deontic operator, e.g. O (obligation), and this is at the front, then the rule may be formalized as Op, where p is a first-order formula that is tagged as obligatory. In SBVR, this tag is assigned the informal semantics: it ought to be the case that p (for all future states of the fact model, until the constraint is revoked or changed). From a model-theoretic perspective, a model is an interpretation where each non-deontic formula evaluates to true, and the model is classified as a permitted model if the p in each deontic formula (of the form Op) evaluates to true; otherwise the model is a forbidden model (though still a model). This approach removes any need to assign a truth value to expressions of the form Op. Formal logic with a natural language interface SBVR is for modeling in natural language. Based on linguistics and formal logic, SBVR provides a way to represent statements in controlled natural languages as logic structures called semantic formulations. SBVR is intended for expressing business vocabulary and business rules, and for specifying business requirements for information systems, in natural language. SBVR models are declarative, not imperative or procedural. SBVR has the greatest expressivity of any OMG modeling language. The logics supported by SBVR are typed first-order predicate logic with equality, restricted higher-order logic (Henkin semantics), restricted deontic and alethic modal logic, set theory with bag comprehension, and mathematics. SBVR also includes projections, to support definitions and answers to queries, and questions, for formulating queries. Interpretation of SBVR semantic formulations is based on model theory. SBVR has a MOF model, so models can be structurally linked at the level of individual facts with other MDA models based on MOF. SBVR is aligned with Common Logic, published by ISO as ISO/IEC 24707:2007. SBVR captures business facts and business rules that may be expressed either informally or formally. Business rule expressions are formal only if they are expressed purely in terms of fact types in the pre-declared schema for the business domain, certain logical/mathematical operators, quantifiers, etc. Formal rules are transformed into a logical formulation that is used for exchange with other rules-based software tools. Informal rules may be exchanged as un-interpreted comments. An approach to automatically generating SBVR business rules from natural language specifications has also been presented in the literature.
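As a toy illustration of the model-theoretic reading just described (all rule content and names below are invented for this sketch and are not from the SBVR specification; EU-Rent is merely the running example business used in SBVR literature): a fact population that fails a structural (alethic) rule is not a model at all, while one that satisfies them is merely classified as permitted or forbidden by the operative (deontic) rules.

```python
# A "fact population" is a set of ground facts. Structural (alethic) rules
# must hold in any model; operative (deontic) rules only sort models into
# permitted and forbidden, as in the SBVR model theory sketched above.

facts = {("rents", "EU-Rent", "car-42"), ("insured", "car-42")}

structural = [  # "it is necessary that every rented car is insured"
    lambda fs: all(("insured", f[2]) in fs for f in fs if f[0] == "rents"),
]
operative = [   # "it is obligatory that car-42 is serviced" (the p in Op)
    lambda fs: ("serviced", "car-42") in fs,
]

assert all(rule(facts) for rule in structural), "not a model at all"
status = "permitted" if all(rule(facts) for rule in operative) else "forbidden"
print("model is", status)  # forbidden: the obligation is unmet, yet still a model
```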
Other OMG standards The SBVR specification defines a metamodel and allows it to be instantiated, in order to create different vocabularies and to define the related business rules; it is also possible to complete these models with data suitable to describe a specific organization. The SBVR approach provides means (i.e. mapping rules) to translate natural language artifacts into MOF-compliant artifacts; this makes it possible to exploit all the advantages related to MOF (repository facilities, interchangeability, tools, ...). Several MDA-related OMG works in progress are expected to incorporate SBVR, including: Business Process Definition Metamodel (BPDM) Organization Structure Metamodel (OSM) Business Motivation Model (BMM) UML Profile for Production Rule Representation (PRR) UML Profile for the Department of Defense Architecture Framework/Ministry of Defence (United Kingdom) Architecture Framework (DoDAF/MODAF) Knowledge Discovery Metamodel (KDM) Wider interest in SBVR – Semantic Web, OASIS The Ontology Definition Metamodel (ODM) has been made compatible with SBVR, primarily by aligning the logic grounding of the ISO Common Logic specification (CL) referenced by ODM with the SBVR Logical Formulation of Semantics vocabulary. CL itself was modified specifically so that it can potentially include the modal sentence requirements of SBVR. ODM provides a bridge to link SBVR to the Web Ontology Language for Services (OWL-S), Resource Description Framework Schema (RDFS), Unified Modeling Language (UML), Topic Maps (TM), Entity Relationship Modeling (ER), Description Logic (DL), and CL. Other programs outside the OMG are adopting SBVR. The Digital Business Ecosystem (DBE), an integrated project of the European Commission Framework Programme 6, has adopted SBVR as the basis for its Business Modeling Language. The World Wide Web Consortium (W3C) is assessing SBVR for use in the Semantic Web, through the bridge provided by ODM. SBVR will extend the capability of MDA in all these areas. References External links Business Rules Group
Semantics of Business Vocabulary and Business Rules
[ "Technology", "Engineering" ]
3,116
[ "Data modeling", "Computer standards", "Data engineering" ]
13,404,977
https://en.wikipedia.org/wiki/Navico
Navico is a marine electronics company providing navigation, marine instruments and fish finding equipment to both the recreational and commercial marine sectors. The Navico Recreational Marine Division is one of the world's largest providers of leisure marine electronic products. Lowrance is aimed at fishing, particularly in freshwater and near coastal areas. Simrad Yachting is focused on powerboat owners for cruising and sportfishing, and B&G serves the sailing market. The Simrad Commercial Marine Electronics division also offers navigation products for the commercial market, while C-MAP provides cartography and digital products to both recreational and commercial markets. Navico has its headquarters in Egersund, Norway, and the group has manufacturing facilities in the United States, UK, Norway, Mexico and New Zealand. History 20th century: Predecessors In 1946 Simonsen Radio was founded by Willy Simonsen (NOR), leading the development of echo-sounding equipment. Simrad Yachting was born from the union of Simonsen Radio and other marine technology pioneers. In 1955 Brookes & Gatehouse (B&G) was founded by Major R. N. Gatehouse and Ronald Brookes (UK). In 1957 Lowrance Electronics was created by Darrell Lowrance (US) and launched the first recreational sonar product for anglers – the Fish-Lo-K-Tor, also known as the ‘Little Green Box’. 21st century: Foundation and expansion In 2003 Simrad Yachting acquired B&G. In 2005 the Altor 2003 Fund acquired Simrad Yachting AS from Kongsberg Group. In 2006 the Altor 2003 Fund acquired Lowrance Electronics. Also in 2006, Navico was created through the merger of Simrad Yachting and Lowrance Electronics by their common owner, Altor Equity Partners, a Swedish private equity firm. In 2007 Navico acquired the marine electronics business of Brunswick New Technologies, creating the world's largest supplier of marine electronics for recreational boats. In 2016 Goldman Sachs Merchant Banking Division and Altor Fund IV signed an agreement to acquire Navico Holding AS (Navico) and Digital Marine Solutions Holding AS (Digital Marine Solutions), owner of Jeppesen Marine, from the Altor 2003 Fund. Also in 2016, Navico expanded its manufacturing plant in Ensenada, Mexico, adding 50,000 sq ft. In 2017 Navico acquired C-MAP, providing cartography products and services for all types of leisure boaters, from fishermen and sailing enthusiasts to powerboat owners around the world. In 2019 Knut Frostad was appointed as President and CEO of Navico. In 2021 Brunswick Corporation acquired Navico. Brands B&G B&G, formerly Brookes and Gatehouse, was founded over 60 years ago and manufactures sailing electronics for cruising and racing yachts. B&G systems are used by professional race boats as well as amateur club racers and sailing superyachts. The B&G range encompasses chart plotters, navigation equipment, instruments, autopilots, and radar, plus tactical racing software and other performance measurement and analysis. C-MAP Founded in 1985, C-MAP serves boaters worldwide, providing cartography products and services for all types of leisure boaters, from fishermen and sailing enthusiasts to powerboat owners. C-MAP worldwide cartography products and services include multiple formats for lakes, coasts, and oceans. C-MAP also provides products and services to the commercial marine sector. The majority of C-MAP's products and services for this sector were sold to Lloyd's Register in December 2020. Lowrance Lowrance is a manufacturer of consumer sonar and GPS receivers, as well as digital mapping systems.
Headquartered in Tulsa, Oklahoma, with production facilities in Ensenada, Mexico, Lowrance employs approximately 1,000 people. The company is best known for its High Definition Systems (HDS) and add-on performance modules, which include Broadband 4G Radar, StructureScan with SideScan and DownScan Imaging, Sonic Hub Audio, Sirius LWX-1 Weather, and NAIS Collision Avoidance. Lowrance was founded in Tulsa, Oklahoma in 1957. In 2006, Lowrance was purchased by Simrad Yachting in a deal valued at $215 million; the merger created a new company, Navico, now the largest leisure marine electronics manufacturer in the world. The Lowrance brand is wholly owned by Navico, a privately held international corporation that is currently the world's largest marine electronics company and the parent company of the marine electronics brands Lowrance, Simrad Yachting and B&G. Simrad Simrad is a manufacturer of marine electronics for the leisure and professional markets. A member of the Navico family of brands, Simrad develops, manufactures and distributes navigation systems, autopilots, marine VHF radios, chartplotters, echosounders, radars, fishfinders and a wide range of other marine technology. The Simrad name has been in existence for over sixty years. The brand was established in 1947 by Willy Christian Simonsen, who set up his own wireless company called Simonsen Radio. Initially, he focused on the production of radio communications for fishing vessels. A few years later, he coined the name Simrad to encompass a wider range of activities – namely the design and manufacture of navigation, communication, auto-steering, and fish-finding technologies. In 1996 the Simrad Group was purchased by the Kongsberg Group which, following a decision to focus on the industrial market, sold the Simrad recreational product range to Altor Equity Partners in 2005, creating Simrad Yachting. Simrad Group and Simrad Yachting are therefore now entirely independent of each other, with separate owners and distinct product specializations. It was the merger of Simrad Yachting and Lowrance Inc in 2006 that created the Navico Group. Simrad produces a range of navigation instruments designed to withstand challenging conditions and provide navigation solutions for both leisure boaters (via the Simrad Yachting range) and coastal mariners (via the Simrad Professional range). In 2008 the company absorbed MX Marine – acquired as a result of the takeover by Navico of the marine electronics division of Brunswick New Technologies Inc in 2007 – into its Simrad Professional line-up, further extending its position in the commercial GPS and DGPS sector. Over the past seventy years, Simrad has developed systems for commercial vessels, offering a range of radar systems, auto steering, navigation, and safety products for vessels of all sizes, from small vessels on inland waterways to larger coastal commercial and passenger craft. References
Navico
[ "Engineering" ]
1,395
[ "Marine electronics", "Marine engineering" ]
13,406,286
https://en.wikipedia.org/wiki/Handy%20billy
Handy billy – also known as handy-billie – is an emergency portable pump that for decades was commonly placed aboard most U.S. Navy ships from World War I on, and saw later use on civilian craft. Purpose of the pump The handy billy, formally designated "P50" because it pumped 50 gallons per minute, was gasoline-powered and could be used, during flooding conditions, in conjunction with other pumps on the ship. However, it was especially valuable when the ship lost electrical power and normal pumping ability was lost. On smaller ships, it was a critical piece of equipment. The pump gained its name because it was very “handy” and dependable. It was especially handy because it could be easily transported from place to place by two strong crew members, one at each end, as it weighed 160 pounds during World War II. Versatility The handy billy could be used for fire-fighting and/or pumping water from flooded spaces aboard ship. See also Pump References External links USS ATLANTA CL-51 - Battle damage during evening of 12 November 1942 Abandonment of the "Duncan" and Rescue of Her Survivors by the "McCalla" Fire pump aboard ship to pump sea water.
Handy billy
[ "Physics", "Chemistry" ]
249
[ "Pumps", "Hydraulics", "Physical systems", "Turbomachinery" ]
13,406,872
https://en.wikipedia.org/wiki/Acousto-optical%20spectrometer
An acousto-optical spectrometer (AOS) is based on the diffraction of light by ultrasonic waves. A piezoelectric transducer, driven by the RF signal (from the receiver), generates an acoustic wave in a crystal (the so-called Bragg cell). This acoustic wave modulates the refractive index and induces a phase grating. The Bragg cell is illuminated by a collimated laser beam. The angular dispersion of the diffracted light represents a true image of the IF-spectrum according to the amplitude and wavelengths of the acoustic waves in the crystal. The spectrum is detected by using a single linear diode array (CCD), which is placed in the focal plane of an imaging optics. Depending on the crystal and the focal length of the imaging optics, the resolution of this type of spectrometer can be varied. See also Acousto-optics Acousto-optic deflector Acousto-optic modulator Nonlinear optics References
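A rough numerical sketch of the frequency-to-position mapping described above (all device values below are illustrative assumptions, not data from any particular instrument): to first order the deflection angle is λf/v, an imaging lens of focal length F maps it to position x ≈ Fλf/v in the focal plane, and the resolution is set by the acoustic transit time across the illuminated aperture.

```python
import numpy as np

lam = 633e-9   # laser wavelength (m), HeNe-like - illustrative assumption
v   = 6570.0   # acoustic velocity in the crystal (m/s) - illustrative assumption
F   = 0.30     # focal length of the imaging optics (m) - illustrative assumption
D   = 0.02     # illuminated aperture of the Bragg cell (m) - illustrative

f_rf = np.linspace(1.0e9, 2.0e9, 5)     # IF band to analyse (Hz)

theta = lam * f_rf / v                  # first-order deflection angle (rad)
x = F * theta                           # spot position in the focal plane (m)

df = v / D                              # resolution ~ inverse acoustic transit
                                        # time across the illuminated aperture
print(np.round(x * 1e3, 1))             # spot positions in mm across the array
print(f"resolution ~ {df/1e6:.2f} MHz") # ~0.33 MHz with these numbers
```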
Acousto-optical spectrometer
[ "Physics", "Chemistry" ]
216
[ "Spectrometers", "Spectroscopy", "Spectrum (physical sciences)" ]
13,407,146
https://en.wikipedia.org/wiki/Positive%20vorticity%20advection
Positive vorticity advection, or PVA, is the advection of air with higher (more cyclonic) values of vorticity into regions of lower vorticity. It is more generally referred to as "cyclonic vorticity advection" (CVA). In the Northern Hemisphere cyclonic vorticity is positive, whilst in the Southern Hemisphere it is negative. Development Vorticity in the atmosphere is created in three different ways, which are named for their resultant vorticity. These are: Coriolis vorticity, curvature vorticity, and shear vorticity. For example, at the base of a trough, there is curvature and shear vorticity. Curvature vorticity is due to the increasing cyclonic turning as an air parcel enters the trough base. The maximum counter-clockwise spin (positive vorticity in the Northern Hemisphere) is at the trough base. Shear vorticity is caused by the difference in wind speed between air moving through the trough base (typically a jet or jet finger) and slower moving air on both the poleward and equatorward sides of the faster flow. Consider that slower air to the poleward side will be imparted counter-clockwise spin (picture faster moving air (jet) to the south and slower air to the north; spin is created). Thus, to the north (poleward) of the trough base an air parcel will experience positive vorticity. Likewise, to the south of the faster flow the air is spun in a clockwise direction (faster air (jet) to the north with slower air to the south; spin is created). Thus, to the south of the faster winds will be an area of negative vorticity. When these areas of negative and positive vorticity are moved (advected) they produce areas of negative vorticity advection (NVA) and positive vorticity advection (PVA) respectively, downstream from the trough base. The positive vorticity advection area is typically associated with divergence and upward motion. The negative vorticity advection area will be associated with convergence and downward motion. Convergence results from the way the air gains cyclonic vorticity while entering the base of the trough. The opposite happens when air is exiting the base of the trough: this air has more cyclonic vorticity than the air it is entering and therefore produces CVA, and CVA produces divergence as a result of the accompanying loss of cyclonic vorticity. Coriolis vorticity in this situation is ignored because it acts about the same on all the air flowing through the base of the trough. Significance in forecasting The divergence with CVA is significant because it creates forced lift in the atmosphere. This forced lift, in the presence of conditions favorable for atmospheric convection, can cause clouds or precipitation. NVA (anticyclonic vorticity advection, AVA) will do the opposite and lead to a stable atmosphere. In combination with a jet streak, CVA can lead to the amplification of a trough, which is significant for forecasting many conditions of the atmosphere. References External links Vorticity Advection
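The quantities discussed above are straightforward to compute on a grid. A minimal NumPy sketch (the wind field is idealized and invented for illustration) evaluates relative vorticity ζ = ∂v/∂x − ∂u/∂y and its advection −(u ∂ζ/∂x + v ∂ζ/∂y); positive values mark PVA/CVA in the Northern Hemisphere.

```python
import numpy as np

nx, ny, d = 200, 200, 10_000.0                 # grid points and spacing (m)
x = np.arange(nx) * d
y = np.arange(ny) * d
X, Y = np.meshgrid(x, y, indexing="xy")        # arrays of shape (ny, nx)

# Idealized westerly jet with a sinusoidal trough/ridge pattern embedded in it
u = 20.0 + 10.0 * np.exp(-((Y - y.mean()) / 3e5) ** 2)   # zonal wind (m/s)
v = 5.0 * np.sin(2 * np.pi * X / (nx * d))               # meridional wind (m/s)

dudy, dudx = np.gradient(u, d, d)              # axis 0 is y, axis 1 is x
dvdy, dvdx = np.gradient(v, d, d)
zeta = dvdx - dudy                             # relative vorticity (1/s)

dzdy, dzdx = np.gradient(zeta, d, d)
adv = -(u * dzdx + v * dzdy)                   # vorticity advection (1/s^2)

print("max PVA:", adv.max(), " max NVA:", adv.min())
```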
Positive vorticity advection
[ "Chemistry" ]
643
[ "Atmospheric dynamics", "Fluid dynamics" ]
13,408,012
https://en.wikipedia.org/wiki/Healthy%20user%20bias
The healthy user bias or healthy worker bias is a bias that can damage the validity of epidemiologic studies testing the efficacy of particular therapies or interventions. Specifically, it is a sampling bias or selection bias: the kind of subjects that take up an intervention, including by enrolling in a clinical trial, are not representative of the general population. People who volunteer for a study can be expected, on average, to be healthier than people who don't volunteer, as they are concerned for their health and are predisposed to follow medical advice, both factors that would aid one's health. In a sense, being healthy or active about one's health is a precondition for becoming a subject of the study, an effect that can appear under other conditions, such as studying particular groups of workers. For example, someone in ill health is unlikely to have a job as a manual laborer. As a result, studies of manual laborers are studies of people who are currently healthy enough to engage in manual labor, rather than studies of people who would do manual labor if they were healthy enough. References Further reading McMichael, A. J. (1976). Standardized mortality ratios and the “healthy worker effect”: Scratching beneath the surface. Journal of Occupational Medicine, 18, 165–168. doi:10.1097/00043764-197603000-00009 External links "Do We Really Know What Makes Us Healthy?"
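A small simulation makes the mechanism concrete (all numbers below are invented): even when a therapy has zero true effect, the naive comparison of users versus non-users shows an apparent benefit, because uptake is correlated with baseline health.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
health = rng.normal(0.0, 1.0, n)          # latent baseline health

# Uptake probability rises with baseline health: the selection mechanism
p_take = 1.0 / (1.0 + np.exp(-2.0 * health))
takes = rng.random(n) < p_take

# Outcome depends only on baseline health; the therapy itself does nothing
outcome = health + rng.normal(0.0, 1.0, n)

naive_effect = outcome[takes].mean() - outcome[~takes].mean()
print(f"naive 'effect' of a useless therapy: {naive_effect:.2f}")  # clearly > 0
```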
Healthy user bias
[ "Environmental_science" ]
312
[ "Epidemiology", "Environmental social science" ]
13,408,015
https://en.wikipedia.org/wiki/Heavy%20baryon%20chiral%20perturbation%20theory
Heavy baryon chiral perturbation theory (HBChPT) is an effective quantum field theory used to describe the interactions of pions and nucleons/baryons. It is an extension of chiral perturbation theory (ChPT), which describes only the low-energy interactions of pions. In a richer theory one would also like to describe the interactions of baryons with pions. A fully relativistic Lagrangian of nucleons is non-predictive, because the baryon mass does not vanish in the chiral limit: quantum corrections (loop diagrams) then contribute at the same order as lower-order quantities and so spoil the power counting that organizes higher-order corrections. Because the baryons are much heavier than the pions, HBChPT rests on the use of a nonrelativistic description of the baryons compared to that of the pions. Therefore, higher-order terms in the HBChPT Lagrangian come in at higher orders of 1/M, where M is the baryon mass.
Heavy baryon chiral perturbation theory
[ "Physics" ]
204
[ "Particle physics stubs", "Particle physics" ]
13,408,055
https://en.wikipedia.org/wiki/MK-9470
MK-9470 is a synthetic compound which binds to the CB1 cannabinoid receptor and functions as an inverse agonist. The 18F-labeled version, [18F]-MK-9470, is used in research as a positron emission tomography (PET) tracer for brain imaging of the CB1 receptor. References
MK-9470
[ "Chemistry" ]
94
[ "Nitriles", "Functional groups" ]
13,408,158
https://en.wikipedia.org/wiki/Hornsby%E2%80%93Akroyd%20oil%20engine
The Hornsby–Akroyd oil engine, named after its inventor Herbert Akroyd Stuart and the manufacturer Richard Hornsby & Sons, was the first successful design of an internal combustion engine using heavy oil as a fuel. It was the first to use a separate vapourising combustion chamber and is the forerunner of all hot-bulb engines, which are considered predecessors of the similar Diesel engine, developed a few years later. Early internal combustion engines were quite successful running on gaseous and light petroleum fuels. However, due to the dangerous nature of petroleum and light petroleum fuel, legal restrictions were placed on their transportation and storage. Heavier petroleum fuels, such as kerosene, were quite prevalent, as they were used for lighting, but posed specific problems when used in internal combustion engines: oil used for engine fuel must be turned to a vapour state and remain in that state during compression. Furthermore, the combustion of the fuel must be powerful, regular, and complete, to avoid deposits that will clog the valves and working parts of the engine. Early oil engines The earliest mention of an oil engine was by Robert Street, in his English patent no. 1983 of 1794, and according to Horst O. Hardenberg there is evidence that he built a working version. Other oil engines were subsequently built by Etienne Lenoir, Siegfried Marcus, Julius Hock of Vienna and George Brayton in the 19th century. In 1807 Nicéphore Niépce built a working engine powered by moss spores and coal powder, the Pyreolophore, which powered a boat upstream on the River Saône. All of these engines, with the exception of Brayton's, were non-compression. Others made refinements to the oil engine; William Dent Priestman and Emile Capitaine are some of the more notable. However, it was Herbert Akroyd Stuart's design that was the most successful. Herbert Akroyd Stuart's engine Herbert Akroyd Stuart's first prototype engines were built in 1886. In 1890, in collaboration with Charles Richard Binney, he filed Patent 7146 for Richard Hornsby & Sons of Grantham, Lincolnshire, England. The patent was entitled: Improvements in Engines Operated by the Explosion of Mixtures of Combustible Vapour or Gas and Air. Vapourising combustion chamber Stuart's oil engine design was simple, reliable and economical. It had a comparatively low compression ratio, so that the temperature of the air compressed in the combustion chamber at the end of the compression stroke was not high enough to initiate combustion. Combustion instead took place in a separated combustion chamber, the vapouriser (also called the hot bulb), mounted on the cylinder head, into which fuel was sprayed. It was connected to the cylinder by a narrow passage and was heated either by the cylinder's coolant or by exhaust gases while running; an external flame, such as a blowtorch, was used for starting. Self-ignition occurred from contact between the fuel-air mixture and the hot walls of the vapouriser. By contracting the bulb to a very narrow neck where it attached to the cylinder, a high degree of turbulence was set up as the ignited gases flashed through the neck into the cylinder, where combustion was completed. As the engine's load increased, so did the temperature of the bulb, causing the ignition period to advance; to counteract pre-ignition, water was dripped into the air intake. Four-stroke oil engine The Stuart engine is of four-stroke design. During the intake stroke (1), fresh air is inducted into the cylinder through a mechanically operated intake valve.
Simultaneously, oil is injected into the vapouriser. The vapour of the oil is almost entirely confined to the vapouriser chamber. This cloud of hot oil vapour is too rich to support combustion. On the compression stroke (2) of the piston, the fresh air is forced through the narrow neck and into the vapouriser. Just as compression is completed, the mixture becomes just right to support combustion, and ignition occurs, pushing the piston through the expansion stroke (3). Exhaust gas is then released during the exhaust stroke (4). Two-stroke hot-bulb engines Some years later, Akroyd-Stuart's design was further developed in the United States by the German emigrants Mietz and Weiss, who combined the hot-bulb engine with the two-stroke scavenging principle, developed by Joseph Day, to provide nearly twice the power compared to a four-stroke engine of the same size. Similar engines, for agricultural and marine use, were built by J. V. Svensons Automobilfabrik, Bolinders, Lysekils Mekaniska Verkstad, Pythagoras Engine Factory and many other factories in Sweden. Comparison to the Diesel engine Akroyd-Stuart's engine was the first internal combustion engine to use a pressurised fuel injection system and also the first to use a separate vapourising combustion chamber. It is the forerunner of all hot-bulb engines, which are considered a kind of predecessor of the similar Diesel engine, developed a few years later. However, the Hornsby–Akroyd oil engine and other hot-bulb engines are distinctly different from Rudolf Diesel's design, in which ignition occurs solely through the heat of compression: an oil engine has a modest compression ratio, between 3:1 and 5:1, whereas a typical diesel engine has a much higher compression ratio, between 15:1 and 20:1, making it considerably more efficient. In the oil engine, the fuel is also injected during the early intake stroke, not at the peak of compression by a high-pressure Diesel injection pump. First production oil engine Akroyd-Stuart's engines were built from 26 June 1891 by Richard Hornsby & Sons in Grantham, a large manufacturer of steam engines and agricultural equipment, as the Hornsby Akroyd Patent Oil Engine under licence, and were first sold commercially on 8 July 1892. Other engineering companies had been offered the option of manufacturing the engine, but they saw it as a threat to their business, and so declined the offer. Adaptation to compression ignition In 1892, T. H. Barton at Hornsbys enhanced the engine by replacing the vapouriser with a new cylinder head and increasing the compression ratio, making the engine run on compression alone and pre-dating Rudolf Diesel's engine. This Hornsby–Akroyd oil engine design was hugely successful: during the period from 1891 through 1905, a total of 32,417 engines were produced. They would provide electricity for lighting the Taj Mahal, the Rock of Gibraltar and the Statue of Liberty (chosen after Hornsby won the oil engine prize at the Chicago World's Fair of 1893), and many lighthouses, and for powering Guglielmo Marconi's first transatlantic radio broadcast. See also Hot-bulb engine History of the internal combustion engine Notes External links Richard Hornsby oil engine Video Clips Running Hornsby–Akroyd-Motor (1905) (Great Dorset Steam Fair, 2005)
Hornsby–Akroyd oil engine
[ "Technology" ]
1,432
[ "Stationary engines", "Engines" ]
13,408,203
https://en.wikipedia.org/wiki/FRACTRAN
FRACTRAN is a Turing-complete esoteric programming language invented by the mathematician John Conway. A FRACTRAN program is an ordered list of positive fractions together with an initial positive integer input n. The program is run by updating the integer n as follows: for the first fraction f in the list for which nf is an integer, replace n by nf; repeat this rule until no fraction in the list produces an integer when multiplied by n, then halt. Conway gives the following FRACTRAN program, called PRIMEGAME, which finds successive prime numbers: 17/91, 78/85, 19/51, 23/38, 29/33, 77/29, 95/23, 77/19, 1/17, 11/13, 13/11, 15/14, 15/2, 55/1. Starting with n=2, this FRACTRAN program generates the following sequence of integers: 2, 15, 825, 725, 1925, 2275, 425, 390, 330, 290, 770, ... After 2, this sequence contains the following powers of 2: 2^2 = 4, 2^3 = 8, 2^5 = 32, 2^7 = 128, ... The exponents of these powers of two are the primes 2, 3, 5, 7, etc. Understanding a FRACTRAN program A FRACTRAN program can be seen as a type of register machine where the registers are stored in prime exponents in the argument n. Using Gödel numbering, a positive integer can encode an arbitrary number of arbitrarily large positive integer variables. The value of each variable is encoded as the exponent of a prime number in the prime factorization of the integer. For example, the integer 60 = 2^2 × 3 × 5 represents a register state in which one variable (which we will call v2) holds the value 2 and two other variables (v3 and v5) hold the value 1. All other variables hold the value 0. A FRACTRAN program is an ordered list of positive fractions. Each fraction represents an instruction that tests one or more variables, represented by the prime factors of its denominator. For example, the fraction 21/20 = (3 × 7)/(2^2 × 5) tests v2 and v5: if v2 ≥ 2 and v5 ≥ 1, then it subtracts 2 from v2 and 1 from v5 and adds 1 to v3 and 1 to v7. Since the FRACTRAN program is just a list of fractions, these test-decrement-increment instructions are the only allowed instructions in the FRACTRAN language. In addition the following restrictions apply: Each time an instruction is executed, the variables that are tested are also decremented. The same variable cannot be both decremented and incremented in a single instruction (otherwise the fraction representing that instruction would not be in its lowest terms). Therefore each FRACTRAN instruction consumes variables as it tests them. It is not possible for a FRACTRAN instruction to directly test if a variable is 0 (however, an indirect test can be implemented by creating a default instruction that is placed after other instructions that test a particular variable). Creating simple programs Addition The simplest FRACTRAN program is a single instruction such as 3/2. This program can be represented as a (very simple) algorithm as follows: while v2 > 0, subtract 1 from v2 and add 1 to v3. Given an initial input of the form 2^a 3^b, this program will compute the sequence 2^(a−1) 3^(b+1), 2^(a−2) 3^(b+2), etc., until eventually, after a steps, no factors of 2 remain and the product with 3/2 no longer yields an integer; the machine then stops with a final output of 3^(a+b). It therefore adds two integers together. Multiplication We can create a "multiplier" by "looping" through the "adder". In order to do this we need to introduce states into our algorithm. This algorithm will take a number 2^a 3^b and produce 5^(ab). State B is a loop that adds v3 to v5 and also moves v3 to v7, and state A is an outer control loop that repeats the loop in state B a times. State A also restores the value of v3 from v7 after the loop in state B has completed. We can implement states using new variables as state indicators. The state indicators for state B will be v11 and v13.
Note that we require two state control indicators for one loop: a primary flag (v11) and a secondary flag (v13). Because each indicator is consumed whenever it is tested, we need a secondary indicator to say "continue in the current state"; this secondary indicator is swapped back to the primary indicator in the next instruction, and the loop continues. Adding FRACTRAN state indicators and instructions to the multiplication algorithm, the state B loop is performed by 455/33 (consume v3 and v11; add 1 to v5, v7 and v13) and 11/13 (swap the secondary flag back to the primary), while 1/11 exits state B when v3 is exhausted. When we write out the FRACTRAN instructions, we must put the state A instructions last, because state A has no state indicators - it is the default state if no state indicators are set. So as a FRACTRAN program, the multiplier becomes (455/33, 11/13, 1/11, 3/7, 11/2, 1/3). With input 2^a 3^b this program produces output 5^(ab). Subtraction and division In a similar way, we can create a FRACTRAN "subtractor", and repeated subtractions allow us to create a "quotient and remainder" algorithm. Writing out the corresponding FRACTRAN program, input 2^n 3^d 11 produces output 5^q 7^r, where n = qd + r and 0 ≤ r < d. Conway's prime algorithm Conway's prime generating algorithm above is essentially a quotient and remainder algorithm within two loops. Given input of the form 2^n 7^m where 0 ≤ m < n, the algorithm tries to divide n+1 by each number from n down to 1, until it finds the largest number k that is a divisor of n+1. It then returns 2^(n+1) 7^(k−1) and repeats. The only times that the sequence of state numbers generated by the algorithm produces a power of 2 is when k is 1 (so that the exponent of 7 is 0), which only occurs if the exponent of 2 is a prime. A step-by-step explanation of Conway's algorithm can be found in Havil (2007). For this program, reaching the prime numbers 2, 3, 5, 7, ... requires respectively 19, 69, 281, 710, ... steps. A variant of Conway's program also exists, which differs from the above version by two fractions. This variant is a little faster: reaching 2, 3, 5, 7, ... takes it 19, 69, 280, 707, ... steps. A single iteration of this program, checking a particular number N for primeness, takes a number of steps that depends on N, on the largest integer divisor of N, and on the floor function. In 1999, Devin Kilminster demonstrated a shorter, ten-instruction program; for the initial input n = 10, successive primes are generated by subsequent powers of 10. Other examples A FRACTRAN program also exists that calculates the Hamming weight H(a) of the binary expansion of a, i.e. the number of 1s in the binary expansion of a; given input 2^a, its output is 13^H(a). Notes See also One-instruction set computer Collatz conjecture References External links Lecture from John Conway: "Fractran: A Ridiculous Logical Language" "Prime Number Pathology: Fractran" Prime Number Pathology FRACTRAN - (Esolang wiki) Ruby implementation and example programs Project Euler Problem 308 "Building Fizzbuzz in Fractran from the Bottom Up" Chris Lomont, "A Universal FRACTRAN Interpreter in FRACTRAN"
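The run rule takes only a few lines of code in any language. Here is a minimal Python interpreter (our own sketch) that runs PRIMEGAME and the multiplier quoted above:

```python
from fractions import Fraction

def fractran(program, n, max_steps=100_000):
    """Run a FRACTRAN program: repeatedly replace n by n*f for the first
    fraction f in the list with n*f integral; halt when none qualifies."""
    program = [Fraction(p, q) for p, q in program]
    out = [n]
    for _ in range(max_steps):
        for f in program:
            if (n * f).denominator == 1:   # n*f is an integer
                n = int(n * f)
                out.append(n)
                break
        else:
            break                          # no fraction applied: halt
    return out

# Conway's PRIMEGAME (the 14 fractions quoted above)
PRIMEGAME = [(17, 91), (78, 85), (19, 51), (23, 38), (29, 33), (77, 29),
             (95, 23), (77, 19), (1, 17), (11, 13), (13, 11), (15, 14),
             (15, 2), (55, 1)]

seq = fractran(PRIMEGAME, 2, max_steps=10_000)
powers = [s for s in seq if s & (s - 1) == 0 and s > 2]  # powers of 2 after 2
print([p.bit_length() - 1 for p in powers])   # 2, 3, 5, 7, ... the primes

# The multiplier from the text: input 2^a 3^b, output 5^(a*b)
MULT = [(455, 33), (11, 13), (1, 11), (3, 7), (11, 2), (1, 3)]
print(fractran(MULT, 2**3 * 3**4)[-1] == 5**12)   # True
```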
FRACTRAN
[ "Mathematics" ]
1,461
[ "Recreational mathematics" ]
13,409,455
https://en.wikipedia.org/wiki/Clarkson%27s%20inequalities
In mathematics, Clarkson's inequalities, named after James A. Clarkson, are results in the theory of Lp spaces. They give bounds for the Lp-norms of the sum and difference of two measurable functions in Lp in terms of the Lp-norms of those functions individually. Statement of the inequalities Let (X, Σ, μ) be a measure space; let f, g : X → R be measurable functions in Lp. Then, for 2 ≤ p < +∞, \left\| \frac{f + g}{2} \right\|_{L^p}^p + \left\| \frac{f - g}{2} \right\|_{L^p}^p \le \frac{1}{2} \left( \| f \|_{L^p}^p + \| g \|_{L^p}^p \right). For 1 < p < 2, \left\| \frac{f + g}{2} \right\|_{L^p}^q + \left\| \frac{f - g}{2} \right\|_{L^p}^q \le \left( \frac{1}{2} \| f \|_{L^p}^p + \frac{1}{2} \| g \|_{L^p}^p \right)^{q/p}, where \frac{1}{p} + \frac{1}{q} = 1, i.e., q = p ⁄ (p − 1). References
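A numerical spot-check of both inequalities, using vectors in R^n with counting measure so that the L^p norm is the ordinary p-norm (our own sketch; it checks instances, it is not a proof):

```python
import numpy as np

def lp(x, p):
    """The p-norm of a vector: the L^p norm under counting measure."""
    return (np.abs(x) ** p).sum() ** (1.0 / p)

rng = np.random.default_rng(1)
f, g = rng.normal(size=50), rng.normal(size=50)

for p in (2.0, 3.0, 7.5):                      # case 2 <= p < infinity
    lhs = lp((f + g) / 2, p) ** p + lp((f - g) / 2, p) ** p
    rhs = 0.5 * (lp(f, p) ** p + lp(g, p) ** p)
    assert lhs <= rhs + 1e-12

for p in (1.5, 1.1):                           # case 1 < p < 2, q = p/(p-1)
    q = p / (p - 1.0)
    lhs = lp((f + g) / 2, p) ** q + lp((f - g) / 2, p) ** q
    rhs = (0.5 * lp(f, p) ** p + 0.5 * lp(g, p) ** p) ** (q / p)
    assert lhs <= rhs + 1e-12

print("Clarkson's inequalities hold for the sampled vectors")
```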
Clarkson's inequalities
[ "Mathematics" ]
146
[ "Binary relations", "Mathematical relations", "Inequalities (mathematics)", "Mathematical problems", "Mathematical theorems" ]
13,409,547
https://en.wikipedia.org/wiki/Asymmetric%20carbon
In stereochemistry, an asymmetric carbon is a carbon atom that is bonded to four different types of atoms or groups of atoms. The four atoms and/or groups attached to the carbon atom can be arranged in space in two different ways that are mirror images of each other, and which lead to so-called left-handed and right-handed versions (stereoisomers) of the same molecule. Molecules that cannot be superimposed on their own mirror image are said to be chiral; as the asymmetric carbon is the center of this chirality, it is also known as a chiral carbon. As an example, malic acid (HOOC−CH2−CH(OH)−COOH) has 4 carbon atoms but just one of them is asymmetric. The asymmetric carbon atom, bolded in the formula, is the one attached to two carbon atoms, an oxygen atom, and a hydrogen atom. One may initially be inclined to think this atom is not asymmetric because it is attached to two carbon atoms, but because those two carbon atoms are not attached to exactly the same things, there are two different groups of atoms that the carbon atom in question is attached to, therefore making it an asymmetric carbon atom. Knowing the number of asymmetric carbon atoms, one can calculate the maximum possible number of stereoisomers for any given molecule as follows: if n is the number of asymmetric carbon atoms, then the maximum number of isomers = 2^n (Le Bel-van't Hoff rule). This is a corollary of Le Bel and van't Hoff's simultaneously announced conclusions, in 1874, that the most probable orientation of the bonds of a carbon atom linked to four groups or atoms is toward the apexes of a tetrahedron, and that this accounted for all then-known phenomena of molecular asymmetry (which involved a carbon atom bearing four different atoms or groups). A tetrose with 2 asymmetric carbon atoms has 2^2 = 4 stereoisomers. An aldopentose with 3 asymmetric carbon atoms has 2^3 = 8 stereoisomers. An aldohexose with 4 asymmetric carbon atoms has 2^4 = 16 stereoisomers. References
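The 2^n rule is trivial to tabulate; a short Python sketch for the examples in the text:

```python
# Le Bel-van't Hoff rule: at most 2^n stereoisomers for n asymmetric carbons.
for name, n in [("malic acid", 1), ("tetrose", 2),
                ("aldopentose", 3), ("aldohexose", 4)]:
    print(f"{name}: up to {2 ** n} stereoisomers")
# aldohexose: up to 16 stereoisomers
```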
Asymmetric carbon
[ "Physics", "Chemistry" ]
470
[ "Spacetime", "Stereochemistry", "Space", "nan" ]
13,409,578
https://en.wikipedia.org/wiki/Creola%20bodies
Creola bodies are a histopathologic finding indicative of asthma. Found in a patient's sputum, they are ciliated columnar cells sloughed from the bronchial mucosa of a patient with asthma. Other common findings in the sputum of asthma patients include Charcot-Leyden crystals, Curschmann's spirals, and eosinophils (and excessive amounts of sputum). Yoshihara et al. reported that 60% of pediatric asthmatic patients demonstrating acute symptoms were found to have Creola bodies in their sputum. These patients had increased levels of neutrophil-mediated cytokine activity, leading to the conclusion that "epithelial damage is associated with a locally enhanced chemotactic signal for and activity of neutrophils, but not eosinophils, during acute exacerbations of paediatric asthma." Ogata et al. found significant correlations among the Creola body (CrB) score, the concentration of sputum ECP and %FEV1.0 (p < 0.001). The CrB score on the day of clinical appraisal significantly correlated with the number of days of treatment needed for remission. These results were in keeping with the hypothesis that eosinophils cause desquamation of respiratory epithelial cells, resulting in prolongation of asthmatic attacks. Observation of CrB seemed to be useful as a marker of the duration of asthmatic attacks. References
Creola bodies
[ "Chemistry" ]
318
[ "Histopathology", "Microscopy" ]
13,409,671
https://en.wikipedia.org/wiki/Turpan%20water%20system
The Turpan water system, also called the Turfan kārēz system, supplies water through vertical wells linked by underground channels in the Turpan Depression of Xinjiang, China. "Karez" is a word in the local Uyghur language, derived from the Persian name for the system on which it is based: the 3,000-year-old qanāt. Turpan has the Turpan Karez Paradise (a Protected Area of the People's Republic of China), which is dedicated to demonstrating its karez water system, as well as exhibiting other historical artifacts. Turpan's karez well system was crucial in Turpan's development as an important oasis stopover on the Silk Road, which skirted the barren and hostile Taklamakan Desert. Description Turpan's karez water system is made up of a horizontal series of vertically dug wells linked by underground canals that collect water from the watershed surface runoff at the base of the Tian Shan Mountains and the nearby Flaming Mountains. The canals channel the water to the surface by gravity flow down the slope of the Turpan Depression. The canals are mostly underground, both to reduce evaporation and so that the gently sloping, gravity-fed channels can reach distant fields. The system includes wells, dams, and underground canals built to store water and control its flow. Vertical wells are dug at various points to tap into the groundwater flowing down sloping land from the source, the mountain runoff. The water is then channeled through underground canals dug from the bottom of one well to the next, and then on to the desired destination. Turpan's karez irrigation system of specially connected wells is believed to be of indigenous origin in China, perhaps combined with technology arriving from regions further west. In Xinjiang, the greatest number of karez wells are in the Turpan Depression, where over 1,100 karez wells and channels remain today, with a total length of over . The local geography makes karez wells practical for agricultural irrigation and other uses. Turpan is located in the second-deepest geographical depression in the world, with over of land below sea level and with soil that forms a sturdy basin. Water naturally flows down from the nearby mountains during the rainy season in an underground current to the low depression basin under the desert. The Turpan summer is very hot and dry, with periods of wind and blowing sand. Importance Ample water was crucial to Turpan, so that the oasis city could service the many caravans on the Silk Route resting there near a route skirting the Taklamakan Desert. The caravans included merchant traders and missionaries with their armed escorts, and animals, including camels, sometimes numbering into the thousands, along with camel drivers, agents and other personnel, all of whom might stay for a week or more. The caravans needed pastures for their animals, resting facilities, trading bazaars for conducting business, and replenishment of food and water. Potential UNESCO World Heritage Site Karez wells in the Turfan area are on the UNESCO World Heritage Sites Tentative List for China. Threatened by global warming There are 20,000 glaciers in Xinjiang – nearly half of all the glaciers in China. The water from the glaciers, delivered via the underground channels, has provided a stable year-round water source, independent of season, for thousands of years.
But since the 1950s, Xinjiang's glaciers have retreated by 21 to 27 percent due to global warming, threatening the agricultural productivity of the region. See also References External links Satellite map showing deep basin from Google Link to Silk Road map Turpan – Ancient Stop on the Silk Road Karez close to Turfan Turpan Water supply Water wells Chinese architectural history Sites along the Silk Road Major National Historical and Cultural Sites in Xinjiang Irrigation projects Irrigation in China
Turpan water system
[ "Chemistry", "Engineering", "Environmental_science" ]
797
[ "Hydrology", "Water wells", "Irrigation projects", "Environmental engineering", "Water supply" ]
13,409,706
https://en.wikipedia.org/wiki/Null%20cell
Null cells, a subset of large circulating white blood cells, mimic the appearance of T or B lymphocytes but do not possess their defining surface receptors. Predominantly, these are natural killer (NK) cells, with a smaller portion being hematopoietic stem cells circulating freely in the bloodstream. Oncology In oncology, certain pathological null cells contribute to the development of cancers, such as null cell adenomas of the pituitary gland. These adenomas often grow slowly and secrete hormones in patterns that are not well understood, potentially leading to necrosis of surrounding brain tissue and thereby affecting neurological function. The discovery of null cells in the benign adenohypophysis suggests that such adenomas might evolve from pre-existing benign null cells, shedding light on the tumors' origins and on potential interventions. Viruses In viral infections, the interaction between viruses and the immune system can lead to the emergence of null cells with impaired functionality. For example, cytomegalovirus (CMV) has been shown to induce T-lymphocytes to stop expressing CD28 and other critical surface molecules. This alteration essentially converts these T-cells into a form of null cell that lacks the additional properties of NK cells and therefore fails to contribute to the immune response, which can result in immunodeficiency. Conclusion Understanding the roles and mechanisms of null cells within the immune system and in pathological conditions such as cancer and viral infections not only provides insight into fundamental biological processes but also opens avenues for therapeutic interventions targeting these unique cell types. See also Natural killer cell References External links Cell anatomy Histology
Null cell
[ "Chemistry" ]
339
[ "Histology", "Microscopy" ]
13,409,770
https://en.wikipedia.org/wiki/Ferruginous%20body
A ferruginous body is a histopathologic finding in interstitial lung disease suggestive of significant asbestos exposure (asbestosis). Asbestos exposure is associated with occupations such as shipbuilding, roofing, plumbing, and construction. Ferruginous bodies appear as small brown nodules in the alveolar septa. They are typically indicative of asbestos inhalation (when the presence of asbestos is verified, they are called "asbestos bodies"). In this case they are asbestos fibers coated with an iron-rich material derived from proteins such as ferritin and hemosiderin. Ferruginous bodies are believed to be formed by macrophages that have phagocytosed and attempted to digest the fibers. Additional images References Histopathology Asbestos
Ferruginous body
[ "Chemistry", "Environmental_science" ]
167
[ "Toxicology", "Asbestos", "Histopathology", "Microscopy" ]
1,000,474
https://en.wikipedia.org/wiki/PLATO%20%28computer%20system%29
PLATO (Programmed Logic for Automatic Teaching Operations), also known as Project Plato and Project PLATO, was the first generalized computer-assisted instruction system. Starting in 1960, it ran on the University of Illinois's ILLIAC I computer. By the late 1970s, it supported several thousand graphics terminals distributed worldwide, running on nearly a dozen different networked mainframe computers. Many modern concepts in multi-user computing were first developed on PLATO, including forums, message boards, online testing, email, chat rooms, picture languages, instant messaging, remote screen sharing, and multiplayer video games. PLATO was designed and built by the University of Illinois and functioned for four decades, offering coursework (elementary through university) to UIUC students, local schools, prison inmates, and other universities. Courses were taught in a range of subjects, including Latin, chemistry, education, music, Esperanto, and primary mathematics. The system included a number of features useful for pedagogy, including text overlaying graphics, contextual assessment of free-text answers depending on the inclusion of keywords, and feedback designed to respond to alternative answers. Rights to market PLATO as a commercial product were licensed by Control Data Corporation (CDC), the manufacturer on whose mainframe computers the PLATO IV system was built. CDC President William Norris planned to make PLATO a force in the computer world, but found that marketing the system was not as easy as hoped. PLATO nevertheless built a strong following in certain markets, and the last production PLATO system was in use until 2006. Innovations PLATO was either the first or an early example of many now-common technologies: Hardware The plasma display (Donald Bitzer). The touchscreen (Donald Bitzer). Display Graphics stored in downloadable fonts (custom character sets). Online communities Notesfiles (precursor to newsgroups), 1973. Term-talk (1:1 chat). Screen sharing, used by instructors to help students, a precursor of Timbuktu. Common computer game genres, including many of the earliest (possibly the first) real-time multiplayer games: Multiplayer games (Rick Blomme). Dungeon games, which included the first video game boss and likely the first graphical dungeon computer game. Space combat. Flight simulation; this probably inspired UIUC student Bruce Artwick to start Sublogic, whose simulator was acquired and later became Microsoft Flight Simulator. Military simulations. 3D maze games, including the first PLATO 3-D walkthrough maze game, based on a story by J. G. Ballard. Quest simulation, like Trek with monsters, trees, and treasures. Solitaire. Educational Training systems, including an ambitious ICAI programming system featuring partial-order plans, used to train Con Edison steam plant operators. History Impetus Before the 1944 G.I. Bill that provided free college education to World War II veterans, higher education was limited to a minority of the US population, though only 9% of the population was in the military. The trend towards greater enrollment was notable by the early 1950s, and the problem of providing instruction for the many new students was a serious concern to university administrators: if computerized automation had increased factory production, it could do the same for academic instruction. The USSR's 1957 launching of the Sputnik I artificial satellite energized the United States' government into spending more on science and engineering education. In 1958, the U.S.
Air Force's Office of Scientific Research held a conference on computer instruction at the University of Pennsylvania; interested parties, notably IBM, presented studies. Genesis Around 1959, Chalmers W. Sherwin, a physicist at the University of Illinois, suggested a computerised learning system to William Everett, the engineering college dean, who, in turn, recommended that Daniel Alpert, another physicist, convene a meeting about the matter with engineers, administrators, mathematicians, and psychologists. After weeks of meetings they were unable to agree on a single design. Before conceding failure, Alpert mentioned the matter to laboratory assistant Donald Bitzer, who had been thinking about the problem and suggested he could build a demonstration system. Project PLATO was established soon afterwards, and in 1960 the first system, PLATO I, operated on the local ILLIAC I computer. It included a television set for display and a special keyboard for navigating the system's function menus; PLATO II, in 1961, supported two users at once, one of the first implementations of multi-user time-sharing. The PLATO system was redesigned between 1963 and 1969; PLATO III allowed "anyone" to design new lesson modules using the TUTOR programming language, conceived in 1967 by biology graduate student Paul Tenczar. Built on a CDC 1604 given to them by William Norris, PLATO III could run up to 20 terminals simultaneously, and was used by local facilities in Champaign–Urbana that could enter the system with their custom terminals. The only remote PLATO III terminal was located near the state capitol in Springfield, Illinois, at Springfield High School. It was connected to the PLATO III system by a video connection and a separate dedicated line for keyboard data. PLATO I, II, and III were funded by small grants from a combined Army-Navy-Air Force funding pool. By the time PLATO III was in operation, everyone involved was convinced it was worthwhile to scale up the project. Accordingly, in 1967, the National Science Foundation granted the team steady funding, allowing Alpert to set up the Computer-based Education Research Laboratory (CERL) at the University of Illinois Urbana–Champaign campus. The system was capable of supporting 20 time-sharing terminals. Multimedia experiences (PLATO IV) In 1972, with the introduction of PLATO IV, Bitzer declared general success, claiming that the goal of generalized computer instruction was now available to all. However, the terminals were very expensive (about $12,000). The PLATO IV terminal had several major innovations: Plasma display screen: Bitzer's orange plasma display incorporated both memory and bitmapped graphics into one unit. The display was a 512×512 bitmap, with both character and vector plotting done by hardwired logic. It included fast vector line drawing capability and ran at 1260 baud, rendering 60 lines or 180 characters per second. Custom character sets: users could provide their own characters to support rudimentary bitmap graphics. Touch panel: a 16×16 grid infrared touch panel, allowing students to answer questions by touching anywhere on the screen. Microfiche images: compressed air powered a piston-driven microfiche image selector that permitted colored images to be projected on the back of the screen under program control. Audio snippets: a random-access audio device used a magnetic disc with the capacity to hold 17 total minutes of pre-recorded audio. It could retrieve any of 4096 audio clips for playback within 0.4 seconds.
By 1980, the device was being commercially produced by Education and Information Systems, Incorporated, with a capacity of just over 22 minutes. A Votrax voice synthesizer. The Gooch Synthetic Woodwind (named after inventor Sherwin Gooch), a synthesizer that offered four-voice music synthesis to provide sound in PLATO courseware. This was later supplanted on the PLATO V terminal by the Gooch Cybernetic Synthesizer, which had sixteen voices that could be programmed individually or combined to make more complex sounds. Bruce Parello, a student at the University of Illinois in 1972, created the first digital emojis on the PLATO IV system. Influence on PARC and Apple Early in 1972, researchers from Xerox PARC were given a tour of the PLATO system at the University of Illinois. At this time, they were shown parts of the system, such as the Insert Display/Show Display (ID/SD) application generator for pictures on PLATO (later translated into a graphics-draw program on the Xerox Star workstation); the Charset Editor for "painting" new characters (later translated into a "Doodle" program at PARC); and the Term Talk and Monitor Mode communications programs. Many of the new technologies they saw were adopted and improved upon when these researchers returned to Palo Alto, California. They subsequently transferred improved versions of this technology to Apple Inc. CDC years As PLATO IV reached production quality, William Norris (CDC) became increasingly interested in it as a potential product. His interest was twofold. From a strict business perspective, he was evolving Control Data into a service-based company instead of a hardware one, and was increasingly convinced that computer-based education would become a major market in the future. At the same time, Norris was troubled by the unrest of the late 1960s, and felt that much of it was due to social inequalities that needed to be addressed. PLATO offered a solution by providing higher education to segments of the population that would otherwise never be able to afford a university education. Norris provided CERL with machines on which to develop their system in the late 1960s. In 1971, he set up a new division within CDC to develop PLATO "courseware", and eventually many of CDC's own initial training and technical manuals ran on it. In 1974, PLATO was running on in-house machines at CDC headquarters in Minneapolis, and in 1976, the company purchased the commercial rights in exchange for a new CDC Cyber machine. CDC announced the acquisition soon after, claiming that by 1985, 50% of the company's income would be related to PLATO services. Through the 1970s, CDC tirelessly promoted PLATO, both as a commercial tool and as one for re-training unemployed workers in new fields. Norris refused to give up on the system, and invested in several non-mainstream courses, including a crop-information system for farmers and various courses for inner-city youth. CDC even went as far as to place PLATO terminals in some shareholders' homes to demonstrate the concept of the system. In the early 1980s, CDC started heavily advertising the service, apparently due to increasing internal dissent over the now $600 million project, taking out print and even radio ads promoting it as a general tool. The Minneapolis Tribune was unconvinced by the ad copy and started an investigation of the claims. In the end, it concluded that while PLATO was not proven to be a better education system, everyone using it nevertheless enjoyed it, at least.
An official evaluation by an external testing agency reached roughly the same conclusions, suggesting that everyone enjoyed using it, but that it was essentially equal to an average human teacher in terms of student advancement. Of course, a computerized system merely equal to a human teacher should have been a major achievement, the very concept for which the early pioneers in CBT were aiming. A computer could serve all the students in a school for the cost of maintaining it, and it wouldn't go on strike. However, CDC charged $50 an hour for access to their data center, in order to recoup some of their development costs, making it considerably more expensive than a human teacher on a per-student basis. PLATO was, therefore, a failure as a profitable commercial enterprise, although it did find some use in large companies and government agencies willing to invest in the technology. An attempt to mass-market the PLATO system was introduced in 1980 as Micro-PLATO, which ran the basic TUTOR system on a CDC "Viking-721" terminal and various home computers. Versions were built for the TI-99/4A, Atari 8-bit computers, Zenith Z-100 and, later, the Radio Shack TRS-80 and IBM Personal Computer. Micro-PLATO could be used stand-alone for normal courses, or could connect to a CDC data center for multiuser programs. To make the latter affordable, CDC introduced the Homelink service for $5 an hour. As late as 1984, Norris continued to praise PLATO, announcing that it would be only a few years before it represented a major source of income for CDC. In 1986, Norris stepped down as CEO, and the PLATO service was slowly killed off. He later claimed that Micro-PLATO was one of the reasons PLATO got off-track. They had started on the TI-99/4A, but then Texas Instruments pulled the plug and they moved to other systems like the Atari, which soon did the same. He felt the effort was a waste of time anyway, as the system's value was in its online nature, which Micro-PLATO initially lacked. Bitzer was more forthright about CDC's failure, blaming their corporate culture for the problems. He noted that development of the courseware was averaging $300,000 per delivery hour, many times what CERL was paying for similar products. This meant that CDC had to charge high prices in order to recoup its costs, prices that made the system unattractive. The reason for these high prices, he suggested, was that CDC had set up a division that had to keep itself profitable via courseware development, forcing it to raise prices in order to keep its headcount up during slow periods. PLATO V: multimedia Intel 8080 microprocessors were introduced in the new PLATO V terminals. They could download small software modules and execute them locally, a way to augment the PLATO courseware with rich animation and other sophisticated capabilities. Online community Although PLATO was designed for computer-based education, perhaps its most enduring legacy is its place in the origins of online community. This was made possible by PLATO's groundbreaking communication and interface capabilities, features whose significance is only lately being recognized by computer historians. PLATO Notes, created by David R. Woolley in 1973, was among the world's first online message boards, and years later became the direct progenitor of Lotus Notes. PLATO's plasma panels were well suited to games, although its I/O bandwidth (180 characters per second or 60 graphic lines per second) was relatively slow.
By virtue of the 1500 shared 60-bit variables initially available per game, it was possible to implement online games. Because it was an educational computer system, most of the user community were keenly interested in games. In much the same way that the PLATO hardware and development platform inspired advances elsewhere (such as at Xerox PARC and MIT), many popular commercial and Internet games ultimately derived their inspiration from PLATO's early games. As one example, Castle Wolfenstein by PLATO alum Silas Warner was inspired by PLATO's dungeon games (see below), in turn inspiring Doom and Quake. Thousands of multiplayer online games were developed on PLATO from around 1970 through the 1980s, with the following notable examples: Daleske's Empire, a top-view multiplayer space game based on Star Trek. Either Empire or Colley's Maze War is the first networked multiplayer action game. It was ported to Trek82, Trek83, ROBOTREK, Xtrek, and Netrek, and also adapted (without permission) for the Apple II computer by fellow PLATO alum Robert Woodhead (of Wizardry fame), as a game called Galactic Attack. The original Freecell by Alfille (from Baker's concept). Fortner's Airfight, probably the direct inspiration for (PLATO alum) Bruce Artwick's Microsoft Flight Simulator. Haefeli and Bridwell's Panther (a vector graphics-based tank-war game, anticipating Atari's Battlezone). Many other first-person shooters, most notably Bowery's Spasim and Witz and Boland's Futurewar, believed to be the first FPS. Countless games inspired by the role-playing game Dungeons & Dragons, including the original Rutherford/Whisenhunt and Wood dnd (later ported to the PDP-10/11 by Lawrence, who had earlier visited PLATO), believed to be the first dungeon crawl game, which was followed by Moria, Rogue, Dry Gulch (a western-style variation), and Bugs-n-Drugs (a medical variation), all presaging MUDs (Multi-User Domains) and MOOs (MUDs, Object Oriented), as well as popular first-person shooters like Doom and Quake, and MMORPGs (massively multiplayer online role-playing games) like EverQuest and World of Warcraft. Avatar, PLATO's most popular game, is one of the world's first MUDs and accumulated over 1 million hours of use. The games Doom and Quake can trace part of their lineage back to PLATO programmer Silas Warner. PLATO's communication tools and games formed the basis for an online community of thousands of PLATO users, which lasted for well over twenty years. PLATO's games became so popular that a program called "The Enforcer" was written to run as a background process to regulate or disable game play at most sites and times – a precursor to parental-style control systems that regulate access based on content rather than security considerations. In September 2006, the Federal Aviation Administration retired its PLATO system, the last system that ran the PLATO software on a CDC Cyber mainframe, from active duty. Existing PLATO-like systems now include NovaNET and Cyber1.org. By early 1976, the original PLATO IV system had 950 terminals giving access to more than 3500 contact hours of courseware, and additional systems were in operation at CDC and Florida State University. Eventually, over 12,000 contact hours of courseware were developed, much of it by university faculty for higher education. PLATO courseware covers a full range of high-school and college courses, as well as topics such as reading skills, family planning, Lamaze training and home budgeting.
In addition, authors at the University of Illinois School of Basic Medical Sciences (now the University of Illinois College of Medicine) devised a large number of basic science lessons and a self-testing system for first-year students. However, the most popular "courseware" remained the multi-user games and role-playing video games such as dnd, although it appears CDC was uninterested in this market. As the value of a CDC-based solution disappeared in the 1980s, interested educators ported the engine first to the IBM PC, and later to web-based systems. Custom character sets In the early 1970s, some people working in the modern foreign languages group at the University of Illinois began working on a set of Hebrew lessons, originally without good system support for leftward writing. In preparation for a PLATO demo in Tehran in which he would participate, Bruce Sherwood worked with Don Lee to implement support for leftward writing, including Persian (Farsi), which uses the Arabic script. There was no funding for this work, which was undertaken only due to Sherwood's personal interest, and no curriculum development occurred for either Persian or Arabic. However, Peter Cole, Robert Lebowitz, and Robert Hart used the new system capabilities to redo the Hebrew lessons. The PLATO hardware and software supported the design and use of one's own 8-by-16 characters, so most languages could be displayed on the graphics screen (including those written right-to-left). University of Illinois School of Music PLATO Project (Technology and Research-based Chronology) A PLATO-compatible music language known as OPAL (Octave-Pitch-Accent-Length) was developed for these synthesizers, as well as a compiler for the language, two music text editors, a filing system for music binaries, programs to play the music binaries in real time and print musical scores, and many debugging and compositional aids. A number of interactive compositional programs were also written. Gooch's peripherals were heavily used for music education courseware as created, for example, by the University of Illinois School of Music PLATO Project. From 1970 to 1994, the University of Illinois (U of I) School of Music explored the use of the Computer-based Education Research Laboratory (CERL) PLATO computer system to deliver online instruction in music. Led by G. David Peters, music faculty and students worked with PLATO's technical capabilities to produce music-related instructional materials and experimented with their use in the music curriculum. Peters began his work on PLATO III. By 1972, the PLATO IV system made it technically possible to introduce multimedia pedagogies that were not available in the marketplace until years later. Between 1974 and 1988, 25 U of I music faculty participated in software curriculum development, and more than 40 graduate students wrote software and assisted the faculty in its use. In 1988, the project broadened its focus beyond PLATO to accommodate the increasing availability and use of microcomputers. The broader scope resulted in renaming the project The Illinois Technology-based Music Project. Work in the School of Music continued on other platforms after the CERL PLATO system shutdown in 1994. Over the 24-year life of the music project, its many participants moved into educational institutions and into the private sector. Their influence can be traced to numerous multimedia pedagogies, products, and services in use today, especially by musicians and music educators.
Significant early efforts Pitch recognition/performance judging In 1969, G. David Peters began researching the feasibility of using PLATO to teach trumpet students to play with increased pitch and rhythmic precision. He created an interface for the PLATO III terminal. The hardware consisted of (1) filters that could determine the true pitch of a tone, and (2) a counting device to measure tone duration. The device accepted and judged rapid notes, two notes trilled, and lip slurs. Peters demonstrated that judging instrumental performance for pitch and rhythmic accuracy was feasible in computer-assisted instruction. Rhythm notation and perception By 1970, a random-access audio device was available for use with PLATO III. In 1972, Robert W. Placek conducted a study that used computer-assisted instruction for rhythm perception. Placek used the random-access audio device attached to a PLATO III terminal, for which he developed music notation fonts and graphics. Students majoring in elementary education were asked to (1) recognize elements of rhythm notation, and (2) listen to rhythm patterns and identify their notations. This was the first known application of the PLATO random-access audio device to computer-based music instruction. Study participants were interviewed about the experience and found it both valuable and enjoyable. Of particular value was PLATO's immediate feedback. Though participants noted shortcomings in the quality of the audio, they generally indicated that they were able to learn the basic skills of rhythm notation recognition. The PLATO IV terminal included many new devices and yielded two notable music projects: Visual diagnostic skills for instrumental music educators By the mid-1970s, James O. Froseth (University of Michigan) had published training materials that taught instrumental music teachers to visually identify typical problems demonstrated by beginning band students. For each instrument, Froseth developed an ordered checklist of what to look for (i.e., posture, embouchure, hand placement, instrument position, etc.) and a set of 35mm slides of young players demonstrating those problems. In timed class exercises, trainees briefly viewed slides and recorded their diagnoses on the checklists, which were reviewed and evaluated later in the training session. In 1978, William H. Sanders adapted Froseth's program for delivery using the PLATO IV system. Sanders transferred the slides to microfiche for rear-projection through the PLATO IV terminal's plasma display. In timed drills, trainees viewed the slides, then filled in the checklists by touching them on the display. The program gave immediate feedback and kept aggregate records. Trainees could vary the timing of the exercises and repeat them whenever they wished. Sanders and Froseth subsequently conducted a study comparing traditional classroom delivery of the program to delivery using PLATO. The results showed no significant difference between the delivery methods for (a) student post-test performance and (b) attitudes toward the training materials. However, students using the computer appreciated the flexibility to set their own practice hours, completed significantly more practice exercises, and did so in significantly less time. Musical instrument identification In 1967, Allvin and Kuhn used a four-channel tape recorder interfaced to a computer to present pre-recorded models for judging sight-singing performances. In 1969, Ned C. Deihl and Rudolph E.
Radocy conducted a computer-assisted instruction study in music that included discriminating aural concepts related to phrasing, articulation, and rhythm on the clarinet. They used a four-track tape recorder interfaced to a computer to provide pre-recorded audio passages. Messages were recorded on three tracks and inaudible signals on the fourth track, with two hours of play/record time available. This research further demonstrated that computer-controlled audio with four-track tape was possible. In 1979, Williams used a digitally controlled cassette tape recorder that had been interfaced to a minicomputer (Williams, M.A. "A comparison of three approaches to the teaching of auditory-visual discrimination, sight singing and music dictation to college music students: A traditional approach, a Kodaly approach, and a Kodaly approach augmented by computer-assisted instruction," University of Illinois, unpublished). This device worked but was slow, with variable access times. In 1981, Nan T. Watanabe researched the feasibility of computer-assisted music instruction using computer-controlled pre-recorded audio. She surveyed audio hardware that could interface with a computer system. Random-access audio devices interfaced to PLATO IV terminals were also available. There were issues with sound quality due to dropouts in the audio. Regardless, Watanabe deemed consistently fast access to audio clips critical to the study design and selected this device for the study. Watanabe's computer-based drill-and-practice program taught elementary music education students to identify musical instruments by sound. Students listened to randomly selected instrument sounds, identified the instrument they heard, and received immediate feedback. Watanabe found no significant difference in learning between the group that learned through computer-assisted drill programs and the group receiving traditional instruction in instrument identification. The study did, however, demonstrate that the use of random-access audio in computer-assisted music instruction was feasible. The Illinois Technology-based Music Project By 1988, with the spread of microcomputers and their peripherals, the University of Illinois School of Music PLATO Project was renamed The Illinois Technology-based Music Project. Researchers subsequently explored the use of emerging, commercially available technologies for music instruction until 1994. Influences and impacts Educators and students used the PLATO system for music instruction at other educational institutions, including Indiana University, Florida State University, and the University of Delaware. Many alumni of the University of Illinois School of Music PLATO Project gained early hands-on experience in computing and media technologies and moved into influential positions in both education and the private sector. The goal of this system was to provide tools for music educators to use in developing instructional materials, which might include music dictation drills, automatically graded keyboard performances, envelope and timbre ear-training, interactive examples or labs in musical acoustics, and composition and theory exercises with immediate feedback. One ear-training application, Ottaviano, became a required part of certain undergraduate music theory courses at Florida State University in the early 1980s.
Another peripheral was the Votrax speech synthesizer, and a "say" instruction (with a "saylang" instruction to choose the language) was added to the TUTOR programming language to support text-to-speech synthesis using the Votrax. Other efforts One of CDC's greatest commercial successes with PLATO was an online testing system developed for the National Association of Securities Dealers (now the Financial Industry Regulatory Authority), a private-sector regulator of the US securities markets. During the 1970s, Michael Stein, E. Clarke Porter and PLATO veteran Jim Ghesquiere, in cooperation with NASD executive Frank McAuliffe, developed the first "on-demand" proctored commercial testing service. The testing business grew slowly and was ultimately spun off from CDC as Drake Training and Technologies in 1990. Applying many of the PLATO concepts of the late 1970s, E. Clarke Porter led the Drake Training and Technologies testing business (today Thomson Prometric), in partnership with Novell, Inc., away from the mainframe model to a LAN-based client-server architecture, and changed the business model to deploy proctored testing at thousands of independent training organizations on a global scale. With the advent of a pervasive global network of testing centers and IT certification programs sponsored by, among others, Novell and Microsoft, the online testing business exploded. Pearson VUE was founded by PLATO/Prometric veterans E. Clarke Porter, Steve Nordberg and Kirk Lundeen in 1994 to further expand the global testing infrastructure. VUE improved on the business model by being one of the first commercial companies to rely on the Internet as a critical business service and by developing self-service test registration. The computer-based testing industry has continued to grow, adding professional licensure and educational testing as important business segments. A number of smaller testing-related companies also evolved from the PLATO system. One of the few survivors of that group is The Examiner Corporation. Dr. Stanley Trollip (formerly of the University of Illinois Aviation Research Lab) and Gary Brown (formerly of Control Data) developed the prototype of The Examiner System in 1984. In the early 1970s, James Schuyler developed a system at Northwestern University called HYPERTUTOR as part of Northwestern's MULTI-TUTOR computer-assisted instruction system. This ran on several CDC mainframes at various sites. Between 1973 and 1980, a group under the direction of Thomas T. Chen at the Medical Computing Laboratory of the School of Basic Medical Sciences at the University of Illinois at Urbana-Champaign ported PLATO's TUTOR programming language to the MODCOMP IV minicomputer. Douglas W. Jones, A.B. Baskin, Tom Szolyga, Vincent Wu and Lou Bloomfield did most of the implementation. This was the first port of TUTOR to a minicomputer and was largely operational by 1976. In 1980, Chen founded Global Information Systems Technology of Champaign, Illinois, to market it as the Simpler system. GIST eventually merged with the Government Group of Adayana Inc. Vincent Wu went on to develop the Atari PLATO cartridge. CDC eventually sold the "PLATO" trademark and some courseware marketing segment rights to the newly formed The Roach Organization (TRO) in 1989. In 2000, TRO changed its name to PLATO Learning and continued to sell and service PLATO courseware running on PCs. In late 2012, PLATO Learning brought its online learning solutions to market under the name Edmentum.
CDC continued development of the basic system under the name CYBIS (CYber-Based Instructional System) after selling the trademarks to Roach, in order to service its commercial and government customers. CDC later sold its CYBIS business to University Online, a descendant of IMSATT. University Online was later renamed VCampus. The University of Illinois also continued development of PLATO, eventually setting up a commercial online service called NovaNET in partnership with University Communications, Inc. CERL was closed in 1994, with the maintenance of the PLATO code passing to UCI. UCI was later renamed NovaNET Learning, which was bought by National Computer Systems (NCS). Shortly after that, NCS was bought by Pearson, and after several name changes the business now operates as Pearson Digital Learning. The Evergreen State College received several grants from CDC to implement computer language interpreters and associated programming instruction. Royalties received from the PLATO computer-aided instruction materials developed at Evergreen support technology grants and an annual lecture series on computer-related topics. Other versions In South Africa During the period when CDC was marketing PLATO, the system began to be used internationally. South Africa was one of the biggest users of PLATO in the early 1980s. Eskom, the South African electrical power company, had a large CDC mainframe at Megawatt Park in the northwest suburbs of Johannesburg. Mainly this computer was used for management and data processing tasks related to power generation and distribution, but it also ran the PLATO software. The largest PLATO installation in South Africa during the early 1980s was at the University of the Western Cape, which served the "native" population, and at one time had hundreds of PLATO IV terminals, all connected by leased data lines back to Johannesburg. There were several other installations at educational institutions in South Africa, among them Madadeni College in the Madadeni township just outside Newcastle. This was perhaps the most unusual PLATO installation anywhere. Madadeni had about 1,000 students, all of them members of the indigenous population and 99.5% of Zulu ancestry. The college was one of 10 teacher-preparation institutions in kwaZulu, most of them much smaller. In many ways Madadeni was very primitive. None of the classrooms had electricity and there was only one telephone for the whole college, which one had to crank for several minutes before an operator might come on the line. So an air-conditioned, carpeted room with 16 computer terminals was a stark contrast to the rest of the college. At times the only way a person could communicate with the outside world was through PLATO term-talk. For many of the Madadeni students, most of whom came from very rural areas, the PLATO terminal was their first encounter with any kind of electronic technology. Many of the first-year students had never seen a flush toilet before. There was initial skepticism that these technologically inexperienced students could effectively use PLATO, but those concerns were not borne out. Within an hour or less most students were using the system proficiently, mostly to learn math and science skills, although a lesson that taught keyboarding skills was one of the most popular. A few students even used on-line resources to learn TUTOR, the PLATO programming language, and a few wrote lessons on the system in the Zulu language.
PLATO was also used fairly extensively in South Africa for industrial training. Eskom successfully used PLM (PLATO learning management) and simulations to train power plant operators, South African Airways (SAA) used PLATO simulations for cabin attendant training, and a number of other large companies were exploring the use of PLATO as well. The South African subsidiary of CDC invested heavily in the development of an entire secondary school curriculum (SASSC) on PLATO, but as the curriculum was nearing the final stages of completion, CDC began to falter in South Africa, partly because of financial problems back home, partly because of growing opposition in the United States to doing business in South Africa, and partly because of the rapidly evolving microcomputer, a paradigm shift that CDC failed to recognize. Cyber1 In August 2004, a version of PLATO corresponding to the final release from CDC was resurrected online. This version of PLATO runs on a free and open-source software emulation of the original CDC hardware called Desktop Cyber. Within six months, by word of mouth alone, more than 500 former users had signed up to use the system. Many of the students who used PLATO in the 1970s and 1980s felt a special social bond with the community of users who came together using the powerful communications tools (talk programs, records systems and notesfiles) on PLATO. The PLATO software used on Cyber1 is the final release (99A) of CYBIS, by permission of VCampus. The underlying operating system is NOS 2.8.7, the final release of the NOS operating system, by permission of Syntegra (now British Telecom [BT]), which had acquired the remainder of CDC's mainframe business. Cyber1 runs this software on the Desktop Cyber emulator. Desktop Cyber accurately emulates in software a range of CDC Cyber mainframe models and many peripherals. Cyber1 offers free access to the system, which contains over 16,000 of the original lessons, in an attempt to preserve the original PLATO communities that grew up at CERL and on CDC systems in the 1980s. The load average of this resurrected system is about 10–15 users, sending personal and notesfile notes and playing inter-terminal games such as Avatar and Empire (a Star Trek-like game), each of which had accumulated more than 1 million contact hours on the original PLATO system at UIUC. See also :Category:PLATO (computer system) games The Mother of All Demos (1968) References Further reading External links Discusses his relationship with Control Data Corporation (CDC) during the development of PLATO, a computer-assisted instruction system. He describes the interest in PLATO of Harold Brooks, a CDC salesman, and his help in procuring a 1604 computer for Bitzer's use. Recalls the commercialization of PLATO by CDC and his disagreements with CDC over marketing strategy and the creation of courseware for PLATO. A program officer at the National Science Foundation (NSF) describes the impact of Don Bitzer and the PLATO system, grants related to the classroom use of computers, and NSF's Regional Computing Program. Archival collection containing internal reports and external reports and publications related to the development of PLATO and the operations of CERL. The CBE series documents CDC's objective of creating, marketing and distributing PLATO courseware internally within various CDC departments and divisions, and externally. Online preservation of the PLATO system.
Computer-based Education Research Laboratory PLATO Control Data Corporation software History of electronic engineering
PLATO (computer system)
[ "Engineering" ]
7,582
[ "Electronic engineering", "History of electronic engineering" ]
1,000,551
https://en.wikipedia.org/wiki/Ependymin
Ependymin is a glycoprotein found in the cerebrospinal fluid of many teleost fish. The human homolog, ependymin-related protein 1, is encoded by the EPDR1 gene. Ependymin is associated with the consolidation of long-term memory, possibly provides protection from strokes, and contributes to neuronal regeneration. The protein was originally detected at elevated levels in fluid within the central nervous system of teleost fishes. Along with long-term memory and neuronal regeneration, ependymin has been connected to specific changes in signaling within nerve cells leading to brain plasticity, as well as to behavioral performance in response to environmental stress in fishes. For example, interactions of this glycoprotein in the extracellular matrix influence cell adhesion and migration processes in the central nervous system of teleost fishes. Ependymin-related proteins are found in both vertebrates and invertebrates and have a variety of functional roles in non-neural sites. For example, an ependymin-related gene known as UCC1, which is upregulated in colon cancer, was found in human colorectal tumor cells. References External links Glycoproteins
Ependymin
[ "Chemistry" ]
262
[ "Glycoproteins", "Glycobiology" ]
1,000,568
https://en.wikipedia.org/wiki/Indiction
An indiction (, impost) was a periodic reassessment of taxation in the Roman Empire which took place every fifteen years. In Late Antiquity, this 15-year cycle began to be used to date documents, and it continued to be used for this purpose in Medieval Europe. The term can also refer to an individual year in the cycle; for example, "the fourth indiction" came to mean the fourth year of the current indiction. Since the cycles themselves were not numbered, other information is needed to identify the specific year. History Indictions originally referred to the periodic reassessment for an agricultural or land tax in the Roman Empire. There were three different cycles: a 15-year cycle used throughout the empire; a 14-year cycle used in Roman Egypt; and a five-year cycle called the lustrum, derived from the Roman Republican census. Changes to the tax system usually took place at the beginning of one of these cycles, and at the end of an indiction emperors often chose to forgive any arrears. The 15-year cycle can be traced in literary and epigraphic references to taxation reforms and the cancellation of arrears. Principate The Chronicon Paschale (c. 630 AD) claims that the 15-year cycle was instituted by Julius Caesar in 49 BC, which was also the first year of the Antiochene era, but there is no other evidence for this and, if the cycle were the same one known from later periods, the start date ought to be 48 BC. The earliest known event associated with the 15-year cycle is the establishment of a special board of three praetors to pursue arrears for the cycle ending in 42 AD, under Claudius. The beginning of the cycle in 58 AD coincides with a set of tax reforms and remissions instituted by Nero. Vespasian carried out a census of Italy at the start of the next indiction in 73 AD. The indiction starting in 103 AD may coincide with the tax remission by Trajan depicted on the Plutei of Trajan. At the start of the next indiction in 118 AD, Hadrian wrote off 900,000,000 sesterces of tax arrears, which he refers to in an inscription as the largest remission ever granted. He again remitted arrears at the start of the next indiction in 133 AD, as did Antoninus Pius at the start of the next indiction in 148 AD. Marcus Aurelius and Commodus carried out another remission at the start of the indiction beginning in 178 AD. The 14-year cycle used in Egypt derived from the fact that liability for the Egyptian poll tax began at the age of fourteen, necessitating a new survey of the population every fourteen years. Tax reforms and remissions recorded in papyrus sources indicate that it was also in existence in the first century AD. The first evidence is an edict by Marcus Mettius Rufus, the Prefect of Egypt in 89 AD, requiring property and loans to be registered. The next cycle in 103 AD coincides with reforms to record-keeping. The beginning of the cycle in 117 AD coincided with the 15-year cycle and was the occasion of Hadrian's large tax remission. This 14-year cycle is last attested in 257 AD. From 287 AD, at the latest, Roman Egypt used a system of 5-year cycles, then a non-cyclic series which reached number 26 by 318 AD. Late Antiquity and Middle Ages The 15-year cycle was introduced as a dating system on documents throughout the Roman empire by Constantine in 312 AD, and it was in use in Egypt by 314 AD. The Chronicon Paschale (c. 630 AD) assigned its first year to 312–313 AD, whereas a Coptic document of 933 AD assigned its first year to 297–298 AD, one cycle earlier.
Both of these were years of the Alexandrian calendar whose first day was Thoth 1, falling on August 29 in years preceding common Julian years and August 30 in years preceding leap years; hence each straddled two Julian years. The reason for beginning the year at that time was that the harvest would be in, and so it was an appropriate moment to calculate the taxes that should be paid. The indiction was first used to date documents unrelated to tax collection in the mid-fourth century. By the late fourth century it was being used to date documents throughout the Mediterranean. In the Eastern Roman Empire outside of Egypt, the first day of its year was September 23, the birthday of Augustus. During the last half of the fifth century, probably 462 AD, this shifted to September 1, where it remained throughout the rest of the Byzantine Empire. In 537 AD, Justinian decreed that all dates must include the indiction, which eventually caused the Byzantine year to begin on September 1. But in the western Mediterranean, its first day was September 24 according to Bede, or the following December 25 or January 1, called the papal indiction. An indictio Senensis beginning September 8 is sometimes mentioned. The 7,980-year Julian Period was formed by multiplying the 15-year indiction cycle, the 28-year solar cycle and the 19-year Metonic cycle. Terminology When the term "indiction" began to be used, it referred only to the full cycle, and individual years were referred to as Year 1 of the indiction, Year 2 of the indiction, etc. It gradually became common to apply the term to the years themselves, which thus became the first indiction, the second indiction, and so on. Calculation A useful chart providing all the equivalents can be found in Chaîne's book on chronology, and can easily be consulted online at the Internet Archive, from page 134 to page 172. The Roman indiction for a modern Anno Domini year Y (January 1 to December 31) may be calculated as (Y + 3) mod 15. For example, the indiction for the year 2017 is 10: (2017 + 3) mod 15 = 10. However, this formula produces an incorrect result for the last year of an indiction, where the modulo value is 0 instead of the expected 15, as can be seen when applying it to the year 2022: (2022 + 3) mod 15 = 0. One can simply read 0 as 15, but to obtain the correct result directly, the addition of 1 from the offset may be delayed until after the mod operation: (Y + 2) mod 15 + 1. That yields the expected answer for 2022: (2022 + 2) mod 15 + 1 = 15. References Works cited Bonnie Blackburn and Leofranc Holford-Strevens, The Oxford Companion to the Year (Oxford, 1999), pp. 769–771. "Calendars", in Astronomical Almanac for the Year 2017 (Washington: US Government Publishing Office, 2016), p. B4. Chronicon Paschale 284–628 AD, trans. Michael Whitby and Mary Whitby (Liverpool, 1989), p. 10. Richard Duncan-Jones, Money and Government in the Roman Empire (Cambridge University Press, 1994), pp. 59–63. Further reading Roger S. Bagnall and K. A. Worp, The Chronological Systems of Byzantine Egypt (Zutphen, 1978). Leo Depuydt, "AD 297 as the beginning of the first indiction cycle", The Bulletin of the American Society of Papyrologists, 24:137–139. Yiannis E. Meimaris, Chronological Systems in Roman-Byzantine Palestine and Arabia (Athens, 1992), pp. 32–34. S. P. Scott [Justinian I], "Forty-seventh new constitution" [Novella 47], The Civil Law [Corpvs jvris civilis] (1932; reprinted New York, 1973), 16 (in 7): 213–215. External links Dates and dating A chart of years and their indictions Units of time Byzantine calendar Julian calendar
Indiction
[ "Physics", "Mathematics" ]
1,643
[ "Physical quantities", "Time", "Units of time", "Quantity", "Spacetime", "Units of measurement" ]
1,000,609
https://en.wikipedia.org/wiki/Dallasite
Dallasite is a breccia made of subequant to rectangular or distinctly elongate, curvilinear shards that represent the spalled rims of pillow basalt (see hyaloclastite). This material is commonly partly altered to chlorite, epidote, quartz and carbonate; the local term 'dallasite' has been coined for it. The stone is named after Dallas Road in Victoria, British Columbia, and is considered the unofficial stone of British Columbia's capital city. Dallasite is found in Triassic volcanic rocks of Vancouver Island and is considered the third most important gem material in British Columbia. References Rocks Gemstones Geology of British Columbia Breccias
Dallasite
[ "Physics", "Materials_science" ]
146
[ "Breccias", "Fracture mechanics", "Materials", "Physical objects", "Gemstones", "Rocks", "Matter" ]
1,000,634
https://en.wikipedia.org/wiki/NesC
nesC (pronounced "NES-see") is a component-based, event-driven programming language used to build applications for the TinyOS platform. TinyOS is an operating environment designed to run on embedded devices used in distributed wireless sensor networks. nesC is built as an extension to the C programming language, with components "wired" together to run applications on TinyOS. The name nesC is an abbreviation of "network embedded systems C". Components and interfaces nesC programs are built out of components, which are assembled ("wired") to form whole programs. Components have internal concurrency in the form of tasks. Threads of control may pass into a component through its interfaces. These threads are rooted either in a task or in a hardware interrupt. Interfaces may be provided or used by components. Provided interfaces represent the functionality that the component offers to its user; used interfaces represent the functionality the component needs to perform its job. In nesC, interfaces are bidirectional: they specify a set of functions to be implemented by the interface's provider (commands) and a set to be implemented by the interface's user (events). This allows a single interface to represent a complex interaction between components (e.g., registration of interest in some event, followed by a callback when that event happens). This is critical because all lengthy commands in TinyOS (e.g., send packet) are non-blocking; their completion is signaled through an event (send done). Because the interface specifies both sides, a component cannot call the send command unless it provides an implementation of the sendDone event. Typically commands call downwards, i.e., from application components to those closer to the hardware, while events call upwards. Certain primitive events are bound to hardware interrupts. Components are statically linked to each other via their interfaces. This increases runtime efficiency, encourages robust design, and allows for better static analysis of programs. References External links Embedded systems Wireless sensor network C programming language family
NesC
[ "Technology", "Engineering" ]
410
[ "Computer engineering", "Embedded systems", "Wireless networking", "Wireless sensor network", "Computer systems", "Computer science" ]
1,000,657
https://en.wikipedia.org/wiki/Genkan
A genkan is a traditional Japanese entryway area for a house, apartment, or building, a combination of a porch and a doormat. It is usually located inside the building directly in front of the door. The primary function of the genkan is the removal of shoes before entering the main part of the house or building. A secondary function is as a place for brief visits without being invited across the step into the house proper. For example, where a pizza delivery driver in an English-speaking country would normally stand on the porch and conduct business through the open front door, in Japan a food delivery would traditionally have taken place across the genkan step. After removing shoes, one must avoid stepping on the tiled or concrete genkan floor in socks or with bare feet, to avoid bringing dirt into the house. Once inside, generally one will change into uwabaki: slippers or shoes intended for indoor wear. Genkan are also occasionally found in other buildings in Japan, especially in old-fashioned businesses. Design Genkan are normally recessed into the floor, to contain any dirt that is tracked in from the outside (as in a mud room). The height of the step varies from very low to shin-level or knee-level. Genkan in apartments are usually much smaller than those in houses, and may have no difference in elevation from the rest of the floor; the area may simply have a different type of flooring material than the rest of the floor to distinguish it as the genkan. Schools and sentō (public baths) have large genkan with compartments for each person's outdoor shoes. In private residences, such compartments may be absent, and shoes are usually turned to face the door so they can be slipped on easily when leaving. History The custom of removing one's shoes before entering the house is believed to go back over one thousand years to the pre-historical era of elevated-floor structures. It has continued to the present, even after the Westernization of the Japanese home, which began in the Meiji period (1868–1912). See also Engawa (traditional Japanese veranda) References External links What is this? A comprehensive explanation about the genkan in Japan. Japanese home Japanese words and phrases Rooms
Genkan
[ "Engineering" ]
421
[ "Rooms", "Architecture" ]
1,000,682
https://en.wikipedia.org/wiki/Cognitive%20radio
A cognitive radio (CR) is a radio that can be programmed and configured dynamically to use the best channels in its vicinity to avoid user interference and congestion. Such a radio automatically detects available channels, then accordingly changes its transmission or reception parameters to allow more concurrent wireless communications in a given band at one location. This process is a form of dynamic spectrum management. Description In response to the operator's commands, the cognitive engine is capable of configuring radio-system parameters. These parameters include "waveform, protocol, operating frequency, and networking". This functions as an autonomous unit in the communications environment, exchanging information about the environment with the networks it accesses and other cognitive radios (CRs). A CR "monitors its own performance continuously", in addition to "reading the radio's outputs"; it then uses this information to "determine the RF environment, channel conditions, link performance, etc.", and adjusts the "radio's settings to deliver the required quality of service subject to an appropriate combination of user requirements, operational limitations, and regulatory constraints". Some "smart radio" proposals combine wireless mesh network—dynamically changing the path messages take between two given nodes using cooperative diversity; cognitive radio—dynamically changing the frequency band used by messages between two consecutive nodes on the path; and software-defined radio—dynamically changing the protocol used by messages between two consecutive nodes. History The concept of cognitive radio was first proposed by Joseph Mitola III in a seminar at KTH Royal Institute of Technology in Stockholm in 1998 and published in an article by Mitola and Gerald Q. Maguire, Jr. in 1999. It was a novel approach in wireless communications, which Mitola later described as: The point in which wireless personal digital assistants (PDAs) and the related networks are sufficiently computationally intelligent about radio resources and related computer-to-computer communications to detect user communications needs as a function of use context, and to provide radio resources and wireless services most appropriate to those needs. Cognitive radio is considered a goal towards which a software-defined radio platform should evolve: a fully reconfigurable wireless transceiver which automatically adapts its communication parameters to network and user demands. Traditional regulatory structures have been built for an analog model and are not optimized for cognitive radio. Regulatory bodies in the world (including the Federal Communications Commission in the United States and Ofcom in the United Kingdom) as well as different independent measurement campaigns found that most radio frequency spectrum was inefficiently utilized. Cellular network bands are overloaded in most parts of the world, but other frequency bands (such as military, amateur radio and paging frequencies) are insufficiently utilized. Independent studies performed in some countries confirmed that observation, and concluded that spectrum utilization depends on time and place. Moreover, fixed spectrum allocation prevents rarely used frequencies (those assigned to specific services) from being used, even when any unlicensed users would not cause noticeable interference to the assigned service.
Regulatory bodies in the world have been considering whether to allow unlicensed users in licensed bands if they would not cause any interference to licensed users. These initiatives have focused cognitive-radio research on dynamic spectrum access. The first cognitive radio wireless regional area network standard, IEEE 802.22, was developed by the IEEE 802 LAN/MAN Standard Committee (LMSC) and published in 2011. This standard uses geolocation and spectrum sensing for spectral awareness. Geolocation combines with a database of licensed transmitters in the area to identify available channels for use by the cognitive radio network. Spectrum sensing observes the spectrum and identifies occupied channels. IEEE 802.22 was designed to utilize the unused frequencies or fragments of time in a location. This white space consists of unused television channels in the geolocated areas. However, cognitive radio cannot occupy the same unused space all the time. As spectrum availability changes, the network adapts to prevent interference with licensed transmissions. Terminology Depending on transmission and reception parameters, there are two main types of cognitive radio: Full Cognitive Radio (Mitola radio), in which every possible parameter observable by a wireless node (or network) is considered. Spectrum-Sensing Cognitive Radio, in which only the radio-frequency spectrum is considered. Other types depend on which parts of the spectrum are available for cognitive radio: Licensed-Band Cognitive Radio, capable of using bands assigned to licensed users (except for unlicensed bands, such as the U-NII band or the ISM band). The IEEE 802.22 working group is developing a standard for wireless regional area network (WRAN), which will operate on unused television channels, also known as TV white spaces. Unlicensed-Band Cognitive Radio, which can only utilize unlicensed parts of the radio frequency (RF) spectrum. One such system is described in the IEEE 802.15 Task Group 2 specifications, which focus on the coexistence of IEEE 802.11 and Bluetooth. Spectrum mobility: Process by which a cognitive-radio user changes its frequency of operation. Cognitive-radio networks aim to use the spectrum in a dynamic manner by allowing radio terminals to operate in the best available frequency band, maintaining seamless communication requirements during transitions to better spectrum. Spectrum sharing: Spectrum-sharing cognitive radio networks allow cognitive radio users to share the spectrum bands of the licensed-band users. However, the cognitive radio users have to restrict their transmit power so that the interference caused to the licensed-band users is kept below a certain threshold. Sensing-based spectrum sharing: In sensing-based spectrum sharing cognitive radio networks, cognitive radio users first listen to the spectrum allocated to the licensed users to detect the state of the licensed users. Based on the detection results, cognitive radio users decide their transmission strategies. If the licensed users are not using the bands, cognitive radio users will transmit over those bands. If the licensed users are using the bands, cognitive radio users share the spectrum bands with the licensed users by restricting their transmit power. Database-enabled spectrum sharing: In this modality of spectrum sharing, cognitive radio users are required to access a white space database prior to being allowed, or denied, access to the shared spectrum.
The white space database contains algorithms, mathematical models and local regulations to predict the spectrum utilization in a geographical area and to infer the risk of interference posed to incumbent services by a cognitive radio user accessing the shared spectrum. If the white space database judges that destructive interference to incumbents will happen, the cognitive radio user is denied access to the shared spectrum. Technology Although cognitive radio was initially thought of as a software-defined radio extension (full cognitive radio), most research work focuses on spectrum-sensing cognitive radio (particularly in the TV bands). The chief problem in spectrum-sensing cognitive radio is designing high-quality spectrum-sensing devices and algorithms for exchanging spectrum-sensing data between nodes. It has been shown that a simple energy detector cannot guarantee the accurate detection of signal presence, calling for more sophisticated spectrum sensing techniques and requiring information about spectrum sensing to be regularly exchanged between nodes. Increasing the number of cooperating sensing nodes decreases the probability of false detection. Filling free RF bands adaptively, using OFDMA, is a possible approach. Timo A. Weiss and Friedrich K. Jondral of the University of Karlsruhe proposed a spectrum pooling system, in which free bands (sensed by nodes) were immediately filled by OFDMA subbands. Applications of spectrum-sensing cognitive radio include emergency-network and WLAN higher throughput and transmission-distance extensions. The evolution of cognitive radio toward cognitive networks is underway; the concept of cognitive networks is to intelligently organize a network of cognitive radios. Functions The main functions of cognitive radios are: Power control: Power control is usually used for spectrum sharing CR systems to maximize the capacity of secondary users with interference power constraints to protect the primary users. Spectrum sensing: Detecting unused spectrum and sharing it, without harmful interference to other users; an important requirement of the cognitive-radio network is to sense empty spectrum. Detecting primary users is the most efficient way to detect empty spectrum. Spectrum-sensing techniques may be grouped into three categories: Transmitter detection: Cognitive radios must have the capability to determine if a signal from a primary transmitter is locally present in a certain spectrum. There are several proposed approaches to transmitter detection: Matched filter detection Energy detection: Energy detection is a spectrum sensing method that detects the presence or absence of a signal just by measuring the received signal power (a brief code sketch of this detector follows the article text below). This signal detection approach is quite easy and convenient for practical implementation. To implement an energy detector, however, noise variance information is required. It has been shown that imperfect knowledge of the noise power (noise uncertainty) may lead to the phenomenon of the SNR wall, an SNR level below which the energy detector cannot reliably detect any transmitted signal, even when the observation time is increased. It has also been shown that the SNR wall is not caused by the presence of noise uncertainty itself, but by an insufficient refinement of the noise power estimation while the observation time increases. Cyclostationary-feature detection: These types of spectrum sensing algorithms are motivated by the fact that most man-made communication signals, such as BPSK, QPSK, AM and OFDM,
exhibit cyclostationary behavior, whereas noise signals (typically white noise) do not. These detectors are robust against noise variance uncertainty. The aim of such detectors is to exploit the cyclostationary nature of man-made communication signals buried in noise. Their main decision parameter is the set of nonzero values obtained from the cyclic spectral density (CSD) of the primary signal. Cyclostationary detectors can be either single-cycle or multi-cycle. Wideband spectrum sensing: refers to spectrum sensing over large spectral bandwidth, typically hundreds of MHz or even several GHz. Since current ADC technology cannot afford the high sampling rate with high resolution, it requires novel techniques, e.g., compressive sensing and sub-Nyquist sampling. Cooperative detection: Refers to spectrum-sensing methods where information from multiple cognitive-radio users is incorporated for primary-user detection. Interference-based detection Null-space based CR: With the aid of multiple antennas, the CR detects the null-space of the primary user and then transmits within the null-space, such that its subsequent transmission causes less interference to the primary user. Spectrum management: Capturing the best available spectrum to meet user communication requirements, while not creating undue interference to other (primary) users. Cognitive radios should decide on the best spectrum band (of all bands available) to meet quality of service requirements; therefore, spectrum-management functions are required for cognitive radios. Spectrum-management functions are classified as: Spectrum analysis Spectrum decision The practical implementation of spectrum-management functions is a complex and multifaceted issue, since it must address a variety of technical and legal requirements. An example of the former is choosing an appropriate sensing threshold to detect other users, while the latter is exemplified by the need to meet the rules and regulations set out for radio spectrum access in international (ITU radio regulations) and national (telecommunications law) legislation. Artificial-intelligence-based algorithms for dynamic spectrum allocation and interference management, intended to reduce harmful interference to other services and networks, will be a key technology enabler for 6G. This will pave the way for more flexibility in the management and regulation of the radioelectric spectrum. Intelligent antenna (IA) An intelligent antenna (or smart antenna) is an antenna technology that uses spatial beam-formation and spatial coding to cancel interference; however, applications are emerging for extension to intelligent multiple or cooperative-antenna arrays for application to complex communication environments. Cognitive radio, by comparison, allows user terminals to sense whether a portion of the spectrum is being used in order to share spectrum with neighbor users. Both techniques can be combined, as illustrated in many contemporary transmission scenarios; cooperative MIMO (CO-MIMO) combines the two. Applications Cognitive radio (CR) can sense its environment and, without the intervention of the user, can adapt to the user's communications needs while conforming to FCC rules in the United States. In theory, the amount of spectrum is infinite; in practice, for propagation and other reasons, usable spectrum is finite and certain portions of it are especially desirable.
Assigned spectrum is far from being fully utilized, and efficient spectrum use is a growing concern; CR offers a solution to this problem. A CR can intelligently detect whether any portion of the spectrum is in use, and can temporarily use it without interfering with the transmissions of other users. According to Bruce Fette, "Some of the radio's other cognitive abilities include determining its location, sensing spectrum use by neighboring devices, changing frequency, adjusting output power or even altering transmission parameters and characteristics. All of these capabilities, and others yet to be realized, will provide wireless spectrum users with the ability to adapt to real-time spectrum conditions, offering regulators, licensees and the general public flexible, efficient and comprehensive use of the spectrum". Examples of applications include: The application of CR networks to emergency and public safety communications by utilizing white space The potential of CR networks for executing dynamic spectrum access (DSA) Application of CR networks to military action such as chemical, biological, radiological and nuclear attack detection and investigation, command and control, obtaining information of battle damage evaluations, battlefield surveillance, intelligence assistance, and targeting. CR networks have also proven helpful in establishing medical body area networks for ubiquitous patient monitoring, immediately notifying doctors of vital signs such as blood sugar level, blood pressure, blood oxygen and electrocardiogram (ECG) readings. This has the additional advantages of reducing the risk of infection and increasing patient mobility. Cognitive radio is also practical for wireless sensor networks, where packet relaying can take place using primary and secondary queues to forward packets without delays and with minimum power consumption. Simulation of CR networks At present, modeling and simulation is the only paradigm that allows the study of the complex behavior of cognitive radio networks in a given environment. Network simulators like OPNET, NetSim, MATLAB and ns2 can be used to simulate a cognitive radio network. CogNS is an open-source NS2-based simulation framework for cognitive radio networks. Areas of research using network simulators include: Spectrum sensing & incumbent detection Spectrum allocation Measurement and/or modeling of spectrum usage Efficiency of spectrum utilization Network Simulator 3 (ns-3) is also a viable option for simulating CR. ns-3 can also be used to emulate and experiment with CR networks with the aid of commodity hardware like Atheros WiFi devices. Future plans The success of the unlicensed band in accommodating a range of wireless devices and services has led the FCC to consider opening further bands for unlicensed use. In contrast, the licensed bands are underutilized due to static frequency allocation. Realizing that CR technology has the potential to exploit the inefficiently utilized licensed bands without causing interference to incumbent users, the FCC released a Notice of Proposed Rule Making which would allow unlicensed radios to operate in the TV-broadcast bands. The IEEE 802.22 working group, formed in November 2004, is tasked with defining the air-interface standard for wireless regional area networks (based on CR sensing) for the operation of unlicensed devices in the spectrum allocated to TV service.
To comply with later FCC regulations on unlicensed utilization of TV spectrum, IEEE 802.22 defines interfaces to the mandatory TV White Space Database in order to avoid interference to incumbent services. Although spectrum geolocation databases reduce receiver complexity and the probability of interference, for instance from sensing errors or hidden nodes, this comes at the cost of a lower spectrum-utilization efficiency, as the databases cannot capture a fine-grained quantification of spectrum utilization and are not updated in real time. Collaborative sensing and distributed spectrum management based on artificial intelligence could contribute in the future towards a better balance between spectrum-utilization efficiency and interference mitigation. See also Channel allocation schemes Channel-dependent scheduling Cognitive network LTE Advanced Network Simulator OFDMA Radio resource management (RRM) White spaces (radio) White spaces (database) Software-defined radio References External links Berkeley Wireless Research Center Cognitive Radio Workshop – first workshop on cognitive radio; its focus was mainly on research issues within the topic Center for Wireless Telecommunications (CWT), Virginia Tech Cognitive Radio Technologies Proceeding of Federal Communications Commission – Federal Communications Commission rules on cognitive radio IEEE DySPAN Conference Radio technology Wireless networking Radio resource management Radiofrequency receivers Telecommunications engineering
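The code sketch referred to above: a minimal Python illustration of energy detection, declaring a channel occupied when the average received power exceeds a threshold derived from the noise power. The threshold margin used here is an arbitrary illustrative choice; practical detectors derive thresholds from a target false-alarm probability, and, as noted above, noise uncertainty limits what any energy detector can achieve:

import numpy as np

def channel_occupied(x, noise_power):
    n = len(x)
    energy = np.mean(np.abs(x) ** 2)                 # test statistic
    threshold = noise_power * (1 + 4 / np.sqrt(n))   # noise floor plus margin
    return energy > threshold

rng = np.random.default_rng(0)
noise = rng.normal(0.0, 1.0, 4096)                   # unit noise power
signal = noise + 0.5 * np.sin(2 * np.pi * 0.1 * np.arange(4096))
print(channel_occupied(noise, 1.0))                  # noise only: almost surely False
print(channel_occupied(signal, 1.0))                 # signal present: True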
Cognitive radio
[ "Technology", "Engineering" ]
3,383
[ "Information and communications technology", "Telecommunications engineering", "Wireless networking", "Computer networks engineering", "Radio technology", "Electrical engineering" ]
1,000,690
https://en.wikipedia.org/wiki/Jacksonville%20University
Jacksonville University (JU) is a private university in Jacksonville, Florida, United States. Located in the city's Arlington district, the school was founded in 1934 as a two-year college and was known as Jacksonville Junior College until September 5, 1956, when it shifted focus to building four-year university degree programs and later graduated its first four-year degree candidates as Jacksonville University in June 1959. It is a member of the Independent Colleges and Universities of Florida and is accredited by the Southern Association of Colleges and Schools (SACS) and the Association to Advance Collegiate Schools of Business (AACSB). JU's student body currently represents more than 40 U.S. states and approximately 45 countries around the world. As a Division I institution, it fields 18 varsity athletics teams, known as the JU Dolphins, as well as intramural sports and clubs. Among the top majors declared by JU students are aviation management, biology, nursing, business, and marine science. History The school was founded in 1934 by William J. Porter. Originally known as William J. Porter University, it began as a private two-year college. Since a permanent site had not yet been acquired, classes were held on the third floor auditorium of the First Baptist Church Educational Building in downtown Jacksonville. Sixty students were enrolled in Porter University's first year of operation. The school changed its name to Jacksonville Junior College in 1935. It relocated three times over the next fifteen years, including a period in the Florida Theatre building, but the influx of GI bill students following the end of World War II made it necessary for the school to find a permanent location. In 1947 the administration purchased land in Jacksonville's Arlington neighborhood on which to establish the current campus. The first building was completed in 1950 and classes officially began. The same year the school received full accreditation as a two-year college from the Southern Association of Colleges and Schools (SACS). In 1958 Jacksonville Junior College merged with the Jacksonville College of Music, and the name was changed to Jacksonville University. In 1959 the first four-year class of 100 students graduated, and in 1961 JU received full accreditation as a four-year school from SACS. The 1960s saw the university grow substantially as enrollment increased, dormitories were built, two new colleges were established and the Swisher Gymnasium was constructed. The first student dormitories (Williams, McGehee, Brest, Merrill and Grether Halls) opened for the fall semester of 1965 on the south part of campus for a combined total of $2.4 million. The sixth dormitory, Botts Hall, opened in 1968. In 1970 the Jacksonville University Dolphins men's basketball team, under star center Artis Gilmore, went to the NCAA Division I Championship. However, the opening of the public University of North Florida in 1972 eroded JU's enrollment, while the removal of public funding hurt the school financially. In the 1990s Jacksonville University reconfigured itself as primarily a liberal arts college and embarked on a substantial fundraising campaign, which provided for the construction of new buildings and a revision of the campus master plan. In 1997 a new cafeteria was constructed, a Visual Arts Annex opened, and the on-campus Villages Apartments finished construction and opened for students on the north part of campus. 
Merrill and Grether Hall were demolished in 2007 to make way for Oak Hall, a modern 500-bed dormitory, and a new parking garage. George Hallam, in conjunction with Jacksonville University and its library staff, published an extensive history of the university titled Our Place in the Sun, which details the development and progress of the institution from its inception in 1934 through the spring of 1988. Other university publications which have chronicled JU history throughout the decades include the JU Navigator, the Riparian, and The Wave magazine. Academics Jacksonville University offers more than 100 majors, minors, and programs at the undergraduate level, as well as 23 master's and doctorate degree programs, leading to the M.S., M.A., M.A.T., and Master of Business Administration, Doctor of Occupational Therapy (OTD), and Doctor of Nursing Practice (DNP). The university is divided into five colleges: the College of Arts and Sciences, the Davis College of Business & Technology, the College of Fine Arts & Humanities, the College of Law, and the Brooks Rehabilitation College of Healthcare Sciences. Along with the five colleges, the university also houses three institutes: the Marine Science Research Institute, the Public Policy Institute, and the STEAM Institute. The College of Arts and Sciences offers a traditional liberal arts education and includes JU's School of Education, Wilma's Little People School, Science and Mathematics, Social Sciences, Humanities, and the Naval Reserve Officer Training Corps (NROTC). JU has the second-largest Naval Reserve Officer Training Corps program in the nation and the longest-running in Florida. Jacksonville is a military- and veteran-friendly town, and is home to three major military installations. It is also an approved Yellow Ribbon School and is home to the Jacksonville University Veterans and Military Resource Center (VMRC). University staff and administration includes many distinguished veterans from multiple branches of the U.S. military. The College of Fine Arts & Humanities, with its integrated Alexander Brest Museum and Gallery, is one of the longest-standing colleges in JU history. Undergraduate programs include dance, theatre, music, and visual arts. Graduate programs are available in choreography and visual arts. The College of Fine Arts' annual artist series is open to the public and offers more than 20 concerts, events and exhibitions per season. The Davis College of Business & Technology (DCOBT) received its AACSB accreditation in January 2010, and is the only private, AACSB-accredited business school in North Florida. DCOBT offers both MBA and EMBA degrees, along with undergraduate business degrees in accounting, aviation management, aviation management & flight operations, business administration, business analytics, business information systems, economics, finance, international business, management, marketing, and sport business. In both 2017 and 2018, the school's CFA Research Challenge team won the CFA Institute Research Challenge in Florida, beating out schools such as the University of Miami and the University of Florida, and went on to compete nationally. In 2018 they won the national competition and competed as finalists in the global CFA Institute Research Challenge in Kuala Lumpur, Malaysia.
Jacksonville University has also teamed up with the Florida Coastal School of Law to offer a joint MBA/law degree, and joined forces with Aerosim Flight Academy to provide professional flight training to students of its popular aviation major. The inaugural class of the Jacksonville University College of Law arrived in August 2022 with fourteen students. Twenty-six students joined the next year. Provisional accreditation was granted to the school by the American Bar Association during that organization's February 22–23, 2024 meeting. The JU Flight Team competes against other universities in the National Intercollegiate Flying Association's Regional and National Safety and Flight Evaluation Conferences (SAFECON), with its best team performance in 2007. The program is the third largest in the nation, behind the Spartan School in Tulsa, Oklahoma and the Embry-Riddle Aeronautical University in Daytona Beach. The team placed 10th in the nation in National Intercollegiate Flying Association competition. In 2008, the team was awarded the Loening Trophy, which is given to the best collegiate aviation program in the country each year. It is currently on display in the Smithsonian in Washington, DC. The Brooks Rehabilitation College of Healthcare Sciences (BRCHS) includes the School of Orthodontics and one of JU's many premier learning environments, the Simulation Training and Applied Research (STAR) Center, where students can participate in simulations of everything from childbirth to wound care. The university's BRCHS program offers Bachelor of Science in Kinesiology, Bachelor of Science in Nursing, and Master of Science in Nursing degrees, among many other degree programs and certifications. In 2014, Jacksonville University partnered with Brooks Rehabilitation Hospital to create the Brooks Rehabilitation Speech-Language Pathology program. BRCHS is affiliated with hundreds of local healthcare partners, including Nemours Children's Clinic, Baptist Health Systems, Shands, St. Vincent's Healthcare, Florida Blue, Duval County Public Schools, and Wolfson Children's Hospital. In 2012, the university established the Public Policy Institute (PPI), offering the only Master in Public Policy (MPP) degree program in the state of Florida. The institute also offers dual degree programs in conjunction with the Davis College of Business and hosts a variety of politically related events, including televised debates for local and regional elections, a radio program titled Policy Matters, and internship opportunities with local companies, local government and the Office of the Governor. On February 28, 2022, Jacksonville University announced that with the assistance of a Jacksonville municipal grant, it was starting a law school. The announcement was made by Jacksonville University President Tim Cost and Mayor Lenny Curry. The location will be in the VyStar Building downtown, where Jacksonville University already has a facility for working students. The law school opened in August 2022 with an initial enrollment of 14 students, the first new law school to open in the U.S. since 2014. In November 2022, the university announced that it had partnered with the Lake Erie College of Osteopathic Medicine to open a branch of the medical school at the Arlington campus by 2026. Rankings Jacksonville University was ranked #34 (tie) in the Regional Universities South category of U.S. News & World Report's Best Colleges ranking in 2022–23.
Athletics The JU athletic programs participate in NCAA Division I in the ASUN Conference, with the exception of the rowing program, which competes in the MAAC Conference (NCAA Division I). Terry Alexander, the most successful coach in Jacksonville's baseball history with 631 wins, entered his 31st year at Jacksonville and his 20th year as the program's head coach. He has led the program to nine NCAA regional appearances, won six conference championships (1995, 1999, 2003, 2006, 2007, 2009) and has completed five 40-win seasons. He has also coached 10 All-America honorees, 50 all-conference selections and helped 44 players get drafted by Major League Baseball organizations. The basketball program has produced professional basketball players such as Artis Gilmore, Otis Smith, Pembrook Burrows III and Rex Morgan. In 1970, Jacksonville University became the second-smallest school (behind St. Bonaventure) to make it to the NCAA Final Four and the national championship game. The team was led by head coach Joe Williams. After defeating the St. Bonaventure team in the tournament semi-finals, the Dolphins lost to the UCLA Bruins in the national championship. The following season, Jacksonville became the first college basketball team to average 100+ points per game, at a time when there was no three-point shot and no shot clock in college basketball. In 2009, Jacksonville won the regular season Atlantic Sun Conference title in men's basketball, but fell to East Tennessee State in the conference tournament title game. The Dolphins were invited to the National Invitation Tournament, the school's first post-season tournament since 1986, but lost in the first round to the University of Florida Gators. The football program began play in 1998, winning its first Pioneer League title in 2008. The Dolphins competed in the Football Championship Subdivision (FCS), where they won two division titles and two conference championships. The university discontinued its football program at the conclusion of the 2019 season. JU is noted for its rowing program after taking the overall FIRA Cup (Florida Intercollegiate Rowing Association) in 2007 and again in 2014. The women's rowing team won their first MAAC Championship in 2014 and earned an automatic bid to the NCAA Division I National Championship. Recently, JU has expanded its rowing program with the addition of the Negaard Rowing Center. The JU rowing program has had over 50 years of success around the world and has competed in locations such as the Nile River and England's Henley Royal Regatta. The school added men's and women's lacrosse programs during the 2009–2010 academic year. In 2016 Jacksonville University landed a pair of lacrosse icons to lead its men's lacrosse program, as Providence College assistant coach John Galloway was named head coach. One of the young legends in the sport, he was at Providence for four years after spending one year as a volunteer assistant at Duke. He brought along one of the game's most famous players, Casey Powell, as his offensive coordinator. Student life The school's Greek system, including, by some estimates, 15% of the school, includes Alpha Phi Alpha, Pi Kappa Alpha, Kappa Alpha Psi, Sigma Chi, and Sigma Nu fraternities; and Alpha Kappa Alpha, Delta Delta Delta, Alpha Epsilon Phi, Alpha Delta Pi, and Gamma Phi Beta sororities. 53% of all students live on campus in one of three residential halls and eight apartment-style housing facilities.
Most residence halls provide academic and social events as well as host programs to acclimate incoming students to the college experience. While Greeks do offer some social events, many residence halls also host their own events. Alcohol policies are strictly enforced. The student center (the Davis Student Commons Building) includes a fitness center overlooking the St. Johns River, a Chick-Fil-A, and a game room for all campus community members, while serving as a focal point for campus life. The facility opened in October 2006. Student life at Jacksonville University includes a diverse range of activities and organizations. There are multicultural, arts, political and social action, service and professional, religious, sports and recreation, academic and professional, and special interest groups. There are a variety of campus ministries on campus. In 2011, another campus ministry, the Campus to City Wesley Foundation, started meeting at JU. Campus media organizations include the student newspaper (The Navigator), campus radio station (JU108), literary and arts magazine (The Aquarian), student-run broadcasting station (Dolphin Channel), and yearbook (The Riparian), which stopped its publication in 2010. Jacksonville University's Student Government Association serves the needs of the student body as a whole by electing representatives from the university's student organizations, residential communities and colleges. The Florida Leader magazine ranked JU as having the third-best positive student life experience out of the 28 private colleges and universities in the state, citing its small campus size, peer and faculty relationships, and the close-knit campus community. Library The Carl S. Swisher Library spans over 52,000 square feet and three floors. It offers scenic views of the St. Johns River and is situated in the academic center of campus. The building was funded by a former JU Board of Trustees chair, Carl S. Swisher. The library was built in three phases, with the first phase completed in 1953, the second phase in 1961, and the third phase in 1971. In 1966, then-President of the University, Dr. Robert H. Spiro, established the "Friends of the Library." The library has completed several renovations over the years, the most recent being completed in early 2023. Today, the Carl S. Swisher Library holds more than 350,000 volumes of books, periodicals, music scores, and other items, as well as a substantial collection of digital resources. The library provides services in support of the university's objectives, including research assistance, instruction sessions, and interlibrary loan services. In partnership with the university's College of Law and Center for Gender + Sexuality, in March 2023 the Swisher Library became home to the American Bar Association's 19th Amendment exhibit. Notable alumni This list of Jacksonville University alumni includes graduates, non-graduate former students and current students of Jacksonville University.
List of University presidents See also Independent Colleges and Universities of Florida Notes References External links Private universities and colleges in Florida Universities and colleges in Jacksonville, Florida Universities and colleges established in 1934 Universities and colleges accredited by the Southern Association of Colleges and Schools 1934 establishments in Florida Arlington, Jacksonville Universities and colleges in the Jacksonville metropolitan area Jacksonville Modern architecture Glassmaking schools Aviation schools in the United States
Jacksonville University
[ "Materials_science", "Engineering" ]
3,292
[ "Glass engineering and science", "Glassmaking schools" ]
1,001,091
https://en.wikipedia.org/wiki/Ethanolamine
Ethanolamine (2-aminoethanol, monoethanolamine, ETA, or MEA) is a naturally occurring organic chemical compound with the formula HOCH2CH2NH2 or C2H7NO. The molecule is bifunctional, containing both a primary amine and a primary alcohol. Ethanolamine is a colorless, viscous liquid with an odor reminiscent of ammonia. Ethanolamine is commonly called monoethanolamine or MEA in order to be distinguished from diethanolamine (DEA) and triethanolamine (TEOA). The ethanolamines comprise a group of amino alcohols. A class of antihistamines is identified as ethanolamines, which includes carbinoxamine, clemastine, dimenhydrinate, chlorphenoxamine, diphenhydramine and doxylamine. History Ethanolamines, or in particular their salts, were discovered by Charles Adolphe Wurtz in 1860 by heating 2-chloroethanol with ammonia solution while studying derivatives of ethylene oxide, which he had discovered a year earlier. He was not able to separate the salts or isolate any free bases. In 1897 Ludwig Knorr developed the modern industrial route (see below) and separated the products, including MEA, by fractional distillation, studying their properties for the first time. None of the ethanolamines were of any commercial importance until industrial production of ethylene oxide took off after World War II. Occurrence in nature MEA molecules are a component in the formation of cellular membranes and are thus a molecular building block for life. Ethanolamine is the second-most-abundant head group for phospholipids, substances found in biological membranes (particularly those of prokaryotes); e.g., phosphatidylethanolamine. It is also used in messenger molecules such as palmitoylethanolamide, which has an effect on CB1 receptors. MEA was thought to exist only on Earth and on certain asteroids, but in 2021 evidence was found that these molecules exist in interstellar space. Ethanolamine is biosynthesized by decarboxylation of serine: HOCH2CH(NH2)COOH → HOCH2CH2NH2 + CO2 Derivatives of ethanolamine are widespread in nature; e.g., in lipids, as a precursor of a variety of N-acylethanolamines (NAEs), which modulate several animal and plant physiological processes such as seed germination, plant–pathogen interactions, chloroplast development and flowering, and as a precursor, combined with arachidonic acid (20:4, ω-6), of the endocannabinoid anandamide (AEA). MEA is biodegraded by ethanolamine ammonia-lyase, a B12-dependent enzyme. It is converted to acetaldehyde and ammonia via initial H-atom abstraction. Industrial production Monoethanolamine is produced by treating ethylene oxide with aqueous ammonia; the reaction also produces diethanolamine and triethanolamine. The ratio of the products can be controlled by the stoichiometry of the reactants. Applications MEA is used as feedstock in the production of detergents, emulsifiers, polishes, pharmaceuticals, corrosion inhibitors, and chemical intermediates. For example, reacting ethanolamine with ammonia gives ethylenediamine, a precursor of the commonly used chelating agent, EDTA. Gas stream scrubbing Monoethanolamine can scrub combusted-coal, combusted-methane and combusted-biogas flue emissions of carbon dioxide (CO2) very efficiently. MEA carbon dioxide scrubbing is also used to regenerate the air on submarines. Solutions of MEA in water are used as a gas stream scrubbing liquid in amine treaters. For example, aqueous MEA is used to remove carbon dioxide (CO2) and hydrogen sulfide (H2S) from various gas streams; e.g., flue gas and sour natural gas.
The MEA ionizes dissolved acidic compounds, making them polar and considerably more soluble. MEA scrubbing solutions can be recycled through a regeneration unit. When heated, MEA, being a rather weak base, will release the dissolved CO2 or H2S gas, resulting in a pure MEA solution. Other uses In pharmaceutical formulations, MEA is used primarily for buffering or preparation of emulsions. MEA can be used as a pH regulator in cosmetics. It is also used as an injectable sclerosant in the treatment of symptomatic hemorrhoids. 2–5 ml of ethanolamine oleate can be injected into the mucosa just above the hemorrhoids to cause ulceration and mucosal fixation, thus preventing hemorrhoids from descending out of the anal canal. It is also an ingredient in cleaning fluid for automobile windshields. pH-control amine Ethanolamine is often used for alkalinization of water in steam cycles of power plants, including nuclear power plants with pressurized water reactors. This alkalinization is performed to control corrosion of metal components. ETA (or sometimes a similar organic amine; e.g., morpholine) is selected because it does not accumulate in steam generators (boilers) and crevices due to its volatility, but rather distributes relatively uniformly throughout the entire steam cycle. In such applications, ETA is a key ingredient of so-called "all-volatile treatment" of water (AVT). Reactions Upon reaction with carbon dioxide, two equivalents of ethanolamine react through the intermediacy of carbonic acid to form a carbamate salt (written out below); when heated, the carbamate usually reverts to ethanolamine and carbon dioxide, but it can occasionally also cyclize to 2-oxazolidone, generating amine gas treatment wastes. References External links Process technology to produce ethanolamines by reaction of ammonia and ethylene oxide CDC - NIOSH Pocket Guide to Chemical Hazards Primary alcohols Commodity chemicals Ethanolamines
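For reference, the carbamate-forming reaction described above can be written out as follows (the standard amine-scrubbing equation, stated here for illustration rather than reproduced from this article's cited sources):

2 HOCH2CH2NH2 + CO2 → HOCH2CH2NH3+ + HOCH2CH2NHCOO−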
Ethanolamine
[ "Chemistry" ]
1,243
[ "Commodity chemicals", "Products of chemical industry" ]
1,001,110
https://en.wikipedia.org/wiki/Charpai
Charpai (also, Charpaya, Charpoy, Khat, Khatla, Manja, or Manji) is a traditional woven bed used across South Asia. The name charpai is a compound of char "four" and pay "footed". Regional variations are found in Afghanistan and Pakistan, North and Central India, Bihar and Myanmar. The charpai is a simple design that is easy to construct. It was traditionally made out of a wooden frame and natural-fiber ropes, but modern charpais may have metal frames and plastic tapes. The frame is four strong vertical posts connected by four horizontal members; the design makes the construction self-leveling. Lacing or rope can be made out of cotton, date leaves, and other natural fibers. The open and airy design of the charpai provides ventilation, making it a suitable choice for warm climates. Accordingly, it is mostly used in warm areas: in cold areas, a similar rope bed would be topped (with an insulating palliasse or tick, stuffed with straw, chaff, or down feathers), and possibly hung with curtains. There are many interpretations of the traditional design, and over the years craftspeople have innovated with the weave patterns and materials used. The weaving is done in many ways, e.g. a diagonal cross (bias) weave, with one end woven short and laced to the endpiece for tension adjustment (which helps control the sagging of the bed as it ages with use). In the 1300s, Ibn Battuta described the charpai as having "four conical legs with four crosspieces of wood on which braids of silk or cotton are woven. When one lies down on it, there is no need for anything to make it pliable, for it is pliable of itself." Recognized for their portability, adapted charpais were used as colonial campaign furniture. Construction Paaga: the legs of the charpai, which can be simple or mimic the legs of an animal Iss: the long beams of the frame, which are proportionately twice the length of the Upala Upala: the short beams of the frame, which are kept higher than the Iss Munj: the webbing of rope that creates the main surface that the person sleeps on Badaan: the extended area of the rope near the foot which keeps the tension History The exact provenance of the charpai is unknown. Various versions of it can be found in Egyptian and Mesopotamian cultures; however, the simple structured, handmade charpai is indigenous to the Indian Subcontinent. The oldest description of a charpoy in India dates back to the 2nd century BC. Bedsteads are depicted in scenes of the life of Buddha. This kind of furniture in the Buddhist time period is referred to as "Manca." There are four known types of Mancas from ancient times: Masaranka (a longer version), Bundikabaddh (a version with slots), Kulirapadaka (a version with curved legs) and Achacca Padaka (a version with removable legs). Gallery See also Niwar (cotton tape) used for stringing charpais Rope bed Klinē (Classical Greek) References Beds Culture of India Culture of Pakistan History of furniture Punjabi words and phrases Desi culture Indian furniture Portable furniture
Charpai
[ "Biology" ]
697
[ "Beds", "Behavior", "Sleep" ]
1,001,293
https://en.wikipedia.org/wiki/Irreducibility%20%28mathematics%29
In mathematics, the concept of irreducibility is used in several ways. A polynomial over a field may be an irreducible polynomial if it cannot be factored over that field. In abstract algebra, irreducible can be an abbreviation for irreducible element of an integral domain; for example an irreducible polynomial. In representation theory, an irreducible representation is a nontrivial representation with no nontrivial proper subrepresentations. Similarly, an irreducible module is another name for a simple module. Absolutely irreducible is a term applied to mean irreducible, even after any finite extension of the field of coefficients. It applies in various situations, for example to irreducibility of a linear representation, or of an algebraic variety; where it means just the same as irreducible over an algebraic closure. In commutative algebra, a commutative ring R is irreducible if its prime spectrum, that is, the topological space Spec R, is an irreducible topological space. A matrix is irreducible if it is not similar via a permutation to a block upper triangular matrix (that has more than one block of positive size). (Replacing the non-zero entries in the matrix by one, and viewing the matrix as the adjacency matrix of a directed graph, the matrix is irreducible if and only if the directed graph is strongly connected; see the code sketch below.) Also, a Markov chain is irreducible if there is a non-zero probability of transitioning (even if in more than one step) from any state to any other state. In the theory of manifolds, an n-manifold is irreducible if any embedded (n − 1)-sphere bounds an embedded n-ball. Implicit in this definition is the use of a suitable category, such as the category of differentiable manifolds or the category of piecewise-linear manifolds. The notions of irreducibility in algebra and manifold theory are related. An n-manifold is called prime if it cannot be written as a connected sum of two n-manifolds (neither of which is an n-sphere). An irreducible manifold is thus prime, although the converse does not hold. From an algebraist's perspective, prime manifolds should be called "irreducible"; however, the topologist (in particular the 3-manifold topologist) finds the definition above more useful. The only compact, connected 3-manifolds that are prime but not irreducible are the trivial 2-sphere bundle over S1 and the twisted 2-sphere bundle over S1. See, for example, Prime decomposition (3-manifold). A topological space is irreducible if it is not the union of two proper closed subsets. This notion is used in algebraic geometry, where spaces are equipped with the Zariski topology; it is not of much significance for Hausdorff spaces. See also irreducible component, algebraic variety. In universal algebra, irreducible can refer to the inability to represent an algebraic structure as a composition of simpler structures using a product construction; for example subdirectly irreducible. A 3-manifold is P²-irreducible if it is irreducible and contains no 2-sided embedded real projective plane. An irreducible fraction (or fraction in lowest terms) is a vulgar fraction in which the numerator and denominator are smaller than those in any other equivalent fraction. Mathematical terminology
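The code sketch referred to above: a Python check of matrix irreducibility via strong connectivity of the associated directed graph (an illustration suitable for small matrices; large problems would use Tarjan's or Kosaraju's algorithm instead of repeated BFS):

from collections import deque

def is_irreducible(A):
    # A is a square matrix; an edge i -> j exists whenever A[i][j] != 0.
    # The matrix is irreducible iff every vertex reaches every other vertex.
    n = len(A)
    def reaches_all(s):
        seen, q = {s}, deque([s])
        while q:
            u = q.popleft()
            for v in range(n):
                if A[u][v] != 0 and v not in seen:
                    seen.add(v)
                    q.append(v)
        return len(seen) == n
    return all(reaches_all(s) for s in range(n))

print(is_irreducible([[0, 1], [1, 0]]))  # True: the 2-cycle is strongly connected
print(is_irreducible([[1, 1], [0, 1]]))  # False: already block upper triangular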
Irreducibility (mathematics)
[ "Mathematics" ]
727
[ "nan" ]
1,001,329
https://en.wikipedia.org/wiki/Class%20function
In mathematics, especially in the fields of group theory and representation theory of groups, a class function is a function on a group G that is constant on the conjugacy classes of G. In other words, it is invariant under the conjugation map on G. Such functions play a basic role in representation theory. Characters The character of a linear representation of G over a field K is always a class function with values in K. The class functions form the center of the group ring K[G]. Here a class function f is identified with the element \sum_{g \in G} f(g)\, g. Inner products The set of class functions of a group G with values in a field K form a K-vector space. If G is finite and the characteristic of the field does not divide the order of G, then there is an inner product defined on this space by \langle f, g \rangle = \frac{1}{|G|} \sum_{t \in G} f(t) \overline{g(t)}, where |G| denotes the order of G and the bar denotes conjugation in the field K. The set of irreducible characters of G forms an orthogonal basis (a small worked example follows below), and if K is a splitting field for G, for instance if K is algebraically closed, then the irreducible characters form an orthonormal basis. In the case of a compact group and K = C the field of complex numbers, the notion of Haar measure allows one to replace the finite sum above with an integral: \langle f, g \rangle = \int_G f(t) \overline{g(t)}\, dt. When K is the real numbers or the complex numbers, the inner product is a non-degenerate Hermitian bilinear form. See also Brauer's theorem on induced characters References Jean-Pierre Serre, Linear Representations of Finite Groups, Graduate Texts in Mathematics 42, Springer-Verlag, Berlin, 1977. Group theory
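The worked example referred to above (an illustration added here, not part of the article's sources): in the symmetric group S_3, the conjugacy classes have sizes 1 (identity), 3 (transpositions) and 2 (3-cycles); the trivial character \chi_{\mathrm{triv}} takes the value 1 everywhere, and the sign character \chi_{\mathrm{sgn}} takes the values 1, -1, 1 on those classes. Then

\langle \chi_{\mathrm{triv}}, \chi_{\mathrm{sgn}} \rangle = \frac{1}{6}\bigl(1 \cdot 1 + 3 \cdot (1 \cdot (-1)) + 2 \cdot (1 \cdot 1)\bigr) = \frac{1 - 3 + 2}{6} = 0,

confirming that these two distinct irreducible characters are orthogonal.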
Class function
[ "Mathematics" ]
340
[ "Group theory", "Fields of abstract algebra" ]
1,001,343
https://en.wikipedia.org/wiki/Electrodermal%20activity
Electrodermal activity (EDA) is the property of the human body that causes continuous variation in the electrical characteristics of the skin. Historically, EDA has also been known as skin conductance, galvanic skin response (GSR), electrodermal response (EDR), psychogalvanic reflex (PGR), skin conductance response (SCR), sympathetic skin response (SSR) and skin conductance level (SCL). The long history of research into the active and passive electrical properties of the skin by a variety of disciplines has resulted in an excess of names, now standardized to electrodermal activity (EDA). The traditional theory of EDA holds that skin resistance varies with the state of sweat glands in the skin. Sweating is controlled by the sympathetic nervous system, and skin conductance is an indication of psychological or physiological arousal. If the sympathetic branch of the autonomic nervous system is highly aroused, then sweat gland activity also increases, which in turn increases skin conductance. In this way, skin conductance can be a measure of emotional and sympathetic responses. More recent research and additional phenomena (resistance, potential, impedance, electrochemical skin conductance, and admittance, sometimes responsive and sometimes apparently spontaneous) suggest that EDA is more complex than it seems, and research continues into the source and significance of EDA. History In 1849, Dubois-Reymond in Germany first observed that human skin was electrically active. He immersed the limbs of his subjects in a zinc sulfate solution and found that electric current flowed between a limb with muscles contracted and one that was relaxed. He therefore attributed his EDA observations to muscular phenomena. Thirty years later, in 1878 in Switzerland, Hermann and Luchsinger demonstrated a connection between EDA and sweat glands. Hermann later demonstrated that the electrical effect was strongest in the palms of the hands, suggesting that sweat was an important factor. Vigouroux (France, 1879), working with emotionally distressed patients, was the first researcher to relate EDA to psychological activity. In 1888, the French neurologist Féré demonstrated that skin resistance activity could be changed by emotional stimulation and that activity could be inhibited by drugs. In 1889 in Russia, Ivane Tarkhnishvili observed variations in skin electrical potentials in the absence of any external stimuli, and he developed a meter to observe the variations as they happened in real time. The scientific study of EDA began in the early 1900s. One of the first references to the use of EDA instruments in psychoanalysis is the book by C. G. Jung entitled Studies in Word Association, published in 1906. Jung and his colleagues used the meter to evaluate the emotional sensitivities of patients to lists of words during word association. Jung was so impressed with EDA monitoring that he allegedly cried, "Aha, a looking glass into the unconscious!" Jung described his use of the device in counseling in his book, Studies in Word Association, and such use has continued with various practitioners. The controversial Austrian psychoanalyst Wilhelm Reich also studied EDA in his experiments at the Psychological Institute at the University of Oslo, in 1935 and 1936, to confirm the existence of a bio-electrical charge behind his concept of vegetative, pleasurable "streamings".
By 1972, more than 1500 articles on electrodermal activity had been published in professional publications, and today EDA is regarded as the most popular method for investigating human psychophysiological phenomena. As of 2013, EDA monitoring was still on the increase in clinical applications. Description Skin conductance is not under conscious control. Instead, it is modulated autonomously by sympathetic activity, which drives human behavior and cognitive and emotional states on a subconscious level. Skin conductance, therefore, offers direct insights into autonomous emotional regulation. Human extremities, including the fingers, palms, and soles of the feet, display different bio-electrical phenomena. These can be detected with an EDA meter, a device that displays the change in electrical conductance between two points over time. The two current paths are along the surface of the skin and through the body. Active measuring involves sending a small amount of current through the body. Some studies have examined the skin's response to alternating current, including in recently deceased bodies. Physiological basis There is a relationship between emotional arousal and sympathetic activity, although the electrical change alone does not identify which specific emotion is being elicited. These autonomic sympathetic changes alter sweat and blood flow, which in turn affects GSR and GSP (galvanic skin potential). The number of sweat glands varies across the human body, being highest in the hand and foot regions (200–600 sweat glands per cm2). The response of the skin and muscle tissue to external and internal stimuli can cause the conductance to vary by several microsiemens. A correctly calibrated device can record and display the subtle changes. The combined changes between electrodermal resistance and electrodermal potential make up electrodermal activity. Galvanic skin resistance (GSR) is an older term that refers to the recorded electrical resistance between two electrodes when a very weak current is steadily passed between them. The electrodes are normally placed about an inch apart, and the resistance recorded varies according to the emotional state of the subject. Galvanic skin potential (GSP) refers to the voltage measured between two electrodes without any externally applied current. It is measured by connecting the electrodes to a voltage amplifier. This voltage also varies with the emotional state of the subject. Examples A painful stimulus such as a pinprick elicits a sympathetic response by the sweat glands, increasing secretion. Although this increase is generally very small, sweat contains water and electrolytes, which increase electrical conductivity, thus lowering the electrical resistance of the skin. These changes in turn affect GSR. Another common manifestation is the vasodilation (dilation) of blood vessels in the face, referred to as blushing, as well as increased sweating that occurs when one is embarrassed. EDA is highly responsive to emotions in some people. Fear, anger, startled response, orienting response, and sexual feelings are among the reactions that may be reflected in EDA. These responses are utilized as part of the polygraph or lie detector test. EDA in regular subjects differs according to feelings of being treated fairly or unfairly, but psychopaths have been shown to manifest no such differences. This indicates that the EDA record of a polygraph may be deceptive in a criminal investigation.
Different units of EDA EDA reflects both slow varying tonic sympathetic activity and fast varying phasic sympathetic activity. Tonic activity can be expressed in units of electrodermal level (EDL or SCL), while phasic activity is expressed in units of electrodermal responses (EDR or SCR). Phasic changes (EDR) are short-lasting changes in EDA that appear as a response to a distinct stimulus. EDRs can also appear spontaneously without an observable external stimulus. These types of EDRs are referred to as "nonspecific EDR" (NS.EDR). The phasic EDR is useful when investigating multifaceted attentional processes. Tonic changes (EDL) are based on the phasic parameters. The spontaneous fluctuations of nonspecific EDR can be used to evaluate tonic EDA; more specifically, the frequency of "nonspecific EDR" during a specific time period, e.g., 30–60 seconds, can be used as an index of tonic EDA. Tonic EDA is considered useful in investigations of general arousal and alertness. Uses EDA is a common measure of autonomic nervous system activity, with a long history of being used in psychological research. Hugo D. Critchley, Chair of Psychiatry at the Brighton and Sussex Medical School, states, "EDA is a sensitive psychophysiological index of changes in autonomic sympathetic arousal that are integrated with emotional and cognitive states." Many biofeedback therapy devices utilize EDA as an indicator of the user's stress response with the goal of helping the user to control anxiety. EDA is used to assess an individual's neurological status without using traditional, but uncomfortable and expensive, EEG-based monitoring. It has also been used as a proxy of psychological stress. EDA has also been studied as a method of pain assessment in prematurely born infants. Often, EDA monitoring is combined with the recording of heart rate, respiratory rate, and blood pressure, because they are all autonomically dependent variables. EDA measurement is one component of modern polygraph devices, which are often used as lie detectors. The E-meter, used by the Church of Scientology as part of its practice of "auditing" and "security checking", is a custom EDA measurement device. Possible problems External factors such as temperature and humidity affect EDA measurements, which can lead to inconsistent results. Internal factors such as medications and hydration can also change EDA measurements, demonstrating inconsistency with the same stimulus level. Also, the classic understanding has treated EDA as if it represented one homogeneous change in arousal across the body, but in fact different locations of its measurement can lead to different responses; for example, the responses on the left and right wrists are driven by different regions of the brain, providing multiple sources of arousal; thus, the EDA measured in different places on the body varies not only with different sweat gland density but also with different underlying sources of arousal. Lastly, electrodermal responses are delayed 1–3 seconds. These show the complexity of determining the relationship between EDA and sympathetic activity. The skill of the operator may be a significant factor in the successful application of the tool. See also Affective computing Biosignal Electroacupuncture E-meter Notes References Carlson, Neil (2013). Physiology of Behavior. New Jersey: Pearson Education, Inc. Figner, B., & Murphy, R. O. (2010). Using skin conductance in judgment and decision making research.
A Handbook of Process Tracing Methods for Decision Research: A Critical Review and User's Guide, 163–84. Pflanzer, Richard. "Galvanic Skin Response and the Polygraph". BIOPAC Systems, Inc. Retrieved 5 May 2013. Measuring instruments Electronic test equipment Skin physiology Forensic techniques Electrophysiology
Electrodermal activity
[ "Physics", "Technology", "Engineering" ]
2,123
[ "Physical quantities", "Electronic test equipment", "Measuring instruments", "Impedance measurements", "Electrical resistance and conductance" ]
1,001,361
https://en.wikipedia.org/wiki/Semisimple%20module
In mathematics, especially in the area of abstract algebra known as module theory, a semisimple module or completely reducible module is a type of module that can be understood easily from its parts. A ring that is a semisimple module over itself is known as an Artinian semisimple ring. Some important rings, such as group rings of finite groups over fields of characteristic zero, are semisimple rings. An Artinian ring is initially understood via its largest semisimple quotient. The structure of Artinian semisimple rings is well understood by the Artin–Wedderburn theorem, which exhibits these rings as finite direct products of matrix rings. For a group-theory analog of the same notion, see Semisimple representation. Definition A module over a (not necessarily commutative) ring is said to be semisimple (or completely reducible) if it is the direct sum of simple (irreducible) submodules. For a module M, the following are equivalent: M is semisimple; i.e., a direct sum of irreducible modules. M is the sum of its irreducible submodules. Every submodule of M is a direct summand: for every submodule N of M, there is a complement P such that M = N ⊕ P. For a proof of the equivalences, see the references. The most basic example of a semisimple module is a module over a field, i.e., a vector space. On the other hand, the ring of integers is not a semisimple module over itself, since the submodule 2Z is not a direct summand. Semisimple is stronger than completely decomposable, which is a direct sum of indecomposable submodules. Let A be an algebra over a field K. Then a left module M over A is said to be absolutely semisimple if, for any field extension F of K, F ⊗_K M is a semisimple module over F ⊗_K A. Properties If M is semisimple and N is a submodule, then N and M/N are also semisimple. An arbitrary direct sum of semisimple modules is semisimple. A module M is finitely generated and semisimple if and only if it is Artinian and its radical is zero. Endomorphism rings A semisimple module M over a ring R can also be thought of as a ring homomorphism from R into the ring of abelian group endomorphisms of M. The image of this homomorphism is a semiprimitive ring, and every semiprimitive ring is isomorphic to such an image. The endomorphism ring of a semisimple module is not only semiprimitive, but also von Neumann regular. Semisimple rings A ring is said to be (left-)semisimple if it is semisimple as a left module over itself. Surprisingly, a left-semisimple ring is also right-semisimple and vice versa. The left/right distinction is therefore unnecessary, and one can speak of semisimple rings without ambiguity. A semisimple ring may be characterized in terms of homological algebra: namely, a ring R is semisimple if and only if any short exact sequence of left (or right) R-modules splits. That is, for a short exact sequence 0 → A → B → C → 0 there exists a map s : C → B such that the composition of s with the surjection B → C is the identity on C. The map s is known as a section. From this it follows that B ≅ A ⊕ C or, in more exact terms, Ext^1_R(C, A) = 0 for all modules A and C. In particular, any module over a semisimple ring is injective and projective. Since "projective" implies "flat", a semisimple ring is a von Neumann regular ring. Semisimple rings are of particular interest to algebraists. For example, if the base ring R is semisimple, then all R-modules would automatically be semisimple. Furthermore, every simple (left) R-module is isomorphic to a minimal left ideal of R, that is, R is a left Kasch ring. Semisimple rings are both Artinian and Noetherian.
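For commutative rings, semisimplicity amounts to being a finite direct product of fields, as noted in the examples below. A minimal sketch (plain Python) checking this concretely for Z/6Z, which the Chinese remainder theorem splits as Z/2Z × Z/3Z:

```python
# Minimal sketch: Z/6Z decomposes as Z/2Z x Z/3Z (Chinese remainder theorem),
# a semisimple commutative ring written as a product of fields.

def to_product(n):
    # Ring map Z/6 -> Z/2 x Z/3
    return (n % 2, n % 3)

def from_product(a, b):
    # Inverse via CRT: 3 is 1 mod 2 and 0 mod 3; 4 is 0 mod 2 and 1 mod 3
    return (3 * a + 4 * b) % 6

# The map is a bijection ...
assert all(from_product(*to_product(n)) == n for n in range(6))

# ... and a ring homomorphism: addition and multiplication are componentwise.
for x in range(6):
    for y in range(6):
        a1, b1 = to_product(x)
        a2, b2 = to_product(y)
        assert to_product((x + y) % 6) == ((a1 + a2) % 2, (b1 + b2) % 3)
        assert to_product((x * y) % 6) == ((a1 * a2) % 2, (b1 * b2) % 3)
print("Z/6 is isomorphic to Z/2 x Z/3")
```

In module terms, the two factors correspond to the simple submodules 3Z/6Z and 2Z/6Z, whose direct sum is all of Z/6Z.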
From the above properties, a ring is semisimple if and only if it is Artinian and its Jacobson radical is zero. If an Artinian semisimple ring contains a field as a central subring, it is called a semisimple algebra. Examples For a commutative ring, the four following properties are equivalent: being a semisimple ring; being Artinian and reduced; being a reduced Noetherian ring of Krull dimension 0; and being isomorphic to a finite direct product of fields. If K is a field and G is a finite group of order n, then the group ring K[G] is semisimple if and only if the characteristic of K does not divide n. This is Maschke's theorem, an important result in group representation theory. By the Wedderburn–Artin theorem, a unital ring R is semisimple if and only if it is isomorphic to a finite direct product M_{n_1}(D_1) × ⋯ × M_{n_k}(D_k), where each D_i is a division ring and each n_i is a positive integer, and M_n(D) denotes the ring of n-by-n matrices with entries in D. An example of a semisimple non-unital ring is M_∞(K), the row-finite, column-finite, infinite matrices over a field K. Simple rings One should beware that despite the terminology, not all simple rings are semisimple. The problem is that the ring may be "too big", that is, not (left/right) Artinian. In fact, if R is a simple ring with a minimal left/right ideal, then R is semisimple. Classic examples of simple, but not semisimple, rings are the Weyl algebras, such as the Q-algebra Q⟨x, y⟩/(yx − xy − 1), which is a simple noncommutative domain. These and many other nice examples are discussed in more detail in several noncommutative ring theory texts, including chapter 3 of Lam's text, in which they are described as nonartinian simple rings. The module theory for the Weyl algebras is well studied and differs significantly from that of semisimple rings. Jacobson semisimple A ring is called Jacobson semisimple (or J-semisimple or semiprimitive) if the intersection of the maximal left ideals is zero, that is, if the Jacobson radical is zero. Every ring that is semisimple as a module over itself has zero Jacobson radical, but not every ring with zero Jacobson radical is semisimple as a module over itself. A J-semisimple ring is semisimple if and only if it is an Artinian ring, so semisimple rings are often called Artinian semisimple rings to avoid confusion. For example, the ring of integers, Z, is J-semisimple, but not Artinian semisimple. See also Socle Semisimple algebra Citations References Module theory Ring theory
Semisimple module
[ "Mathematics" ]
1,446
[ "Fields of abstract algebra", "Ring theory", "Module theory" ]
1,001,430
https://en.wikipedia.org/wiki/Bisphenol%20A
Bisphenol A (BPA) is a chemical compound primarily used in the manufacturing of various plastics. It is a colourless solid which is soluble in most common organic solvents, but has very poor solubility in water. BPA is produced on an industrial scale by the condensation reaction of phenol and acetone. Global production in 2022 was estimated to be in the region of 10 million tonnes. BPA's largest single application is as a co-monomer in the production of polycarbonates, which accounts for 65–70% of all BPA production. The manufacturing of epoxy resins and vinyl ester resins account for 25–30% of BPA use. The remaining 5% is used as a major component of several high-performance plastics, and as a minor additive in PVC, polyurethane, thermal paper, and several other materials. It is not a plasticizer, although it is often wrongly labelled as such. The health effects of BPA have been the subject of prolonged public and scientific debate. BPA is a xenoestrogen, exhibiting hormone-like properties that mimic the effects of estrogen in the body. Although the effect is very weak, the pervasiveness of BPA-containing materials raises concerns, as exposure is effectively lifelong. Many BPA-containing materials are non-obvious but commonly encountered, and include coatings for the inside of food cans, clothing designs, shop receipts, and dental fillings. BPA has been investigated by public health agencies in many countries, as well as by the World Health Organization. While normal exposure is below the level currently associated with risk, several jurisdictions have taken steps to reduce exposure on a precautionary basis, in particular by banning BPA from baby bottles. There is some evidence that BPA exposure in infants has decreased as a result of this. BPA-free plastics have also been introduced, which are manufactured using alternative bisphenols such as bisphenol S and bisphenol F, but there is also controversy around whether these are actually safer. History Bisphenol A was first reported in 1891 by the Russian chemist Aleksandr Dianin. In 1934, workers at I.G. Farbenindustrie reported the coupling of BPA and epichlorohydrin. Over the following decade, coatings and resins derived from similar materials were described by workers at the companies of DeTrey Freres in Switzerland and DeVoe and Raynolds in the US. This early work underpinned the development of epoxy resins, which in turn motivated production of BPA. The utilization of BPA further expanded with discoveries at Bayer and General Electric on polycarbonate plastics. These plastics first appeared in 1958, being produced by Mobay, General Electric, and Bayer. The British biochemist Edward Charles Dodds tested BPA as an artificial estrogen in the early 1930s. Subsequent work found that it bound to estrogen receptors tens of thousands of times more weakly than estradiol, the major natural female sex hormone. Dodds eventually developed a structurally similar compound, diethylstilbestrol (DES), which was used as a synthetic estrogen drug in women and animals until it was banned due to its risk of causing cancer; the ban on use of DES in humans came in 1971 and in animals, in 1979. BPA was never used as a drug. Production The synthesis of BPA still follows Dianin's general method, with the fundamentals changing little in 130 years. 
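The efficiency of this synthesis can be made concrete with simple stoichiometric arithmetic. Below is a minimal sketch (Python; standard molar masses, purely illustrative, not process data) computing the atom economy of the condensation 2 C6H5OH + (CH3)2CO → C15H16O2 + H2O described in the next paragraph:

```python
# Minimal sketch: atom economy of BPA synthesis from phenol and acetone.
#   2 C6H5OH + (CH3)2CO -> C15H16O2 (BPA) + H2O
# Molar masses from standard atomic weights; illustrative only.

M = {"C": 12.011, "H": 1.008, "O": 15.999}

def molar_mass(counts):
    # counts: element -> number of atoms, e.g. {"C": 6, "H": 6, "O": 1}
    return sum(M[el] * n for el, n in counts.items())

phenol  = molar_mass({"C": 6,  "H": 6,  "O": 1})   # ~94.11 g/mol
acetone = molar_mass({"C": 3,  "H": 6,  "O": 1})   # ~58.08 g/mol
bpa     = molar_mass({"C": 15, "H": 16, "O": 2})   # ~228.29 g/mol

atom_economy = bpa / (2 * phenol + acetone)
print(f"Atom economy: {atom_economy:.1%}")  # roughly 93%
```

Roughly 93% of the reactant mass ends up in the product, with water accounting for the rest; this is the basis of the green-chemistry description below.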
The condensation of acetone (hence the suffix 'A' in the name) with two equivalents of phenol is catalyzed by a strong acid, such as concentrated hydrochloric acid, sulfuric acid, or a solid acid resin such as the sulfonic acid form of polystyrene sulfonate. An excess of phenol is used to ensure full condensation and to limit the formation of byproducts, such as Dianin's compound. BPA is fairly cheap to produce, as the synthesis benefits from a high atom economy and large amounts of both starting materials are available from the cumene process. As the only by-product is water, it may be considered an industrial example of green chemistry. Global production in 2022 was estimated to be in the region of 10 million tonnes. Usually, the addition of acetone takes place at the para position on both phenols; however, minor amounts of the ortho-para (up to 3%) and ortho-ortho isomers are also produced, along with several other minor by‑products. These are not always removed and are known impurities in commercial samples of BPA. Properties BPA has a fairly high melting point but can be easily dissolved in a broad range of organic solvents including toluene, ethanol and ethyl acetate. It may be purified by recrystallisation from acetic acid with water. Crystals form in the monoclinic space group P 21/n (where n indicates the glide plane); within this, individual molecules of BPA are arranged with a 91.5° torsion angle between the phenol rings. Spectroscopic data is available from AIST. Uses and applications Main uses Polycarbonates About 65–70% of all bisphenol A is used to make polycarbonate plastics, which can consist of nearly 90% BPA by mass. Polymerisation is achieved by a reaction with phosgene, conducted under biphasic conditions; the hydrochloric acid is scavenged with aqueous base. This process converts the individual molecules of BPA into large polymer chains, effectively trapping them. Epoxy and vinyl ester resins About 25–30% of all BPA is used in the manufacture of epoxy resins and vinyl ester resins. For epoxy resin, it is first converted to its diglycidyl ether (usually abbreviated BADGE or DGEBA). This is achieved by a reaction with epichlorohydrin under basic conditions. Some of this is further reacted with methacrylic acid to form bis-GMA, which is used to make vinyl ester resins. Alternatively, and to a much lesser extent, BPA may be ethoxylated and then converted to its diacrylate and dimethacrylate derivatives (bis-EMA, or EBPADMA). These may be incorporated at low levels in vinyl ester resins to change their physical properties and see common use in dental composites and sealants. Minor uses The remaining 5% of BPA is used in a wide range of applications, many of which involve plastic. BPA is a main component of several high-performance plastics; the production of these is low compared to other plastics but still equals several thousand tons a year. Comparatively minor amounts of BPA are also used as additives or modifiers in some commodity plastics. These materials are much more common but their BPA content will be low. Plastics As a major component Polycyanurates can be produced from BPA by way of its dicyanate ester (BADCy). This is formed by a reaction between BPA and cyanogen bromide. Examples include BT-Epoxy, which is one of a number of resins used in the production of printed circuit boards. Polyetherimides such as Ultem can be produced from BPA via a nitro-displacement of appropriate bisnitroimides.
These thermoplastic polyimide plastics have exceptional resistance to mechanical, thermal and chemical damage. They are used in medical devices and other high performance instrumentation. Polybenzoxazines may be produced from a number of biphenols, including BPA. Polysulfones can be produced from BPA and bis(4-chlorophenyl) sulfone, forming poly(bisphenol-A sulfone) (PSF). It is used as a high performance alternative to polycarbonate. Bisphenol-A formaldehyde resins are a subset of phenol formaldehyde resins. They are used in the production of high-pressure laminates. As a minor component Polyurethane can incorporate BPA and its derivatives as hard segment chain extenders, particularly in memory foams. PVC can contain BPA and its derivatives through multiple routes. BPA is sometimes used as an antioxidant in phthalates, which are extensively used as plasticizers for PVC. BPA has also been used as an antioxidant to protect sensitive PVC heat stabilizers. Historically, 5–10% by weight of BPA was included in barium-cadmium types, although these have largely been phased out due to health concerns surrounding the cadmium. BPA diglycidyl ether (BADGE) is used as an acid scavenger, particularly in PVC dispersions, such as organosols or plastisols, which are used as coatings for the inside of food cans, as well as embossed clothes designs produced using heat transfer vinyl or screen printing machines. BPA is used to form a number of flame retardants used in plastics. Bromination of BPA forms tetrabromobisphenol A (TBBPA), which is mainly used as a reactive component of polymers, meaning that it is incorporated into the polymer backbone. It is used to prepare fire-resistant polycarbonates by replacing some bisphenol A. A lower grade of TBBPA is used to prepare epoxy resins, used in printed circuit boards. TBBPA is also converted to tetrabromobisphenol-A bis(2,3-dibromopropyl ether) (TBBPA-BDBPE), which can be used as a flame retardant in polypropylene. TBBPA-BDBPE is not chemically bonded to the polymer and can leach out into the environment. The use of these compounds is diminishing due to restrictions on brominated flame retardants. The reaction of BPA with phosphorus oxychloride and phenol forms bisphenol-A bis(diphenyl phosphate) (BADP), which is used as a liquid flame retardant in some high performance polymer blends such as polycarbonate/ABS mixtures. Other applications BPA is used as an antioxidant in several fields, particularly in brake fluids. BPA is used as a developing agent in thermal paper (shop receipts). Recycled paper products can also contain BPA, although this can depend strongly on how it is recycled. Deinking can remove 95% of BPA, with the pulp produced used to make newsprint, toilet paper and facial tissues. If deinking is not performed then the BPA remains in the fibers; paper recycled this way is usually made into corrugated fiberboard. Ethoxylated BPA finds minor use as a 'levelling agent' in tin electroplating. Several drug candidates have also been developed from bisphenol A, including ralaniten, ralaniten acetate, and EPI-001. BPA substitutes Concerns about the health effects of BPA have led some manufacturers to replace it with other bisphenols, such as bisphenol S and bisphenol F. These are produced in a similar manner to BPA, by replacing acetone with other ketones or aldehydes, which undergo analogous condensation reactions. Thus, in bisphenol F, the F signifies formaldehyde. Health concerns have also been raised about these substitutes.
Alternative polymers, such as tritan copolyester, have been developed to give the same properties as polycarbonate (durable, clear) without using BPA or its analogues. Human safety Exposure As a result of the presence of BPA in plastics and other commonplace materials, most people are frequently exposed to trace levels of BPA. The primary source of human exposure is via food, as epoxy and PVC are used to line the inside of food cans to prevent corrosion of the metal by acidic foodstuffs. Polycarbonate drink containers are also a source of exposure, although most disposable drinks bottles are actually made of PET, which contains no BPA. Among the non-food sources, exposure routes include dust, thermal paper, clothing, dental materials, and medical devices. Although BPA exposure is common, it does not accumulate within the body, with toxicokinetic studies showing the biological half-life of BPA in adult humans to be around two hours. The body first converts it into more water-soluble compounds via glucuronidation or sulfation, which are then removed from the body through the urine. This allows exposure to be easily determined by urine testing, facilitating convenient biomonitoring of populations. Food and drink containers made from bisphenol A-containing plastics do not contaminate their contents sufficiently to cause any increased cancer risk. Health effects and regulation The health effects of BPA have been the subject of prolonged public and scientific debate, with PubMed listing more than 18,000 scientific papers as of 2024. Concern is mostly related to its estrogen-like activity, although it can interact with other receptor systems as an endocrine-disrupting chemical. These interactions are all very weak, but exposure to BPA is effectively lifelong, leading to concern over possible cumulative effects. Studying this sort of long-term, low-dose interaction is difficult, and although there have been numerous studies, there are considerable discrepancies in their conclusions regarding the nature of the effects observed as well as the levels at which they occur. A common criticism is that industry-sponsored trials tend to show BPA as being safer than studies performed by academic or government laboratories, although this has also been explained in terms of industry studies being better designed. In the 2010s, public health agencies in the EU, US, Canada, Australia and Japan, as well as the WHO, all reviewed the health risks of BPA, and found normal exposure to be below the level currently associated with risk. Regardless, due to the scientific uncertainty, many jurisdictions continued to take steps to reduce exposure on a precautionary basis. In particular, infants were considered to be at greater risk, leading to bans on the use of BPA in baby bottles and related products by the US, Canada, and EU amongst others. Bottle producers largely switched from polycarbonate to polypropylene, and there is some evidence that BPA exposure in infants has decreased as a result of this. The European Food Safety Authority completed a re-evaluation of the risks of BPA in 2023, concluding that its tolerable daily intake should be greatly reduced. This led the European Union, on 19 December 2024, to ban BPA in all food contact materials, including plastic and coated packaging. The ban will come into force after an implementation period of up to three years. BPA exhibits very low acute toxicity (i.e. from a single large dose) as indicated by its LD50 of 4 g/kg (mouse).
Reports indicate that it is a minor skin irritant as well, although less so than phenol. Pharmacology BPA has been found to interact with a diverse range of hormone receptors, in both humans and animals. It binds to both of the nuclear estrogen receptors (ERs), ERα and ERβ. BPA is a selective estrogen receptor modulator (SERM), or partial agonist of the ER, so it can serve as both an estrogen agonist and antagonist. However, it is 1000- to 2000-fold less potent than estradiol, the major female sex hormone in humans. At high concentrations, BPA also binds to and acts as an antagonist of the androgen receptor (AR). In addition to receptor binding, the compound has been found to affect Leydig cell steroidogenesis, including affecting 17α-hydroxylase/17,20 lyase and aromatase expression and interfering with LH receptor-ligand binding. Bisphenol A also interacts with the estrogen-related receptor γ (ERR-γ). This orphan receptor (endogenous ligand unknown) behaves as a constitutive activator of transcription. BPA seems to bind strongly to ERR-γ (dissociation constant = 5.5 nM), but only weakly to the ER. BPA binding to ERR-γ preserves its basal constitutive activity. It can also protect it from deactivation by the SERM 4-hydroxytamoxifen (afimoxifene). This may be the mechanism by which BPA acts as a xenoestrogen. Different expression of ERR-γ in different parts of the body may account for variations in bisphenol A effects. BPA has also been found to act as an agonist of the GPER (GPR30). Environmental safety Distribution and degradation BPA has been detectable in the natural environment since the 1990s and is now widely distributed. It is primarily a river pollutant, but has also been observed in the marine environment and in soils, and lower levels can also be detected in air. The solubility of BPA in water is low (~300 g per ton of water) but this is still sufficient to make it a significant means of distribution into the environment. Many of the largest sources of BPA pollution are water-based, particularly wastewater from industrial facilities using BPA. Paper recycling can be a major source of release when this includes thermal paper; leaching from PVC items may also be a significant source, as can landfill leachate. In all cases, wastewater treatment can be highly effective at removing BPA, giving reductions of 91–98%. Regardless, the remaining 2–9% of BPA will continue through to the environment, with low levels of BPA commonly observed in surface water and sediment in the U.S. and Europe. Once in the environment, BPA is aerobically biodegraded by a wide variety of organisms. Its half-life in water has been estimated at between 4.5 and 15 days; degradation in the air is faster than this, while soil samples degrade more slowly. BPA in sediment degrades most slowly of all, particularly where this is anaerobic. Abiotic degradation has been reported, but is generally slower than biodegradation. Pathways include photo-oxidation, or reactions with minerals such as goethite which may be present in soils and sediments. Environmental effects BPA is an environmental contaminant of emerging concern. Despite its short half-life and non-bioaccumulating character, the continuous release of BPA into the environment causes continuous exposure to both plant and animal life. Although many studies have been performed, these often focus on a limited range of model organisms and can use BPA concentrations well beyond environmental levels.
As such, the precise effects of BPA on the growth, reproduction, and development of aquatic organisms are not fully understood. Regardless, the existing data show the effects of BPA on wildlife to be generally negative. BPA appears able to affect development and reproduction in a wide range of wildlife, with certain species being particularly sensitive, such as invertebrates and amphibians. See also Structurally related 4,4'-Dihydroxybenzophenone - used as a UV stabilizer in cosmetics and plastics Dinitrobisphenol A - a proposed metabolite of BPA, which may show increased endocrine disrupting character HPTE - a metabolite of the synthetic insecticide methoxychlor Others 2,2,4,4-Tetramethyl-1,3-cyclobutanediol - next generation BPA replacement 4-tert-Butylphenol - used as a chain-length regulator in the production of polycarbonates and epoxy resins; it has also been studied as a potential endocrine disruptor References 2,2-Bis(4-hydroxyphenyl)propanes Bis(4-hydroxyphenyl)methanes Commodity chemicals Endocrine disruptors GPER agonists Medical controversies Monomers Nonsteroidal antiandrogens Russian inventions Selective estrogen receptor modulators Xenoestrogens
Bisphenol A
[ "Chemistry", "Materials_science" ]
4,232
[ "Products of chemical industry", "Endocrine disruptors", "Polymer chemistry", "Monomers", "Commodity chemicals" ]
1,001,490
https://en.wikipedia.org/wiki/Convex%20conjugate
In mathematics and mathematical optimization, the convex conjugate of a function is a generalization of the Legendre transformation which applies to non-convex functions. It is also known as Legendre–Fenchel transformation, Fenchel transformation, or Fenchel conjugate (after Adrien-Marie Legendre and Werner Fenchel). The convex conjugate is widely used for constructing the dual problem in optimization theory, thus generalizing Lagrangian duality. Definition Let X be a real topological vector space and let X* be the dual space to X. Denote by ⟨·, ·⟩ : X* × X → R the canonical dual pairing, which is defined by ⟨x*, x⟩ = x*(x). For a function f : X → R ∪ {−∞, +∞} taking values on the extended real number line, its convex conjugate is the function f* : X* → R ∪ {−∞, +∞} whose value at x* ∈ X* is defined to be the supremum: f*(x*) = sup{⟨x*, x⟩ − f(x) : x ∈ X}, or, equivalently, in terms of the infimum: f*(x*) = −inf{f(x) − ⟨x*, x⟩ : x ∈ X}. This definition can be interpreted as an encoding of the convex hull of the function's epigraph in terms of its supporting hyperplanes. Examples For more examples, see the table of selected convex conjugates below. The convex conjugate of an affine function f(x) = ⟨a, x⟩ − b is f*(x*) = b if x* = a, and +∞ otherwise. The convex conjugate of a power function f(x) = |x|^p/p (1 < p < ∞) is f*(x*) = |x*|^q/q, where 1/p + 1/q = 1. The convex conjugate of the absolute value function f(x) = |x| is f*(x*) = 0 for |x*| ≤ 1, and +∞ otherwise. The convex conjugate of the exponential function f(x) = e^x is f*(x*) = x* ln x* − x* for x* > 0, 0 for x* = 0, and +∞ for x* < 0. The convex conjugate and Legendre transform of the exponential function agree except that the domain of the convex conjugate is strictly larger, as the Legendre transform is only defined for positive real numbers. Connection with expected shortfall (average value at risk) See the article on expected shortfall for worked examples. Let F denote the cumulative distribution function of a random variable X. Then (integrating by parts) the function f(x) = ∫_{−∞}^{x} F(u) du = E[max(0, x − X)] has the convex conjugate f*(p) = ∫_0^p F^{−1}(q) dq for p ∈ [0, 1]. Ordering The transform can be interpreted as producing a nondecreasing rearrangement of the initial function f; in particular, it leaves f unchanged when f is nondecreasing. Properties The convex conjugate of a closed convex function is again a closed convex function. The convex conjugate of a polyhedral convex function (a convex function with polyhedral epigraph) is again a polyhedral convex function. Order reversing Declare that f ≤ g if and only if f(x) ≤ g(x) for all x. Then convex-conjugation is order-reversing, which by definition means that if f ≤ g then f* ≥ g*. For a family of functions (f_α)_α it follows from the fact that supremums may be interchanged that (inf_α f_α)* = sup_α f_α*, and from the max–min inequality that (sup_α f_α)* ≤ inf_α f_α*. Biconjugate The convex conjugate of a function is always lower semi-continuous. The biconjugate f** (the convex conjugate of the convex conjugate) is also the closed convex hull, i.e. the largest lower semi-continuous convex function with f** ≤ f. For proper functions f, f = f** if and only if f is convex and lower semi-continuous, by the Fenchel–Moreau theorem. Fenchel's inequality For any function f and its convex conjugate f*, Fenchel's inequality (also known as the Fenchel–Young inequality) ⟨x*, x⟩ ≤ f(x) + f*(x*) holds for every x ∈ X and x* ∈ X*. Furthermore, equality holds only when x* lies in the subdifferential ∂f(x). The proof follows from the definition of the convex conjugate: f*(x*) ≥ ⟨x*, x⟩ − f(x) for all x. Convexity For two functions f_0 and f_1 and a number 0 ≤ λ ≤ 1 the convexity relation ((1 − λ)f_0 + λf_1)* ≤ (1 − λ)f_0* + λf_1* holds. The conjugation operation is a convex mapping itself. Infimal convolution The infimal convolution (or epi-sum) of two functions f and g is defined as (f □ g)(x) = inf{f(x − y) + g(y) : y}. Let f_1, ..., f_m be proper, convex and lower semicontinuous functions on R^n. Then the infimal convolution is convex and lower semicontinuous (but not necessarily proper), and satisfies (f_1 □ ⋯ □ f_m)* = f_1* + ⋯ + f_m*. The infimal convolution of two functions has a geometric interpretation: the (strict) epigraph of the infimal convolution of two functions is the Minkowski sum of the (strict) epigraphs of those functions.
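The defining supremum is straightforward to approximate on a grid, which makes the one-dimensional examples above easy to check numerically. A minimal sketch (Python with NumPy; the grid bounds and resolution are arbitrary illustrative choices):

```python
# Minimal sketch: numerical convex conjugate f*(y) = sup_x (x*y - f(x))
# on a 1-D grid. Grid bounds and resolution are illustrative choices.
import numpy as np

def conjugate(f, xs, ys):
    # For each slope y, maximize x*y - f(x) over the grid of x values.
    return np.array([np.max(y * xs - f(xs)) for y in ys])

xs = np.linspace(-10.0, 10.0, 20001)
ys = np.linspace(-2.0, 2.0, 9)

# f(x) = x^2/2 is self-conjugate: f*(y) = y^2/2.
fstar = conjugate(lambda x: 0.5 * x**2, xs, ys)
assert np.allclose(fstar, 0.5 * ys**2, atol=1e-3)

# f(x) = |x| conjugates to the indicator of [-1, 1]: the result is 0 for
# |y| <= 1, while for |y| > 1 the supremum grows with the grid radius,
# approximating the value +infinity.
gstar = conjugate(np.abs, xs, ys)
print(dict(zip(ys.round(1), gstar.round(2))))
```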
Maximizing argument If the function f is differentiable, then its derivative is the maximizing argument in the computation of the convex conjugate: f*(x*) = ⟨x(x*), x*⟩ − f(x(x*)), where x(x*) = arg max_x (⟨x, x*⟩ − f(x)) satisfies f′(x(x*)) = x*, and hence x(x*) = (f′)^{−1}(x*); moreover, (f*)′(x*) = x(x*). Scaling properties If g(x) = a f(x) for some a > 0, then g*(x*) = a f*(x*/a); similarly, if h(x) = f(a x) for some a ≠ 0, then h*(x*) = f*(x*/a). Behavior under linear transformations Let A : X → Y be a bounded linear operator. For any convex function f on X, (A f)* = f* ∘ A*, where (A f)(y) = inf{f(x) : x ∈ X, A x = y} is the preimage of f with respect to A and A* is the adjoint operator of A. A closed convex function f is symmetric with respect to a given set G of orthogonal linear transformations, f(A x) = f(x) for all x and all A ∈ G, if and only if its convex conjugate f* is symmetric with respect to G. Table of selected convex conjugates The following table provides Legendre transforms for many common functions as well as a few useful properties. See also Dual problem Fenchel's duality theorem Legendre transformation Young's inequality for products References Further reading Convex analysis Duality theories Theorems involving convexity Transforms
Convex conjugate
[ "Mathematics" ]
927
[ "Functions and mappings", "Mathematical structures", "Mathematical objects", "Mathematical relations", "Transforms", "Geometry", "Duality theories", "Category theory" ]
1,001,628
https://en.wikipedia.org/wiki/Power%20electronics
Power electronics is the application of electronics to the control and conversion of electric power. The first high-power electronic devices were made using mercury-arc valves. In modern systems, the conversion is performed with semiconductor switching devices such as diodes, thyristors, and power transistors such as the power MOSFET and IGBT. In contrast to electronic systems concerned with the transmission and processing of signals and data, substantial amounts of electrical energy are processed in power electronics. An AC/DC converter (rectifier) is the most typical power electronics device found in many consumer electronic devices, e.g. television sets, personal computers, battery chargers, etc. The power range is typically from tens of watts to several hundred watts. In industry, a common application is the variable speed drive (VSD) that is used to control an induction motor. The power range of VSDs starts from a few hundred watts and ends at tens of megawatts. The power conversion systems can be classified according to the type of the input and output power: AC to DC (rectifier) DC to AC (inverter) DC to DC (DC-to-DC converter) AC to AC (AC-to-AC converter) History Power electronics started with the development of the mercury arc rectifier. Invented by Peter Cooper Hewitt in 1902, it was used to convert alternating current (AC) into direct current (DC). From the 1920s on, research continued on applying thyratrons and grid-controlled mercury arc valves to power transmission. Uno Lamm developed a mercury valve with grading electrodes, making them suitable for high voltage direct current power transmission. In 1933, selenium rectifiers were invented. Julius Edgar Lilienfeld proposed the concept of a field-effect transistor in 1926, but it was not possible to actually construct a working device at that time. In 1947, the bipolar point-contact transistor was invented by Walter H. Brattain and John Bardeen under the direction of William Shockley at Bell Labs. In 1948, Shockley's invention of the bipolar junction transistor (BJT) improved the stability and performance of transistors, and reduced costs. By the 1950s, higher power semiconductor diodes became available and started replacing vacuum tubes. In 1956, the silicon controlled rectifier (SCR) was introduced by General Electric, greatly increasing the range of power electronics applications. By the 1960s, the improved switching speed of bipolar junction transistors had allowed for high frequency DC/DC converters. R. D. Middlebrook made important contributions to power electronics. In 1970, he founded the Power Electronics Group at Caltech. He developed the state-space averaging method of analysis and other tools crucial to modern power electronics design. Power MOSFET In 1957, Frosch and Derick were able to manufacture the first silicon dioxide field effect transistors at Bell Labs, the first transistors in which drain and source were adjacent at the surface. Subsequently, in 1960, a Bell Labs team led by Dawon Kahng demonstrated a working MOSFET. Their team included E. E. LaBate and E. I. Povilonis, who fabricated the device; M. O. Thurston, L. A. D'Asaro, and J. R. Ligenza, who developed the diffusion processes; and H. K. Gummel and R. Lindner, who characterized the device. In 1969, Hitachi introduced the first vertical power MOSFET, which would later be known as the VMOS (V-groove MOSFET). From 1974, Yamaha, JVC, Pioneer Corporation, Sony and Toshiba began manufacturing audio amplifiers with power MOSFETs.
International Rectifier introduced a 25 A, 400 V power MOSFET in 1978. This device allows operation at higher frequencies than a bipolar transistor, but is limited to low voltage applications. The power MOSFET is the most common power device in the world, due to its low gate drive power, fast switching speed, easy advanced paralleling capability, wide bandwidth, ruggedness, easy drive, simple biasing, ease of application, and ease of repair. It has a wide range of power electronic applications, such as portable information appliances, power integrated circuits, cell phones, notebook computers, and the communications infrastructure that enables the Internet. In 1982, the insulated-gate bipolar transistor (IGBT) was introduced. It became widely available in the 1990s. This component has the power handling capability of the bipolar transistor and the advantages of the isolated gate drive of the power MOSFET. Devices The capabilities and economy of power electronics system are determined by the active devices that are available. Their characteristics and limitations are a key element in the design of power electronics systems. Formerly, the mercury arc valve, the high-vacuum and gas-filled diode thermionic rectifiers, and triggered devices such as the thyratron and ignitron were widely used in power electronics. As the ratings of solid-state devices improved in both voltage and current-handling capacity, vacuum devices have been nearly entirely replaced by solid-state devices. Power electronic devices may be used as switches, or as amplifiers. An ideal switch is either open or closed and so dissipates no power; it withstands an applied voltage and passes no current or passes any amount of current with no voltage drop. Semiconductor devices used as switches can approximate this ideal property and so most power electronic applications rely on switching devices on and off, which makes systems very efficient as very little power is wasted in the switch. By contrast, in the case of the amplifier, the current through the device varies continuously according to a controlled input. The voltage and current at the device terminals follow a load line, and the power dissipation inside the device is large compared with the power delivered to the load. Several attributes dictate how devices are used. Devices such as diodes conduct when a forward voltage is applied and have no external control of the start of conduction. Power devices such as silicon controlled rectifiers and thyristors (as well as the mercury valve and thyratron) allow control of the start of conduction but rely on periodic reversal of current flow to turn them off. Devices such as gate turn-off thyristors, BJT and MOSFET transistors provide full switching control and can be turned on or off without regard to the current flow through them. Transistor devices also allow proportional amplification, but this is rarely used for systems rated more than a few hundred watts. The control input characteristics of a device also significantly affect design; sometimes, the control input is at a very high voltage with respect to ground and must be driven by an isolated source. As efficiency is at a premium in a power electronic converter, the losses generated by a power electronic device should be as low as possible. Devices vary in switching speed. Some diodes and thyristors are suited for relatively slow speed and are useful for power frequency switching and control; certain thyristors are useful at a few kilohertz. 
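Why switching behavior dominates the design can be roughed out with a first-order loss model. The sketch below (Python; every device parameter is an illustrative assumption, not a datasheet value) separates frequency-independent conduction loss from switching loss that grows linearly with frequency:

```python
# Minimal sketch: first-order losses of a hard-switched power device.
# All parameter values are illustrative assumptions, not datasheet data.

V_BUS = 400.0    # volts blocked by the device when off
I_LOAD = 10.0    # amperes carried by the device when on
V_ON = 1.5       # on-state voltage drop, volts
T_SW = 100e-9    # combined current rise + fall time, seconds
DUTY = 0.5       # fraction of each cycle spent conducting

def losses(f_sw):
    p_cond = V_ON * I_LOAD * DUTY              # on-state (conduction) loss
    # Triangular voltage/current overlap approximation for hard switching:
    p_sw = 0.5 * V_BUS * I_LOAD * T_SW * f_sw  # transition loss
    return p_cond, p_sw

for f in (1e3, 10e3, 100e3):
    p_cond, p_sw = losses(f)
    print(f"{f / 1e3:6.0f} kHz: conduction {p_cond:.1f} W, switching {p_sw:.1f} W")
```

Conduction loss stays at a few watts regardless of frequency, while the switching term overtakes it as the frequency climbs, which is why transition times and gate drive matter so much in the discussion that follows.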
Devices such as MOSFETs and BJTs can switch at tens of kilohertz up to a few megahertz in power applications, but with decreasing power levels. Vacuum tube devices dominate high power (hundreds of kilowatts) at very high frequency (hundreds or thousands of megahertz) applications. Faster switching devices minimize energy lost in the transitions from on to off and back but may create problems with radiated electromagnetic interference. Gate drive (or equivalent) circuits must be designed to supply sufficient drive current to achieve the full switching speed possible with a device. A device without sufficient drive to switch rapidly may be destroyed by excess heating. Practical devices have a non-zero voltage drop and dissipate power when on, and take some time to pass through an active region until they reach the "on" or "off" state. These losses are a significant part of the total lost power in a converter. Power handling and dissipation of devices is also a critical factor in design. Power electronic devices may have to dissipate tens or hundreds of watts of waste heat, even switching as efficiently as possible between conducting and non-conducting states. In the switching mode, the power controlled is much larger than the power dissipated in the switch. The forward voltage drop in the conducting state translates into heat that must be dissipated. High power semiconductors require specialized heat sinks or active cooling systems to manage their junction temperature; exotic semiconductors such as silicon carbide have an advantage over straight silicon in this respect, and germanium, once the mainstay of solid-state electronics, is now little used due to its unfavorable high-temperature properties. Semiconductor devices exist with ratings up to a few kilovolts in a single device. Where very high voltage must be controlled, multiple devices must be used in series, with networks to equalize voltage across all devices. Again, switching speed is a critical factor since the slowest-switching device will have to withstand a disproportionate share of the overall voltage. Mercury valves were once available with ratings to 100 kV in a single unit, simplifying their application in HVDC systems. The current rating of a semiconductor device is limited by the heat generated within the dies and the heat developed in the resistance of the interconnecting leads. Semiconductor devices must be designed so that current is evenly distributed within the device across its internal junctions (or channels); once a "hot spot" develops, breakdown effects can rapidly destroy the device. Certain SCRs are available with current ratings to 3000 amperes in a single unit. DC/AC converters (inverters) DC to AC converters produce an AC output waveform from a DC source. Applications include adjustable speed drives (ASD), uninterruptible power supplies (UPS), flexible AC transmission systems (FACTS), voltage compensators, and photovoltaic inverters. Topologies for these converters can be separated into two distinct categories: voltage source inverters and current source inverters. Voltage source inverters (VSIs) are so named because the independently controlled output is a voltage waveform. Similarly, current source inverters (CSIs) are distinct in that the controlled AC output is a current waveform. DC to AC power conversion is the result of power switching devices, which are commonly fully controllable semiconductor power switches.
The output waveforms are therefore made up of discrete values, producing fast transitions rather than smooth ones. For some applications, even a rough approximation of the sinusoidal waveform of AC power is adequate. Where a near sinusoidal waveform is required, the switching devices are operated much faster than the desired output frequency, and the time they spend in either state is controlled so the averaged output is nearly sinusoidal. Common modulation techniques include the carrier-based (pulse-width modulation) technique, the space-vector technique, and the selective-harmonic technique. Voltage source inverters have practical uses in both single-phase and three-phase applications. Single-phase VSIs utilize half-bridge and full-bridge configurations, and are widely used for power supplies, single-phase UPSs, and elaborate high-power topologies when used in multicell configurations. Three-phase VSIs are used in applications that require sinusoidal voltage waveforms, such as ASDs, UPSs, and some types of FACTS devices such as the STATCOM. They are also used in applications where arbitrary voltages are required, as in the case of active power filters and voltage compensators. Current source inverters are used to produce an AC output current from a DC current supply. This type of inverter is practical for three-phase applications in which high-quality voltage waveforms are required. A relatively new class of inverters, called multilevel inverters, has gained widespread interest; these are discussed in the multilevel inverters section below. Each inverter type differs in the DC links used, and in whether or not they require freewheeling diodes. Either can be made to operate in square-wave or pulse-width modulation (PWM) mode, depending on its intended usage. Square-wave mode offers simplicity, while PWM can be implemented in several different ways and produces higher quality waveforms. Voltage source inverters (VSIs) feed the output inverter section from an approximately constant-voltage source. The desired quality of the current output waveform determines which modulation technique needs to be selected for a given application. The output of a VSI is composed of discrete values. In order to obtain a smooth current waveform, the loads need to be inductive at the selected harmonic frequencies. Without some sort of inductive filtering between the source and load, a capacitive load will cause the load to receive a choppy current waveform, with large and frequent current spikes. There are three main types of VSIs: Single-phase half-bridge inverter Single-phase full-bridge inverter Three-phase voltage source inverter Single-phase half-bridge inverter The single-phase voltage source half-bridge inverters are meant for lower voltage applications and are commonly used in power supplies. Figure 9 shows the circuit schematic of this inverter. Low-order current harmonics get injected back to the source voltage by the operation of the inverter. This means that two large capacitors are needed for filtering purposes in this design. As Figure 9 illustrates, only one switch can be on at a time in each leg of the inverter.
If both switches in a leg were on at the same time, the DC source would be shorted out. Inverters can use several modulation techniques to control their switching schemes. The carrier-based PWM technique compares the AC output waveform, vc, to a carrier voltage signal, vΔ. When vc is greater than vΔ, S+ is on, and when vc is less than vΔ, S− is on. When the AC output is at frequency fc with its amplitude at vc, and the triangular carrier signal is at frequency fΔ with its amplitude at vΔ, the PWM becomes a special sinusoidal case of the carrier based PWM. This case is dubbed sinusoidal pulse-width modulation (SPWM). For this, the modulation index, or amplitude-modulation ratio, is defined as ma = vc/vΔ. The normalized carrier frequency, or frequency-modulation ratio, is calculated using the equation mf = fΔ/fc. If ma exceeds one, operation enters the over-modulation region: a higher fundamental AC output voltage will be observed, but at the cost of saturation. For SPWM, the harmonics of the output waveform are at well-defined frequencies and amplitudes. This simplifies the design of the filtering components needed for the low-order current harmonic injection from the operation of the inverter. The maximum output amplitude in this mode of operation is half of the source voltage. If ma exceeds 3.24, the output waveform of the inverter becomes a square wave. As was true for pulse-width modulation (PWM), both switches in a leg for square wave modulation cannot be turned on at the same time, as this would cause a short across the voltage source. The switching scheme requires that both S+ and S− be on for a half cycle of the AC output period. The fundamental AC output amplitude is equal to vo1 = (4/π)(vi/2). Its harmonics each have an amplitude of voh = vo1/h, where h is the harmonic order. Therefore, the AC output voltage is not controlled by the inverter, but rather by the magnitude of the DC input voltage of the inverter. Using selective harmonic elimination (SHE) as a modulation technique allows the switching of the inverter to selectively eliminate intrinsic harmonics. The fundamental component of the AC output voltage can also be adjusted within a desirable range. Since the AC output voltage obtained from this modulation technique has odd half and odd quarter-wave symmetry, even harmonics do not exist. Any undesirable odd (N-1) intrinsic harmonics from the output waveform can be eliminated. Single-phase full-bridge inverter The full-bridge inverter is similar to the half-bridge inverter, but it has an additional leg to connect the neutral point to the load. Figure 3 shows the circuit schematic of the single-phase voltage source full-bridge inverter. To avoid shorting out the voltage source, S1+ and S1− cannot be on at the same time, and S2+ and S2− also cannot be on at the same time. Any modulating technique used for the full-bridge configuration should have either the top or the bottom switch of each leg on at any given time. Due to the extra leg, the maximum amplitude of the output waveform is Vi, and is twice as large as the maximum achievable output amplitude for the half-bridge configuration. States 1 and 2 from Table 2 are used to generate the AC output voltage with bipolar SPWM. The AC output voltage can take on only two values, either Vi or −Vi. To generate these same states using a half-bridge configuration, a carrier based technique can be used. S+ being on for the half-bridge corresponds to S1+ and S2− being on for the full-bridge. Similarly, S− being on for the half-bridge corresponds to S1− and S2+ being on for the full bridge.
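The comparison of reference and carrier described above is easy to reproduce numerically. A minimal sketch (Python with NumPy; the frequencies and modulation index are illustrative choices) generates one leg's bipolar SPWM waveform and recovers its fundamental component:

```python
# Minimal sketch of carrier-based bipolar SPWM for one inverter leg.
# Frequencies and modulation index are illustrative choices.
import numpy as np

f_out, f_carrier, m_a = 50.0, 1950.0, 0.8   # output Hz, carrier Hz, m_a
t = np.linspace(0.0, 1.0 / f_out, 20000, endpoint=False)

v_ref = m_a * np.sin(2 * np.pi * f_out * t)                      # modulating signal
v_tri = 2.0 * np.abs(2.0 * ((f_carrier * t) % 1.0) - 1.0) - 1.0  # carrier in [-1, 1]

# S+ conducts when the reference exceeds the carrier; output normalized to +/-1:
v_out = np.where(v_ref > v_tri, 1.0, -1.0)

# Fundamental amplitude from the Fourier coefficient at f_out:
fund = 2.0 * np.abs(np.mean(v_out * np.exp(-2j * np.pi * f_out * t)))
print(f"fundamental ~ {fund:.3f} (linear-region prediction: m_a = {m_a})")
```

With mf = 39 here, the recovered fundamental tracks ma closely, in line with the linear-region behavior described next.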
The output voltage for this modulation technique is more or less sinusoidal, with a fundamental component whose amplitude in the linear region (ma less than or equal to one) is vo1 = ma·vi. Unlike the bipolar PWM technique, the unipolar approach uses states 1, 2, 3, and 4 from Table 2 to generate its AC output voltage. Therefore, the AC output voltage can take on the values Vi, 0 or −Vi. To generate these states, two sinusoidal modulating signals, Vc and −Vc, are needed, as seen in Figure 4. Vc is used to generate VaN, while −Vc is used to generate VbN. The relationship vo = VaN − VbN is called unipolar carrier-based SPWM. The phase voltages VaN and VbN are identical, but 180 degrees out of phase with each other. The output voltage is equal to the difference of the two phase voltages, and does not contain any even harmonics. Therefore, if mf is taken even, the AC output voltage harmonics will appear at normalized odd frequencies, fh. These frequencies are centered on double the value of the normalized carrier frequency. This particular feature allows for smaller filtering components when trying to obtain a higher quality output waveform. As was the case for the half-bridge SHE, the AC output voltage contains no even harmonics due to its odd half and odd quarter-wave symmetry. Three-phase voltage source inverter Single-phase VSIs are used primarily for low power range applications, while three-phase VSIs cover both medium and high power range applications. Figure 5 shows the circuit schematic for a three-phase VSI. The switches in any leg of the inverter cannot be switched off simultaneously, as this would make the voltages depend on the polarity of the respective line current. States 7 and 8 produce zero AC line voltages, which result in AC line currents freewheeling through either the upper or the lower components. However, the line voltages for states 1 through 6 produce an AC line voltage consisting of the discrete values of Vi, 0 or −Vi. For three-phase SPWM, three modulating signals that are 120 degrees out of phase with one another are used in order to produce out-of-phase load voltages. In order to preserve the PWM features with a single carrier signal, the normalized carrier frequency, mf, needs to be a multiple of three. This keeps the magnitude of the phase voltages identical, but out of phase with each other by 120 degrees. The maximum achievable phase voltage amplitude in the linear region (ma less than or equal to one) is vphase = ma·vi/2. The maximum achievable line voltage amplitude is vline = √3·ma·vi/2. The only way to control the load voltage is by changing the input DC voltage. Current source inverters Current source inverters convert DC current into an AC current waveform. In applications requiring sinusoidal AC waveforms, magnitude, frequency, and phase should all be controlled. CSIs have high changes in current over time, so capacitors are commonly employed on the AC side, while inductors are commonly employed on the DC side. Due to the absence of freewheeling diodes, the power circuit is reduced in size and weight, and tends to be more reliable than VSIs. Although single-phase topologies are possible, three-phase CSIs are more practical. In its most generalized form, a three-phase CSI employs the same conduction sequence as a six-pulse rectifier. At any time, only one common-cathode switch and one common-anode switch are on. As a result, line currents take discrete values of −ii, 0 and ii. States are chosen such that a desired waveform is output and only valid states are used.
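These valid states are few enough to enumerate directly, as the short sketch below does (plain Python; the a/b/c phase labels are the usual convention):

```python
# Minimal sketch: the nine valid switching states of a three-phase CSI.
# Exactly one common-cathode (top) and one common-anode (bottom) switch
# conduct at any time, so each state pairs a top phase with a bottom phase.

PHASES = "abc"
I_DC = 1.0   # normalized DC link current i_i

for top in PHASES:          # phase fed through the top switch (+i_i)
    for bottom in PHASES:   # phase returning through the bottom switch (-i_i)
        line = {p: 0.0 for p in PHASES}
        line[top] += I_DC
        line[bottom] -= I_DC  # top == bottom routes the link through one leg
        currents = ", ".join(f"i_{p} = {line[p]:+.0f}" for p in PHASES)
        print(f"top {top}, bottom {bottom}: {currents}")
```

Three of the nine states route the link current through a single leg, giving zero (freewheeling) line currents; the other six produce the −ii, 0, ii line-current patterns noted above.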
This selection is based on modulating techniques, which include carrier-based PWM, selective harmonic elimination, and space-vector techniques. Carrier-based techniques used for VSIs can also be implemented for CSIs, resulting in CSI line currents that behave in the same way as VSI line voltages. The digital circuit utilized for modulating signals contains a switching pulse generator, a shorting pulse generator, a shorting pulse distributor, and a switching and shorting pulse combiner. A gating signal is produced based on a carrier current and three modulating signals. A shorting pulse is added to this signal when no top switches and no bottom switches are gated, causing the RMS currents to be equal in all legs. The same methods are utilized for each phase; however, the switching variables are 120 degrees out of phase relative to one another, and the current pulses are shifted by a half-cycle with respect to output currents. If a triangular carrier is used with sinusoidal modulating signals, the CSI is said to be utilizing synchronized-pulse-width-modulation (SPWM). If full over-modulation is used in conjunction with SPWM, the inverter is said to be in square-wave operation. The second CSI modulation category, SHE, is also similar to its VSI counterpart. Utilizing the gating signals developed for a VSI and a set of synchronizing sinusoidal current signals results in symmetrically distributed shorting pulses and, therefore, symmetrical gating patterns. This allows any arbitrary number of harmonics to be eliminated. It also allows control of the fundamental line current through the proper selection of primary switching angles. Optimal switching patterns must have quarter-wave and half-wave symmetry, as well as symmetry about 30 degrees and 150 degrees. Switching patterns are never allowed between 60 degrees and 120 degrees. The current ripple can be further reduced with the use of larger output capacitors, or by increasing the number of switching pulses. The third category, space-vector-based modulation, generates PWM load line currents that equal load line currents, on average. Valid switching states and time selections are made digitally based on space vector transformation. Modulating signals are represented as a complex vector using a transformation equation. For balanced three-phase sinusoidal signals, this vector has a fixed modulus and rotates at a frequency ω. These space vectors are then used to approximate the modulating signal. If the signal is between arbitrary vectors, the vectors are combined with the zero vectors I7, I8, or I9. Appropriate equations are used to ensure that the generated currents and the current vectors are, on average, equivalent. Multilevel inverters A relatively new class called multilevel inverters has gained widespread interest. Normal operation of CSIs and VSIs can be classified as two-level inverters because the power switches connect to either the positive or the negative DC bus. If more than two voltage levels were available to the inverter output terminals, the AC output could better approximate a sine wave. For this reason multilevel inverters, although more complex and costly, offer higher performance. A three-level neutral-clamped inverter is shown in Figure 10. Control methods for a three-level inverter only allow two switches of the four switches in each leg to simultaneously change conduction states. This allows smooth commutation and avoids shoot through by only selecting valid states.
In a multilevel inverter, since the DC bus voltage is shared by at least two power valves, their voltage ratings can be lower than those of a two-level counterpart. Carrier-based and space-vector modulation techniques are used for multilevel topologies. The methods for these techniques follow those of classic inverters, but with added complexity. Space-vector modulation offers a greater number of fixed voltage vectors to be used in approximating the modulation signal, and therefore allows more effective space-vector PWM strategies to be accomplished at the cost of more elaborate algorithms. Due to the added complexity and number of semiconductor devices, multilevel inverters are currently more suitable for high-power high-voltage applications. This technology reduces harmonics and hence improves the overall efficiency of the scheme. AC/AC converters Converting AC power to AC power allows control of the voltage, frequency, and phase of the waveform applied to a load from a supplied AC system. Converters can be separated into two main categories according to whether the frequency of the waveform is changed. AC/AC converters that don't allow the user to modify the frequency are known as AC Voltage Controllers, or AC Regulators. AC converters that allow the user to change the frequency are simply referred to as frequency converters for AC to AC conversion. Under frequency converters there are three different types of converter that are typically used: the cycloconverter, the matrix converter, and the DC link converter (also known as an AC/DC/AC converter). AC voltage controller: The purpose of an AC Voltage Controller, or AC Regulator, is to vary the RMS voltage across the load while at a constant frequency. Three control methods that are generally accepted are ON/OFF Control, Phase-Angle Control, and Pulse-Width Modulation AC Chopper Control (PWM AC Chopper Control). All three of these methods can be implemented not only in single-phase circuits, but in three-phase circuits as well. ON/OFF Control: Typically used for heating loads or speed control of motors, this control method involves turning the switch on for n integral cycles and turning the switch off for m integral cycles. Because turning the switches on and off causes undesirable harmonics to be created, the switches are turned on and off during zero-voltage and zero-current conditions (zero-crossing), effectively reducing the distortion. Phase-Angle Control: Various circuits exist to implement a phase-angle control on different waveforms, such as half-wave or full-wave voltage control. The power electronic components that are typically used are diodes, SCRs, and triacs. With the use of these components, the user can delay the firing angle in a wave, so that only part of the wave appears at the output. PWM AC Chopper Control: The other two control methods often have poor harmonics, output current quality, and input power factor. In order to improve these values, PWM can be used instead of the other methods. In PWM AC chopper control, switches turn on and off several times within alternate half-cycles of the input voltage. Matrix converters and cycloconverters: Cycloconverters are widely used in industry for AC to AC conversion, because they can be used in high-power applications. They are commutated direct frequency converters that are synchronised by a supply line. The cycloconverter's output voltage waveforms have complex harmonics, with the higher-order harmonics being filtered by the machine inductance.
This filtering leaves the machine current with fewer harmonics, while the remaining harmonics cause losses and torque pulsations. Note that in a cycloconverter, unlike other converters, there are no inductors or capacitors, i.e. no storage devices. For this reason, the instantaneous input power and the output power are equal. Single-Phase to Single-Phase Cycloconverters: Single-phase to single-phase cycloconverters have started drawing more interest recently because of the decrease in both the size and price of power electronics switches. The single-phase high-frequency AC voltage can be either sinusoidal or trapezoidal. There might be zero-voltage intervals for control purposes or for zero-voltage commutation. Three-Phase to Single-Phase Cycloconverters: There are two kinds of three-phase to single-phase cycloconverters: 3φ to 1φ half-wave cycloconverters and 3φ to 1φ bridge cycloconverters. Both positive and negative converters can generate voltage at either polarity, resulting in the positive converter only supplying positive current, and the negative converter only supplying negative current. With recent device advances, newer forms of cycloconverters are being developed, such as matrix converters. The first noticeable difference is that matrix converters utilize bi-directional, bipolar switches. A three-phase to three-phase matrix converter consists of a matrix of nine switches connecting the three input phases to the three output phases. Any input phase and output phase can be connected together at any time, provided that no two switches on the same output phase are closed at the same time; otherwise this would cause a short circuit of the input phases. Matrix converters are lighter, more compact and versatile than other converter solutions. As a result, they are able to achieve higher levels of integration, higher-temperature operation, broad output frequency and natural bi-directional power flow suitable for regenerating energy back to the utility. Matrix converters are subdivided into two types: direct and indirect converters. In a direct matrix converter with three-phase input and three-phase output, the switches must be bi-directional; that is, they must be able to block voltages of either polarity and to conduct current in either direction. This switching strategy permits the highest possible output voltage and reduces the reactive line-side current. Therefore, the power flow through the converter is reversible. However, its commutation problems and complex control keep it from being broadly utilized in industry. The indirect matrix converter provides the same functionality as the direct type, but uses separate input and output sections that are connected through a DC link without storage elements. The design includes a four-quadrant current source rectifier and a voltage source inverter. The input section consists of bi-directional bipolar switches. The commutation strategy can be applied by changing the switching state of the input section while the output section is in a freewheeling mode. This commutation algorithm is significantly less complex and has higher reliability as compared to a conventional direct matrix converter. DC link converters: DC link converters, also referred to as AC/DC/AC converters, convert an AC input to an AC output with the use of a DC link in the middle. That is, the input AC power is first converted to DC by a rectifier, and then converted back to AC by an inverter.
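The two-stage AC/DC/AC idea can be illustrated with a toy numerical model. This sketch assumes ideal, lossless stages and models the DC link as a simple average; real rectifier and inverter stages are switched circuits, and all numbers here are arbitrary.

```python
import numpy as np

# Toy AC -> DC -> AC chain: 50 Hz input, rectify, filter to a DC level,
# then synthesize a square-wave output at a new frequency (60 Hz).
fs = 100_000                      # sample rate, Hz (assumed)
t = np.arange(0, 0.1, 1.0 / fs)  # 100 ms of signal

f_in, v_peak = 50.0, 325.0        # e.g. 230 V RMS mains (assumed)
v_in = v_peak * np.sin(2 * np.pi * f_in * t)

# Stage 1: ideal full-wave rectifier.
v_rect = np.abs(v_in)

# Stage 2: DC link modelled as an averaging filter (mean of |sin| is 2/pi of peak).
v_dc = v_rect.mean()

# Stage 3: inverter synthesizes AC at a new frequency from the DC level.
f_out = 60.0
v_out = v_dc * np.sign(np.sin(2 * np.pi * f_out * t))

print(f"DC link level ≈ {v_dc:.1f} V (theory: {2 * v_peak / np.pi:.1f} V)")
print(f"output: ±{v_dc:.1f} V square wave at {f_out} Hz")
```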
The end result of this two-stage conversion is an output with a lower voltage and variable (higher or lower) frequency. Due to their wide area of application, AC/DC/AC converters are the most common contemporary solution. Other advantages of AC/DC/AC converters are that they are stable in overload and no-load conditions, and that they can be disengaged from a load without damage. Hybrid matrix converter: Hybrid matrix converters are relatively new for AC/AC converters. These converters combine the AC/DC/AC design with the matrix converter design. Multiple types of hybrid converters have been developed in this new category, an example being a converter that uses uni-directional switches and two converter stages without the dc-link; without the capacitors or inductors needed for a dc-link, the weight and size of the converter are reduced. Two sub-categories of hybrid converters exist, named the hybrid direct matrix converter (HDMC) and the hybrid indirect matrix converter (HIMC). The HDMC converts the voltage and current in one stage, while the HIMC utilizes separate stages, like the AC/DC/AC converter, but without the use of an intermediate storage element. Applications: Below is a list of common applications in which each converter is used. AC voltage controller: Lighting control; domestic and industrial heating; speed control of fan, pump or hoist drives; soft starting of induction motors; static AC switches (temperature control, transformer tap changing, etc.) Cycloconverter: High-power low-speed reversible AC motor drives; constant-frequency power supply with variable input frequency; controllable VAR generators for power factor correction; AC system interties linking two independent power systems. Matrix converter: Currently the application of matrix converters is limited due to the non-availability of bilateral monolithic switches capable of operating at high frequency, complex control law implementation, commutation, and other reasons. With these developments, matrix converters could replace cycloconverters in many areas. DC link: Can be used for individual or multiple load applications of machine building and construction. Simulations of power electronic systems Power electronic circuits are simulated using computer simulation programs such as SIMBA, PLECS, PSIM, SPICE, MATLAB/Simulink, and OpenModelica. Circuits are simulated before they are produced to test how the circuits respond under certain conditions. Also, creating a simulation is both cheaper and faster than creating a prototype to use for testing. Applications Applications of power electronics range in size from a switched-mode power supply in an AC adapter, battery chargers, audio amplifiers, and fluorescent lamp ballasts, through variable-frequency drives and DC motor drives used to operate pumps, fans, and manufacturing machinery, up to gigawatt-scale high-voltage direct current power transmission systems used to interconnect electrical grids. Power electronic systems are found in virtually every electronic device. For example: DC/DC converters are used in most mobile devices (mobile phones, PDAs etc.) to maintain the voltage at a fixed value whatever the voltage level of the battery is. These converters are also used for electronic isolation and power factor correction. A power optimizer is a type of DC/DC converter developed to maximize the energy harvest from solar photovoltaic or wind turbine systems. AC/DC converters (rectifiers) are used every time an electronic device is connected to the mains (computer, television etc.).
These may simply change AC to DC or can also change the voltage level as part of their operation. AC/AC converters are used to change either the voltage level or the frequency (international power adapters, light dimmers). In power distribution networks, AC/AC converters may be used to exchange power between utility frequency 50 Hz and 60 Hz power grids. DC/AC converters (inverters) are used primarily in UPS, renewable energy, or emergency lighting systems. Mains power charges the DC battery. If the mains fails, an inverter produces AC electricity at mains voltage from the DC battery. Solar inverters, both smaller string and larger central inverters, as well as solar micro-inverters, are used in photovoltaics as components of a PV system. Motor drives are found in pumps, blowers, and mill drives for textile, paper, cement and other such facilities. Drives may be used for power conversion and for motion control. For AC motors, applications include variable-frequency drives, motor soft starters and excitation systems. In hybrid electric vehicles (HEVs), power electronics are used in two formats: series hybrid and parallel hybrid. The difference between a series hybrid and a parallel hybrid is the relationship of the electric motor to the internal combustion engine (ICE). Devices used in electric vehicles consist mostly of DC/DC converters for battery charging and DC/AC converters to power the propulsion motor. Electric trains use power electronic devices to obtain power, as well as for vector control using pulse-width modulation (PWM) rectifiers. The trains obtain their power from power lines. Another new usage for power electronics is in elevator systems. These systems may use thyristors, inverters, permanent magnet motors, or various hybrid systems that incorporate PWM systems and standard motors. Inverters In general, inverters are utilized in applications requiring direct conversion of electrical energy from DC to AC or indirect conversion from AC to AC. DC to AC conversion is useful for many fields, including power conditioning, harmonic compensation, motor drives, renewable energy grid integration, and spacecraft solar power systems. In power systems it is often desired to eliminate harmonic content found in line currents. VSIs can be used as active power filters to provide this compensation. Based on measured line currents and voltages, a control system determines reference current signals for each phase. These are fed back through an outer loop and subtracted from the actual current signals to create current signals for an inner loop to the inverter. These signals then cause the inverter to generate output currents that compensate for the harmonic content. This configuration requires no real power consumption, as it is fully fed by the line; the DC link is simply a capacitor that is kept at a constant voltage by the control system. In this configuration, output currents are in phase with line voltages to produce a unity power factor. Conversely, VAR compensation is possible in a similar configuration where output currents lead line voltages to improve the overall power factor. In facilities that require energy at all times, such as hospitals and airports, UPS systems are utilized. In a standby system, an inverter is brought online when the normally supplying grid is interrupted. Power is instantaneously drawn from onsite batteries and converted into usable AC voltage by the VSI, until grid power is restored or until backup generators are brought online.
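A minimal sketch of the reference-current idea behind the active power filter described above: extract the fundamental from a distorted measured current, and treat everything else as the harmonic content to be compensated. The distortion levels and line frequency below are arbitrary assumptions, and a real controller would run this continuously with phase tracking.

```python
import numpy as np

# Sketch: derive a compensation reference from a distorted line current.
f1 = 50.0                               # fundamental frequency, Hz (assumed)
fs = 20_000
t = np.arange(0, 0.04, 1.0 / fs)        # two fundamental periods

# Measured load current: fundamental plus 5th and 7th harmonics (assumed levels).
i_load = (10.0 * np.sin(2 * np.pi * f1 * t)
          + 2.0 * np.sin(2 * np.pi * 5 * f1 * t)
          + 1.0 * np.sin(2 * np.pi * 7 * f1 * t))

# Project onto sine/cosine at f1 over an integer number of periods
# to recover the fundamental component.
s, c = np.sin(2 * np.pi * f1 * t), np.cos(2 * np.pi * f1 * t)
a1 = 2.0 * np.mean(i_load * s)
b1 = 2.0 * np.mean(i_load * c)
i_fund = a1 * s + b1 * c

# The filter inverter is commanded to inject the negative of the residue,
# so that source current = load current + injected current ≈ fundamental.
i_ref = -(i_load - i_fund)
print(f"fundamental amplitude ≈ {np.hypot(a1, b1):.2f} A (expected 10.00 A)")
print(f"peak of compensation reference ≈ {np.abs(i_ref).max():.2f} A")
```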
In an online UPS system, by contrast, a rectifier-DC-link-inverter is used to protect the load from transients and harmonic content. A battery in parallel with the DC link is kept fully charged by the output in case the grid power is interrupted, while the output of the inverter is fed through a low-pass filter to the load. High power quality and independence from disturbances are achieved. Various AC motor drives have been developed for speed, torque, and position control of AC motors. These drives can be categorized as low-performance or as high-performance, based on whether they are scalar-controlled or vector-controlled, respectively. In scalar-controlled drives, the fundamental stator current, or voltage frequency and amplitude, are the only controllable quantities. Therefore, these drives are employed in applications where high-quality control is not required, such as fans and compressors. On the other hand, vector-controlled drives allow instantaneous current and voltage values to be controlled continuously. This high performance is necessary for applications such as elevators and electric cars. Inverters are also vital to many renewable energy applications. For photovoltaic applications, the inverter, which is usually a PWM VSI, is fed by the DC electrical energy output of a photovoltaic module or array. The inverter then converts this into an AC voltage to be interfaced with either a load or the utility grid. Inverters may also be employed in other renewable systems, such as wind turbines. In these applications, the turbine speed usually varies, causing changes in voltage frequency and sometimes in magnitude. In this case, the generated voltage can be rectified and then inverted to stabilize frequency and magnitude. Smart grid A smart grid is a modernized electrical grid that uses information and communications technology to gather and act on information, such as information about the behaviors of suppliers and consumers, in an automated fashion to improve the efficiency, reliability, economics, and sustainability of the production and distribution of electricity. Electric power generated by wind turbines and by hydroelectric turbines using induction generators can vary in the frequency at which power is generated. Power electronic devices are utilized in these systems to convert the generated AC voltages into high-voltage direct current (HVDC). The HVDC power can be more easily converted into three-phase power that is coherent with the power associated with the existing power grid. Through these devices, the power delivered by these systems is cleaner and has a higher associated power factor. In wind power systems, the optimum torque is obtained either through a gearbox or through direct-drive technologies that can reduce the size of the power electronics device. Electric power can be generated through photovoltaic cells by using power electronic devices. The produced power is usually then transformed by solar inverters. Inverters are divided into three different types: central, module-integrated, and string. Central converters can be connected either in parallel or in series on the DC side of the system. For photovoltaic "farms", a single central converter is used for the entire system. Module-integrated converters are connected in series on either the DC or AC side. Normally several modules are used within a photovoltaic system, since the system requires these converters on both DC and AC terminals.
A string converter is used in a system that utilizes photovoltaic cells facing different directions. It is used to convert the power generated in each string, or line, in which the photovoltaic cells are interacting. Power electronics can be used to help utilities adapt to the rapid increase in distributed residential and commercial solar power generation. Germany and parts of Hawaii, California, and New Jersey require costly studies to be conducted before approving new solar installations. Relatively small-scale ground- or pole-mounted devices create the potential for a distributed control infrastructure to monitor and manage the flow of power. Traditional electromechanical systems, such as capacitor banks or voltage regulators at substations, can take minutes to adjust voltage and can be distant from the solar installations where the problems originate. If voltage on a neighborhood circuit goes too high, it can endanger utility crews and cause damage to both utility and customer equipment. Further, a grid fault causes photovoltaic generators to shut down immediately, spiking the demand for grid power. Smart grid-based regulators are more controllable than the far more numerous consumer devices. In another approach, a group of 16 western utilities called the Western Electric Industry Leaders called for the mandatory use of "smart inverters". These devices convert DC to household AC and can also help with power quality. Such devices could eliminate the need for expensive utility equipment upgrades at a much lower total cost. See also Multi-port power electronic interface FET amplifier Power management integrated circuit RF power amplifier Notes References External links Electronics industry
Power electronics
[ "Technology", "Engineering" ]
9,088
[ "Information and communications technology", "Electronic engineering", "Power electronics", "Electronics industry" ]
1,001,846
https://en.wikipedia.org/wiki/Trehalose
Trehalose (from Turkish tıgala – a sugar derived from insect cocoons + -ose) is a sugar consisting of two molecules of glucose. It is also known as mycose or tremalose. Some bacteria, fungi, plants and invertebrate animals synthesize it as a source of energy, and to survive freezing and lack of water. Extracting trehalose was once a difficult and costly process, but around 2000, the Hayashibara company (Okayama, Japan) discovered an inexpensive extraction technology from starch. Trehalose has high water retention capabilities, and is used in food, cosmetics and as a drug. A procedure developed in 2017 using trehalose allows sperm storage at room temperatures. Structure Trehalose is a disaccharide formed by a bond between two α-glucose units. It is found in nature as a disaccharide and also as a monomer in some polymers. Two other stereoisomers exist: α,β-trehalose, also called neotrehalose, and β,β-trehalose, also called isotrehalose. Neither of these alternate isomers has been isolated from living organisms, but isotrehalose has been found in starch hydrolysates. Synthesis At least three biological pathways support trehalose biosynthesis. An industrial process can derive trehalose from corn starch. Properties Chemical Trehalose is a nonreducing sugar formed from two glucose units joined by a 1–1 alpha bond, giving it the name α-D-glucopyranosyl-(1→1)-α-D-glucopyranoside. The bonding makes trehalose very resistant to acid hydrolysis, and it is therefore stable in solution at high temperatures, even under acidic conditions. The bonding keeps nonreducing sugars in closed-ring form, such that the aldehyde or ketone end groups do not bind to the lysine or arginine residues of proteins (a process called glycation). Trehalose is less soluble than sucrose, except at high temperatures (>80 °C). Trehalose forms a rhomboid crystal as the dihydrate, and has 90% of the calorific content of sucrose in that form. Anhydrous forms of trehalose readily regain moisture to form the dihydrate. Anhydrous forms of trehalose can show interesting physical properties when heat-treated. Trehalose aqueous solutions show a concentration-dependent clustering tendency. Owing to their ability to form hydrogen bonds, they self-associate in water to form clusters of various sizes. All-atom molecular dynamics simulations showed that concentrations of 1.5–2.2 molar allow trehalose molecular clusters to percolate and form large and continuous aggregates. Trehalose directly interacts with nucleic acids, facilitates melting of double-stranded DNA and stabilizes single-stranded nucleic acids. Biological Organisms including bacteria, yeast, fungi, insects, invertebrates, and lower and higher plants have enzymes that can make trehalose. In nature, trehalose can be found in plants and microorganisms. In animals, trehalose is prevalent in shrimp, and also in insects, including grasshoppers, locusts, butterflies, and bees, in which trehalose serves as the blood sugar. Trehalase genes are found in tardigrades, the microscopic ecdysozoans found worldwide in diverse extreme environments. Trehalose is the major carbohydrate energy storage molecule used by insects for flight. One possible reason for this is that the glycosidic linkage of trehalose, when acted upon by an insect trehalase, releases two molecules of glucose, which is required for the rapid energy requirements of flight. This is double the efficiency of glucose release from the storage polymer starch, for which cleavage of one glycosidic linkage releases only one glucose molecule.
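The flight-fuel arithmetic in the preceding paragraph can be made explicit; the snippet below is purely illustrative of the 2-to-1 comparison just described.

```python
# Glucose molecules liberated per glycosidic-bond cleavage:
# hydrolysing trehalose's single bond frees two glucose units, whereas
# cleaving one terminal linkage of a starch chain frees only one.
def glucose_per_bond(units_released: int, bonds_cleaved: int = 1) -> float:
    return units_released / bonds_cleaved

trehalose_yield = glucose_per_bond(2)   # 2.0 glucose per bond cleaved
starch_yield = glucose_per_bond(1)      # 1.0 glucose per bond cleaved
print(f"trehalose frees {trehalose_yield / starch_yield:.0f}x more glucose per cleavage")
```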
The concentrations of both trehalose and glucose in the insect hemolymph are tightly controlled by multiple enzymes and hormones, including trehalase, insulin-like peptides (ILPs and DILPs), adipokinetic hormone (AKH), leucokinin (LK), octopamine and other mediators, thereby maintaining carbohydrate homeostasis by endocrine and metabolic feedback mechanisms. In plants, trehalose is seen in sunflower seeds, moonwort, Selaginella plants, and sea algae. Within the fungi, it is prevalent in some mushrooms, such as shiitake (Lentinula edodes), oyster, king oyster, and golden needle. Even within the plant kingdom, Selaginella (sometimes called the resurrection plant), which grows in desert and mountainous areas, may be cracked and dried out, but will turn green again and revive after rain because of the function of trehalose. The two prevalent theories as to how trehalose works within the organism in the state of cryptobiosis are the vitrification theory, a state that prevents ice formation, and the water displacement theory, whereby water is replaced by trehalose. In the bacterial cell wall, trehalose has a structural role in adaptive responses to stress such as osmotic differences and extreme temperature. Yeast uses trehalose as a carbon source in response to abiotic stresses. In humans, the only known function of trehalose is as a neuroprotective, which it accomplishes by inducing autophagy and thereby clearing protein aggregates. Trehalose has also been reported to have anti-bacterial, anti-biofilm, and anti-inflammatory (in vitro and in vivo) activities upon its esterification with fatty acids of varying chain lengths. Nutritional and dietary properties Trehalose is rapidly broken down into glucose by the enzyme trehalase, which is present in the brush border of the intestinal mucosa of omnivores (including humans) and herbivores. It causes less of a spike in blood sugar than glucose. Trehalose has about 45% the sweetness of sucrose at concentrations above 22%, but when the concentration is reduced, its sweetness decreases more quickly than that of sucrose, so that a 2.3% solution tastes 6.5 times less sweet than the equivalent sucrose solution. It is commonly used in prepared frozen foods, like ice cream, because it lowers the freezing point of foods. Deficiency of the trehalase enzyme is unusual in humans, except in the Greenlandic Inuit, among whom it is present in 10–15% of the population. Metabolism Five biosynthesis pathways have been reported for trehalose. The most common is the TPS/TPP pathway, which is used by organisms that synthesize trehalose using the enzyme trehalose-6-phosphate (T6P) synthase (TPS). Second, trehalose synthase (TS) in certain types of bacteria can produce trehalose by using maltose and another disaccharide with two glucose units as substrates. Third, the TreY-TreZ pathway in some bacteria converts starch containing maltooligosaccharide or glycogen directly into trehalose. Fourth, in primitive bacteria, trehalose glycosyltransferring synthase (TreT) produces trehalose from ADP-glucose and glucose. Fifth, trehalose phosphorylase (TreP) either hydrolyses trehalose into glucose-1-phosphate and glucose or may act reversibly in certain species. Vertebrates do not have the ability to synthesize or store trehalose. Trehalase in humans is found only in specific locations such as the intestinal mucosa, renal brush-border, liver and blood. Expression of this enzyme in vertebrates is initially found during the gestation period and is highest after weaning.
Thereafter, the level of trehalase remains constant in the intestine throughout life. Meanwhile, diets consisting of plants and fungi contain trehalose. A moderate amount of trehalose in the diet is essential, and a low amount of trehalose could result in diarrhea or other intestinal symptoms. Medical use Trehalose is an ingredient, along with hyaluronic acid, in an artificial tears product used to treat dry eye. Outbreaks of Clostridioides difficile were initially associated with trehalose, but this finding was disputed in 2019. In 2021, the FDA accepted an Investigational New Drug (IND) application and granted fast track status for an injectable form of trehalose (SLS-005) as a potential treatment for spinocerebellar ataxia type 3 (SCA3). History In 1832, H.A.L. Wiggers discovered trehalose in an ergot of rye, and in 1859 Marcellin Berthelot isolated it from Trehala manna, a substance made by weevils, and named it trehalose. Trehalose has long been known as an autophagy inducer that acts independently of mTOR. In 2017, research was published showing that trehalose induces autophagy by activating TFEB, a protein that acts as a master regulator of the autophagy-lysosome pathway. See also Biostasis Cryoprotectant Cryptobiosis Freeze drying Lentztrehalose Trehalosamine References External links Trehalose in sperm preservation Carbohydrates Disaccharides Types of sugar Orphan drugs
Trehalose
[ "Chemistry" ]
1,979
[ "Organic compounds", "Biomolecules by chemical classification", "Carbohydrates", "Carbohydrate chemistry" ]
1,001,908
https://en.wikipedia.org/wiki/Dichromatism
Dichromatism (or polychromatism) is a phenomenon where a material or solution's hue is dependent on both the concentration of the absorbing substance and the depth or thickness of the medium traversed. In most substances which are not dichromatic, only the brightness and saturation of the colour depend on their concentration and layer thickness. Examples of dichromatic substances are pumpkin seed oil, bromophenol blue, and resazurin. When the layer of pumpkin seed oil is less than 0.7 mm thick, the oil appears bright green, and in a layer thicker than this, it appears bright red. The phenomenon is related to both the physical chemistry properties of the substance and the physiological response of the human visual system to colour. This combined physicochemical–physiological basis was first explained in 2007. In gemstones, dichromatism is sometimes referred to as the 'Usambara effect'. Physical explanation Dichromatic properties can be explained by the Beer–Lambert law and by the excitation characteristics of the three types of cone photoreceptors in the human retina. Dichromatism is potentially observable in any substance that has an absorption spectrum with one wide but shallow local minimum and one narrow but deep local minimum. The apparent width of the deep minimum may also be limited by the end of the visible range of the human eye; in this case, the true full width may not necessarily be narrow. As the thickness of the substance increases, the perceived hue changes from that defined by the position of the wide-but-shallow minimum (in thin layers) to the hue of the deep-but-narrow minimum (in thick layers). The absorbance spectrum of pumpkin seed oil has the wide-but-shallow minimum in the green region of the spectrum and the deep local minimum in the red region. In thin layers, the absorption at any specific green wavelength is not as low as it is for the red minimum, but a broader band of greenish wavelengths is transmitted, and hence the overall appearance is green. The effect is enhanced by the greater sensitivity to green of the photoreceptors in the human eye, and the narrowing of the red transmittance band by the long-wavelength limit of cone photoreceptor sensitivity. According to the Beer–Lambert law, when viewing through the coloured substance (and thus ignoring reflection), the proportion of light transmitted at a given wavelength, T, decreases exponentially with thickness t: T = e^(−at), where a is the absorbance at that wavelength. Let G = e^(−a_G·t) be the green transmittance and R = e^(−a_R·t) be the red transmittance. The ratio of the two transmitted intensities is then G/R = e^((a_R − a_G)·t). If the red absorbance is less than the green, then as the thickness t increases, so does the ratio of red to green transmitted light, which causes the apparent hue of the colour to switch from green to red. Quantification The extent of dichromatism of a material can be quantified by Kreft's dichromaticity index (DI). It is defined as the difference in hue angle (Δhab) between the colour of the sample at the dilution where the chroma (colour saturation) is maximal and the colour of a four times more diluted (or thinner) and a four times more concentrated (or thicker) sample. The two hue angle differences are called the dichromaticity index towards lighter (Kreft's DIL) and the dichromaticity index towards darker (Kreft's DID), respectively. Kreft's dichromaticity indices DIL and DID for pumpkin oil, which is one of the most dichromatic substances, are −9 and −44, respectively.
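The green-to-red crossover follows directly from the transmittance ratio above once band widths are taken into account. The following is a minimal numerical sketch; the absorbances and band widths are invented round numbers, not measured values for pumpkin seed oil.

```python
import numpy as np

# Two-band toy model of dichromatism: a wide, shallow green transmittance
# window versus a narrow, deep red one (all numbers are assumptions).
a_G, w_G = 1.2, 100.0   # green: higher absorbance per mm, wide band (nm)
a_R, w_R = 0.3, 20.0    # red: lower absorbance per mm, narrow band (nm)

for t in [0.2, 0.7, 2.0, 4.0]:                 # layer thickness in mm
    flux_G = w_G * np.exp(-a_G * t)            # transmitted green flux
    flux_R = w_R * np.exp(-a_R * t)            # transmitted red flux
    hue = "green" if flux_G > flux_R else "red"
    print(f"t = {t:4.1f} mm: G/R = {flux_G / flux_R:5.2f} -> looks {hue}")

# Crossover thickness where the two fluxes are equal:
t_star = np.log(w_G / w_R) / (a_G - a_R)
print(f"crossover at t ≈ {t_star:.2f} mm")
```

With these assumed numbers the thin layer transmits several times more green than red flux, and the hue flips to red beyond roughly 1.8 mm, mirroring the qualitative behaviour described for pumpkin seed oil.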
These index values mean that pumpkin oil changes its colour from green-yellow to orange-red (by 44 degrees in Lab colour space) when the thickness of the observed layer is increased from about 0.5 mm to 2 mm, and that it changes slightly towards green (by 9 degrees) if its thickness is reduced 4-fold. History A record by William Herschel (1738–1822) shows he observed dichromatism with a solution of ferrous sulphate and tincture of nutgall in 1801 when working on an early solar telescope, but he did not recognise the effect. External links Kreft Samo & Kreft Marko (2007). Physicochemical and physiological basis of dichromatic colour. Naturwissenschaften, doi: 10.1007/s00114-007-0272-9 References Color appearance phenomena Color Optics
Dichromatism
[ "Physics", "Chemistry" ]
970
[ "Physical phenomena", "Applied and interdisciplinary physics", "Optics", "Color appearance phenomena", "Optical phenomena", " molecular", "Atomic", " and optical physics" ]
1,001,916
https://en.wikipedia.org/wiki/N-Methylethanolamine
N-Methylethanolamine is an alkanolamine with the formula CH3NHCH2CH2OH. It is a flammable, corrosive, colorless, viscous liquid. It is an intermediate in the biosynthesis of choline. With both amine and hydroxyl functional groups, it is a useful intermediate in the chemical synthesis of various products including polymers and pharmaceuticals. It is also used as a solvent, for example in the processing of natural gas, where it is used together with its analogs ethanolamine and dimethylethanolamine. Production N-Methylethanolamine is produced industrially by reacting ethylene oxide with excess methylamine in aqueous solution. This reaction yields a mixture of the 1:1 addition product NMEA (1) and, by further addition of another ethylene oxide, the 1:2 addition product methyl diethanolamine (MDEA) (2). In order to obtain high yields of the desired target product, the reactants are continuously fed to a flow reactor and reacted with a more than two-fold excess of methylamine. In the downstream process steps, the excess methylamine and the water are removed, and NMEA (bp 160 °C) and MDEA (bp 243 °C) are isolated from the product mixture by fractional distillation. The poly(methylethanolamine) formed by further addition of ethylene oxide to methylethanolamine remains in the distillation bottoms. Properties N-Methylethanolamine is a clear, colorless, hygroscopic, amine-like smelling liquid which is miscible with water and ethanol in any ratio. Aqueous solutions are strongly basic and are therefore corrosive. The substance is readily biodegradable and has no potential for bioaccumulation due to its water miscibility. NMEA is not mutagenic, but in the presence of nitrite, carcinogenic nitrosamines can be formed from the compound, as it is a secondary amine. Use Like other alkylalkanolamines, N-methylethanolamine is used in water- and solvent-based paints and coatings as a solubilizer for other components, such as pigments, and as a stabilizer. In cathodic dip-coating, N-methylaminoethanol serves as a cation neutralizer for the partial neutralization of the epoxy resin. It also serves as a chain extender in the reaction of high-molecular-weight polyepoxides with polyols. Being a base, N-methylaminoethanol forms neutral salts with fatty acids, which are used as surfactants (soaps) with good emulsifying properties and find applications in textile and personal care cleansing products. When bleaching cotton-polyester blends, NMEA is used as a brightener. By methylation of N-methylaminoethanol, dimethylaminoethanol and choline [(2-hydroxyethyl)-trimethylammonium chloride] can be prepared. In the reaction of N-methylaminoethanol with fatty acids, long-chain N-methyl-N-(2-hydroxyethyl)amides are formed upon elimination of water. These are used as neutral surfactants. Such amides also act as flow improvers and pour point depressants in heavy oils and middle distillates. By catalytic oxidation of N-methylaminoethanol, the non-proteinogenic amino acid sarcosine is obtained. N-Methylaminoethanol plays a role as a building block for the synthesis of crop protection compounds and pharmaceuticals, such as in the first stage of the reaction sequence to the antihistamine and antidepressant mianserin (Tolvin) and to the non-opioid analgesic nefopam (Ajan). In analogy to other aziridines, N-methylaziridine can be obtained by a Wenker synthesis from N-methylaminoethanol.
This is done either via the sulfuric acid ester or, after replacement of the hydroxy group by a chlorine atom (for example by thionyl chloride or chlorosulfuric acid) to give N-methyl-2-chloroethylamine, by using a strong base (cleavage of HCl) in an intramolecular nucleophilic substitution. It reacts with carbon disulfide to give N-methyl-2-thiazolidinethione. See also Ethanolamine Dimethylethanolamine References Primary alcohols Amines
N-Methylethanolamine
[ "Chemistry" ]
978
[ "Amines", "Bases (chemistry)", "Functional groups" ]
1,001,960
https://en.wikipedia.org/wiki/Cmp%20%28Unix%29
In computing, cmp is a command-line utility on Unix and Unix-like operating systems that compares two files of any type and writes the results to the standard output. By default, cmp is silent if the files are the same; if they differ, the byte and line number at which the first difference occurred are reported. The command is also available in the OS-9 shell. History cmp has been part of the X/Open Portability Guide since issue 2 of 1987. It was inherited into the first version of POSIX.1 and the Single Unix Specification. It first appeared in Version 1 Unix. The version of cmp bundled in GNU diffutils was written by Torbjorn Granlund and David MacKenzie. The command is available as a separate package for Microsoft Windows as part of the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities. The command has also been ported to the IBM i operating system. Switches cmp may be qualified by the use of command-line switches. Notable implementations of cmp support a number of switches. Operands that are byte counts are normally decimal, but may be preceded by '0' for octal and '0x' for hexadecimal. A byte count can be followed by a suffix to specify a multiple of that count; in this case an omitted integer is understood to be 1. A bare size letter, or one followed by 'iB', specifies a multiple using powers of 1024. A size letter followed by 'B' specifies powers of 1000 instead. For example, '-n 4M' and '-n 4MiB' are equivalent to '-n 4194304', whereas '-n 4MB' is equivalent to '-n 4000000'. This notation is upward compatible with the SI prefixes for decimal multiples and with the IEC 60027-2 prefixes for binary multiples. Example Return values 0 – files are identical 1 – files differ 2 – inaccessible or missing argument See also Comparison of file comparison tools List of Unix commands References External links Comparing and Merging Files: Invoking cmp The section of the manual of GNU cmp in the diffutils free manual. Free file comparison tools Standard Unix programs Unix SUS2008 utilities Plan 9 commands Inferno (operating system) commands
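The return-value convention above makes cmp convenient to call from scripts or other programs. A minimal sketch, assuming a POSIX-compatible cmp is on PATH; the file names and contents are placeholders created for the illustration:

```python
import pathlib
import subprocess

# Create two placeholder files to compare.
pathlib.Path("a.bin").write_bytes(b"hello")
pathlib.Path("b.bin").write_bytes(b"hellp")

def files_identical(path_a: str, path_b: str) -> bool:
    # -s suppresses output; only the exit status is used:
    # 0 = identical, 1 = different, >= 2 = inaccessible or missing argument.
    result = subprocess.run(["cmp", "-s", path_a, path_b])
    if result.returncode >= 2:
        raise OSError(f"cmp could not compare {path_a!r} and {path_b!r}")
    return result.returncode == 0

print(files_identical("a.bin", "b.bin"))  # False: the files differ at byte 5
```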
Cmp (Unix)
[ "Technology" ]
483
[ "IBM i Qshell commands", "Standard Unix programs", "Computing commands", "Plan 9 commands", "Inferno (operating system) commands" ]
1,001,976
https://en.wikipedia.org/wiki/E-Science
E-Science or eScience is computationally intensive science that is carried out in highly distributed network environments, or science that uses immense data sets that require grid computing; the term sometimes includes technologies that enable distributed collaboration, such as the Access Grid. The term was created by John Taylor, the Director General of the United Kingdom's Office of Science and Technology, in 1999 and was used to describe a large funding initiative starting in November 2000. E-science has been more broadly interpreted since then, as "the application of computer technology to the undertaking of modern scientific investigation, including the preparation, experimentation, data collection, results dissemination, and long-term storage and accessibility of all materials generated through the scientific process. These may include data modeling and analysis, electronic/digitized laboratory notebooks, raw and fitted data sets, manuscript production and draft versions, pre-prints, and print and/or electronic publications." In 2014, the IEEE eScience Conference Series condensed the definition to "eScience promotes innovation in collaborative, computationally- or data-intensive research across all disciplines, throughout the research lifecycle" in one of the working definitions used by the organizers. E-science encompasses "what is often referred to as big data [which] has revolutionized science... [such as] the Large Hadron Collider (LHC) at CERN... [that] generates around 780 terabytes per year... highly data intensive modern fields of science...that generate large amounts of E-science data include: computational biology, bioinformatics, genomics" and the human digital footprint for the social sciences. Turing Award winner Jim Gray imagined "data-intensive science" or "e-science" as a "fourth paradigm" of science (empirical, theoretical, computational and now data-driven) and asserted that "everything about science is changing because of the impact of information technology" and the data deluge. E-Science revolutionizes both fundamental legs of the scientific method: empirical research, especially through digital big data; and scientific theory, especially through computer simulation model building. These ideas were reflected by The White House's Office of Science and Technology Policy in February 2013, which slated many of the aforementioned e-Science output products for preservation and access requirements under the memorandum's directive. E-sciences include particle physics, earth sciences and social simulations. Characteristics and examples Most of the research activities in e-Science have focused on the development of new computational tools and infrastructures to support scientific discovery. Due to the complexity of the software and the backend infrastructural requirements, e-Science projects usually involve large teams managed and developed by research laboratories, large universities or governments. Currently there is a large focus on e-Science in the United Kingdom, where the UK e-Science programme provides significant funding. In Europe the development of computing capabilities to support the CERN Large Hadron Collider has led to the development of e-Science and grid infrastructures which are also used by other disciplines. Consortiums Example e-Science infrastructures include the Worldwide LHC Computing Grid, a federation with various partners including the European Grid Infrastructure, the Open Science Grid and the Nordic DataGrid Facility.
To support e-Science applications, the Open Science Grid combines interfaces to more than 100 nationwide clusters, 50 interfaces to geographically distributed storage caches, and 8 campus grids (Purdue, Wisconsin-Madison, Clemson, Nebraska-Lincoln, FermiGrid at FNAL, SUNY-Buffalo, and Oklahoma in the United States; and UNESP in Brazil). Areas of science benefiting from the Open Science Grid include: astrophysics, gravitational physics, high-energy physics, neutrino physics, nuclear physics; molecular dynamics, materials science, materials engineering; computer science, computer engineering, nanotechnology; structural biology, computational biology, genomics, proteomics, medicine. UK programme After his appointment as Director General of the Research Councils in 1999, John Taylor, with the support of the Science Minister David Sainsbury and the Chancellor of the Exchequer Gordon Brown, bid to HM Treasury to fund a programme of e-infrastructure development for science which would provide the foundation for UK science and industry to be a world leader in the knowledge economy, which motivated the Lisbon Strategy for sustainable economic growth that the UK government committed to in March 2000. In November 2000 John Taylor announced £98 million for a national UK e-Science programme. An additional £20 million contribution was planned from UK industry in matching funds to projects that they participated in. From this budget of £120 million over three years, £75 million was to be spent on grid application pilots in all areas of science, administered by the Research Council responsible for each area, while £35 million was to be administered by the EPSRC as a Core Programme to develop "industrial strength" Grid middleware. Phase 2 of the programme for 2004-2006 was supported by a further £96 million for application projects, and £27 million for the EPSRC core programme. Phase 3 of the programme for 2007-2009 was supported by a further £14 million for the EPSRC core programme and a further sum for applications. Additional funding for UK e-Science activities was provided from European Union funding, from university funding council SRIF funding for hardware, and from Jisc for networking and other infrastructure. The UK e-Science programme comprised a wide range of resources, centres and people including the National e-Science Centre (NeSC), which is managed by the Universities of Glasgow and Edinburgh, with facilities in both cities. Tony Hey led the core programme from 2001 to 2005. Within the UK, regional e-Science centres support their local universities and projects, including: White Rose Grid e-Science Centre (WRGeSC), Belfast e-Science Centre (BeSC), Centre for eResearch Bristol (CeRB), Cambridge e-Science Centre (CeSC), STFC e-Science Centre (STFCeSC), e-Science North West (eSNW), National Grid Service (NGS), OMII-UK, Lancaster University Centre for e-Science, London e-Science Centre (LeSC), North East Regional e-Science Centre (NEReSC), Oxford e-Science Centre (OeSC), Southampton e-Science Centre (SeSC), Welsh e-Science Centre (WeSC), and Midlands e-Science Centre (MeSC). There are also various centres of excellence and research centres. In addition to centres, the grid application pilot projects were funded by the Research Council responsible for each area of UK science funding. The EPSRC funded 11 pilot e-Science projects in three phases (for about £3 million each in the first phase). The first phase (2001–2005) comprised CombEchem, DAME, Discovery Net, GEODISE, myGrid and RealityGrid.
The second phase (2004–2008) comprised GOLD and Integrative Biology, and the third phase (2005–2010) comprised PMSEG (MESSAGE), CARMEN and NanoCMOS. The PPARC/STFC funded two projects: GridPP (phase 1 for £17 million, phase 2 for £5.9 million, phase 3 for £30 million and a 4th phase running from 2011 to 2014) and AstroGrid (£14 million over 3 phases). The remaining £23 million of phase one funding was divided between the application projects funded by BBSRC, MRC and NERC. BBSRC: Biomolecular Grid, Proteome Annotation Pipeline, High-Throughput Structural Biology, Global Biodiversity. MRC: Biology of Ageing, Sequence and Structure Data, Molecular Genetics, Cancer Management, Clinical e-Science Framework, Neuroinformatics Modeling Tools. NERC: Climateprediction.com, Oceanographic Grid, Molecular Environmental Grid, NERC DataGrid. The funded UK e-Science programme was reviewed on its completion in 2009 by an international panel led by Daniel E. Atkins, director of the Office of Cyberinfrastructure of the US NSF. The report concluded that the programme had developed a skilled pool of expertise and some services, and had led to cooperation between academia and industry, but that these achievements were at a project level rather than generating infrastructure or transforming disciplines to adopt e-Science as a normal method of work, and that they were not self-sustainable without further investment. United States United States-based initiatives, where the term cyberinfrastructure is typically used to define e-Science projects, are primarily funded by the National Science Foundation office of cyberinfrastructure (NSF OCI) and the Department of Energy (in particular the Office of Science). After the conclusion of TeraGrid in 2011, the ACCESS program was established and funded by the National Science Foundation to help researchers and educators, with or without supporting grants, to utilize the nation's advanced computing systems and services. The Netherlands Dutch eScience research is coordinated by the Netherlands eScience Center in Amsterdam, an initiative founded by NWO and SURF. Europe Plan-Europe is a Platform of National e-Science/Data Research Centers in Europe, as established during the constituting meeting 29–30 October 2014 in Amsterdam, the Netherlands, and which is based on agreed Terms of Reference. PLAN-E has a kernel group of active members and convenes twice annually. More can be found on PLAN-E. Sweden Two academic research projects have been carried out in Sweden by two different groups of universities, to help researchers share and access scientific computing resources and knowledge: the Swedish e-Science Research Center (SeRC), comprising Kungliga Tekniska högskolan (KTH), Stockholm University (SU), Karolinska institutet (KI) and Linköping University (LiU); and eSSENCE, the e-Science Collaboration, comprising Uppsala University, Lund University and Umeå University. Comparison with traditional science Traditional science is representative of two distinct philosophical traditions within the history of science, but e-Science, it is being argued, requires a paradigm shift and the addition of a third branch of the sciences. "The idea of open data is not a new one; indeed, when studying the history and philosophy of science, Robert Boyle is credited with stressing the concepts of skepticism, transparency, and reproducibility for independent verification in scholarly publishing in the 1660s. The scientific method later was divided into two major branches, deductive and empirical approaches.
Today, a theoretical revision in the scientific method should include a new branch, Victoria Stodden advocate[s], that of the computational approach, where like the other two methods, all of the computational steps by which scientists draw conclusions are revealed. This is because within the last 20 years, people have been grappling with how to handle changes in high performance computing and simulation." As such, e-science aims at combining both empirical and theoretical traditions, while computer simulations can create artificial data and real-time big data can be used to calibrate theoretical simulation models. Conceptually, e-Science revolves around developing new methods to support scientists in conducting scientific research with the aim of making new scientific discoveries by analyzing vast amounts of data accessible over the internet using vast amounts of computational resources. However, discoveries of value cannot be made simply by providing computational tools, a cyberinfrastructure, or by performing a pre-defined set of steps to produce a result. Rather, there needs to be an original, creative aspect to the activity that by its nature cannot be automated. This has led to various research that attempts to define the properties that e-Science platforms should provide in order to support a new paradigm of doing science, and new rules to fulfill the requirements of preserving and making computational data results available in a manner such that they are reproducible in traceable, logical steps, as an intrinsic requirement for the maintenance of modern scientific integrity that allows an extension of "Boyle's tradition in the computational age". Modelling e-Science processes One view argues that since a modern discovery process instance serves a similar purpose to a mathematical proof, it should have similar properties: it should allow results to be deterministically reproduced when re-executed, and intermediate results should be viewable to aid examination and comprehension. In this case, simply modelling the provenance of data is not sufficient. One also has to model the provenance of the hypotheses and results generated from analyzing the data, so as to provide evidence that supports new discoveries. Scientific workflows have thus been proposed and developed to assist scientists in tracking the evolution of their data, intermediate results and final results, as a means to document and track the evolution of discoveries within a piece of scientific research. Science 2.0 Other views include Science 2.0, where e-Science is considered to be a shift from the publication of final results by well-defined collaborative groups towards a more open approach, which includes the public sharing of raw data, preliminary experimental results, and related information. To facilitate this shift, the Science 2.0 view focuses on providing tools that simplify communication, cooperation and collaboration between interested parties. Such an approach has the potential to speed up the process of scientific discovery, overcome problems associated with academic publishing and peer review, and remove time and cost barriers limiting the process of generating new knowledge.
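As an illustration of the provenance modelling discussed above, a workflow step can be recorded together with the identities of its inputs, code, and parameters, so that a result can be deterministically re-derived and audited. The sketch below is one possible minimal structure, not the data model of any particular workflow system; all names and ids are invented placeholders.

```python
import hashlib
import json
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ProvenanceRecord:
    """One workflow step: what ran, on which inputs, with which parameters."""
    step_name: str
    code_version: str                 # e.g. a commit hash of the analysis code
    inputs: tuple                     # ids of upstream records or raw datasets
    parameters: dict = field(default_factory=dict)

    def record_id(self) -> str:
        # A content-derived id: identical step + inputs + parameters always
        # yields the same id, supporting deterministic replay checks.
        payload = json.dumps(
            [self.step_name, self.code_version, list(self.inputs), self.parameters],
            sort_keys=True,
        )
        return hashlib.sha256(payload.encode()).hexdigest()[:16]

raw = ProvenanceRecord("ingest", "abc123", ("dataset:survey-2024",))
clean = ProvenanceRecord("clean", "abc123", (raw.record_id(),), {"drop_nulls": True})
fit = ProvenanceRecord("fit-model", "abc123", (clean.record_id(),), {"degree": 2})
print("result provenance chain id:", fit.record_id())
```

Because each id is derived from the full upstream chain, re-running the workflow on the same inputs with the same code either reproduces the same chain of ids or makes the divergence point explicit.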
See also Citizen science Cyberinfrastructure Distributed computing E-research e-Science librarianship e-Social Science Grid computing List of e-Science infrastructures Science 2.0 Scientific workflow system References External links DOE and NSF Open Science Grid The eScience Institute at the University of Washington The Dutch Virtual Laboratory for e-science (VL-e) project UK Research Council's e-Science program e-science : personnalisation des résultats de recherches Google et sociologies du web UK National Centre for e-Social Science and their Wiki on e-Social Science NSF TeraGrid Project Arts and Humanities E-Science Support Centre (AHESSC) E-Science and Data Services Collaborative (EDSC) The European Commission's e-Infrastructures activity Swedish e-Science Research Centre eSSENCE the e-Science Collaboration Cyberinfrastructure
E-Science
[ "Technology" ]
2,878
[ "Information and communications technology", "IT infrastructure", "Cyberinfrastructure" ]
1,001,985
https://en.wikipedia.org/wiki/ODRL
The Open Digital Rights Language (ODRL) is a policy expression language that provides a flexible and interoperable information model, vocabulary, and encoding mechanisms for representing statements about the usage of content and services. ODRL became an endorsed W3C Recommendation in 2018. An example of an ODRL policy follows, which can be simply interpreted as "John Doe can Play the asset mysong.mp3". { "@context": "http://www.w3.org/ns/odrl.jsonld", "uid": "http://example.com/policy:001", "permission": [{ "target": "http://example.com/mysong.mp3", "assignee": "John Doe", "action": "play" }] } ODRL History ODRL was initially created in 2000 to address the burgeoning needs of the digital rights management (DRM) sector when media players were first introduced to the marketplace. Version 1.1 of the ODRL language was quickly adopted by the Open Mobile Alliance (OMA) as their core standard for mobile media content protections and for managing digital objects. To date, ODRL is arguably the largest mobile implementation of a rights language, currently operating on over a billion compatible devices. ODRL was managed by an independent Initiative, hosted by IPR Systems and led by Renato Iannella, before becoming a W3C Community Group in 2011. This move has provided long-term stability for the specifications and a transparent governance model. In 2013, two new media sectors adopted ODRL: the eBook publishing and news industries. The International Press and Telecommunication Council (IPTC) news consortium adopted ODRL for the communication of usage policies, primarily in association with the licensed distribution and use of news content in the online news marketplace. In the current virtual goods environment, content assets purchased or permissioned by a consumer are often locked into the same platform where the content was initially consumed, due to the lack of interoperability of rights expressions across platforms. ODRL Version 2.0 recognized that it is equally important to state Permissions and Prohibitions in an expression language representing both DRM and non-DRM digital objects; broad adoption of this advanced model can reduce friction across digital devices and enable transparent transactions between machines in accordance with the specified policy language. The ODRL policy model framework currently supports traditional rights expressions for commercial transactions, open access expressions, and privacy expressions for social media. ODRL Specifications and Profiles ODRL is specified in two World Wide Web Consortium (W3C) Recommendations published in February 2018: ODRL Information Model 2.2 and ODRL Vocabulary & Expression 2.2. Included within the ODRL documentation are a number of basic use cases demonstrating how to implement policy expressions using the Core Model with terms from the Common Vocabulary. ODRL is fully extensible and provides a mechanism for new communities to extend and/or deprecate the ODRL Common Vocabulary used in conjunction with the Core Model. An example of how the ODRL Profile and Vocabulary may be extended is found in the IPTC RightsML profile. The robust framework of ODRL allows a wide variety of business models to be expressed and addresses the requirements of multiple communities, such as social networks, publishers, image libraries, and education. Other profiles, such as the ODRL profile of Creative Commons, were developed. The ODRL Community Group is a World Wide Web Consortium (W3C) Community Group that still supports the promotion and future development of the W3C ODRL recommendations.
Other W3C Community Groups have adopted ODRL as the core Policy language and have developed a Profile to meet their community requirements, such as the Rights Automation for Market Data Community Group for pricing and trading data for financial instruments. ODRL Core Model In the ODRL Core Model, the Policy is the central entity that holds an ODRL policy together. In its encoded form, e.g. in a JSON or XML document, it makes the policy addressable from the outside world via its unique UID attribute. A policy can refer to multiple permissions, duties and prohibitions. A Permission allows a particular Action to be executed on a related Asset, e.g. "play the audio file abc.mp3". A Constraint like "at most 10 times" might be added to specify the Permission more precisely. The Party that grants this Permission is linked to it with the Role assigner; the Party that is granted the Permission is linked to it with the Role assignee, e.g. "assigner VirtualMusicShop grants the Permission to assignee Alice". Additionally, a Permission may be linked to Duty entities, meaning there are obligations the assignee must fulfil in order to exercise the permission. Similar to Permissions, a Duty states that a certain Action must be executed by the Party with the Role assignee for the Permission to be valid, e.g. "Alice must pay 5 EUR in order to get the Permission to play abc.mp3". The Prohibition entity is used in the same way as Permission, with the key difference that it forbids the Action, e.g. "Alice is forbidden to use abc.mp3 commercially". ODRL Vocabulary The ODRL Core Vocabulary defines the semantics for the concepts and terms from the ODRL Information Model. The ODRL Core Vocabulary represents the minimally supported terms for ODRL Policies. In addition, the ODRL Common Vocabulary defines semantics for generic terms that may be optionally used in ODRL Profiles by communities. ODRL Encodings ODRL can be implemented in three serializations: JSON, XML, and Turtle. Communities adopting ODRL can use standardized actions for Permissions, Prohibitions, and Duties that are expressed in policy statements. See also OMA DRM Creative Commons Rights Expression Language MPEG-21 Rights Expression Language References External links W3C Permission and Obligations Expression Working Group http://www.w3.org/community/odrl/ W3C ODRL Community group, the international effort to develop and promote ODRL IPTC RightsML Standard http://virtualgoods.org/ Conference that regularly hosts the ODRL workshop http://xml.coverpages.org/odrl.html Cover Pages article on ODRL https://copyrightandtechnology.com/2018/02/18/world-wide-web-consortium-embraces-odrl-rights-language/ World Wide Web Consortium Embraces ODRL Rights Language Digital rights management standards XML-based standards Metadata
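To make the Core Model entities described above concrete, the following is a minimal Python sketch that assembles the running VirtualMusicShop/Alice scenario as an ODRL JSON-LD policy. The top-level property names (permission, target, assigner, assignee, action, constraint, duty, leftOperand, operator, rightOperand) follow the ODRL 2.2 Information Model; the URIs are invented placeholders, and the compact encoding of the payment duty is an illustrative assumption rather than verbatim spec text.

import json

# Hedged sketch of the Core Model scenario described above:
# assigner VirtualMusicShop grants assignee Alice permission to
# play abc.mp3 at most 10 times, with a duty to pay 5 EUR.
# All URIs below are invented placeholders.
policy = {
    "@context": "http://www.w3.org/ns/odrl.jsonld",
    "@type": "Agreement",
    "uid": "http://example.com/policy:1012",
    "permission": [{
        "target": "http://example.com/abc.mp3",
        "assigner": "http://example.com/VirtualMusicShop",
        "assignee": "http://example.com/Alice",
        "action": "play",
        # Constraint: "at most 10 times"
        "constraint": [{
            "leftOperand": "count",
            "operator": "lteq",
            "rightOperand": 10
        }],
        # Duty: "Alice must pay 5 EUR" (the exact encoding of payment
        # duties varies; this compact form is assumed for illustration)
        "duty": [{
            "action": "compensate",
            "constraint": [{
                "leftOperand": "payAmount",
                "operator": "eq",
                "rightOperand": 5.00,
                "unit": "http://dbpedia.org/resource/Euro"
            }]
        }]
    }]
}

print(json.dumps(policy, indent=2))

The same dictionary, handed to an RDF library, could be re-serialized as the XML or Turtle encodings mentioned above; the JSON form printed here is the one the article's own first example uses.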
ODRL
[ "Technology" ]
1,384
[ "Computer standards", "Digital rights management standards", "Metadata", "Data", "XML-based standards" ]
1,002,039
https://en.wikipedia.org/wiki/Music%20box
A music box (American English) or musical box (British English) is an automatic musical instrument in a box that produces musical notes by using a set of pins placed on a revolving cylinder or disc to pluck the tuned teeth (or lamellae) of a steel comb. The popular device best known today as a "music box" developed from musical snuff boxes of the 18th century and was originally called carillons à musique (French for "chimes of music"). Some of the more complex boxes also contain a tiny drum and/or bells in addition to the metal comb. History The Symphonion company started business in 1885 as the first manufacturer of disc-playing music boxes. Two of the founders of the company, Gustave Brachhausen and Paul Riessner, left to set up a new firm, Polyphon, in direct competition with their original business and their third partner, Oscar Paul Lochmann. Following the establishment of the Original Musikwerke Paul Lochmann in 1900, the founding Symphonion business continued until 1909. According to Museums Victoria in Australia, "The Symphonion is notable for the enormous diversity of types, styles, and models produced... No other disc-playing musical box exists in so many varieties. The company also pioneered the use of electric motors... the first model fitted with an electric motor being advertised in 1900. The company moved into the piano-orchestrion business and made both disc-operated and barrel-playing models, player-pianos, and phonographs." Meanwhile, Polyphon expanded to America, where Brachhausen established the Regina Company. Regina was a spectacular success. It eventually reinvented itself as a maker of vacuums and steam cleaners. In the heyday of the music box, some variations were as tall as a grandfather clock and all used interchangeable large disks to play different sets of tunes. These were spring-wound and driven, and had a bell-like sound. The machines were often made in England, Italy, and the US, with additional disks made in Switzerland, Austria, and Prussia. Early "juke-box" pay versions of them existed in public places. Marsh's Free Museum and curio shop in Long Beach, Washington (US) has several still-working versions of them on public display. The Musical Museum, Brentford, London has a number of machines. The Morris Museum in Morristown, New Jersey, USA has a notable collection, including interactive exhibits. In addition to video and audio footage of each piece, the actual instruments are demonstrated for the public daily on a rotational basis. Timeline 9th century: In Baghdad, the Banū Mūsā brothers, a trio of Persian inventors, produced "the earliest known mechanical musical instrument", in this case a hydropowered organ which played interchangeable cylinders automatically, which they described in their Book of Ingenious Devices. According to Charles B. Fowler, this "cylinder with raised pins on the surface remained the basic device to produce and reproduce music mechanically until the second half of the nineteenth century." Early 13th century: In Flanders, an ingenious bell ringer invents a cylinder with pins which operates cams, which then hit the bells. 1598: Flemish clockmaker Nicholas Vallin produces a wall-mounted clock which has a pinned barrel playing on multiple tuned bells mounted in the superstructure. The barrel can be programmed, as the pins can be separately placed in the holes provided on the surface of the barrel. 1665: Ahasuerus Fromanteel in London makes a table clock which has quarter striking and musical work on multiple bells operated by a pinned barrel.
These barrels can be changed for those playing different tunes. 1772: A watch is made by one Ransonet at Nancy, France, which has a pinned drum playing music not on bells but on tuned steel prongs arranged vertically. 1796: Antoine Favre-Salomon, a clockmaker from Geneva, replaces the stack of bells by a comb with multiple pre-tuned metallic notes in order to reduce space. Together with a horizontally placed pinned barrel, this produces more varied and complex sounds. One of these first music boxes is now displayed at the Shanghai Gallery of Antique Music Boxes and Automata in Pudong's Oriental Art Center. 1877: Thomas Edison invents the phonograph, which has important consequences for the musical-box industry, especially around the end of the century. In 2010, American jazz guitarist Pat Metheny released the album Orchestrion, on which he performed alongside a variety of custom-designed and built acoustic and electromechanical orchestrions which comprised the rest of the "band", playing music in real time through the MIDI file format. In March 2016, the band Wintergatan released a video of their homemade Marble Machine, which took 14 months to make and played in any key using a 3,000-piece wooden construction fueled by 2,000 marbles. Band member Martin Molin used a hand crank to mobilize the marbles, which then created various noises on a vibraphone and other installed musical elements. Repertoire In 1974–1975, German composer Karlheinz Stockhausen composed Tierkreis, a set of twelve pieces on the signs of the zodiac, for twelve music boxes. See also Barrel organ Cuckoo clock Graphophone Musical clock Player piano Singing bird box Shanghai Gallery of Antique Music Boxes and Automata The Musical Museum, Brentford, London, England has several examples by makers including Nicole Frères, Regina and Popper which may be seen and heard. References Further reading Bahl, Gilbert. Music Boxes: The Collector's Guide to Selecting, Restoring and Enjoying New and Vintage Music Boxes. Philadelphia, Pennsylvania: Running Press, 1993. Bowers, Q. David. Encyclopedia of Automatic Musical Instruments. Lanham, Maryland: Vestal Press, Inc., 1972. Diagram Group. Musical Instruments of the World. New York: Facts on File, 1976. Ganske, Sharon. Making Marvelous Music Boxes. New York: Sterling Publishing Company, 1997. Greenhow, Jean. Making Musical Miniatures. London: B T Batsford, 1979. Hoke, Helen, and John Hoke. Music Boxes, Their Lore and Lure. New York: Hawthorn Books, 1957. Ord-Hume, Arthur W. J. G. The Musical Box: A Guide for Collectors. Atglen, Pennsylvania: Schiffer Publishing Ltd., 1995. Reblitz, Arthur A. The Golden Age of Automatic Musical Instruments. Woodsville, New Hampshire: Mechanical Music Press, 2001. Reblitz, Arthur A., Q. David Bowers. Treasures of Mechanical Music. New York: The Vestal Press, 1981. Sadie, Stanley, ed. "Musical Box". The New Grove Dictionary of Music and Musicians. MacMillan, 1980. Vol. 12, p. 814. Smithsonian Institution. History of Music Machines. New York: Drake Publishers, 1975. Templeton, Alec, as told to Rachael Bail Baumel. Alec Templeton's Music Boxes. New York: Wilfred Funk, 1958. External links Performance of Listen Thing and Pandora's Secret on a punched paper-tape controlled music box (video) Musical Box Society International – Glossary of Terms Music Box Maniacs – a website dedicated to paper strip punch card music boxes Videos of antique music boxes Audio of historical music boxes Polyphon Music Box, made approx.
1850 Mira Music Box – Sammy 1903 Mechanical Music Box – Auld Lang Syne Mechanical Music from Phonogrammarchiv of the Austrian Academy of Sciences LP vinyl record: "The Concert Regina Music Box and the Symphonium" (1977, Nostalgia Repertoire Records – Sonic Arts Corporation, 665 Harrison Street, San Francisco Ca. 94107, Curator: Leo de Gar Kulka, Record No. RR 4771 Stereo.) Comb lamellophones European musical instruments Box
Music box
[ "Physics", "Technology" ]
1,617
[ "Physical systems", "Mechanical musical instruments", "Machines" ]
1,002,128
https://en.wikipedia.org/wiki/Giant%20magnetoresistance
Giant magnetoresistance (GMR) is a quantum mechanical magnetoresistance effect observed in multilayers composed of alternating ferromagnetic and non-magnetic conductive layers. The 2007 Nobel Prize in Physics was awarded to Albert Fert and Peter Grünberg for the discovery of GMR, which also laid the foundation for the study of spintronics. The effect is observed as a significant change in the electrical resistance depending on whether the magnetizations of adjacent ferromagnetic layers are in a parallel or an antiparallel alignment. The overall resistance is relatively low for parallel alignment and relatively high for antiparallel alignment. The magnetization direction can be controlled, for example, by applying an external magnetic field. The effect is based on the dependence of electron scattering on spin orientation. The main application of GMR is in magnetic field sensors, which are used to read data in hard disk drives, biosensors, microelectromechanical systems (MEMS) and other devices. GMR multilayer structures are also used in magnetoresistive random-access memory (MRAM) as cells that store one bit of information. In the literature, the term giant magnetoresistance is sometimes confused with colossal magnetoresistance of ferromagnetic and antiferromagnetic semiconductors, which is not related to a multilayer structure. Formulation Magnetoresistance is the dependence of the electrical resistance of a sample on the strength of an external magnetic field. Numerically, it is characterized by the value δH = [R(H) − R(0)]/R(0), where R(H) is the resistance of the sample in a magnetic field H, and R(0) corresponds to H = 0. Alternative forms of this expression may use electrical resistivity instead of resistance, a different sign for δH, and are sometimes normalized by R(H) rather than R(0). The term "giant magnetoresistance" indicates that the value δH for multilayer structures significantly exceeds the anisotropic magnetoresistance, which has a typical value within a few percent. History GMR was discovered in 1988 independently by the groups of Albert Fert of the University of Paris-Sud, France, and Peter Grünberg of Forschungszentrum Jülich, Germany. The practical significance of this experimental discovery was recognized by the Nobel Prize in Physics awarded to Fert and Grünberg in 2007. Early steps The first mathematical model describing the effect of magnetization on the mobility of charge carriers in solids, related to the spin of those carriers, was reported in 1936. Experimental evidence of the potential enhancement of δH has been known since the 1960s. By the late 1980s, the anisotropic magnetoresistance had been well explored, but the corresponding value of δH did not exceed a few percent. The enhancement of δH became possible with the advent of sample preparation techniques such as molecular beam epitaxy, which allows manufacturing multilayer thin films with a thickness of several nanometers. Experiment and its interpretation Fert and Grünberg studied electrical resistance of structures incorporating ferromagnetic and non-ferromagnetic materials. In particular, Fert worked on multilayer films, and Grünberg in 1986 discovered the antiferromagnetic exchange interaction in Fe/Cr films. The GMR discovery work was carried out by the two groups on slightly different samples. The Fert group used (001)Fe/(001)Cr superlattices wherein the Fe and Cr layers were deposited in a high vacuum on a (001) GaAs substrate kept at 20 °C and the magnetoresistance measurements were taken at low temperature (typically 4.2 K).
The Grünberg work was performed on multilayers of Fe and Cr on (110) GaAs at room temperature. In Fe/Cr multilayers with 3-nm-thick iron layers, increasing the thickness of the non-magnetic Cr layers from 0.9 to 3 nm weakened the antiferromagnetic coupling between the Fe layers and reduced the demagnetization field, which also decreased when the sample was heated from 4.2 K to room temperature. Changing the thickness of the non-magnetic layers led to a significant reduction of the residual magnetization in the hysteresis loop. Electrical resistance changed by up to 50% with the external magnetic field at 4.2 K. Fert named the new effect giant magnetoresistance, to highlight its difference from the anisotropic magnetoresistance. The Grünberg experiment made the same discovery but the effect was less pronounced (3% compared to 50%) due to the samples being at room temperature rather than low temperature. The discoverers suggested that the effect is based on spin-dependent scattering of electrons in the superlattice, particularly on the dependence of resistance of the layers on the relative orientations of magnetization and electron spins. The theory of GMR for different directions of the current was developed in the next few years. In 1989, Camley and Barnaś calculated the "current in plane" (CIP) geometry, where the current flows along the layers, in the classical approximation, whereas Levy et al. used the quantum formalism. The theory of the GMR for the current perpendicular to the layers (current perpendicular to the plane or CPP geometry), known as the Valet-Fert theory, was reported in 1993. Applications favor the CPP geometry because it provides a greater magnetoresistance ratio (δH), thus resulting in a greater device sensitivity. Theory Fundamentals Spin-dependent scattering In magnetically ordered materials, the electrical resistance is crucially affected by scattering of electrons on the magnetic sublattice of the crystal, which is formed by crystallographically equivalent atoms with nonzero magnetic moments. Scattering depends on the relative orientations of the electron spins and those magnetic moments: it is weakest when they are parallel and strongest when they are antiparallel; it is relatively strong in the paramagnetic state, in which the magnetic moments of the atoms have random orientations. For good conductors such as gold or copper, the Fermi level lies within the sp band, and the d band is completely filled. In ferromagnets, the dependence of electron-atom scattering on the orientation of their magnetic moments is related to the filling of the band responsible for the magnetic properties of the metal, e.g., the 3d band for iron, nickel or cobalt. The d band of ferromagnets is split, as it contains a different number of electrons with spins directed up and down. Therefore, the density of electronic states at the Fermi level is also different for spins pointing in opposite directions. The Fermi level for majority-spin electrons is located within the sp band, and their transport is similar in ferromagnets and non-magnetic metals. For minority-spin electrons the sp and d bands are hybridized, and the Fermi level lies within the d band. The hybridized spd band has a high density of states, which results in stronger scattering and thus a shorter mean free path λ for minority-spin than majority-spin electrons. In cobalt-doped nickel, the ratio λ↑/λ↓ can reach 20.
According to the Drude theory, the conductivity is proportional to λ, which ranges from several to several tens of nanometers in thin metal films. Electrons "remember" the direction of spin within the so-called spin relaxation length (or spin diffusion length), which can significantly exceed the mean free path. Spin-dependent transport refers to the dependence of electrical conductivity on the spin direction of the charge carriers. In ferromagnets, it occurs due to electron transitions between the unsplit 4s and split 3d bands. In some materials, the interaction between electrons and atoms is the weakest when their magnetic moments are antiparallel rather than parallel. A combination of both types of materials can result in a so-called inverse GMR effect. CIP and CPP geometries Electric current can be passed through magnetic superlattices in two ways. In the current in plane (CIP) geometry, the current flows along the layers, and the electrodes are located on one side of the structure. In the current perpendicular to plane (CPP) configuration, the current is passed perpendicular to the layers, and the electrodes are located on different sides of the superlattice. The CPP geometry results in more than twice the GMR, but is more difficult to realize in practice than the CIP configuration. Carrier transport through a magnetic superlattice Magnetic ordering differs in superlattices with ferromagnetic and antiferromagnetic interaction between the layers. In the former case, the magnetization directions are the same in different ferromagnetic layers in the absence of applied magnetic field, whereas in the latter case, opposite directions alternate in the multilayer. Electrons traveling through the ferromagnetic superlattice interact with it much more weakly when their spin directions are opposite to the magnetization of the lattice than when they are parallel to it. Such anisotropy is not observed for the antiferromagnetic superlattice; as a result, it scatters electrons more strongly than the ferromagnetic superlattice and exhibits a higher electrical resistance. Applications of the GMR effect require dynamic switching between the parallel and antiparallel magnetization of the layers in a superlattice. To a first approximation, the energy density of the interaction between two ferromagnetic layers separated by a non-magnetic layer is proportional to the scalar product of their magnetizations: w = −J (M1 · M2). The coefficient J is an oscillatory function of the thickness of the non-magnetic layer ds; therefore J can change its magnitude and sign. If the ds value corresponds to the antiparallel state then an external field can switch the superlattice from the antiparallel state (high resistance) to the parallel state (low resistance). The total resistance of the structure can be written as R = R0 + ΔR (1 − cos θ)/2, where R0 is the resistance of the ferromagnetic superlattice, ΔR is the GMR increment and θ is the angle between the magnetizations of adjacent layers. Mathematical description The GMR phenomenon can be described using two spin-related conductivity channels corresponding to the conduction of electrons, for which the resistance is minimum or maximum. The relation between them is often defined in terms of the coefficient of the spin anisotropy β. This coefficient can be defined using the minimum and maximum of the specific electrical resistivity ρF± for the spin-polarized current, in the form ρF± = 2ρF/(1 ± β), where ρF is the average resistivity of the ferromagnet.
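The two-channel picture just described can be made quantitative with a toy calculation, a sketch under simplifying assumptions rather than the article's formal derivation: treat spin-up and spin-down electrons as two conduction channels in parallel through a pair of ferromagnetic layers, each layer contributing a low resistance r_low to the channel whose spin matches its magnetization and a high resistance r_high to the other. The numerical values of r_low and r_high below are arbitrary.

# Toy two-current resistor model of GMR for two ferromagnetic layers.
# Assumptions: interface scattering neglected, spin preserved across
# the stack, r_low/r_high chosen arbitrarily for illustration.

def parallel(r1, r2):
    """Resistance of two channels conducting in parallel."""
    return r1 * r2 / (r1 + r2)

r_low, r_high = 1.0, 5.0  # arbitrary per-layer channel resistances

# Parallel magnetizations: one spin channel is "low" in both layers,
# the other is "high" in both; the low channel short-circuits the stack.
R_P = parallel(2 * r_low, 2 * r_high)

# Antiparallel: each spin channel sees one "low" and one "high" layer.
R_AP = parallel(r_low + r_high, r_low + r_high)

delta_H = (R_AP - R_P) / R_P
# Closed form of this toy model: (r_high - r_low)**2 / (4 * r_low * r_high)
print(f"R_P = {R_P:.3f}, R_AP = {R_AP:.3f}, GMR = {delta_H:.1%}")

Rewriting the channel resistances in terms of the spin anisotropy coefficient β defined above turns this toy result into δH = β²/(1 − β²), the simplified β-form discussed in the resistor-model section that follows.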
Resistor model for CIP and CPP structures If scattering of charge carriers at the interface between the ferromagnetic and non-magnetic metal is small, and the direction of the electron spins persists long enough, it is convenient to consider a model in which the total resistance of the sample is a combination of the resistances of the magnetic and non-magnetic layers. In this model, there are two conduction channels for electrons with various spin directions relative to the magnetization of the layers. Therefore, the equivalent circuit of the GMR structure consists of two parallel connections corresponding to each of the channels. In this case, the GMR can be expressed as δH = (R↑↓ − R↑↑)/R↑↑ = (ρF+ − ρF−)²/[4(ρF+ + ρN/χ)(ρF− + ρN/χ)]. Here the subscripts of R denote collinear and oppositely oriented magnetizations in the layers, χ = b/a is the thickness ratio of the magnetic and non-magnetic layers, and ρN is the resistivity of the non-magnetic metal. This expression is applicable for both CIP and CPP structures. Under the condition χρF± ≫ ρN this relationship can be simplified using the coefficient of the spin asymmetry: δH = β²/(1 − β²). Such a device, with resistance depending on the orientation of electron spin, is called a spin valve. It is "open", if the magnetizations of its layers are parallel, and "closed" otherwise. Valet-Fert model In 1993, Thierry Valet and Albert Fert presented a model for the giant magnetoresistance in the CPP geometry, based on the Boltzmann equations. In this model the chemical potential inside the magnetic layer is split into two functions, corresponding to electrons with spins parallel and antiparallel to the magnetization of the layer. If the non-magnetic layer is sufficiently thin then in the external field E0 the corrections to the electrochemical potential and the field inside the sample take the form of terms decaying exponentially as exp(−z/ℓs), where ℓs is the average length of spin relaxation, and the z coordinate is measured from the boundary between the magnetic and non-magnetic layers (z < 0 corresponds to the ferromagnetic). Thus electrons with a larger chemical potential will accumulate at the boundary of the ferromagnet. This can be represented by the potential of spin accumulation VAS or by the so-called interface resistance (inherent to the boundary between a ferromagnet and non-magnetic material), which depends on the current density j in the sample and on ℓsN and ℓsF, the lengths of the spin relaxation in the non-magnetic and magnetic materials, respectively. Device preparation Materials and experimental data Many combinations of materials exhibit GMR; the most common are the following: FeCr Co10Cu90: δH = 40% at room temperature [110]Co95Fe5/Cu: δH = 110% at room temperature. The magnetoresistance depends on many parameters such as the geometry of the device (CIP or CPP), its temperature, and the thicknesses of ferromagnetic and non-magnetic layers. At a temperature of 4.2 K and a thickness of cobalt layers of 1.5 nm, increasing the thickness of copper layers dCu from 1 to 10 nm decreased δH from 80 to 10% in the CIP geometry. Meanwhile, in the CPP geometry the maximum of δH (125%) was observed for dCu = 2.5 nm, and increasing dCu to 10 nm reduced δH to 60% in an oscillating manner. When a Co(1.2 nm)/Cu(1.1 nm) superlattice was heated from near zero to 300 K, its δH decreased from 40 to 20% in the CIP geometry, and from 100 to 55% in the CPP geometry. The non-magnetic layers can be non-metallic. For example, δH up to 40% was demonstrated for organic layers at 11 K. Graphene spin valves of various designs exhibited δH of about 12% at 7 K and 10% at 300 K, far below the theoretical limit of 10^9%.
The GMR effect can be enhanced by spin filters that select electrons with a certain spin orientation; they are made of metals such as cobalt. For a filter of thickness t the change in conductivity ΔG can be expressed as ΔG = ΔGSV + ΔGf(1 − exp(−βt)), where ΔGSV is the change in the conductivity of the spin valve without the filter, ΔGf is the maximum increase in conductivity with the filter, and β is a parameter of the filter material. Types of GMR GMR is often classed by the type of devices which exhibit the effect. Films Antiferromagnetic superlattices GMR in films was first observed by Fert and Grünberg in a study of superlattices composed of ferromagnetic and non-magnetic layers. The thickness of the non-magnetic layers was chosen such that the interaction between the layers was antiferromagnetic and the magnetization in adjacent magnetic layers was antiparallel. Then an external magnetic field could make the magnetization vectors parallel, thereby affecting the electrical resistance of the structure. Magnetic layers in such structures interact through antiferromagnetic coupling, which results in the oscillating dependence of the GMR on the thickness of the non-magnetic layer. In the first magnetic field sensors using antiferromagnetic superlattices, the saturation field was very large, up to tens of thousands of oersteds, due to the strong antiferromagnetic interaction between their layers (made of chromium, iron or cobalt) and the strong anisotropy fields in them. Therefore, the sensitivity of the devices was very low. The use of permalloy for the magnetic and silver for the non-magnetic layers lowered the saturation field to tens of oersteds. Spin valves using exchange bias In the most successful spin valves the GMR effect originates from exchange bias. They comprise a sensitive layer, a "fixed" layer and an antiferromagnetic layer. The last layer freezes the magnetization direction in the "fixed" layer. The sensitive and antiferromagnetic layers are made thin to reduce the resistance of the structure. The valve reacts to the external magnetic field by changing the magnetization direction in the sensitive layer relative to the "fixed" layer. The main difference of these spin valves from other multilayer GMR devices is the monotonic dependence of the amplitude of the effect on the thickness dN of the non-magnetic layers: δH(dN) = δH0 exp(−dN/λN)/(1 + dN/d0), where δH0 is a normalization constant, λN is the mean free path of electrons in the non-magnetic material, and d0 is an effective thickness that includes interaction between layers. The dependence on the thickness of the ferromagnetic layer can be given as: δH(dF) = δH0 (1 − exp(−dF/λF))/(1 + dF/d0). The parameters have the same meaning as in the previous equation, but they now refer to the ferromagnetic layer. Non-interacting multilayers (pseudospin valves) GMR can also be observed in the absence of antiferromagnetic coupling layers. In this case, the magnetoresistance results from the differences in the coercive forces (for example, it is smaller for permalloy than cobalt). In multilayers such as permalloy/Cu/Co/Cu the external magnetic field switches the direction of saturation magnetization to parallel in strong fields and to antiparallel in weak fields. Such systems exhibit a lower saturation field and a larger δH than superlattices with antiferromagnetic coupling. A similar effect is observed in Co/Cu structures. The existence of these structures means that GMR does not require interlayer coupling, and can originate from a distribution of the magnetic moments that can be controlled by an external field.
Inverse GMR effect In the inverse GMR, the resistance is minimum for the antiparallel orientation of the magnetization in the layers. Inverse GMR is observed when the magnetic layers are composed of different materials, such as NiCr/Cu/Co/Cu. The resistivity for electrons with opposite spins can be written as ρF± = 2ρF/(1 ± β); it has different values, i.e. different coefficients β, for spin-up and spin-down electrons. If the NiCr layer is not too thin, its contribution may exceed that of the Co layer, resulting in inverse GMR. Note that the GMR inversion depends on the sign of the product of the coefficients β in adjacent ferromagnetic layers, but not on the signs of individual coefficients. Inverse GMR is also observed if NiCr alloy is replaced by vanadium-doped nickel, but not for doping of nickel with iron, cobalt, manganese, gold or copper. GMR in granular structures GMR in granular alloys of ferromagnetic and non-magnetic metals was discovered in 1992 and subsequently explained by the spin-dependent scattering of charge carriers at the surface and in the bulk of the grains. The grains form ferromagnetic clusters about 10 nm in diameter embedded in a non-magnetic metal, forming a kind of superlattice. A necessary condition for the GMR effect in such structures is poor mutual solubility of its components (e.g., cobalt and copper). Their properties strongly depend on the measurement and annealing temperature. They can also exhibit inverse GMR. Applications Spin-valve sensors General principle One of the main applications of GMR materials is in magnetic field sensors, e.g., in hard disk drives and biosensors, as well as detectors of oscillations in MEMS. A typical GMR-based sensor consists of seven layers: Silicon substrate, Binder layer, Sensing (non-fixed) layer, Non-magnetic layer, Fixed layer, Antiferromagnetic (pinning) layer, Protective layer. The binder and protective layers are often made of tantalum, and a typical non-magnetic material is copper. In the sensing layer, magnetization can be reoriented by the external magnetic field; it is typically made of NiFe or cobalt alloys. FeMn or NiMn can be used for the antiferromagnetic layer. The fixed layer is made of a magnetic material such as cobalt. Such a sensor has an asymmetric hysteresis loop owing to the presence of the magnetically hard, fixed layer. Spin valves may exhibit anisotropic magnetoresistance, which leads to an asymmetry in the sensitivity curve. Hard disk drives In hard disk drives (HDDs), information is encoded using magnetic domains, and a change in the direction of their magnetization is associated with the logical level 1 while no change represents a logical 0. There are two recording methods: longitudinal and perpendicular. In the longitudinal method, the magnetization is parallel to the surface, while in the perpendicular method it is normal to the surface. A transition region (domain walls) is formed between domains, in which the magnetic field exits the material. If the domain wall is located at the interface of two north-pole domains then the field is directed outward, and for two south-pole domains it is directed inward. To read the direction of the magnetic field above the domain wall, the magnetization direction is fixed normal to the surface in the antiferromagnetic layer and parallel to the surface in the sensing layer. Changing the direction of the external magnetic field deflects the magnetization in the sensing layer. When the field tends to align the magnetizations in the sensing and fixed layers, the electrical resistance of the sensor decreases, and vice versa.
Magnetic RAM A cell of magnetoresistive random-access memory (MRAM) has a structure similar to the spin-valve sensor. The value of the stored bits can be encoded via the magnetization direction in the sensor layer; it is read by measuring the resistance of the structure. The advantages of this technology are independence from the power supply (the information is preserved when the power is switched off owing to the potential barrier for reorienting the magnetization), low power consumption and high speed. In a typical GMR-based storage unit, a CIP structure is located between two wires oriented perpendicular to each other. These conductors are called row and column lines. Pulses of electric current passing through the lines generate a vortex magnetic field, which affects the GMR structure. The field lines have ellipsoid shapes, and the field direction (clockwise or counterclockwise) is determined by the direction of the current in the line. In the GMR structure, the magnetization is oriented along the line. The field produced by the column line is almost parallel to the magnetic moments, and it cannot reorient them. The field of the row line is perpendicular, and regardless of its magnitude can rotate the magnetization by only 90°. With the simultaneous passage of pulses along the row and column lines, the total magnetic field at the location of the GMR structure is directed at an acute angle with respect to some magnetic moments and an obtuse angle to others. If the field exceeds some critical value, the magnetization changes its direction. There are several storage and reading methods for the described cell. In one method, the information is stored in the sensing layer; it is read via resistance measurement and is erased upon reading. In another scheme, the information is kept in the fixed layer, which requires higher recording currents compared to reading currents. Tunnel magnetoresistance (TMR) is an extension of spin-valve GMR, in which the electrons travel with their spins oriented perpendicularly to the layers across a thin insulating tunnel barrier (replacing the non-ferromagnetic spacer). This makes it possible to achieve a larger impedance, a larger magnetoresistance value (~10× at room temperature) and a negligible temperature dependence. TMR has now replaced GMR in MRAMs and disk drives, in particular for high area densities and perpendicular recording. Other applications Magnetoresistive insulators for contactless signal transmission between two electrically isolated parts of electrical circuits were first demonstrated in 1997 as an alternative to opto-isolators. A Wheatstone bridge of four identical GMR devices is insensitive to a uniform magnetic field and reacts only when the field directions are antiparallel in the neighboring arms of the bridge. Such devices were reported in 2003 and may be used as rectifiers with a linear frequency response. Notes Citations Bibliography External links Giant Magnetoresistance: The Really Big Idea Behind a Very Tiny Tool National High Magnetic Field Laboratory Presentation of GMR-technique (IBM Research) Computer storage technologies Magnetoresistance Spintronics
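The row/column write scheme described in the Magnetic RAM section can be mimicked with a toy "half-select" model. This is a sketch under stated assumptions: the field values and the simple vector-magnitude switching criterion are invented for illustration (real cells follow a Stoner-Wohlfarth switching astroid rather than a plain magnitude test).

import math

# Toy half-select write model for an MRAM cell array.
# A cell flips only if the combined field from its row and column
# lines exceeds a critical switching field H_C; either line alone
# must stay below it. Field values are arbitrary illustrative units.

H_ROW, H_COL, H_C = 0.8, 0.8, 1.0

def cell_switches(row_pulsed: bool, col_pulsed: bool) -> bool:
    h_row = H_ROW if row_pulsed else 0.0
    h_col = H_COL if col_pulsed else 0.0
    # The perpendicular row and column fields add vectorially at the cell.
    return math.hypot(h_row, h_col) > H_C

for row, col in [(True, True), (True, False), (False, True)]:
    print(f"row={row}, col={col} -> switches: {cell_switches(row, col)}")

Only the cell at the intersection of the pulsed row and column sees a field above the critical value, which is why simultaneous pulses address a single bit while all other cells on the same lines are left undisturbed.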
Giant magnetoresistance
[ "Physics", "Chemistry", "Materials_science" ]
5,179
[ "Magnetoresistance", "Physical quantities", "Spintronics", "Magnetic ordering", "Condensed matter physics", "Electrical resistance and conductance" ]
1,002,256
https://en.wikipedia.org/wiki/Shot%20clock
A shot clock is a countdown timer used in a variety of games and sports, indicating a set amount of time that a team may possess the object of play before attempting to score a goal. Shot clocks are used in several sports including basketball, water polo, canoe polo, lacrosse, poker, ringette, korfball, tennis, ten-pin bowling, and various cue sports. It is analogous to the play clock used in American and Canadian football, and the pitch clock used in baseball. This article deals chiefly with the shot clock used in basketball. The set amount of time for a shot clock in basketball is 24–35 seconds, depending on the league. This clock reveals how much time a team may possess the ball before attempting to score a field goal. It may be colloquially known as the 24-second clock, particularly in the NBA and other leagues where that is the duration of the shot clock. If the shot clock reaches zero before the team attempts a field goal, the team has committed a shot clock violation, which is penalized with a loss of possession. At most professional and collegiate basketball courts the shot clock is displayed to the players and spectators in large red numerals below the game clock on a display mounted atop each backboard. In some collegiate and amateur facilities this display might be located on the floor or mounted to a wall behind the end line. A shot clock is used in conjunction with a game clock but is distinct from the game clock, which displays the time remaining in the period of play. The shot clock was originally introduced in the NBA in 1954 as a way to increase scoring and reduce stalling tactics that were commonly used before its inception. It has been credited with increasing fan interest in the then-fledgling league, and has since been adopted at most organized levels of basketball. Definition The shot clock is a digital clock that displays the number of seconds remaining for the offense to attempt a shot. The shot clock is usually displayed above the backboard behind each goal, allowing offensive players to see precisely how much time they have to shoot and officials to easily determine whether buzzer beaters should be counted. The NBA specifies that a transparent shot clock and game clock displaying those times on both sides be part of the backboard assembly, and FIBA, EuroLeague, and many venues use this arrangement. Three signals indicate when the time to shoot has expired: A value of 0.0 on the shot clock itself An audible horn distinct from the scoreboard operator's signal for end of period and substitutions A yellow strip of lights (LEDs) on the backboard. The NBA (since 2011) and FIBA (since July 2018) require this. This is not explicitly required in the NCAA, although some venues will use the red LEDs surrounding most shot clocks or on the backboard (used in the NBA to signal the end of period) to denote a shot clock violation. In the final five seconds to shoot, the shot clock displays tenths of seconds. This was adopted in the 2011–12 NBA season. History The NBA has had a 24-second limit since 1954. FIBA introduced a 30-second shot clock in 1956 and switched to 24 seconds in 2000. The Women's National Basketball Association (WNBA) had a 30-second clock originally and switched to 24 seconds in 2006. Collegiate basketball uses a 30-second shot clock (details below). Background The NBA had problems attracting fans (and positive media coverage) before the shot clock's inception. Teams in the lead were running out the clock, passing the ball incessantly.
The trailing team could do nothing but commit fouls to recover possession following the free throw. Frequent low-scoring games with many fouls bored fans. The most extreme case occurred on November 22, 1950, when the Fort Wayne Pistons defeated the Minneapolis Lakers by a record-low score of 19–18, including 3–1 in the fourth quarter. The Pistons held the ball for minutes at a time without shooting (they attempted 13 shots for the game) to limit the impact of the Lakers' dominant George Mikan. It led the St. Paul Dispatch to write, "[The Pistons] gave pro basketball a great black eye." NBA President Maurice Podoloff said, "In our game, with the number of stars we have, we of necessity run up big scores." A few weeks after the Pistons/Lakers game, the Rochester Royals and Indianapolis Olympians played a six-overtime game with only one shot in each overtime: in each overtime period, the team that had the ball first held it for the entirety of the period before attempting a last-second shot. The NBA tried several rule changes in the early 1950s to speed up the game and reduce fouls before eventually adopting the shot clock. Creation In 1954 in Syracuse, New York, Syracuse Nationals (now the Philadelphia 76ers) owner Danny Biasone and general manager Leo Ferris experimented with a 24-second shot clock during a scrimmage. Jack Andrews, longtime basketball writer for The Syracuse Post-Standard, often recalled how Ferris would sit at Danny Biasone's Eastwood bowling alley, scribbling potential shot clock formulas onto a napkin. According to Biasone, "I looked at the box scores from the games I enjoyed, games where they didn't screw around and stall. I noticed each team took about 60 shots. That meant 120 shots per game. So I took 2,880 seconds (48 minutes) and divided that by 120 shots. The result was 24 seconds per shot." Ferris was singled out by business manager Bob Sexton at the 1954 team banquet for pushing the shot clock rule. Biasone and Ferris then convinced the NBA to adopt it for the 1954–55 season, a season in which the Nationals won the NBA Championship. Models Originally, the shot clocks used in the NBA were usually single-sided in a black box. A 1991 rule change required game clocks to be included with shot clocks in the NBA. Eventually, after the rule change, multiple-sided units began to be used, and they appeared in most of the arenas. A 2002 NBA rule change allowing instant replay review of last-second shots required four-sided units in NBA venues, along with an accompanying shot clock light to determine if the shot went off in time. In 2005, the FedExForum in Memphis opened with a new two-sided transparent shot clock developed by Daktronics, with a smaller secondary version also accompanying the larger one. By the 2010s, the twin shot clock format, used by Daktronics and Canadian rival OES, became the standard for most venues, especially in NCAA play. In the 2014–15 season, the NBA signed a deal with Tissot, a Swiss watch company, for a purpose-built two-sided transparent shot clock, which was thinner than its predecessors. But in many international leagues and at the collegiate level, the older 3-sided and 4-sided shot clocks are still in use, except for Daktronics, OES, and Swiss Timing/Tissot venues. Adoption by other leagues Two later pro leagues that rivaled the NBA adopted a modified version of the shot clock.
The American Basketball League used a 30-second shot clock for its two years in existence. The American Basketball Association also adopted a 30-second clock when it launched in 1967, switching to the NBA's 24-second length for its final season. From its inception in 1975, the Philippine Basketball Association adopted a 25-second shot clock. This was because the shot clocks then installed at the league's main venues, the Araneta Coliseum and Rizal Memorial Coliseum (the latter no longer used by the league), could only be set at 5-second intervals. The league later adopted a 24-second clock starting from the 1995 season. The Metropolitan Basketball Association in the Philippines used the 23-second clock from its maiden season in 1998. In Philippine college basketball, the NCAA Basketball Championship (Philippines) and the UAAP Basketball Championship adopted a 30-second clock, then switched to 24 seconds starting with UAAP Season 64 (2001–02), the first season to start after the FIBA rule change in 2001. Operation The shot clock begins counting down when a team establishes possession, and stops any time the game clock stops (e.g., timeouts, violations, fouls). The offensive team must attempt to score a field goal before the shot clock expires; otherwise, the team has committed a shot clock violation (also known as a 24-second violation in leagues with a 24-second shot clock) that results in a turnover to their opponents. An important distinction is that there is no violation if the ball is in flight to the basket when the shot clock expires, as long as the ball leaves the player's hand before the shot clock expires and the ball then goes into the basket or touches the basket rim. The shot clock resets to its full length at the start of each period and whenever possession changes to the opposite team, such as after a basket is scored, the defense steals the ball or recovers a rebound, or the offense commits a foul or violation. The full length varies by country, level of play, and league; see the table below. The shot clock does not reset if a defender makes short contact with the ball (e.g., an attempted steal or a tipped pass) but the offense retains possession, or if a shot attempt misses the rim entirely and airballs. The shot clock also resets when the offense retains possession after a missed field goal or free throw, or on certain fouls or violations that give the offense an inbounds pass in their frontcourt. If the offensive team is fouled and the penalty does not include free throws but just an in-bounds pass, the shot clock is reset. There are several cases where the offense is not given a full 24 seconds. The shot clock is instead set to 14 following an offensive rebound. FIBA adopted this in 2014 and the NBA adopted it in 2018. The WNBA also observes this rule. In several other cases where the offense inbounds the ball in its frontcourt (such as a foul by the defense not resulting in free throws), the offense is guaranteed 14 seconds. The shot clock is increased to 14 if it showed a shorter time. On a held ball (whether decided by a jump ball or a possession arrow), the state of the shot clock depends on which team gets possession of the ball. If the defensive team acquires possession, the shot clock is reset, as it is on any other change of possession. If the offense retains possession, the shot clock is not reset, because there was no change of possession.
However, in Euroleague, the NBA, and WNBA, the shot clock is topped up to 14 seconds, as described above for a frontcourt inbounds pass. Near the end of each period, if the shot clock would ordinarily display more time than there is remaining in the period, the shot clock is switched off. During this time, a team cannot commit a shot clock violation. The shot clock apparatus itself is considered out of bounds and not part of the backboard. The shot clock operator sits at the scorer's table. This is usually a different person from the scoreboard operator, as the task requires concentration during and after the shot attempt. In the 2016–17 NBA season, a new 'official timekeeper' deal for the NBA with Swiss watch manufacturer Tissot introduced technology to unify the keeping of the shot clock and the game clock. Tissot also became official timekeeper for the WNBA in the 2017 season. Collegiate rules American collegiate basketball uses a 30-second shot clock, while Canadian university basketball uses a 24-second clock. In men's collegiate basketball, there was initial resistance to the implementation of a shot clock for men's NCAA basketball, due to fears that smaller colleges would be unable to compete with powerhouses in a running game. However, after extreme results like an 11–6 Tennessee win over Temple in 1973, support for a men's shot clock began to build. The NCAA introduced a 45-second shot clock for the 1985–86 season; several conferences had experimented with it for the two seasons prior. It was reduced to 35 seconds in the 1993–94 season, and to 30 seconds in the 2015–16 season. The NAIA also reduced the shot clock to 30 seconds starting in 2015–16. Women's collegiate basketball (at the time sanctioned by the Commission on Intercollegiate Athletics for Women) used a 30-second shot clock on an experimental basis in the 1969–70 season, officially adopting it for the 1970–71 season. The NCAA specifies 20 seconds rather than 30 after stoppages where the ball is already in the frontcourt. In 2019, it added offensive rebounds to this list. US high school rules The National Federation of State High School Associations (NFHS), which sets rules for high school basketball in the U.S., does not mandate the use of a shot clock, instead leaving the choice to use a clock and its duration up to each individual state association. In concert with this, the "stall ball" strategy can be used in such a state or league, but depending on the organization it comes with restrictions on its use enforced by the game officials, with overuse often being whistled as a foul or an unsportsmanlike act. Others may allow stalling completely, at the risk of fan disinterest. As a shot clock system can be cost-prohibitive, its use in high schools has been debated on that consideration and not the flow of the game. While previous proposals for a national shot clock had been denied by the NFHS as recently as 2011, in the spring of 2021 the NFHS agreed to allow its member associations the option of a shot clock, with a mandatory 35-second duration, starting in 2022–23. As of August 2021, 11 states either require a shot clock in high school competition or will begin using one starting in 2022–23: California, Georgia, Iowa, Maryland, Massachusetts, Nebraska (Class A only), New York, North Dakota, Rhode Island, South Dakota, and Washington.
Before 2022–23, the District of Columbia used a 30-second shot clock for public school (DCIAA) competition, charter school competition (as of 2018–19), and for the DCSAA State Tournament, where public, private, and charter schools compete for the championship of the District of Columbia. Shot clock length Shot clock length in basketball Shot clock length in other sports Related concepts A related rule to speed up play is that the offensive team has a limited time to advance the ball across the half-court line (the "time line"). See also Pitch clock, used in baseball Play clock, used in American and Canadian football. Four corners offense, offensive stall strategy in basketball Stall count, used in the sport of Ultimate. References External links 24 Seconds to Shoot snopes.com Basketball terminology Basketball equipment Rules of basketball Timers Basketball in Syracuse, New York Snooker terminology Time measurement systems
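The reset rules described in the Operation section above translate naturally into a small state machine. The sketch below encodes the NBA/FIBA-style behavior as described there: the full 24 on a change of possession, 14 after an offensive rebound, a top-up to at least 14 on certain frontcourt inbounds, and the clock switched off when less game time remains than shot time. The event names are invented for illustration, and real rulebooks contain many more cases than this sketch covers.

# Hedged sketch of shot-clock reset logic per the rules described above.

FULL, PARTIAL = 24.0, 14.0

def reset_shot_clock(shot_clock: float, event: str, game_clock: float) -> float | None:
    """Return the new shot clock value, or None when it is switched off."""
    if event == "change_of_possession":
        shot_clock = FULL
    elif event == "offensive_rebound":
        shot_clock = PARTIAL                    # FIBA 2014 / NBA 2018 rule
    elif event == "frontcourt_inbound_top_up":
        shot_clock = max(shot_clock, PARTIAL)   # topped up, never reduced
    # Switched off when the period will end before the clock could expire.
    return None if game_clock < shot_clock else shot_clock

print(reset_shot_clock(6.0, "offensive_rebound", 120.0))          # -> 14.0
print(reset_shot_clock(9.0, "frontcourt_inbound_top_up", 120.0))  # -> 14.0
print(reset_shot_clock(20.0, "change_of_possession", 8.0))        # -> None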
Shot clock
[ "Physics" ]
3,054
[ "Spacetime", "Time measurement systems", "Physical quantities", "Time" ]
1,002,300
https://en.wikipedia.org/wiki/Axiom%20of%20determinacy
In mathematics, the axiom of determinacy (abbreviated as AD) is a possible axiom for set theory introduced by Jan Mycielski and Hugo Steinhaus in 1962. It refers to certain two-person topological games of length ω. AD states that every game of a certain type is determined; that is, one of the two players has a winning strategy. Steinhaus and Mycielski's motivation for AD was its interesting consequences; they suggested that AD could be true in L(R), the smallest natural model of a set theory that accepts only a weak form of the axiom of choice (AC) but contains all real and all ordinal numbers. Some consequences of AD followed from theorems proved earlier by Stefan Banach and Stanisław Mazur, and Morton Davis. Mycielski and Stanisław Świerczkowski contributed another one: AD implies that all sets of real numbers are Lebesgue measurable. Later Donald A. Martin and others proved more important consequences, especially in descriptive set theory. In 1988, John R. Steel and W. Hugh Woodin concluded a long line of research. Assuming the existence of some uncountable cardinal numbers analogous to ℵ0, they proved the original conjecture of Mycielski and Steinhaus that AD is true in L(R). Types of game that are determined The axiom of determinacy refers to games of the following specific form: Consider a subset A of the Baire space ωω of all infinite sequences of natural numbers. Two players alternately pick natural numbers n0, n1, n2, n3, ... That generates the sequence ⟨ni⟩i∈ω after infinitely many moves. The player who picks first wins the game if and only if the sequence generated is an element of A. The axiom of determinacy is the statement that all such games are determined. Not all games require the axiom of determinacy to prove them determined. If the set A is clopen, the game is essentially a finite game, and is therefore determined. Similarly, if A is a closed set, then the game is determined. By the Borel determinacy theorem, games whose winning set is a Borel set are determined. It follows from the existence of sufficiently large cardinals that AD holds in L(R) and that a game is determined if it has a projective set as its winning set (see Projective determinacy). The axiom of determinacy implies that for every subspace X of the real numbers, the Banach–Mazur game BM(X) is determined, and consequently, that every set of reals has the property of Baire. Incompatibility with the axiom of choice Under assumption of the axiom of choice, we present two separate constructions of counterexamples to the axiom of determinacy. It follows that the axiom of determinacy and the axiom of choice are incompatible. Using a well-ordering of the continuum The set S1 of all first-player strategies in an ω-game G has the same cardinality as the continuum. The same is true for the set S2 of all second-player strategies. Let SG be the set of all possible sequences in G, and A be the subset of sequences of SG that make the first player win. With the axiom of choice we can well-order the continuum, and we can do so in such a way that any proper initial portion has lower cardinality than the continuum. We use the obtained well-ordered set J to index both S1 and S2, and construct A such that it will be a counterexample. We start with empty sets A and B. Let α ∈ J be the index of the strategies in S1 and S2.
We need to consider all strategies S1 = {s1(α)}α∈J of the first player and all strategies S2 = {s2(α)}α∈J of the second player to make sure that for every strategy there is a strategy of the other player that wins against it. For every strategy of the player considered we will generate a sequence that gives the other player a win. Let t denote the time coordinate, which runs through ℵ0 steps during each game sequence. We create the counterexample A by transfinite recursion on α: Consider the strategy s1(α) of the first player. Play an ω-game in which the first player follows s1(α), choosing the second player's moves so that the generated sequence ⟨a1, b2, a3, b4, ..., at, bt+1, ...⟩ does not belong to A. This is possible, because the number of choices for ⟨b2, b4, b6, ...⟩ has the same cardinality as the continuum, which is larger than the cardinality of the proper initial portion { β ∈ J | β < α } of J. Add this sequence to B to indicate that s1(α) loses (on ⟨b2, b4, b6, ...⟩). Consider the strategy s2(α) of the second player. Play an ω-game in which the second player follows s2(α), choosing the first player's moves so that the generated sequence ⟨a1, b2, a3, b4, ..., at, bt+1, ...⟩ does not belong to B. This is possible, because the number of choices for ⟨a1, a3, a5, ...⟩ has the same cardinality as the continuum, which is larger than the cardinality of the proper initial portion { β ∈ J | β ≤ α } of J. Add this sequence to A to indicate that s2(α) loses (on ⟨a1, a3, a5, ...⟩). Process all possible strategies of S1 and S2 with transfinite induction on α. For all sequences that are not in A or B after that, decide arbitrarily whether they belong to A or to B, so that B is the complement of A. Once this has been done, consider an ω-game G. For a given strategy s1 of the first player, there is an α ∈ J such that s1 = s1(α), and A has been constructed such that s1(α) fails (on certain choices ⟨b2, b4, b6, ...⟩ of the second player). Hence, s1 fails. Similarly, any other strategy of either player also fails. Using a choice function In this construction, the use of the axiom of choice is similar to the choice of socks as stated in the quote by Bertrand Russell at Axiom of choice#Quotations. In an ω-game, the two players are generating the sequence ⟨a1, b2, a3, b4, ...⟩, an element in ωω, where our convention is that 0 is not a natural number, hence neither player can choose it. Define the function f: ωω → {0, 1}ω such that f(r) is the unique sequence of length ω with values in {0, 1} whose first term equals 0, and whose sequence of runs (see run-length encoding) equals r. (Such an f can be shown to be injective. The image is the subset of {0, 1}ω of sequences that start with 0 and that are not eventually constant. Formally, f is the Minkowski question mark function, {0, 1}ω is the Cantor space and ωω is the Baire space.) Consider the equivalence relation on {0, 1}ω such that two sequences are equivalent if and only if they differ in a finite number of terms. This partitions the set into equivalence classes. Let T be the set of equivalence classes (such that T has the cardinality of the continuum). Define g: {0, 1}ω → T as the map that takes a sequence to its equivalence class. Define the complement of any sequence s in {0, 1}ω to be the sequence s′ that differs from s in each term.
Define the function h: T → T such that for any sequence s in {0, 1}ω, h applied to the equivalence class of s equals the equivalence class of the complement of s (which is well-defined, because if s and t are equivalent, then their complements are equivalent). One can show that h is an involution with no fixed points, and thus we have a partition of T into size-2 subsets such that each subset is of the form {t, h(t)}. Using the axiom of choice, we can choose one element out of each subset. In other words, we are choosing "half" of the elements of T, a subset that we denote by U (where U ⊆ T) such that t ∈ U iff h(t) ∉ U. Next, we define the subset A ⊆ ωω in which 1 wins: A is the set of all r such that g(f(r)) ∈ U. We now claim that neither player has a winning strategy, using a strategy-stealing argument. Denote the current game state by a finite sequence of natural numbers (so that if the length of this sequence is even, then 1 is next to play; otherwise 2 is next to play). Suppose that q is a (deterministic) winning strategy for 2. Player 1 can construct a strategy p that beats q as follows: Suppose that player 2's response (according to q) to ⟨1⟩ is b1. Then 1 specifies in p that a1 = 1 + b1. (Roughly, 1 is now playing as 2 in a second parallel game; 1's winning set in the second game equals 2's winning set in the original game, and this is a contradiction. Nevertheless, we continue more formally.) Suppose that 2's response (always according to q) to ⟨1 + b1⟩ is b2, and 2's response to ⟨1, b1, b2⟩ is b3. In constructing p for 1, we only aim to beat q, and therefore only have to handle the response b2 to 1's first move. Therefore, set 1's response to ⟨1 + b1, b2⟩ to be b3. In general, for even n, denote 2's response to ⟨1 + b1, ..., bn−1⟩ by bn and 2's response to ⟨1, b1, ..., bn⟩ by bn+1. Then 1 specifies in p that 1's response to ⟨1 + b1, b2, ..., bn⟩ is bn+1. Strategy q is presumed to be winning, and the game result r in ωω given by ⟨1, b1, ...⟩ is one possible sequence allowed by q, so r must be winning for 2 and g(f(r)) must not be in U. The game result r' in ωω given by ⟨1 + b1, b2, ...⟩ is also a sequence allowed by q (specifically, q playing against p), so g(f(r')) must not be in U. However, f(r) and f(r') differ in all but the first term (by the nature of run-length encoding and an offset of 1), so f(r) and f(r') are in complementary equivalence classes, so g(f(r)) and g(f(r')) cannot both lie outside U, contradicting the assumption that q is a winning strategy.
By a theorem of Woodin, the consistency of Zermelo–Fraenkel set theory without choice (ZF) together with the axiom of determinacy is equivalent to the consistency of Zermelo–Fraenkel set theory with choice (ZFC) together with the existence of infinitely many Woodin cardinals. Since Woodin cardinals are strongly inaccessible, if AD is consistent, then so is the existence of an infinity of inaccessible cardinals. Moreover, if the hypothesis of an infinite set of Woodin cardinals is supplemented by the existence of a measurable cardinal larger than all of them, a very strong theory of Lebesgue measurable sets of reals emerges, as it is then provable that the axiom of determinacy is true in L(R), and therefore that every set of real numbers in L(R) is determined. Projective ordinals Yiannis Moschovakis introduced the ordinals δ¹ₙ, where δ¹ₙ is the upper bound of the lengths of Δ¹ₙ-norms (injections of a Δ¹ₙ set into the ordinals) and Δ¹ₙ is a level of the projective hierarchy. Assuming AD, all δ¹ₙ are initial ordinals, we have δ¹₂ₙ₊₂ = (δ¹₂ₙ₊₁)⁺, and for n < ω, the 2n-th Suslin cardinal is equal to δ¹₂ₙ₋₁. See also Axiom of real determinacy (ADR) Borel determinacy theorem Martin measure Topological game References Inline citations Further reading Philipp Rohde, On Extensions of the Axiom of Determinacy, Thesis, Department of Mathematics, University of Bonn, Germany, 2001 Telgársky, R.J. Topological Games: On the 50th Anniversary of the Banach-Mazur Game, Rocky Mountain J. Math. 17 (1987), pp. 227–276. "Large Cardinals and Determinacy" at the Stanford Encyclopedia of Philosophy Axioms of set theory Determinacy Large cardinals
Axiom of determinacy
[ "Mathematics" ]
3,079
[ "Mathematical objects", "Infinity", "Mathematical axioms", "Game theory", "Determinacy", "Axioms of set theory", "Large cardinals" ]
1,002,527
https://en.wikipedia.org/wiki/Tab%20%28interface%29
In interface design, a tab is a graphical user interface object that allows multiple documents or panels to be contained within a single window, using tabs as a navigational widget for switching between sets of documents. It is an interface style most commonly associated with web browsers, web applications, text editors, and preference panels, as well as with window managers, particularly tiling window managers. Tabs are modeled after traditional card tabs inserted in paper files or card indexes (in keeping with the desktop metaphor), and they are usually displayed on webpages or in apps much as they look on paper. Tabs may appear in a horizontal bar or as a vertical list. Horizontal tabs may have multiple rows. In some cases, tabs may be reordered or organized into multiple rows through drag and drop interactions. Implementations may support opening an existing tab in a separate window or range-selecting multiple tabs for moving, closing, or separating them. History The WordVision DOS word processor for the IBM PC in 1982 was perhaps the first commercially available product with a tabbed interface. Don Hopkins developed and released several versions of tabbed window frames for the NeWS window system as free software; the window manager applied the tabs to all NeWS applications and enabled users to drag them to any edge of the window. The NeWS version of UniPress's Gosling Emacs text editor was another early product with multiple tabbed windows, in 1988. It was used to develop an authoring tool for Ben Shneiderman's hypermedia browser HyperTIES (the NeWS workstation version of The Interactive Encyclopedia System), in 1988 at the University of Maryland Human-Computer Interaction Lab. HyperTIES also supported pie menus for managing windows and browsing hypermedia documents with PostScript applets. While Boeing Calc had already utilized tabbed sheets (as so-called word pads) since at least 1987, Borland's Quattro Pro popularized tabs for spreadsheets in 1992. Microsoft Word in 1993 used them to simplify submenus. In 1994, BookLink Technologies featured tabbed windows in its InternetWorks browser. That same year, the text editor UltraEdit also appeared with a modern multi-row tabbed interface. The tabbed interface approach was then followed by the Internet Explorer shell NetCaptor in 1997. These were followed by several others like IBrowse in 1999, and Opera in 2000 (with the release of version 4, although an MDI interface was supported before then), MultiViews in October 2000, which changed its name to MultiZilla on April 1, 2001 (an extension for the Mozilla Application Suite), Galeon in early 2001, Mozilla 0.9.5 in October 2001, Phoenix 0.1 (now Mozilla Firefox) in October 2002, Konqueror 3.1 in January 2003, and Safari in 2003. With the release of Internet Explorer 7 in 2006, all major web browsers featured a tabbed interface. Users quickly adopted the use of tabs in web browsing and web search. A study of tabbed browsing behavior in June 2009 found that users switched tabs in 57% of tab sessions, and 36% of users used new tabs to open search engine results at least once during that period. Numerous additional browser tab capabilities have emerged since then. One example is visual tabbed browsing in OmniWeb version 5, which displays preview images of pages in a drawer to the left or right of the main browser window.
Another feature is the ability to re-order tabs and to bookmark all of the webpages opened in tab panes in a given window in a group or bookmark folder (as well as the ability to reopen all of them at the same time). Microsoft Internet Explorer marks tab families with different colours. Development Tab behavior in an application is determined by the underlying widget toolkit or framework (for example, Firefox uses GTK). Due to the lack of standardization, behavior may vary from one application to the next, which can result in usability challenges. Tab hoarding Tab hoarding is digital hoarding of web browser tabs. Users may accumulate tabs as reminders of tasks to research or complete (rather than using dedicated reminder software). They may use multiple browser windows to organize tabs or direct focus; however, leaving multiple windows open can exacerbate tab clutter. Tab hoarding can lead to stress and information overload, distraction, and reduced computer performance. It can develop into emotional attachment to the set of open tabs, including fear of losing them upon a crash or other reboot, and conversely, relief when tabs are properly restored. Tab hoarders have attributed the behavior to anxiety, fear of missing out, procrastination, and poor personal information management practices. The prevalence of tab hoarding is acknowledged by browser vendors such as Mozilla, and has inspired memory and tab management features in browsers and extensions. Such features include tab grouping, which allows related tabs to be visually organized and collapsed; conversion of tabs into a list of hyperlinks; and alternative interface paradigms, such as framing high-level tasks as first-class objects instead of tabs. A 2021 study developed UI design considerations that could enable better tools, and changes to web browser code, to help knowledge workers and other users better manage and utilize their browser tabs. See also Comparison of document interfaces IDE-style interface Ribbon (computing) References External links TabPanel Widget ASP.NET AJAX Control Toolkit Scriptaculous AJAX tabs Tab Window Demo of the Pie Menu Tab Window Manager for The NeWS Toolkit 2.0 (1991). Graphical user interface elements Document interface Graphical control elements
Tab (interface)
[ "Technology" ]
1,191
[ "Components", "Graphical user interface elements" ]
1,002,560
https://en.wikipedia.org/wiki/Rocky%20Mountain%20Arsenal
The Rocky Mountain Arsenal was a United States chemical weapons manufacturing center located in the Denver Metropolitan Area in Commerce City, Colorado. The site was completed in December 1942, was operated by the United States Army throughout the later 20th century, and was controversial among local residents until its closure in 1992. Much of the site is now protected as the Rocky Mountain Arsenal National Wildlife Refuge. History After the attack on Pearl Harbor and the United States' entry into World War II, the U.S. Army began looking for land on which to create a chemical manufacturing center. Located just north of Denver, in Commerce City and close to the Stapleton Airport, the U.S. Army purchased . The location was ideal not only because of the proximity to the airport, but also because the geographic features of the site made it less likely to be attacked. The Rocky Mountain Arsenal manufactured chemical weapons including mustard gas, napalm, white phosphorus, lewisite, chlorine gas, and sarin. In the early 1960s, the U.S. Army began to lease out its facilities to private companies to manufacture pesticides. In the early 1980s the site was selected as a Superfund site and the cleanup process began. In the mid-1980s, wildlife, including endangered species, moved into the space and the land became a protected wildlife refuge. Policy The environmental movement began in the United States in the 1960s–1970s. The U.S. Congress responded to the movement in 1980 with the creation of the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA), most commonly referred to as Superfund. CERCLA imposed a tax on the chemical and petroleum industries. CERCLA also gave the Federal government the authority to respond to the release of life-threatening hazardous materials. After 42 years of chemical manufacturing, in 1984, the United States Army began to inspect the level of contamination at Rocky Mountain Arsenal (RMA). The site was placed on the National Priorities List (NPL), a list of the most contaminated areas in the United States. Rocky Mountain Arsenal, among other post-military sites, was a top priority, establishing RMA as a Superfund site. The site's profile rose further when the U.S. Army discovered an endangered species, the bald eagle, living there. After the bald eagles were captured, tested, and found to be healthy, the National Wildlife Federation worked with policymakers to transition RMA to a wildlife refuge. In 1992, Congress passed the Rocky Mountain Arsenal National Wildlife Refuge Act (RMANWR Act). Under the RMANWR Act, areas within RMA that were still contaminated remained under U.S. Army ownership, while the vast majority of the land, deemed clean, would be managed by the federal Fish and Wildlife Service (FWS). Tensions arose between the United States Environmental Protection Agency (USEPA), the State of Colorado, the United States Army, and the chemical industries as they partnered to clean up the site and create the RMANWR. This led the State of Colorado to take legal action over who has legal authority over RMA remediation efforts, payment of natural resource damages (NRDs), and reimbursement of costs expended for cleanup activities (response costs). Site selection The Arsenal's location was selected due to its relative distance from the coasts (and presumably lower likelihood of being attacked), a sufficient labor force to work at the site, weather that was conducive to outdoor work, and the appropriate soil needed for the project.
It was also helpful that the location was close to Stapleton airfield, a major transportation hub. In 1942, the US Army acquired of land on which to manufacture weapons in support of World War II military activities at a cost of $62,415,000. Additionally, some of this land was used for a prisoner of war camp (for German combatants) and later transferred to the city of Denver as Stapleton Airport expanded. A lateral was built off the High Line Canal to supply water to the Arsenal. Manufacturing operations Weapons manufactured at RMA included both conventional and chemical munitions, including white phosphorus (M34 grenade), napalm, mustard gas, lewisite, and chlorine gas. RMA is also one of the few sites that had a stockpile of sarin gas (also known as nerve agent GB), an organophosphorus compound. The manufacturing of these weapons continued until 1969. Rocket fuel to support Air Force operations was also manufactured and stored at RMA. Subsequently, through the 1970s until 1985, RMA was used as a demilitarization site to destroy munitions and chemically related items. Coinciding with these activities, from 1946 to 1982, the Army leased RMA facilities to private industries for the production of pesticides. One of the major lessees, Shell Oil Company, along with Julius Hyman and Company and Colorado Fuel and Iron, had manufacturing and processing capabilities on RMA between 1952 and 1982. The military reserved the right to oust these companies and restart chemical weapon production in the event of a national emergency. Deep injection well RMA contained a deep injection well that was constructed in 1961. It was drilled to a depth of . The well was cased and sealed to a depth of , with the remaining left as an open hole for the injection of Basin F liquids. For testing purposes, the well was injected with approximately 568,000 US gallons (2150 m³) of city water prior to injecting any waste. The injected fluids had very little potential for reaching the surface or usable groundwater supply since the injection point had of rock above it and was sealed at the opening. The Army discontinued use of the well in February 1966 because the fluid injection triggered a series of earthquakes in the area. The well remained unused until 1985 when the Army permanently sealed the disposal well. Environmental issues In 1984, the Army began a systematic investigation of site contamination in accordance with the Comprehensive Environmental Response, Compensation, and Liability Act of 1980 (CERCLA), commonly referred to as Superfund. In 1987, the RMA was placed on the National Priorities List (NPL) of Superfund sites. As provided by CERCLA, a Remedial Investigation/Feasibility Study (RI/FS) was conducted to determine the extent of contamination. Since 1985, the mission at RMA has been the remediation of the site. Contaminants The primary contaminants include organochlorine pesticides, organophosphate pesticides, carbamate insecticides, organic solvents and feedstock chemicals used as raw products or intermediates in the manufacturing process (e.g., chlorinated benzenes), heavy metals, chemical warfare materials and their related breakdown products, and biological warfare agents such as TX. Additionally, ordnance (including incendiary munitions) was manufactured and tested, and asbestos and polychlorinated biphenyls (PCBs) were used at RMA. Today, it is considered a hazardous waste site according to the Colorado Department of Public Health and Environment.
Groundwater contamination The contamination of the underlying alluvial aquifer occurred due to the discharge of waste into unlined basins. The following data were derived from the United States Nuclear Regulatory Commission. From 1943 to 1956, the US Army and Shell discharged wastes into the unlined basins, resulting in the contamination of the South Platte River outside the Arsenal. Farmers in the vicinity complained about the damage to crops due to the water pumped from the shallow alluvial aquifer. In response, the Army constructed an asphalt-lined impoundment for the disposal of wastes in 1956. Further, in 1961, the Army constructed a 12,000-foot deep injection well for the disposal of wastes. This resulted in subsequent earthquakes in the Denver area. In 1975, the Colorado Department of Public Health and Environment ordered the Army and Shell to stop the non-permitted discharge of contaminants, to control the contaminated groundwater leaving the site, and to implement a monitoring plan. The Army and Shell took remedial actions to prevent further contamination, including the installation of the groundwater barrier system, which treated approximately 1 billion gallons of water every year. The deep injection well was closed in 1985 and Basin F was closed in 1988. According to the Natural Resource Damage Assessment (NRDA), although the contamination has been reduced by the treatment efforts, the water in and around the arsenal may never be fully clean. A volume of approximately 52,500 acre-feet (65 million cubic metres) of the alluvial aquifer is not usable for human consumption. Wildlife injuries The NRDA found several injuries to wildlife. It was estimated by the U.S. Fish and Wildlife Service that at least 20,000 ducks died in a 10-year span during the 1970s. Mallard carcasses were found to have elevated levels of dieldrin. Many mammals and birds were found dead and may have suffered lower reproduction rates or birth defects. Safety concerns for neighboring residents Because of the Superfund site status and the dramatic cleanups, many residents in neighborhoods surrounding the RMA voiced concern about ongoing health risks of living within the close vicinity of the site. In September 2017, the state of Colorado filed a lawsuit against the United States government for the right to control the contaminated areas of the RMA. Though the cleanup of the site was considered complete in 2010, soil and groundwater monitoring practices occur every five years to ensure the effects of the clean-up remain. Restrictions on well water use, residential development, consumption of fish and game from the arsenal, and agricultural use of the arsenal will exist in perpetuity until further scientific research is completed at the site. Water Many of the surrounding neighborhoods have been provided with potable tap water from other areas of Adams county because of the potential effects of contaminated groundwater from wells. Trace amounts of the chemical 1,4-dioxane have been found in some samples of drinking water. The EPA has not established a standard for this chemical, but the state of Colorado has a standard treatment protocol for it. Soil As part of the clean-up of the RMA, much of the soil, up to 10 feet below the surface, was removed from the site. This soil is contained in hazardous waste landfills. Contaminated areas of soil remain in the Rocky Mountain Arsenal, but are contained in basins and containment structures. Air quality During the cleanup of the RMA, concern for air pollution from the hazardous materials was raised.
The Colorado Department of Public Health and Environment established monitoring systems throughout various locations of the RMA. Throughout the decades of cleanup, the air monitors revealed there was no safety hazard to public health, as no arsenal chemicals had been released into the air. Epidemiological studies Longstanding agricultural and health concerns related to the Rocky Mountain Arsenal have resulted in a complex history of political and legal battles. Heavy volatile contaminants related to Basin F raised public concern about the site and about the clean-up process itself, and a medical monitoring program (MMP) was put in place as part of the 1996 Record of Decision (ROD) between the U.S. Army, the U.S. Environmental Protection Agency, and the Colorado Department of Health and Environment. One of the goals of the MMP was to enhance community assurance that the clean-up was effective, and it included air quality monitoring, cancer surveillance, and birth defects surveillance. Air quality monitoring of the Arsenal began concurrently with the decontamination process in 1997, and surveillance continued until July 2009. The Surveillance for Birth Defects utilized passive observational data from an existing birth defects registry from March 1989 to March 2009. The following data were derived from the Rocky Mountain Arsenal Medical Monitoring Program Surveillance for Birth Defects Compendium prepared by the Colorado Department of Public Health and Environment and published in February 2010. In this study, baseline birth defect rates were estimated from the period 1989–1997, which ended at the point at which the clean-up began, and inclusion required the mother's address at the time of birth to be within the geographical study area. Other demographics of the mother were gathered as well. Birth defects included in the analysis were: "total congenital anomalies, major congenital anomalies, heart defects, muscle and skeletal defects, and kidney and bladder defects," and these categories were inconsistent in reporting accuracy. Statistically significant findings (p < 0.01) of this study included the following demographic differences in the mothers: a median age of 24, compared to 27 in Colorado as a whole; a higher percentage of mothers who were white/Hispanic and black; a mean education level of 11.8 years, compared to 13.1 years in Colorado as a whole; fewer mothers who were married; and fewer prenatal visits on average. These potential confounders are not clearly addressed in this report and may complicate the analysis, as well as raise concern for disparities in exposure risk that are dependent upon demographic factors. Baseline rates of congenital anomalies in the study area compared to Colorado as a whole did not show significant differences between populations. No significant increase was observed in congenital anomalies during the clean-up period compared to the pre-clean-up period, although there are no baseline data prior to initial contamination events, because data were not yet being collected and the population was very different at that time. In summary, there is no current evidence of health effects. The Colorado Department of Public Health and Environment found no increased risk of birth defects in infants.
A separate study of cancer incidence by the Colorado Department of Health did not find convincing evidence of increased cancer risk in people living in residential areas surrounding the arsenal, although the study was made more difficult by the large demographic changes in the area and was also confounded by smoking and obesity rates. Additionally, studies performed at Colorado State University found no increased risk of arsenic or mercury exposure, or of neurotoxicity, in communities within 15 miles of the RMA. Economic impact of contamination and clean up Many projects have attempted to clean contaminated groundwater at the Arsenal. For example, DIMP (diisopropyl methyl phosphonate) was one of the main contaminants in the area. One monitoring project has demonstrated incremental improvements over time, specifically measuring 640 parts per billion (ppb) in 1987 and 55 ppb in 1989, while a different off-post monitoring well measured 138 ppb in 1985, 105 ppb in 1987, 14 ppb in 1988, and 6.7 ppb in 1989. While it is difficult to capture the societal cost of cleaning up the site, the list of actions dealing with groundwater contamination given by Mears and Heise includes: North boundary groundwater treatment system (1979–82) – $4.3 million Irondale groundwater treatment system by Shell (1981) – $1.1 million Basin F liquid evaporation and contaminated sewer removal (1982) – $1.5 million Northwest boundary groundwater treatment system (1984) – $5.5 million Deep well closure (1986) – $2.5 million Removal of 76,000 drums of waste salts (1986) – $10.5 million Treatment in the public water supply; the Klein Water Treatment Facility supplies safe drinking water to 30,000 south Adams County residents (1989) – $23.1 million Removal and containment of 10.5M gallons of Basin F liquids and 564,000 cubic yards of sludges (1989) – $42 million Improvements and modifications to North boundary system (1990–1) – $2.75 million Closure of 353 abandoned wells on-post (1990) – $3.7 million Basin F groundwater intercept system (1990) – $0.7 million Basin A neck groundwater treatment system (1990) – $3.1 million Northwest Boundary System Improvement (1991) – $1.4 million Rail classification yard and motor pool ground water (1991) – $3.0 million South tank farm plume (1991) – $0.5 million Army trenches (1991) – $1.4 million Shell trenches (1991) – $3.2 million Reapplication of windblown dust control (1991) – $0.25 million Groundwater treatment system to the north (1992–3) – $8.7 million Building 1727 sump cleanup (1993) – $0.18 million Direct economic totals add up to approximately $111 million, and this estimate does not include operation and maintenance costs. In addition, there were actions completed under the Federal Facility Agreement (FFA) between 1991 and 1993 that cost approximately $151.2 million. A more recent article in 2004 by Pimentel estimated the cost of removing pesticides from the groundwater and soil at the Rocky Mountain Arsenal at approximately $2 billion. It also noted that if all groundwater were to be cleared for human consumption, the cost would be $500 million annually. Estimating the exact direct and indirect impact of the contamination is very challenging, as the cleaning and monitoring costs are complex. Further, there has been damage to the surrounding rural areas due to contamination, resulting in livestock and crop losses. In addition, contamination negatively affects public health and nature (honeybee poisonings, pesticide resistance in pests, and destruction of natural predators, wild birds, and microbes).
There are many studies that try to estimate the total costs due to pesticide contamination in the U.S. as well as in other countries; however, indirect costs are difficult to estimate and are likely several times the total direct environmental and social costs. In the case of the Rocky Mountain Arsenal, the total indirect cost was not estimated at all. Rocky Mountain Arsenal NWR Act In 1986, it was discovered that the absence of human activity had made the area an involuntary park, when a winter communal roost of bald eagles, then an endangered species, was found on site. The U.S. Fish and Wildlife Service inventoried more than 330 species of wildlife inhabiting the Arsenal, including deer, coyotes, white pelicans, and owls. The Rocky Mountain Arsenal National Wildlife Refuge Act was passed in October 1992 and signed by President George H. W. Bush. It stipulates that the majority of the site will become a National Wildlife Refuge under the jurisdiction of the Fish and Wildlife Service when the environmental restoration is completed. The act also provides that, to the extent possible, parts of the arsenal are to be managed as a refuge in the interim. Finally, the act provides for the transfer of some arsenal land for road expansion around the perimeter of the arsenal and to be sold for development and annexation by Commerce City. Since 1995, the buildings have served as the seat of the National Eagle Repository, an office of the Fish and Wildlife Service that receives the bodies of all dead golden and bald eagles in the nation and provides feathers and other parts to Native Americans for cultural uses. In September 2010, the cleanup was considered complete, and the remaining portions of land were transferred to the U.S. Fish and Wildlife Service, bringing the total to . Two sites were retained by the Army: the South Plants location due to historical use, and the North Plant location, which is now a landfill containing the remains of various buildings used in the plants. On May 21, 2011, the official visitor center for the refuge was opened with an exhibit about the site's history, ranging from the homesteading era to its current status. Public use Congruent with the outline of the June 1996 USFWS Comprehensive Management Plan, RMA will be available for public use through both community outreach and educational programs (as provided by the Visitor Access Plan and the USFWS). This public availability will be implemented while simultaneously supporting the remediation effort and the USFWS activities. Dick's Sporting Goods Park In April 2007, Dick's Sporting Goods Park, a soccer-specific stadium, was opened on part of the former Rocky Mountain Arsenal land that was transferred to Commerce City. The new venue hosts the Colorado Rapids of Major League Soccer. Bison A small herd of wild bison was introduced to the refuge in March 2007 as part of the USFWS Bison Project. The animals were transferred from the National Bison Range in Montana.
See also United States chemical weapons program References External links Army's Rocky Mountain Arsenal page CDPHE's Rocky Mountain Arsenal page EPA's Rocky Mountain Arsenal page Rocky Mountain Arsenal Archive: A collection of primary, historical documents Historic American Engineering Record in Colorado Military history of Colorado Chemical warfare facilities United States Army arsenals Commerce City, Colorado Military Superfund sites Military installations in Colorado United States Army arsenals during World War II Buildings and structures in Adams County, Colorado Superfund sites in Colorado 1942 establishments in Colorado 1992 disestablishments in Colorado
Rocky Mountain Arsenal
[ "Chemistry" ]
4,182
[ "Chemical warfare facilities" ]
1,002,744
https://en.wikipedia.org/wiki/Sustainable%20transport
Sustainable transport is transportation that is sustainable in terms of its social and environmental impacts. Components for evaluating sustainability include the particular vehicles used for road, water or air transport; the source of energy; and the infrastructure used to accommodate the transport (roads, railways, airways, waterways, canals and terminals). Transport operations and logistics as well as transit-oriented development are also involved in evaluation. Transportation sustainability is largely measured by transportation system effectiveness and efficiency as well as the environmental and climate impacts of the system. Transport systems have significant impacts on the environment, accounting for between 20% and 25% of world energy consumption and carbon dioxide emissions. The majority of the emissions, almost 97%, came from direct burning of fossil fuels. In 2019, about 95% of the fuel came from fossil sources. The main source of greenhouse gas emissions in the European Union is transportation; in 2019 it contributed about 31% of global emissions and 24% of emissions in the EU. In addition, until the COVID-19 pandemic, transport was the only sector in which emissions kept increasing. Greenhouse gas emissions from transport are increasing at a faster rate than those of any other energy-using sector. Road transport is also a major contributor to local air pollution and smog. Sustainable transport systems make a positive contribution to the environmental, social and economic sustainability of the communities they serve. Transport systems exist to provide social and economic connections, and people quickly take up the opportunities offered by increased mobility, with poor households benefiting greatly from low carbon transport options. The advantages of increased mobility need to be weighed against the environmental, social and economic costs that transport systems pose. Short-term activity often promotes incremental improvement in fuel efficiency and vehicle emissions controls, while long-term goals include migrating transportation from fossil-based energy to other alternatives such as renewable energy and use of other renewable resources. The entire life cycle of transport systems is subject to sustainability measurement and optimization. The United Nations Environment Programme (UNEP) estimates that each year 2.4 million premature deaths from outdoor air pollution could be avoided. Particularly hazardous for health are emissions of black carbon, a component of particulate matter, which is a known cause of respiratory and carcinogenic diseases and a significant contributor to global climate change. The links between greenhouse gas emissions and particulate matter make low carbon transport an increasingly sustainable investment at the local level, both by reducing emission levels and thus mitigating climate change, and by improving public health through better air quality. The term "green mobility" also refers to clean ways of movement or sustainable transport. The social costs of transport include road crashes, air pollution, physical inactivity, time taken away from the family while commuting, and vulnerability to fuel price increases. Many of these negative impacts fall disproportionately on those social groups who are also least likely to own and drive cars. Traffic congestion imposes economic costs by wasting people's time and by slowing the delivery of goods and services. Traditional transport planning aims to improve mobility, especially for vehicles, and may fail to adequately consider wider impacts.
But the real purpose of transport is access – to work, education, goods and services, friends and family – and there are proven techniques to improve access while simultaneously reducing environmental and social impacts, and managing traffic congestion. Communities which are successfully improving the sustainability of their transport networks are doing so as part of a wider program of creating more vibrant, livable, sustainable cities. Definition The term sustainable transport came into use as a logical follow-on from sustainable development, and is used to describe modes of transport, and systems of transport planning, which are consistent with wider concerns of sustainability. There are many definitions of sustainable transport, and of the related terms sustainable transportation and sustainable mobility. One such definition, from the European Union Council of Ministers of Transport, defines a sustainable transportation system as one that: Allows the basic access and development needs of individuals, companies and society to be met safely and in a manner consistent with human and ecosystem health, and promotes equity within and between successive generations. Is affordable, operates fairly and efficiently, offers a choice of transport mode, and supports a competitive economy, as well as balanced regional development. Limits emissions and waste within the planet's ability to absorb them, uses renewable resources at or below their rates of generation, and uses non-renewable resources at or below the rates of development of renewable substitutes, while minimizing the impact on the use of land and the generation of noise. Sustainability extends beyond just the operating efficiency and emissions. A life-cycle assessment involves production, use and post-use considerations. A cradle-to-cradle design is more important than a focus on a single factor such as energy efficiency. Benefits Sustainable transport has many social and economic benefits that can accelerate local sustainable development. According to a series of reports by the Low Emission Development Strategies Global Partnership (LEDS GP), sustainable transport can help create jobs, improve commuter safety through investment in bicycle lanes and pedestrian pathways, and make access to employment and social opportunities more affordable and efficient. It also offers a practical opportunity to save people's time and household income as well as government budgets, making investment in sustainable transport a 'win-win' opportunity. Environmental impact Transport systems are major emitters of greenhouse gases, responsible for 23% of world energy-related GHG emissions in 2004, with about three-quarters coming from road vehicles. Data from 2011 stated that one-third of all greenhouse gases produced were due to transportation. Currently, 95% of transport energy comes from petroleum. Energy is consumed in the manufacture as well as the use of vehicles, and is embodied in transport infrastructure including roads, bridges and railways. Motorized transport also releases exhaust fumes that contain particulate matter, which is hazardous to human health and a contributor to climate change. The first historical attempt at evaluating the life-cycle environmental impact of vehicles is due to Theodore von Karman. After decades in which analysis focused on refining the von Karman model, Dewulf and Van Langenhove introduced a model based on the second law of thermodynamics and exergy analysis.
Chester and Horvath developed a similar model based on the first law, which accounts for the costs of the necessary infrastructure. The environmental impacts of transport can be reduced by reducing the weight of vehicles, adopting sustainable styles of driving, reducing the friction of tires, encouraging electric and hybrid vehicles, improving the walking and cycling environment in cities, and enhancing the role of public transport, especially electric rail. Green vehicles are intended to have less environmental impact than equivalent standard vehicles, although when the environmental impact of a vehicle is assessed over the whole of its life cycle this may not be the case. Electric vehicle technology significantly reduces transport CO2 emissions when comparing battery electric vehicles (BEVs) with equivalent internal combustion engine vehicles (ICEVs). The extent to which it does this depends on the embodied energy of the vehicle and the source of the electricity. Lifecycle greenhouse gas emission reductions from BEVs are significant, even in countries with relatively high shares of coal in their electricity generation mix, such as China and India. As a specific example, a Nissan Leaf in the UK in 2019 produced one third of the greenhouse gases of the average internal combustion car. The Online Electric Vehicle (OLEV), developed by the Korea Advanced Institute of Science and Technology (KAIST), is an electric vehicle that can be charged while stationary or driving, thus removing the need to stop at a charging station. The City of Gumi in South Korea runs a 24 km round trip along which buses receive 100 kW (136 horsepower) of electricity at an 85% maximum power transmission efficiency rate while maintaining a 17 cm air gap between the underbody of the vehicle and the road surface. At that power, only a few sections of the road need embedded cables. Hybrid vehicles, which use an internal combustion engine combined with an electric engine to achieve better fuel efficiency than a regular combustion engine, are already common. Natural gas is also used as a transport fuel, but is a less promising technology as it is still a fossil fuel and still has significant emissions (though lower than gasoline, diesel, etc.). Brazil met 17% of its transport fuel needs from bioethanol in 2007, but the OECD has warned that the success of (first-generation) biofuels in Brazil is due to specific local circumstances. Internationally, first-generation biofuels are forecast to have little or no impact on greenhouse emissions, at significantly higher cost than energy efficiency measures. Later-generation biofuels (2nd to 4th generation), however, do have significant environmental benefits, as they neither drive deforestation nor compete with food production (the food-versus-fuel issue). In practice there is a sliding scale of green transport depending on the sustainability of the option. Green vehicles are more fuel-efficient, but only in comparison with standard vehicles, and they still contribute to traffic congestion and road crashes. Well-patronized public transport networks based on traditional diesel buses use less fuel per passenger than private vehicles, and are generally safer and use less road space than private vehicles. Green public transport vehicles including electric trains, trams and electric buses combine the advantages of green vehicles with those of sustainable transport choices.
Other transport choices with very low environmental impact are cycling and other human-powered vehicles, and animal-powered transport. The most common green transport choice, with the least environmental impact, is walking. Transport on rails boasts an excellent efficiency (see fuel efficiency in transportation). Transport and social sustainability Cities with overbuilt roadways have experienced unintended consequences, linked to radical drops in public transport, walking, and cycling. In many cases, streets became void of "life." Stores, schools, government centers and libraries moved away from central cities, and residents who did not flee to the suburbs experienced a much reduced quality of public space and of public services. As schools were closed, their mega-school replacements in outlying areas generated additional traffic; the number of cars on US roads between 7:15 and 8:15 a.m. increases 30% during the school year. Yet another impact was an increase in sedentary lifestyles, causing and complicating a national epidemic of obesity, along with dramatically increased health care costs. Car-based transport systems present barriers to employment in low-income neighbourhoods, with many low-income individuals and families forced to run cars they cannot afford in order to maintain their income. Potential shift to sustainable transport in developing countries In developing countries such as Uganda, researchers have sought to determine factors that could possibly influence travelers to opt for bicycles as an alternative to motorcycle taxis (Bodaboda). The findings suggest that generally, the age, gender, and ability of the individual to cycle in the first place are key determinants of their willingness to shift to a more sustainable mode. Transport system improvements that could reduce the perceived risks of cycling were also seen to be the most impactful changes that could contribute towards the greater use of bicycles. Cities Cities are shaped by their transport systems. In The City in History, Lewis Mumford documented how the location and layout of cities was shaped around a walkable center, often located near a port or waterway, and with suburbs accessible by animal transport or, later, by rail or tram lines. In 1939, the New York World's Fair included a model of an imagined city, built around a car-based transport system. In this "greater and better world of tomorrow", residential, commercial and industrial areas were separated, and skyscrapers loomed over a network of urban motorways. These ideas captured the popular imagination, and are credited with influencing city planning from the 1940s to the 1970s. The emergence of the car in the post-war era led to major changes in the structure and function of cities. There was some opposition to these changes at the time. The writings of Jane Jacobs, in particular The Death and Life of Great American Cities, provide a poignant reminder of what was lost in this transformation, and a record of community efforts to resist these changes. Lewis Mumford asked "is the city for cars or for people?" Donald Appleyard documented the consequences for communities of increasing car traffic in "The View from the Road" (1964), and in the UK, Mayer Hillman first published research into the impacts of traffic on child independent mobility in 1971. Despite these notes of caution, trends in car ownership, car use and fuel consumption continued steeply upward throughout the post-war period.
Mainstream transport planning in Europe has, by contrast, never been based on assumptions that the private car was the best or only solution for urban mobility. For example, the Dutch Transport Structure Scheme has since the 1970s required that demand for additional vehicle capacity only be met "if the contribution to societal welfare is positive", and since 1990 has included an explicit target to halve the rate of growth in vehicle traffic. Some cities outside Europe have also consistently linked transport to sustainability and to land-use planning, notably Curitiba, Brazil, Portland, Oregon and Vancouver, Canada. There are major differences in transport energy consumption between cities; an average U.S. urban dweller uses 24 times more energy annually for private transport than a Chinese urban resident, and almost four times as much as a European urban dweller. These differences cannot be explained by wealth alone but are closely linked to the rates of walking, cycling, and public transport use and to enduring features of the city including urban density and urban design. The cities and nations that have invested most heavily in car-based transport systems are now the least environmentally sustainable, as measured by per capita fossil fuel use. The social and economic sustainability of car-based transportation engineering has also been questioned. Within the United States, residents of sprawling cities make more frequent and longer car trips, while residents of traditional urban neighborhoods make a similar number of trips, but travel shorter distances and walk, cycle and use transit more often. It has been calculated that New York residents save $19 billion each year simply by owning fewer cars and driving less than the average American. A less car-intensive means of urban transport is carsharing, which is becoming popular in North America and Europe, and according to The Economist, carsharing can reduce car ownership at an estimated rate of one rental car replacing 15 owned vehicles. Car sharing has also begun in the developing world, where traffic and urban density are often worse than in developed countries. Companies like Zoom in India, eHi in China, and Carrot in Mexico are bringing car-sharing to developing countries in an effort to reduce car-related pollution, ameliorate traffic, and expand the number of people who have access to cars. The European Commission adopted the Action Plan on Urban Mobility on 30 September 2009 to promote sustainable urban mobility, with a review of its implementation planned for 2012 to assess the need for further action. In 2007, 72% of the European population lived in urban areas, which are key to growth and employment. Cities need efficient transport systems to support their economy and the welfare of their inhabitants. Around 85% of the EU's GDP is generated in cities. Urban areas today face the challenge of making transport sustainable in environmental (CO2, air pollution, noise) and competitiveness (congestion) terms while at the same time addressing social concerns. These range from the need to respond to health problems and demographic trends and to foster economic and social cohesion, to taking into account the needs of persons with reduced mobility, families and children. The C40 Cities Climate Leadership Group (C40) is a group of 94 cities around the world driving urban action that reduces greenhouse gas emissions and climate risks, while increasing the health and wellbeing of urban citizens.
In October 2019, by signing the C40 Clean Air Cities Declaration, 35 mayors recognized that breathing clean air is a human right and committed to work together to form a global coalition for clean air. Papers have been written showing, with satellite data, that cities with subway systems emit much less greenhouse gas. Policies and governance By country United Kingdom In 2021 the Institute for Public Policy Research issued a statement saying that car use in the United Kingdom must shrink, while active transport and public transport should be used more. The Department for Transport responded that it would spend 2 billion pounds on active transport, more than ever before, and would work to make the railways in England and the rest of the UK greener. UK studies have shown that a modal shift from air to rail could result in a sixtyfold reduction in CO2 emissions. Germany Some Western countries are making transportation more sustainable in both long-term and short-term implementations. An example is the modification of available transportation in Freiburg, Germany. The city has implemented extensive methods of public transportation, cycling, and walking, along with large areas where cars are not allowed. United States Since many Western countries are highly automobile-oriented, the main mode of transit that people use is the personal vehicle. About 80% of their travel involves cars. California, for example, is one of the largest greenhouse gas emitters in the United States. The federal government has had to come up with plans to reduce the total number of vehicle trips in order to lower greenhouse gas emissions, such as: Improve public transport through the provision of a larger coverage area, in order to provide more mobility and accessibility, and new technology to provide a more reliable and responsive public transportation network. Encourage walking and biking through the provision of wider pedestrian pathways, bike-share stations in downtowns, parking lots located far from shopping centers, limits on on-street parking, and slower traffic lanes in downtown areas. Increase the cost of car ownership and gas taxes through increased parking fees and tolls, encouraging people to drive more fuel-efficient vehicles. This can produce a social equity problem, since lower-income people usually drive older vehicles with lower fuel efficiency. Government can use the extra revenue collected from taxes and tolls to improve public transportation and benefit poor communities. Other states and nations have made efforts to translate knowledge in behavioral economics into evidence-based sustainable transportation policies. France In March 2022, an advertising regulation came into force in France, requiring all advertising materials for automobiles to include one of three standard disclaimers promoting the use of sustainable transport practices. This applies to all vehicles, including electric vehicles. In 2028, it will also become illegal to advertise vehicles which emit more than 128 grams of carbon dioxide per kilometre. At city level Sustainable transport policies have their greatest impact at the city level. Some of the biggest cities in Western Europe have relatively sustainable transport. In Paris, 53% of trips are made by walking, 3% by bicycle, 34% by public transport, and only 10% by car. In the entire Ile-de-France region, walking is the most popular way of transportation. In Amsterdam, 28% of trips are made by walking, 31% by bicycle, 18% by public transport and only 23% by car.
In Copenhagen, 62% of people commute to school or work by bicycle. Outside Western Europe, cities which have consistently included sustainability as a key consideration in transport and land use planning include Curitiba, Brazil; Bogota, Colombia; Portland, Oregon; and Vancouver, Canada. The state of Victoria, Australia passed legislation in 2010 – the Transport Integration Act – to compel its transport agencies to actively consider sustainability issues, including climate change impacts, in transport policy, planning and operations. Many other cities throughout the world have recognized the need to link sustainability and transport policies, for example by joining the Cities for Climate Protection program. Some cities are trying to become car-free, e.g., by limiting or excluding the use of cars. In 2020, the COVID-19 pandemic pushed several cities to adopt plans to drastically increase biking and walking; these included Milan, London, Brighton, and Dublin. These plans were adopted to facilitate social distancing by reducing reliance on public transport, while at the same time preventing a rise in traffic congestion and air pollution from increased car use. A similar plan was adopted by New York City and Paris. The pandemic's impact on urban public transportation means revenue declines will put a strain on operators' finances and may cause creditworthiness to worsen. Governments might be forced to subsidize operators with financial transfers, in turn reducing resources available for investment in greener transportation systems. Community and grassroots action Sustainable transport is fundamentally a grassroots movement, albeit one which is now recognized as of citywide, national and international significance. Whereas it started as a movement driven by environmental concerns, over recent years there has been increased emphasis on social equity and fairness issues, and in particular the need to ensure proper access and services for lower income groups and people with mobility limitations, including the fast-growing population of older citizens. Many of the people exposed to the most vehicle noise, pollution and safety risk have been those who do not own, or cannot drive, cars, and those for whom the cost of car ownership causes a severe financial burden. An organization called Greenxc, started in 2011, created a national awareness campaign in the United States encouraging people to carpool by ride-sharing across the country, stopping over at various destinations along the way and documenting their travel through video footage, posts and photography. Ride-sharing reduces individuals' carbon footprints by allowing several people to use one car instead of everyone using individual cars. At the beginning of the 21st century, some companies are trying to increase the use of sailing ships, even for commercial purposes, for example Fairtransport and New Dawn Traders. They have created the Sail Cargo Alliance. The European Investment Bank committed €314 million between 2018 and 2022 to green marine transport, funding the building of new ships and the retrofitting of current ships with eco-friendly technologies to increase their energy efficiency and lower harmful emissions. The Bank also offered an average of €11 billion per year from 2012 to 2022 for sustainable transportation solutions and climate-friendly initiatives. In 2022, railway projects received around 32% of overall transport loans, while urban mobility received approximately 37%.
Recent trends Car travel increased steadily throughout the twentieth century, but trends since 2000 have been more complex. Oil price rises from 2003 have been linked to a decline in per capita fuel use for private vehicle travel in the US, Britain and Australia. In 2008, global oil consumption fell by 0.8% overall, with significant declines in consumption in North America, Western Europe, and parts of Asia. Other factors affecting a decline in driving, at least in America, include the retirement of Baby Boomers who now drive less, a preference for other travel modes (such as transit) among younger age cohorts, the Great Recession, and the rising use of technology (internet, mobile devices), which has made travel less necessary and possibly less attractive. Greenwashing The term green transport is often used as a greenwashing marketing technique for products which are not proven to make a positive contribution to environmental sustainability. Such claims can be legally challenged. For instance, the Norwegian Consumer Ombudsman has targeted car manufacturers who claim that their cars are "green", "clean" or "environmentally friendly". Manufacturers risk fines if they fail to drop the words. The Australian Competition & Consumer Commission (ACCC) describes "green" claims on products as "very vague, inviting consumers to give a wide range of meanings to the claim, which risks misleading them". In 2008 the ACCC forced a car retailer to stop its green marketing of Saab cars, which was found by the Australian Federal Court to be "misleading". Tools and incentives Several European countries are opening up financial incentives that support more sustainable modes of transport. The European Cyclists' Federation, which focuses on daily cycling for transport, has created a document containing a non-exhaustive overview. In the UK, employers have for many years been providing employees with financial incentives: the employee leases or borrows a bike that the employer has purchased, and other forms of support may also be offered. The scheme is beneficial for the employee, who saves money and gains an incentive to integrate exercise into the daily routine. The employer can expect a tax deduction, lower sick leave and less pressure on parking spaces for cars. Since 2010, there has been a scheme in Iceland (Samgöngugreiðslur) whereby those who do not drive a car to work are paid a monthly lump sum. An employee must sign a statement agreeing not to use a car for commuting more often than one day a week, or 20% of the days in a given period. Some employers pay fixed amounts based on trust. Other employers reimburse the expenses for repairs on bicycles, period-tickets for public transport and the like. Since 2013, amounts up to ISK 8000 per month have been tax-free. Most major workplaces offer this, and a significant proportion of employees use the scheme. Since 2019, half the amount is tax-free if the employee signs a contract not to commute by car for more than 40% of the days of the contract period. Possible measures for urban transport The EU Directorate-General for Transport and Energy (DG-TREN) has launched a program which focuses mostly on urban transport. History Most of the tools and concepts of sustainable transport were developed before the phrase was coined. Walking, the first mode of transport, is also the most sustainable. Public transport dates back at least as far as the invention of the public bus by Blaise Pascal in 1662.
The first passenger tram began operation in 1807 and the first passenger rail service in 1825. Pedal bicycles date from the 1860s. These were the only personal transport choices available to most people in Western countries prior to World War II, and remain the only options for most people in the developing world. Freight was moved by human power, animal power or rail. Mass motorization The post-war years brought increased wealth and a demand for much greater mobility for people and goods. The number of road vehicles in Britain increased fivefold between 1950 and 1979, with similar trends in other Western nations. Most affluent countries and cities invested heavily in bigger and better-designed roads and motorways, which were considered essential to underpin growth and prosperity. Transport planning became a branch of urban planning; the recognition of induced demand marked a pivotal change from the "predict and provide" approach toward a sustainable approach incorporating land use planning and public transit. Public investment in transit, walking and cycling declined dramatically in the United States, Great Britain and Australia, although this did not occur to the same extent in Canada or mainland Europe. Concerns about the sustainability of this approach became widespread during the 1973 oil crisis and the 1979 energy crisis. The high cost and limited availability of fuel led to a resurgence of interest in alternatives to single-occupancy vehicle travel. Transport innovations dating from this period include high-occupancy vehicle lanes, citywide carpool systems and transportation demand management. Singapore was the first country in the world to implement congestion pricing, in 1975, and Curitiba began implementing its Bus Rapid Transit system in the early 1980s. Relatively low and stable oil prices during the 1980s and 1990s led to significant increases in vehicle travel from 1980 to 2000, both directly because people chose to travel by car more often and for greater distances, and indirectly because cities developed tracts of suburban housing, distant from shops and from workplaces, now referred to as urban sprawl. Trends in freight logistics, including a movement from rail and coastal shipping to road freight and a requirement for just-in-time deliveries, meant that freight traffic grew faster than general vehicle traffic. At the same time, the academic foundations of the "predict and provide" approach to transport were being questioned, notably by Peter Newman in a set of comparative studies of cities and their transport systems dating from the mid-1980s. The British Government's White Paper on Transport marked a change in direction for transport planning in the UK. In the introduction to the White Paper, Prime Minister Tony Blair stated that "We recognise that we cannot simply build our way out of the problems we face. It would be environmentally irresponsible – and would not work." A companion document to the White Paper called "Smarter Choices" researched the potential to scale up the small and scattered sustainable transport initiatives then occurring across Britain, and concluded that the comprehensive application of these techniques could reduce peak period car travel in urban areas by over 20%. A similar study by the United States Federal Highway Administration was also released in 2004 and also concluded that a more proactive approach to transportation demand was an important component of overall national transport strategy.
Mobility transition See also Alternatives to car use Circular economy Cyclability Ecological modernization Electric bicycle Energy efficiency in transport Environmental impact of aviation Environmental impact of shipping Free public transport Freeway removal Green building Green infrastructure Green transport hierarchy Hypermobility Localism Modal share Michael Replogle Road reallocation Solar vehicle Sustainable architecture Sustainable aviation fuel Sustainable biofuel Sustainable distribution Transport ecology Urban vitality Wind-powered vehicle Groups: EcoMobility Alliance Institute for Transportation and Development Policy International Association of Public Transport Michelin Challenge Bibendum References Bibliography Newman P and Kenworthy J, Sustainability and Cities: Overcoming Automobile Dependence, Island Press, Washington DC, 1999. Nagurney A, Sustainable Transportation Networks, Edward Elgar Publishing, Cheltenham, England, 2000. Schiller P, Bruun E C and Kenworthy J R, Introduction to Sustainable Transportation: Policy, Planning and Implementation, Earthscan, London, Washington DC, 2010. Enoch M P, Sustainable Transport, Mobility Management and Travel Plans, Ashgate Press, Farnham, Surrey, 2012. External links Guiding Principles to Sustainable Mobility Sustainable Urban Transport Project - knowledge platform (SUTP) German Partnership for Sustainable Mobility (GPSM) Bridging the Gap: Pathways for transport in the post 2012 process Sustainable-mobility.org: the centre of resources on sustainable transport Transportation Research at IssueLab Switching Gears: Enabling Access to Sustainable Urban Mobility
Sustainable transport
[ "Physics" ]
6,038
[ "Physical systems", "Transport", "Sustainable transport" ]
1,002,779
https://en.wikipedia.org/wiki/Azeotropic%20distillation
In chemistry, azeotropic distillation is any of a range of techniques used to break an azeotrope in distillation. In chemical engineering, azeotropic distillation usually refers to the specific technique of adding another component to generate a new, lower-boiling azeotrope that is heterogeneous (i.e., producing two immiscible liquid phases), such as the example below with the addition of benzene to water and ethanol. This practice of adding an entrainer which forms a separate phase is a specific subset of (industrial) azeotropic distillation methods. In some senses, adding an entrainer is similar to extractive distillation. Material separation agent The addition of a material separation agent, such as benzene to an ethanol/water mixture, changes the molecular interactions and eliminates the azeotrope. Added in the liquid phase, the new component can alter the activity coefficients of the various compounds in different ways, thus altering the mixture's relative volatility. Greater deviations from Raoult's law make it easier to achieve significant changes in relative volatility with the addition of another component. In azeotropic distillation the volatility of the added component is the same as that of the mixture, and a new azeotrope is formed with one or more of the components based on differences in polarity. If the material separation agent is selected to form azeotropes with more than one component in the feed, then it is referred to as an entrainer. The added entrainer should be recovered by distillation, decantation, or another separation method and returned near the top of the original column. Distillation of ethanol/water A common historical example of azeotropic distillation is its use in dehydrating ethanol and water mixtures. For this, a near-azeotropic mixture is sent to the final column, where azeotropic distillation takes place. Several entrainers can be used for this specific process: benzene, pentane, cyclohexane, hexane, heptane, isooctane, acetone, and diethyl ether are all options as the entrainer. Of these, benzene and cyclohexane have been used most extensively, but since the identification of benzene as a carcinogen, toluene is used instead. Pressure-swing distillation Another method, pressure-swing distillation, relies on the fact that an azeotrope is pressure dependent. An azeotrope is not a range of concentrations that cannot be distilled, but the point at which the activity coefficients of the distillates cross one another. If the azeotrope can be "jumped over", distillation can continue, although because the activity coefficients have crossed, the component which boils off will change. For instance, in a distillation of ethanol and water above the azeotropic concentration, water will boil out of the remaining ethanol, rather than the ethanol out of the water as at lower concentrations. Overall, pressure-swing distillation is a robust and relatively unsophisticated method compared with multi-component distillation or membrane processes, but its energy demand is in general higher. The investment cost of the distillation columns is also higher, owing to the pressure inside the vessels. Molecular sieves For low-boiling azeotropes, distillation may not allow the components to be fully separated, and separation methods that do not rely on distillation must be used. A common approach involves the use of molecular sieves. The sieves can subsequently be regenerated by dehydration using a vacuum oven. 
Ethanol near its 95% azeotropic concentration can be dried further by passing it over 3A molecular sieves such as 3A zeolite, which adsorb the remaining water. Dehydration reactions In organic chemistry, some dehydration reactions are subject to unfavorable but fast equilibria. One example is the formation of dioxolanes from aldehydes: RCHO + (CH2OH)2 ⇌ RCH(OCH2)2 + H2O Such unfavorable reactions proceed when water is removed by azeotropic distillation. See also Azeotrope tables Residue curve Theoretical plate Vacuum distillation References Distillation
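The pressure dependence that pressure-swing distillation exploits can be made concrete with a small numerical sketch. The Python example below is illustrative only: it assumes a van Laar activity-coefficient model and Antoine vapor-pressure correlations for ethanol and water, with approximate literature coefficients that are not taken from this article, and it locates the azeotrope at a given total pressure as the composition where the relative volatility equals one.

```python
# Sketch: locating the ethanol(1)/water(2) azeotrope at a given total pressure.
# Assumptions (not from the article): van Laar activity coefficients with
# approximate literature constants, and Antoine correlations (P in mmHg,
# T in degrees Celsius). Values are illustrative, not design-grade.
import math

A12, A21 = 1.6798, 0.9227  # van Laar constants for ethanol/water (approx.)
ETHANOL = (8.20417, 1642.89, 230.300)  # Antoine coefficients (approx.)
WATER = (8.07131, 1730.63, 233.426)

def gammas(x1):
    """van Laar activity coefficients for ethanol (1) and water (2)."""
    x2 = 1.0 - x1
    g1 = math.exp(A12 * (A21 * x2 / (A12 * x1 + A21 * x2)) ** 2)
    g2 = math.exp(A21 * (A12 * x1 / (A12 * x1 + A21 * x2)) ** 2)
    return g1, g2

def psat(coeffs, t_c):
    """Antoine equation: saturation pressure in mmHg at t_c (deg C)."""
    a, b, c = coeffs
    return 10.0 ** (a - b / (c + t_c))

def bubble_t(x1, p_total, lo=0.0, hi=200.0):
    """Bisection for the bubble-point temperature at p_total (mmHg)."""
    g1, g2 = gammas(x1)
    f = lambda t: x1 * g1 * psat(ETHANOL, t) + (1 - x1) * g2 * psat(WATER, t) - p_total
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

def alpha(x1, p_total):
    """Relative volatility of ethanol to water at the bubble point."""
    t = bubble_t(x1, p_total)
    g1, g2 = gammas(x1)
    return (g1 * psat(ETHANOL, t)) / (g2 * psat(WATER, t))

def azeotrope_x(p_total, lo=0.5, hi=0.999):
    """Bisection for the composition where alpha crosses 1 (the azeotrope)."""
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if alpha(mid, p_total) > 1.0 else (lo, mid)
    return 0.5 * (lo + hi)

for p in (760.0, 100.0):  # atmospheric vs. reduced pressure, in mmHg
    print(f"P = {p:6.0f} mmHg: azeotrope near x_ethanol = {azeotrope_x(p):.3f}")
```

Running the sketch at two pressures shows the azeotropic composition shifting, which is exactly the effect a pressure-swing column sequence exploits.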
Azeotropic distillation
[ "Chemistry" ]
900
[ "Distillation", "Separation processes" ]
1,003,125
https://en.wikipedia.org/wiki/Bronis%C5%82aw%20Knaster
Bronisław Knaster (22 May 1893 – 3 November 1980) was a Polish mathematician; from 1939 a university professor in Lwów and from 1945 in Wrocław. In 1945, he completed a project in collaboration with Karol Borsuk and Kazimierz Kuratowski concerning the establishment of the Institute of Mathematics of the Polish Academy of Sciences. He is known for his work in point-set topology and in particular for his discoveries in 1922 of the hereditarily indecomposable continuum, or pseudo-arc, and of the Knaster continuum, or buckethandle continuum. Together with his teacher Hugo Steinhaus and his colleague Stefan Banach, he also developed the last diminisher procedure for fair cake cutting; a sketch of the procedure follows below. Knaster received his Ph.D. degree from the University of Warsaw in 1922 under the supervision of Stefan Mazurkiewicz. See also List of Polish mathematicians References 1893 births 1980 deaths People from Warsaw Governorate University of Paris alumni Warsaw School of Mathematics Topologists Recipients of the State Award Badge (Poland) Recipients of the Medal of the 10th Anniversary of the People's Republic of Poland
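As mentioned above, Knaster, together with Banach and Steinhaus, devised the last diminisher procedure. The following is a minimal Python sketch of that procedure for a one-dimensional cake on [0, 1]; the valuation functions and the three-player setup are hypothetical, chosen only for demonstration.

```python
# Minimal sketch of the Banach-Knaster "last diminisher" procedure for a
# one-dimensional cake [0, 1]. Each player has a valuation v(a, b) for the
# interval [a, b], normalized so that v(0, 1) = 1. Example players are made up.
def cut_point(v, left, target, hi=1.0):
    """Bisection for the point r with v(left, r) == target."""
    lo = left
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if v(left, mid) < target else (lo, mid)
    return 0.5 * (lo + hi)

def last_diminisher(valuations):
    """Return a list of (player_index, (a, b)) allocations."""
    n = len(valuations)
    players = list(range(n))
    allocations, left = [], 0.0
    while len(players) > 1:
        fair_share = 1.0 / n  # everyone is entitled to 1/n of the whole cake
        # The first player cuts a piece worth exactly 1/n to them ...
        right = cut_point(valuations[players[0]], left, fair_share)
        holder = players[0]
        # ... and each later player trims it down to 1/n by their own measure.
        for p in players[1:]:
            if valuations[p](left, right) > fair_share:
                right = cut_point(valuations[p], left, fair_share)
                holder = p
        allocations.append((holder, (left, right)))  # the last diminisher keeps it
        players.remove(holder)
        left = right  # the rest of the cake is [right, 1]
    allocations.append((players[0], (left, 1.0)))
    return allocations

# Hypothetical valuations: two uniform players and one who prefers the right end.
uniform = lambda a, b: b - a
right_heavy = lambda a, b: b ** 2 - a ** 2  # density 2x on [0, 1]
print(last_diminisher([uniform, uniform, right_heavy]))
```

Each player ends up with a piece worth at least 1/n by their own valuation, which is the proportionality guarantee of the procedure.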
Bronisław Knaster
[ "Mathematics" ]
229
[ "Topologists", "Topology" ]
1,003,148
https://en.wikipedia.org/wiki/ROYGBIV
ROYGBIV is an acronym for the sequence of hues commonly described as making up a rainbow: red, orange, yellow, green, blue, indigo, and violet. When an artificial rainbow is made with a glass prism, the color sequence "ROY-G-BIV" is inverted to "VIB-G-YOR". There are several mnemonics that can be used for remembering this color sequence, such as the name "Roy G. Biv" or sentences such as "Richard Of York Gave Battle In Vain". History In the Renaissance, several artists tried to establish a sequence of up to seven primary colors from which all other colors could be mixed. In line with this artistic tradition, Sir Isaac Newton divided his color circle, which he constructed to explain additive color mixing, into seven colors. Originally he used only five colors, but later he added orange and indigo to match the number of musical notes in the major scale. The Munsell color system, the first formal color notation system (1905), names only five "principal hues": red, yellow, green, blue, and purple. Mnemonics Isaac Newton's color sequence (red, orange, yellow, green, blue, indigo, violet) is kept alive today by several popular mnemonics. One is simply the nonsense word roygbiv, which is an acronym for the seven colors. This word can also be envisioned as a person's name, "Roy G. Biv". Another traditional mnemonic device has been to turn the initial letters of the seven spectral colors into a sentence, most commonly "Richard Of York Gave Battle In Vain" (or the slight alternative "Richard Of York Gained Battles In Vain"). This mnemonic is said to refer to the defeat and death of Richard, Duke of York at the Battle of Wakefield in 1460, or to his son Richard III being defeated at the Battle of Bosworth Field in 1485. Another sentence sometimes used is "Read Out Your Good Book In Verse", referring to the Bible. In popular culture The mnemonic sentence "Richard Of York Gave Battle In Vain", mentioned above, also appears in the 2003 novel Artemis Fowl and the Eternity Code, the third book of the Artemis Fowl series. The song "Roygbiv" by Scottish electronic band Boards of Canada is named for the mnemonic. References Optical spectrum Mnemonics
ROYGBIV
[ "Physics" ]
496
[ "Optical spectrum", "Spectrum (physical sciences)", "Electromagnetic spectrum" ]
1,003,410
https://en.wikipedia.org/wiki/S%20transform
The S transform, as a time–frequency distribution, was developed in 1994 for analyzing geophysics data. In this way, the S transform is a generalization of the short-time Fourier transform (STFT), extending the continuous wavelet transform and overcoming some of its disadvantages. For one, modulation sinusoids are fixed with respect to the time axis; this localizes the scalable Gaussian window dilations and translations in the S transform. Moreover, the S transform doesn't have a cross-term problem and yields better signal clarity than the Gabor transform. However, the S transform has its own disadvantages: its clarity is worse than that of the Wigner distribution function and Cohen's class distribution functions. A fast S transform algorithm was invented in 2010. It reduces the computational complexity from O[N²·log(N)] to O[N·log(N)] and makes the transform one-to-one, where the transform has the same number of points as the source signal or image, compared to a storage complexity of N² for the original formulation. An implementation is available to the research community under an open source license. A general formulation of the S transform makes clear its relationship to other time–frequency transforms such as the Fourier, short-time Fourier, and wavelet transforms. Definition There are several ways to represent the idea of the S transform. Here, the S transform is derived as the phase correction of the continuous wavelet transform with a Gaussian window. S-Transform S(τ, f) = ∫ x(t) (|f|/√(2π)) e^(-(τ-t)²f²/2) e^(-i2πft) dt Inverse S-Transform x(t) = ∫ [ ∫ S(τ, f) dτ ] e^(i2πft) df Modified form Spectrum Form The above definition implies that the S-transform can be expressed as the convolution of x(τ)e^(-i2πfτ) and the Gaussian window (|f|/√(2π)) e^(-τ²f²/2). Applying the Fourier transform to both gives the spectrum form S(τ, f) = ∫ X(α + f) e^(-2π²α²/f²) e^(i2πατ) dα, where X(·) is the Fourier transform of x(·). Discrete-time S-transform From the spectrum form of the S-transform, we can derive the discrete-time S-transform. Let τ = jT and f = n/(NT), where T is the sampling interval and 1/T is the sampling frequency. The discrete-time S-transform can then be expressed as: S[j, n] = Σ_{m=0}^{N-1} X[m + n] e^(-2π²m²/n²) e^(i2πmj/N) for n ≠ 0, where X[·] is the discrete Fourier transform of the sampled signal. Implementation of discrete-time S-transform Below is pseudocode of the implementation; a runnable sketch is given at the end of this article. Step 1. Compute the discrete Fourier transform X[·] of the signal, then loop over n (voices): Step 2. Compute the Gaussian window function e^(-2π²m²/n²) for the current voice n. Step 3. Shift the spectrum X[m] to X[m + n]. Step 4. Multiply the results of Step 2 and Step 3. Step 5. Take the inverse DFT of the product to obtain the voice S[·, n]. Repeat for the next voice. Comparison with other time–frequency analysis tools Comparison with Gabor transform The only difference between the Gabor transform (GT) and the S transform is the window size. For the GT, the window is a fixed Gaussian function; the window of the S-transform is a function of f. With a window that scales with frequency, the S transform performs well in frequency-domain analysis when the input frequency is low, and has better clarity in the time domain when the input frequency is high. This property makes the S transform a powerful tool for analyzing sound, because humans are more sensitive to the low-frequency part of a sound signal. Comparison with Wigner transform The main problem with the Wigner transform is the cross term, which stems from the auto-correlation function in the Wigner transform. This cross term may cause noise and distortions in signal analyses; S-transform analyses avoid this issue. Comparison with the short-time Fourier transform We can compare the S transform and the short-time Fourier transform (STFT). First, a high-frequency signal, a low-frequency signal, and a high-frequency burst signal are used in an experiment to compare the performance. The S transform's characteristic frequency-dependent resolution allows the detection of the high-frequency burst. 
On the other hand, because the STFT uses a constant window width, its result has poorer definition. In a second experiment, two more high-frequency bursts are added to crossed chirps. In the result, all four frequencies were detected by the S transform, while the two high-frequency bursts were not detected by the STFT: their cross term caused the STFT to show only a single, lower frequency. Applications Signal filtering Magnetic resonance imaging (MRI) Power system disturbance recognition The S transform has been proven able to identify several types of disturbances, such as voltage sag, voltage swell, momentary interruption, and oscillatory transients. The S transform can also be applied to other types of disturbances such as notches, harmonics with sag and swell, etc. The S transform generates contours which are suitable for simple visual inspection, whereas the wavelet transform requires specific tools such as standard multiresolution analysis. Geophysical signal analysis Reflection seismology Global seismology See also Laplace transform Wavelet transform Short-time Fourier transform References Rocco Ditommaso, Felice Carlo Ponzo, Gianluca Auletta (2015). Damage detection on framed structures: modal curvature evaluation using Stockwell Transform under seismic excitation. Earthquake Engineering and Engineering Vibration. June 2015, Volume 14, Issue 2, pp 265–274. Rocco Ditommaso, Marco Mucciarelli, Felice C. Ponzo (2010). S-Transform based filter applied to the analysis of non-linear dynamic behaviour of soil and buildings. 14th European Conference on Earthquake Engineering. Proceedings Volume. Ohrid, Republic of Macedonia. August 30 – September 3, 2010. (downloadable from http://roccoditommaso.xoom.it) M. Mucciarelli, M. Bianca, R. Ditommaso, M.R. Gallipoli, A. Masi, C. Milkereit, S. Parolai, M. Picozzi, M. Vona (2011). Far field damage on RC buildings: the case study of Navelli during the L'Aquila (Italy) seismic sequence, 2009. Bulletin of Earthquake Engineering. J. J. Ding, "Time-frequency analysis and wavelet transform course note," Department of Electrical Engineering, National Taiwan University (NTU), Taipei, Taiwan, 2007. Jaya Bharata Reddy, Dusmanta Kumar Mohanta, and B. M. Karan, "Power system disturbance recognition using wavelet and s-transform techniques," Birla Institute of Technology, Mesra, Ranchi-835215, 2004. B. Boashash, "Notes on the use of the Wigner distribution for time frequency signal analysis", IEEE Trans. on Acoust., Speech, and Signal Processing, vol. 26, no. 9, 1987. R. N. Bracewell, The Fourier Transform and Its Applications, McGraw Hill Book Company, New York, 1978. E. O. Brigham, The Fast Fourier Transform, Prentice-Hall Inc., Englewood Cliffs, New Jersey, 1974. I. Daubechies, "The wavelet transform, time-frequency localization and signal analysis", IEEE Trans. on Information Theory, vol. 36, no. 5, Sept. 1990. D. Gabor, "Theory of communication", J. Inst. Elect. Eng., vol. 93, no. 3, pp. 429–457, 1946. F. Hlawatsch and G. F. Boudreaux-Bartels, "Linear and quadratic time-frequency signal representations", IEEE Signal Processing Magazine, pp. 21–67, 1992. R. K. Young, Wavelet Theory and its Applications, Kluwer Academic Publishers, Dordrecht, 1993. Integral transforms Fourier analysis Time–frequency analysis
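The pseudocode outlined in the article can be written out concretely. Below is a minimal Python sketch of the discrete-time S transform, assuming the standard DFT-domain Gaussian window e^(-2π²m²/n²); the normalization follows NumPy's FFT conventions, and setting the n = 0 voice to the signal mean is a common convention rather than something specified here.

```python
# Minimal discrete S-transform sketch. Rows of S are frequency voices
# n = 0..N//2, columns are time samples j. Illustrative, not optimized.
import numpy as np

def s_transform(x):
    x = np.asarray(x, dtype=float)
    N = x.size
    X = np.fft.fft(x)                      # Step 1: DFT of the signal
    S = np.zeros((N // 2 + 1, N), dtype=complex)
    S[0, :] = x.mean()                     # zero voice: the signal average
    m = np.arange(N)
    m[m > N // 2] -= N                     # symmetric frequency indices
    for n in range(1, N // 2 + 1):         # loop over voices
        window = np.exp(-2.0 * (np.pi * m / n) ** 2)    # Step 2: Gaussian window
        S[n, :] = np.fft.ifft(np.roll(X, -n) * window)  # Steps 3-5: shift, multiply, IDFT
    return S

# Example: a two-tone signal sampled so that voice n corresponds to n Hz.
fs = N = 128
t = np.arange(N) / fs
sig = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 40 * t)
S = s_transform(sig)
mag = np.abs(S).mean(axis=1)
print(f"voice 10: {mag[10]:.3f}, voice 25: {mag[25]:.3f}, voice 40: {mag[40]:.3f}")
# expect clear energy at voices 10 and 40 and almost none at voice 25
```

The shift-window-invert structure mirrors the five pseudocode steps: one DFT of the whole signal up front, then one inverse DFT per voice.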
S transform
[ "Physics" ]
1,515
[ "Frequency-domain analysis", "Spectrum (physical sciences)", "Time–frequency analysis" ]
1,003,435
https://en.wikipedia.org/wiki/FTAM
FTAM, ISO standard 8571, is the OSI application layer protocol for file transfer, access and management. The goal of FTAM is to combine into a single protocol both file transfer, similar in concept to the Internet FTP, and remote access to open files, similar to NFS. However, like the other OSI protocols, FTAM has not been widely adopted, and the TCP/IP-based Internet has become the dominant global network. The FTAM protocol was used in the German banking sector to transfer clearing information. The Banking Communication Standard (BCS) over FTAM access (BCS-FTAM for short) was standardized in the DFÜ-Abkommen (EDI agreement) enacted in Germany on 15 March 1995. The BCS-FTAM transmission protocol was supposed to be replaced by the Electronic Banking Internet Communication Standard (EBICS) in 2010; obligatory support for BCS over FTAM ceased in December 2010. RFC 1415 provides an FTP-FTAM gateway specification, but attempts to define an Internet-scale file transfer protocol have instead focused on Server Message Block, NFS or the Andrew File System as models. ISO 8571 parts ISO 8571, Information processing systems — Open Systems Interconnection — File Transfer, Access and Management, is split into five parts: ISO 8571-1:1988 Part 1: General introduction ISO 8571-2:1988 Part 2: Virtual Filestore Definition ISO 8571-3:1988 Part 3: File Service Definition ISO 8571-4:1988 Part 4: File Protocol Specification ISO/IEC 8571-5:1990 Part 5: Protocol Implementation Conformance Statement Proforma References Networking standards Computer file systems ITU-T recommendations OSI protocols Network file transfer protocols File transfer protocols Application layer protocols
FTAM
[ "Technology", "Engineering" ]
376
[ "Networking standards", "Computer standards", "Computer networks engineering" ]
1,003,661
https://en.wikipedia.org/wiki/RTP%20Control%20Protocol
The RTP Control Protocol (RTCP) is a binary-encoded out-of-band signaling protocol that functions alongside the Real-time Transport Protocol (RTP). Its basic functionality and packet structure are defined in RFC 3550. RTCP provides statistics and control information for an RTP session. It partners with RTP in the delivery and packaging of multimedia data but does not transport any media data itself. The primary function of RTCP is to provide feedback on the quality of service (QoS) in media distribution by periodically sending statistics such as transmitted octet and packet counts, packet loss, packet delay variation, and round-trip delay time to participants in a streaming multimedia session. An application may use this information to control quality of service parameters, perhaps by limiting flow or using a different codec. Protocol functions Typically RTP is sent on an even-numbered UDP port, with RTCP messages sent over the next higher odd-numbered port. RTCP itself does not provide any flow encryption or authentication methods. Such mechanisms may be implemented, for example, with the Secure Real-time Transport Protocol (SRTP) defined in RFC 3711. RTCP provides basic functions expected to be implemented in all RTP sessions: The primary function of RTCP is to gather statistics on quality aspects of the media distribution during a session and transmit this data to the session media source and other session participants. Such information may be used by the source for adaptive media encoding (codec) and detection of transmission faults. If the session is carried over a multicast network, this permits non-intrusive session quality monitoring. RTCP provides canonical end-point identifiers (CNAME) to all session participants. Although a source identifier (SSRC) of an RTP stream is expected to be unique, the instantaneous binding of source identifiers to end-points may change during a session. The CNAME establishes unique identification of end-points across an application instance (multiple use of media tools) and for third-party monitoring. Provisioning of session control functions. RTCP is a convenient means to reach all session participants, whereas RTP itself is not. RTP is only transmitted by a media source. RTCP reports are expected to be sent by all participants, even in a multicast session which may involve thousands of recipients. Such traffic increases proportionally with the number of participants. Thus, to avoid network congestion, the protocol must include session bandwidth management. This is achieved by dynamically controlling the frequency of report transmissions. RTCP bandwidth usage should generally not exceed 5% of the total session bandwidth. Furthermore, 25% of the RTCP bandwidth should be reserved to media sources at all times, so that in large conferences new participants can receive the CNAME identifiers of the senders without excessive delay. The RTCP reporting interval is randomized to prevent unintended synchronization of reporting. The recommended minimum RTCP report interval per station is 5 seconds; stations should not transmit RTCP reports more often than once every 5 seconds. Packet header Version: (2 bits) Identifies the version of RTP, which is the same in RTCP packets as in RTP data packets. The version defined by this specification is two (2). P (Padding): (1 bit) Indicates whether there are extra padding bytes at the end of the RTP packet. 
Padding may be used to fill up a block of a certain size, for example as required by an encryption algorithm. The last byte of the padding contains the number of padding bytes that were added (including itself). RC (Reception report count): (5 bits) The number of reception report blocks contained in this packet. A value of zero is valid. PT (Packet type): (8 bits) Contains a constant to identify the RTCP packet type. Length: (16 bits) Indicates the length of this RTCP packet (including the header itself) in 32-bit units, minus one. SSRC: (32 bits) The synchronization source identifier uniquely identifies the source of a stream. Note that multiple reports can be concatenated into a single compound RTCP packet, each with its own packet header. A short sketch that unpacks this header layout is given at the end of this article. Message types RTCP distinguishes several types of packets: sender report, receiver report, source description, and goodbye. In addition, the protocol is extensible and allows application-specific RTCP packets. A standards-based extension of RTCP is the extended report packet type introduced by RFC 3611. Sender report (SR) The sender report is sent periodically by the active senders in a conference to report transmission and reception statistics for all RTP packets sent during the interval. The sender report includes two distinct timestamps: an absolute timestamp, represented using the timestamp format of the Network Time Protocol (NTP) (which is in seconds relative to midnight UTC on 1 January 1900), and an RTP timestamp that corresponds to the same time as the NTP timestamp, but in the same units and with the same random offset as the RTP timestamps in data packets described by this sender report. The absolute timestamp allows the receiver to synchronize RTP messages. It is particularly important when both audio and video are transmitted simultaneously, because audio and video streams use independent relative timestamps. Receiver report (RR) The receiver report is for passive participants, those that do not send RTP packets. The report informs the sender and other receivers about the quality of service. Source description (SDES) The source description message is used to send the CNAME item to session participants. It may also be used to provide additional information such as the name, e-mail address, telephone number, and address of the owner or controller of the source. Goodbye (BYE) A source sends a BYE message to shut down a stream. It allows an endpoint to announce that it is leaving the conference. Although other sources can detect the absence of a source, this message is a direct announcement. It is also useful to a media mixer. Application-specific message (APP) The application-specific message provides a mechanism to design application-specific extensions to the RTCP protocol. Scalability in large deployments In large-scale applications, such as Internet Protocol television (IPTV), very long delays (minutes to hours) between RTCP reports may occur because of the RTCP bandwidth control mechanism required to control congestion (see Protocol functions). Acceptable frequencies are usually less than one per minute. This can make the receiver's statistics reports untimely and leave the media sender's evaluation inaccurate relative to the current state of the session. Methods have been introduced to alleviate these problems: RTCP filtering, RTCP biasing and hierarchical aggregation. 
Hierarchical aggregation Hierarchical aggregation (also known as the RTCP feedback hierarchy) is an optimization of the RTCP feedback model; its aim is to raise the limit on the maximum number of users while retaining quality of service (QoS) measurement. The RTCP bandwidth is constant and takes just 5% of the session bandwidth. The QoS reporting interval therefore depends, among other things, on the number of session members, and for very large sessions it can become very long (minutes or even hours), whereas an acceptable reporting interval is about 10 seconds. Longer intervals would produce time-shifted and very inaccurate reports of the current session status, and any optimization made by the sender on that basis could even have a negative effect on network or QoS conditions. Hierarchical aggregation is used with Source-Specific Multicast, where only a single source is allowed, as in IPTV. Another type of multicast is Any-Source Multicast, but it is not as suitable for large-scale applications with huge numbers of users. Only the most modern IPTV systems use hierarchical aggregation. Feedback Target The Feedback Target is a new type of member, first introduced by the Internet Draft draft-ietf-avt-rtcpssm-13; the hierarchical aggregation method extended its functionality. The function of this member is to receive Receiver Reports (RR) (see RTCP) and retransmit summarized RR packets, so-called Receiver Summary Information (RSI) packets, to the sender (in the case of a single-level hierarchy). Standards documents RFC 3550, Standard 64, RTP: A Transport Protocol for Real-Time Applications See also Streaming media Voice over IP Notes References Further reading Streaming Application layer protocols Audio network protocols VoIP protocols
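To make the common header layout described earlier concrete, here is a small sketch that packs and unpacks the fixed 8-byte RTCP header; the field widths follow RFC 3550, while the sample packet bytes are fabricated for illustration.

```python
# Sketch: pack/unpack the fixed 8-byte RTCP header (field widths per RFC 3550).
# The example bytes are fabricated for illustration.
import struct

def parse_rtcp_header(data: bytes) -> dict:
    first, pt, length, ssrc = struct.unpack("!BBHI", data[:8])
    return {
        "version": first >> 6,          # 2 bits: protocol version (2)
        "padding": (first >> 5) & 0x1,  # 1 bit: padding flag
        "rc": first & 0x1F,             # 5 bits: reception report count
        "pt": pt,                       # 8 bits: packet type (200 = SR, 201 = RR, ...)
        "length": length,               # 16 bits: length in 32-bit words minus one
        "ssrc": ssrc,                   # 32 bits: synchronization source identifier
    }

# A fabricated receiver report header: version 2, no padding, one report block,
# PT 201 (RR), length 7 (i.e. a 32-byte packet), SSRC 0x12345678.
sample = struct.pack("!BBHI", (2 << 6) | 1, 201, 7, 0x12345678)
print(parse_rtcp_header(sample))
```

Note how the length field counts 32-bit words minus one, so a value of 7 corresponds to a 32-byte packet.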
RTP Control Protocol
[ "Technology" ]
1,745
[ "Multimedia", "Streaming" ]
1,003,833
https://en.wikipedia.org/wiki/Telecommunications%20Management%20Network
The Telecommunications Management Network is a protocol model defined by ITU-T for managing open systems in a communications network. It is part of the ITU-T Recommendation series M.3000 and is based on the OSI management specifications in ITU-T Recommendation series X.700. TMN provides a framework for achieving interconnectivity and communication across heterogeneous operations systems and telecommunication networks. To achieve this, TMN defines a set of interface points for elements which perform the actual communications processing (such as a call processing switch) to be accessed by elements, such as management workstations, that monitor and control them. The standard interface allows elements from different manufacturers to be incorporated into a network under a single management control. For communication between operations systems and NEs (network elements), it uses the Common Management Information Protocol (CMIP), or mediation devices when the Q3 interface is used. The TMN layered organization is used as the fundamental basis for the management software of ISDN, B-ISDN, ATM, SDH/SONET and GSM networks. It is not as commonly used for purely packet-switched data networks. Modern telecom networks offer automated management functions and are run by operations support system (OSS) software. These systems manage modern telecom networks and provide the data that is needed in the day-to-day running of a telecom network. OSS software is also responsible for issuing commands to the network infrastructure to activate new service offerings, commence services for new customers, and detect and correct network faults. Architecture According to ITU-T M.3010, TMN has 3 architectures: Physical architecture Security architecture Logical layered architecture Logical layers The framework identifies four logical layers of network management: Business management Includes the functions related to business aspects: analyzing trends and quality issues, for example, or providing a basis for billing and other financial reports. Service management Handles services in the network: definition, administration and charging of services. Network management Distributes network resources and performs the tasks of configuration, control and supervision of the network. Element management Handles individual network elements, including alarm management, handling of information, backup, logging, and maintenance of hardware and software. A network element provides agent services, mapping the physical aspects of the equipment into the TMN framework. Recommendations The TMN M.3000 series includes the following recommendations: M.3000 Tutorial Introduction to TMN M.3010 Principles for a TMN M.3020 TMN Interface Specification Methodology M.3050 Business Process Framework (eTOM) M.3060 Principles for the Management of the Next Generation Networks M.3100 Generic Network Information Model for TMN M.3200 TMN Management Services Overview M.3300 TMN Management Capabilities at the F Interface See also Simple Network Management Protocol (SNMP) Common Management Information Protocol (CMIP, X.700) References Further reading 420 pp with 740 pp appendices in CD-ROM Network protocols ITU-T recommendations Network management
Telecommunications Management Network
[ "Engineering" ]
617
[ "Computer networks engineering", "Network management" ]
1,003,960
https://en.wikipedia.org/wiki/Praktica
Praktica was a brand of camera manufactured by Pentacon in Dresden in eastern Germany, within the GDR, between 1949 and German reunification in 1990. After reunification the firm Pentacon was divided into two main parts and sold; Schneider Kreuznach and Noble bought parts of it. Pentacon is a Dresden-based company in the optical and precision engineering industry, which was at times a major manufacturer of photo cameras. The name Pentacon is derived on the one hand from the Contax brand of the Dresden Zeiss Ikon Kamerawerke and on the other from "pentagon", because the pentaprism for SLR cameras, developed for the first time in Dresden, has this five-sided shape in cross-section. Today's PENTACON GmbH Foto- und Feinwerktechnik is still based in Dresden. It is part of the Schneider Group, Bad Kreuznach. Pentacon is the modern-day successor to Dresden camera firms such as Zeiss Ikon; for many years Dresden was the world's largest producer of cameras. Previous brands of the predecessor firms included Praktica, Exa, Pentacon, Zeiss Ikon, Contax (now owned by the Carl Zeiss company), Ica, Ernemann, Exakta, Praktiflex, and many more. Among the innovative legacies of the predecessor firms are the roll film SLR camera in 1933, the 35mm SLR in 1936, and the pentaprism SLR in 1949. After WWII the company's products were best known in the Eastern Bloc countries, though some were exported to the West. The company currently produces both budget lenses (mostly small, not very durable, and manual-focus, but good in optical quality) and higher-priced products. It also produces optical equipment for the space programs of the US, Western Europe and Russia. In 2001, the production of Praktica analogue SLR cameras was discontinued, with the focus shifting to a range of Praktica digital compact cameras and camcorders together with an extensive range of binoculars, spotting scopes, accessories and other optical imaging products. Praktica today produces many products, such as auto industry products, 3D LCD screens, and still cameras and lenses, under its own Praktica brand and also under better-known international brands. Since September 2015 the owner of the Praktica brand has been Praktica Ltd, a UK limited company. 
Praktica SLRs Original Praktica Praktica - 1949 to 1952 Praktica FX - 1952 to 1955 Praktica MX - 1952 to 1954 Praktica Modell III - 1955 to 1956 Praktica FX 2 - 1955 to 1959 Praktica FX 3 - 1956 to 1958 Praktica IV / V Praktica IV - 1959 to 1966 Praktica IV B Praktica IV M Praktica IV BM Praktica IV F Praktica IV FB Praktica V F Praktica V FB Praktica nova / mat Praktica nova - 1964 to 1967 Praktica nova B - 1965 to 1967 Praktica mat - 1965 to 1969 Praktica PL-series Praktica PL nova I - 1967 to 1972 Praktica PL nova I B - 1967 to 1975 Praktica Super TL - 1968 to 1976 Praktica PL electronic - 1968 to 1969 1st generation L-series Praktica L - 1969 to 1975 Praktica LLC - 1969 to 1975 Praktica LTL - 1970 to 1975 Praktica LB - 1972 to 1976 Praktica VLC - 1974 to 1975 Praktica LTL 2 - 1975 to 1978 Praktica TL - 1976 Praktica Super TL 2 - 1977 to 1978 2nd generation L-series Praktica L2 - 1975 to 1980 Praktica LTL 3 - 1975 to 1978 Praktica PLC 2 - 1975 to 1978 Praktica L3 ENDO - 1975 to 1980 Praktica LB 2 - 1976 to 1977 Praktica VLC 2 - 1976 to 1978 Praktica EE 2 - 1977 to 1979 Praktica DTL 2 - 1978 to 1979 3rd generation L-series Praktica Super TL 3 - 1978 to 1980 Praktica MTL 3 - 1978 to 1984 Praktica VLC 3 - 1978 to 1981 Praktica PLC 3 - 1978 to 1983 Praktica DTL 3 - 1979 to 1982 Praktica EE 3 - 1979 to 1980 4th generation L-series Praktica Super TL 1000 - 1980 to 1986 Praktica Super TL 500 - 1981 Praktica MTL 5 - 1983 to 1985 Praktica MTL 5B - 1985 to 1989 Praktica MTL 50 - 1987 to 1989 B-series Praktica B200 - 1979 to 1984 Praktica B100 - 1981 to 1986 Praktica BC1 - 1984 to 1988 Praktica BCA - 1986 to 1990 Praktica BCC - 1989 to 1990 Praktica BCS - 1989 to 1990 Praktica BM - 1989 to 1990 Praktica BMS - 1989 to 1992 BX-series Praktica BX20 - 1987 to 1993 Praktica BX10 DX - 1989 to 1990 Praktica BX21 DX - 1990 Praktica BX20S - 1990 to 2001 Older company history 1887 Richard Hüttig founded the first camera manufacturing company in Dresden. 1896 Zeus-mirror reflex camera with plate magazine as first single-lens reflex camera from Dresden by the company Richard Hüttig & Sohn. 1897-98 Foundation of the Aktiengesellschaft für Camera-Fabrikation Heinrich Ernemann in Dresden; Foundation of the Aktiengesellschaft für photographische Industrie Emil Wünsche in Dresden. 1903 Bosco mirror camera for 9×9 roll films by the Wünsche AG. 1903 The Ernemann-Kino movie camera uses 17.5 mm one-hole filmstrips for taking and displaying movies. The word Kino (cinema) was born. 1906 Hüttig-AG becomes the biggest camera manufacturer in Europe with more than 800 employees. 1912 Foundation of the Industrie- und Handelsgesellschaft m.b. H., named Ihagee Kamerawerk GmbH since 1914. 1919 Foundation of the camera shop of Benno Thorsch and Paul Guthe. 1923 Inauguration of the 48 m high tower building of the Ernemann AG (see photography on the Pentacon GmbH page). 1924 The high-speed Ernostar lens, designed by Ludwig Bertele of Ernemann AG, was first made with a maximum aperture of f/2, then f/1.8. Its unprecedented speed made available-light photography possible for the first time. While it was supplied to a number of other cameras, it was best known on Ernemann's own Er-Nox (later called Ermanox) cameras. 1926 With the help of the Carl Zeiss Stiftung, four German camera manufacturers - Contessa-Nettel (Stuttgart), Ernemann and ICA (both Dresden), and C.P. Goerz (Berlin) - were merged to form Zeiss-Ikon AG, which became the largest camera manufacturer in Europe with 3400 employees. 
1933 EXAKTA 4×6.5, a small roll-film single-lens reflex camera using 127 film, was introduced by Ihagee Kamerawerk Steenbergen & Co. 1935 Contaflex: first 35 mm twin-lens reflex camera with interchangeable lens and the first camera with built-in exposure meter, introduced by Zeiss-Ikon AG. 1936 Kine Exakta: first 35 mm single-lens reflex camera, introduced by Ihagee Kamerawerk Steenbergen & Co. 1939 Praktiflex introduced by K.W. AG, Dresden-Niedersedlitz. 1945 Heavy destruction of the 'Dresdner Kamerabetriebe' (camera manufacturing of Dresden) through aerial bombing on 13–14 February 1945. 1949 Contax S: first 35 mm single-lens reflex camera with built-in pentaprism viewfinder (a world first), offering an unreversed viewfinder image, introduced by MECHANIK Zeiss Ikon VEB, at that time a 'state-owned' company. It also introduced the M42 screw lens mount for interchangeable lenses. 1949 Praktica single-lens reflex camera with M42 lens mount. 1950 EXAKTA Varex by Ihagee Kamerawerk AG is the first single-lens reflex camera with interchangeable viewfinders. 1956 Praktica FX2 by VEB Kamera-Werke Dresden-Niedersedlitz is the first 35 mm single-lens reflex camera with a diaphragm stop-down actuation mechanism built inside the lens mount. 1959 Merger of the 'Dresdner Kamerabetriebe' (camera manufacturing of Dresden) into 'VEB Kamera- und Kinowerke Dresden' (VEB Pentacon Dresden since 1964). 1965 Praktica mat by VEB Pentacon Dresden is the first 35 mm single-lens reflex camera with TTL exposure measurement in Europe. 1969 Praktica LLC is the first 35 mm single-lens reflex camera with electrical diaphragm simulation between interchangeable lenses and camera body, by VEB Pentacon (Dresden). The successful MTL series is not covered in this timeline, nor are the electronic SLRs of the B series. See also John H. Noble Zeiss Ikon External links Praktica-B Kameras a Praktica-B collector's website, information about Praktica-B cameras and fitting lenses (in German) Mike's Praktica Home (English) Praktica Lenses Praktica Naver Cafe Collection Appareils Praktica B Camera pages Saxony Single-lens reflex cameras Photography equipment manufacturers of Germany Photography in East Germany German brands
Praktica
[ "Technology" ]
2,039
[ "System cameras", "Single-lens reflex cameras" ]
1,004,008
https://en.wikipedia.org/wiki/Dynamic%20knowledge%20repository
The dynamic knowledge repository (DKR) is a concept developed by Douglas C. Engelbart as a primary strategic focus for allowing humans to address complex problems. He proposed that a DKR would enable us to develop a collective IQ greater than any individual's IQ. References to and discussion of Engelbart's DKR concept are available at the Doug Engelbart Institute. Definition A knowledge repository is a computerized system that systematically captures, organizes and categorizes an organization's knowledge. The repository can be searched and data can be quickly retrieved. Effective knowledge repositories include factual, conceptual, procedural and meta-cognitive techniques. Key features of knowledge repositories include communication forums. A knowledge repository can take many forms to "contain" the knowledge it holds. A customer database is a knowledge repository of customer information and insights, or electronic explicit knowledge. A library is a knowledge repository of books, or physical explicit knowledge. A community of experts is a knowledge repository of tacit knowledge or experience. The nature of the repository changes only to contain and manage the type of knowledge it holds. A repository (as opposed to an archive) is designed to get knowledge out. It should therefore have some rules of structure, classification, taxonomy, record management, etc., to facilitate user engagement. References Further reading External links Doug Engelbart Institute Knowledge representation Data management
Dynamic knowledge repository
[ "Technology" ]
286
[ "Data management", "Data" ]
1,004,065
https://en.wikipedia.org/wiki/O-I%20Glass
O-I Glass, Inc. is an American company that specializes in container glass products. It is one of the world's leading manufacturers of packaging products, holding the position of largest manufacturer of glass containers in North America, South America, Asia-Pacific and Europe (after acquiring BSN Glasspack in 2004). Company While legally known as Owens-Illinois, Inc., the company changed its trade name to O-I in 2005 to group its global operations under a single, cross-language and cross-culture brand name. The company's headquarters were previously located at One SeaGate, Toledo, Ohio. The headquarters were moved in late 2006 to the Levis Commons complex in Perrysburg, Ohio. The company is the successor to the Owens Bottle Company, founded in 1903 by Michael Joseph Owens, who made the first automated bottle-making machine, and Edward Drummond Libbey. In 1929, the Owens Bottle Company merged with the Illinois Glass Company to become Owens-Illinois, Inc. Six years later, Owens-Illinois entered a joint venture with Corning Glass Works that led to the formation of Owens Corning. In 1971 Owens-Illinois produced an early commercial plasma display, the Digivue. Until July 2007, the company was also a worldwide manufacturer of plastics packaging, with operations in North America, South America, Asia-Pacific and Europe. Plastics packaging products manufactured by O-I included containers, closures, and prescription containers. In July 2007 O-I completed the sale of its entire plastics packaging business to Rexam, a United Kingdom listed packaging manufacturer. Owens-Illinois was a part of the Dow Jones Industrial Average from June 1, 1959, until March 12, 1987. The company was added to the S&P 500 Index in January 2009. Owens-Illinois was one of the original S&P 500 companies in 1957. It was removed in 1987 (after purchase by KKR), added in 1991 and removed again in 2000. In October 2010, Owens-Illinois Venezuela C.A. was expropriated by President Hugo Chávez. In May 2015, O-I made an offer to purchase the food and beverage glass container business of Mexican company Vitro for $2.15 billion. The acquisition closed in September 2015. In 2020, a subsidiary of O-I Glass, Paddock Enterprises, entered bankruptcy following numerous asbestos lawsuits filed against the company. All of the company's asbestos-related claims were isolated within Paddock and separated from O-I's glass-making operations. Partnership with NEG Owens-Illinois partnered with NEG (Nippon Electric Glass) to produce glass television screens at its Columbus, Ohio, and Pittston, Pennsylvania, plants from the 1970s through the mid-1990s, before allowing Techneglas to take over the operations. Environmental issues Although it has not made asbestos-containing materials since 1958, Owens-Illinois invented, tested, manufactured and distributed KAYLO asbestos-containing thermal pipe insulation from 1948 through 1958. Owens-Illinois remains a named defendant in numerous asbestos litigation matters throughout the U.S. Some claims in these cases allege that Owens-Illinois was a participant in the seventh annual Saranac Seminar, when the cancer-causing potential of asbestos was studied in the 1950s. As a result of a pattern of violations producing repeat emissions, its Oregon plant was fined in August 2023 by the Oregon Department of Environmental Quality. This was its tenth fine. 
See also In-mould labelling Glass container production Glass References External links O-I trademarks seen on their vintage glass containers Glassmaking companies of the United States American brands Asbestos Manufacturing companies based in Ohio Companies based in Toledo, Ohio Wood County, Ohio American companies established in 1929 Manufacturing companies established in 1929 1929 establishments in Ohio Companies listed on the New York Stock Exchange Former components of the Dow Jones Industrial Average Packaging companies of the United States 1987 mergers and acquisitions Perrysburg, Ohio
O-I Glass
[ "Environmental_science" ]
771
[ "Toxicology", "Asbestos" ]
1,004,186
https://en.wikipedia.org/wiki/Aversives
In psychology, aversives are unpleasant stimuli that induce changes in behavior via negative reinforcement or positive punishment. By applying an aversive immediately before or after a behavior, the likelihood of the target behavior occurring in the future may be reduced. Aversives can vary from being slightly unpleasant or irritating to physically, psychologically and/or emotionally damaging. Types of stimuli There are two types of aversive stimuli: Unconditioned Unconditioned aversive stimuli naturally result in pain or discomfort and are often associated with biologically harmful or damaging substances or events. Examples include extreme heat or cold, bitter flavors, electric shocks, loud noises and pain. Aversives can be applied naturally (such as touching a hot stove) or in a contrived manner (such as during torture or behavior modification). Conditioned A conditioned aversive stimulus is an initially neutral stimulus that becomes aversive after repeated pairing with an unconditioned aversive stimulus. This type of stimulus would include consequences such as verbal warnings, gestures or even the sight of an individual who is disliked. Use in applied behavior analysis (ABA) Aversives may be used as punishment or negative reinforcement during applied behavior analysis. In early years, the use of aversives was represented as a less restrictive alternative to the methods used in mental institutions, such as shock treatment, hydrotherapy, straitjacketing and frontal lobotomies. Early iterations of the Lovaas technique incorporated aversives, though Lovaas later abandoned their use. Over time the use of aversives has become less common, though they were still in use as of 2021. Several national and international disability rights groups have spoken against the use of aversive therapies, including TASH and the Autism National Committee (known as AUTCOM). Although it has generally fallen out of favor, at least one institution continues to use electric shocks on the skin as an aversive; a ruling in 2018 supported its continued use. The FDA has made a commitment to ban its use, but as of January 2019 had not yet done so. A report from the Food and Drug Administration found that "the literature contains reports that when health care providers have resorted to punishers... the addition of punishers proved no more successful than [Positive behavioral support]-only techniques... Reflecting this trend, a 2008 survey of members of the Association for Behavior Analysis found that providers generally view punishment procedures as having more negative side effects and being less successful than reinforcement procedures." The Behavior Analyst Certification Board has stated its support for the use of aversives on children with the consent of a parent or guardian. Opposition The use of aversives in applied behavior analysis is opposed by many advocacy groups for people with disabilities. These include: Autistic Self Advocacy Network Arc of the United States Aspies For Freedom Autism Network International See also Carrot and stick Extinction (psychology) Pavlovian-instrumental transfer References External links Aversive stimulation at an education wiki Behavioral concepts Punishment Torture
Aversives
[ "Biology" ]
616
[ "Behavior", "Behavioral concepts", "Behaviorism" ]
1,004,285
https://en.wikipedia.org/wiki/Java%20Heterogeneous%20Distributed%20Computing
Java Heterogeneous Distributed Computing refers to a programmable Java distributed system developed at the National University of Ireland, Maynooth. It allows researchers to access the spare clock cycles of a large number of semi-idle desktop PCs. It also allows multiple problems to be processed in parallel, with sophisticated scheduling mechanisms controlling the system. It has been used successfully to tackle problems in the areas of bioinformatics, biomedical engineering and cryptography. It is an open source project licensed under the GPL. See also List of volunteer computing projects Distributed computing Java External links Heterogeneous Java Distributed Computing Distributed computing projects Software using the GNU General Public License
Java Heterogeneous Distributed Computing
[ "Engineering" ]
136
[ "Distributed computing projects", "Information technology projects" ]
1,004,372
https://en.wikipedia.org/wiki/Changing%20room
A changing room, locker room (usually in a sports, theater, or staff context), or changeroom (regional use) is a room or area designated for changing one's clothes. Changing rooms are provided in a semi-public situation to enable people to change clothes with varying degrees of privacy. A fitting room, or dressing room, is a room where people try on clothes, such as in a department store. Separate changing rooms may be provided for men and women, or there may be a non-gender-specific open space with individual cubicles or stalls, as with unisex public toilets. Many changing rooms include toilets, sinks and showers. Sometimes a changing room exists as a small portion of a restroom/washroom. For example, the men's and women's washrooms in Toronto's Yonge–Dundas Square (which includes a water play area) each include a change area which is a blank counter space at the end of a row of sinks. In this case, the facility is primarily a washroom, and its use as a changing room is minimal, since only a small percentage of users change into bathing suits. Sometimes a person may change their clothes in a toilet cubicle of a washroom. Larger changing rooms are usually found at public beaches, or other bathing areas, where most of the space is for changing, and minimal washroom space is included. Beach-style changing rooms are often large open rooms with benches against the walls. Some do not have a roof, providing just the barrier necessary to prevent people outside from seeing in. Types Various types of changing rooms exist: Changing stalls are small stalls where clothes can be changed in privacy. They are used for any physical activity. Locker rooms are usually gender-specific spaces where clothes are changed and stored in lockers. They are often used for swimming or other sporting purposes. They are open spaces with no stalls. These rooms include toilets, sinks, and showers. Fitting rooms, or dressing rooms, are usually small single-user cubicles where a person may try on clothes. These are often found at retail stores where one would want to try on clothes before purchasing them. Changing stalls Changing stalls are small stalls where clothes can be changed in privacy. Clothes are usually stored in lockers. There are usually no separate areas for men and women. They are often combined with gender-separated communal showers. Most public pools have changing facilities of this kind alongside communal changing rooms. Some other places, such as fitness centers, also offer these changing stalls. Communal changing rooms Locker rooms are thus named because they provide lockers for the storage of one's belongings. Alternatively, they may have a locker room attendant who will keep a person's belongings until one comes to retrieve them. Locker rooms are usually open spaces where people change together, but there are separate areas, or separate locker rooms, for men and women. Sometimes they are used in swimming complexes. Locking devices used in locker rooms have traditionally been key or coin lockers, or lockers that are secured with a combination lock. Newer locker rooms may be automated, with robotic machines to store clothes, and with such features as a fingerprint scanner for enrollment and later retrieval. Locker rooms in some water parks use a microchip-equipped wristband. The same wristband that unlocks the lockers can be used to purchase food, drinks and other items in the water park. Some communal changing rooms are only supposed to be used by groups of persons, not individuals. 
In this case, there may be no lockers. Instead, the entire room is locked in order to protect belongings from theft. Locker rooms are also used in many middle schools and high schools. Most of them include showers for use after physical education. At an outdoor sports facility, the changing rooms may be integrated into a pavilion or clubhouse, with other facilities such as seating or a bar. (Store) fitting rooms Fitting rooms, or dressing rooms, are rooms where people try on clothes, such as in a department store. The rooms are usually individual rooms in which a person tries on clothes to determine fit before making a purchase. People do not always use the fitting rooms to change, as to change implies to remove one set of clothes and put on another. Sometimes a person chooses to try on clothes over their clothes (such as sweaters or coats), but would still like to do this in private. Thus fitting rooms may be used for changing, or just for fitting without changing. Rules and conventions Retail establishments often post rules such as a maximum number of items allowed in the changing room, e.g. "no more than 4 items allowed in changing room". History It appears that the first store fitting rooms arose with the spread of department stores. Émile Zola noted their existence in his novel Au Bonheur des Dames (1883), and that they were then forbidden to men. Some years later, when Henri Gervex painted Jeanne Paquin in 1906, that was no longer the case. In any case, Buster Keaton worked in one in the 1928 American silent comedy The Cameraman. Since then, they have continued to provide comic scenes in films, for example in the 1995 French film Les Trois Frères. Dressing rooms (domestic) Some homes may have dedicated rooms solely for the purpose of dressing and changing clothes, typically with fitted wardrobes. In larger Victorian houses it was common to have a private room called a boudoir for the lady of the house, accessible from the bedroom, and also a dressing room for the gentleman (and sometimes a man's cabinet). Security Because of the privacy afforded by changing rooms, they create a problem in the trade-off between security and privacy, wherein it may be possible for crime to be perpetrated by people using the cover of privacy to sell drugs, or to steal clothing from a department store. Some department stores have security cameras in the changing rooms. Communal changing rooms pose less of a risk of theft than fitting rooms, because there is not total privacy. In particular, the perpetrator of a crime would not know whether or not other users might be undercover police or security guards. Many modern changing rooms have labyrinth-style entrances with no door, so that people outside cannot see in, but security can walk in at any time without the sound of an opening door alerting persons inside. Washrooms in which changing clothes is merely a secondary purpose often also have such labyrinth openings. Many washrooms have security cameras in the main area with a view of the sinks and the urinals from a viewing angle that shows only the back of a user. However, when a washroom is located near a fountain, wading pool, or the like, and is likely to be used for changing clothes, some believe that washroom surveillance cameras would be a violation of privacy. Another security risk is theft. 
Sometimes no method of securing items is provided, but even lockable lockers or baskets are usually designed for only minimal security, allowing experienced thieves to steal the valuable items which people typically have with them before changing. Changing room operators frequently post signs disclaiming responsibility for stolen items, which can discourage but not eliminate claims for negligence. See also Unisex changing rooms Virtual dressing room References External links Rooms Bathing Sex segregation
Changing room
[ "Engineering" ]
1,479
[ "Rooms", "Architecture" ]
1,004,397
https://en.wikipedia.org/wiki/Conical%20measure
A conical measure is a type of laboratory glassware which consists of a conical cup with a notch at the top to allow for the easy pouring of liquids, and graduated markings on the side to allow easy and accurate measurement of volumes of liquid. They may be made of plastic, glass, or borosilicate glass. The intended use of a conical measure usually dictates its construction material. Plastic conical measures, commonly referred to as measuring cups, are used by patients to measure liquid medicaments for oral administration. Glass and borosilicate conical measures are commonly used in compounding by the pharmacy profession. Conical measures are the most commonly used item of glassware in the preparation of extemporaneous medicaments. They are not as precise as graduated cylinders for measuring liquids, but make up for this in terms of easy pouring and the ability to mix solutions within the measure itself. History During his experiments, Abū al-Rayhān al-Bīrūnī (973–1048) invented the conical measure in order to find the ratio between the weight of a substance in air and the weight of water displaced, and to measure accurately the specific weights of gemstones and their corresponding metals, obtaining values very close to modern measurements. References Volumetric instruments Laboratory glassware
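Al-Biruni's displacement method amounts to a simple ratio: the specific gravity is the weight of the sample in air divided by the weight of the water it displaces. The snippet below is a minimal illustration with invented sample figures.

```python
# Sketch of the displacement calculation behind al-Biruni's conical measure:
# specific gravity = weight in air / weight of displaced water.
# The sample figures are invented for illustration.
def specific_gravity(weight_in_air: float, weight_water_displaced: float) -> float:
    return weight_in_air / weight_water_displaced

# A gold sample weighing 96.5 g that displaces 5.0 g of water:
print(specific_gravity(96.5, 5.0))  # 19.3, close to gold's density in g/cm^3
```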
Conical measure
[ "Technology", "Engineering" ]
258
[ "Volumetric instruments", "Measuring instruments" ]
1,004,399
https://en.wikipedia.org/wiki/Human%20decontamination
Human decontamination is the process of removing hazardous materials from the human body, including chemicals, radioactive substances, and infectious material. General principle People suspected of being contaminated are usually separated by sex, and led into a decontamination tent, trailer, or pod, where they shed their potentially contaminated clothes in a strip-down room. They then enter a wash-down room where they are showered. Finally, they enter a drying and re-robing room to be issued clean clothing, a jumpsuit, or other attire. Some more structured facilities include six rooms (strip-down, wash-down, and examination rooms on each of the men's and women's sides). Some facilities, such as those from MODEC, are remotely operable and function like "human car washes". Lathering with soap removes external dust that may contain radioisotopes. When lathering, it is advised to avoid spreading dust deposited on exposed, unclothed areas of skin to areas that are likely still clean. Mass decontamination is the decontamination of large numbers of people. The ACI World Aviation Security Standing Committee has described such a decontamination process, specifically referring to plans by Los Angeles authorities. Hospital decontamination Most hospitals in the United States are prepared to handle a large influx of patients from a terrorist attack. Volunteer hospital decontamination teams are common and trained to set up showers or washing equipment, to wear personal protective equipment, and to ensure the safety of both the victims and the community during the response. From a planning perspective, it must be remembered that first responders in Level A or B personal protective equipment (PPE) will have a limited working duration, typically 20 minutes to 2 hours. Typically these teams use decontamination showers built into the hospital, or tents set up outside, in order to decontaminate individuals. Beyond terrorism incidents, common exposures may be related to factory spills, agricultural incidents, and vehicle accidents. Incidents are common in both urban and rural communities. Hospital decontamination is a component of the Hospital Incident Command System and is required in the standards set forth by the Joint Commission. Decontamination exercises Decontamination exercises are frequently used to test the preparedness of emergency plans and personnel. Exercises are of three types: Tabletop - An exercise held with responsible personnel in which a facilitator relays information about a scenario to the group. The group then discusses the actions they each would take in the given situation. There is no "live response" or use of assets. The tabletop is a low-impact, low-stress method to review emergency plans. Functional - A functional exercise involves the agencies involved in an Emergency Operations Center; a scenario is presented and the players go through the actions they would take if it were a real incident. The exercise tests the technical resources and plans of the Emergency Operations Center. There is no "live response" outside of the Emergency Operations Center. Full Scale - A full-scale exercise is the most involved type of exercise and the most difficult to plan and execute. Full-scale exercises can vary in size from one agency or municipality to multinational exercises such as the US Government-led annual TOPOFF exercise. In a full-scale exercise, a scenario is created and acted out in a real-world manner. 
Responders are expected to act in accordance with established plans, just as they would in a real incident. At times, certain parts of the exercise have to be simulated due to equipment, financial, or safety reasons, which can make the scenario confusing. Full-scale exercises are often used as an opportunity to test and assess an agency's true level of preparedness. Unified command Collaboration among various levels of authority, and among various countries, is required to address bioterror threats, because contamination knows no boundaries. Disease and contamination do not stop at national borders. Thus organizations such as NATO bring together member countries to practice how to contain an outbreak, set up quarantine facilities, and care for displaced persons. Collection of personal belongings for evidence "Dofficers" (decontamination officers in the "doffing" or disrobing area) are often police or military personnel, ready to handle potentially unruly persons who refuse to cooperate with first responders. For example, the U.S. Army Soldier and Biological Chemical Command suggests that: "The entire incident is a crime scene requiring the collection of criminal evidence and suspicious victim belongings. The preservation of a proper chain of custody must be maintained for all evidence. ... patients could be suspects and their belongings may be evidence. ... Direct patients through a detailed decontamination process and deal with potentially unruly patients. ... Enforce order when persons become uncooperative when asked to remove clothing and relinquish personal items." Paul Rega, M.D., FACEP, and Kelly Burkholder-Allen also note, in "The ABCs of Bioterrorism", an additional advantage in decontaminating everyone found at the scene of an incident, because this will help the authorities in searching through everyone's clothes to find suspicious items: "Removal of clothing in the decon procedure has the additional advantage of detecting weapons or a secondary device on a victim or 'pseudo-victim'." Chris Seiple, in "Another Perspective on the Domestic Role of the Military in Consequence Management", suggests that the evidence-gathering process of identifying contaminated people and their belongings should also include video surveillance: "The identification of contaminated victims and their personal effects... Victims are also videotaped as they proceed through the decontamination line. Video Surveillance... Videotaped documentation could later be used in the evidence processes." Although there are obvious privacy concerns with surveillance, one can also argue that, due to the high-risk nature of terrorism, such surveillance is warranted, as it is in other high-risk areas like bathing complexes, where surveillance is often used because of the risk of drowning. In these cases the importance of safety may often be thought to outweigh privacy concerns. Handling uncooperative victims One of the elements that separates a drill from a real-life situation is dealing with panicked or uncooperative victims. Security personnel should be assigned to the area for crowd control and to ensure appropriate flow of individuals in and out of the decontamination area. In a real attack, the perpetrators may be among the victims, or some of the victims may be in possession of contraband, or of evidence that might help law enforcement in solving the crime. 
Another consideration is that perpetrators among the victims might refuse to go through decontamination because this would result in discovery of the contraband they may be hiding. For example, a person with explosives strapped to his or her body, under their clothing, would likely not be so willing to take it off. Such a victim might try to escape, and need to be restrained for decontamination. Separate male and female decontamination officers deal with potentially unruly patients by restraining the hands using flex cuffs, cutting off the shirt, and then removing shoes and pants normally. This usually requires a couple of officers. The Belfast Telegraph describes such a situation: "...holds back hundreds of extras playing traumatised bomb victims. Coated in ash and wrapped up in bandages, these people are staggering around, dazed and confused, like so many shell-shocked World War I soldiers. While troops in riot gear charge forward to reinforce the cordon and use their shields and batons to beat back ... desperately appeal for calm. They ask people to file in an orderly fashion towards the decontamination units being rapidly assembled by fire fighters in inflated orange Chemical Biological Radiation Nuclear (CBRN) suits." See also Battalion Chief Michael Farri: "They bring a law enforcement agency group with them and they have no problem if somebody needs to be restrained with handcuffs or flex cuffs or whatever to keep them from going from the hot zone to a cool zone; whereas the fire department, we are not geared to do that...." Kwame Holman: "Colonel Hammes says his Marines are trained to handle uncooperative people. ... If they're really hysterical, there's some simple techniques from this program called Marine Martial Arts, that teaches various martial arts skills; there are common techniques that police also use to provide pain compliance-- no permanent damage, just enough to get your attention, and allows us to control you. If you still won't, then we can control in flex cuffs, and then we'll flex cuff decontaminate you. And if you're calm at that point, we turn you loose. If you're still not calm, then the police will be asked to give us a hand." Internal human contamination Radioactive contamination can enter the body through ingestion, inhalation, absorption, or injection. This will result in a committed dose of radiation. For this reason, it is important to use personal protective equipment when working with radioactive materials. Radioactive contamination may also be ingested as the result of eating contaminated plants and animals or drinking contaminated water or milk from exposed animals. Following a major contamination incident, all potential pathways of internal exposure should be considered. Chelation therapy and other treatments exist for internal radionuclide contamination; chelation was used successfully on Harold McCluskey. 
See also Contamination control DeconGel Decontamination foam Disease surveillance Fukushima disaster cleanup Incident Support Unit Mass decontamination Radioactive contamination References External links Cleanroom Technology - The International Journal of Contamination Control Airport shows off human carwash Annual decontamination (de)conference RODS Laboratory TVI Corporation (makes tents that have separate decon corridors for men and women on each side, with a central corridor for nonambulatory patients) First Line Technology - Mass Decontamination, Personal Decontamination Systems and Equipment Airshelter - ACD, manufacturer of mobile, rapid deployment shelters and (mass) decon systems qüb9 Environmental - Manufacturer of container-based mobile, deployable human decontamination platforms Hygiene Security Safety Civil defense Cleanroom technology
Human decontamination
[ "Chemistry" ]
2,109
[ "Cleanroom technology" ]
1,004,401
https://en.wikipedia.org/wiki/Nambu%E2%80%93Goto%20action
The Nambu–Goto action is the simplest invariant action in bosonic string theory, and is also used in other theories that investigate string-like objects (for example, cosmic strings). It is the starting point of the analysis of zero-thickness (infinitely thin) string behaviour, using the principles of Lagrangian mechanics. Just as the action for a free point particle is proportional to its proper time – i.e., the "length" of its world-line – a relativistic string's action is proportional to the area of the sheet which the string traces as it travels through spacetime. It is named after Japanese physicists Yoichiro Nambu and Tetsuo Goto. Background Relativistic Lagrangian mechanics The basic principle of Lagrangian mechanics, the principle of stationary action, is that an object subjected to outside influences will "choose" a path which makes a certain quantity, the action, an extremum. The action is a functional, a mathematical relationship which takes an entire path and produces a single number. The physical path, that which the object actually follows, is the path for which the action is "stationary" (or extremal): any small variation of the path from the physical one does not significantly change the action. (Often, this is equivalent to saying the physical path is the one for which the action is a minimum.) Actions are typically written using Lagrangians, formulas which depend upon the object's state at a particular point in space and/or time. In non-relativistic mechanics, for example, a point particle's Lagrangian is the difference between kinetic and potential energy: L = K − U. The action, often written S, is then the integral of this quantity from a starting time to an ending time: S = ∫ L dt. (Typically, when using Lagrangians, we assume we know the particle's starting and ending positions, and we concern ourselves with the path which the particle travels between those positions.) This approach to mechanics has the advantage that it is easily extended and generalized. For example, we can write a Lagrangian for a relativistic particle, which will be valid even if the particle is traveling close to the speed of light. To preserve Lorentz invariance, the action should only depend upon quantities that are the same for all (Lorentz) observers, i.e. the action should be a Lorentz scalar. The simplest such quantity is the proper time, the time measured by a clock carried by the particle. According to special relativity, all Lorentz observers watching a particle move will compute the same value for the quantity ds² = c²dt² − dx² − dy² − dz², and ds/c is then an infinitesimal proper time. For a point particle not subject to external forces (i.e., one undergoing inertial motion), the relativistic action is S = −mc ∫ ds. World-sheets Just as a zero-dimensional point traces out a world-line on a spacetime diagram, a one-dimensional string is represented by a world-sheet. All world-sheets are two-dimensional surfaces, hence we need two parameters to specify a point on a world-sheet. String theorists use the symbols τ and σ for these parameters. As it turns out, string theories involve higher-dimensional spaces than the 3D world with which we are familiar; bosonic string theory requires 25 spatial dimensions and one time axis. If d is the number of spatial dimensions, we can represent a point by the vector x = (x¹, x², ..., x^d). We describe a string using functions which map a position in the parameter space (τ, σ) to a point in spacetime. For each value of τ and σ, these functions specify a unique spacetime vector X^μ(τ, σ). The functions X^μ determine the shape which the world-sheet takes. 
Different Lorentz observers will disagree on the coordinates they assign to particular points on the world-sheet, but they must all agree on the total proper area which the world-sheet has. The Nambu–Goto action is chosen to be proportional to this total proper area. Let η_μν be the metric on the (d+1)-dimensional spacetime. Then g_ab = η_μν ∂_a X^μ ∂_b X^ν is the induced metric on the world-sheet, where the indices a, b run over the world-sheet coordinates (τ, σ) and ∂_a denotes the corresponding partial derivative. For the area A of the world-sheet the following holds: A = ∫ dτ dσ √(−g), where g = det(g_ab). Using the notation Ẋ = ∂X/∂τ and X′ = ∂X/∂σ, one can rewrite the determinant of the metric g_ab as −g = (Ẋ · X′)² − (Ẋ)²(X′)², and the Nambu–Goto action is defined as S = −(T₀/c) ∫ dτ dσ √((Ẋ · X′)² − (Ẋ)²(X′)²), where Ẋ · X′ = η_μν Ẋ^μ X′^ν. The factors before the integral give the action the correct units, energy multiplied by time. T₀ is the tension in the string, and c is the speed of light. Typically, string theorists work in "natural units" where c is set to 1 (along with the reduced Planck constant ħ and the Newtonian constant of gravitation G). Also, partly for historical reasons, they use the "slope parameter" α′ instead of T₀, with T₀ = 1/(2πα′) in these units. With these changes, the Nambu–Goto action becomes S = −(1/(2πα′)) ∫ dτ dσ √((Ẋ · X′)² − (Ẋ)²(X′)²). These two forms are, of course, entirely equivalent: choosing one over the other is a matter of convention and convenience. Two further equivalent forms exist (on shell but not off shell). The conjugate momentum field is P_μ = ∂𝓛/∂Ẋ^μ. Then P · X′ = 0 is a primary constraint. The secondary constraint is P² + T₀²(X′)² = 0. These constraints generate timelike diffeomorphisms and spacelike diffeomorphisms on the worldsheet. The Hamiltonian H = ∫ dσ (P · Ẋ − 𝓛) vanishes identically. The extended Hamiltonian is given by H = ∫ dσ (λ₁ P · X′ + λ₂ (P² + T₀²(X′)²)), where λ₁ and λ₂ are Lagrange multipliers. The equations of motion satisfy the Virasoro constraints (P + T₀X′)² = 0 and (P − T₀X′)² = 0. Typically, the Nambu–Goto action does not yet have the form appropriate for studying the quantum physics of strings. For this it must be modified in a similar way as the action of a point particle. That is classically equal to minus mass times the invariant length in spacetime, but must be replaced by a quadratic expression with the same classical value. For strings the analog correction is provided by the Polyakov action, which is classically equivalent to the Nambu–Goto action, but gives the 'correct' quantum theory. It is, however, possible to develop a quantum theory from the Nambu–Goto action in the light cone gauge. See also Dirac membrane References Further reading Ortín, Tomás, Gravity and Strings, Cambridge Monographs, Cambridge University Press (2004). . String theory
Nambu–Goto action
[ "Astronomy" ]
1,261
[ "String theory", "Astronomical hypotheses" ]
1,004,417
https://en.wikipedia.org/wiki/Satiety
Satiety (/səˈtaɪ.ə.ti/ sə-TYE-ə-tee) is a state or condition of fullness gratified beyond the point of satisfaction, the opposite of hunger. Following satiation (meal termination), satiety is a feeling of fullness lasting until the next meal. When food is present in the GI tract after a meal, satiety signals overrule hunger signals, but satiety slowly fades as hunger increases. The satiety center in animals is located in the ventromedial nucleus of the hypothalamus. Mechanism Satiety is signaled through the vagus nerve as well as circulating hormones. During intake of a meal, the stomach must stretch to accommodate the increased volume. This gastric accommodation activates stretch receptors in the proximal (upper) portion of the stomach. These receptors then signal through afferent vagus nerve fibers to the hypothalamus, increasing satiety. Signalling factors In addition, as the food moves into the duodenum, duodenal cells release multiple substances that affect digestion and satiety. Glucagon-like peptide-1 (GLP-1) is an incretin released by the duodenum that inhibits relaxation of the stomach. This inhibition causes increased stretch of the stomach, increasing activation of proximal gastric stretch receptors. It also slows overall gut motility, increasing the duration of satiety. This effect is used to increase weight loss and treat obesity through GLP-1 agonists. Cholecystokinin (CCK) is a gut peptide produced by the duodenum in response to fat and proteins. CCK has the effect of slowing gut motility and increasing satiety, as well as activating release of pancreatic digestive enzymes and bile from the gallbladder. See also Ghrelin Satiety value Prader–Willi syndrome References Digestive system Neuropsychology Nutritional physiology
Satiety
[ "Biology" ]
417
[ "Digestive system", "Organ systems" ]
1,004,474
https://en.wikipedia.org/wiki/Brunner%27s%20glands
Brunner's glands (or duodenal glands) are compound tubuloalveolar submucosal glands found in that portion of the duodenum proximal to the hepatopancreatic sphincter (i.e. the sphincter of Oddi). For decades, it was believed that the main function of the glands was to secrete alkaline (bicarbonate-containing) mucus in order to: protect the duodenum from the acidic content of chyme (which enters the duodenum from the stomach), provide an alkaline environment which promotes the activity of intestinal enzymes, and lubricate the intestinal walls. However, more recent studies have demonstrated that Brunner's glands actually act as major modulators of the gut microbiome and systemic immunity. They are the distinguishing feature of the duodenum, and are named for the Swiss physician who first described them, Johann Conrad Brunner. Structure Duodenal glands are situated within the mucosa and submucosa of the duodenum. They are most abundant near the pylorus, growing shorter and sparser distally towards the terminal portion of the duodenum. The duodenum can be distinguished from the jejunum and ileum by the presence of Brunner's glands in the submucosa. Histology Their excretory canals are tortuous, opening at the bases of the villi. Two forms of duodenal glands are distinguished: the external group (which are more voluminous and extend into the duodenal submucosa), and the internal group (which are smaller and are situated within the duodenal mucosa). Function They also secrete epidermal growth factor, which inhibits parietal and chief cells of the stomach from secreting acid and their digestive enzymes. This is another form of protection for the duodenum. The Brunner glands, which empty into the intestinal glands, secrete an alkaline fluid composed of mucin, which exerts a physiologic anti-acid function by coating the duodenal epithelium, therefore protecting it from the acid chyme of the stomach. Furthermore, in response to the presence of acid in the duodenum, these glands secrete pepsinogen and urogastrone, which inhibit gastric acid secretion. More recent studies have demonstrated that Brunner's glands are major modulators of the gut microbiome and systemic immunity. Studies conducted by Ivan De Araujo's laboratory revealed that Brunner's gland secretions promote the proliferation of probiotics and protect the host against foreign pathogens. Clinical significance Hyperplasia of Brunner glands with a lesion greater than 1 cm was initially described as a Brunner gland adenoma. Several features of these lesions favor their designation as hamartomas, including the lack of encapsulation; the mixture of acini, smooth muscles, adipose tissue, Paneth cells, and mucosal glands; and the lack of any cell atypia. These hamartomas are rare, with approximately 150 cases described in the literature. It is estimated that they represent approximately 5–10% of benign duodenal tumors. They are variable in size, typically 1–3 cm, with only a few reported cases of lesions larger than 5 cm. Most patients with Brunner gland hamartomas are asymptomatic or have nonspecific complaints such as nausea, bloating, or vague abdominal pain. Most reports in the literature describe local surgical resection of Brunner gland hamartoma via duodenotomy. Increasingly, successful endoscopic resection has been reported and is primarily used for pedunculated Brunner gland hamartomas. The endoscopic approach in selective cases appears to be safe, less invasive, and less costly. 
Consistent with the more recent idea that Brunner's glands influence systemic immunity via the microbiome, patients who had the duodenal bulb removed (where the glands are mostly located) showed greater alterations in immune factors compared to patients having more distal parts of the duodenum removed. See also Peutz–Jeghers syndrome List of distinct cell types in the adult human body References External links - "Digestive System: Alimentary Canal: pyloro/duodenal junction, duodenum" - "Digestive System: Alimentary Canal: pyloro/duodenal junction" - "Digestive System: Alimentary Canal: duodenum, plicae circularis" Digestive system
Brunner's glands
[ "Biology" ]
974
[ "Digestive system", "Organ systems" ]
1,004,486
https://en.wikipedia.org/wiki/Pharmacogenomics
Pharmacogenomics, often abbreviated "PGx," is the study of the role of the genome in drug response. Its name (pharmaco- + genomics) reflects its combining of pharmacology and genomics. Pharmacogenomics analyzes how the genetic makeup of a patient affects their response to drugs. It deals with the influence of acquired and inherited genetic variation on drug response, by correlating DNA mutations (including point mutations, copy number variations, and structural variations) with pharmacokinetic (drug absorption, distribution, metabolism, and elimination), pharmacodynamic (effects mediated through a drug's biological targets), and/or immunogenic endpoints. Pharmacogenomics aims to develop rational means to optimize drug therapy, with regard to the patients' genotype, to achieve maximum efficacy with minimal adverse effects. It is hoped that by using pharmacogenomics, pharmaceutical drug treatments can deviate from what is dubbed the "one-dose-fits-all" approach. Pharmacogenomics also attempts to eliminate trial-and-error in prescribing, allowing physicians to take into consideration their patient's genes, the functionality of these genes, and how this may affect the effectiveness of the patient's current or future treatments (and, where applicable, provide an explanation for the failure of past treatments). Such approaches promise the advent of precision medicine and even personalized medicine, in which drugs and drug combinations are optimized for narrow subsets of patients or even for each individual's unique genetic makeup. Whether used to explain a patient's response (or lack of it) to a treatment, or to act as a predictive tool, it hopes to achieve better treatment outcomes and greater efficacy, and reduce drug toxicities and adverse drug reactions (ADRs). For patients who do not respond to a treatment, alternative therapies can be prescribed that would best suit their requirements. In order to provide pharmacogenomic recommendations for a given drug, two possible types of input can be used: genotyping, or exome or whole genome sequencing. Sequencing provides many more data points, including detection of mutations that prematurely terminate the synthesized protein (early stop codon). Pharmacogenetics vs. pharmacogenomics The term pharmacogenomics is often used interchangeably with pharmacogenetics. Although both terms relate to drug response based on genetic influences, there are differences between the two. Pharmacogenetics is limited to monogenic phenotypes (i.e., single gene-drug interactions). Pharmacogenomics refers to polygenic drug response phenotypes and encompasses transcriptomics, proteomics, and metabolomics. Mechanisms of pharmacogenetic interactions Pharmacokinetics Pharmacokinetics involves the absorption, distribution, metabolism, and elimination of pharmaceuticals. These processes are often facilitated by proteins such as drug transporters or drug-metabolizing enzymes (discussed in depth below). Variation in the DNA loci responsible for producing these proteins can alter their expression or activity so that their functional status changes. An increase, decrease, or loss of function for transporters or metabolizing enzymes can ultimately alter the amount of medication in the body and at the site of action. This may push the drug concentration outside the medication's therapeutic window, resulting in either toxicity or loss of effectiveness. 
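As a rough illustration of how a change in metabolic clearance shifts drug exposure, the following one-compartment pharmacokinetic sketch in Python uses made-up numbers (the dose, volume of distribution, and clearance values are all hypothetical, chosen only to show the direction of the effect); it is not clinical guidance and is not part of the original article:

# One-compartment IV-bolus model: C(t) = (dose / Vd) * exp(-k * t), with k = CL / Vd.
# A loss-of-function variant in a metabolizing enzyme is modeled here simply
# as reduced clearance (CL); all numbers are hypothetical.
import math

def concentration(dose_mg, vd_l, cl_l_per_h, t_h):
    # Plasma concentration (mg/L) at time t_h hours after an IV bolus.
    k = cl_l_per_h / vd_l  # elimination rate constant (1/h)
    return (dose_mg / vd_l) * math.exp(-k * t_h)

def auc(dose_mg, cl_l_per_h):
    # Total exposure: area under the concentration-time curve, AUC = dose / CL.
    return dose_mg / cl_l_per_h

dose, vd = 100.0, 50.0  # 100 mg dose, 50 L volume of distribution (illustrative)
for phenotype, cl in [("normal metabolizer", 10.0), ("poor metabolizer", 2.5)]:
    print(f"{phenotype}: AUC = {auc(dose, cl):.0f} mg*h/L, "
          f"C(12 h) = {concentration(dose, vd, cl, 12.0):.2f} mg/L")

With these illustrative numbers, a four-fold drop in clearance produces a four-fold rise in total exposure (AUC 10 versus 40 mg*h/L), which is the kind of shift that can carry a drug outside its therapeutic window.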
Drug-metabolizing enzymes The majority of clinically actionable pharmacogenetic variation occurs in genes that code for drug-metabolizing enzymes, including those involved in both phase I and phase II metabolism. The cytochrome P450 enzyme family is responsible for the metabolism of 70–80% of all medications used clinically. CYP3A4, CYP2C9, CYP2C19, and CYP2D6 are major CYP enzymes involved in drug metabolism and are all known to be highly polymorphic. Additional drug-metabolizing enzymes that have been implicated in pharmacogenetic interactions include UGT1A1 (a UDP-glucuronosyltransferase), DPYD, and TPMT. Drug transporters Many medications rely on transporters to cross cellular membranes in order to move between body fluid compartments such as the blood, gut lumen, bile, urine, brain, and cerebrospinal fluid. The major transporters include the solute carrier, ATP-binding cassette, and organic anion transporters. Transporters that have been shown to influence response to medications include OATP1B1 (SLCO1B1) and breast cancer resistance protein (BCRP) (ABCG2). Pharmacodynamics Pharmacodynamics refers to the impact a medication has on the body, or its mechanism of action. Drug targets Drug targets are the specific sites where a medication carries out its pharmacological activity. The interaction between the drug and this site results in a modification of the target that may include inhibition or potentiation. Most of the pharmacogenetic interactions that involve drug targets are within the field of oncology and include targeted therapeutics designed to address somatic mutations (see also Cancer Pharmacogenomics). For example, EGFR inhibitors like gefitinib (Iressa) or erlotinib (Tarceva) are only indicated in patients carrying specific mutations to EGFR. Germline mutations in drug targets can also influence response to medications, though this is an emerging subfield within pharmacogenomics. One well-established gene-drug interaction involving a germline mutation to a drug target is warfarin (Coumadin) and VKORC1, which codes for vitamin K epoxide reductase (VKOR). Warfarin binds to and inhibits VKOR, which is an important enzyme in the vitamin K cycle. Inhibition of VKOR prevents reduction of vitamin K, which is a cofactor required in the formation of coagulation factors II, VII, IX and X, and of the coagulation inhibitors protein C and protein S. Off-target sites Medications can have off-target effects (typically unfavorable) that arise from an interaction between the medication and/or its metabolites and a site other than the intended target. Genetic variation in the off-target sites can influence this interaction. The main example of this type of pharmacogenomic interaction is glucose-6-phosphate dehydrogenase (G6PD). G6PD is the enzyme involved in the first step of the pentose phosphate pathway, which generates NADPH (from NADP). NADPH is required for the production of reduced glutathione in erythrocytes and is essential for the function of catalase. Glutathione and catalase protect cells from oxidative stress that would otherwise result in cell lysis. Certain variants in G6PD result in G6PD deficiency, in which cells are more susceptible to oxidative stress. When medications that have a significant oxidative effect are administered to individuals who are G6PD deficient, they are at an increased risk of erythrocyte lysis that presents as hemolytic anemia. 
Immunologic The human leukocyte antigen (HLA) system, also referred to as the major histocompatibility complex (MHC), is a complex of genes important for the adaptive immune system. Mutations in the HLA complex have been associated with an increased risk of developing hypersensitivity reactions in response to certain medications. Clinical pharmacogenomics resources Clinical Pharmacogenetics Implementation Consortium (CPIC) The Clinical Pharmacogenetics Implementation Consortium (CPIC) is "an international consortium of individual volunteers and a small dedicated staff who are interested in facilitating use of pharmacogenetic tests for patient care. CPIC’s goal is to address barriers to clinical implementation of pharmacogenetic tests by creating, curating, and posting freely available, peer-reviewed, evidence-based, updatable, and detailed gene/drug clinical practice guidelines. CPIC guidelines follow standardized formats, include systematic grading of evidence and clinical recommendations, use standardized terminology, are peer-reviewed, and are published in a journal (in partnership with Clinical Pharmacology and Therapeutics) with simultaneous posting to cpicpgx.org, where they are regularly updated." The CPIC guidelines are "designed to help clinicians understand HOW available genetic test results should be used to optimize drug therapy, rather than WHETHER tests should be ordered. A key assumption underlying the CPIC guidelines is that clinical high-throughput and pre-emptive (pre-prescription) genotyping will become more widespread, and that clinicians will be faced with having patients’ genotypes available even if they have not explicitly ordered a test with a specific drug in mind. CPIC's guidelines, processes and projects have been endorsed by several professional societies." U.S. Food and Drug Administration Table of Pharmacogenetic Associations In February 2020 the FDA published the Table of Pharmacogenetic Associations. For the gene-drug pairs included in the table, "the FDA has evaluated and believes there is sufficient scientific evidence to suggest that subgroups of patients with certain genetic variants, or genetic variant-inferred phenotypes (such as affected subgroup in the table below), are likely to have altered drug metabolism, and in certain cases, differential therapeutic effects, including differences in risks of adverse events." "The information in this Table is intended primarily for prescribers, and patients should not adjust their medications without consulting their prescriber. This version of the table is limited to pharmacogenetic associations that are related to drug metabolizing enzyme gene variants, drug transporter gene variants, and gene variants that have been related to a predisposition for certain adverse events. The FDA recognizes that various other pharmacogenetic associations exist that are not listed here, and this table will be updated periodically with additional pharmacogenetic associations supported by sufficient scientific evidence." Table of Pharmacogenomic Biomarkers in Drug Labeling The FDA Table of Pharmacogenomic Biomarkers in Drug Labeling lists FDA-approved drugs with pharmacogenomic information found in the drug labeling. 
"Biomarkers in the table include but are not limited to germline or somatic gene variants (polymorphisms, mutations), functional deficiencies with a genetic etiology, gene expression differences, and chromosomal abnormalities; selected protein biomarkers that are used to select treatments for patients are also included." PharmGKB The Pharmacogenomics Knowledgebase (PharmGKB) is an "NIH-funded resource that provides information about how human genetic variation affects response to medications. PharmGKB collects, curates and disseminates knowledge about clinically actionable gene-drug associations and genotype-phenotype relationships." Commercial Pharmacogenetic Testing Laboratories There are many commercial laboratories around the world who offer pharmacogenomic testing as a laboratory developed test (LDTs). The tests offered can vary significantly from one lab to another, including genes and alleles tested for, phenotype assignment, and any clinical annotations provided. With the exception of a few direct-to-consumer tests, all pharmacogenetic testing requires an order from an authorized healthcare professional. In order for the results to be used in a clinical setting in the United States, the laboratory performing the test much be CLIA-certified. Other regulations may vary by country and state. Direct-to-Consumer Pharmacogenetic Testing Direct-to-consumer (DTC) pharmacogenetic tests allow consumers to obtain pharmacogenetic testing without an order from a prescriber. DTC pharmacogenetic tests are generally reviewed by the FDA to determine the validity of test claims. The FDA maintains a list of DTC genetic tests that have been approved. Common Pharmacogenomic-Specific Nomenclature Genotype There are multiple ways to represent a pharmacogenomic genotype. A commonly used nomenclature system is to report haplotypes using a star (*) allele (e.g., CYP2C19 *1/*2). Single-nucleotide polymorphisms (SNPs) may be described using their assignment reference SNP cluster ID (rsID) or based on the location of the base pair or amino acid impacted. Phenotype In 2017 CPIC published results of an expert survey to standardize terms related to clinical pharmacogenetic test results. Consensus for terms to describe allele functional status, phenotype for drug metabolizing enzymes, phenotype for drug transporters, and phenotype for high-risk genotype status was reached. Applications The list below provides a few more commonly known applications of pharmacogenomics: Improve drug safety, and reduce ADRs; Tailor treatments to meet patients' unique genetic pre-disposition, identifying optimal dosing; Improve drug discovery targeted to human disease; and Improve proof of principle for efficacy trials. Pharmacogenomics may be applied to several areas of medicine, including pain management, cardiology, oncology, and psychiatry. A place may also exist in forensic pathology, in which pharmacogenomics can be used to determine the cause of death in drug-related deaths where no findings emerge using autopsy. In cancer treatment, pharmacogenomics tests are used to identify which patients are most likely to respond to certain cancer drugs. In behavioral health, pharmacogenomic tests provide tools for physicians and care givers to better manage medication selection and side effect amelioration. Pharmacogenomics is also known as companion diagnostics, meaning tests being bundled with drugs. Examples include KRAS test with cetuximab and EGFR test with gefitinib. 
Besides efficacy, germline pharmacogenetics can help to identify patients likely to undergo severe toxicities when given cytotoxics showing impaired detoxification in relation to genetic polymorphisms, the canonical example being 5-FU. In particular, genetic deregulations affecting genes coding for DPD, UGT1A1, TPMT, CDA and CYP2D6 are now considered critical issues for patients treated with 5-FU/capecitabine, irinotecan, mercaptopurine/azathioprine, gemcitabine/capecitabine/AraC and tamoxifen, respectively. In cardiovascular disorders, the main concern is response to drugs including warfarin, clopidogrel, beta blockers, and statins. In patients who take clopidogrel and carry loss-of-function CYP2C19 variants, cardiovascular risk is elevated, which has led to medication package insert updates by regulators. In patients with type 2 diabetes, haptoglobin (Hp) genotyping shows an effect on cardiovascular disease, with Hp2-2 at higher risk and supplemental vitamin E reducing risk by affecting HDL. In psychiatry, as of 2010, research has focused particularly on 5-HTTLPR and DRD2. Clinical implementation Initiatives to spur adoption by clinicians include the Ubiquitous Pharmacogenomics (U-PGx) program in Europe and the Clinical Pharmacogenetics Implementation Consortium (CPIC) in the United States. In a 2017 survey of European clinicians, two-thirds had not ordered a pharmacogenetic test in the prior year. In 2010, Vanderbilt University Medical Center launched the Pharmacogenomic Resource for Enhanced Decisions in Care and Treatment (PREDICT); in a 2015 survey, two-thirds of the clinicians there had ordered a pharmacogenetic test. In 2019, the largest private health insurer, UnitedHealthcare, announced that it would pay for genetic testing to predict response to psychiatric drugs. In 2020, Canada's 4th largest health and dental insurer, Green Shield Canada, announced that it would pay for pharmacogenetic testing and its associated clinical decision support software to optimize and personalize mental health prescriptions. Reduction of polypharmacy A potential role for pharmacogenomics is to reduce the occurrence of polypharmacy: it is theorized that with tailored drug treatments, patients will not need to take several medications to treat the same condition. Thus they could potentially reduce the occurrence of adverse drug reactions, improve treatment outcomes, and save costs by avoiding purchase of some medications. For example, possibly due to inappropriate prescribing, psychiatric patients tend to receive more medications than age-matched non-psychiatric patients. The need for pharmacogenomically tailored drug therapies may be most evident in a survey conducted by the Slone Epidemiology Center at Boston University from February 1998 to April 2007. The study found that 82% of adults in the United States were taking at least one medication (prescription or nonprescription drug, vitamin/mineral, herbal/natural supplement), and 29% were taking five or more. The study suggested that those aged 65 years or older continue to be the biggest consumers of medications, with 17–19% in this age group taking at least ten medications in a given week. Polypharmacy has also been shown to have increased since 2000, from 23% to 29%. Example case studies Case A – Antipsychotic adverse reaction Patient A has schizophrenia. Their treatment included a combination of ziprasidone, olanzapine, trazodone and benztropine. The patient experienced dizziness and sedation, so they were tapered off ziprasidone and olanzapine, and transitioned to quetiapine. 
Trazodone was discontinued. The patient then experienced excessive sweating, tachycardia and neck pain, gained considerable weight and had hallucinations. Five months later, quetiapine was tapered and discontinued, with ziprasidone re-introduced into their treatment due to the excessive weight gain. Although the patient lost the excessive weight they had gained, they then developed muscle stiffness, cogwheeling, tremors and night sweats. When benztropine was added, they experienced blurry vision. After an additional five months, the patient was switched from ziprasidone to aripiprazole. Over the course of 8 months, patient A gradually experienced more weight gain and sedation, and developed difficulty with their gait, stiffness, cogwheeling and dyskinetic ocular movements. A pharmacogenomic test later revealed that the patient had a CYP2D6 *1/*41 genotype, with a predicted intermediate metabolizer (IM) phenotype, and a CYP2C19 *1/*2 genotype, also with a predicted IM phenotype. Case B – Pain Management Patient B is a woman who gave birth by caesarean section. Her physician prescribed codeine for post-caesarean pain. She took the standard prescribed dose, but she experienced nausea and dizziness while she was taking codeine. She also noticed that her breastfed infant was lethargic and feeding poorly. When the patient mentioned these symptoms to her physician, they recommended that she discontinue codeine use. Within a few days, both the patient's and her infant's symptoms were no longer present. It is assumed that if the patient had undergone a pharmacogenomic test, it would have revealed she may have had a duplication of the gene CYP2D6, placing her in the ultra-rapid metabolizer (UM) category, explaining her reactions to codeine use. Case C – FDA Warning on Codeine Overdose for Infants On February 20, 2013, the FDA released a statement addressing a serious concern regarding the connection between children who are known as CYP2D6 UM and fatal reactions to codeine following tonsillectomy and/or adenoidectomy (surgery to remove the tonsils and/or adenoids). They released their strongest Boxed Warning to elucidate the dangers of CYP2D6 UMs consuming codeine. Codeine is converted to morphine by CYP2D6, and those who have UM phenotypes are in danger of producing large amounts of morphine due to the increased function of the gene. The morphine can rise to life-threatening or fatal amounts, as became evident with the death of three children in August 2012. Challenges Although there appears to be a general acceptance of the basic tenet of pharmacogenomics amongst physicians and healthcare professionals, several challenges exist that slow the uptake, implementation, and standardization of pharmacogenomics. Some of the concerns raised by physicians include: Limitations on how to apply the test in clinical practice and treatment; A general feeling of lack of availability of the test; The understanding and interpretation of evidence-based research; Combining test results with other patient data for prescription optimization; and Ethical, legal and social issues. 
Issues surrounding the availability of the test include: The lack of availability of scientific data: Although there are a considerable number of drug-metabolizing enzymes involved in the metabolic pathways of drugs, only a fraction have sufficient scientific data to validate their use within a clinical setting; and Demonstrating the cost-effectiveness of pharmacogenomics: Publications on the pharmacoeconomics of pharmacogenomics are scarce; therefore, sufficient evidence does not at this time exist to validate the cost-effectiveness and cost-consequences of the test. Although other factors contribute to the slow progression of pharmacogenomics (such as developing guidelines for clinical use), the above factors appear to be the most prevalent. Increasingly substantial evidence and industry-body guidelines for the clinical use of pharmacogenetics have made it a population-wide approach to precision medicine. Cost, reimbursement, education, and ease of use at the point of care remain significant barriers to widescale adoption. Controversies Race-based medicine There have been calls to move away from race and ethnicity in medicine and instead use genetic ancestry as a way to categorize patients. Some alleles that vary in frequency between specific populations have been shown to be associated with differential responses to specific drugs. As a result, some disease-specific guidelines only recommend pharmacogenetic testing for populations where high-risk alleles are more common and, similarly, certain insurance companies will only pay for pharmacogenetic testing for beneficiaries of high-risk populations. Genetic exceptionalism In the early 2000s, handling genetic information as exceptional, including legal or regulatory protections, garnered strong support. It was argued that genomic information may need special policy and practice protections within the context of electronic health records (EHRs). In 2008, the Genetic Information Nondiscrimination Act (GINA) was enacted to protect patients from health insurance companies discriminating against an individual based on genetic information. More recently it has been argued that genetic exceptionalism is past its expiration date as we move into a blended genomic/big data era of medicine, yet exceptionalism practices continue to permeate clinical healthcare today. Garrison et al. recently relayed a call to action to update the verbiage from genetic exceptionalism to genomic contextualism, in recognition of a fundamental duality of genetic information. This allows room in the argument for different types of genetic information to be handled differently, while acknowledging that genomic information is similar to and yet distinct from other health-related information. Genomic contextualism would allow for a case-by-case analysis of the technology and the context of its use (e.g., clinical practice, research, secondary findings). Others argue that genetic information is indeed distinct from other health-related information but not to the extent of requiring legal/regulatory protections, similar to other sensitive health-related data such as HIV status. Additionally, Evans et al. argue that the EHR has sufficient privacy standards to hold other sensitive information such as social security numbers and that the fundamental nature of an EHR is to house highly personal information. 
Similarly, a systematic review reported that the public had concerns over the privacy of genetic information, with 60% agreeing that maintaining privacy was not possible; however, 96% agreed that a direct-to-consumer testing company had protected their privacy, with 74% saying their information would be similarly or better protected in an EHR. With increasing technological capabilities in EHRs, it is possible to mask or hide genetic data from subsets of providers, and there is no consensus on how, when, or from whom genetic information should be masked. Rigorous protection and masking of genetic information are argued to impede further scientific progress and clinical translation into routine clinical practices. History Pharmacogenomics was first recognized by Pythagoras around 510 BC, when he connected the dangers of fava bean ingestion with hemolytic anemia and oxidative stress. In the 1950s, this identification was validated and attributed to deficiency of G6PD; the condition is called favism. Although the first official publication was not until 1961, the unofficial beginnings of this science were around the 1950s. Prolonged paralysis and fatal reactions linked to genetic variants in patients who lacked butyrylcholinesterase ('pseudocholinesterase') following succinylcholine injection during anesthesia were first reported in 1956. The term pharmacogenetics was first coined in 1959 by Friedrich Vogel of Heidelberg, Germany (although some papers suggest it was 1957 or 1958). In the late 1960s, twin studies supported the inference of genetic involvement in drug metabolism, with identical twins sharing remarkable similarities in drug response compared to fraternal twins. The term pharmacogenomics first began appearing around the 1990s. The first FDA approval of a pharmacogenetic test was in 2005 (for alleles in CYP2D6 and CYP2C19). Future Computational advances have enabled cheaper and faster sequencing. Research has focused on combinatorial chemistry, genomic mining, omic technologies, and high-throughput screening. As the cost per genetic test decreases, the development of personalized drug therapies will increase. Technology now allows for genetic analysis of hundreds of target genes involved in medication metabolism and response in less than 24 hours for under $1,000. This is a huge step towards bringing pharmacogenetic technology into everyday medical decisions. Likewise, companies like deCODE genetics, MD Labs Pharmacogenetics, Navigenics and 23andMe offer genome scans. The companies use the same genotyping chips that are used in GWAS studies and provide customers with a write-up of individual risk for various traits and diseases and testing for 500,000 known SNPs. Costs range from $995 to $2500 and include updates with new data from studies as they become available. The more expensive packages even include a telephone session with a genetics counselor to discuss the results. Ethics Pharmacogenetics has become a controversial issue in the area of bioethics. Privacy and confidentiality are major concerns. The evidence of benefit or risk from a genetic test may only be suggestive, which could cause dilemmas for providers. Drug development may be affected, with rare genetic variants possibly receiving less research. Access and patient autonomy are also open to discussion. 
Web-based resources See also Genomics Chemogenomics Clinomics Genetic engineering Toxicogenomics Cancer pharmacogenomics Metabolomics Pharmacovigilance Population groups in biomedicine Toxgnostics Medical terminology LOINC SNOMED CT HPO HGVS HL7 FHIR Genetic testing References Further reading External links Journals: Genomics Pharmacology Pharmacy
Pharmacogenomics
[ "Chemistry" ]
5,756
[ "Pharmacology", "Pharmacogenomics", "Medicinal chemistry", "Pharmacy" ]
1,004,679
https://en.wikipedia.org/wiki/Needleman%E2%80%93Wunsch%20algorithm
The Needleman–Wunsch algorithm is an algorithm used in bioinformatics to align protein or nucleotide sequences. It was one of the first applications of dynamic programming to compare biological sequences. The algorithm was developed by Saul B. Needleman and Christian D. Wunsch and published in 1970. The algorithm essentially divides a large problem (e.g. the full sequence) into a series of smaller problems, and it uses the solutions to the smaller problems to find an optimal solution to the larger problem. It is also sometimes referred to as the optimal matching algorithm and the global alignment technique. The Needleman–Wunsch algorithm is still widely used for optimal global alignment, particularly when the quality of the global alignment is of the utmost importance. The algorithm assigns a score to every possible alignment, and the purpose of the algorithm is to find all possible alignments having the highest score. Introduction This algorithm can be used for any two strings. This guide will use two small DNA sequences as examples, as shown in Figure 1:
GCATGCG
GATTACA
Constructing the grid First construct a grid such as one shown in Figure 1 above. Start the first string in the top of the third column and start the other string at the start of the third row. Fill out the rest of the column and row headers as in Figure 1. There should be no numbers in the grid yet. Choosing a scoring system Next, decide how to score each individual pair of letters. Using the example above, one possible alignment candidate might be:
12345678
GCATG-CG
G-ATTACA
The letters may match, mismatch, or be matched to a gap (a deletion or insertion (indel)): Match: The two letters at the current index are the same. Mismatch: The two letters at the current index are different. Indel (Insertion or Deletion): The best alignment involves one letter aligning to a gap in the other string. Each of these scenarios is assigned a score, and the sum of the scores of all the pairings is the score of the whole alignment candidate. Different systems exist for assigning scores; some have been outlined in the Scoring systems section below. For now, the system used by Needleman and Wunsch will be used: Match: +1, Mismatch or Indel: −1. For the example above, the score of the alignment would be 0:
+−++−−+− −> 1*4 + (−1)*4 = 0
Filling in the table Start with a zero in the first row, first column (not including the cells containing nucleotides). Move through the cells row by row, calculating the score for each cell. The score is calculated by comparing the scores of the cells neighboring to the left, top or top-left (diagonal) of the cell and adding the appropriate score for match, mismatch or indel. Take the maximum of the candidate scores for each of the three possibilities: The path from the top or left cell represents an indel pairing, so take the scores of the left and the top cell, and add the score for indel to each of them. The diagonal path represents a match/mismatch, so take the score of the top-left diagonal cell and add the score for match if the corresponding bases (letters) in the row and column are matching, or the score for mismatch if they do not. The resulting score for the cell is the highest of the three candidate scores. Given there are no 'top' or 'top-left' cells for the first row, only the existing cell to the left can be used to calculate the score of each cell. Hence −1 is added for each shift to the right, as this represents an indel from the previous score. This results in the first row being 0, −1, −2, −3, −4, −5, −6, −7. 
The same applies to the first column, as only the existing score above each cell can be used. Thus the resulting table, with the first row and first column filled in, is:
        G   C   A   T   G   C   G
    0  −1  −2  −3  −4  −5  −6  −7
G  −1
A  −2
T  −3
T  −4
A  −5
C  −6
A  −7
The first case with existing scores in all 3 directions is the intersection of our first letters (in this case G and G). The surrounding cells hold the scores 0 (top-left), −1 (top) and −1 (left). This cell has three possible candidate sums: The diagonal top-left neighbor has score 0. The pairing of G and G is a match, so add the score for match: 0+1 = 1. The top neighbor has score −1 and moving from there represents an indel, so add the score for indel: (−1) + (−1) = (−2). The left neighbor also has score −1, represents an indel and also produces (−2). The highest candidate is 1 and is entered into the cell. The cell which gave the highest candidate score must also be recorded. In the completed diagram in figure 1 above, this is represented as an arrow from the cell in row and column 2 to the cell in row and column 1. In the next example, the diagonal step for both X and Y represents a mismatch: X: Top: (−2)+(−1) = (−3); Left: (+1)+(−1) = (0); Top-Left: (−1)+(−1) = (−2). Y: Top: (1)+(−1) = (0); Left: (−2)+(−1) = (−3); Top-Left: (−1)+(−1) = (−2). For both X and Y, the highest score is zero. The highest candidate score may be reached by two of the neighboring cells: Top: (1)+(−1) = (0); Top-Left: (1)+(−1) = (0); Left: (0)+(−1) = (−1). In this case, all directions reaching the highest candidate score must be noted as possible origin cells in the finished diagram in figure 1, e.g. in the cell in row and column 6. Filling in the table in this manner gives the scores of all possible alignment candidates; the score in the cell on the bottom right represents the alignment score for the best alignment. Tracing arrows back to origin Mark a path from the cell on the bottom right back to the cell on the top left by following the direction of the arrows. From this path, the sequence is constructed by these rules: A diagonal arrow represents a match or mismatch, so the letter of the column and the letter of the row of the origin cell will align. A horizontal or vertical arrow represents an indel. Vertical arrows will align a gap ("-") to the letter of the row (the "side" sequence), horizontal arrows will align a gap to the letter of the column (the "top" sequence). If there are multiple arrows to choose from, they represent a branching of the alignments. If two or more branches all belong to paths from the bottom right to the top left cell, they are equally viable alignments. In this case, note the paths as separate alignment candidates. Following these rules, the steps for one possible alignment candidate in figure 1 are:
G → CG → GCG → -GCG → T-GCG → AT-GCG → CAT-GCG → GCAT-GCG
A → CA → ACA → TACA → TTACA → ATTACA → -ATTACA → G-ATTACA
↓ (branch)
→ TGCG → -TGCG → ...
→ TACA → TTACA → ...
Scoring systems Basic scoring schemes The simplest scoring schemes simply give a value for each match, mismatch and indel. The step-by-step guide above uses match = 1, mismatch = −1, indel = −1. Thus the lower the alignment score, the larger the edit distance; for this scoring system one wants a high score. Another scoring system might be: Match = 0, Indel = −1, Mismatch = −1. For this system, the alignment score represents the negative of the edit distance between the two strings. 
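To make the scoring concrete, here is a minimal Python sketch of the column-by-column scoring just described (the function name and parameter defaults are our choices, not part of the original presentation):

def score_alignment(a: str, b: str, match=1, mismatch=-1, indel=-1) -> int:
    # a and b are equal-length aligned strings, with "-" marking gaps.
    assert len(a) == len(b)
    total = 0
    for x, y in zip(a, b):
        if x == "-" or y == "-":
            total += indel
        elif x == y:
            total += match
        else:
            total += mismatch
    return total

# The example alignment from the guide: four matches, four mismatches/indels.
print(score_alignment("GCATG-CG", "G-ATTACA"))           # 0
# With match = 0, mismatch = indel = -1, the score is minus the edit distance.
print(score_alignment("GCATG-CG", "G-ATTACA", match=0))  # -4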
Different scoring systems can be devised for different situations; for example, if gaps are considered very bad for your alignment, you may use a scoring system that penalises gaps heavily, such as:

Match = 1
Indel = −10
Mismatch = −1

Similarity matrix

More complicated scoring systems attribute values not only for the type of alteration, but also for the letters that are involved. For example, a match between A and A may be given 1, but a match between T and T may be given 4. Here (assuming the first scoring system) more importance is given to the Ts matching than the As, i.e. the Ts matching is assumed to be more significant to the alignment. This weighting based on letters also applies to mismatches. In order to represent all the possible combinations of letters and their resulting scores, a similarity matrix is used. The similarity matrix for the most basic system is represented as:

    A   G   C   T
A   1  −1  −1  −1
G  −1   1  −1  −1
C  −1  −1   1  −1
T  −1  −1  −1   1

Each score represents a switch from one of the letters the cell matches to the other. Hence this represents all possible matches and mismatches (for an alphabet of ACGT). Note that all the matches go along the diagonal; also, not all of the table needs to be filled, only one triangle, because the scores are symmetric (the score for A → C equals the score for C → A). If implementing the T-T = 4 rule from above, the following similarity matrix is produced:

    A   G   C   T
A   1  −1  −1  −1
G  −1   1  −1  −1
C  −1  −1   1  −1
T  −1  −1  −1   4

Different scoring matrices have been statistically constructed which give weight to different actions appropriate to a particular scenario. Having weighted scoring matrices is particularly important in protein sequence alignment due to the varying frequency of the different amino acids. There are two broad families of scoring matrices, each with further alterations for specific scenarios:

PAM
BLOSUM

Gap penalty

When aligning sequences there are often gaps (i.e. indels), sometimes large ones. Biologically, a large gap is more likely to occur as one large deletion as opposed to multiple single deletions. Hence two small indels should have a worse score than one large one. The simple and common way to do this is via a large gap-start score for a new indel and a smaller gap-extension score for every letter which extends the indel. For example, new-indel may cost −5 and extend-indel may cost −1. In this way an alignment such as:

GAAAAAAT
G--A-A-T

which has multiple equal alignments, some with multiple small gaps, will now align as:

GAAAAAAT
GAA----T

or any alignment with a single gap of length 4, in preference over multiple small gaps.

Advanced presentation of algorithm

Scores for aligned characters are specified by a similarity matrix. Here, S(a, b) is the similarity of characters a and b. The algorithm uses a linear gap penalty, here called d. For example, if the similarity matrix was

     A   G   C   T
A   10  −1  −3  −4
G   −1   7  −5  −3
C   −3  −5   9   0
T   −4  −3   0   8

then the alignment

AGACTAGTTAC
CGA---GACGT

with a gap penalty of −5 would have the following score:

S(A,C) + S(G,G) + S(A,A) + 3 × d + S(G,G) + S(T,A) + S(T,C) + S(A,G) + S(C,T)
= −3 + 7 + 10 − (3 × 5) + 7 + (−4) + 0 + (−1) + 0 = 1

To find the alignment with the highest score, a two-dimensional array (or matrix) F is allocated. The entry in row i and column j is denoted here by F(i, j). There is one row for each character in sequence A, and one column for each character in sequence B. Thus, if aligning sequences of sizes n and m, the amount of memory used is in O(nm). Hirschberg's algorithm only holds a subset of the array in memory and uses O(min{n, m}) space, but is otherwise similar to Needleman–Wunsch (and still requires O(nm) time). As the algorithm progresses, F(i, j) will be assigned to be the optimal score for the alignment of the first i characters in A and the first j characters in B.
The principle of optimality is then applied as follows:

Basis:
F(0, 0) = 0
F(i, 0) = d × i
F(0, j) = d × j

Recursion, based on the principle of optimality:
F(i, j) = max( F(i−1, j−1) + S(Ai, Bj), F(i−1, j) + d, F(i, j−1) + d )

The pseudo-code for the algorithm to compute the F matrix therefore looks like this:

d ← Gap penalty score
for i = 0 to length(A)
  F(i,0) ← d * i
for j = 0 to length(B)
  F(0,j) ← d * j
for i = 1 to length(A)
  for j = 1 to length(B)
  {
    Match ← F(i−1, j−1) + S(Ai, Bj)
    Delete ← F(i−1, j) + d
    Insert ← F(i, j−1) + d
    F(i,j) ← max(Match, Insert, Delete)
  }

Once the F matrix is computed, the entry F(length(A), length(B)) gives the maximum score among all possible alignments. To compute an alignment that actually gives this score, you start from the bottom right cell, and compare the value with the three possible sources (Match, Insert, and Delete above) to see which it came from. If Match, then Ai and Bj are aligned, if Delete, then Ai is aligned with a gap, and if Insert, then Bj is aligned with a gap. (In general, more than one choice may have the same value, leading to alternative optimal alignments.)

AlignmentA ← ""
AlignmentB ← ""
i ← length(A)
j ← length(B)
while (i > 0 or j > 0)
{
  if (i > 0 and j > 0 and F(i, j) == F(i−1, j−1) + S(Ai, Bj))
  {
    AlignmentA ← Ai + AlignmentA
    AlignmentB ← Bj + AlignmentB
    i ← i − 1
    j ← j − 1
  }
  else if (i > 0 and F(i, j) == F(i−1, j) + d)
  {
    AlignmentA ← Ai + AlignmentA
    AlignmentB ← "−" + AlignmentB
    i ← i − 1
  }
  else
  {
    AlignmentA ← "−" + AlignmentA
    AlignmentB ← Bj + AlignmentB
    j ← j − 1
  }
}

Complexity

Computing the score for each cell in the table is an O(1) operation. Thus the time complexity of the algorithm for two sequences of length n and m is O(mn). It has been shown that it is possible to improve the running time to O(mn / log n) using the Method of Four Russians. Since the algorithm fills an n × m table, the space complexity is O(mn).

Historical notes and algorithm development

The original purpose of the algorithm described by Needleman and Wunsch was to find similarities in the amino acid sequences of two proteins. Needleman and Wunsch describe their algorithm explicitly for the case when the alignment is penalized solely by the matches and mismatches, and gaps have no penalty (d = 0). The original publication from 1970 suggests a recursion in which each cell maximizes over all cells in the preceding row and column, of the form

F(i, j) = H(i, j) + max( max over k < i of F(k, j−1), max over l < j of F(i−1, l) ),

where H(i, j) is the match/mismatch score of the i-th and j-th characters. The corresponding dynamic programming algorithm takes cubic time. The paper also points out that the recursion can accommodate arbitrary gap penalization formulas:

A penalty factor, a number subtracted for every gap made, may be assessed as a barrier to allowing the gap. The penalty factor could be a function of the size and/or direction of the gap. [page 444]

A better dynamic programming algorithm with quadratic running time for the same problem (no gap penalty) was introduced later by David Sankoff in 1972. Similar quadratic-time algorithms were discovered independently by T. K. Vintsyuk in 1968 for speech processing ("time warping"), and by Robert A. Wagner and Michael J. Fischer in 1974 for string matching. Needleman and Wunsch formulated their problem in terms of maximizing similarity. Another possibility is to minimize the edit distance between sequences, introduced by Vladimir Levenshtein. Peter H. Sellers showed in 1974 that the two problems are equivalent. The Needleman–Wunsch algorithm is still widely used for optimal global alignment, particularly when the quality of the global alignment is of the utmost importance. However, the algorithm is expensive with respect to time and space, proportional to the product of the lengths of the two sequences, and hence is not suitable for long sequences.
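The pseudocode above translates almost line-for-line into a runnable program. The following Python sketch (function and variable names are illustrative; simple match/mismatch scores stand in for a full similarity matrix) computes the F matrix and traces back one optimal alignment:

def needleman_wunsch(A, B, match=1, mismatch=-1, d=-1):
    """Global alignment of strings A and B with linear gap penalty d."""
    n, m = len(A), len(B)
    S = lambda a, b: match if a == b else mismatch
    # Fill the (n+1) x (m+1) score matrix F, as in the pseudocode.
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        F[i][0] = d * i
    for j in range(m + 1):
        F[0][j] = d * j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            F[i][j] = max(F[i - 1][j - 1] + S(A[i - 1], B[j - 1]),  # match/mismatch
                          F[i - 1][j] + d,                          # delete
                          F[i][j - 1] + d)                          # insert
    # Trace back from the bottom-right cell to recover one optimal alignment.
    alignA, alignB, i, j = "", "", n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and F[i][j] == F[i - 1][j - 1] + S(A[i - 1], B[j - 1]):
            alignA, alignB, i, j = A[i - 1] + alignA, B[j - 1] + alignB, i - 1, j - 1
        elif i > 0 and F[i][j] == F[i - 1][j] + d:
            alignA, alignB, i = A[i - 1] + alignA, "-" + alignB, i - 1
        else:
            alignA, alignB, j = "-" + alignA, B[j - 1] + alignB, j - 1
    return F[n][m], alignA, alignB

score, a, b = needleman_wunsch("GCATGCG", "GATTACA")
print(score)  # 0 for the example sequences from the guide
print(a)      # one optimal alignment, e.g. GCATG-CG
print(b)      # and its counterpart,   e.g. G-ATTACA

Because ties in the maximum are broken in a fixed order here, the function returns just one of the equally viable alignments discussed above.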
Recent development has focused on improving the time and space cost of the algorithm while maintaining quality. For example, in 2013, the Fast Optimal Global Sequence Alignment Algorithm (FOGSAA) was suggested as a way to align nucleotide/protein sequences faster than other optimal global alignment methods, including the Needleman–Wunsch algorithm. The paper claims that when compared to the Needleman–Wunsch algorithm, FOGSAA achieves a time gain of 70–90% for highly similar nucleotide sequences (with > 80% similarity), and 54–70% for sequences having 30–80% similarity.

Applications outside bioinformatics

Computer stereo vision

Stereo matching is an essential step in the process of 3D reconstruction from a pair of stereo images. When images have been rectified, an analogy can be drawn between aligning nucleotide and protein sequences and matching pixels belonging to scan lines, since both tasks aim at establishing optimal correspondence between two strings of characters. Although in many applications image rectification can be performed, e.g. by camera resectioning or calibration, it is sometimes impossible or impractical, since the computational cost of accurate rectification models prohibits their usage in real-time applications. Moreover, none of these models is suitable when a camera lens displays unexpected distortions, such as those generated by raindrops, weatherproof covers or dust. By extending the Needleman–Wunsch algorithm, a line in the 'left' image can be associated to a curve in the 'right' image by finding the alignment with the highest score in a three-dimensional array (or matrix). Experiments demonstrated that such extension allows dense pixel matching between unrectified or distorted images.

See also

Wagner–Fischer algorithm
Smith–Waterman algorithm
Sequence mining
Levenshtein distance
Dynamic time warping
Sequence alignment

References

External links

NW-align: A protein sequence-to-sequence alignment program by Needleman-Wunsch algorithm (online server and source code)
A live Javascript-based demo of Needleman–Wunsch
An interactive Javascript-based visual explanation of Needleman-Wunsch Algorithm
Sequence Alignment Techniques at Technology Blog
Biostrings R package implementing Needleman–Wunsch algorithm among others

Bioinformatics algorithms Sequence alignment algorithms Computational phylogenetics Dynamic programming Articles with example pseudocode
Needleman–Wunsch algorithm
[ "Biology" ]
3,779
[ "Genetics techniques", "Computational phylogenetics", "Bioinformatics algorithms", "Bioinformatics", "Phylogenetics" ]
1,004,743
https://en.wikipedia.org/wiki/Similarity%20measure
In statistics and related fields, a similarity measure or similarity function or similarity metric is a real-valued function that quantifies the similarity between two objects. Although no single definition of a similarity exists, usually such measures are in some sense the inverse of distance metrics: they take on large values for similar objects and either zero or a negative value for very dissimilar objects. In broader terms, though, a similarity function may also satisfy metric axioms.

Cosine similarity is a commonly used similarity measure for real-valued vectors, used in (among other fields) information retrieval to score the similarity of documents in the vector space model. In machine learning, common kernel functions such as the RBF kernel can be viewed as similarity functions.

Use of different similarity measure formulas

Different types of similarity measures exist for various types of objects, depending on the objects being compared. For each type of object there are various similarity measurement formulas.

Similarity between two data points

There are many options available when it comes to finding the similarity between two data points, some of which are combinations of other similarity methods. Some of the methods for similarity measures between two data points include Euclidean distance, Manhattan distance, Minkowski distance, and Chebyshev distance. The Euclidean distance formula is used to find the straight-line distance between two points on a plane. Manhattan distance is commonly used in GPS applications, as it can be used to find the shortest route between two addresses. Generalizing the Euclidean and Manhattan distance formulas yields the Minkowski distance formula, which can be used in a wide variety of applications.

Euclidean distance
Manhattan distance
Minkowski distance
Chebyshev distance

Similarity between strings

For comparing strings, there are various measures of string similarity that can be used. Some of these methods include edit distance, Levenshtein distance, Hamming distance, and Jaro distance. The best-fit formula is dependent on the requirements of the application. For example, edit distance is frequently used for natural language processing applications and features, such as spell-checking. Jaro distance is commonly used in record linkage to compare first and last names to other sources.

Edit distance
Levenshtein distance
Lee distance
Hamming distance
Jaro distance

Similarity between two probability distributions

Typical measures of similarity for probability distributions are the Bhattacharyya distance and the Hellinger distance. Both provide a quantification of similarity for two probability distributions on the same domain, and they are mathematically closely linked. The Bhattacharyya distance does not fulfill the triangle inequality, meaning it does not form a metric. The Hellinger distance does form a metric on the space of probability distributions.

Bhattacharyya distance
Hellinger distance

Similarity between two sets

The Jaccard index formula measures the similarity between two sets based on the number of items that are present in both sets relative to the total number of items. It is commonly used in recommendation systems and social media analysis. The Sørensen–Dice coefficient also compares the number of items in both sets to the total number of items present, but the weight for the number of shared items is larger.
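As an illustration of these set-based measures, the following Python sketch computes both for two small sets (the gene sets here are made-up examples for demonstration, not data from any study):

def jaccard(a, b):
    """Jaccard index: |A intersect B| / |A union B| — shared items relative to all items."""
    return len(a & b) / len(a | b)

def sorensen_dice(a, b):
    """Sørensen–Dice: 2|A intersect B| / (|A| + |B|) — shared items weigh more."""
    return 2 * len(a & b) / (len(a) + len(b))

genes_x = {"BRCA1", "TP53", "EGFR", "MYC"}
genes_y = {"TP53", "EGFR", "KRAS"}
print(jaccard(genes_x, genes_y))        # 2 shared / 5 total = 0.4
print(sorensen_dice(genes_x, genes_y))  # 2*2 / (4+3) ≈ 0.571

Note that the Dice value exceeds the Jaccard value for the same pair of sets, reflecting the larger weight it gives to shared items.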
The Sørensen–Dice coefficient is commonly used in biology applications, measuring the similarity between two sets of genes or species.

Jaccard index
Sørensen–Dice coefficient

Similarity between two sequences

When comparing temporal sequences (time series), some similarity measures must additionally account for similarity of two sequences that are not fully aligned.

Dynamic time warping

Use in clustering

Clustering or cluster analysis is a data mining technique that is used to discover patterns in data by grouping similar objects together. It involves partitioning a set of data points into groups or clusters based on their similarities. One of the fundamental aspects of clustering is how to measure similarity between data points. Similarity measures play a crucial role in many clustering techniques, as they are used to determine how closely related two data points are and whether they should be grouped together in the same cluster. A similarity measure can take many different forms depending on the type of data being clustered and the specific problem being solved.

One of the most commonly used similarity measures is the Euclidean distance, which is used in many clustering techniques including K-means clustering and hierarchical clustering. The Euclidean distance is a measure of the straight-line distance between two points in a high-dimensional space. It is calculated as the square root of the sum of the squared differences between the corresponding coordinates of the two points. For example, if we have two data points (x1, y1) and (x2, y2), the Euclidean distance between them is d = sqrt((x2 − x1)^2 + (y2 − y1)^2).

Another commonly used similarity measure is the Jaccard index or Jaccard similarity, which is used in clustering techniques that work with binary data such as presence/absence data or Boolean data. The Jaccard similarity is particularly useful for clustering techniques that work with text data, where it can be used to identify clusters of similar documents based on their shared features or keywords. It is calculated as the size of the intersection of two sets divided by the size of the union of the two sets: J(A, B) = |A ∩ B| / |A ∪ B|. Similarities among 162 relevant nuclear profiles were tested using the Jaccard similarity measure (see figure with heatmap). The Jaccard similarity of the nuclear profiles ranges from 0 to 1, with 0 indicating no similarity between the two sets and 1 indicating perfect similarity, with the aim of clustering the most similar nuclear profiles.

Manhattan distance, also known as taxicab geometry, is a commonly used similarity measure in clustering techniques that work with continuous data. It is a measure of the distance between two data points in a high-dimensional space, calculated as the sum of the absolute differences between the corresponding coordinates of the two points: d = |x2 − x1| + |y2 − y1|.

When dealing with mixed-type data, including nominal, ordinal, and numerical attributes per object, Gower's distance (or similarity) is a common choice as it can handle different types of variables implicitly. It first computes similarities between the pair of variables in each object, and then combines those similarities into a single weighted average per object-pair. As such, for two objects i and j having p descriptors, the similarity S is defined as:

S(i, j) = ( sum from k = 1 to p of w_k × s_ijk ) / ( sum from k = 1 to p of w_k )

where the w_k are non-negative weights and s_ijk is the similarity between the two objects regarding their k-th variable.

In spectral clustering, a similarity, or affinity, measure is used to transform data to overcome difficulties related to lack of convexity in the shape of the data distribution.
The measure gives rise to an n × n similarity (affinity) matrix for a set of n points, where the entry (i, j) in the matrix can be simply the (reciprocal of the) Euclidean distance between points i and j, or it can be a more complex measure of distance such as the Gaussian e^(−‖s_i − s_j‖^2 / 2σ^2). Further modifying this result with network analysis techniques is also common.

The choice of similarity measure depends on the type of data being clustered and the specific problem being solved. For example, working with continuous data such as gene expression data, the Euclidean distance or cosine similarity may be appropriate. If working with binary data such as the presence of a genomic locus in a nuclear profile, the Jaccard index may be more appropriate. Lastly, working with data that is arranged in a grid or lattice structure, such as image or signal processing data, the Manhattan distance is particularly useful for the clustering.

Use in recommender systems

Similarity measures are used to develop recommender systems, which observe a user's perception and liking of multiple items. In recommender systems, the method uses a distance calculation such as Euclidean distance or cosine similarity to generate a similarity matrix with values representing the similarity of any pair of targets. Then, by analyzing and comparing the values in the matrix, it is possible to match two targets to a user's preference or link users based on their marks. In this system, it is relevant to observe the value itself and the absolute distance between two values. Gathering this data can indicate a mark's likeliness to a user as well as how closely two marks are mutually rejected or accepted. It is possible then to recommend to a user targets with high similarity to the user's likes. Recommender systems are observed in multiple online entertainment platforms, in social media and streaming websites. The logic for the construction of these systems is based on similarity measures.

Use in sequence alignment

Similarity matrices are used in sequence alignment. Higher scores are given to more-similar characters, and lower or negative scores for dissimilar characters.

Nucleotide similarity matrices are used to align nucleic acid sequences. Because there are only four nucleotides commonly found in DNA (Adenine (A), Cytosine (C), Guanine (G) and Thymine (T)), nucleotide similarity matrices are much simpler than protein similarity matrices. For example, a simple matrix will assign identical bases a score of +1 and non-identical bases a score of −1. A more complicated matrix would give a higher score to transitions (changes from a pyrimidine such as C or T to another pyrimidine, or from a purine such as A or G to another purine) than to transversions (from a pyrimidine to a purine or vice versa). The match/mismatch ratio of the matrix sets the target evolutionary distance. The +1/−3 DNA matrix used by BLASTN is best suited for finding matches between sequences that are 99% identical; a +1/−1 (or +4/−4) matrix is much more suited to sequences with about 70% similarity. Matrices for lower similarity sequences require longer sequence alignments.

Amino acid similarity matrices are more complicated, because there are 20 amino acids coded for by the genetic code, and so a larger number of possible substitutions. Therefore, the similarity matrix for amino acids contains 400 entries (although it is usually symmetric). The first approach scored all amino acid changes equally. A later refinement was to determine amino acid similarities based on how many base changes were required to change a codon to code for that amino acid.
This model is better, but it does not take into account the selective pressure of amino acid changes. Better models took into account the chemical properties of amino acids.

One approach has been to empirically generate the similarity matrices. The Dayhoff method used phylogenetic trees and sequences taken from species on the tree. This approach has given rise to the PAM series of matrices. PAM matrices are labelled based on how many nucleotide changes have occurred, per 100 amino acids. While the PAM matrices benefit from having a well understood evolutionary model, they are most useful at short evolutionary distances (PAM10–PAM120). At long evolutionary distances, for example PAM250 or 20% identity, it has been shown that the BLOSUM matrices are much more effective.

The BLOSUM series were generated by comparing a number of divergent sequences. The BLOSUM series are labeled based on how much entropy remains unmutated between all sequences, so a lower BLOSUM number corresponds to a higher PAM number.

Use in computer vision

See also

Recurrence plot, a visualization tool of recurrences in dynamical (and other) systems

References

Clustering criteria Statistical classification Statistical distance
Similarity measure
[ "Physics" ]
2,276
[ "Similarity measures", "Physical quantities", "Statistical distance", "Distance" ]
1,004,764
https://en.wikipedia.org/wiki/Gap%20penalty
A gap penalty is a method of scoring alignments of two or more sequences. When aligning sequences, introducing gaps in the sequences can allow an alignment algorithm to match more terms than a gap-less alignment can. However, minimizing gaps in an alignment is important to create a useful alignment. Too many gaps can cause an alignment to become meaningless. Gap penalties are used to adjust alignment scores based on the number and length of gaps. The five main types of gap penalties are constant, linear, affine, convex, and profile-based.

Applications

Genetic sequence alignment – In bioinformatics, gaps are used to account for genetic mutations occurring from insertions or deletions in the sequence, sometimes referred to as indels. Insertions or deletions can occur due to single mutations, unbalanced crossover in meiosis, slipped strand mispairing, and chromosomal translocation. The notion of a gap in an alignment is important in many biological applications, since the insertions or deletions comprise an entire sub-sequence and often occur from a single mutational event. Furthermore, single mutational events can create gaps of different sizes. Therefore, when scoring, the gaps need to be scored as a whole when aligning two sequences of DNA. Considering multiple gaps in a sequence as a larger single gap will reduce the assignment of a high cost to the mutations. For instance, two protein sequences may be relatively similar but differ at certain intervals as one protein may have a different subunit compared to the other. Representing these differing sub-sequences as gaps will allow us to treat these cases as "good matches" even though there are long consecutive runs with indel operations in the sequence. Therefore, using a good gap penalty model will avoid low scores in alignments and improve the chances of finding a true alignment. In genetic sequence alignments, gaps are represented as dashes (-) on a protein/DNA sequence alignment.

Unix diff function – computes the minimal difference between two files, similarly to plagiarism detection.

Spell checking – Gap penalties can help find correctly spelled words with the shortest edit distance to a misspelled word. Gaps can indicate a missing letter in the incorrectly spelled word.

Plagiarism detection – Gap penalties allow algorithms to detect where sections of a document are plagiarized by placing gaps in original sections and matching what is identical. The gap penalty for a certain document quantifies how much of a given document is probably original or plagiarized.

Bioinformatics applications

Global alignment

A global alignment performs an end-to-end alignment of the query sequence with the reference sequence. Ideally, this alignment technique is most suitable for closely related sequences of similar lengths. The Needleman-Wunsch algorithm is a dynamic programming technique used to conduct global alignment. Essentially, the algorithm divides the problem into a set of sub-problems, then uses the results of the sub-problems to reconstruct a solution to the original query.

Semi-global alignment

Semi-global alignment is used to find a particular match within a large sequence. An example is seeking promoters within a DNA sequence. Unlike global alignment, it imposes no end-gap penalties in one or both sequences. If end gaps are penalized in sequence 1 but not in sequence 2, the result is an alignment that contains sequence 2 within sequence 1.
Local alignment

A local sequence alignment matches a contiguous sub-section of one sequence with a contiguous sub-section of another. The Smith-Waterman algorithm is motivated by giving scores for matches and mismatches. Matches increase the overall score of an alignment whereas mismatches decrease the score. A good alignment then has a positive score and a poor alignment has a negative score. The local algorithm finds an alignment with the highest score by considering only alignments that score positives and picking the best one from those. The algorithm is a dynamic programming algorithm. When comparing proteins, one uses a similarity matrix which assigns a score to each possible residue pair. The score should be positive for similar residues and negative for dissimilar residue pairs. Gaps are usually penalized using a linear gap function that assigns an initial penalty for a gap opening, and an additional penalty for gap extensions that increase the gap length.

Scoring matrix

Substitution matrices such as BLOSUM are used for sequence alignment of proteins. A substitution matrix assigns a score for aligning any possible pair of residues. In general, different substitution matrices are tailored to detecting similarities among sequences that are diverged by differing degrees. A single matrix may be reasonably efficient over a relatively broad range of evolutionary change. The BLOSUM-62 matrix is one of the best substitution matrices for detecting weak protein similarities. BLOSUM matrices with high numbers are designed for comparing closely related sequences, while those with low numbers are designed for comparing distantly related sequences. For example, BLOSUM-80 is used for alignments that are more similar in sequence, and BLOSUM-45 is used for alignments that have diverged from each other. For particularly long and weak alignments, the BLOSUM-45 matrix may provide the best results. Short alignments are more easily detected using a matrix with a higher "relative entropy" than that of BLOSUM-62. The BLOSUM series does not include any matrices with relative entropies suitable for the shortest queries.

Indels

During DNA replication, the cellular replication machinery is prone to making two types of errors while duplicating the DNA. These two replication errors are insertions and deletions of single DNA bases from the DNA strand (indels). Indels can have severe biological consequences by causing mutations in the DNA strand that could result in the inactivation or overactivation of the target protein. For example, if a one- or two-nucleotide indel occurs in a coding sequence the result will be a shift in the reading frame, or a frameshift mutation that may render the protein inactive. The biological consequences of indels are often deleterious and are frequently associated with pathologies such as cancer. However, not all indels are frameshift mutations. If indels occur in trinucleotides, the result is an extension of the protein sequence that may also have implications on protein function.

Types

Constant

This is the simplest type of gap penalty: a fixed negative score is given to every gap, regardless of its length. This encourages the algorithm to make fewer, larger gaps, leaving larger contiguous sections.

ATTGACCTGA
||   |||||
AT---CCTGA

Aligning two short DNA sequences, with '-' depicting a gap of one base pair. If each match were worth 1 point and the whole gap −1, the total score is 7 − 1 = 6.
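A small Python sketch of the constant scheme (the function name and values are illustrative): every maximal run of gap characters costs one fixed penalty, no matter how long it is:

import re

def constant_gap_score(a, b, match=1, mismatch=-1, gap=-1):
    """Score an alignment where each contiguous gap costs a fixed penalty."""
    score = 0
    for x, y in zip(a, b):
        if x == "-" or y == "-":
            continue  # gap columns are charged per run, below
        score += match if x == y else mismatch
    # Each maximal run of '-' in either string counts as a single gap.
    n_gaps = len(re.findall(r"-+", a)) + len(re.findall(r"-+", b))
    return score + n_gaps * gap

print(constant_gap_score("ATTGACCTGA", "AT---CCTGA"))  # 7 matches - 1 gap = 6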
Linear

Compared to the constant gap penalty, the linear gap penalty takes into account the length (L) of each insertion/deletion in the gap. Therefore, if the penalty for each inserted/deleted element is B and the length of the gap is L, the total gap penalty would be the product of the two, B × L. This method favors shorter gaps, with the total score decreasing with each additional gap position.

ATTGACCTGA
||   |||||
AT---CCTGA

Unlike the constant gap penalty, the size of the gap is considered. With a match scoring 1 and each gap position −1, the score here is 7 − 3 = 4.

Affine

The most widely used gap penalty function is the affine gap penalty. The affine gap penalty combines the components in both the constant and linear gap penalty, taking the form A + B × L. This introduces new terms: A is known as the gap opening penalty, B the gap extension penalty and L the length of the gap. Gap opening refers to the cost required to open a gap of any length, and gap extension the cost to extend the length of an existing gap by 1. Often it is unclear as to what the values A and B should be, as they differ according to purpose. In general, if the interest is to find closely related matches (e.g. removal of vector sequence during genome sequencing), a higher gap penalty should be used to reduce gap openings. On the other hand, the gap penalty should be lowered when interested in finding a more distant match. The relationship between A and B also has an effect on gap size. If the size of the gap is important, a small A and large B (more costly to extend a gap) is used, and vice versa. Only the ratio A/B is important, as multiplying both by the same positive constant k scales all penalties by k: k(A + B × L) = kA + kB × L, which does not change the relative penalty between different alignments.

Convex

Using the affine gap penalty requires the assigning of fixed penalty values for both opening and extending a gap. This can be too rigid for use in a biological context. The logarithmic gap penalty takes the form A + C × ln(L) and was proposed because studies had shown the distribution of indel sizes obeys a power law. Another proposed issue with the use of affine gaps is the favoritism of aligning sequences with shorter gaps. The logarithmic gap penalty was invented to modify the affine gap so that long gaps are desirable. However, in contrast to this, it has been found that logarithmic models produced poor alignments when compared to affine models.

Profile-based

Profile–profile alignment algorithms are powerful tools for detecting protein homology relationships with improved alignment accuracy. Profile-profile alignments are based on the statistical indel frequency profiles from multiple sequence alignments generated by PSI-BLAST searches. Rather than using substitution matrices to measure the similarity of amino acid pairs, profile–profile alignment methods require a profile-based scoring function to measure the similarity of profile vector pairs. Profile-profile alignments employ gap penalty functions. The gap information is usually used in the form of indel frequency profiles, which is more specific for the sequences to be aligned. ClustalW and MAFFT adopted this kind of gap penalty determination for their multiple sequence alignments. Alignment accuracies can be improved using this model, especially for proteins with low sequence identity. Some profile–profile alignment algorithms also use secondary structure information as one term in their scoring functions, which improves alignment accuracy.
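The length-dependent penalty types above can be compared side by side in a few lines of Python (a sketch with illustrative constants; penalties are written here as positive costs to be subtracted from an alignment score):

import math

def linear_gap(L, B=1):
    """Linear: cost grows proportionally with gap length L."""
    return B * L

def affine_gap(L, A=5, B=1):
    """Affine: opening cost A plus extension cost B per gap position."""
    return A + B * L

def convex_gap(L, A=5, C=2):
    """Convex (logarithmic): each additional position costs less than the last."""
    return A + C * math.log(L)

for L in (1, 2, 4, 8):
    print(L, linear_gap(L), affine_gap(L), round(convex_gap(L), 2))

Under the affine scheme, one gap of length 4 costs 5 + 4 = 9, whereas four separate gaps of length 1 cost 4 × 6 = 24, which is how the scheme favors a single long gap over many small ones.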
Comparing time complexities

The use of alignment in computational biology often involves sequences of varying lengths. It is important to pick a model that runs efficiently at a known input size. The time taken to run the algorithm is known as the time complexity.

Challenges

There are a few challenges when it comes to working with gaps. When working with popular algorithms, there seems to be little theoretical basis for the form of the gap penalty functions. Consequently, for any alignment situation, gap placement must be empirically determined. Also, pairwise alignment gap penalties, such as the affine gap penalty, are often implemented independently of the amino acid types in the inserted or deleted fragment or at the broken ends, despite evidence that specific residue types are preferred in gap regions. Finally, alignment of sequences implies alignment of the corresponding structures, but the relationships between structural features of gaps in proteins and their corresponding sequences are only imperfectly known. Because of this, incorporating structural information into gap penalties is difficult to do. Some algorithms use predicted or actual structural information to bias the placement of gaps. However, only a minority of sequences have known structures, and most alignment problems involve sequences of unknown secondary and tertiary structure.

References

Further reading

Computational phylogenetics
Gap penalty
[ "Biology" ]
2,328
[ "Bioinformatics", "Phylogenetics", "Computational phylogenetics", "Genetics techniques" ]
1,004,977
https://en.wikipedia.org/wiki/Microlepidoptera
Microlepidoptera (micromoths) is an artificial (i.e., unranked and not monophyletic) grouping of moth families, commonly known as the "smaller moths" (micro, Lepidoptera). These generally have wingspans of under 20 mm, so are harder to identify by external phenotypic markings than macrolepidoptera. They present some lifestyles that the larger Lepidoptera do not have, but this is not an identifying mark. Some hobbyists further divide this group into separate groups, such as leaf miners or rollers, stem or root borers, and then usually follow the more rigorous scientific taxonomy of lepidopterans. Efforts to stabilize the term have usually proven inadequate. Diversity Vernacular usage divides the Lepidoptera simply into smaller and larger or into more-primitive and less-primitive groups, microlepidoptera and macrolepidoptera, respectively. Intuitively, the "micros" are any lepidopteran not currently placed in the macrolepidoptera. This paraphyletic assemblage, however, includes also the superfamilies Zygaenoidea, Sesioidea, and Cossoidea that would in common parlance normally be lumped with the "macros". A lepidopterist might call these groups "primitive macros". Furthermore, even all of the nonditrysian moths are not small. For example, the Hepialidae or "swift moths" (up to 25 cm wingspan) fall quite basally in the lepidopteran "tree of life". The recently discovered primitive superfamily Andesianoidea is another case in point; lurking within the Cossoidae until 2001, these moths have up to an order of magnitude greater wingspan (5.5 cm) than most previously known monotrysian "micros". Whilst the smaller moths are usually also more seldom noticed, a more expansive "nonmacrolepidopteran" concept of the microlepidoptera would include about 37 out of the roughly 47 superfamilies. Whilst usually less popular, micros are thus more important in the sense that they include a much wider span of the "tree of life" (i.e., phylogenetic diversity). Whereas they include no butterflies, micros do also include a surprising number of day-flying groups, and the advent of online identification resources in many countries (e.g. "UK moths") combined with the widespread use of digital macrophotography, is making them much easier to identify. Lifestyle Microlepidoptera can be found in a broad variety of habitats and ecological niches worldwide, both terrestrial and freshwater aquatic (e.g. Acentropinae). They have a wide variety of feeding habits in both larval and adult life stages. Caterpillars feed on a wide variety of plant tissue and across a wide spectrum of plant groups from liverworts to angiosperms. They are either external feeders ("exophagous") or more usually feed internally ("endophagous"), typically as miners or tunnellers, but some feed on fungi, scavenge on dead animals, are parasitoids usually of other insects (some Zygaenoidea) or are detritivores, and Hyposmocoma molluscivora even feeds on live snails. Adult moths feed with mandibles on spores and pollen (Micropterigidae) on dew (e.g. Eriocraniidae), with their probosces on nectar (many groups e.g. Choreutidae) or are simply nonfeeding with mouthparts reduced or absent. The larvae of many smaller moths are considered economic pests, causing damage to plants, as well as fabrics and other manmade goods. Commonly noticed "micros" include the plume moth and the various species of clothes moth. Main groups The list below is ordered initially in approximate order of species diversity and ecological abundance. 
The first four superfamilies listed here may comprise 90% of species in a sample of smaller moths, and the listed characters may be of some assistance in sorting these out, particularly the form of the labial palp and the scaling of the proboscis (Robinson et al. 2001).

1. Curved horn moths, twirler moths, case-bearers and allies – 16,250 spp.

Gelechioidea: Head smooth-scaled, labial palps usually slender, recurved, with the terminal segment long and pointed; the long proboscis bears scales on the basal half. Resting posture very varied.

Gelechiidae – twirler moths
Oecophoridae – concealer moths
Lecithoceridae – tropical longhorned moths
Cosmopterigidae – cosmet moths
Coleophoridae – case-bearers
Elachistidae – grass-miner moths
Momphidae – mompha moths
Ethmiidae
Blastobasidae – scavenger moths
Batrachedridae – flower moths
Scythrididae – flower moths
Pterolonchidae – lance-wing moths
Symmocidae
Agonoxenidae – palm moths
Holcopogonidae
Metachandidae

2. Pyralids, snout moths and grass moths – 16,000 spp.

Pyraloidea: Head rough-scaled, proboscis scaled, tympanal organs on abdomen; labial palps usually not recurved, terminal segment usually blunt. Hindwing veins ("Sc" + "R1") and "Rs" are close or fused in the middle of the wing; resting posture usually either with wings tightly rolled, or held quite flat to the surface in a triangular shape with labial palps often projecting forward, giving a Concorde-like appearance; antennae often swept back parallel together over the body. Generally they are considered the closest group to 'macrolepidoptera', and may be ancestral to it; macrolepidoptera itself is not a universally accepted taxon.

Pyralidae – pyrales or snout moths
Crambidae – grass moths

3. Tortrix moths, leaf-roller moths, bell moths, codling moths and allies – 6,200 spp.

Tortricidae: Head rough-scaled, labial palps with short blunt apical segment, basal half of proboscis not scaled; wings held over back in tent-like or flattened position; forewing costa often quite strongly convex or sinuate in many Tortricinae, giving a bell-like shape

4. Clothes moths, bagworms and allies – 4,200 spp.

Tineoidea: Head often with tufty erect scales; labial palps usually have bristles on the middle segment and the terminal segment is long; wings usually held over back in tent-like position and head close to surface; tineids often run fast

Tineidae – clothes moths and fungus moths
Eriocottidae – Old World spiny winged moths
Acrolophidae – tube moths
Arrhenophanidae – tropical lattice moths
Psychidae – bagworm moths
Lypusidae – European bagworm moths

5, 6. Leaf miner moths – 3,200 spp.

Gracillarioidea – 2,300 spp.
Gracillariidae – blotch leaf miner moths
Bucculatricidae – ribbed cocoon makers
Douglasiidae – Douglas moths
Roeslerstammiidae – double-eye moths

Nepticuloidea – 900 spp. – eyecap moths
Nepticulidae – pygmy eyecap moths
Opostegidae – white eyecap moths

7. Ermine moths, webworm moths, yucca moths and allies – 1,500 spp.

Yponomeutoidea
Yponomeutidae – ermine moths
Acrolepiidae – false diamond-back moths
Ypsolophidae
Plutellidae – diamond-back moths and allies
Glyphipterigidae – sedge moths
Heliodinidae – sun moths
Bedelliidae
Lyonetiidae – lyonet moths

8, 9. Plume moths – 1,160 spp.

Pterophoridae – plume moths – 1,000 spp.
Alucitidae – many-plumed moths – 160 spp.

10. Tropical leaf moths or picture-winged moths – more than 1000 spp.

Thyrididae: Small, mainly dayflying moths

11. Fairy moths, longhorn moths and allies – 600 spp.
Adeloidea
Incurvariidae – leaf-cutter moths
Adelidae – fairy moths
Heliozelidae – shield-bearer leaf-miners
Prodoxidae – yucca moths
Cecidosidae – gall moths

12. Metalmark moths – 402 spp.
Choreutidae

13. Mandibulate archaic moths – 180 spp.
Micropterigidae

14. Sparkling archaic sun moths or spring jewel moths – 24 spp.
Eriocraniidae

Superfamilies less likely to be encountered:

15. Tropical fruitworm moths – 318 spp.
Copromorphoidea
Copromorphidae
Carposinidae

16. Fringe tufted moths – 83 spp.
Epermeniidae

17. Blackberry leaf skeletonizer and allies – 8 spp.
Schreckensteiniidae

18. Immid moths – 250 spp.
Immidae

19. False burnet moths – 60 spp.
Urodidae

20. Tropical teak moths – 20 spp.
Hyblaeidae

21. Whalley's Malagasy moths – 2 spp.
Whalleyanidae

More rarely encountered "primitive" families:

22. Kauri pine moths – 2 spp.
Agathiphagidae

22. Southern beech moths or Valdivian archaic moths – 9 spp.
Heterobathmiidae

23. Archaic sun moths – 4 spp.
Acanthopteroctetidae

24. Australian archaic sun moths – 6 spp.
Lophocoronidae

25. Archaic bell moths – 12 spp.
Neopseustidae

26. New Zealand endemic moths – 7 spp.
Mnesarchaeidae

27. Gondwanaland moths – 60 spp.
Palaephatidae

28. Trumpet leaf miner moths – 107 spp.
Tischeriidae

29. Simaethistid moths – 4 spp.
Simaethistidae

30. Galacticoid moths or webworm moths – 17 spp.
Galacticidae

Larger "micros"

These groups were formerly included in the macros by hobbyists. 'Archaic and primitive macros' is not a recommended name for these, as it may create confusion about their placement in some classification systems.

31. Swift moths and allies – 544 spp.
Hepialoidea
Hepialidae – swift moths
Anomosetidae – Australian primitive ghost moths
Prototheoridae – African primitive ghost moths
Neotheoridae – Amazonian primitive ghost moths
Palaeosetidae – miniature ghost moths

Unassigned to superfamily:

32. Meyrick's mystic moth – 1 sp.
Prodidactidae

Large monotrysian micros:

33. Andean endemic moths – 3 spp.
Andesianidae

Large ditrysian micros (formerly 'primitive macros'):

34. Burnet moths, slug moths, hag moths, glass moths and allies – 2,600 spp.
Zygaenoidea
Zygaenidae – burnet and forester moths
Limacodidae – slug moths or saddleback caterpillar moths
Megalopygidae – flannel moths
Epipyropidae – planthopper parasite moths
Heterogynidae – Mediterranean burnet moths
Himantopteridae – long-tailed burnet moths
Anomoeotidae
Cyclotornidae – Australian parasite moths
Somabrachyidae – African flannel moths
Dalceridae – glass moths
Lacturidae – Australian burnet moths
Aididae

35. Clearwing moths, castniid moths, little bear moths and allies – 1,300 spp.
Sesioidea
Sesiidae – clearwing moths
Castniidae – castniid moths
Brachodidae – little bear moths

36, 37. Goat or carpenter moths and allies – 676 spp.
Cossoidea
Cossidae – goat moths, leopard moths or carpenterworm moths
Dudgeoneidae – Dudgeon carpenterworm moths

Sources

Robinson, G.S., Tuck, K.R., Shaffer, M. and Cook, K. (1994). The smaller moths of South-East Asia. Malaysian Nature Society, Kuala Lumpur.
Common Name Index

Moth taxonomy Polyphyletic groups
Microlepidoptera
[ "Biology" ]
2,549
[ "Phylogenetics", "Paraphyletic groups" ]
7,223,383
https://en.wikipedia.org/wiki/Cryptographic%20Module%20Testing%20Laboratory
Cryptographic Module Testing Laboratory (CMTL) is an information technology (IT) computer security testing laboratory that is accredited to conduct cryptographic module evaluations for conformance to the FIPS 140-2 U.S. Government standard. The National Institute of Standards and Technology (NIST) National Voluntary Laboratory Accreditation Program (NVLAP) accredits CMTLs to meet Cryptographic Module Validation Program (CMVP) standards and procedures. The original FIPS 140-1 requirements and validation program have since been replaced by FIPS 140-2 and the Cryptographic Module Validation Program (CMVP).

CMTL requirements

These laboratories must meet the following requirements:

NIST Handbook 150, NVLAP Procedures and General Requirements
NIST Handbook 150-17 Information Technology Security Testing – Cryptographic Module Testing
NVLAP Specific Operations Checklist for Cryptographic Module Testing

FIPS 140-2 in relation to the Common Criteria

A CMTL can also be a Common Criteria (CC) Testing Laboratory (CCTL). The CC and FIPS 140-2 differ in the abstractness and focus of evaluation. FIPS 140-2 testing is against a defined cryptographic module and provides a suite of conformance tests to four FIPS 140 security levels. FIPS 140-2 describes the requirements for cryptographic modules and includes such areas as physical security, key management, self tests, roles and services, etc. The standard was initially developed in 1994, prior to the development of the CC. The CC is an evaluation against a Protection Profile (PP), or security target (ST). Typically, a PP covers a broad range of products.

A CC evaluation does not supersede or replace a validation to either FIPS 140-1, FIPS 140-2 or FIPS 140-3. The four security levels in FIPS 140-1 and FIPS 140-2 do not map directly to specific CC EALs or to CC functional requirements. A CC certificate cannot be a substitute for a FIPS 140-1 or FIPS 140-2 certificate.

If the operational environment is a modifiable operational environment, the operating system requirements of the Common Criteria are applicable at FIPS Security Levels 2 and above. FIPS 140-1 required evaluated operating systems that referenced the Trusted Computer System Evaluation Criteria (TCSEC) classes C2, B1 and B2. However, TCSEC is no longer in use and has been replaced by the Common Criteria. Consequently, FIPS 140-2 now references the Common Criteria. FIPS 140-2 or FIPS 140-3 validation efforts can in some parts be reused in Common Criteria evaluations, specifically in areas related to entropy sources and cryptographic algorithms.

References

External links

List of CMTLs from NIST

Computer security procedures Tests Cryptography
Cryptographic Module Testing Laboratory
[ "Mathematics", "Engineering" ]
558
[ "Applied mathematics", "Computer security procedures", "Cryptography", "Cybersecurity engineering" ]
7,224,143
https://en.wikipedia.org/wiki/Orbit%20Award
The Orbit Awards were given by the National Space Society and the Space Tourism Society to pioneers in the private space travel industry, and presented at the Annual International Space Development Conference. The actual award is a holographic crystal created by international artist Eileen Borgeson and holography pioneer Jeff Allen. ‘Orbit Awards’ were co-sponsored by EArt Gallery and Interior Systems, and were received by: Buzz Aldrin, Richard Branson, Paul Allen, Rick Searfoss, Robert Bigelow, The X PRIZE Foundation, Scaled Composites, Zero Gravity Corporation, Eric Anderson and Anousheh Ansari.

See also

List of space technology awards

References

External links

from Art News, September 29, 2006

Space-related awards
Orbit Award
[ "Technology" ]
149
[ "Science and technology awards", "Space-related awards" ]
7,224,368
https://en.wikipedia.org/wiki/Bengt%20Holmstr%C3%B6m
Bengt Robert Holmström (born 18 April 1949) is a Finnish economist who is currently Paul A. Samuelson Professor of Economics (Emeritus) at the Massachusetts Institute of Technology. Together with Oliver Hart, he received the Central Bank of Sweden Nobel Memorial Prize in Economic Sciences in 2016.

Early life and education

Holmström was born in Helsinki, Finland, on 18 April 1949, and belongs to the Swedish-speaking minority of Finland. He received his B.S. in mathematics and science from the University of Helsinki in 1972. He also received a Master of Science degree in operations research from Stanford University in 1975. He received his Ph.D. from the Graduate School of Business at Stanford in 1978. He moved to the United States in 1976.

Career

He worked as a corporate planner from 1972 until 1974, then was an assistant professor at the Hanken School of Economics from 1978 until 1979. He served as an associate professor at the Kellogg Graduate School of Management at Northwestern University (1979–1983) and as the Edwin J. Beinecke Professor of Management at Yale University's School of Management (1983–1994). Holmström was elected Alumnus of the Year by the University of Helsinki Alumni Association in 2010. He has been on the faculty of MIT since 1994, when he was appointed professor of economics and management at the department of economics and the Sloan School of Management.

Holmström is particularly well known for his work on principal-agent theory. His work made seminal advances in understanding contracting in the presence of uncertainty. More generally, he has worked on the theory of contracting and incentives, especially as applied to the theory of the firm, to corporate governance and to liquidity problems in financial crises. He praised the taxpayer-backed bailouts by the US government during the financial crisis of 2007–2008 and has emphasized the benefits of opacity in the money market.

Holmström was elected a member of the Finnish Society of Sciences and Letters in 1992 and an honorary member of the same society in 2016. He is a fellow of the American Academy of Arts and Sciences, the Econometric Society, the European Economic Association and the American Finance Association, and a foreign member of the Royal Swedish Academy of Sciences and the Finnish Academy of Science and Letters. In 2011, he served as President of the Econometric Society. He holds honorary doctorate degrees from the Stockholm School of Economics, Sweden, the University of Vaasa and the Hanken School of Economics in Finland. Holmström was a member of Nokia's board of directors from 1999 until 2012. He was a member of the Board of the Aalto University from 2008 until 2017.

Accolades

He was awarded the 2012 Banque de France-TSE Senior Prize in Monetary Economics and Finance, the 2013 Stephen A. Ross Prize in Financial Economics and the 2013 Chicago Mercantile Exchange – MSRI Prize for Innovative Quantitative Applications. In 2016, Holmström won the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel together with Oliver Hart "for their contributions to contract theory".

Personal life

He is married to Anneli Holmström and they have one son.

Publications

Holmström, Bengt, 1972. "En icke-linear lösningsmetod för allokationsproblem". University of Helsinki.
Holmström, Bengt, 1979. "Moral Hazard and Observability," Bell Journal of Economics, 10(1), pp. 74–91.
Holmstrom, Bengt. "Moral hazard in teams." The Bell Journal of Economics (1982): 324–340.
Holmstrom, Bengt. "Equilibrium long-term labor contracts."
The Quarterly Journal of Economics (1983): 23–54.
Holmström, B., 1999. "Managerial incentive problems: A dynamic perspective." The Review of Economic Studies, 66(1), pp. 169–182.
Holmström, Bengt, and Paul Milgrom, 1991. "Multitask Principal-Agent Analyses: Incentive Contracts, Asset Ownership, and Job Design," Journal of Law, Economics, and Organization, 7, 24–52.
Holmstrom, B. and Milgrom, P., 1994. "The firm as an incentive system." The American Economic Review, pp. 972–991.
Holmström, Bengt, and John Roberts, 1998. "The Boundaries of the Firm Revisited," Journal of Economic Perspectives, 12(4), pp. 73–94.
Holmström, Bengt, and Jean Tirole, 1998. "Private and Public Supply of Liquidity," Journal of Political Economy, 106(1), pp. 1–40.

References

External links

including the Prize Lecture 8 December 2016 Pay for Performance and Beyond

1949 births Living people Finnish business theorists 20th-century Finnish economists Finnish Nobel laureates Game theorists Information economists Stanford Graduate School of Business alumni Stanford University alumni MIT Sloan School of Management faculty Members of the Royal Swedish Academy of Sciences Fellows of the American Academy of Arts and Sciences Fellows of the Econometric Society Presidents of the Econometric Society Swedish-speaking Finns Nokia people Finnish expatriates in the United States Nobel laureates in Economics 21st-century Finnish economists MIT School of Humanities, Arts, and Social Sciences faculty Fellows of the European Economic Association
Bengt Holmström
[ "Mathematics" ]
1,076
[ "Game theorists", "Game theory" ]
7,226,703
https://en.wikipedia.org/wiki/Hamlet%27s%20Mill
Hamlet's Mill: An Essay on Myth & the Frame of Time (first published by Gambit Inc., Boston, 1969), later Hamlet's Mill: An Essay Investigating the Origins of Human Knowledge and Its Transmission Through Myth, by Giorgio de Santillana, a professor of the history of science at the Massachusetts Institute of Technology in Cambridge, MA, US, and Hertha von Dechend, a professor of the history of science at Johann Wolfgang Goethe-Universität in Frankfurt, Germany, is a nonfiction work of history of science and comparative mythology, particularly in the subfield of archaeoastronomy. It is primarily about the possibility of a Neolithic era or earlier discovery of axial precession and the transmission of that knowledge in mythology. Santillana's academic colleague Nathan Sivin described the book as "an end run around those scholarly custodians of the history of early astronomy who consider myths best ignored, and those ethnologists who consider astronomy best ignored, to arouse public enthusiasm for exploration into the astronomical content of myth." The book was sharply criticized by other academics upon its publication.

Argument

The main theses of the book include (1) a late Neolithic or earlier discovery of the precession of the equinoxes, (2) an associated long-lived megalith-building late Neolithic civilization that made astronomical observations sufficient for that discovery in the Near East, and (3) that the knowledge of this civilization about precession and the associated astrological ages was encoded in mythology, typically in the form of a story relating to a millstone and a young protagonist. This last thesis gives the book its title, "Hamlet's Mill", by reference to the kenning Amlóða kvern recorded in the Old Icelandic Skáldskaparmál. The authors claim that this mythology is primarily to be interpreted in terms of archaeoastronomy, and they reject, and in fact mock, alternative interpretations in terms of fertility or agriculture.

The book's project is an examination of the "relics, fragments and allusions that have survived the steep attrition of the ages". In particular, the book centers on the mytheme of a heavenly mill which rotates around the celestial pole and is associated with the maelstrom and the Milky Way. The authors argue for the pervasiveness of their hypothetical civilization's astronomical ideas by selecting and comparing elements of global mythology in light of hypothetical shared astronomical symbolism, especially among heavenly mill myths, heavenly milk-churn myths, celestial succession myths, and flood myths. Their sources include African myths collected by Marcel Griaule, the Persian epic Shahnameh, the Classical mythology of Plato, Pindar, and Plutarch, the Finnish epic Kalevala, the eddas of Norse mythology, the Hindu Mahabharata, Vedas, and Upanishads, Babylonian astrology, and the Sumerian Gilgamesh and King List.

Santillana and Dechend state in their introduction to Hamlet's Mill that they are well aware of contrasting modern interpretations of myth and folklore but find them shallow and lacking insight: "...the experts now are benighted by the current folk fantasy, which is the belief that they are beyond all this – critics without nonsense and extremely wise". Consequently, Santillana and Dechend prefer to rely on the work of "meticulous scholars such as Ideler, Lepsius, Chwolson, Boll and, to go farther back, of Athanasius Kircher and Petavius..."
They continue to argue throughout the book for preferring the work of earlier scholars, and of the early mythologists themselves, in contrast to the work of their closer contemporaries.

Origins

The book's two authors, Giorgio de Santillana and Hertha von Dechend, met at a symposium in Frankfurt, Germany, in 1958, and they began to collaborate on the work that became Hamlet's Mill in 1959, after Santillana was inspired by original research that von Dechend shared with him. At the time, Santillana was a professor at the Massachusetts Institute of Technology while von Dechend was formally a professor at Johann Wolfgang Goethe University, but in practice a researcher without a pension. During the time of writing between 1959 and 1969, Santillana became seriously ill in the mid-1960s, leading at least one reviewer to see the book as inspired by Santillana, but more substantially written by von Dechend.

Both authors had prior interests and influences that were identified by contemporary reviewers as important features of the finished book. Von Dechend's training with Leo Frobenius at his Frankfurt Museum of Ethnology from 1934 to 1938 was emphasized by critical reviewer Edmund Leach, Santillana's prior interests in the earliest roots of rationalism as in his The Origins of Scientific Thought (1961) were emphasized by critical reviewer Lynn White, Jr., and the authors' personal concerns with the problems of sustaining humanism against political and technological dogmatism were highlighted by positive reviewers. The Origins of Scientific Thought anticipated Hamlet's Mill's arguments, as in this quotation: "We can see then, how so many myths, fantastic and arbitrary in semblance, of which the Greek tale of the Argonaut is a late offspring, may provide a terminology of image motifs, a kind of code which is beginning to be broken. It was meant to allow those who knew (a) to determine unequivocally the position of given planets in respect to the earth, to the firmament, and to one another; (b) to present what knowledge there was of the fabric of the world in the form of tales about 'how the world began'."

The two authors' shared concern with supporting humanism against political and technological dogmatism has been attributed to the authors' shared lessons from European fascism in the lead-up to World War II, von Dechend under the Nazi Party in Frankfurt and Santillana under the Mussolini government in Rome, Italy. Santillana's prior work The Crime of Galileo had been a study of the institutional persecution dynamics in Galileo Galilei's trial by the Catholic Church, for instance, which its reviewers connected to his experiences of Fascist Italy and of McCarthyism in the US, and Santillana had written publicly in support of J. Robert Oppenheimer after the Oppenheimer security clearance hearing in "Galileo and J. Robert Oppenheimer" (The Reporter, December 26, 1957). Both Santillana and von Dechend were known for responding to persecution with esotericism, as their colleagues classicist and political philosopher Leo Strauss and historian of science Alexandre Koyré had each written about in Persecution and the Art of Writing (1952) and "The Political Function of the Modern Lie" (The Contemporary Jewish Record, 1945), respectively. Positive reviewer Philip Morrison noted this esotericism as a crucial influence on the arguments of Hamlet's Mill.
Reception Hamlet's Mill was severely criticized by notable academic reviewers on a number of grounds: tenuous arguments based on incorrect or outdated linguistic information; lack of familiarity with modern sources; an over-reliance on coincidence or analogy; and the general implausibility of a far-flung and influential civilization existing and not leaving behind solid evidence. Reviewers who criticized the book along these lines included Jaan Puhvel (1970), Edmund Leach (1970), writing in The New York Review of Books, and H. R. Ellis Davidson (1974). In contrast, others praised Hamlet's Mill. The astrophysicist Philip Morrison, a friend of Santillana's, began with criticism but concluded "here is a book for the wise, however it may appear," in a review for Scientific American. Another colleague of Santillana's, classical scholar Harald Reiche, also reviewed Hamlet's Mill positively. Reiche went on to develop archaeoastronomical interpretations of ancient myth in a series of lectures and publications similar to those of Hamlet's Mill, though dealing more specifically with Greek mythology, including an interpretation of "the layout of Atlantis as a sort of map of the sky", published as a chapter in Astronomy of the Ancients (1979), with an introduction by Morrison. Others recommended the book for the controversy it had stirred. The Swedish astronomer Peter Nilson, while stating that Hamlet's Mill is not a work of science, expressed admiration for it and credited it as a source of inspiration when he wrote his own book on classic mythologies based on the night sky: Himlavalvets sällsamheter (1977). Barber & Barber's When They Severed Earth from Sky: How the Human Mind Shapes Myth (2006), itself a study aiming to "uncover seismic, geological, astrological, or other natural events" from mythology, appreciated the book for its pioneer work in mythography, judging that "although controversial, [Santillana and von Dechend] have usefully flagged and collected Herculean amounts of relevant data." Publishing history The full hardcover title is Hamlet's Mill: An Essay on Myth & the Frame of Time. Later softcover editions would use Hamlet's Mill: An Essay Investigating the Origins of Human Knowledge and its Transmission Through Myth. The English edition was assembled and published five years prior to Santillana's death. Hertha von Dechend prepared an expanded second edition several years later. The essay was reissued by David R. Godine, Publisher in 1992. The German translation, which appeared in 1993, is slightly longer than the original. The 8th Italian edition of 2000 was expanded from 552 to 630 pages. First English paperback edition: Boston: Godine, 1977 Italian editions: Giorgio de Santillana, Hertha von Dechend, Il mulino di Amleto. Saggio sul mito e sulla struttura del tempo (Milan: Adelphi, 1983, 552 pages). Giorgio de Santillana, Hertha von Dechend, Il mulino di Amleto. Saggio sul mito e sulla struttura del tempo (Milan: Adelphi, 2000, 8th expanded Italian edition, 630 pages) German edition: Giorgio de Santillana, Hertha von Dechend: Die Mühle des Hamlet. Ein Essay über Mythos und das Gerüst der Zeit (Berlin: Kammerer und Unverzagt, 1993) French edition: Giorgio de Santillana; Hertha von Dechend, Claude Gaudriault (tr.)
Le moulin d'Hamlet : la connaissance, origine et transmission par les mythes (Paris: Editions Edite, 2012) See also Athanasius Kircher Charles François Dupuis Marcel Griaule Geomythology The Masks of God References Bibliography External links Hamlet's Mill - Full Text Mythology books Comparative mythology Archaeoastronomy 1969 non-fiction books Harvard University Press books
Hamlet's Mill
[ "Astronomy" ]
2,253
[ "Archaeoastronomy", "Astronomical sub-disciplines" ]
7,226,822
https://en.wikipedia.org/wiki/Futoshiki
Futoshiki, or More or Less, is a logic puzzle game from Japan. Its name means "inequality". It is also spelled hutosiki (using Kunrei-shiki romanization). Futoshiki was developed by Tamaki Seto in 2001. The puzzle is played on a square grid. The objective is to place the numbers 1 to n (where n is the side length of the grid) such that each row and each column contains every digit exactly once. Some digits may be given at the start. Inequality constraints are initially specified between some of the squares, such that one must be higher or lower than its neighbor. These constraints must be honored in order to complete the puzzle. Strategy Solving the puzzle requires a combination of logical techniques. Numbers in each row and column restrict the number of possible values for each position, as do the inequalities. Once the table of possibilities has been determined, a crucial tactic to solve the puzzle involves "AB elimination", in which subsets are identified within a row whose range of values can be determined. Another important technique is to work through the range of possibilities in open inequalities: assuming a value on one side of an inequality determines values elsewhere, which can then be worked through the puzzle until a contradiction is reached, excluding the assumed value. A solved futoshiki puzzle is a Latin square. Futoshiki in the United Kingdom A futoshiki puzzle is published in the following UK newspapers: The Daily Telegraph — Saturdays Dundee Courier — daily i — Mondays through Fridays The Guardian — Saturdays The Times — daily Notes Logic puzzles Latin squares Japanese board games
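The search procedure described above can be made concrete in code. Below is a minimal Python backtracking solver (an illustrative sketch, not part of the article); the puzzle encoding (a dictionary of given digits plus a list of (smaller-cell, larger-cell) pairs) and the 4x4 example puzzle are hypothetical:

```python
def solve_futoshiki(n, givens, inequalities):
    """Brute-force backtracking solver for an n x n Futoshiki grid.

    givens: dict mapping (row, col) -> fixed digit.
    inequalities: list of ((r1, c1), (r2, c2)) pairs meaning
    grid[r1][c1] < grid[r2][c2].
    """
    grid = [[givens.get((r, c), 0) for c in range(n)] for r in range(n)]

    def ok(r, c, v):
        # Latin-square constraint: v must not repeat in row r or column c.
        if any(grid[r][j] == v for j in range(n)):
            return False
        if any(grid[i][c] == v for i in range(n)):
            return False
        # Inequality constraints touching (r, c); empty cells (0) defer.
        for a, b in inequalities:
            if a == (r, c) and grid[b[0]][b[1]] and v >= grid[b[0]][b[1]]:
                return False
            if b == (r, c) and grid[a[0]][a[1]] and grid[a[0]][a[1]] >= v:
                return False
        return True

    def backtrack(cell):
        if cell == n * n:
            return True
        r, c = divmod(cell, n)
        if grid[r][c]:                      # cell pre-filled by a given
            return backtrack(cell + 1)
        for v in range(1, n + 1):
            if ok(r, c, v):
                grid[r][c] = v
                if backtrack(cell + 1):
                    return True
                grid[r][c] = 0
        return False

    return grid if backtrack(0) else None

# Hypothetical 4x4 puzzle: one given digit, one "<" constraint.
print(solve_futoshiki(4, {(0, 0): 2}, [((1, 0), (1, 1))]))
```

A practical solver would interleave this search with the candidate-table pruning and inequality propagation described above; the blind search is exponential in the worst case but adequate for the 5x5 to 7x7 sizes printed in newspapers.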
Futoshiki
[ "Mathematics" ]
316
[ "Recreational mathematics", "Latin squares" ]
7,228,030
https://en.wikipedia.org/wiki/Emil%20Oskar%20Nobel
Emil Oskar Nobel (a.k.a. Oscar; 29 October 1843 – 3 September 1864) was a member of the Nobel family. Biography Emil Nobel was born in Saint Petersburg, Russia. He was the youngest son of Immanuel Nobel (1801–1872) and Karolina Andrietta Ahlsell (1803–1889). He was the brother of Robert Nobel, Ludvig Nobel and Alfred Nobel. In 1842, Immanuel Nobel opened a workshop with a foundry in St. Petersburg, returning to Sweden in 1859 with his youngest sons Emil and Alfred. Emil was the only member of the family to go to college, attending Uppsala University. Emil died, together with several other factory workers, in an explosion while experimenting with nitroglycerine at Nobels Sprängolja, his father's factory at Heleneborg in Stockholm. At the time, Nobel was a student in Uppsala but also helped in the factory. After the explosion, production of nitroglycerin was banned in the factory, but continued close to Heleneborg on an anchored barge in a bay of Lake Mälaren. His brother Alfred was not in the factory at the time of Emil's death but later managed to stabilize nitroglycerine with a diatomaceous earth called kieselguhr, creating dynamite. References Other sources Förfärlig olyckshändelse i Stockholm. Nya Dagligt Allehanda. 3 September 1864 1843 births 1864 deaths Emil Oskar Uppsala University alumni Deaths from explosion Industrial accident deaths Expatriates in the Russian Empire
Emil Oskar Nobel
[ "Chemistry" ]
316
[ "Deaths from explosion", "Explosions" ]
7,228,413
https://en.wikipedia.org/wiki/Watermarking%20attack
In cryptography, a watermarking attack is an attack on disk encryption methods where the presence of a specially crafted piece of data can be detected by an attacker without knowing the encryption key. Problem description Disk encryption suites generally operate on data in 512-byte sectors which are individually encrypted and decrypted. These 512-byte sectors alone can use any block cipher mode of operation (typically CBC), but since arbitrary sectors in the middle of the disk need to be accessible individually, they cannot depend on the contents of their preceding/succeeding sectors. Thus, with CBC, each sector has to have its own initialization vector (IV). If these IVs are predictable by an attacker (and the filesystem reliably starts file content at the same offset to the start of each sector, and files are likely to be largely contiguous), then there is a chosen plaintext attack which can reveal the existence of encrypted data. The problem is analogous to that of using block ciphers in the electronic codebook (ECB) mode, but instead of whole blocks, only the first blocks of different sectors are identical. The problem can be relatively easily eliminated by making the IVs unpredictable with, for example, ESSIV. Alternatively, one can use modes of operation specifically designed for disk encryption (see disk encryption theory). This weakness affected many disk encryption programs, including older versions of BestCrypt as well as the now-deprecated cryptoloop. To carry out the attack, a specially crafted plaintext file is created for encryption in the system under attack, to "NOP-out" the IV such that the first ciphertext block in two or more sectors is identical. This requires that the input to the cipher (plaintext P_i XOR initialization vector IV_i) be the same for each of these blocks; i.e., P_1 ⊕ IV_1 = P_2 ⊕ IV_2. Thus, the plaintext blocks must be chosen such that P_2 = P_1 ⊕ IV_1 ⊕ IV_2. The ciphertext block patterns generated in this way give away the existence of the file, without any need for the disk to be decrypted first. See also Disk encryption theory Initialization vector Block cipher modes of operation Watermark References Cryptographic attacks Disk encryption
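This crafting step can be demonstrated with a short, self-contained Python sketch (illustrative only, not from the article). A hash-based toy function stands in for a real block cipher such as AES, and the sector-number IV scheme is a hypothetical example of predictable IVs:

```python
import hashlib

BLOCK = 16  # bytes per cipher block

def toy_block_cipher(key: bytes, block: bytes) -> bytes:
    # Stand-in for AES: any fixed keyed pseudorandom function works here.
    return hashlib.sha256(key + block).digest()[:BLOCK]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def predictable_iv(sector: int) -> bytes:
    # Weak scheme under attack: IV is just the sector number (hypothetical).
    return sector.to_bytes(BLOCK, "little")

def encrypt_first_block(key: bytes, sector: int, plaintext_block: bytes) -> bytes:
    # First CBC step for a sector: C1 = E(K, P1 XOR IV).
    return toy_block_cipher(key, xor(plaintext_block, predictable_iv(sector)))

key = b"sixteen byte key"
s1, s2 = 100, 101                      # two sectors of the crafted file
p1 = b"A" * BLOCK                      # arbitrary first plaintext block
# Craft P2 so the cipher inputs collide: P2 = P1 XOR IV1 XOR IV2.
p2 = xor(p1, xor(predictable_iv(s1), predictable_iv(s2)))

c1 = encrypt_first_block(key, s1, p1)
c2 = encrypt_first_block(key, s2, p2)
assert c1 == c2  # identical ciphertext blocks betray the crafted file
print("watermark visible:", c1 == c2)
```

Because the two cipher inputs collide, the first ciphertext blocks of the two sectors match, and an attacker scanning the encrypted disk can detect this pattern without knowing the key.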
Watermarking attack
[ "Technology" ]
449
[ "Cryptographic attacks", "Computer security exploits" ]
7,228,438
https://en.wikipedia.org/wiki/Darwinian%20literary%20studies
Darwinian literary studies (also known as literary Darwinism) is a branch of literary criticism that studies literature in the context of evolution by means of natural selection, including gene-culture coevolution. It represents an emerging trend of neo-Darwinian thought in intellectual disciplines beyond those traditionally considered as evolutionary biology: evolutionary psychology, evolutionary anthropology, behavioral ecology, evolutionary developmental psychology, cognitive psychology, affective neuroscience, behavioral genetics, evolutionary epistemology, and other such disciplines. History and scope Interest in the relationship between Darwinism and the study of literature began in the nineteenth century, for example among Italian literary critics: Ugo Angelo Canello argued that literature was the history of the human psyche and, as such, played a part in the struggle for natural selection, while Francesco de Sanctis argued that Emile Zola "brought the concepts of natural selection, struggle for existence, adaptation and environment to bear in his novels". Modern Darwinian literary studies arose in part as a result of its proponents' dissatisfaction with the poststructuralist and postmodernist philosophies that had come to dominate literary study during the 1970s and 1980s. In particular, the Darwinists took issue with the argument that discourse constructs reality. The Darwinists argue that biologically grounded dispositions constrain and inform discourse. This argument runs counter to what evolutionary psychologists assert is the central idea in the "Standard Social Science Model": that culture wholly constitutes human values and behaviors. Literary Darwinists use concepts from evolutionary biology and the evolutionary human sciences to formulate principles of literary theory and interpret literary texts. They investigate interactions between human nature and the forms of cultural imagination, including literature and its oral antecedents. By "human nature", they mean a pan-human, genetically transmitted set of dispositions: motives, emotions, features of personality, and forms of cognition. Because the Darwinists concentrate on relations between genetically transmitted dispositions and specific cultural configurations, they often describe their work as "biocultural critique". Many literary Darwinists aim not just at creating another "approach" or "movement" in literary theory; they aim at fundamentally altering the paradigm within which literary study is now conducted. They want to establish a new alignment among the disciplines and ultimately to encompass all other possible approaches to literary study. They rally to Edward O. Wilson's cry for "consilience" among all the branches of learning. Like Wilson, they envision nature as an integrated set of elements and forces extending in an unbroken chain of material causation from the lowest level of subatomic particles to the highest levels of cultural imagination. And like Wilson, they regard evolutionary biology as the pivotal discipline uniting the hard sciences with the social sciences and the humanities. They believe that humans have evolved in an adaptive relation to their environment. They argue that for humans, as for all other species, evolution has shaped the anatomical, physiological, and neurological characteristics of the species, and they think that human behavior, feeling, and thought are fundamentally shaped by those characteristics.
They make it their business to consult evolutionary biology and evolutionary social science in order to determine what those characteristics are, and they bring that information to bear on their understanding of the products of the human imagination. Evolutionary literary criticism of a minimalist kind consists in identifying basic, common human needs—survival, sex, and status, for instance—and using those categories to describe the behavior of characters depicted in literary texts. Others pose for themselves a form of criticism involving an overarching interpretive challenge: to construct continuous explanatory sequences linking the highest level of causal evolutionary explanation to the most particular effects in individual works of literature. Within evolutionary biology, the highest level of causal explanation involves adaptation by means of natural selection. Starting from the premise that the human mind has evolved in an adaptive relation to its environment, literary Darwinists undertake to characterize the phenomenal qualities of a literary work (tone, style, theme, and formal organization), locate the work in a cultural context, explain that cultural context as a particular organization of the elements of human nature within a specific set of environmental conditions (including cultural traditions), identify an implied author and an implied reader, examine the responses of actual readers (for instance, other literary critics), describe the socio-cultural, political, and psychological functions the work fulfills, locate those functions in relation to the evolved needs of human nature, and link the work comparatively with other artistic works, using a taxonomy of themes, formal elements, affective elements, and functions derived from a comprehensive model of human nature. Contributors to evolutionary studies in literature have included humanists, biologists, and social scientists. Some of the biologists and social scientists have adopted primarily discursive methods for discussing literary subjects, and some of the humanists have adopted the empirical, quantitative methods typical of research in the sciences. Literary scholars and scientists have also collaborated in research that combines the methods typical of work in the humanities with methods typical of work in the sciences. Adaptive function of literature and the arts The most hotly debated issue in evolutionary literary study concerns the adaptive functions of literature and other arts—whether there are any adaptive functions, and if so, what they might be. Proposed functions include transmitting information, including about kin relations, and by providing the audience with a model and rehearsal for how to behave in similar situations that may arise in the future. Steven Pinker (How the Mind Works, 1997) suggests that aesthetic responsiveness is merely a side effect of cognitive powers that evolved to fulfill more practical functions, but Pinker also suggests that narratives can provide information for adaptively relevant problems. Geoffrey Miller (The Mating Mind, 2000) argues that artistic productions in the ancestral environment served as forms of sexual display in order to demonstrate fitness and attract mates, similarly to the function of the peacock's tail. Brian Boyd (On the Origin of Stories, 2009) argues that the arts are forms of cognitive "play" that enhance pattern recognition. 
In company with Ellen Dissanayake (Art and Intimacy, 2000), Boyd also argues that the arts provide means of creating shared social identity and help create and maintain human bonding. Dissanayake, Joseph Carroll (Literary Darwinism 2004), and Denis Dutton (The Art Instinct, 2009) all argue that the arts help organize the human mind by giving emotionally and aesthetically modulated models of reality. By participating in the simulated life of other people one gains a greater understanding of the motivations of oneself and other people. The idea that the arts function as means of psychological organization subsumes the ideas that the arts provide adaptively relevant information, enable us to consider alternative behavioral scenarios, enhance pattern recognition, and serve as means for creating shared social identity. And of course, the arts can be used for sexual display. In that respect, the arts are like most other human products—clothing, jewelry, shelter, means of transportation, etc. The hypothesis that the arts help organize the mind is not incompatible with the hypothesis of sexual display, but it subordinates sexual display to a more primary adaptive function. Hypotheses about formal literary features Some Darwinists have proposed explanations for formal literary features, including genres. Poetic meter has been attributed to a biologically based three-second metric. Gender preferences for pornography and romance novels have been explained by sexual selection. Different genres have been conjectured to correspond to different basic emotions: tragedy corresponding to sadness, fear, and anger; comedy to joy and surprise; and satire to anger, disgust, and contempt. Tragedy has also been associated with status conflict and comedy with mate selection. The satiric dystopian novel has been explained by contrasting universal human needs and oppressive state organization. Distinguishing literary Darwinism Cosmic evolutionism and evolutionary analogism: Literary Theorists who would call themselves "literary Darwinists" or claim some close alignment with the literary Darwinists share one central idea: that the adapted mind produces literature and that literature reflects the structure and character of the adapted mind. There are at least two other ways of integrating evolution into literary theory: cosmic evolutionism and evolutionary analogism. Cosmic evolutionists identify some universal process of development or progress and identify literary structures as microcosmic versions of that process. Proponents of cosmic evolution include Frederick Turner, Alex Argyros, and Richard Cureton. Evolutionary analogists take the process of Darwinian evolution—blind variation and selective retention—as a widely applicable model for all development. The psychologist Donald Campbell advances the idea that all intellectual creativity can be conceived as a form of random variation and selective retention. Rabkin and Simon offer an instance in literary study. They argue that cultural creations "evolve in the same way as do biological organisms, that is, as complex adaptive systems that succeed or fail according to their fitness to their environment." Other critics or theorists who have some affiliation with evolutionary biology but who would not identify themselves as literary Darwinists include William Benzon (Beethoven's Anvil) and William Flesch (Comeuppance). 
Cognitive rhetoric: Practitioners of "cognitive rhetoric" or cognitive poetics affiliate themselves with certain language-centered areas of cognitive psychology. The chief theorists in this school argue that language is based in metaphors, and they claim that metaphors are themselves rooted in biology or the body, but they do not argue that human nature consists in a highly structured set of motivational and cognitive dispositions that have evolved through an adaptive process regulated by natural selection. Cognitive rhetoricians are generally more anxious than literary Darwinists to associate themselves with postmodern theories of "discourse," but some cognitive rhetoricians make gestures toward evolutionary psychology, and some critics closely affiliated with evolutionary psychology have found common ground with the cognitive rhetoricians. The seminal authorities in cognitive rhetoric are the language philosophers Mark Johnson and George Lakoff. The most prominent literary theorist in the field is Mark Turner. Other literary scholars associated with cognitive rhetoric include Mary Thomas Crane, F. Elizabeth Hart, Tony Jackson, Alan Richardson, Ellen Spolsky, Francis Steen, and Lisa Zunshine. Critical commentaries Some of the commentaries included in the special double issue of Style are critical of literary Darwinism. Other critical commentaries include those of William Benzon, "Signposts for a Naturalist Criticism" (Entelechy: Mind & Culture, Fall 2005/Winter 2006); William Deresiewicz, "Adaptation: On Literary Darwinism," The Nation, June 8, 2009: 26-31; William Flesch, Comeuppance: Costly Signaling, Altruistic Punishment, and Other Biological Components of Fiction (Cambridge: Harvard UP, 2008); Eugene Goodheart, Darwinian Misadventures in the Humanities (New Brunswick, NJ: Transaction, 2007); Jonathan Kramnick, "Against Literary Darwinism," in Critical Inquiry, Winter 2011; "Debating Literary Darwinism," a set of responses to Jonathan Kramnick's essay, along with Kramnick's rejoinder, in Critical Inquiry, Winter 2012; Alan Richardson, "Studies in Literature and Cognition: A Field Map," in The Work of Fiction: Cognition, Culture, and Complexity, ed. Alan Richardson and Ellen Spolsky (Burlington, VT: Ashgate, 2004), 1-29; and Lisa Zunshine, "What is Cognitive Cultural Studies?," in Introduction to Cognitive Cultural Studies (Johns Hopkins UP, 2010), 1-33. Goodheart and Deresiewicz, adopting a traditional humanist perspective, reject efforts to ground literary study in biology. Richardson disavows the Darwinists' tendency to attack poststructuralism. Richardson and Benzon both align themselves with cognitive science and distinguish that alignment from one with evolutionary psychology. Flesch makes use of evolutionary research on game theory, costly signaling, and altruistic punishment but, like Stephen Jay Gould, professes himself hostile to evolutionary psychology. For a commentary that is sympathetic to evolutionary psychology but skeptical about the possibilities of using it for literary study, see Steven Pinker, "Toward a Consilient Study of Literature," a review of The Literary Animal, Philosophy and Literature 31 (2007): 162-178.
David Fishelov has argued that the attempt to link Darwinism to literary studies has failed "to produce compelling evidence to support some of its basic assumptions (notably that literature is an adaptation)" and has called on literary scholars to be more conceptually rigorous when they pursue "empirical research into different aspects of literary evolution" (David Fishelov, "Evolution and Literary Studies: Time to Evolve," Philosophy and Literature, Vol. 41, No. 2, October 2017: 286). Whitley Kaufman has argued that the Darwinist approach to literature has caused its proponents to misunderstand what is important and great in literature. See also James Arthur Anderson Brian Boyd Joseph Carroll Ellen Dissanayake Denis Dutton Evolutionary philosophy Jonathan Gottschall Mathias Clasen Evolutionary psychology Universal Darwinism References Bibliography In addition to books oriented specifically to literature, this list includes books on cinema and books by authors who propound theories like those of the literary Darwinists but discuss the arts in general. Anderson, James Arthur. 2020. Excavating Stephen King: A Darwinist Hermeneutic Study of the Fiction. Lexington Books. Anderson, Joseph. 1996. The Reality of Illusion: An Ecological Approach to Cognitive Film Theory. Southern Illinois Press. Austin, Michael. 2010. Useful Fictions: Evolution, Anxiety, and the Origins of Literature. University of Nebraska Press. Barash, David P., and Nanelle Barash. 2005. Madame Bovary's Ovaries: A Darwinian Look at Literature. Delacorte Press. Blair, Linda Nicole. 2017. Virginia Woolf and the Power of Story: A Literary Darwinist Reading of Six Novels. McFarland. Bordwell, David. 2008. Poetics of Cinema. Routledge. Boyd, Brian. 2009. On the Origin of Stories: Evolution, Cognition, and Fiction. Harvard University Press. Boyd, Brian, Joseph Carroll, and Jonathan Gottschall, eds. 2010. Evolution, Literature, and Film: A Reader. Columbia University Press. Canello, Ugo Angelo. 1882. Letteratura e darwinismo: lezioni due. Padova, Tipografia A. Draghi. Carroll, Joseph. 1995. Evolution and Literary Theory. University of Missouri Press. Carroll, Joseph. 2004. Literary Darwinism: Evolution, Human Nature, and Literature. Routledge. Carroll, Joseph. 2011. Reading Human Nature: Literary Darwinism in Theory and Practice. SUNY Press. Carroll, Joseph, Jonathan Gottschall, John Johnson, and Daniel Kruger. 2012. Graphing Jane Austen: The Evolutionary Basis of Literary Meaning. Palgrave. Clasen, Mathias. 2017. Why Horror Seduces. Oxford University Press. Coe, Kathryn. 2003. The Ancestress Hypothesis: Visual Art as Adaptation. Rutgers University Press. Cooke, Brett. 2002. Human Nature in Utopia: Zamyatin's We. Northwestern University Press. Cooke, Brett, and Frederick Turner, eds. 1999. Biopoetics: Evolutionary Explorations in the Arts. ICUS. Dissanayake, Ellen. 2000. Art and Intimacy: How the Arts Began. University of Washington Press. Dissanayake, Ellen. 1995. Homo Aestheticus. University of Washington Press. Dissanayake, Ellen. 1990. What Is Art For? University of Washington Press. Dutton, Denis. 2009. The Art Instinct: Beauty, Pleasure, and Human Evolution. Oxford University Press. Easterlin, Nancy. 2012. A Biocultural Approach to Literary Theory and Interpretation. Johns Hopkins University Press. Fromm, Harold. 2009. The Nature of Being Human: From Environmentalism to Consciousness. Johns Hopkins University Press. Gottschall, Jonathan. 2008. Literature, Science, and a New Humanities. Palgrave Macmillan.
Gottschall, Jonathan. 2007. The Rape of Troy: Evolution, Violence, and the World of Homer. Cambridge. Gottschall, Jonathan. 2012. The Storytelling Animal: How Stories Make Us Human. Houghton Mifflin. Gottschall, Jonathan, and David Sloan Wilson, eds. 2005. The Literary Animal: Evolution and the Nature of Narrative. Northwestern University Press. Grodal, Torben. 2009. Embodied Visions: Evolution, Emotion, Culture, and Film. Oxford University Press. Headlam Wells, Robin. 2005. Shakespeare's Humanism. Cambridge University Press. Headlam Wells, Robin, and Johnjoe McFadden, eds. 2006. Human Nature: Fact and Fiction. Continuum. Hoeg, Jerry, and Kevin S. Larsen, eds. 2009. Interdisciplinary Essays on Darwinism in Hispanic Literature and Film: The Intersection of Science and the Humanities. Mellen. Hood, Randall. 1979. The Genetic Function and Nature of Literature. Cal Poly, San Luis Obispo. Kaufman, Whitley. 2016. Human Nature and the Limits of Darwinism. Palgrave, New York. Love, Glen. 2003. Practical Ecocriticism: Literature, Biology, and the Environment. University of Virginia Press. Machann, Clinton. 2009. Masculinity in Four Victorian Epics: A Darwinist Reading. Ashgate. Martindale, Colin, Paul Locher, and Vladimir M. Petrov, eds. 2007. Evolutionary and Neurocognitive Approaches to Aesthetics, Creativity, and the Arts. Baywood. Nordlund, Marcus. 2007. Shakespeare and the Nature of Love: Literature, Culture, Evolution. Northwestern University Press. Parrish, Alex C. 2013. Adaptive Rhetoric: Evolution, Culture, and the Art of Persuasion. Routledge. Salmon, Catherine, and Donald Symons. 2001. Warrior Lovers: Erotic Fiction, Evolution, and Female Sexuality. Weidenfeld & Nicolson. Saunders, Judith. 2009. Reading Edith Wharton through a Darwinian Lens: Evolutionary Biological Issues in Her Fiction. McFarland. Saunders, Judith. 2018. American Literary Classics: Evolutionary Perspectives. Academic Studies Press. Storey, Robert. 1996. Mimesis and the Human Animal: On the Biogenetic Foundations of Literary Representation. Northwestern University Press. Swirski, Peter. 2010. Literature, Analytically Speaking: Explorations in the Theory of Interpretation, Analytic Aesthetics, and Evolution. University of Texas Press. Swirski, Peter. 2007. Of Literature and Knowledge: Explorations in Narrative Thought Experiments, Evolution, and Game Theory. Routledge. Vermeule, Blakey. 2010. Why Do We Care about Literary Characters? Johns Hopkins University Press. Edited collections: The volume edited by Boyd, Carroll, and Gottschall (2010) is an anthology, that is, a selection of essays and book excerpts, most of which had been previously published. Collections of essays that had not, for the most part, been previously published include those edited by Cooke and Turner (1999); Gottschall and Wilson (2005); Headlam Wells and McFadden (2006); Martindale, Locher, and Petrov (2007); Gansel and Vanderbeke; and Hoeg and Larsen (2009). Journals: Much evolutionary literary criticism has been published in the journal Philosophy and Literature. The journal Style has also been an important venue for the Darwinists. Social science journals that have published research on the arts include Evolution and Human Behavior and Human Nature, among others. The first issue of an annual volume The Evolutionary Review: Art, Science, Culture appeared in 2010; the journal ceased publication in 2013.
The first issue of a semi-annual journal Evolutionary Studies in Imaginative Culture appeared in spring of 2017. Symposia: A special double-issue of the journal Style (vol. 42, numbers 2/3, summer/fall 2008) was devoted to evolutionary literary theory and criticism, with a target article by Joseph Carroll ("An Evolutionary Paradigm for Literary Study"), responses by 35 scholars and scientists, and a rejoinder by Carroll. Also, a special evolutionary issue of the journal contains 32 essays, including contributions to a symposium on the question "How is culture biological?", which includes six primary essays along with responses and rejoinders. Discussion groups: Online forums for news and discussion include the Biopoetics listserv, the Facebook group for Evolutionary Narratology, and the Facebook homepage for The Evolutionary Review. Researchers with similar interests can also be located on Academia.edu by searching for people who have a research interest in Evolutionary Literary Criticism and Theory / Biopoetics or in Literary Darwinism or Evolutionary Literary Theory. External links Mathias Clasen's website (in English and Danish) The Evolutionary Review: Art, Science, Culture The Literature Project: Maya Lessov's Interviews with Scholars Involved in the Debate over the Two Cultures Evolutionary Studies in Imaginative Culture Darwinism Literary theory Evolutionary biology Evolutionary psychology
Darwinian literary studies
[ "Biology" ]
4,265
[ "Evolutionary biology" ]
7,232,913
https://en.wikipedia.org/wiki/Mauisaurus
Mauisaurus ("Māui lizard") is a dubious genus of plesiosaur that lived during the Late Cretaceous period in what is now New Zealand. Numerous specimens have been attributed to this genus in the past, but a 2017 paper restricts Mauisaurus to the lectotype and declares it a nomen dubium. History of discovery Mauisaurus remains have all been found in New Zealand's South Island, in Canterbury. Mauisaurus haasti was described by Hector in 1874 based on eight specimens and diagnosed by its cervical vertebrae and a humerus with large tuberosities. However, of these eight specimens, two, consisting of ribs and paddle, were lost, while another, the cast of a jaw fragment (the original fossil of which was also lost) was found to be a mosasaur. The most substantial specimen, 8a (DM R1529), consisted of fragmentary pubes, a partial ilium and hindlimbs, originally misidentified as part of the pectoral girdle. Mauisaurus gets its name from the New Zealand Māori mythological demigod, Māui. Māui is said to have pulled New Zealand up from the seabed using a fish hook, thus creating the country. Thus, Mauisaurus means "Māui lizard". Mauisaurus gets its scientific last name from its original finder, Julius von Haast, who found the first Mauisaurus fossil in 1870 around Gore Bay, New Zealand. The specimen was then first described in 1874. A second species was also named by Hector, Mauisaurus brachiolatus, based on the proximal end of a very large humerus as well as a humerus together with radius and radiale. There was some confusion regarding this species, as the description named it M. latibrachialis, while the specimen list included it under the name M. brachiolatus. In 1962 specimen 8a was declared the lectotype of Mauisaurus haasti by Welles who further suggested that M. brachiolatus should be deemed a nomen vanum in an overview of Cretaceous plesiosaurs. Later in 1971 Welles & Gregg revised the diagnosis of M. haasti and produced a detailed description of the lectotype, assigned Hector's specimen 8g as the paralectotype and rejected the remaining 3 specimen of Hector's original 8 as non-diagnostic, while themselves referring 9 new specimens (including both "M. brachiolatus" specimens) to the species. Mauisaurus was examined once more in 2005 by Hiller et al., rejecting the inclusion of the former "M. brachiolatus" material as well as several of the specimens referred to Mauisaurus in 1971, deeming all of them undiagnostic. In the same paper two more specimens are instead referred to the genus. One of these specimens, CM Zfr 115, consisted of skull bones, a nearly complete series of vertebrae and bones from all four limbs. The animal was considered to be over in length. A variety of other specimens were also referred to either Mauisaurus sp. or cf. Mauisaurus sp. during the early to late 2000s. It was later concluded that a hemispherical femoral capitulum, the defining apomorphy of Mauisaurus was also present in members of the Aristonectinae, which referred specimen CM Zfr 115 with its more than 60 neck vertebrae did not belong to. This , together with additional information from Aristonectes quiriquinensis and Kaiwhekea katiki, was discussed in detail by Otero et al. in 2015. The presence of femora with strongly hemispherical capitula in more than one aristonectine and also in non-aristonectine elasmosaurids brings Mauisaurus once again into question, with material previously referred to it now being placed in separate clades. 
Other anatomical features of Mauisaurus were also found amongst both aristonectines and non-aristonectine elasmosaurs, ruling them out as apomorphies. More refined biostratigraphy furthermore questions the referral of many specimens, as the analysis showed that the various fossils attributed to this genus range from the middle Campanian to the early Maastrichtian, a timespan of 10 million years (longer when taking into account referred specimens from Antarctica and South America). Such a timespan for a single genus is deemed unusually long by Hiller et al. The paper concludes that the hypodigm of Mauisaurus consists of more than one taxon, with Mauisaurus's only significant apomorphy being present in a variety of genera from different clades, rendering it non-diagnostic. While DM R1529 remains the lectotype, the genus must be treated as a nomen dubium and should instead be referred to as Elasmosauridae indet. Description Little can be said about the appearance of Mauisaurus as the only known material is an undiagnostic, fragmentary pelvic area and flippers. The lectotype material shows some features that may indicate aristonectine affinities, but simultaneously possesses anatomical features more consistent with non-aristonectine elasmosaurs. Cultural significance Mauisaurus is one of the few prehistoric creatures known from New Zealand, and so has had much publicity in the country. On 1 October 1993, a set of stamps was released to the general public. Although the set depicted many other dinosaurs and prehistoric creatures, Mauisaurus was featured hunting fish on the $1.20 stamp. See also List of plesiosaur genera Timeline of plesiosaur research References Late Cretaceous plesiosaurs Extinct animals of New Zealand Fossil taxa described in 1874 Elasmosaurids Plesiosaurs of Oceania Taxa named by James Hector Sauropterygian genera Nomina dubia Taxa with lost type specimens
Mauisaurus
[ "Biology" ]
1,202
[ "Biological hypotheses", "Nomina dubia", "Controversial taxa" ]
7,233,280
https://en.wikipedia.org/wiki/Semantic%20interoperability
Semantic interoperability is the ability of computer systems to exchange data with unambiguous, shared meaning. Semantic interoperability is a requirement to enable machine computable logic, inferencing, knowledge discovery, and data federation between information systems. Semantic interoperability is therefore concerned not just with the packaging of data (syntax), but also with the simultaneous transmission of the meaning with the data (semantics). This is accomplished by adding data about the data (metadata), linking each data element to a controlled, shared vocabulary. The meaning of the data is transmitted with the data itself, in one self-describing "information package" that is independent of any information system. It is this shared vocabulary, and its associated links to an ontology, that provides the foundation and capability of machine interpretation, inference, and logic. Syntactic interoperability (see below) is a prerequisite for semantic interoperability. Syntactic interoperability refers to the packaging and transmission mechanisms for data. In healthcare, HL7 has been in use for over thirty years (which predates the internet and web technology), and uses the pipe character (|) as a data delimiter. The current internet standard for document markup is XML, which uses "< >" as a data delimiter. The data delimiters convey no meaning to the data other than to structure the data. Without a data dictionary to translate the contents of the delimiters, the data remains meaningless. While there are many attempts at creating data dictionaries and information models to associate with these data packaging mechanisms, none have been practical to implement. This has only perpetuated the ongoing "babelization" of data and inability to exchange data with meaning. Since the introduction of the Semantic Web concept by Tim Berners-Lee in 1999, there has been growing interest and application of the W3C (World Wide Web Consortium) standards to provide web-scale semantic data exchange, federation, and inferencing capabilities. Semantic as a function of syntactic interoperability Syntactic interoperability, provided for instance by XML or the SQL standards, is a prerequisite to semantic interoperability. It involves a common data format and common protocol to structure any data so that the manner of processing the information will be interpretable from the structure. It also allows detection of syntactic errors, thus allowing receiving systems to request resending of any message that appears to be garbled or incomplete. No semantic communication is possible if the syntax is garbled or unable to represent the data. However, information represented in one syntax may in some cases be accurately translated into a different syntax. Where accurate translation of syntaxes is possible, systems using different syntaxes may also interoperate accurately. In some cases, the ability to accurately translate information among systems using different syntaxes may be limited to one direction, when the formalisms used have different levels of expressivity (ability to express information). A single ontology containing representations of every term used in every application is generally considered impossible, because of the rapid creation of new terms or assignments of new meanings to old terms.
However, though it is impossible to anticipate every concept that a user may wish to represent in a computer, there is the possibility of finding some finite set of "primitive" concept representations that can be combined to create any of the more specific concepts that users may need for any given set of applications or ontologies. Having a foundation ontology (also called upper ontology) that contains all those primitive elements would provide a sound basis for general semantic interoperability, and allow users to define any new terms they need by using the basic inventory of ontology elements, and still have those newly defined terms properly interpreted by any other computer system that can interpret the basic foundation ontology. Whether the number of such primitive concept representations is in fact finite, or will expand indefinitely, is a question under active investigation. If it is finite, then a stable foundation ontology suitable to support accurate and general semantic interoperability can evolve after some initial foundation ontology has been tested and used by a wide variety of users. At the present time, no foundation ontology has been adopted by a wide community, so such a stable foundation ontology is still in the future. Words and meanings One persistent misunderstanding that recurs in discussions of semantics is "the confusion of words and meanings". The meanings of words change, sometimes rapidly. But a formal language such as used in an ontology can encode the meanings (semantics) of concepts in a form that does not change. In order to determine the meaning of a particular word (or term in a database, for example) it is necessary to label each fixed concept representation in an ontology with the word(s) or term(s) that may refer to that concept. When multiple words refer to the same (fixed) concept in language this is called synonymy; when one word is used to refer to more than one concept, that is called ambiguity. Ambiguity and synonymy are among the factors that make computer understanding of language very difficult. The use of words to refer to concepts (the meanings of the words used) is very sensitive to the context and the purpose of any use for many human-readable terms. The role of ontologies in supporting semantic interoperability is to provide a fixed set of concepts whose meanings and relations are stable and can be agreed to by users. The task of determining which terms refer to which concepts in which contexts (each database is a different context) is then separated from the task of creating the ontology, and must be taken up by the designer of a database, or the designer of a form for data entry, or the developer of a program for language understanding. When the meaning of a word used in some interoperable context is changed, then to preserve interoperability it is necessary to change the pointer to the ontology element(s) that specifies the meaning of that word. Knowledge representation requirements and languages A knowledge representation language may be sufficiently expressive to describe nuances of meaning in well understood fields. There are at least five levels of complexity of these. For general semi-structured data one may use a general purpose language such as XML. Languages with the full power of first-order predicate logic may be required for many tasks. Human languages are highly expressive, but are considered too ambiguous to allow the accurate interpretation desired, given the current level of human language technology.
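As a minimal sketch of this separation (hypothetical Python, not a real ontology API), terms can be modeled as labels attached to stable concept identifiers, so that synonymy is many labels on one concept and ambiguity is one label pointing at several concepts:

```python
# Minimal illustration: stable concept IDs with attached word labels.
# All identifiers and glosses below are hypothetical examples.
ONTOLOGY = {
    "concept:0001": {"gloss": "domesticated canine", "labels": {"dog", "hound"}},
    "concept:0002": {"gloss": "to follow persistently", "labels": {"dog", "hound"}},
}

def concepts_for(word):
    """Ambiguity: one word may map to several fixed concepts."""
    return [cid for cid, c in ONTOLOGY.items() if word in c["labels"]]

def labels_for(concept_id):
    """Synonymy: several words may refer to one fixed concept."""
    return ONTOLOGY[concept_id]["labels"]

print(concepts_for("dog"))         # ['concept:0001', 'concept:0002'] -> ambiguous
print(labels_for("concept:0001"))  # {'dog', 'hound'}                 -> synonyms

# A database column is a context: it disambiguates by pointing at a
# concept ID rather than a word, so the meaning travels with the data.
column_binding = {"table": "pets", "column": "species", "means": "concept:0001"}
```

The point of the sketch is that the word-to-concept pointers, not the words themselves, are what must be updated when usage shifts, exactly as the paragraph above describes.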
Semantically interoperable healthcare systems leverage data in a standardized way as they break down and share information. For example, two systems can recognize terminology, medication symbols, and other nuances while exchanging data automatically, without human intervention. Prior agreement not required Semantic interoperability may be distinguished from other forms of interoperability by considering whether the information transferred has, in its communicated form, all of the meaning required for the receiving system to interpret it correctly, even when the algorithms used by the receiving system are unknown to the sending system. Consider sending one number: If that number is intended to be the sum of money owed by one company to another, it implies some action or lack of action on the part of both those who send it and those who receive it. It may be correctly interpreted if sent in response to a specific request, and received at the time and in the form expected. This correct interpretation does not depend only on the number itself, which could represent almost any of millions of types of quantitative measurement; rather, it depends strictly on the circumstances of transmission. That is, the interpretation depends on both systems expecting that the algorithms in the other system use the number in exactly the same sense, and it depends further on the entire envelope of transmissions that preceded the actual transmission of the bare number. By contrast, if the transmitting system does not know how the information will be used by other systems, it is necessary to have a shared agreement on how information with some specific meaning (out of many possible meanings) will appear in a communication. For a particular task, one solution is to standardize a form, such as a request for payment; that request would have to encode, in standardized fashion, all of the information needed to evaluate it, such as: the agent owing the money, the agent owed the money, the nature of the action giving rise to the debt, the agents, goods, services, and other participants in that action; the time of the action; the amount owed and currency in which the debt is reckoned; the time allowed for payment; the form of payment demanded; and other information. When two or more systems have agreed on how to interpret the information in such a request, they can achieve semantic interoperability for that specific type of transaction. For semantic interoperability generally, it is necessary to provide standardized ways to describe the meanings of many more things than just commercial transactions, and the number of concepts whose representation needs to be agreed upon is at a minimum several thousand. Ontology research How to achieve semantic interoperability for more than a few restricted scenarios is currently a matter of research and discussion. For the problem of General Semantic Interoperability, some form of foundation ontology ('upper ontology') is required that is sufficiently comprehensive to provide the definition of concepts for more specialized ontologies in multiple domains. Over the past decade, more than ten foundation ontologies have been developed, but none have as yet been adopted by a wide user base.
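As a concrete sketch of the standardized payment request described earlier, the message can travel in a self-describing form whose fields point at terms in an agreed vocabulary, so a receiver needs no prior knowledge of the sender's algorithms. The vocabulary namespace and field names below are hypothetical, loosely in the style of JSON-LD:

```python
import json

# Every field is bound to a term in a shared vocabulary, so the bare
# number 1500.00 cannot be mistaken for any other kind of quantity.
VOCAB = "https://example.org/vocab/commerce#"   # hypothetical namespace

payment_request = {
    "@type": VOCAB + "PaymentRequest",
    VOCAB + "creditor": "Acme Corp.",
    VOCAB + "debtor":   "Widget Ltd.",
    VOCAB + "cause":    VOCAB + "GoodsDelivered",
    VOCAB + "amount":   {"@value": "1500.00", "@unit": VOCAB + "USD"},
    VOCAB + "dueDate":  "2024-07-01",
}

wire = json.dumps(payment_request)   # what actually crosses the network
received = json.loads(wire)
# The receiver interprets fields via the shared vocabulary, not via
# message position or prior request context.
print(received[VOCAB + "amount"])
```

The design choice being illustrated is that the agreement shifts from "how this particular exchange works" to "what these vocabulary terms mean", which is exactly the shared agreement the paragraph above calls for.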
The need for a single comprehensive all-inclusive ontology to support Semantic Interoperability can be avoided by designing the common foundation ontology as a set of basic ("primitive") concepts that can be combined to create the logical descriptions of the meanings of terms used in local domain ontologies or local databases. This tactic is based on the principle that: If: (1) the meanings and usage of the primitive ontology elements in the foundation ontology are agreed on, and (2) the ontology elements in the domain ontologies are constructed as logical combinations of the elements in the foundation ontology, Then: The intended meanings of the domain ontology elements can be computed automatically using an FOL (first-order logic) reasoner, by any system that accepts the meanings of the elements in the foundation ontology, and has both the foundation ontology and the logical specifications of the elements in the domain ontology. Therefore: Any system wishing to interoperate accurately with another system need transmit only the data to be communicated, plus any logical descriptions of terms used in that data that were created locally and are not already in the common foundation ontology. This tactic then limits the need for prior agreement on meanings to only those ontology elements in the common Foundation Ontology (FO). Based on several considerations, this may require fewer than 10,000 elements (types and relations). However, for ease of understanding and use, more ontology elements with additional detail and specifics can help to find the exact location in the FO where specific domain concepts can be found or added. In practice, together with the FO itself, which focuses on representations of the primitive concepts, a set of domain extension ontologies, whose elements are specified using FO elements, will likely also be used. Such pre-existing extensions will ease the cost of creating domain ontologies by providing existing elements with the intended meaning, and will reduce the chance of error by using elements that have already been tested. Domain extension ontologies may be logically inconsistent with each other, and that needs to be determined if different domain extensions are used in any communication. Whether use of such a single foundation ontology can itself be avoided by sophisticated mapping techniques among independently developed ontologies is also under investigation.
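A toy model of this tactic is sketched below (hypothetical Python; a real system would run a first-order-logic reasoner over a published foundation ontology, and simple set equality stands in here for logical equivalence). Domain terms are defined as combinations of agreed primitives, so any party holding the foundation can compute whether two locally defined terms coincide:

```python
# Agreed foundation ontology: a small set of primitive concepts.
# The primitives and local term names are hypothetical examples.
FOUNDATION = {"Vehicle", "Powered", "Electric", "TwoWheeled", "CarriesPeople"}

# Each system defines its local terms as frozensets of foundation primitives.
system_a = {"EBike": frozenset({"Vehicle", "Powered", "Electric", "TwoWheeled"})}
system_b = {"ElektroRad": frozenset({"Vehicle", "Electric", "TwoWheeled", "Powered"})}

def well_formed(definition):
    # A definition is interpretable iff built only from shared primitives.
    return definition <= FOUNDATION

def same_meaning(defn_a, defn_b):
    # With definitions reduced to primitives, equivalence is computable;
    # a real FOL reasoner would handle richer logical combinations.
    return defn_a == defn_b

assert well_formed(system_a["EBike"]) and well_formed(system_b["ElektroRad"])
print(same_meaning(system_a["EBike"], system_b["ElektroRad"]))  # True
```

Only the primitives need prior agreement; each locally coined term travels with its definition, which is the transmission pattern the "Therefore" clause above describes.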
The EU also set up the Semantic Interoperability Centre Europe in June 2007. Semantic Interoperability for Internet of Things Digital transformation holds huge benefits for enabling organizations to be more efficient, more flexible, and more nimble in responding to changes in business and operating conditions. This involves the need to integrate heterogeneous data and services throughout organizations. Semantic interoperability addresses the need for shared understanding of the meaning and context. To support this, a cross-organization expert group involving ISO/IEC JTC1, ETSI, oneM2M and W3C is collaborating with AIOTI on accelerating adoption of semantic technologies in the IoT. The group has published two joint white papers on semantic interoperability, respectively named "Semantic IoT Solutions – A Developer Perspective" and "Towards semantic interoperability standards based on ontologies". This follows on the success of the earlier white paper on "Semantic Interoperability for the Web of Things." Source: https://www.w3.org/blog/2019/10/aioti-iso-iec-jtc1-etsi-onem2m-and-w3c-collaborate-on-two-joint-white-papers-on-semantic-interoperability-targeting-developers-and-standardization-engineers/ See also Data integration Business semantics management Interoperability, a more general concept Ontology alignment Semantic computing UDEF, Universal Data Element Framework References External links the ONTACWG Glossary Other definitions of Semantic Interoperability MMI Guide: Achieving Semantic Interoperability Knowledge representation Technical communication Information science Ontology (information science) Computing terminology Telecommunication theory Interoperability
Semantic interoperability
[ "Technology", "Engineering" ]
3,011
[ "Computing terminology", "Telecommunications engineering", "Interoperability" ]
7,233,400
https://en.wikipedia.org/wiki/WEA%20Manufacturing
WEA Manufacturing was the record, tape, and compact disc manufacturing arm of WEA International Inc. from 1978 to 2003, when it was sold and merged into Cinram International, a previous competitor. The last owner when the plant closed was Technicolor. History WEA Manufacturing Inc. was created in 1978–1979 when Warner Communications Inc. purchased two of its longtime suppliers: the record pressing plants Specialty Records Corporation (Olyphant, Pennsylvania) and Allied Record Company (Los Angeles). The company was headquartered in Olyphant, where the original plant was replaced in late 1981 by a new facility which retained the name Specialty Records Corporation. The Specialty Records Corporation name was dropped in 1996 in favor of WEA Manufacturing. The company invested in CD manufacturing in 1986, matching a $247,000 contribution by economic development corporation Ben Franklin Technology Partners to develop and implement new processes of manufacturing audio CDs and CD-ROMs. BFTP assembled a team of experts in physics, electrical engineering, and thin film technology from the University of Scranton and Lehigh University to carry out the research and development. The Olyphant plant and another plant in Alsdorf, Germany, were expanded to support CD pressing that year, with the Olyphant facility's production commencing first in September 1986. WEA Manufacturing grew to become one of the largest manufacturers of recorded media in the world. The company began manufacturing Laserdiscs in July 1991. The company's DVD division, Warner Advanced Media Operations (WAMO), helped design the high-density format used in DVDs, and manufactured some of the first DVDs in the late 1990s. The company was sold to Cinram International in October 2003 and no longer exists under the name WEA Manufacturing, but the Olyphant plant continued to operate under its new ownership. In 2005, the company was Lackawanna County's largest employer, with over 2,300 people working at the Olyphant plant. Cinram closed the former Allied plant in 2006, while Technicolor (which purchased Cinram's assets in 2015) closed the Olyphant plant in 2018. Patents WEA Manufacturing held U.S. patents related to compact disc manufacture: Print scanner (1993). Interference of converging spherical waves with application to the design of light-readable information-recording media and systems for reading such media (2004). Method of manufacturing a composite disc structure and apparatus for performing the method (2005). Methods and apparatus for reducing the shrinkage of an optical disc's clamp area and the resulting optical disc (2005). Litigation In 1990, WEA Manufacturing was sued by a Canadian firm, Optical Recording Co. (ORC), for alleged infringement of two 1971 patents related to glass mastering equipment which was used by Time Warner and WEA Manufacturing in the manufacture of approximately 450 million CDs. ORC contended that unlike five other major CD manufacturers in the U.S., Time Warner had refused to license the technology from ORC. In 1992, a jury assessed damages of 6 cents per disc, plus $4–5 million in interest. See also Sony Digital Audio Disc Corporation, another early U.S.
manufacturer of CDs References DVD manufacturing Compact disc Digital media Optical disc authoring Former Time Warner subsidiaries Lackawanna County, Pennsylvania Entertainment companies established in 1978 Manufacturing companies established in 1978 Manufacturing companies disestablished in 2003 1978 establishments in Pennsylvania 2003 disestablishments in Pennsylvania Defunct companies based in Pennsylvania Defunct manufacturing companies based in Pennsylvania
WEA Manufacturing
[ "Technology" ]
723
[ "Multimedia", "Optical disc authoring", "Digital media" ]
7,236,456
https://en.wikipedia.org/wiki/Volume%20Logic
Volume Logic was commercial software which added audio enhancement features to media players. Originally released by Octiv Inc. in 2004, it was the first plug-in for Apple's iTunes for Mac and Windows. In April 2005, the Octiv corporation was acquired by Plantronics. Description Volume Logic was available for RealPlayer, Windows Media Player, Winamp and Musicmatch. It was designed to subjectively improve the listening experience by, for example, increasing the loudness of soft passages, controlling the loudness of loud passages without audible distortion, and emphasizing the loudness of bass separately. It corrected a problem with RealPlayer and the system's wave volume control: Volume Logic disabled RealPlayer's volume control and used its own. Presets stored settings for the amount of each kind of processing to be applied: automatic gain control, limiting, bass boost, etc. Presets could not be added. The Volume Logic plug-in incorporated multi-band dynamics processing technology, solving common audio problems such as speaker distortion and volume shifting. In late 2005, Volume Logic 1.3 was released. This new version was recognized in Softpedia, MacUpdate, and Brothersoft. After compatibility issues arose with Apple's Mac OS X v10.5, Plantronics ceased further development of Volume Logic, leaving Windows users with v1.4, which is compatible with iTunes 7. Leif Claesson, the inventor of the audio processing core technology utilized by Octiv and Volume Logic, in 2007 joined with Octiv co-founder Keith Edwards to form a partnership to sell follow-on technology called Breakaway. References External links Official Site Product Support Page Shareware Audio enhancement Multimedia
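The kind of processing described, automatic gain control followed by limiting, can be sketched in a few lines of Python with NumPy (illustrative only; Volume Logic's actual multi-band algorithm was proprietary, and every parameter value below is a hypothetical choice):

```python
import numpy as np

def simple_agc(samples, rate, target_level=0.2, attack=0.010, release=0.200):
    """One-band automatic gain control sketch (illustrative only).

    samples: float32 mono audio in [-1, 1]; rate: sample rate in Hz.
    A real multi-band processor would first split the signal into
    frequency bands and run dynamics like this per band.
    """
    # Per-sample smoothing coefficients derived from the time constants.
    a_att = np.exp(-1.0 / (attack * rate))
    a_rel = np.exp(-1.0 / (release * rate))

    env = 1e-9          # running amplitude-envelope follower
    out = np.empty_like(samples)
    for i, x in enumerate(samples):
        mag = abs(x)
        coeff = a_att if mag > env else a_rel
        env = coeff * env + (1.0 - coeff) * mag
        gain = min(target_level / max(env, 1e-9), 10.0)  # cap boost at +20 dB
        out[i] = np.clip(x * gain, -1.0, 1.0)            # hard limiter stage
    return out

# Quiet-then-loud test tone: the AGC levels both halves toward target_level.
t = np.linspace(0, 1, 44100, endpoint=False)
tone = np.sin(2 * np.pi * 440 * t) * np.where(t < 0.5, 0.05, 0.8)
print(simple_agc(tone.astype(np.float32), 44100)[:5])
```

The attack/release split is the standard dynamics-processing design choice: the gain reacts quickly to rising loudness (to prevent distortion) and relaxes slowly (to avoid audible pumping).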
Volume Logic
[ "Technology" ]
341
[ "Multimedia" ]
7,237,326
https://en.wikipedia.org/wiki/Free%20viewpoint%20television
Free viewpoint television (FTV) is a system for viewing natural video, allowing the user to interactively control the viewpoint and generate new views of a dynamic scene from any 3D position. The equivalent system for computer-simulated video is known as virtual reality. With FTV, the focus of attention can be controlled by the viewers rather than a director, meaning that each viewer may be observing a unique viewpoint. It remains to be seen how FTV will affect television watching as a group activity. History Systems for rendering arbitrary views of natural scenes have been well known in the computer vision community for a long time, but only in recent years have the speed and quality reached levels that are suitable for serious consideration as an end-user system. Professor Masayuki Tanimoto from Nagoya University (Japan) has done much to promote the use of the term "free viewpoint television" and has published many papers on the ray space representation, although other techniques can be, and are, used for FTV. QuickTime VR might be considered a predecessor to FTV. Capture and display In order to acquire the views necessary to allow a high-quality rendering of the scene from any angle, several cameras are placed around the scene, either in a studio environment or an outdoor venue, such as a sporting arena. The output Multiview Video (MVV) must then be packaged suitably so that the data may be compressed and also so that the users' viewing device may easily access the relevant views to interpolate new views. It is not enough to simply place cameras around the scene to be captured. The geometry of the camera setup must be measured by a process known in computer vision as "camera calibration." Manual alignment would be too cumbersome, so typically a "best effort" alignment is performed prior to capturing a test pattern that is used to generate calibration parameters. For large environments, restricted free viewpoint television views can be captured by a single camera system mounted on a moving platform. Depth data must also be captured, which is necessary to generate the free viewpoint. The Google Street View capture system is an example with limited functionality. The first full commercial implementation, iFlex, was delivered in 2009 by Real Time Race. Multiview video capture varies from partial (usually about 30 degrees) to complete (360 degrees) coverage of the scene. Therefore, it is possible to output stereoscopic views suitable for viewing with a 3D display or other 3D methods. Systems with more physical cameras can capture images with more coverage of the viewable scene; however, it is likely that certain regions will always be occluded from any viewpoint. A larger number of cameras should make it possible to obtain high quality output because less interpolation is needed. More cameras mean that efficient coding of the Multiview Video is required. This may not be such a big disadvantage, as there are representations that can remove the redundancy in MVV, such as inter-view coding using MPEG-4 or Multiview Video Coding, the ray space representation, geometry videos, etc. In terms of hardware, the user requires a viewing device that can decode MVV and synthesize new viewpoints, and a 2D or 3D display. Standardization In March 2009, the Moving Picture Experts Group (MPEG) standardized Multiview Video Coding as Annex H of MPEG-4 AVC, following the work of a group called '3DAV' (3D Audio and Visual) headed by Aljoscha Smolic at the Heinrich-Hertz Institute.
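In essence, calibration recovers each camera's projection parameters so that pixels can be related to rays in the 3D scene, which is what makes synthesizing new viewpoints possible. As a minimal illustration (added here, not part of the original article; the focal lengths and principal point are arbitrary assumed values), the ideal pinhole model projects a 3D point in camera coordinates onto the image plane like this:

// Minimal pinhole-camera sketch in Java. Real FTV calibration estimates these
// parameters (plus lens distortion and each camera's pose) from test-pattern
// images rather than assuming them.
public class Pinhole {
    // Intrinsics: focal lengths in pixels and principal point (assumed values).
    static final double FX = 800, FY = 800, CX = 320, CY = 240;

    // Project a 3D point (x, y, z) in camera coordinates to pixel coordinates.
    static double[] project(double x, double y, double z) {
        return new double[] { FX * x / z + CX, FY * y / z + CY };
    }

    public static void main(String[] args) {
        double[] p = project(0.5, 0.25, 2.0);
        System.out.printf("pixel = (%.1f, %.1f)%n", p[0], p[1]);
    }
}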
See also 3D reconstruction Rendering (computer graphics) Volumetric video References Bibliography External links Canon announced development of a Free Viewpoint TV system on 21 September 2017, to be showcased at Inter BEE 2017. iview is a British DTI project between BBC, Snell & Wilcox and University of Surrey to develop an FTV system. Eye Vision is a system developed by Professor Takeo Kanade at CMU for CBS's coverage of Super Bowl XXXV. The user is not able to change viewpoint but the camera operator is able to choose any virtual viewpoint by synthesizing images from an active vision system. created the first-ever live on-air 3D reconstruction during the London 2012 Olympic Games; their website now seems to point to Intel freeD 360 Replay Year of introduction missing Television technology Applications of computer vision 3D imaging Motion capture
Free viewpoint television
[ "Technology" ]
876
[ "Information and communications technology", "Television technology" ]
9,402,865
https://en.wikipedia.org/wiki/List%20of%20thermal%20conductivities
In heat transfer, the thermal conductivity of a substance, k, is an intensive property that indicates its ability to conduct heat. For most materials, the amount of heat conducted varies (usually non-linearly) with temperature. Thermal conductivity is often measured with laser flash analysis. Alternative measurements are also established. Mixtures may have variable thermal conductivities due to composition. Note that for gases in usual conditions, heat transfer by advection (caused by convection or turbulence for instance) is the dominant mechanism compared to conduction. This table shows thermal conductivity in SI units of watts per metre-kelvin (W·m−1·K−1). Some measurements use the imperial unit BTUs per foot per hour per degree Fahrenheit (1 BTU·h−1·ft−1·°F−1 ≈ 1.73 W·m−1·K−1). Sortable list This concerns materials at atmospheric pressure and around room temperature. Analytical list Thermal conductivities have been measured with longitudinal heat flow methods, where the experimental arrangement is designed to accommodate heat flow in only the axial direction, temperatures are constant, and radial heat loss is prevented or minimized. For the sake of simplicity, the conductivities that are found by that method in all of its variations are noted as L conductivities, those that are found by radial heat flow measurements are noted as R conductivities, and those that are found from periodic or transient heat flow are distinguished as P conductivities. Numerous variations of all of the above and various other methods have been discussed by G. K. White, M. J. Laubits, D. R. Flynn, B. O. Peirce, R. W. Wilson and various other researchers who are noted in an international Data Series from Purdue University, Volume I pages 14a–38a. This concerns materials at various temperatures and pressures. See also Laser flash analysis List of insulation materials R-value (insulation) Thermal transmittance Specific heat capacity Thermal conductivity Thermal conductivities of the elements (data page) Thermal diffusivity Thermodynamics References Bibliography External links Heat Conduction Calculator Thermal Conductivity Online Converter - An online thermal conductivity calculator Thermal Conductivities of Solders Thermal conductivity of air as a function of temperature can be found at James Ierardi's Fire Protection Engineering Site Non-Metallic Solids: The thermal conductivities of non-metallic solids are found in about 1286 pages in the TPRC Data Series volume 2 at the PDF link here (Identifier ADA951936): http://www.dtic.mil/docs/citations/ADA951936 with full text link https://apps.dtic.mil/dtic/tr/fulltext/u2/a951936.pdf retrieved February 2, 2019 at 10:15 PM EST. Gases and Liquids: The thermal conductivities of gases and liquids are found in the TPRC Data Series volume 3 at the PDF link here (Identifier ADA951937): http://www.dtic.mil/docs/citations/ADA951937 with full text link https://apps.dtic.mil/dtic/tr/fulltext/u2/a951937.pdf retrieved February 2, 2019 at 10:19 PM EST. Metals and Alloys: The thermal conductivities of metals are found in about 1595 pages in the TPRC Data Series volume 1 at the PDF link here: http://www.dtic.mil/docs/citations/ADA951935 with full text link https://apps.dtic.mil/dtic/tr/fulltext/u2/a951935.pdf retrieved February 2, 2019 at 10:20 PM EST.
Specific Heat and Thermal Radiation: Primary sources are found in the TPRC data series volumes 4 — 9, links: https://apps.dtic.mil/dtic/tr/fulltext/u2/a951938.pdf, https://apps.dtic.mil/dtic/tr/fulltext/u2/a951939.pdf, https://apps.dtic.mil/dtic/tr/fulltext/u2/a951940.pdf, https://apps.dtic.mil/dtic/tr/fulltext/u2/a951941.pdf, https://apps.dtic.mil/dtic/tr/fulltext/u2/a951942.pdf and https://apps.dtic.mil/dtic/tr/fulltext/u2/a951943.pdf retrieved at various times February 2 and 3, 2019. Vacuums: Vacuums and various levels of vacuums and the thermal conductivities of air at reduced pressures are known at http://www.electronics-cooling.com/2002/11/the-thermal-conductivity-of-air-at-reduced-pressures-and-length-scales/ retrieved February 2, 2019 at 10:44 PM EST. Chemical properties Physical quantities Heat conduction Technology-related lists Heat transfer Thermodynamics
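For context on how tabulated k values are used (a standard relation stated here for convenience, not part of the original list article): thermal conductivity is the proportionality constant in Fourier's law of heat conduction,

\[ \mathbf{q} = -k \, \nabla T \]

where \(\mathbf{q}\) is the heat flux density in W·m−2, T is the temperature, and the minus sign expresses that heat flows down the temperature gradient. For a flat slab of face area A, thickness d, and temperature difference \(\Delta T\) across it, this reduces to a conducted power of \(P = k A \Delta T / d\).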
List of thermal conductivities
[ "Physics", "Chemistry", "Mathematics" ]
1,079
[ "Transport phenomena", "Physical phenomena", "Heat transfer", "Physical quantities", "Quantity", "Thermodynamics", "nan", "Heat conduction", "Physical properties", "Dynamical systems" ]
9,403,144
https://en.wikipedia.org/wiki/Tricorn%20%28mathematics%29
In mathematics, the tricorn, sometimes called the Mandelbar set, is a fractal defined in a similar way to the Mandelbrot set, but using the mapping $z \mapsto \bar{z}^2 + c$ instead of the $z \mapsto z^2 + c$ used for the Mandelbrot set. It was introduced by W. D. Crowe, R. Hasson, P. J. Rippon, and P. E. D. Strain-Clark. John Milnor found tricorn-like sets as a prototypical configuration in the parameter space of real cubic polynomials, and in various other families of rational maps. The characteristic three-cornered shape created by this fractal repeats with variations at different scales, showing the same sort of self-similarity as the Mandelbrot set. In addition to smaller tricorns, smaller versions of the Mandelbrot set are also contained within the tricorn fractal. Formal definition The tricorn is defined by a family of quadratic antiholomorphic polynomials given by $f_c(z) = \bar{z}^2 + c$, where $c$ is a complex parameter. For each $c$, one looks at the forward orbit of the critical point $0$ of the antiholomorphic polynomial $f_c$. In analogy with the Mandelbrot set, the tricorn is defined as the set of all parameters $c$ for which the forward orbit of the critical point is bounded. This is equivalent to saying that the tricorn is the connectedness locus of the family of quadratic antiholomorphic polynomials; i.e. the set of all parameters $c$ for which the Julia set is connected. The higher degree analogues of the tricorn are known as the multicorns. These are the connectedness loci of the family of antiholomorphic polynomials $f_c(z) = \bar{z}^d + c$. Basic properties The tricorn is compact and connected. In fact, Nakane modified Douady and Hubbard's proof of the connectedness of the Mandelbrot set to construct a dynamically defined real-analytic diffeomorphism from the exterior of the tricorn onto the exterior of the closed unit disc in the complex plane. One can define external parameter rays of the tricorn as the inverse images of radial lines under this diffeomorphism. Every hyperbolic component of the tricorn is simply connected. The boundary of every hyperbolic component of odd period of the tricorn contains real-analytic arcs consisting of quasi-conformally equivalent but conformally distinct parabolic parameters. Such an arc is called a parabolic arc of the tricorn. This is in stark contrast with the corresponding situation for the Mandelbrot set, where parabolic parameters of a given period are known to be isolated. The boundary of every odd period hyperbolic component consists only of parabolic parameters. More precisely, the boundary of every hyperbolic component of odd period of the tricorn is a simple closed curve consisting of exactly three parabolic cusp points as well as three parabolic arcs, each connecting two parabolic cusps. Every parabolic arc of period k has, at both ends, an interval of positive length across which bifurcation from a hyperbolic component of odd period k to a hyperbolic component of period 2k occurs. Image gallery of various zooms Much like the Mandelbrot set, the tricorn has many complex and intricate designs. Due to their similarity, they share many features. However, in the tricorn such features appear to be squeezed and stretched along its boundary. The following images are progressional zooms on a selected parameter value. The images are not stretched or altered; that is how they look under magnification. Implementation The below pseudocode implementation hardcodes the complex operations for z.
Consider implementing complex number operations to allow for more dynamic and reusable code.

For each pixel (x, y) on the screen, do:
{
    x = scaled x coordinate of pixel (scaled to lie in the Mandelbrot X scale (-2.5, 1))
    y = scaled y coordinate of pixel (scaled to lie in the Mandelbrot Y scale (-1, 1))

    zx = x; // zx represents the real part of z
    zy = y; // zy represents the imaginary part of z

    iteration = 0
    max_iteration = 1000

    while (zx*zx + zy*zy < 4 AND iteration < max_iteration)
    {
        xtemp = zx*zx - zy*zy + x
        zy = -2*zx*zy + y
        zx = xtemp
        iteration = iteration + 1
    }

    if (iteration == max_iteration) // Belongs to the set
        return insideColor;

    return iteration * color;
}

Further topological properties The tricorn is not path connected. Hubbard and Schleicher showed that there are hyperbolic components of odd period of the tricorn that cannot be connected to the hyperbolic component of period one by paths. A stronger statement to the effect that no two (non-real) odd period hyperbolic components of the tricorn can be connected by a path was proved by Inou and Mukherjee. It is well known that every rational parameter ray of the Mandelbrot set lands at a single parameter. On the other hand, the rational parameter rays at odd-periodic (except period one) angles of the tricorn accumulate on arcs of positive length consisting of parabolic parameters. Moreover, unlike the Mandelbrot set, the dynamically natural straightening map from a baby tricorn to the original tricorn is discontinuous at infinitely many parameters. References Fractals
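As a runnable counterpart to the pseudocode above, here is a minimal, self-contained Java sketch (added for illustration, not part of the original article; the window bounds, grid size and iteration cap are arbitrary choices). It prints a coarse ASCII rendering of the tricorn:

// Renders the tricorn by iterating z -> conj(z)^2 + c for each grid cell.
public class TricornAscii {
    public static void main(String[] args) {
        int width = 80, height = 40, maxIter = 100;
        for (int row = 0; row < height; row++) {
            StringBuilder line = new StringBuilder();
            for (int col = 0; col < width; col++) {
                // Map the cell to the parameter plane, roughly [-2.5, 1.5] x [-2, 2].
                double cx = -2.5 + 4.0 * col / width;
                double cy = -2.0 + 4.0 * row / height;
                double zx = 0, zy = 0; // start the orbit at the critical point 0
                int iter = 0;
                while (zx * zx + zy * zy < 4 && iter < maxIter) {
                    // Squaring conj(z) negates the cross term relative to the Mandelbrot map.
                    double xtemp = zx * zx - zy * zy + cx;
                    zy = -2 * zx * zy + cy;
                    zx = xtemp;
                    iter++;
                }
                line.append(iter == maxIter ? '#' : ' ');
            }
            System.out.println(line);
        }
    }
}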
Tricorn (mathematics)
[ "Mathematics" ]
1,138
[ "Mathematical analysis", "Functions and mappings", "Mathematical objects", "Fractals", "Mathematical relations" ]
9,403,552
https://en.wikipedia.org/wiki/Situated%20robotics
In artificial intelligence and cognitive science, the term situated refers to an agent which is embedded in an environment. In this usage, the term most commonly refers to robots, but some researchers argue that software agents can also be situated if: they exist in a dynamic (rapidly changing) environment, which they can manipulate or change through their actions, and which they can sense or perceive. Being situated is generally considered to be part of being embodied, but it is useful to take both perspectives. The situated perspective emphasizes the environment and the agent's interactions with it. These interactions define an agent's embodiment. See also Robot general heading Cognitive agents Scruffies - people who tend to worry about whether their agent is situated. References Hendriks-Jansen, Horst (1996) Catching Ourselves in the Act: Situated Activity, Interactive Emergence, Evolution, and Human Thought. Cambridge, Mass.: MIT Press. Robotics
Situated robotics
[ "Engineering" ]
187
[ "Robotics", "Automation" ]
9,403,814
https://en.wikipedia.org/wiki/Self-testing%20code
Self-testing code is software that incorporates built-in tests (see test-first development). In Java, to execute a unit test from the command line, a class can have methods like the following (assertions must be enabled, e.g. by running java with the -ea flag):

// Executing main runs the unit test.
public static void main(String[] args) {
    test();
}

static void test() {
    assert foo == bar; // foo and bar stand for the values under test
}

To invoke a full system test, a class can incorporate a method call:

public static void main(String[] args) {
    test();
    TestSuite.test(); // invokes full system test
}

In addition, Java's JUnit 5 framework provides the Jupiter API for self-testing code. Its assertions can be used in various ways; for example, assertEquals checks whether a given value is equal to an expected value:

@Test
void checkplayer() {
    Board board = new Board(10);
    board.addplayer(1);
    int check = board.getCurrentPlayer(1);
    assertEquals(1, check);
}

See also Software development Extreme programming References Articles with example Java code Unit testing
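To make the pattern concrete, the following is a minimal, self-contained sketch (not from the original article; the Counter class and all names in it are invented for illustration). Compile it and run it with java -ea Counter so the assertion is active:

// A class that carries its own unit test.
public class Counter {
    private int value = 0;

    public void increment() { value++; }

    public int get() { return value; }

    // Built-in test: exercises the class and checks the result.
    static void test() {
        Counter c = new Counter();
        c.increment();
        c.increment();
        assert c.get() == 2 : "expected 2, got " + c.get();
        System.out.println("self-test passed");
    }

    // Executing main runs the unit test, mirroring the convention above.
    public static void main(String[] args) {
        test();
    }
}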
Self-testing code
[ "Engineering" ]
247
[ "Software engineering", "Software engineering stubs" ]
9,405,895
https://en.wikipedia.org/wiki/Virgin%20Earth%20Challenge
The Virgin Earth Challenge was a competition offering a $25 million prize for whoever could demonstrate a commercially viable design resulting in the permanent removal of greenhouse gases from the Earth's atmosphere, contributing materially to the avoidance of global warming. The prize was conceived by Richard Branson, and was announced in London on 9 February 2007 by Branson and former US Vice President Al Gore. Among more than 2600 applications, 11 finalists were announced on 2 November 2011. These were Biochar Solutions, from the US; Biorecro, Sweden; Black Carbon, Denmark; Carbon Engineering, Canada; Climeworks, Switzerland; COAWAY, US; Full Circle Biochar, US; Global Thermostat, US; Kilimanjaro Energy, US; Smartstones – Olivine Foundation, Netherlands, and The Savory Institute, US. The prize was never awarded. In 2019, Virgin took the prize website offline after having kept the 11 finalists in suspense for eight years. Al Gore had withdrawn from the jury earlier and commented that he was not part of the decision to discontinue the contest. The challenge The Prize was to be awarded to "a commercially viable design which, achieves or appears capable of achieving the net removal of significant volumes of anthropogenic, atmospheric GHGs each year for at least 10 years", with significant volumes specified as "should be scalable to a significant size in order to meet the informal removal target of 1 billion tonnes of carbon-equivalent per year". One tonne of carbon-equivalent (C) equals 3.67 tonnes of carbon dioxide (CO2), because of the ratio of their molecular and atomic weights, more precisely 44/12. At present, fossil fuel emissions are around 6.3 gigatons of carbon. The prize would initially only be open for five years, with ideas assessed by a panel of judges including Richard Branson, Al Gore and Crispin Tickell (British diplomat), as well as scientists James E. Hansen, James Lovelock and Tim Flannery. The prize term was extended until 2019. Around two hundred billion metric tons of carbon dioxide have accumulated in the atmosphere since the beginning of the Industrial Revolution, raising concentrations by more than 100 parts per million (ppm), from 280 to more than 380 ppm. The Virgin Earth Challenge was intended to inspire inventors to find ways of bringing that back down again to avoid the dangerous levels of global warming and sea level rise predicted by organisations such as the Intergovernmental Panel on Climate Change. The Virgin Earth Challenge was similar in concept to other high technology competitions, such as the Orteig Prize for flying across the Atlantic, and the Ansari X Prize for spaceflight. Competing technologies The eleven finalists represent five competing technologies, some being represented by multiple finalists. Biochar Biochar, created by pyrolysis of biomass. Pyrolysis is a process in which biomass is heated in an oxygen-limited environment so that it decomposes rather than burns, producing a char rich in carbon. This char can be distributed in soils as a soil amendment. Finalists competing with biochar designs: Biochar Solutions, US Black Carbon, Denmark Full Circle Biochar, US BECCS (Bio-energy with carbon capture and storage) Bio-energy with carbon capture and storage (BECCS) combines combustion or processing of biomass with geologic carbon capture and storage. BECCS is applied to industries such as electrical power, combined heat and power, pulp and paper, ethanol production, and biogas production.
As of January 2012, total operating BECCS capacity was 550,000 tonnes per year, divided between three different facilities. BECCS was identified in the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC) as a key technology for reaching low carbon dioxide atmospheric concentration targets. The negative emissions that can be produced by BECCS have been estimated by the Royal Society to be equivalent to a 50 to 150 ppm decrease in global atmospheric carbon dioxide concentrations, and according to the International Energy Agency, the BLUE map climate change mitigation scenario calls for more than 2 gigatonnes of negative CO2 emissions per year with BECCS in 2050. According to the OECD, "Achieving lower concentration targets (450 ppm) depends significantly on the use of BECCS". The sustainable technical potential for net negative emissions with BECCS has been estimated at 10 Gt of CO2 equivalent annually, with an economic potential of up to 3.5 Gt of CO2 equivalent annually at a cost of less than 50 €/tonne, and up to 3.9 Gt of CO2 equivalent annually at a cost of less than 100 €/tonne. Imperial College London, the UK Met Office Hadley Centre for Climate Prediction and Research, the Tyndall Centre for Climate Change Research, the Walker Institute for Climate System Research, and the Grantham Institute for Climate Change issued a joint report on carbon dioxide removal technologies as part of the AVOID: Avoiding dangerous climate change research program, stating that "Overall, of the technologies studied in this report, BECCS has the greatest maturity and there are no major practical barriers to its introduction into today's energy system. The presence of a primary product will support early deployment." Finalist competing with BECCS design: Biorecro, Sweden Direct air capture Direct Air Capture is the process of capturing carbon dioxide directly from ambient air using solvents, filters or other methods. Subsequent to being captured, the carbon dioxide would be stored with carbon capture and storage technologies to keep it permanently out of the atmosphere. Finalists competing with direct air capture designs: Carbon Engineering, Canada Climeworks, Switzerland Coaway, US Global Thermostat, US Kilimanjaro Energy, US Enhanced weathering Enhanced weathering refers to a chemical approach to in-situ carbonation of silicates, where carbon dioxide is combined through natural weathering processes with mined minerals, such as olivine. The idea was based on the work of Dutch geoscientist Olaf Schuiling, whose ideas continue to be explored in the Netherlands, with promising results. Finalist competing with enhanced weathering design: Smartstones – Olivine Foundation, The Netherlands Grassland restoration Changed management methods for grasslands can significantly increase the uptake of carbon dioxide into the soil, creating a carbon sink. This and other land-use change methods are not generally considered among negative emission technologies because of uncertain long-term sequestration permanence. Finalist competing with a grassland restoration design: The Savory Institute, US, which, after receiving criticism from many respected scientists, was removed from the finalist list and no longer appears on the Virgin Earth Challenge website. Discontinuance The finalists who were announced in 2011 were kept in suspense for eight years, with many additional requests for information and data, as contestant Global Thermostat reported.
Another contestant, Carbon Engineering, received notification in 2019 that they fulfilled all technical criteria and were selected for the final judgment. Subsequently, they were informed that the prize was "indefinitely put on hold". At the end of 2019, the prize was discontinued and the website taken offline. Carbon Engineering was informed by the Virgin Earth Challenge that "the market conditions necessary to support commercial and sustainable investment in the relevant carbon removal techniques were not foreseeable". Nevertheless, Carbon Engineering had raised 95 million dollars in investments from other parties, including Bill Gates. Graciela Chichilnisky of Global Thermostat, another direct air capture finalist, who had raised 60 million dollars in investments from other parties, expressed strong criticism in the Dutch daily Volkskrant: "If you want to encourage scientific progress with a prize, it's not enough to open your mouth and say "25 million dollars." None of the 11 finalists received any funding or concrete help from Virgin during the 13 years of assessment. Similar competitions Since the Virgin Earth Challenge, two new multimillion-dollar climate technology contests have been announced. In 2015, NRG COSIA Carbon XPRIZE was launched. It awards $20 million to "breakthrough technologies to convert emissions into usable products". The prize focuses on commercial exploitation of the carbon capture process. The prize will be awarded in the winter of 2021. In 2021, Elon Musk of Tesla Inc announced a $100 million prize for development of the best technology to capture carbon dioxide emissions. See also Bio-energy with carbon capture and storage (BECCS) Biochar Carbon dioxide removal Carbon negative Carbon sequestration Carbon War Room (also established by Richard Branson) Climate engineering (also called geoengineering and climate intervention) Enhanced weathering Negative carbon dioxide emission List of environmental awards References External links Official website The International Biochar Initiative Greenhouse gas emissions Environmental awards Challenge awards
Virgin Earth Challenge
[ "Chemistry" ]
1,781
[ "Greenhouse gases", "Greenhouse gas emissions" ]
9,406,595
https://en.wikipedia.org/wiki/Reach-in%20oven
Reach-in ovens are intended for industrial applications that require uniform temperature throughout. They normally use horizontally recirculating air, driven by fans, to maintain that uniformity. Reach-in ovens can be used in numerous production and laboratory applications, including curing, drying, sterilizing, aging, and other process-critical applications. Reach-in ovens are considered a type of industrial batch oven. Other types of batch ovens are bench/laboratory, burn-in, walk-in/truck-in, and clean process. External links National Fire Protection Association Industrial ovens
Reach-in oven
[ "Engineering" ]
134
[ "Industrial ovens", "Industrial machinery" ]
9,406,780
https://en.wikipedia.org/wiki/World%20Programming%20System
The World Programming System, also known as WPS Analytics or WPS, is a software product developed by a company called World Programming (acquired by Altair Engineering). WPS Analytics supports users of mixed ability to access and process data and to perform data science tasks. It has interactive visual programming tools using data workflows, and it has coding tools supporting the use of the SAS language mixed with Python, R and SQL. About WPS can use programs written in the language of SAS without the need for translating them into any other language. In this regard WPS is compatible with the SAS system. WPS has a built-in language interpreter able to process the language of SAS and produce similar results. WPS is available to run on z/OS, Windows, macOS, Linux (x86, Armv8 64-bit, IBM Power LE, IBM Z), and AIX. On all supported platforms, programs written in the language of SAS can be executed from a WPS command line interface, often referred to as running in batch mode. WPS can also be used from a graphical user interface known as the WPS Workbench for managing, editing and running programs written in the language of SAS. The WPS Workbench user interface is based on Eclipse. WPS version 4 (released in March 2018) introduced a drag-and-drop workflow canvas providing interactive blocks for data retrieval, blending and preparation, data discovery and profiling, predictive modelling powered by machine learning algorithms, model performance validation and scorecards. WPS version 3 (released in February 2012) provided a new client/server architecture that allows the WPS Workbench GUI to execute SAS programs on remote server installations of WPS in a network or cloud. The resulting output, data sets, logs, etc., can then all be viewed and manipulated from inside the Workbench as if the workloads had been executed locally. SAS programs do not require any special language statements to use this feature. Summary of main features Runs on Windows, macOS, z/OS, Linux (x86, Armv8 64-bit, IBM Power LE, IBM Z), and AIX An integrated development environment based on Eclipse for Linux, macOS and Windows. Support for language of SAS elements. Support for the language of SAS Macros. Matrix Programming support using PROC IML. Support for generating band plots, bar charts, box plots, bubble plots, contour plots, dendrogram plots, ellipse plots, fringe plots, heat maps, high-low plots, histograms, loess plots, needle plots, pie charts, penalised b-spline, radar charts, reference lines, scatter plots, series plots, step plots, regression plots and vector plots. Support for statistical procedures ACECLUS, ASSOCRULES, ANOVA, BIN, BOXPLOT, CANCORR, CANDISC, CLUSTER, CORRESP, DISCRIM, DISTANCE, FACTOR, FASTCLUS, FREQ, GAM, GANNO, GENMOD, GLIMMIX, GLM, GLMMOD, GLMSELECT, ICLIFETEST, KDE, LIFEREG, LIFETEST, LOESS, LOGISTIC, MDS, MEANS, MI, MIANALYSE, MIXED, MODECLUS, NESTED, NLIN, NPAR1WAY, PHREG, PLAN, PLS, POWER, PRINCOMP, PROBIT, QUANTREG, RBF, REG, ROBUSTREG, RSREG, SCORE, SEGMENT, SIMNORMAL, STANDARD, STDSIZE, STDRATE, STEPDISC, SUMMARY, SURVEYMEANS, SURVEYSELECT, TPSPLINE, TRANSREG, TREE, TTEST, UNIVARIATE, VARCLUS, VARCOMP Support for time series procedures ARIMA, AUTOREG, ESM, EXPAND, FORECAST, LOAN, SEVERITY, SPECTRA, TIMESERIES, X12 Support for machine learning procedures DECISIONFOREST, DECISIONTREE, GMM, MLP, OPTIMALBIN, SEGMENT, SVM Support for ODS. Reads and writes SAS datasets (compressed or uncompressed). 
Access: Actian Matrix (previously known as ParAccel), DASD, DB2, Excel, Greenplum, Hadoop, Informix, Kognitio, MariaDB, MySQL, Netezza, ODBC, OLEDB, Oracle, PostgreSQL, SAND, Snowflake, SPSS/PSPP, SQL Server, Sybase, Sybase IQ, Teradata, VSAM, Vertica and XML. Support for SAS Tape Format. Direct output of reports to CSV, PDF and HTML. Support for connecting WPS systems programmatically, remote-submitting parts of a program to execute on connected remote servers, and uploading and downloading data between the connected systems. Support for Hadoop Support for R Support for Python Industry recognition Gartner recognized World Programming in its Cool Vendors in Data Science, 2014 report. Lawsuit In 2010 World Programming defended its use of the language of SAS in the High Court of England and Wales in SAS Institute Inc. v World Programming Ltd. The software was the subject of a lawsuit by SAS Institute. The EU Court of Justice ruled in favor of World Programming, stating that copyright protection does not extend to software functionality, the programming language used, or the format of the data files used by a program. It stated that there is no copyright infringement when a company which does not have access to the source code of a program studies, observes and tests that program to create another program with the same functionality. References External links World Programming web site Statistical software Data mining and machine learning software Extract, transform, load tools Business intelligence software Data analysis software Data warehousing Proprietary commercial software for Linux
World Programming System
[ "Mathematics" ]
1,199
[ "Statistical software", "Mathematical software" ]
9,407,051
https://en.wikipedia.org/wiki/Batch%20oven
Batch ovens are a type of furnace used for thermal processing. They are used in numerous production and laboratory applications, including curing, drying, sterilizing, aging, and other process-critical applications. Sizes can vary depending on the type of thermal processing application needed. Batch ovens are used mainly for single-batch thermal processing. Cabinet and bench ovens are smaller batch ovens, while walk-in/drive-in ovens are larger and used for a wider variety of industrial applications. Other types of batch ovens include laboratory, burn-in, reach-in, and clean process. Batch ovens can be used for a wide variety of heat processes including drying, curing, aging, annealing, stress relieving, bonding, tempering, preheating, and forming. Batch ovens are essentially heated boxes with insulated doors that process products one at a time or in groups. The part(s) to be processed are brought into the oven in batches on racks, carts, or trucks. Production requirements can accommodate manual or automated loading. Industrial furnaces Industrial ovens
Batch oven
[ "Chemistry", "Engineering" ]
225
[ "Metallurgical processes", "Industrial furnaces", "Industrial ovens", "Industrial machinery" ]
9,407,168
https://en.wikipedia.org/wiki/Cumberland%20Railway%20and%20Coal%20Company
The Cumberland Railway and Coal Company is a defunct Canadian industrial company with interests in coal mines in Springhill, Nova Scotia, and a railway that operated from Springhill Junction to Parrsboro. Springhill and Parrsboro Coal and Railway Company The General Mining Association (GMA) had been established in 1825 to develop mineral rights in Nova Scotia held by the Duke of York. The lease was abrogated in 1857 after the colonial government of Nova Scotia had released all mineral rights in the colony in 1849. In compensation for this loss of mineral rights, the GMA was permitted to retain certain assets in specific geographic areas. Among those rights was a 4 square mile (10 km2) property on a hill in central Cumberland County. The lack of transportation prevented mining development at Springhill until 1870 when the construction of the Intercolonial Railway between Truro and Moncton came through the area. This instigated several corporate moves for acquiring mineral rights in the Springhill Coal Field. Since the Intercolonial Railway's preferred route was the most direct east-west line possible, the Spring Hill and Parrsborough Coal and Railway Company (Limited) was incorporated in 1872 as a mining and railway company to link from a mine at Springhill south to the port of Parrsboro on the Bay of Fundy, from which coal could be shipped to destinations in southern Nova Scotia and along the eastern seaboard of North America. The same investors also created the Pugwash and Spring Hill Railway Company, which received a charter to build a line north to the Northumberland Strait port of Pugwash, from which coal could be shipped to northern Nova Scotia, Prince Edward Island, eastern New Brunswick and Quebec. Both railway lines were promised a subsidy that year by the provincial government for their construction. However, the investors were able to reduce the amount of new railway construction required in Cumberland County after they encouraged local politicians to persuade the Intercolonial Railway surveyors to route that railway's main line further south from the direct route between Oxford Junction and Amherst. Thus the line made a diversion of several miles to what came to be named Springhill Junction, where the Spring Hill and Parrsborough Railway would link to the new government-owned railway. The prospect of the railway connection with the Intercolonial saw the Spring Hill & Parrsborough Coal & Railway Company (Limited) lease several areas of Crown mineral rights outside the GMA holdings in the Springhill area to develop a coal mine. In 1874 the provincial government confirmed an attractive subsidy for constructing the railway: 10,000 acres (40 km2) and $5,000 per mile. In 1875, the company secured financing and began construction, with the railway line reaching Parrsboro two years later. The Spring Hill & Parrsborough Railway officially opened on July 1, 1877, and began shipping coal to the port; the first year saw 900 ships loaded in the port. The Pugwash & Spring Hill Railway was never constructed, as the Intercolonial Railway already provided connections to additional markets; in the 1880s the Intercolonial would build a spur to Pugwash off its Oxford Junction – Stellarton line. In 1878, the Springhill colliery had reached the boundary of the GMA holdings and in 1879 the provincial government revoked the GMA lease and transferred the mineral rights for the property to the Spring Hill and Parrsborough Coal and Railway Company (Limited).
Cumberland Railway and Coal Company Unfortunately, construction costs for the railway and expansion of the colliery had strained company finances. Revenues were insufficient to pay interest on company bonds, and bankruptcy was declared, with the company liquidated in 1883. The Cumberland Coal and Railway Company was incorporated in 1883 and changed its name to Cumberland Railway and Coal Company in 1884 when it purchased the assets of the Springhill and Parrsborough Coal and Railway Company (Limited). The new CR&C began mining on a much larger scale, opening the No. 1 and No. 2 collieries on the Springhill Coal Field. The company suffered a devastating loss on February 21, 1891, when a fire ignited accumulated coal dust in both collieries, killing 125 miners (see the 1891 Fire under Springhill mining disaster). Following the fire, coal production resumed on an ever-increasing scale in the Springhill Coal Field, fed by the railway boom across Canada and the economic protection afforded by the National Policy, which prevented a flood of cheap American coal into the country. DOMCO and DOSCO In 1910 the Dominion Coal Company Limited (DOMCO) absorbed the Cumberland Railway and Coal Company, maintaining the CR&C as a subsidiary. DOMCO was merged into the British Empire Steel Corporation (BESCO) in the early 1920s, which was later subsumed by the Dominion Steel and Coal Corporation (DOSCO) in 1930. In 1957 DOSCO was acquired by Avro Canada, which became Hawker Siddeley Canada in 1962. Under DOSCO ownership, the CR&C operated its Springhill mines as efficiently as possible; however, by the 1950s, demand for coal was softening as railways dieselized and alternative heating fuels were implemented. DOSCO made few capital investments in the Springhill mines as production was winding down, which is believed to have contributed to two mining tragedies in that decade. The 1956 Explosion was caused by a runaway mine tram on November 1, 1956, and killed 39 miners (see 1956 Explosion under Springhill mining disaster). The mines returned to production in January 1957; however, few improvements were made, other than what was necessary to begin mining again. Declining export markets for Springhill coal saw the CR&C decide to stop shipments through the port of Parrsboro in the summer of 1958. The last train operated to Parrsboro on June 14. That fall saw the final chapter in Springhill mining history. The 1958 Bump was caused by the use of "room and pillar" mining techniques up until the late 1930s, creating undue stress on the local geology. Despite using the newer "long wall retreating" method, a devastating bump on October 23, 1958 killed 74 miners when the collieries collapsed. Following the 1958 Bump, DOSCO never reopened the mine and abandoned all of its mining properties in the Springhill Coal Field, throwing thousands out of work and devastating the economy of central Cumberland County. The CR&C railway limped on for a few years after the closure of the coal mines. After June 14, 1958, the southern terminus of the railway was in Southampton, to serve blueberry packers there. Scheduled CR&C service was reduced to one daily round trip between Springhill and Springhill Junction. Traffic continued to decline, and permission to commence abandonment of the line was granted in February 1961. The last train ran in 1962, and the last of the tracks were lifted in 1964.
In 1961, DOSCO had the Cumberland Railway (which, like its predecessor the Spring Hill and Parrsborough Railway had a federal railway charter, thus qualifying it for federal railway subsidies) assume the operations of the Sydney and Louisburg Railway on Cape Breton Island. The reason for this change in title was that the S&L had been formed under a provincial charter in 1910, which made it ineligible for federal railway subsidies. Thus the Cumberland Railway name continued until 1968 when its property, along with DOSCO's coal mines, was expropriated by the Canadian federal government to form the Cape Breton Development Corporation (DEVCO). DEVCO in turn created the Devco Railway from the part of former S&L connecting Glace Bay and New Waterford to Sydney; the remaining lines of the former S&L Railway were abandoned. Even under Devco, for several years the company did business as the Sydney & Louisburg Division of the Cumberland Railway. In 1972, with H.S. Haslam as general manager, the company operated 39 miles of route with offices in Sydney. The road owned at that date 15 diesel locomotives and 1,100 freight cars. See also Robert Gilmour Leckie List of defunct Canadian railways Notes Coal companies of Canada Defunct Nova Scotia railways Mining railways Transport in Cumberland County, Nova Scotia Transport in the Cape Breton Regional Municipality Companies established in 1825 Energy companies established in 1825 Railway companies established in 1825 Energy companies established in 1884 Non-renewable resource companies established in 1884 Mining in Nova Scotia Coal in Canada
Cumberland Railway and Coal Company
[ "Engineering" ]
1,683
[ "Mining equipment", "Mining railways" ]
9,407,202
https://en.wikipedia.org/wiki/Dockominium
A dockominium is the water-based version of a condominium; rather than owning an apartment in a building, one owns a boat slip on the water. The term is a portmanteau of "dock" and "condominium." In addition to the exclusive right to use the boat slip, ownership also provides one with the right to use the common elements of the marina, much the same as one would have the right to use the common areas in a residential condominium development. Also, unit owners may use, rent, or sell their unit at any time, subject to association approval. Dockominium As in a condominium, a management company manages the common areas and provides all required services such as maintenance, security, insurance, bookkeeping, legal, and overall management and supervision of the dockominium facility. A monthly fee is charged to cover these expenses. Typically, water is included, while electricity, cable, and similar utilities are billed separately via the management association. Real estate taxes are separately assessed by the municipality and are the responsibility of the unit owner. Purpose A dockominium is created when a marina converts or sells individual slips to individual owners. Traditionally, marinas are in the business of renting or leasing space; a comparison would be the conversion of a rental apartment to a condominium. An association is created that monitors the maintenance and operation of the marina. Individual owners are responsible for paying their monthly, quarterly, or annual association dues and for paying their own property taxes assessed on the slip. Dockominium conversions are a popular trend in the marina industry in high-demand areas, particularly in luxury markets. Limits However, despite the advantages, whether dockominium sales are legal varies according to the laws of each area. Few marina owners also own the land under the water, and most have only an easement to the property. Individual unit sales may therefore violate the public trust doctrine, the legal concept that public trust lands, waters, and living resources in a state are held by the State in trust for the benefit of all of the people. See also Condominium Marina Real estate Coastal construction Condominium
Dockominium
[ "Engineering" ]
439
[ "Construction", "Coastal construction" ]
9,407,911
https://en.wikipedia.org/wiki/Symbols%20of%20Alberta
Alberta is one of Canada's provinces, and has established several official emblems that reflect the province's history, its natural and diverse landscapes, and its people. Official symbols of Alberta De facto symbols While not officially adopted through legislation as emblems by the government of Alberta, these places and things are popularly associated with (hence could be considered symbols of) the province. See also List of Canadian provincial and territorial symbols Canadian royal symbols References Alberta Symbols Canadian provincial and territorial symbols
Symbols of Alberta
[ "Mathematics" ]
97
[ "Symbols", "Lists of symbols" ]
9,407,981
https://en.wikipedia.org/wiki/Symbols%20of%20British%20Columbia
British Columbia is Canada's westernmost province, and has established several provincial symbols. Official symbols Other symbols References British Columbia Symbols Canadian provincial and territorial symbols
Symbols of British Columbia
[ "Mathematics" ]
32
[ "Symbols", "Lists of symbols" ]
9,408,017
https://en.wikipedia.org/wiki/Symbols%20of%20Manitoba
There are several symbols of Manitoba, one of the ten provinces of Canada. These symbols are designated by The Coat of Arms, Emblems and the Manitoba Tartan Act, which came into force on Feb 1, 1988. Symbols References Manitoba Symbols Canadian provincial and territorial symbols
Symbols of Manitoba
[ "Mathematics" ]
55
[ "Symbols", "Lists of symbols" ]
9,408,074
https://en.wikipedia.org/wiki/Symbols%20of%20Newfoundland%20and%20Labrador
Newfoundland and Labrador is one of Canada's provinces, and has established several official symbols. Labrador, the mainland portion of the province, has its own distinct cultural identity and has established several unofficial symbols for itself. Official symbols of Newfoundland and Labrador Unofficial symbols of Labrador References Newfoundland Symbols Canadian provincial and territorial symbols
Symbols of Newfoundland and Labrador
[ "Mathematics" ]
62
[ "Symbols", "Lists of symbols" ]
9,408,111
https://en.wikipedia.org/wiki/Symbols%20of%20New%20Brunswick
New Brunswick is one of Canada's provinces, and has established several provincial symbols. Official symbols References New Brunswick Symbols Canadian provincial and territorial symbols
Symbols of New Brunswick
[ "Mathematics" ]
30
[ "Symbols", "Lists of symbols" ]
9,408,138
https://en.wikipedia.org/wiki/Symbols%20of%20Nova%20Scotia
Nova Scotia is one of Canada's provinces, and has established several provincial symbols. Symbols References Nova Scotia Symbols Canadian provincial and territorial symbols
Symbols of Nova Scotia
[ "Mathematics" ]
29
[ "Symbols", "Lists of symbols" ]