Dataset columns:
id: int64 (values 39 to 79M)
url: string (lengths 31 to 227)
text: string (lengths 6 to 334k)
source: string (lengths 1 to 150)
categories: list (lengths 1 to 6)
token_count: int64 (values 3 to 71.8k)
subcategories: list (lengths 0 to 30)
357,452
https://en.wikipedia.org/wiki/Induced%20representation
In group theory, the induced representation is a representation of a group, G, which is constructed using a known representation of a subgroup H. Given a representation of H, the induced representation is, in a sense, the "most general" representation of G that extends the given one. Since it is often easier to find representations of the smaller group H than of G, the operation of forming induced representations is an important tool to construct new representations. Induced representations were initially defined by Frobenius, for linear representations of finite groups. The idea is by no means limited to the case of finite groups, but the theory in that case is particularly well-behaved.

Constructions

Algebraic

Let G be a finite group and H any subgroup of G. Furthermore let (π, V) be a representation of H. Let n = [G : H] be the index of H in G and let g1, ..., gn be a full set of representatives in G of the left cosets in G/H. The induced representation Ind π can be thought of as acting on the space W = g1V ⊕ ... ⊕ gnV. Here each giV is an isomorphic copy of the vector space V whose elements are written as giv with v ∈ V. For each g in G and each gi there is an hi in H and a j(i) in {1, ..., n} such that g gi = gj(i) hi. (This is just another way of saying that g1, ..., gn is a full set of representatives.) Via the induced representation G acts on W as follows: g · Σi givi = Σi gj(i) π(hi)vi, where vi ∈ V for each i.

Alternatively, one can construct induced representations by extension of scalars: any K-linear representation of the group H can be viewed as a module V over the group ring K[H]. We can then define Ind V = K[G] ⊗K[H] V. This latter formula can also be used to define Ind V for any group G and subgroup H, without requiring any finiteness.

Examples

For any group, the induced representation of the trivial representation of the trivial subgroup is the right regular representation. More generally the induced representation of the trivial representation of any subgroup is the permutation representation on the cosets of that subgroup. An induced representation of a one dimensional representation is called a monomial representation, because it can be represented as monomial matrices. Some groups have the property that all of their irreducible representations are monomial, the so-called monomial groups.

Properties

If H is a subgroup of the group G, then every K-linear representation ρ of G can be viewed as a K-linear representation of H; this is known as the restriction of ρ to H and denoted by Res ρ. In the case of finite groups and finite-dimensional representations, the Frobenius reciprocity theorem states that, given representations σ of H and ρ of G, the space of H-equivariant linear maps from σ to Res ρ has the same dimension over K as that of G-equivariant linear maps from Ind σ to ρ.

The universal property of the induced representation, which is also valid for infinite groups, is equivalent to the adjunction asserted in the reciprocity theorem. If (σ, V) is a representation of H and Ind σ is the representation of G induced by σ, then there exists an H-equivariant linear map j : V → Ind V with the following property: given any representation (ρ, W) of G and any H-equivariant linear map f : V → W, there is a unique G-equivariant linear map F : Ind V → W with F ∘ j = f. In other words, F is the unique map making the corresponding diagram commute.

The Frobenius formula states that if χ is the character of the representation σ, given by χ(h) = Tr σ(h), then the character ψ of the induced representation is given by ψ(g) = Σi χ̂(gi⁻¹ g gi), where the sum is taken over a system of representatives g1, ..., gn of the left cosets of H in G, and χ̂(k) equals χ(k) if k lies in H and 0 otherwise.

Analytic

If G is a locally compact topological group (possibly infinite) and H is a closed subgroup then there is a common analytic construction of the induced representation.
Let π be a continuous unitary representation of H into a Hilbert space V. We can then let the induced representation act on the space of functions φ : G → V that satisfy φ(xh) = π(h)⁻¹φ(x) for all h in H and whose norms are square-integrable over G/H. Here square-integrable means: the space G/H carries a suitable invariant measure, and since the norm of φ(x) is constant on each left coset of H, we can integrate the square of these norms over G/H and obtain a finite result. The group G acts on the induced representation space by translation, that is, (g·φ)(x) = φ(g⁻¹x) for g, x ∈ G and φ in the representation space. This construction is often modified in various ways to fit the applications needed. A common version is called normalized induction and usually uses the same notation. The definition of the representation space is as before, except that the condition on the functions carries a normalizing factor built from ΔG and ΔH, the modular functions of G and H respectively. With the addition of the normalizing factors this induction functor takes unitary representations to unitary representations. One other variation on induction is called compact induction. This is just standard induction restricted to functions with compact support. Formally it is denoted by ind and defined by the same formulas, but only for functions of compact support modulo H. Note that if G/H is compact then Ind and ind are the same functor.

Geometric

Suppose G is a topological group and H is a closed subgroup of G. Also, suppose π is a representation of H over the vector space V. Then G acts on the product G × V as follows: g′ · (g, x) = (g′g, x), where g and g′ are elements of G and x is an element of V. Define on G × V the equivalence relation (g, x) ~ (gh, π(h)⁻¹x) for all h in H. Denote the equivalence class of (g, x) by [g, x]. Note that this equivalence relation is invariant under the action of G; consequently, G acts on (G × V)/~. The latter is a vector bundle over the quotient space G/H with H as the structure group and V as the fiber. Let W be the space of sections φ : G/H → (G × V)/~ of this vector bundle. This is the vector space underlying the induced representation Ind π. The group G acts on a section φ as follows: (g′ · φ)(gH) = g′ · φ(g′⁻¹gH).

Systems of imprimitivity

In the case of unitary representations of locally compact groups, the induction construction can be formulated in terms of systems of imprimitivity.

Lie theory

In Lie theory, an extremely important example is parabolic induction: inducing representations of a reductive group from representations of its parabolic subgroups. This leads, via the philosophy of cusp forms, to the Langlands program.

See also Restricted representation Nonlinear realization Frobenius character formula Notes References Representation theory of groups Group theory
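Because the displayed formulas in this article were lost in extraction, the following is a compact restatement of the key ones in standard notation (a sketch; the symbols G, H, π, V, gi, hi follow the conventions used above and are not necessarily the article's original typography):

\[
\operatorname{Ind}_H^G V \;=\; K[G] \otimes_{K[H]} V \;\cong\; \bigoplus_{i=1}^{n} g_i V, \qquad n = [G:H],
\]
\[
g \cdot (g_i v) \;=\; g_{j(i)}\,\pi(h_i)v \quad\text{where } g\,g_i = g_{j(i)} h_i,\ h_i \in H,
\]
\[
\psi(g) \;=\; \sum_{i=1}^{n} \hat{\chi}\!\left(g_i^{-1} g\, g_i\right), \qquad
\hat{\chi}(k) \;=\; \begin{cases} \chi(k), & k \in H,\\ 0, & k \notin H, \end{cases}
\]

where ψ is the character of the induced representation and χ the character of the inducing representation. For the analytic construction, the unnormalized induced space consists of functions \(\varphi : G \to V\) with \(\varphi(xh) = \pi(h)^{-1}\varphi(x)\) and \(\int_{G/H} \lVert\varphi\rVert^{2} < \infty\); in one common convention, normalized induction replaces the first condition by \(\varphi(xh) = \bigl(\Delta_H(h)/\Delta_G(h)\bigr)^{1/2}\pi(h)^{-1}\varphi(x)\).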
Induced representation
[ "Mathematics" ]
1,139
[ "Group theory", "Fields of abstract algebra" ]
357,512
https://en.wikipedia.org/wiki/Royal%20Astronomical%20Society
The Royal Astronomical Society (RAS) is a learned society and charity that encourages and promotes the study of astronomy, solar-system science, geophysics and closely related branches of science. Its headquarters are in Burlington House, on Piccadilly in London. The society has over 4,000 members, known as fellows, most of whom are professional researchers or postgraduate students. Around a quarter of Fellows live outside the UK. The society holds monthly scientific meetings in London, and the annual National Astronomy Meeting at varying locations in the British Isles. The RAS publishes the scientific journals Monthly Notices of the Royal Astronomical Society, Geophysical Journal International and RAS Techniques and Instruments, along with the trade magazine Astronomy & Geophysics. The RAS maintains an astronomy research library, engages in public outreach and advises the UK government on astronomy education. The society recognises achievement in astronomy and geophysics by issuing annual awards and prizes, with its highest award being the Gold Medal of the Royal Astronomical Society. The RAS is the UK adhering organisation to the International Astronomical Union and a member of the UK Science Council. History The society was founded in 1820 as the Astronomical Society of London to support astronomical research. At that time, most members were 'gentleman astronomers' rather than professionals. It became the Royal Astronomical Society in 1831 on receiving a Royal Charter from William IV. In 1846 the RAS absorbed the Spitalfields Mathematical Society, which had been founded in 1717 but was suffering from a decline in membership and dwindling finances. The nineteen remaining members of the mathematical society were given free lifetime membership of the RAS; in exchange, their society's extensive library was donated to the RAS. Between 1835 and 1916 women were not allowed to become fellows, but Anne Sheepshanks, Lady Margaret Lindsay Huggins, Agnes Clerke, Annie Jump Cannon and Williamina Fleming were made honorary members. In 1886 Isis Pogson was the first woman to attempt election as a fellow of the RAS, being nominated (unsuccessfully) by her father and two other fellows. All fellows had been male up to this time and her nomination was withdrawn when lawyers claimed that under the provisions of the society's royal charter, fellows were only referred to as he and as such had to be men. A Supplemental Charter in 1915 opened up fellowship to women. On 14 January 1916, Mary Adela Blagg, Ella K Church, A Grace Cook, Irene Elizabeth Toye Warner and Fiammetta Wilson were the first five women to be elected to Fellowship. Publications One of the major activities of the RAS is publishing refereed journals. It publishes three primary research journals: Monthly Notices of the Royal Astronomical Society for topics in astronomy; Geophysical Journal International for topics in geophysics (in association with the Deutsche Geophysikalische Gesellschaft); and RAS Techniques & Instruments for research methods in those disciplines. The society also publishes a trade magazine for members, Astronomy & Geophysics. 
The history of journals published by the RAS (with abbreviations used by the Astrophysics Data System) is: Memoirs of the Royal Astronomical Society (MmRAS): 1822–1977 Monthly Notices of the Royal Astronomical Society (MNRAS): 1827–present Geophysical Supplement to Monthly Notices (MNRAS): 1922–1957 Geophysical Journal (GeoJ): 1958–1988 Geophysical Journal International (GeoJI): 1989–present (volume numbering continues from GeoJ) Quarterly Journal of the Royal Astronomical Society (QJRAS): 1960–1996 Astronomy & Geophysics (A&G): 1997–present (volume numbering continues from QJRAS) RAS Techniques & Instruments (RASTI): 2021–present Membership Fellows Full members of the RAS are styled Fellows, and may use the post-nominal letters FRAS. Fellowship is open to anyone over the age of 18 who is considered acceptable to the society. As a result of the society's foundation in a time before there were many professional astronomers, no formal qualifications are required. However, around three quarters of fellows are professional astronomers or geophysicists. Most of the other fellows are postgraduate students studying for a PhD in those fields, but there are also advanced amateur astronomers, historians of science who specialise in those disciplines, and other related professionals. The society acts as the professional body for astronomers and geophysicists in the UK and fellows may apply for the Science Council's Chartered Scientist status through the society. The fellowship passed 3,000 in 2003. Friends In 2009 an initiative was launched for those with an interest in astronomy and geophysics but without professional qualifications or specialist knowledge in the subject. Such people may join the Friends of the RAS, which offers popular talks, visits and social events. Meetings The Society organises an extensive programme of meetings: The biggest RAS meeting each year is the National Astronomy Meeting, a major conference of professional astronomers. It is held over 4–5 days each spring or early summer, usually at a university campus in the United Kingdom. Hundreds of astronomers attend each year. More frequent smaller 'highlight' meetings feature lectures about research topics in astronomy and geophysics, often given by winners of the society's awards. They are normally held in Burlington House in London on the afternoon of the second Friday of each month from October to May. The talks are intended to be accessible to a broad audience of astronomers and geophysicists, and are free for anyone to attend (not just members of the society). Formal reports of the meetings are published in The Observatory magazine. Specialist discussion meetings are held on the same day as each highlight meeting. These are aimed at professional scientists in a particular research field, and allow several speakers to present new results or reviews of scientific fields. Usually two discussion meetings on different topics (one in astronomy and one in geophysics) take place simultaneously at different locations within Burlington House, prior to the day's highlight meeting. They are free for members of the society, but charge a small entry fee for non-members. The RAS holds a regular programme of public lectures aimed at a general, non-specialist, audience. These are mostly held on Tuesdays once a month, with the same talk given twice: once at lunchtime and once in the early evening. The venues have varied, but are usually in Burlington House or another nearby location in central London. 
The lectures are free, though some popular sessions require booking in advance. The society occasionally hosts or sponsors meetings in other parts of the United Kingdom, often in collaboration with other scientific societies and universities. Library The Royal Astronomical Society has a more comprehensive collection of books and journals in astronomy and geophysics than the libraries of most universities and research institutions. The library receives some 300 current periodicals in astronomy and geophysics and contains more than 10,000 books from popular level to conference proceedings. Its collection of astronomical rare books is second only to that of the Royal Observatory in Edinburgh in the UK. The RAS library is a major resource not just for the society but also the wider community of astronomers, geophysicists, and historians. Education The society promotes astronomy to members of the general public through its outreach pages for students, teachers, the public and media researchers. The RAS has an advisory role in relation to UK public examinations, such as GCSEs and A Levels. Associated groups The RAS sponsors topical groups, many of them in interdisciplinary areas where the group is jointly sponsored by another learned society or professional body: The Astrobiology Society of Britain (with the NASA Astrobiology Institute) The Astroparticle Physics Group (with the Institute of Physics) The Astrophysical Chemistry Group (with the Royal Society of Chemistry) The British Geophysical Association (with the Geological Society of London) The Magnetosphere Ionosphere and Solar-Terrestrial group (generally known by the acronym MIST) The UK Planetary Forum The UK Solar Physics group Presidents The first person to hold the title of President of the Royal Astronomical Society was William Herschel, though he never chaired a meeting, and since then the post has been held by many distinguished astronomers. The post has generally had a term of office of two years, but some holders resigned after one year e.g. due to poor health. Francis Baily and George Airy were elected a record four times each. Baily's eight years in the role are a record (Airy served for seven). Since 1876 no one has served for more than two years in total. The current president is Mike Lockwood, who began his term in May 2024 and will serve for two years. Awards and prizes The highest award of the Royal Astronomical Society is its Gold Medal, which can be awarded for any purpose but most frequently recognises extraordinary lifetime achievement. Among the recipients best known to the general public are Albert Einstein in 1926, and Stephen Hawking in 1985. Other awards are for particular topics in astronomy or geophysics research, which include the Eddington Medal, the Herschel Medal, the Chapman Medal and the Price Medal. Beyond research, there are specific awards for school teaching (Patrick Moore Medal), public outreach (Annie Maunder Medal), instrumentation (Jackson-Gwilt Medal) and history of science (Agnes Mary Clerke Medal). Lectureships include the Harold Jeffreys Lectureship in geophysics, the George Darwin Lectureship in astronomy, and the Gerald Whitrow Lectureship in cosmology. Each year, the society grants a handful of free memberships for life (termed honorary fellowship) to prominent researchers resident outside the UK. Other activities The society occupies premises at Burlington House, London, where a library and meeting rooms are available to fellows and other interested parties. 
The society represents the interests of astronomy and geophysics to UK national and regional, and European government and related bodies, and maintains a press office, through which it keeps the media and the public at large informed of developments in these sciences. The society allocates grants to worthy causes in astronomy and geophysics, and assists in the management of the Paneth Trust. See also National Astronomy Week (NAW) List of astronomical societies List of geoscience organizations References External links The Royal Astronomical Society Scientific organizations established in 1820 Learned societies of the United Kingdom Astronomy organizations Astronomy societies Astronomy in the United Kingdom Astronomical Organisations based in London with royal patronage 1820 establishments in the United Kingdom
Royal Astronomical Society
[ "Astronomy" ]
2,061
[ "Astronomy societies", "Astronomy organizations" ]
357,565
https://en.wikipedia.org/wiki/Potassium%20sodium%20tartrate
Potassium sodium tartrate tetrahydrate, also known as Rochelle salt, is a double salt of tartaric acid first prepared (in about 1675) by an apothecary, Pierre Seignette, of La Rochelle, France. Potassium sodium tartrate and monopotassium phosphate were the first materials discovered to exhibit piezoelectricity. This property led to its extensive use in crystal phonograph cartridges, microphones and earpieces during the post-World War II consumer electronics boom of the mid-20th century. Such transducers had an exceptionally high output with typical pick-up cartridge outputs as much as 2 volts or more. Rochelle salt is deliquescent so any transducers based on the material deteriorated if stored in damp conditions. It has been used medicinally as a laxative. It has also been used in the process of silvering mirrors. It is an ingredient of Fehling's solution (reagent for reducing sugars). It is used in electroplating, in electronics and piezoelectricity, and as a combustion accelerator in cigarette paper (similar to an oxidizer in pyrotechnics). In organic synthesis, it is used in aqueous workups to break up emulsions, particularly for reactions in which an aluminium-based hydride reagent was used. Sodium potassium tartrate is also important in the food industry. It is a common precipitant in protein crystallography and is also an ingredient in the Biuret reagent which is used to measure protein concentration. This ingredient maintains cupric ions in solution at an alkaline pH. Preparation The starting material is tartar with a minimum 68% tartaric acid content. This is first dissolved in water or in the mother liquor of a previous batch. It is then basified with hot saturated sodium hydroxide solution to pH 8, decolorized with activated charcoal, and chemically purified before being filtered. The filtrate is evaporated to 42 °Bé at 100 °C, and passed to granulators in which Seignette's salt crystallizes on slow cooling. The salt is separated from the mother liquor by centrifugation, accompanied by washing of the granules, and is dried in a rotary furnace and sieved before packaging. Commercially marketed grain sizes range from 2000 μm to < 250 μm (powder). Larger crystals of Rochelle salt have been grown under conditions of reduced gravity and convection on board Skylab. Rochelle salt crystals will begin to dehydrate when the relative humidity drops to about 30% and will begin to dissolve at relative humidities above 84%. Piezoelectricity In 1824, Sir David Brewster demonstrated piezoelectric effects using Rochelle salts, which led to him naming the effect pyroelectricity. In 1919, Alexander McLean Nicolson worked with Rochelle salt, developing audio-related inventions like microphones and speakers at Bell Labs. References Potassium compounds Organic sodium salts Piezoelectric materials Ferroelectric materials Tartrates Food acidity regulators Food antioxidants Double salts E-number additives Deliquescent materials
Potassium sodium tartrate
[ "Physics", "Chemistry", "Materials_science" ]
658
[ "Physical phenomena", "Ferroelectric materials", "Double salts", "Salts", "Organic sodium salts", "Materials", "Electrical phenomena", "Deliquescent materials", "Piezoelectric materials", "Hysteresis", "Matter" ]
357,599
https://en.wikipedia.org/wiki/Component%20Pascal
Component Pascal is a programming language in the tradition of Niklaus Wirth's Pascal, Modula-2, Oberon and Oberon-2. It bears the name of the language Pascal and preserves its heritage, but is incompatible with Pascal. Instead, it is a minor variant and refinement of Oberon-2 with a more expressive type system and built-in string support. Component Pascal was originally named Oberon/L, and was designed and supported by a small ETH Zürich spin-off company named Oberon microsystems. They developed an integrated development environment (IDE) named BlackBox Component Builder. Since 2014, development and support have been taken over by a small group of volunteers. The first version of the IDE was released in 1994, as Oberon/F. At the time, it presented a novel approach to graphical user interface (GUI) construction based on editable forms, where fields and command buttons are linked to exported variables and executable procedures. This approach bears some similarity to the code-behind way used in Microsoft's .NET 3.0 to access code in Extensible Application Markup Language (XAML), which was released in 2008. An open-source software implementation of Component Pascal exists for the .NET and Java virtual machine (JVM) platforms, from the Gardens Point team around John Gough at Queensland University of Technology in Australia. On 23 June 2004 Oberon microsystems announced that the BlackBox Component Builder was made available as a free download and that an open-source version was planned. The beta open-source version was initially released in December 2004 and updated to a final v1.5 release in December 2005. It includes the complete source code of the IDE, compiler, debugger, source analyser, profiler, and interfacing libraries, and can also be downloaded from their website. Several release candidates for v1.6 appeared in the years 2009–2011; the latest one (1.6rc6) appeared on Oberon microsystems web pages in 2011. At the end of 2013, Oberon microsystems released the final release 1.6. It is probably the last release bundled by them. A small community took over the ongoing development. BlackBox Component Pascal uses the extensions .odc (Oberon document) for document files, such as source files, and .osf (Oberon symbol file) for symbol files, while Gardens Point Component Pascal uses .cp for source and .cps for symbol files. BlackBox Component Pascal has its own executable and loadable object format .ocf (Oberon code file); it includes a runtime linking loader for this format. The document format (.odc) is a rich text binary format, which allows formatting, supports conditional folding, and allows active content to be embedded in the source text. It also handles user interface elements in editable forms. This is in the tradition of the Oberon Text format.

Syntax

The full syntax for CP, as given by the Language Report, is shown below. In the extended Backus–Naur form, only 34 grammatical productions are needed, one more than for Oberon-2, although it is a more advanced language.

Module = MODULE ident ";" [ImportList] DeclSeq [BEGIN StatementSeq] [CLOSE StatementSeq] END ident ".".
ImportList = IMPORT [ident ":="] ident {"," [ident ":="] ident} ";".
DeclSeq = { CONST {ConstDecl ";" } | TYPE {TypeDecl ";"} | VAR {VarDecl ";"}} { ProcDecl ";" | ForwardDecl ";"}.
ConstDecl = IdentDef "=" ConstExpr.
TypeDecl = IdentDef "=" Type.
VarDecl = IdentList ":" Type.
ProcDecl = PROCEDURE [Receiver] IdentDef [FormalPars] MethAttributes [";" DeclSeq [BEGIN StatementSeq] END ident].
MethAttributes = ["," NEW] ["," (ABSTRACT | EMPTY | EXTENSIBLE)].
ForwardDecl = PROCEDURE "^" [Receiver] IdentDef [FormalPars] MethAttributes.
FormalPars = "(" [FPSection {";" FPSection}] ")" [":" Type].
FPSection = [VAR | IN | OUT] ident {"," ident} ":" Type.
Receiver = "(" [VAR | IN] ident ":" ident ")".
Type = Qualident | ARRAY [ConstExpr {"," ConstExpr}] OF Type | [ABSTRACT | EXTENSIBLE | LIMITED] RECORD ["("Qualident")"] FieldList {";" FieldList} END | POINTER TO Type | PROCEDURE [FormalPars].
FieldList = [IdentList ":" Type].
StatementSeq = Statement {";" Statement}.
Statement = [ Designator ":=" Expr | Designator ["(" [ExprList] ")"] | IF Expr THEN StatementSeq {ELSIF Expr THEN StatementSeq} [ELSE StatementSeq] END | CASE Expr OF Case {"|" Case} [ELSE StatementSeq] END | WHILE Expr DO StatementSeq END | REPEAT StatementSeq UNTIL Expr | FOR ident ":=" Expr TO Expr [BY ConstExpr] DO StatementSeq END | LOOP StatementSeq END | WITH [ Guard DO StatementSeq ] {"|" [ Guard DO StatementSeq ] } [ELSE StatementSeq] END | EXIT | RETURN [Expr] ].
Case = [CaseLabels {"," CaseLabels} ":" StatementSeq].
CaseLabels = ConstExpr [".." ConstExpr].
Guard = Qualident ":" Qualident.
ConstExpr = Expr.
Expr = SimpleExpr [Relation SimpleExpr].
SimpleExpr = ["+" | "-"] Term {AddOp Term}.
Term = Factor {MulOp Factor}.
Factor = Designator | number | character | string | NIL | Set | "(" Expr ")" | " ~ " Factor.
Set = "{" [Element {"," Element}] "}".
Element = Expr [".." Expr].
Relation = "=" | "#" | "<" | "<=" | ">" | ">=" | IN | IS.
AddOp = "+" | "-" | OR.
MulOp = "*" | "/" | DIV | MOD | "&".
Designator = Qualident {"." ident | "[" ExprList "]" | "^" | "(" Qualident ")" | "(" [ExprList] ")"} [ "$" ].
ExprList = Expr {"," Expr}.
IdentList = IdentDef {"," IdentDef}.
Qualident = [ident "."] ident.
IdentDef = ident ["*" | "-"].

References Further reading From Modula to Oberon Wirth (1990) The Programming Language Oberon Wirth (1990) Differences between Oberon and Oberon-2 Mössenböck and Wirth (1993) The Programming Language Oberon-2 H. Mössenböck, N. Wirth, Institut für Computersysteme, ETH Zürich (ETHZ), January 1992. What's New in Component Pascal (changes from Oberon-2 to CP), Pfister (2001) Components and Objects Together, Clemens Szyperski, Dr.Dobbs, May, 1999 External links Last available version of former official website see also historical notes on the download page Gardens Point Component Pascal for .NET & JVM Component Pascal to C transpiler based on Josef Templ's OFront .NET programming languages Oberon programming language family Modula programming language family Pascal
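To make the grammar concrete, here is a small example module that parses under the productions above (a sketch: the module and procedure names are invented for illustration, and it assumes the BlackBox library module StdLog with its String and Ln procedures for output):

MODULE ExampleHello;
	(* Minimal Component Pascal module: one exported procedure that writes to the BlackBox log. *)
	IMPORT StdLog;

	PROCEDURE Greet* (n: INTEGER);
		VAR i: INTEGER;
	BEGIN
		FOR i := 1 TO n DO
			StdLog.String("Hello from Component Pascal"); StdLog.Ln
		END
	END Greet;

BEGIN
	Greet(1)	(* module body: executed when the module is loaded *)
END ExampleHello.

The trailing asterisk in Greet* is the export mark defined by the IdentDef production, and the statements after the final BEGIN correspond to the optional StatementSeq in the Module production.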
Component Pascal
[ "Technology" ]
1,783
[ "Component-based software engineering", "Components" ]
357,616
https://en.wikipedia.org/wiki/Outline%20of%20software%20engineering
The following outline is provided as an overview of and topical guide to software engineering: Software engineering – application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is the application of engineering to software. The ACM Computing Classification system is a poly-hierarchical ontology that organizes the topics of the field and can be used in semantic web applications and as a de facto standard classification system for the field. The major section "Software and its Engineering" provides an outline and ontology for software engineering. Software applications Software engineers build software (applications, operating systems, system software) that people use. Applications influence software engineering by pressuring developers to solve problems in new ways. For example, consumer software emphasizes low cost, medical software emphasizes high quality, and Internet commerce software emphasizes rapid development. Business software Accounting software Analytics Data mining closely related to database Decision support systems Airline reservations Banking Automated teller machines Cheque processing Credit cards Commerce Trade Auctions (e.g. eBay) Reverse auctions (procurement) Bar code scanners Compilers Parsers Compiler optimization Interpreters Linkers Loaders Communication E-mail Instant messengers VOIP Calendars — scheduling and coordinating Contact managers Computer graphics Animation Special effects for video and film Editing Post-processing Cryptography Databases, support almost every field Embedded systems Both software engineers and traditional engineers write software control systems for embedded products. Automotive software Avionics software Heating ventilating and air conditioning (HVAC) software Medical device software Telephony Telemetry Engineering All traditional engineering branches use software extensively. Engineers use spreadsheets, more than they ever used calculators. Engineers use custom software tools to design, analyze, and simulate their own projects, like bridges and power lines. These projects resemble software in many respects, because the work exists as electronic documents and goes through analysis, design, implementation, and testing phases. Software tools for engineers use the tenets of computer science; as well as the tenets of calculus, physics, and chemistry. Computer Aided Design (CAD) Electronic Design Automation (EDA) Numerical Analysis Simulation File FTP File sharing File synchronization Finance Bond market Futures market Stock market Games Poker Multiuser Dungeons Video games Information systems, support almost every field LIS Management of laboratory data MIS Management of financial and personnel data Logistics Supply chain management Manufacturing Computer Aided Manufacturing (CAM) Distributed Control Systems (DCS) Music Music sequencers Sound effects Music synthesis Network Management Network management system Element Management System Operations Support System Business Support Systems Networks and Internet Domain Name System Protocols Routers Office suites Word processors Spreadsheets Presentations Operating systems Embedded Graphical Multitasking Real-time Robotics Signal processing, encoding and interpreting signals Image processing, encoding and interpreting visual information Speech processing Text recognition Handwriting recognition Simulation, supports almost every field. 
Engineering, A software simulation can be cheaper to build and more flexible to change than a physical engineering model. Sciences Sciences Genomics Traffic Control Air traffic control Ship traffic control Road traffic control Training Drill Simulation Testing Visualization, supports almost every field Architecture Engineering Sciences Voting World Wide Web Browsers Servers Software engineering topics Programming paradigm, based on a programming language technology Object-oriented programming Aspect-oriented programming Functional decomposition Structured programming Rule-based programming Databases Hierarchical Object Relational SQL/XML SQL MYSQL NoSQL Graphical user interfaces GTK+ GIMP Toolkit wxWidgets Ultimate++ Qt toolkit FLTK Programming tools Configuration management and source code management CVS Subversion Git Mercurial RCS GNU Arch LibreSource Synchronizer Team Foundation Server Visual Studio Team Services Build tools Make Rake Cabal Ant CADES Nant Maven Final Builder Gradle Team Foundation Server Visual Studio Team Services Visual Build Pro Editors Integrated development environments (IDEs) Text editors Word processors Parser creation tools Yacc/Bison Static code analysis tools Libraries Component-based software engineering Design languages Unified Modeling Language (UML) Patterns, document many common programming and project management techniques Anti-patterns Patterns Processes and methodologies Agile Agile software development Extreme programming Lean software development Rapid application development (RAD) Rational Unified Process Scrum Heavyweight Cleanroom ISO/IEC 12207 — software life cycle processes ISO 9000 and ISO 9001 Process Models CMM and CMMI/SCAMPI ISO 15504 (SPICE) Metamodels ISO/IEC 24744 SPEM Platforms A platform combines computer hardware and an operating system. As platforms grow more powerful and less costly, applications and tools grow more widely available. BREW Cray supercomputers DEC minicomputers IBM mainframes Linux PCs Classic Mac OS and macOS PCs Microsoft .NET Palm PDAs Sun Microsystems Solaris Windows PCs (Wintel) Symbian OS Other Practices Communication Method engineering Pair programming Performance Engineering Programming productivity Refactoring Software inspections/Code reviews Software reuse Systems integration Teamwork Other tools Decision tables Feature User stories Use cases Computer science topics Skilled software engineers know a lot of computer science including what is possible and impossible, and what is easy and hard for software. Algorithms, well-defined methods for solving specific problems. Searching Sorting Parsing Numerical analysis Compiler theory Yacc/Bison Data structures, well-defined methods for storing and retrieving data. Lists Trees Hash tables Computability, some problems cannot be solved at all List of unsolved problems in computer science Halting problem Complexity, some problems are solvable in principle, yet unsolvable in practice NP completeness Computational complexity theory Formal methods Proof of correctness Program synthesis Adaptive Systems Neural Networks Evolutionary Algorithms Mathematics topics Discrete mathematics is a key foundation of software engineering. 
Number representation Set (computer science) Bags Graphs Sequences Trees Graph (data structure) Logic Deduction First-order logic Higher-order logic Combinatory logic Induction Combinatorics Other Domain knowledge Statistics Decision theory Type theory Life cycle phases Development life cycle phase Requirements gathering / analysis Software architecture Computer programming Testing, detects bugs Black box testing White box testing Quality assurance, ensures compliance with process. Product Life cycle phase and Project lifecycle Inception First development Major release Minor release Bug fix release Maintenance Obsolescence Release development stage, near the end of a release cycle Alpha Beta Gold master 1.0; 2.0 Software development lifecycle Waterfall model — Structured programming and Stepwise refinement SSADM Spiral model — Iterative development V-model Agile software development DSDM Chaos model — Chaos strategy Deliverables Deliverables must be developed for many SE projects. Software engineers rarely make all of these deliverables themselves. They usually cooperate with the writers, trainers, installers, marketers, technical support people, and others who make many of these deliverables. Application software — the software Database — schemas and data. Documentation, online and/or print, FAQ, Readme, release notes, Help, for each role User Administrator Manager Buyer Administration and Maintenance policy, what should be backed-up, checked, configured, ... Installers Migration Upgrade from previous installations Upgrade from competitor's installations Training materials, for each role User Administrator Manager Buyer Support info for computer support groups. Marketing and sales materials White papers, explain the technologies used in the applications Business roles Operations Users Administrators Managers Buyers Development Analysts Programmers Testers Managers Business Consulting — customization and installation of applications Sales Marketing Legal — contracts, intellectual property rights Privacy and Privacy engineering Support — helping customers use applications Personnel — hiring and training qualified personnel Finance — funding new development Academe Educators Researchers Management topics Leadership Coaching Communication Listening Motivation Vision, SEs are good at this Example, everyone follows a good example best Human resource management Hiring, getting people into an organization Training Evaluation Project management Goal setting Customer interaction (Rethink) Estimation Risk management Change management Process management Software development processes Metrics Business topics Quality programs Malcolm Baldrige National Quality Award Six Sigma Total Quality Management (TQM) Software engineering profession Software engineering demographics Software engineering economics CCSE History of software engineering Software engineering professionalism Ethics Licensing Legal Intellectual property Consumer protection History of software engineering History of software engineering Pioneers Many people made important contributions to SE technologies, practices, or applications. John Backus: Fortran, first optimizing compiler, BNF Victor Basili: Experience factory. F.L. Bauer: Stack principle, popularized the term Software Engineering Kent Beck: Refactoring, extreme programming, pair programming, test-driven development. Tim Berners-Lee: World Wide Web Barry Boehm: SE economics, COCOMO, Spiral model. Grady Booch: Object-oriented design, UML. Fred Brooks: Managed System 360 and OS 360. 
Wrote The Mythical Man-Month and No Silver Bullet. Larry Constantine: Structured design, coupling, cohesion Edsger Dijkstra: Wrote Notes on Structured Programming, A Discipline of Programming and Go To Statement Considered Harmful, algorithms, formal methods, pedagogy. Michael Fagan: Software inspection. Tom Gilb: Software metrics, Software inspection, Evolutionary Delivery ("Evo"). Adele Goldstine: Wrote the Operators Manual for the ENIAC, the first electronic digital computer, and trained some of the first human computers Lois Haibt: FORTRAN, wrote the first parser Margaret Hamilton: Coined the term "software engineering", developed Universal Systems Language Mary Jean Harrold: Regression testing, fault localization Grace Hopper: The first compiler (Mark 1), COBOL, Nanoseconds. Watts Humphrey: Capability Maturity Model, Personal Software Process, fellow of the Software Engineering Institute. Jean Ichbiah: Ada Michael A. Jackson: Jackson Structured Programming, Jackson System Development Bill Joy: Berkeley Unix, vi, Java. Alan Kay: Smalltalk Brian Kernighan: C and Unix. Donald Knuth: Wrote The Art of Computer Programming, TeX, algorithms, literate programming Nancy Leveson: System safety Bertrand Meyer: Design by Contract, Eiffel programming language. Peter G. Neumann: RISKS Digest, ACM Sigsoft. David Parnas: Module design, social responsibility, professionalism. Jef Raskin: Developed the original Macintosh GUI, authored The Humane Interface Dennis Ritchie: C and Unix. Winston W. Royce: Waterfall model. Mary Shaw: Software architecture. Richard Stallman: Founder of the Free Software Foundation Linus Torvalds: Linux kernel, free software / open source development. Will Tracz: Reuse, ACM Software Engineering Notes. Gerald Weinberg: Wrote The Psychology of Computer Programming. Elaine Weyuker: Software testing Jeannette Wing: Formal specifications. Ed Yourdon: Structured programming, wrote The Decline and Fall of the American Programmer. See also List of programmers List of computer scientists Notable publications About Face: The Essentials of User Interface Design by Alan Cooper, about user interface design. The Capability Maturity Model by Watts Humphrey. Written for the Software Engineering Institute, emphasizing management and process. (See Managing the Software Process ) The Cathedral and the Bazaar by Eric Raymond about open source development. The Decline and Fall of the American Programmer by Ed Yourdon predicts the end of software development in the U.S. Design Patterns by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides. Extreme Programming Explained by Kent Beck "Go To Statement Considered Harmful" by Edsger Dijkstra. "Internet, Innovation and Open Source:Actors in the Network" — First Monday article by Ilkka Tuomi (2000) source The Mythical Man-Month by Fred Brooks, about project management. Object-oriented Analysis and Design by Grady Booch. Peopleware by Tom DeMarco and Tim Lister. The pragmatic engineer versus the scientific designer by E. W. Dijkstra Principles of Software Engineering Management by Tom Gilb about evolutionary processes. The Psychology of Computer Programming by Gerald Weinberg. Written as an independent consultant, partly about his years at IBM. Refactoring: Improving the Design of Existing Code by Martin Fowler, Kent Beck, John Brant, William Opdyke, and Don Roberts. The Pragmatic Programmer: from journeyman to master by Andrew Hunt, and David Thomas. 
Software Engineering Body of Knowledge (SWEBOK) ISO/IEC TR 19759 Related fields Computer science Information engineering Information technology Traditional engineering Computer engineering Electrical engineering Software engineering Domain engineering Information technology engineering Knowledge engineering User interface engineering Web engineering Arts and Sciences Mathematics Computer science Information science Application software Information systems Programming Systems Engineering See also Index of software engineering articles Search-based software engineering SWEBOK Software engineering body of knowledge CCSE Computing curriculum for software engineering Computer terms etymology, the origins of computer terms Complexity or scaling Second system syndrome optimization Source code escrow Feature interaction problem Certification (software engineering) Engineering disasters#Failure due to software Outline of software development List of software development philosophies References External links ACM Computing Classification System Guide to the Software Engineering Body of Knowledge (SWEBOK) Professional organizations British Computer Society Association for Computing Machinery IEEE Computer Society Professionalism SE Code of Ethics Professional licensing in Texas Education CCSE Undergraduate curriculum Standards IEEE Software Engineering Standards Internet Engineering Task Force ISO Government organizations European Software Institute Software Engineering Institute Agile Organization to promote Agile software development Test driven development Extreme programming Other organizations Online community for software engineers Software Engineering Society Demographics U.S. Bureau of Labor Statistics on SE Surveys David Redmiles page from the University of California site Other Full text in PDF from the NATO conference in Garmisch Computer Risks Peter G. Neumann's risks column. Outlines of applied sciences Outlines Software engineering, Outline of
Outline of software engineering
[ "Technology", "Engineering" ]
2,715
[ "Systems engineering", "Computer engineering", "Computing-related lists", "Software engineering", "Information technology" ]
357,632
https://en.wikipedia.org/wiki/Mating
In biology, mating is the pairing of either opposite-sex or hermaphroditic organisms for the purposes of sexual reproduction. Fertilization is the fusion of two gametes. Copulation is the union of the sex organs of two sexually reproducing animals for insemination and subsequent internal fertilization. Mating may also lead to external fertilization, as seen in amphibians, fishes and plants. For most species, mating is between two individuals of opposite sexes. However, for some hermaphroditic species, copulation is not required because the parent organism is capable of self-fertilization (autogamy); for example, banana slugs. The term mating is also applied to related processes in bacteria, archaea and viruses. Mating in these cases involves the pairing of individuals, accompanied by the pairing of their homologous chromosomes and then exchange of genomic information leading to formation of recombinant progeny (see mating systems). Animals For animals, mating strategies include random mating, disassortative mating, assortative mating, or a mating pool. In some birds, it includes behaviors such as nest-building and feeding offspring. The human practice of mating and artificially inseminating domesticated animals is part of animal husbandry. In some terrestrial arthropods, including insects representing basal (primitive) phylogenetic clades, the male deposits spermatozoa on the substrate, sometimes stored within a special structure. Courtship involves inducing the female to take up the sperm package into her genital opening without actual copulation. Courtship is often facilitated through forming groups, called leks, in flies and many other insects. For example, male Tokunagayusurika akamusi forms swarms dancing in the air to attract females. In groups such as dragonflies and many spiders, males extrude sperm into secondary copulatory structures removed from their genital opening, which are then used to inseminate the female (in dragonflies, it is a set of modified sternites on the second abdominal segment; in spiders, it is the male pedipalps). In advanced groups of insects, the male uses its aedeagus, a structure formed from the terminal segments of the abdomen, to deposit sperm directly (though sometimes in a capsule called a "spermatophore") into the female's reproductive tract. Other animals reproduce sexually with external fertilization, including many basal vertebrates. Vertebrates reproduce with internal fertilization through cloacal copulation (in reptiles, some fish, and most birds) or ejaculation of semen through the penis into the female's vagina (in mammals). In domesticated animals, various mating methods are employed, such as pen mating (where the female is moved to the desired male in a pen) or paddock mating (where one male is let loose in the paddock with several females). Plants and fungi As in animals, mating in other eukaryotes, such as plants and fungi, denotes the union of gametes or of cells of compatible mating types in sexual reproduction. However, in vascular plants this is mostly achieved without physical contact between mating individuals (see pollination), and in some cases, e.g., in fungi no distinguishable male or female organs exist (see isogamy); however, mating types in some fungal species are somewhat analogous to sexual dimorphism in animals, and determine whether or not two individual isolates can mate. Yeasts are eukaryotic microorganisms classified in the kingdom Fungi, with 1,500 species currently described.
In general, under high stress conditions like nutrient starvation, haploid cells will die; under the same conditions, however, diploid cells of Saccharomyces cerevisiae can undergo sporulation, entering sexual reproduction (meiosis) and produce a variety of haploid spores, which can go on to mate (conjugate) and reform the diploid. Protists Protists are a large group of diverse eukaryotic microorganisms, mainly unicellular animals and plants, that do not form tissues. The earliest eukaryotes were likely protists. Mating and sexual reproduction are widespread among extant eukaryotes including protists such as Paramecium and Chlamydomonas. In many eukaryotic species, mating is promoted by sex pheromones including the protist Blepharisma japonicum. Based on a phylogenetic analysis, Dacks and Roger proposed that facultative sex was present in the common ancestor of all eukaryotes. However, to many biologists it seemed unlikely until recently, that mating and sex could be a primordial and fundamental characteristic of eukaryotes. A principal reason for this view was that mating and sex appeared to be lacking in certain pathogenic protists whose ancestors branched off early from the eukaryotic family tree. However, several of these protists are now known to be capable of, or to recently have had, the capability for meiosis and hence mating. To cite one example, the common intestinal parasite Giardia intestinalis was once considered to be a descendant of a protist lineage that predated the emergence of meiosis and sex. However, G. intestinalis was recently found to have a core set of genes that function in meiosis and that are widely present among sexual eukaryotes. These results suggested that G. intestinalis is capable of meiosis and thus mating and sexual reproduction. Furthermore, direct evidence for meiotic recombination, indicative of mating and sexual reproduction, was also found in G. intestinalis. Other protists for which evidence of mating and sexual reproduction has recently been described are parasitic protozoa of the genus Leishmania, Trichomonas vaginalis, and acanthamoeba. Protists generally reproduce asexually under favorable environmental conditions, but tend to reproduce sexually under stressful conditions, such as starvation or heat shock. See also Heterosexuality Animal husbandry Breeding in the wild Breeding season Evolution of sex Lordosis behavior Mate choice copying Mating system Reproduction Sex determination system Sexual conflict Sexual intercourse References External links Introduction to Animal Reproduction Advantages of Sexual Reproduction Animal developmental biology Reproduction in animals Sexology Sexuality Ethology Fertility
Mating
[ "Biology" ]
1,304
[ "Reproduction in animals", "Behavior", "Sex", "Reproduction", "Sexology", "Behavioural sciences", "Ethology", "Sexuality", "Mating" ]
357,636
https://en.wikipedia.org/wiki/Flight%20control%20surfaces
Aircraft flight control surfaces are aerodynamic devices allowing a pilot to adjust and control the aircraft's flight attitude. Development of an effective set of flight control surfaces was a critical advance in the development of aircraft. Early efforts at fixed-wing aircraft design succeeded in generating sufficient lift to get the aircraft off the ground, but once aloft, the aircraft proved uncontrollable, often with disastrous results. The development of effective flight controls is what allowed stable flight. This article describes the control surfaces used on a fixed-wing aircraft of conventional design. Other fixed-wing aircraft configurations may use different control surfaces but the basic principles remain. The controls (stick and rudder) for rotary wing aircraft (helicopter or autogyro) accomplish the same motions about the three axes of rotation, but manipulate the rotating flight controls (main rotor disk and tail rotor disk) in a completely different manner. Flight control surfaces are operated by aircraft flight control systems. Considered as a generalized fluid control surface, rudders, in particular, are shared between aircraft and watercraft. Development The Wright brothers are credited with developing the first practical control surfaces. It is a main part of their patent on flying. Unlike modern control surfaces, they used wing warping. In an attempt to circumvent the Wright patent, Glenn Curtiss made hinged control surfaces, the same type of concept first patented some four decades earlier in the United Kingdom. Hinged control surfaces have the advantage of not causing stresses that are a problem of wing warping and are easier to build into structures. Axes of motion An aircraft is free to rotate around three axes that are perpendicular to each other and intersect at its center of gravity (CG). To control position and direction a pilot must be able to control rotation about each of them. Transverse axis The transverse axis, also known as lateral axis, passes through an aircraft from wingtip to wingtip. Rotation about this axis is called pitch. Pitch changes the vertical direction that the aircraft's nose is pointing. The elevators are the primary control surfaces for pitch. Longitudinal axis The longitudinal axis passes through the aircraft from nose to tail. Rotation about this axis is called roll. The angular displacement about this axis is called bank. The pilot changes bank angle by increasing the lift on one wing and decreasing it on the other. This differential lift causes rotation around the longitudinal axis. The ailerons are the primary control of bank. The rudder also has a secondary effect on bank. Vertical axis The vertical axis passes through an aircraft from top to bottom. Rotation about this axis is called yaw. Yaw changes the direction the aircraft's nose is pointing, left or right. The primary control of yaw is with the rudder. Ailerons also have a secondary effect on yaw. These axes move with the aircraft and change relative to the earth as the aircraft moves. For example, for an aircraft whose left wing is pointing straight down, its "vertical" axis is parallel with the ground, while its "transverse" axis is perpendicular to the ground. Main control surfaces The main control surfaces of a fixed-wing aircraft are attached to the airframe on hinges or tracks so they may move and thus deflect the air stream passing over them. This redirection of the air stream generates an unbalanced force to rotate the plane about the associated axis. 
Ailerons Ailerons are mounted on the trailing edge of each wing near the wingtips and move in opposite directions. When the pilot moves the aileron control to the left, or turns the wheel counter-clockwise, the left aileron goes up and the right aileron goes down. A raised aileron reduces lift on that wing and a lowered one increases lift, so moving the aileron control in this way causes the left wing to drop and the right wing to rise. This causes the aircraft to roll to the left and begin to turn to the left. Centering the control returns the ailerons to the neutral position, maintaining the bank angle. The aircraft will continue to turn until opposite aileron motion returns the bank angle to zero to fly straight. Elevator The elevator is a moveable part of the horizontal stabilizer, hinged to the back of the fixed part of the horizontal tail. The elevators move up and down together. When the pilot pulls the stick backward, the elevators go up. Pushing the stick forward causes the elevators to go down. Raised elevators push down on the tail and cause the nose to pitch up. This makes the wings fly at a higher angle of attack, which generates more lift and more drag. Centering the stick returns the elevators to neutral and stops the change of pitch. Some aircraft, such as an MD-80, use a servo tab within the elevator surface to aerodynamically move the main surface into position. The direction of travel of the control tab will thus be in a direction opposite to the main control surface. It is for this reason that an MD-80 tail looks like it has a 'split' elevator system. In the canard arrangement, the elevators are hinged to the rear of a foreplane and move in the opposite sense, for example when the pilot pulls the stick back the elevators go down to increase the lift at the front and lift the nose up. Rudder The rudder is typically mounted on the trailing edge of the vertical stabilizer, part of the empennage. When the pilot pushes the left pedal, the rudder deflects left. Pushing the right pedal causes the rudder to deflect right. Deflecting the rudder right pushes the tail left and causes the nose to yaw to the right. Centering the rudder pedals returns the rudder to neutral and stops the yaw. Secondary effects of controls Ailerons The ailerons primarily cause roll. Whenever lift is increased, induced drag is also increased so when the aileron control is moved to roll the aircraft to the left, the right aileron is lowered which increases lift on the right wing and therefore increases induced drag on the right wing. Using ailerons causes adverse yaw, meaning the nose of the aircraft yaws in a direction opposite to the aileron application. When moving the aileron control to bank the wings to the left, adverse yaw moves the nose of the aircraft to the right. Adverse yaw is most pronounced in low-speed aircraft with long wings, such as gliders. It is counteracted by the pilot using the rudder pedals. Differential ailerons are ailerons which have been rigged such that the downgoing aileron deflects less than the upward-moving one, causing less adverse yaw. Rudder The rudder is a fundamental control surface which is typically controlled by pedals rather than at the stick. It is the primary means of controlling yaw—the rotation of an airplane about its vertical axis. The rudder may also be called upon to counter-act the adverse yaw produced by the roll-control surfaces. 
If rudder is continuously applied in level flight the aircraft will yaw initially in the direction of the applied rudder – the primary effect of rudder. After a few seconds the aircraft will tend to bank in the direction of yaw. This arises initially from the increased speed of the wing opposite to the direction of yaw and the reduced speed of the other wing. The faster wing generates more lift and so rises, while the other wing tends to go down because of generating less lift. Continued application of rudder sustains rolling tendency because the aircraft flying at an angle to the airflow - skidding towards the forward wing. When applying right rudder in an aircraft with dihedral the left hand wing will have increased angle of attack and the right hand wing will have decreased angle of attack which will result in a roll to the right. An aircraft with anhedral will show the opposite effect. This effect of the rudder is commonly used in model aircraft where if sufficient dihedral or polyhedral is included in the wing design, primary roll control such as ailerons may be omitted altogether. Turning the aircraft Unlike turning a boat, changing the direction of an aircraft normally must be done with the ailerons rather than the rudder. The rudder turns (yaws) the aircraft but has little effect on its direction of travel. With aircraft, the change in direction is caused by the horizontal component of lift, acting on the wings. The pilot tilts the lift force, which is perpendicular to the wings, in the direction of the intended turn by rolling the aircraft into the turn. As the bank angle is increased, the lifting force can be split into two components: one acting vertically and one acting horizontally. If the total lift is kept constant, the vertical component of lift will decrease. As the weight of the aircraft is unchanged, this would result in the aircraft descending if not countered. To maintain level flight requires increased positive (up) elevator to increase the angle of attack, increase the total lift generated and keep the vertical component of lift equal with the weight of the aircraft. This cannot continue indefinitely. The total load factor required to maintain level flight is directly related to the bank angle. This means that for a given airspeed, level flight can only be maintained up to a certain given angle of bank. Beyond this angle of bank, the aircraft will suffer an accelerated stall if the pilot attempts to generate enough lift to maintain level flight. Alternate main control surfaces Some aircraft configurations have non-standard primary controls. For example, instead of elevators at the back of the stabilizers, the entire tailplane may change angle. Some aircraft have a tail in the shape of a V, and the moving parts at the back of those combine the functions of elevators and rudder. Delta wing aircraft may have "elevons" at the back of the wing, which combine the functions of elevators and ailerons. Secondary control surfaces Spoilers On low drag aircraft such as sailplanes, spoilers are used to disrupt airflow over the wing and greatly reduce lift. This allows a glider pilot to lose altitude without gaining excessive airspeed. Spoilers are sometimes called "lift dumpers". Spoilers that can be used asymmetrically are called spoilerons and can affect an aircraft's roll. Flaps Flaps are mounted on the trailing edge on the inboard section of each wing (near the wing roots). They are deflected down to increase the effective curvature of the wing. 
Flaps raise the maximum lift coefficient of the aircraft and therefore reduce its stalling speed. They are used during low speed, high angle of attack flight including take-off and descent for landing. Some aircraft are equipped with "flaperons", which are more commonly called "inboard ailerons". These devices function primarily as ailerons, but on some aircraft, will "droop" when the flaps are deployed, thus acting as both a flap and a roll-control inboard aileron. Slats Slats, also known as leading edge devices, are extensions to the front of a wing for lift augmentation, and are intended to reduce the stalling speed by altering the airflow over the wing. Slats may be fixed or retractable - fixed slats (e.g. as on the Fieseler Fi 156 Storch) give excellent slow speed and STOL capabilities, but compromise higher speed performance. Retractable slats, as seen on most airliners, provide reduced stalling speed for take-off and landing, but are retracted for cruising. Air brakes Air brakes are used to increase drag. Spoilers might act as air brakes, but are not pure air brakes as they also function as lift-dumpers or in some cases as roll control surfaces. Air brakes are usually surfaces that deflect outwards from the fuselage (in most cases symmetrically on opposing sides) into the airstream in order to increase form-drag. As they are in most cases located elsewhere on the aircraft, they do not directly affect the lift generated by the wing. Their purpose is to slow down the aircraft. They are particularly useful when a high rate of descent is required. They are common on high performance military aircraft as well as civilian aircraft, especially those lacking reverse thrust capability. Control trimming surfaces Trimming controls allow a pilot to balance the lift and drag being produced by the wings and control surfaces over a wide range of load and airspeed. This reduces the effort required to adjust or maintain a desired flight attitude. Elevator trim Elevator trim balances the control force necessary to maintain the correct aerodynamic force on the tail to balance the aircraft. Whilst carrying out certain flight exercises, a lot of trim could be required to maintain the desired angle of attack. This mainly applies to slow flight, where a nose-up attitude is required, in turn requiring a lot of trim causing the tailplane to exert a strong downforce. Elevator trim is correlated with the speed of the airflow over the tail, thus airspeed changes to the aircraft require re-trimming. An important design parameter for aircraft is the stability of the aircraft when trimmed for level flight. Any disturbances such as gusts or turbulence will be damped over a short period of time and the aircraft will return to its level flight trimmed airspeed. Trimming tail plane Except for very light aircraft, trim tabs on the elevators are unable to provide the force and range of motion desired. To provide the appropriate trim force the entire horizontal tail plane is made adjustable in pitch. This allows the pilot to select exactly the right amount of positive or negative lift from the tail plane while reducing drag from the elevators. Control horn A control horn is a section of control surface which projects ahead of the pivot point. It generates a force which tends to increase the surface's deflection thus reducing the control pressure experienced by the pilot. Control horns may also incorporate a counterweight which helps to balance the control and prevent it from fluttering in the airstream. 
Some designs feature separate anti-flutter weights. (In radio-controlled model aircraft, the term "control horn" has a different meaning.) Spring trim In the simplest arrangement, trimming is done by a mechanical spring (or bungee) which adds an appropriate force to augment the pilot's control input. The spring is usually connected to an elevator trim lever to allow the pilot to set the spring force applied. Rudder and aileron trim Most fixed-wing aircraft have a trimming control surface on the elevator, but larger aircraft also have a trim control for the rudder, and another for the ailerons. The rudder trim is to counter any asymmetric thrust from the engines. Aileron trim is to counter the effects of the centre of gravity being displaced from the aircraft centerline. This can be caused by fuel or an item of payload being loaded more on one side of the aircraft than the other, such as when one fuel tank has more fuel than the other. See also Aircraft engine controls Aircraft flight control systems Aircraft flight mechanics Flight with disabled controls Ship motions Six degrees of freedom V-tail Wing warping Notes References Private Pilot Manual; Jeppesen Sanderson (hardcover, 1999). Airplane Flying Handbook; U.S. Department of Transportation, Federal Aviation Administration, FAA-8083-3A (2004). Clancy, L.J. (1975) Aerodynamics, Pitman Publishing Limited, London. External links A clear explanation of model aircraft flight controls by BMFA See How It Flies by John S. Denker. A new spin on the perceptions, procedures, and principles of flight. Aircraft aerodynamics Aircraft controls Attitude control
Flight control surfaces
[ "Engineering" ]
3,167
[ "Attitude control", "Aerospace engineering" ]
357,657
https://en.wikipedia.org/wiki/Bloch%27s%20theorem
In condensed matter physics, Bloch's theorem states that solutions to the Schrödinger equation in a periodic potential can be expressed as plane waves modulated by periodic functions. The theorem is named after the Swiss physicist Felix Bloch, who discovered the theorem in 1929. Mathematically, they are written where is position, is the wave function, is a periodic function with the same periodicity as the crystal, the wave vector is the crystal momentum vector, is Euler's number, and is the imaginary unit. Functions of this form are known as Bloch functions or Bloch states, and serve as a suitable basis for the wave functions or states of electrons in crystalline solids. The description of electrons in terms of Bloch functions, termed Bloch electrons (or less often Bloch Waves), underlies the concept of electronic band structures. These eigenstates are written with subscripts as , where is a discrete index, called the band index, which is present because there are many different wave functions with the same (each has a different periodic component ). Within a band (i.e., for fixed ), varies continuously with , as does its energy. Also, is unique only up to a constant reciprocal lattice vector , or, . Therefore, the wave vector can be restricted to the first Brillouin zone of the reciprocal lattice without loss of generality. Applications and consequences Applicability The most common example of Bloch's theorem is describing electrons in a crystal, especially in characterizing the crystal's electronic properties, such as electronic band structure. However, a Bloch-wave description applies more generally to any wave-like phenomenon in a periodic medium. For example, a periodic dielectric structure in electromagnetism leads to photonic crystals, and a periodic acoustic medium leads to phononic crystals. It is generally treated in the various forms of the dynamical theory of diffraction. Wave vector Suppose an electron is in a Bloch state where is periodic with the same periodicity as the crystal lattice. The actual quantum state of the electron is entirely determined by , not or directly. This is important because and are not unique. Specifically, if can be written as above using , it can also be written using , where is any reciprocal lattice vector (see figure at right). Therefore, wave vectors that differ by a reciprocal lattice vector are equivalent, in the sense that they characterize the same set of Bloch states. The first Brillouin zone is a restricted set of values of with the property that no two of them are equivalent, yet every possible is equivalent to one (and only one) vector in the first Brillouin zone. Therefore, if we restrict to the first Brillouin zone, then every Bloch state has a unique . Therefore, the first Brillouin zone is often used to depict all of the Bloch states without redundancy, for example in a band structure, and it is used for the same reason in many calculations. When is multiplied by the reduced Planck constant, it equals the electron's crystal momentum. Related to this, the group velocity of an electron can be calculated based on how the energy of a Bloch state varies with ; for more details see crystal momentum. Detailed example For a detailed example in which the consequences of Bloch's theorem are worked out in a specific situation, see the article Particle in a one-dimensional lattice (periodic potential). 
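The form described in the lead above can be written compactly. The following is the standard textbook expression for a Bloch function, supplied here as a sketch in conventional notation rather than as the article's own markup:

\[
  \psi_{n\mathbf{k}}(\mathbf{r}) \;=\; e^{\,i\mathbf{k}\cdot\mathbf{r}}\, u_{n\mathbf{k}}(\mathbf{r}),
  \qquad
  u_{n\mathbf{k}}(\mathbf{r}+\mathbf{R}) \;=\; u_{n\mathbf{k}}(\mathbf{r})
  \quad\text{for every lattice vector } \mathbf{R},
\]

where r is the position, ψ the wave function, u a periodic function with the periodicity of the crystal, k the wave vector (crystal momentum vector), e Euler's number, i the imaginary unit, and n the band index. Because k and k + G label the same set of Bloch states for any reciprocal lattice vector G, the wave vector can be restricted to the first Brillouin zone, as stated above.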
Statement A second and equivalent way to state the theorem is the following: the energy eigenstates can be chosen so that, for each eigenstate ψ, there is a wave vector k such that translating the argument by any lattice vector R only multiplies the wave function by a phase factor, ψ(r + R) = exp(i k·R) ψ(r). Proof Using lattice periodicity Since Bloch's theorem is a statement about lattice periodicity, in this proof all the symmetries are encoded as translation symmetries of the wave function itself. Using operators In this proof all the symmetries are encoded as commutation properties of the translation operators. Using group theory Apart from the group-theory technicalities, this proof is interesting because it makes clear how to generalize Bloch's theorem to groups that contain more than just translations. This is typically done for space groups, which combine a translation with a point group, and it is used for computing the band structure, spectrum and specific heats of crystals given a specific crystal symmetry such as FCC or BCC and possibly an extra basis. In this proof it is also possible to see that it is key that the extra point group is driven by a symmetry of the effective potential, but that it must commute with the Hamiltonian. In the generalized version of Bloch's theorem, the Fourier transform, i.e. the wave-function expansion, is generalized from a discrete Fourier transform, which is applicable only to cyclic groups and therefore to translations, into a character expansion of the wave function, where the characters are given by the specific finite point group. Here it is also possible to see how the characters (as the invariants of the irreducible representations) can be treated as the fundamental building blocks instead of the irreducible representations themselves. Velocity and effective mass If we apply the time-independent Schrödinger equation to the Bloch wave function, we obtain an eigenvalue problem for the periodic part of the wave function, with periodic boundary conditions. Given that this is defined in a finite volume, we expect an infinite family of eigenvalues; here the wave vector k is a parameter of the Hamiltonian, and therefore we arrive at a "continuous family" of eigenvalues dependent on the continuous parameter k, and thus at the basic concept of an electronic band structure. This shows how the effective momentum can be seen as composed of two parts, a standard momentum and a crystal momentum ħk. More precisely, the crystal momentum is not a momentum, but it stands in for the momentum in the same way as the electromagnetic momentum in minimal coupling, and as part of a canonical transformation of the momentum. For the effective velocity and the effective mass we can derive expressions from the band energy (their standard forms are given in the sketch below). The curvature of the band energy, through a factor of 1/ħ², defines the inverse of the effective mass tensor, which can be used to write a semi-classical equation for a charge carrier in a band, in which the acceleration of the carrier is related to the applied force through the inverse effective mass tensor. This equation is analogous to the de Broglie-wave type of approximation. As an intuitive interpretation, both of the previous two equations formally resemble, and are in semi-classical analogy with, Newton's second law for an electron in an external Lorentz force. History and related equations The concept of the Bloch state was developed by Felix Bloch in 1928 to describe the conduction of electrons in crystalline solids. The same underlying mathematics, however, was also discovered independently several times: by George William Hill (1877), Gaston Floquet (1883), and Alexander Lyapunov (1892). As a result, a variety of nomenclatures are common: applied to ordinary differential equations, it is called Floquet theory (or occasionally the Lyapunov–Floquet theorem). The general form of a one-dimensional periodic potential equation is Hill's equation, in which the potential coefficient is a periodic function (its standard form is written out in the sketch below). 
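Two of the expressions referred to above have well-known standard forms, given here as a sketch in conventional notation rather than as a quotation of the article's own equations. The first line is the semiclassical group velocity and effective-mass relation from the velocity and effective mass passage; the second is Hill's equation just mentioned, with a coefficient f that is periodic with period T:

\[
  \mathbf{v}_n(\mathbf{k}) \;=\; \frac{1}{\hbar}\,\nabla_{\mathbf{k}} E_n(\mathbf{k}),
  \qquad
  \bigl[M^{*\,-1}\bigr]_{ij} \;=\; \frac{1}{\hbar^{2}}\,
  \frac{\partial^{2} E_n(\mathbf{k})}{\partial k_i\,\partial k_j},
  \qquad
  a_i \;=\; \sum_j \bigl[M^{*\,-1}\bigr]_{ij}\, F_j,
\]
\[
  \frac{d^{2}y}{dt^{2}} + f(t)\,y \;=\; 0,
  \qquad f(t+T) \;=\; f(t).
\]

Here E_n(k) is the band energy, F an applied external force and a the resulting semiclassical acceleration, so the effective mass tensor plays the role of the mass in Newton's second law.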
Specific periodic one-dimensional equations include the Kronig–Penney model and Mathieu's equation. Mathematically Bloch's theorem is interpreted in terms of unitary characters of a lattice group, and is applied to spectral geometry. See also Bloch oscillations Bloch wave – MoM method Electronic band structure Nearly free electron model Periodic boundary conditions Symmetries in quantum mechanics Tight-binding model Wannier function References Further reading Eponymous theorems of physics Quantum mechanics Condensed matter physics
Bloch's theorem
[ "Physics", "Chemistry", "Materials_science", "Mathematics", "Engineering" ]
1,476
[ "Theorems in quantum mechanics", "Equations of physics", "Phases of matter", "Quantum mechanics", "Materials science", "Eponymous theorems of physics", "Theorems in mathematical physics", "Condensed matter physics", "Matter", "Physics theorems" ]
357,695
https://en.wikipedia.org/wiki/Geometric%20centre%20of%20Slovenia
The Geometric Centre of Slovenia (GEOSS) is the geometric centre of the country. Its geographic coordinates are and its elevation is 644.842 m. It lies in the hamlet of Spodnja Slivna near Vače in the Municipality of Litija. Since 4 July 1982, it has been marked with a memorial stone designed by the architect Marjan Božič, about 50 m away from the given coordinates. A plaque reading Živimo in gospodarimo na svoji zemlji ('We live and prosper upon our land') was added on 14 September 1989. In 2003, Slovenia adopted the Geometric Centre of Slovenia Act, which is a unique case in Europe. References External links GEOSS homepage Virtual panoramas, maps and aerial photography of the location Point GEOSS (Local Landmark). Map and description. Pespoti.si. Geography of Slovenia Tourist attractions in Slovenia Slovenia Municipality of Litija
Geometric centre of Slovenia
[ "Physics", "Mathematics" ]
204
[ "Point (geometry)", "Geometric centers", "Geographical centres", "Symmetry" ]
357,715
https://en.wikipedia.org/wiki/The%20First%20%2420%20Million%20Is%20Always%20the%20Hardest
The First $20 Million Is Always the Hardest is a 2002 film based on the novel of the same name by technology-culture writer Po Bronson. The film stars Adam Garcia and Rosario Dawson. The screenplay was written by Jon Favreau and Gary Tieche. Plot Andy Kasper is a marketer who quits his job in search of something more fulfilling. He gets hired at LaHonda Research Institute, where Francis Benoit assigns him to design the PC99, a $99 PC. He moves into a run-down boarding house where he meets his neighbor Alisa, an artist. He puts together a team of unassigned LaHonda employees. The team includes: Salman Fard, a short, foreign man with an accent who is hacking into CIA files when Andy meets him; Curtis "Tiny" Russell, a massively obese, anthropophobic man; and Darrell, a tall, blond, pierced, scary, germaphobic, deep-voiced man with personal space issues who regularly refers to himself in the third person. The team finds many non-essential parts but cannot come close to the $99 mark. It is Salman's idea to put all the software on the internet, eliminating the need for a hard drive, RAM, a CD-ROM drive, a floppy drive, and anything that holds information. The computer has been reduced to a microprocessor, a monitor, a mouse, a keyboard, and the internet, but it is still too expensive. Having seen the rest of his team watching a hologram of an attractive lady the day before, in a dream Andy is inspired to eliminate the monitor in favor of the cheaper holographic projector. The last few hundred dollars come off when Darrell suggests using virtual reality gloves in place of a mouse and keyboard. Tiny then writes a "hypnotizer" code to link the gloves, the projector, and the internet, and they're done. But immediately before he finishes, the whole team (except for Tiny, who is still writing the code) quits LaHonda after being told that there are no more funds for their project, but sign a non-exclusive patent waiver, meaning that LaHonda will share the patent rights to any technology they had developed up to that point. After leaving LaHonda, they pitch their product to numerous companies, but do not get accepted, mainly because the prototype emagi (electronic magic) was ugly, and something always seemed to go wrong during the demonstration of their product. Alisa, whose relationship with Andy has been growing steadily, helps improve the emagi's looks, which helps the team with their callback with executive. They agree to give her 51% of their company in exchange for getting their product manufactured and for getting Andy's Porsche bought back, which he had had to sell in order to raise money to build a new emagi after leaving LaHonda. Unfortunately, she then sells the patent rights to the emagi to Francis Benoit, who plans to sell the emagi at $999 a piece and reap a huge profit. The team interrupts the meeting in which Benoit is going to introduce the emagi to the world and introduces an even newer computer he and his team developed and manufactured at LaHonda, which was in a state of disaster when they arrived. It was a small silver tube that projected a hologram and lasers which would detect where the hands were, eliminating the need even for virtual reality gloves. Andy then reminds Benoit of the non-exclusive patent waiver, which had been Benoit's idea in the first place. 
Cast Adam Garcia as Andy Kasper Rosario Dawson as Alisa Anjul Nigam as Salman Fard Ethan Suplee as Curtis "Tiny" Russell Jake Busey as Darrell Enrico Colantoni as Francis Benoit Gregory Jbara as Hank Dan Butler as Lloyd Linda Hart as Mrs. 'B' Shiva Rose as Torso Chandra West as Robin Rob Benedict as Willy Heather Paige Kent as Claudia Goss John Rothman as Ben Reggie Lee as Suit Reception Box office Opening Weekend Gross was $2,535 (USA) (30 June 2002) (2 Screens (NY, LA) The feature received limited release in New York and Los Angeles. Its domestic gross was just $5,491, making it one of the greatest flops in movie history. Critical response References External links The First $20 Million Is Always the Hardest at the AFI Catalog of Feature Films The First $20 Million Is Always the Hardest at tcmdb The First $20 Million Is Always the Hardest at Allmovie The First $20 Million Is Always the Hardest at Metacritic The First $20 Million Is Always the Hardest at Rotten Tomatoes 2002 films Films about computing 2002 comedy films American comedy films Films set in the San Francisco Bay Area Films shot in San Francisco Films directed by Mick Jackson Films with screenplays by Jon Favreau 2000s English-language films 2000s American films
The First $20 Million Is Always the Hardest
[ "Technology" ]
1,004
[ "Works about computing", "Films about computing" ]
9,598,851
https://en.wikipedia.org/wiki/Atomic%20carbon
Atomic carbon, systematically named carbon and λ0-methane, is a colourless gaseous inorganic chemical with the chemical formula C (also written [C]). It is kinetically unstable at ambient temperature and pressure, being removed through autopolymerisation. Atomic carbon is the simplest of the allotropes of carbon, and is also the progenitor of carbon clusters. In addition, it may be considered to be the monomer of all (condensed) carbon allotropes like graphite and diamond. Nomenclature The trivial name monocarbon is the most commonly used and preferred IUPAC name. The systematic name carbon, a valid IUPAC name, is constructed according to the compositional nomenclature. However, as a compositional name, it does not distinguish between different forms of pure carbon. The systematic name λ0-methane, also valid IUPAC name, is constructed according to the substitutive nomenclature. Along with monocarbon, this name does distinguish the titular compound as they derived using structural information about the molecule. To better reflect its structure, free atomic carbon is often written as [C]. λ2-methylium () is the ion resulting from the gain of by atomic carbon. Properties Amphotericity A Lewis acid can join with an electron pair of atomic carbon, and an electron pair of a Lewis base can join with atomic carbon by adduction: :[C] + M → [MC] [C] + :L → [CL] Because of this donation or acceptance of an adducted electron pair, atomic carbon has Lewis amphoteric character. Atomic carbon has the capacity to donate up to two electron pairs to Lewis acids, or accept up to two pairs from Lewis bases. A proton can join with the atomic carbon by protonation: C + → Because of this capture of the proton (), atomic carbon and its adducts of Lewis bases, such as water, also have Brønsted–Lowry basic character. Atomic carbon's conjugate acid is λ2-methylium (). + C + Aqueous solutions of adducts are however, unstable due to hydration of the carbon centre and the λ2-methylium group to produce λ2-methanol (CHOH) or λ2-methane (), or hydroxymethylium () groups, respectively. + C → CHOH + → The λ2-methanol group in adducts can potentially isomerise to form formaldehyde, or be further hydrated to form methanediol. The hydroxymethylium group in adducts can potentially be further hydrated to form dihydroxymethylium (), or be oxidised by water to form formylium (). Electromagnetic properties The electrons in atomic carbon are distributed among the atomic orbitals according to the aufbau principle to produce unique quantum states, with corresponding energy levels. The state with the lowest energy level, or ground state, is a triplet diradical state (3P0), closely followed by 3P1 and 3P2. The next two excited states that are relatively close in energy are a singlet (1D2) and singlet diradical (1S0). The non-radical state of atomic carbon is systematically named λ2-methylidene, and the diradical states that include the ground state is named carbon(2•) or λ2-methanediyl. The 1D2 and 1S0 states lie 121.9 kJ mol−1 and 259.0 kJ mol−1 above the ground state, respectively. Transitions between these three states are formally forbidden from occurring due to the requirement of spin flipping and or electron pairing. This means that atomic carbon phosphoresces in the near-infrared region of the electromagnetic spectrum at 981.1 nm. It can also fluoresce in infrared and phosphoresce in the blue region at 873.0 nm and 461.9 nm, respectively, upon excitation by ultraviolet radiation. 
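The phosphorescence and fluorescence wavelengths quoted above follow directly from the energy spacings of the 3P, 1D2 and 1S0 levels. As a minimal arithmetic check, using only the 121.9 and 259.0 kJ mol−1 figures given above and standard physical constants, a molar energy gap can be converted to a photon wavelength via λ = h c N_A / ΔE:

# Convert molar energy gaps (kJ/mol) between atomic-carbon states into
# photon wavelengths (nm): lambda = h * c * N_A / delta_E.
H = 6.62607015e-34      # Planck constant, J s
C = 2.99792458e8        # speed of light, m/s
N_A = 6.02214076e23     # Avogadro constant, 1/mol

def wavelength_nm(delta_e_kj_per_mol):
    delta_e_per_photon = delta_e_kj_per_mol * 1e3 / N_A   # J per photon
    return H * C / delta_e_per_photon * 1e9               # wavelength in nm

print(wavelength_nm(121.9))          # 1D2 -> 3P  : ~981 nm (near-infrared)
print(wavelength_nm(259.0 - 121.9))  # 1S0 -> 1D2 : ~873 nm (infrared)
print(wavelength_nm(259.0))          # 1S0 -> 3P  : ~462 nm (blue)

The three results reproduce, to within rounding of the quoted energies, the 981.1 nm, 873.0 nm and 461.9 nm values given in the text.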
The different states of atomic carbon exhibit varying chemical behaviours. For example, reactions of the triplet radical with non-radical species generally involve abstraction, whereas reactions of the singlet non-radical involve not only abstraction, but also addition by insertion. [C]2•(3P0) + → [CHOH] → [CH] + [HO] [C](1D2) + → [CHOH] → CO + or Production One method of synthesis is by passing a large current through two adjacent carbon rods, generating an electric arc; Phil Shevlin has done the principal work in this field. The way this species is made is closely related to the formation of fullerenes such as C60, the chief difference being that a much lower vacuum is used in atomic carbon formation. Atomic carbon is generated in the thermolysis of 5-diazotetrazole upon extrusion of 3 equivalents of dinitrogen: CN6 → :C: + 3N2 A clean source of atomic carbon can be obtained from the thermal decomposition of tantalum carbide. In this source, carbon is loaded into a thin-walled tantalum tube. After being sealed, the tube is heated by a direct electric current. The dissolved carbon atoms diffuse to the outer surface of the tube and, as the temperature rises, the evaporation of atomic carbon from the surface of the tantalum tube is observed. The source provides pure carbon atoms without the presence of any additional species. Carbon suboxide decarbonylation Atomic carbon can be produced by carbon suboxide decarbonylation. In this process, carbon suboxide decomposes to produce atomic carbon and carbon monoxide according to the equation: C3O2 → 2 CO + [C] The process involves dicarbon monoxide as an intermediate and occurs in two steps. Photolytic far-ultraviolet radiation is needed for both decarbonylations. C3O2 → [CCO] + CO [CCO] → CO + [C] Uses Normally, a sample of atomic carbon exists as a mixture of excited states in addition to the ground state in thermodynamic equilibrium. Each state contributes differently to the reaction mechanisms that can take place. A simple test used to determine which state is involved is to make use of the diagnostic reaction of the triplet state with O2; if the reaction yield is unchanged, it indicates that the singlet state is involved. The diradical ground state normally undergoes abstraction reactions. Atomic carbon has been used to generate "true" carbenes by the abstraction of oxygen atoms from carbonyl groups: R2C=O + :C: → R2C: + CO Carbenes formed in this way will exhibit true carbenic behaviour. Carbenes prepared by other methods, such as from diazo compounds, might exhibit properties better attributed to the diazo compound used to make the carbene (which mimics carbene behaviour) rather than to the carbene itself. This is important for a mechanistic understanding of true carbene behaviour. Reactions As atomic carbon is an electron-deficient species, it spontaneously autopolymerises in its pure form, or converts to an adduct upon treatment with a Lewis acid or base. Oxidation of atomic carbon gives carbon monoxide, whereas reduction gives λ2-methane. Non-metals, including oxygen, strongly attack atomic carbon, forming divalent carbon compounds: 2 [C] + O2 → 2 CO Atomic carbon is highly reactive, and most of its reactions are very exothermic. They are generally carried out in the gas phase at liquid nitrogen temperatures (77 K). Typical reactions with organic compounds include: Insertion into a C-H bond in alkanes to form a carbene Deoxygenation of carbonyl groups in ketones and aldehydes to form a carbene, 2-butanone forming 2-butanylidene. 
Insertion into carbon -carbon double bonds to form a cyclopropylidene which undergoes ring-opening, a simple example being insertion into an alkene to form a cumulene. With water insertion into the O-H bond forms the carbene, H-C-OH that rearranges to formaldehyde, HCHO. References Further reading Allotropes of carbon
Atomic carbon
[ "Chemistry" ]
1,730
[ "Allotropes of carbon", "Allotropes" ]
9,599,147
https://en.wikipedia.org/wiki/Melanotroph
A melanotroph (or melanotrope) is a cell in the pituitary gland that generates melanocyte-stimulating hormone (α‐MSH) from its precursor pro-opiomelanocortin. Chronic stress can induce the secretion of α‐MSH in melanotrophs and lead to their subsequent degeneration. See also Chromophobe cell Chromophil Acidophil cell Basophil cell Oxyphil cell Oxyphil cell (parathyroid) Pituitary gland Neuroendocrine cell List of distinct cell types in the adult human body References Endocrine system
Melanotroph
[ "Chemistry", "Biology" ]
134
[ "Endocrine system", "Biotechnology stubs", "Biochemistry stubs", "Organ systems", "Biochemistry" ]
9,599,592
https://en.wikipedia.org/wiki/Atomic%20domain
In mathematics, more specifically ring theory, an atomic domain or factorization domain is an integral domain in which every non-zero non-unit can be written in at least one way as a finite product of irreducible elements. Atomic domains are different from unique factorization domains in that this decomposition of an element into irreducibles need not be unique; stated differently, an irreducible element is not necessarily a prime element. Important examples of atomic domains include the class of all unique factorization domains and all Noetherian domains. More generally, any integral domain satisfying the ascending chain condition on principal ideals (ACCP) is an atomic domain. Although the converse is claimed to hold in Cohn's paper, this is known to be false. The term "atomic" is due to P. M. Cohn, who called an irreducible element of an integral domain an "atom". Motivation In this section, a ring can be viewed as merely an abstract set in which one can perform the operations of addition and multiplication; analogous to the integers. The ring of integers (that is, the set of integers with the natural operations of addition and multiplication) satisfy many important properties. One such property is the fundamental theorem of arithmetic. Thus, when considering abstract rings, a natural question to ask is under what conditions such a theorem holds. Since a unique factorization domain is precisely a ring in which an analogue of the fundamental theorem of arithmetic holds, this question is readily answered. However, one notices that there are two aspects of the fundamental theorem of the arithmetic: first, that any integer is the finite product of prime numbers, and second, that this product is unique up to rearrangement (and multiplication by units). Therefore, it is also natural to ask under what conditions particular elements of a ring can be "decomposed" without requiring uniqueness. The concept of an atomic domain addresses this. Definition Let R be an integral domain. If every non-zero non-unit x of R can be written as a product of irreducible elements, R is referred to as an atomic domain. (The product is necessarily finite, since infinite products are not defined in ring theory. Such a product is allowed to involve the same irreducible element more than once as a factor.) Any such expression is called a factorization of x. Special cases In an atomic domain, it is possible that different factorizations of the same element x have different lengths. It is even possible that among the factorizations of x there is no bound on the number of irreducible factors. If on the contrary the number of factors is bounded for every non-zero non-unit x, then R is a bounded factorization domain (BFD); formally this means that for each such x there exists an integer N such that if with none of the xi invertible then n < N. If such a bound exists, no chain of proper divisors from x to 1 can exceed this bound in length (since the quotient at every step can be factored, producing a factorization of x with at least one irreducible factor for each step of the chain), so there cannot be any infinite strictly ascending chain of principal ideals of R. That condition, called the ascending chain condition on principal ideals or ACCP, is strictly weaker than the BFD condition, and strictly stronger than the atomic condition (in other words, even if there exist infinite chains of proper divisors, it can still be that every x possesses a finite factorization). 
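A standard concrete illustration of the non-uniqueness mentioned in the opening paragraph, not drawn from this article but commonly cited in textbooks, is the ring Z[√−5], which is Noetherian and hence atomic, but not a unique factorization domain:

\[
  6 \;=\; 2 \cdot 3 \;=\; \bigl(1+\sqrt{-5}\bigr)\bigl(1-\sqrt{-5}\bigr)
  \quad\text{in } \mathbb{Z}[\sqrt{-5}].
\]

All four factors are irreducible, since the norm N(a + b√−5) = a² + 5b² never takes the values 2 or 3, yet 2 divides the product (1+√−5)(1−√−5) without dividing either factor, so it is an irreducible element that is not prime.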
Two independent conditions that are both strictly stronger than the BFD condition are the half-factorial domain condition (HFD: any two factorizations of any given x have the same length) and the finite factorization domain condition (FFD: any x has but a finite number of non-associate divisors). Every unique factorization domain obviously satisfies these two conditions, but neither implies unique factorization. References Commutative algebra
Atomic domain
[ "Mathematics" ]
814
[ "Fields of abstract algebra", "Commutative algebra" ]
9,599,621
https://en.wikipedia.org/wiki/List%20of%20British%20Rail%20electric%20multiple%20unit%20classes
This article lists every electric-powered multiple unit allocated a TOPS classification or used on the mainline network since 1948, i.e. British Railways and post-privatisation. For a historical overview of electric multiple unit development in Great Britain, see British electric multiple units. British Rail operated a wide variety of electric multiple units for use on electrified lines: AC units operate off (AC) from overhead wires. Where clearances for the overhead wires on the Great Eastern Main Line, North Clyde Line and London, Tilbury and Southend railway routes were below standard, a reduced voltage of was used. The Midland Railway units used . Under the computer numbering, AC units (including mixed-voltage units that can also work off a DC supply) were given a class in the range 300-399. DC units operate off (DC) from a third rail on the Southern Region and North London, Merseyside and Tyneside networks. The Manchester-Bury Railway line used from a side-contact third rail. The Manchester South Junction & Altrincham and "Woodhead" and initially the Great Eastern Railway routes used from overhead wires. Under the computer numbering, DC units were given a class in the range 400-599. AC EMUs and dual-voltage EMUs First generation Second generation Modern/Third generation These use solid state switching devices (thyristors and transistors) and have electronic power control. High speed trains High speed multiple unit or fixed formation trainsets, capable of operating at speeds above . DC EMUs Southern Region units The Southern Railway and its successor, the Southern Region of British Rail, used three letter codes to classify their DC EMU fleets, as shown after the TOPS class numbers. Southern Region EMUs were classified in the 400 series under TOPS. Pre-Nationalisation Mark 1 and 2 bodyshell Tube Stock Note that TOPS class 499 is currently allocated to London Underground owned stock that needs to use Network Rail owned tracks. This does not involve any renumbering of the stock involved, and is only for electronic recording purposes. Modern EMUs Other DC units The 500 series classes were reserved for miscellaneous DC EMUs not from the Southern Region. This included the DC (third/fourth rail) lines in North London, Manchester and Merseyside and the OHLE lines in Greater Manchester. The DC electric network around Tyneside had been de-electrified by the time TOPS was introduced, and the stock withdrawn or transferred to the Southern Region. TOPS classes Pre-TOPS classes Ex-LNER units (Tyneside stock) Ex-LNWR units (North London stock) Ex-LOR units (Liverpool Overhead Railway stock) Ex-LYR units (Manchester-Bury stock) Ex-Mersey Railway units (Merseyside DC stock) Ex-W&CR units (Waterloo & City Railway stock) Battery electric multiple unit (BEMU) The original BEMU was a one-off unit, withdrawn before the introduction of TOPS. A new generation battery EMU (called an Independently Powered Electric Multiple Unit) was created in 2014, converted from a Class 379. Non National Rail units All rail vehicles operating on Network Rail infrastructure are required to be given TOPS codes. 
For this reason, London Underground, Sheffield Supertram and Tyne & Wear Metro trains have their own TOPS classes: See also List of British Rail classes List of British Rail modern traction locomotive classes List of British Rail diesel multiple unit classes British Rail locomotive and multiple unit numbering and classification SR multiple unit numbering and classification British Rail coach type codes Electric multiple unit References Sources List British Rail electric multiple unit classes British Rail rolling stock Electric multiple units of Great Britain British Rail
List of British Rail electric multiple unit classes
[ "Engineering" ]
735
[ "Electrical engineering", "Electrical-engineering-related lists" ]
9,599,860
https://en.wikipedia.org/wiki/Gag-onc%20fusion%20protein
The gag-onc fusion protein is a general term for a fusion protein formed from a group-specific antigen ('gag') gene and that of an oncogene ('onc'), a gene that plays a role in the development of a cancer. The name is also written as Gag-v-Onc, with "v" indicating that the Onc sequence resides in a viral genome. Onc is a generic placeholder for a given specific oncogene, such as C-jun. (In the case of a fusion with C-jun, the resulting "gag-jun" protein is known alternatively as p65). Background Gag genes are part of a general architecture for retroviruses, viruses that replicate through reverse transcription, where the gag region of the genome encodes proteins that constitute the matrix, capsid and nucleocapsid of the mature virus particles. Like in HIV's replication cycle, these proteins are needed for viral budding from the host cell's plasma membrane, where the fully formed virions leave the cell to infect other cells. gag-v-onc When a viral gene is introduced into the host cell and is sufficient to induce oncogenesis – the creation of cancerous cells – in the infected cell line, the gene is said to be a "viral transforming gene". When this type of gene is translated to a protein, the protein is called a "transforming protein". Note that since the viral oncogenes originated from a host genome, the transformation event is different from transduction, which describes the process of introducing non-native genes to a host organism via a viral infection. Rous sarcoma virus The Gag-v-Onc fusion protein from the Rous sarcoma virus illustrates the dual role that the fusion protein plays in the viral and host cellular life cycle. For example, the viral gene Src (as in "sarcoma") is not necessary for viral reproduction, but does affect virulence. Due to evidence of conserved homology between the v-Src gene and its host (animal) genomes, and its non-essential status for viral reproduction, the v-Src gene is likely to have been acquired from a host genome and altered by subsequent mutations. These subsequent mutations are responsible for the oncogenic capabilities of the virus, as the normal (host) version of the Src gene, c-Src promotes survival, angiogenesis, proliferation and invasion pathways. These native pathways are disrupted in the presence of the mutant Src gene (v-Src) such that oncogenesis becomes more likely for the infected host cells, since the v-Src gene is translated into a functionally distinct version of its host counterpart. murine leukemia virus In the case of the murine leukemia viruses, a species of viruses capable of causing cancer in murines (mice), the viral life cycle can also be responsible for oncogenesis through a Gag-v-Onc fusion protein called "Mo-MuLV(src)", which is a Gag-v-Src protein capable of inducing oncogenesis in living mice. See also Rous sarcoma virus Fusion protein Fusion gene Fusion transcript Chimeric gene Bcr-abl fusion protein Oncovirus Retrovirus Retrotransposon Retroposon Integrase External links http://www.ijbs.com/v06p0730.htm#headingA7 References Viral protein class Viral oncoproteins Biochemistry Cell biology
Gag-onc fusion protein
[ "Chemistry", "Biology" ]
734
[ "Biochemistry", "Cell biology", "nan" ]
9,600,102
https://en.wikipedia.org/wiki/Acid%20growth
Acid growth refers to the ability of plant cells and plant cell walls to elongate or expand quickly at low (acidic) pH. The cell wall needs to be modified in order to maintain the turgor pressure. This modification is controlled by plant hormones such as auxin. Auxin also controls the expression of some cell wall genes. This form of growth does not involve an increase in cell number. During acid growth, plant cells enlarge rapidly because the cell walls are made more extensible by expansin, a pH-dependent wall-loosening protein. Expansin loosens the network-like connections between cellulose microfibrils within the cell wall, which allows the cell volume to increase by turgor and osmosis. A typical sequence leading up to this would involve the introduction of a plant hormone (auxin, for example) that causes protons (H+ ions) to be pumped out of the cell into the cell wall. As a result, the cell wall solution becomes more acidic. This activates expansin, causing the wall to become more extensible and to undergo wall stress relaxation, which enables the cell to take up water and to expand. It was suggested by various scientists that the epidermis is the unique target of auxin, but this theory has been disproved over time. The acid growth theory has been very controversial in the past. References Plant cells Plant physiology Auxin action
Acid growth
[ "Biology" ]
298
[ "Plant physiology", "Plants" ]
9,600,896
https://en.wikipedia.org/wiki/Zero%20fret
A zero fret is a fret placed at the headstock end of the neck of a banjo, guitar, mandolin, or bass guitar. It serves one of the functions of a nut: holding the strings the correct distance above the other frets on the instrument's fretboard. A separate string-guide (often a regular nut) is still required to establish the correct string spacing when a zero fret is used. Function The zero fret is positioned at the location normally occupied by the nut. On a guitar having a zero fret, the nut is located behind the zero fret and serves solely to keep the strings spaced properly. The strings rest atop the zero fret, which is generally at the same height as all the others. Some people prefer and feel more comfortable with the zero fret slightly taller than the rest of the frets. The zero fret functions as all other frets do. Purpose It is claimed that with a zero fret, the sound of an open string more closely approximates the sound of a fretted string as compared to the open string sound on a guitar with no zero fret. Countering this claim are musicians who feel that a bone or even synthetic nut will enhance the overall tone of the instrument regardless of the string being played open or fretted. Some manufacturers that frequently use(d) a zero fret are Gretsch, Kay, Selmer, Höfner, Mosrite, Framus, Vox, Vigier, Harley Benton and bass guitar manufacturer MTD. Now very few manufacturers use this design and those who do list it as a feature. Steinberger uses a zero fret with their headless guitars but omit the nut; strings are mounted in place where the head would normally be, so there is no need for the string guides that the nut provides. 2015 model year Gibson guitars incorporate a zero fret in order to accommodate a brass adjustable nut, which the manufacturer claims causes better sustain and intonation. The British acoustic guitar manufacturer Fylde Guitars uses a zero fret as standard. Drawbacks Low string action (distance between string and fret wire) results from a non-elevated zero fret when using the same fingerboard fret wire size. On some guitars it may be necessary to raise the bridge saddle height by a small amount. Advantages A large number of different gauges of strings can be used. Strings reside on top of the zero fret regardless of thickness, and have the same distance travel down to the first fret. If you have cut the grooves into the nut for thick strings, it may be necessary to change the nut out completely in order to go back to lighter gauge strings. On zero fret, this isn't needed. A conventional nut made of relatively soft plastic or bone will easily clamp the string and make fine tuning difficult. The clamping effect does not only result from too narrow nut slots, but notably more from the impression of the string windings into the nut material at the bottom of the slot. That effect is more prevalent with wire wound nylon strings than with steel strings. Using a zero fret relieves the pressure from the nut material and the nut serves only to center the strings sideways. Tuning is smooth and without sudden movement and intonation jumps. There are only a few manufacturers making metal (bronze) conventional nuts which avoid the string clamping effect. Elevated and Straight Zero Fret When the zero fret is viewed as an extension of the fingerboard by adding one more fret wire before the nut, then the luthier would fit a fret wire of identical dimensions. This will decrease the string action at the lower fret positions. 
To avoid this effect, luthiers often use a thicker fret wire at the zero fret position. While the straight and level zero fret improves the tuning accuracy along the lower frets (from fret 1 to 3 mainly) the elevated fret keeps the string action higher and helps to avoid string buzz. Intonation Issues The term "guitar intonation" denotes to what precision the ideal equally tempered musical scale can be produced by a guitar, while "piano intonation" refers to the setup procedure of the individual hammers in the piano keyboard. A conventional nut or an elevated zero fret will cause pitch errors on the lower frets due to the increased pressure on the strings. The fret distances would have to be corrected for that increased tension effect, but with guitars that is not common. On the contrary, the "level" zero fret will have no pitch errors on the fingered notes. This can be demonstrated by measuring the pitch deviation of each single note on the fingerboard. Players who frequently use the lower frets from 0 to 5 will benefit most from a zero fret. Using mainly the upper range of the fingerboard neutralizes the advantage of the zero fret and conventional nuts are equally suited. Those players usually avoid playing first and second fret positions because of the pitch problems there. For beginners a zero fretted guitar is preferable. Most guitars nowadays are manufactured with rather high string action and high conventional nuts. The user is expected to adjust nut height to their personal playing style. Unfortunately, sales personnel in music stores do know about that, but lack the skill, extra time and/or cost to properly set up a guitar in what has become a very price competitive, low profit product. Guitars with zero frets would be helpful in that situation. The photograph shows a high conventional nut on a factory-made Japanese concert guitar and may serve to illustrate the necessity for evaluating the advantages and the drawbacks of modern guitar manufacturing. The high string tension makes the instrument almost unplayable for beginner students, especially for children, and additionally serves to spoil the ear training for harmonies and tone intervals due to intonation errors. Luthier intervention is required in such cases. High string action at the nut is often preferred to avoid string buzz with heavy playing style. The lower fret area is therefore out of tune. A partial solution is to apply nut corrections by reducing the distance from the nut to fret one for a small amount and then re-tune the now shorter string. That procedure lowers the tuning of all the other notes on the complete string and compromises have to be found. It may no longer be feasible to tune the open string to its base tone. Some manufacturers suggest tuning at the third or fifth fret using electronic tuners. See also Nut (string instrument) References Musical instrument parts and accessories
Zero fret
[ "Technology" ]
1,326
[ "Components", "Musical instrument parts and accessories" ]
9,600,919
https://en.wikipedia.org/wiki/Gadodiamide
Gadodiamide, sold under the brand name Omniscan, is a gadolinium-based MRI contrast agent (GBCA), used in magnetic resonance imaging (MRI) procedures to assist in the visualization of blood vessels. Medical uses Gadodiamide is a contrast medium used for cranial and spinal magnetic resonance imaging (MRI) and for general MRI of the body after intravenous administration. It provides contrast enhancement and facilitates visualisation of abnormal structures or lesions in various parts of the body, including the central nervous system (CNS). It does not cross an intact blood–brain barrier. Adverse effects Gadodiamide is one of the main GBCAs associated with nephrogenic systemic fibrosis (NSF), a toxic reaction occurring in some people with kidney problems. No cases have been seen in people with normal kidney function. A 2015 study found gadolinium deposited in the brain tissue of people who had received gadodiamide. Other studies using post-mortem mass spectrometry found that most of the deposited gadolinium remained at least two years after an injection, and found deposits also in individuals with no kidney problems. In vitro studies have found it to be neurotoxic. An Italian task force recommended that breastfeeding mothers avoid, as a precaution, any contrast agent, such as gadodiamide, that has been associated with nephrogenic systemic fibrosis. Society and culture Gadodiamide was suspended along with gadopentetic acid (Magnevist) by the European Medicines Agency in 2017. References MRI contrast agents Organogadolinium compounds Withdrawn drugs
Gadodiamide
[ "Chemistry" ]
328
[ "Drug safety", "Withdrawn drugs" ]
9,601,295
https://en.wikipedia.org/wiki/Corbadrine
Corbadrine, sold under the brand name Neo-Cobefrine and also known as levonordefrin and α-methylnorepinephrine, is a catecholamine sympathomimetic used as a topical nasal decongestant and vasoconstrictor in dentistry in the United States. It is usually used in a pre-mixed solution with local anesthetics, such as mepivacaine. The drug acts as a non-selective agonist of the α1-, α2-, and β-adrenergic receptors. It is said to have preferential activity at the α2-adrenergic receptor. Corbadrine is also a metabolite of the antihypertensive drug methyldopa and plays a role in its pharmacology and effects. Pharmacology Pharmacokinetics Corbadrine is metabolized primarily by catechol O-methyltransferase (COMT). Chemistry Corbadrine, also known as 3,4,β-trihydroxy-α-methylphenethylamine or as 3,4,β-trihydroxyamphetamine, as well as α-methylnorepinephrine or (–)-3,4-dihydroxynorephedrine, is a substituted phenethylamine and amphetamine derivative. Analogues of corbadrine include α-methyldopamine, dioxifedrine (3,4-dihydroxyephedrine; α-methylepinephrine), dioxethedrin (3,4-dihydroxy-N-ethylnorephedrine; α-methyl-N-ethylnorepinephrine), and hydroxyamphetamine (4-hydroxyamphetamine; α-methyltyramine). Society and culture Names Corbadrine is the generic name of the drug. It is also known as levonordefrin. Synonyms of corbadrine include α-methylnorepinephrine and (–)-3,4-dihydroxynorephedrine. The drug has been sold under the brand name Neo-Cobefrine. References External links Alpha-1 adrenergic receptor agonists Alpha-2 adrenergic receptor agonists Beta-adrenergic agonists Beta-Hydroxyamphetamines Catecholamines Decongestants Human drug metabolites Norepinephrine releasing agents Vasoconstrictors
Corbadrine
[ "Chemistry" ]
556
[ "Chemicals in medicine", "Human drug metabolites" ]
9,601,834
https://en.wikipedia.org/wiki/Swain%20School%20of%20Design
The Swain School of Design (1881–1988) was an independent tuition-free non-profit school of higher learning in New Bedford, Massachusetts. It first defined its mission as a "school of design" for the "application of art to the industries" in 1902, making it the 12th oldest art school in the United States. By then, the 19th-century whaling capital of the world was already in a textile boom, one that required designers. In response, Swain's trustees developed a meticulous program of study. In the first year, students would train for 40 hours a week in "Pure Design" to prepare them for a second year in "Historic Design." Applied skills spanned a panoply of techniques, involving the design of picture frames, book and magazine covers, illuminations, lettering, stained glass, metalwork, architectural moldings and the "application of ornament to prints." Within a generation, that foresight had made New Bedford, with nearly 70 mills and 41,000 mill workers, the richest city per capita in the U.S. In 1921, the school removed the word "free" from its name, instituted a range of fees, and began providing options for diplomas and certificates. That's also when they created a teacher training program, and an Atelier Swain, modeled on the principles of instruction at the influential École des Beaux-Arts in Paris, with multiple annual competitions. In 1925, they built the William W. Crapo Gallery, where frequent exhibitions and lectures were held, featuring well-known artists. The school's mission was no longer limited to providing applied training in the arts to the city's poor, but also to "rais[e] the standard of artistic knowledge, and appreciation" for its wealthy potential benefactors. But the boom was effectively over by the 1930s. A glut of mills and growing competition from the South had decreased profits. Lower profits meant lower wages, and led to strikes. By the 1940s, America was back at war, fabrics were rationed, and the mills were repurposed for the military. In the 1950s and 1960s, Swain focused on the fine arts during a post-war surge of interest in American Art. The school created new undergraduate degree programs in painting, printmaking, sculpture and graphic design. Then they hired professional artists, with European or émigré training and exhibition histories, to staff them. The new faculty included Sigmund Abeles (1934–), Ron Kowalke (1936–2021), Alphonse Mattia (1947–2023), Joyce Reopel (1933–2019), Nathaniel Cannon Smith (1866–1943), Mel Zabarsky (1932–2019). By 1969, New York City's Parsons School of Design was accepting more than 40 percent of Swain's students in its graduate programs. In response, Swain created a Bachelor of Fine Arts (BFA) degree, and graduated its first dozen BFAs the following year. In 1985, Swain created more degree programs: undergraduate and graduate degrees in "ceramics, fiber, metal, and wood as part of a transfer agreement with Boston University's Program in Artisanry." In the same year, Swain introduced a one-year certificate and a baccalaureate program in Architectural Artisanry, which was aimed at both novice students as well as those seeking retraining. But the program never took off. In 1988, spurred by low enrollment and a financially struggling city, the school sold its New Bedford campus, and merged with Southeastern Massachusetts University's College of Visual and Performing Arts in nearby North Dartmouth. Swain's archives are now part of the since renamed University of Massachusetts Dartmouth archive. 
In 1999, the New Bedford Art Museum curated an exhibition of notable Swain student and faculty work called "Swain Resurgent." History Liberal arts (1882–1902) The "Swain Free School" was founded in 1881 through the provisions of the estate of William W. Swain (1793–1858), a shipping-and-oil magnate. Swain had already bequeathed his mansion to the school in 1858, and that served as Swain's first building until it burned to the ground in 1948. Ultimately, however, the campus would comprise some thirteen buildings, including the purpose-built New Bedford Textile School, two residence halls and the Rodman Mansion, listed on the National Register of Historic Places. In 1882, the Board of Trustees appointed Francis F. Gummere as the school's first president, and the school opened on October 25 of that year. For its first 20 years, the school provided a well-rounded liberal arts instruction in languages, literature, history, education, art and chemistry. Aimed at local residents without means, students' only financial obligation to the school was a mandatory deposit of $10 per semester as a measure of good faith. Once the "whaling center of the world," there were slow but steady signs that New Bedford might also become the textile center of the world. The first mill was built in New Bedford in 1835. By the mid-19th century, housing was being built adjacent to the mills in "mill districts," and the city was attracting large numbers of immigrants seeking work. Once the "richest city per capita in the United States, if not in the world, during the whaling boom, the town fathers determined that to achieve that level of prosperity all over again, textiles required textile designers, and that was the need Swain School sought to meet. Design (1902–1930s) In 1902, the trustees set a course for Swain as a "school of design." With a rigorous curriculum requiring 40 full-time hours a week, students were trained in "Pure Design" in the first year to prepare for the second in "Historic Design." Annual catalogues, then called "circulars," were student-designed and competitively selected every year, and exhibitions and lectures were free, frequent and focused on well-known artists. The second year curriculum focused on mastering technique in historically correct ways: the design of picture frames, book and magazine covers, illuminations, lettering, stained glass, metalwork, architectural moldings and the "application of ornament to prints." This approach lasted a generation. By 1920, however, the city had grown richer per capita than during its whaling days. The 1921–22 circular showed the school's leadership no longer sought to attract only the people without means, but also sought to build the city's cultural appreciation, in a manner appropriate for a city with immense wealth. Although the circular still described the tuition as "free," the word was no longer included in the school's formal name, the endowment was described as "limited," and fees between $5 and $25 were introduced, depending on whether students attended day, evening or Saturday morning classes. In the book 100 Boston Artists, the late painter and photographer Steven Trefonides (1926–2021) mentioned trading campus labor for classes at age 12, which could reflect that he did not meet the required admission age of 14 or could not afford the fees — or both. 
The circular described the school's expanded mission this way: The Swain School of Design is directing a limited endowment toward raising the standard of artistic knowledge, and appreciation in this community. It aims to give its pupils a knowledge of the fundamental principles of artistic design, a skill of hand and a facility of invention. It seeks to accent the relation of Art and Industry. Many of the Courses of Study are planned to practically teach the theory of design that the pupils may apply the principles of Art to the requirements of Trade and Manufacture. The school's revised approach to art study offered general studies in art and design, teacher training, arts and crafts, architecture, jewelry and metal, ceramics, painting and sketching. In addition to frequent exhibitions, an Art Club was created, with attached models' fees. After a year-long course, students were eligible to earn a certificate and, after three, a diploma. In 1925, the school built the William W. Crapo Gallery. On the main floor, it had a large central exhibition hall and student lounge. Programs included a series of lectures on art and important exhibitions of 19th and 20th century masterpieces, a yearly drawing show and surveys of the work of significant contemporary artists. An "Atelier Swain" had already been introduced by then. Modeled on the principles of instruction at the influential École des Beaux-Arts in Paris, its aim was to organize thirty-five competitions a year, which were showcased at the gallery. As it turned out, however, 1920 was the peak of New Bedford's textile boom. By the 1930s, too many northern cities had built too many mills, and work was migrating to the less expensive South. Wages plummeted, and labor unrest followed, as did another war. Fine arts (1950s–1960s) In the post-war 1950s and 1960s, the school took a turn toward the fine arts by hiring artists to staff new programs "in painting, printmaking, sculpture, graphic design and the Bachelor of Fine Arts curriculum." The new faculty, all professional artists with exhibition experience in Boston and New York City, helped make Swain "one of the finest small art schools in America," generating more than 40 percent of the graduate students at New York City's Parsons School of Design. By then, the abstract expressionists had made a name for New York City, and suddenly New York had replaced Paris as the influential global center of Western art. Boston, meanwhile, had a vibrant art scene of its own. Certificates, artisanry and fine arts degrees (1970s–1988) Starting in the mid-1960s, and extending until the early 1980s, civil rights marches, anti-war protests and steady inflation contributed to a growing mistrust of traditional authority and chronological history, which had a negative impact on Swain's fortunes. By the 1970s, Swain had only 100 students enrolled at any given time, despite graduating a dozen students with a BFA a year after the program was created. As the 1980s ushered in a pro-business pop culture era, Swain doubled down on its instruction in the skills of master craftsmen, creating undergraduate and graduate degrees in "ceramics, fiber, metal, and wood as part of a transfer agreement with Boston University's Program in Artisanry in 1985, the same year they introduced both a certificate program and a bachelor's degree program in Architectural Artisanry." 
In keeping with Swain's century-long approach to the applied arts and contextualized by a city with both an historic district and a central historic district, both of which are listed in the National Register of Historic Places, the focus was on providing a complex array of highly specialized artisanry, meant to serve the entire gamut of building trades: architectural restoration, rehabilitation or new construction. Thus, the curriculum included training for "ornamental plasterers; metal workers; decorative brick, stone and concrete masons; wood cabinet makers; ornamental carpenters; architectural ceramic artists." But only three years later, due to high costs and low enrollment, Swain shut its doors. In 1988, the school merged with Southeastern Massachusetts University's College of Visual and Performing Arts. Its archive is now part of the University of Massachusetts Dartmouth, within the College of Visual and Performing Arts and the Claire T. Carney Library. Notable Faculty Sigmund Abeles Jacqueline Block Jim Bobrick Nathaniel Cannon Smith Tom Corey Russell Daly Dick Dougherty Severin Haines Leo Kelly Nicolas Kilmer Ron Kowalke Ed Lazansky David Loeffler Smith Benjamin Martinez Alphonse Mattia John Osbourne Joyce Reopel Marc St. Pierre Robin Taffler Steven Trefonides Melvin Zabarsky Notable Alumni Dennis Broadbent Eliza (Lidie) Collins Meredith Wildes-Cornell William D'Elia Richard Dougherty Leonard Dufresne Severin (Sig) Haines John Hopkins Robert (Tex) Lavery Scattergood Moore Gallery (Note: Selections are from student-designed catalogues. Images in color are from 1986 to 1987, and images in black-and-white are from 1908 to 1909.) See also Catalogues through the Decades (Archive.org) Facebook Group—Swain School of Design (Facebook Group) News link—Rodman Mansion Tour (News link) Overview—Swain School of Design (Archive.org) Website—Alumni Gallery UMass Dartmouth—Swain School of Design References Free universities Educational institutions established in 1881 1881 establishments in Massachusetts Educational institutions disestablished in 1988 Defunct private universities and colleges in Massachusetts Cotton mills in the United States Training programs Textile mills Textile design Craft occupations Art schools in Massachusetts Decorative arts Art in Massachusetts University of Massachusetts Dartmouth
Swain School of Design
[ "Engineering" ]
2,617
[ "Textile design", "Design" ]
9,606,667
https://en.wikipedia.org/wiki/Pervious%20concrete
Pervious concrete (also called porous concrete, permeable concrete, no fines concrete and porous pavement) is a special type of concrete with a high porosity used for concrete flatwork applications that allows water from precipitation and other sources to pass directly through, thereby reducing the runoff from a site and allowing groundwater recharge. Pervious concrete is made using large aggregates with little to no fine aggregates. The concrete paste then coats the aggregates and allows water to pass through the concrete slab. Pervious concrete is traditionally used in parking areas, areas with light traffic, residential streets, pedestrian walkways, and greenhouses. It is an important application for sustainable construction and is one of many low impact development techniques used by builders to protect water quality. History Pervious concrete was first used in the 1800s in Europe as pavement surfacing and load bearing walls. Cost efficiency was the main motive due to a decreased amount of cement. It became popular again in the 1920s for two storey homes in Scotland and England. It became increasingly viable in Europe after WWII due to the scarcity of cement. It did not become as popular in the US until the 1970s. In India it became popular in 2000. Stormwater management The proper utilization of pervious concrete is a recognized Best Management Practice by the U.S. Environmental Protection Agency (EPA) for providing first flush pollution control and stormwater management. As regulations further limit stormwater runoff, it is becoming more expensive for property owners to develop real estate, due to the size and expense of the necessary drainage systems. Pervious concrete lowers the NRCS Runoff Curve Number or CN by retaining stormwater on site. This allows the planner/designer to achieve pre-development stormwater goals for pavement intense projects. Pervious concrete reduces the runoff from paved areas, which reduces the need for separate stormwater retention ponds and allows the use of smaller capacity storm sewers. This allows property owners to develop a larger area of available property at a lower cost. Pervious concrete also naturally filters storm water and can reduce pollutant loads entering into streams, ponds, and rivers. Pervious concrete functions like a storm water infiltration basin and allows the storm water to infiltrate the soil over a large area, thus facilitating recharge of precious groundwater supplies locally. All of these benefits lead to more effective land use. Pervious concrete can also reduce the impact of development on trees. A pervious concrete pavement allows the transfer of both water and air to root systems to help trees flourish even in highly developed areas. Properties Pervious concrete consists of cement, coarse aggregate (size should be 9.5 mm to 12.5 mm) and water with little to no fine aggregates. The addition of a small amount of sand will increase the strength. The mixture has a water-to-cement ratio of 0.28 to 0.40 with a void content of 15 to 25 percent. The correct quantity of water in the concrete is critical. A low water to cement ratio will increase the strength of the concrete, but too little water may cause surface failure. A proper water content gives the mixture a wet-metallic appearance. As this concrete is sensitive to water content, the mixture should be field checked. Entrained air may be measured by a Rapid Air system, where the concrete is stained black and sections are analyzed under a microscope. 
A common flatwork form has riser strips on top such that the screed is 3/8-1/2 inches (9 to 12 mm) above final pavement elevation. Mechanical screeds are preferable to manual. The riser strips are removed to guide compaction. Immediately after screeding, the concrete is compacted to improve the bond and smooth the surface. Excessive compaction of pervious concrete results in higher compressive strength, but lower porosity (and thus lower permeability). Jointing varies little from other concrete slabs. Joints are tooled with a rolling jointing tool prior to curing or saw cut after curing. Curing consists of covering concrete with 6 mil plastic sheeting within 20 minutes of concrete discharge. However, this contributes to a substantial amount of waste sent to landfills. Alternatively, preconditioned absorptive lightweight aggregate as well as internal curing admixture (ICA) have been used to effectively cure pervious concrete without waste generation. Testing and inspection Pervious concrete has a common strength of though strengths up to can be reached. There is no standardized test for compressive strength. Acceptance is based on the unit weight of a sample of poured concrete using ASTM standard no. C1688. An acceptable tolerance for the density is plus or minus of the design density. Slump and air content tests are not applicable to pervious concrete because of the unique composition. The designer of a storm water management plan should ensure that the pervious concrete is functioning properly through visual observation of its drainage characteristics prior to opening of the facility. Cold climates Concerns over the resistance to the freeze-thaw cycle have limited the use of pervious concrete in cold weather environments. The rate of freezing in most applications is dictated by the local climate. Entrained air may help protect the paste as it does in regular concrete. The addition of a small amount of fine aggregate to the mixture increases the durability of the pervious concrete. Avoiding saturation during the freeze cycle is the key to the longevity of the concrete. Related, having a well prepared 8 to 24 inch (200 to 600 mm) sub-base and a good drainage preventing water stagnation will reduce the possibility of freeze-thaw damage. Using permeable concrete for pavements can make them safer for pedestrians in the winter because water won't settle on the surface and freeze leading to dangerously icy conditions. Roads can also be made safer for cars by the use of permeable concrete as the reduction in the formation of standing water will reduce the possibility of aquaplaning, and porous roads will also reduce tire noise. Maintenance To prevent reduction in permeability, pervious concrete needs to be cleaned regularly. Cleaning can be accomplished through wetting the surface of the concrete and vacuum sweeping. See also References Further reading US EPA. Office of Research and Development. "Research Highlights: Porous Pavements: Managing Rainwater Runoff." October 17, 2008. External links National Pervious Concrete Pavement Association Pervious Concrete Design Resources American Concrete Institute Building materials Concrete Environmental engineering
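The mix-design numbers quoted above (a water-to-cement ratio of 0.28 to 0.40, a void content of 15 to 25 percent, and acceptance based on the unit weight of a poured sample) suggest a simple automated acceptance check. The sketch below is illustrative only: the function name is hypothetical, the numeric limits are just the ranges quoted in this article, and the density tolerance is passed in by the caller because the exact ASTM C1688 acceptance value is not reproduced here.

```python
# Illustrative check of a pervious concrete mix against the ranges quoted above.
# The unit-weight tolerance is supplied by the caller because the exact ASTM C1688
# acceptance value is not reproduced in this article.

def check_mix(water_cement_ratio, void_content_pct, measured_density,
              design_density, density_tolerance):
    """Return a list of warnings for a trial pervious concrete mix."""
    warnings = []
    if not 0.28 <= water_cement_ratio <= 0.40:
        warnings.append("water-to-cement ratio outside 0.28-0.40")
    if not 15 <= void_content_pct <= 25:
        warnings.append("void content outside 15-25 percent")
    if abs(measured_density - design_density) > density_tolerance:
        warnings.append("unit weight outside the accepted tolerance of the design density")
    return warnings

if __name__ == "__main__":
    # Hypothetical trial batch: densities in kg/m^3, tolerance chosen only for illustration.
    print(check_mix(0.32, 22, measured_density=1850,
                    design_density=1900, density_tolerance=80))
```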
Pervious concrete
[ "Physics", "Chemistry", "Engineering" ]
1,304
[ "Structural engineering", "Building engineering", "Chemical engineering", "Architecture", "Construction", "Materials", "Civil engineering", "Environmental engineering", "Concrete", "Matter", "Building materials" ]
9,606,881
https://en.wikipedia.org/wiki/68%E2%80%9395%E2%80%9399.7%20rule
In statistics, the 68–95–99.7 rule, also known as the empirical rule, and sometimes abbreviated 3sr, is a shorthand used to remember the percentage of values that lie within an interval estimate in a normal distribution: approximately 68%, 95%, and 99.7% of the values lie within one, two, and three standard deviations of the mean, respectively. In mathematical notation, these facts can be expressed as follows, where Pr is the probability function, X is an observation from a normally distributed random variable, μ (mu) is the mean of the distribution, and σ (sigma) is its standard deviation: Pr(μ − 1σ ≤ X ≤ μ + 1σ) ≈ 68.27%, Pr(μ − 2σ ≤ X ≤ μ + 2σ) ≈ 95.45%, and Pr(μ − 3σ ≤ X ≤ μ + 3σ) ≈ 99.73%. The usefulness of this heuristic especially depends on the question under consideration. In the empirical sciences, the so-called three-sigma rule of thumb (or 3σ rule) expresses a conventional heuristic that nearly all values are taken to lie within three standard deviations of the mean, and thus it is empirically useful to treat 99.7% probability as near certainty. In the social sciences, a result may be considered statistically significant if its confidence level is of the order of a two-sigma effect (95%), while in particle physics and astrophysics, there is a convention of requiring statistical significance of a five-sigma effect (99.99994% confidence) to qualify as a discovery. A weaker three-sigma rule can be derived from Chebyshev's inequality, stating that even for non-normally distributed variables, at least 88.8% of cases should fall within properly calculated three-sigma intervals. For unimodal distributions, the probability of being within the interval μ ± 3σ is at least 95% by the Vysochanskij–Petunin inequality. There may be certain assumptions for a distribution that force this probability to be at least 98%. Proof We have that Pr(μ − nσ ≤ X ≤ μ + nσ) = ∫ from μ − nσ to μ + nσ of (1/(σ√(2π))) exp(−(x − μ)²/(2σ²)) dx; doing the change of variable in terms of the standard score z = (x − μ)/σ, we have Pr(μ − nσ ≤ X ≤ μ + nσ) = (1/√(2π)) ∫ from −n to n of exp(−z²/2) dz, and this integral is independent of μ and σ. We only need to calculate the integral for the cases n = 1, 2, 3. Cumulative distribution function These numerical values "68%, 95%, 99.7%" come from the cumulative distribution function of the normal distribution. The prediction interval for any standard score z corresponds numerically to 2Φ(z) − 1, where Φ is the standard normal cumulative distribution function. For example, Φ(2) ≈ 0.9772, or Pr(X ≤ μ + 2σ) ≈ 0.9772, corresponding to a prediction interval of (−∞, μ + 2σ]. This is not a symmetrical interval – this is merely the probability that an observation is less than μ + 2σ. To compute the probability that an observation is within two standard deviations of the mean (small differences due to rounding): Pr(μ − 2σ ≤ X ≤ μ + 2σ) = Φ(2) − Φ(−2) ≈ 0.9772 − (1 − 0.9772) ≈ 0.9545. This is related to confidence interval as used in statistics: X̄ ± 2σ/√n is approximately a 95% confidence interval when X̄ is the average of a sample of size n. Normality tests The "68–95–99.7 rule" is often used to quickly get a rough probability estimate of something, given its standard deviation, if the population is assumed to be normal. It is also used as a simple test for outliers if the population is assumed normal, and as a normality test if the population is potentially not normal. To pass from a sample to a number of standard deviations, one first computes the deviation, either the error or residual depending on whether one knows the population mean or only estimates it. The next step is standardizing (dividing by the population standard deviation), if the population parameters are known, or studentizing (dividing by an estimate of the standard deviation), if the parameters are unknown and only estimated. To use as a test for outliers or a normality test, one computes the size of deviations in terms of standard deviations, and compares this to expected frequency. 
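The three headline percentages, and the two-standard-deviation computation above, follow directly from the normal cumulative distribution function Φ. A minimal sketch using only the Python standard library (via the identity Φ(z) = (1 + erf(z/√2))/2) is shown below; it assumes nothing beyond the formulas already quoted.

```python
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def within(n_sigma):
    """Probability that an observation lies within n_sigma standard deviations of the mean."""
    return phi(n_sigma) - phi(-n_sigma)

if __name__ == "__main__":
    for n in (1, 2, 3):
        print(f"within {n} sigma: {within(n):.4%}")   # ~68.27%, 95.45%, 99.73%
    print(f"Phi(2) = {phi(2):.4f}")                   # ~0.9772, the one-sided value
```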
Given a sample set, one can compute the studentized residuals and compare these to the expected frequency: points that fall more than 3 standard deviations from the norm are likely outliers (unless the sample size is significantly large, by which point one expects a sample this extreme), and if there are many points more than 3 standard deviations from the norm, one likely has reason to question the assumed normality of the distribution. This holds ever more strongly for moves of 4 or more standard deviations. One can compute more precisely, approximating the number of extreme moves of a given magnitude or greater by a Poisson distribution, but simply, if one has multiple 4 standard deviation moves in a sample of size 1,000, one has strong reason to consider these outliers or question the assumed normality of the distribution. For example, a 6σ event corresponds to a chance of about two parts per billion. For illustration, if events are taken to occur daily, this would correspond to an event expected every 1.4 million years. This gives a simple normality test: if one witnesses a 6σ in daily data and significantly fewer than 1 million years have passed, then a normal distribution most likely does not provide a good model for the magnitude or frequency of large deviations in this respect. In The Black Swan, Nassim Nicholas Taleb gives the example of risk models according to which the Black Monday crash would correspond to a 36-σ event: the occurrence of such an event should instantly suggest that the model is flawed, i.e. that the process under consideration is not satisfactorily modeled by a normal distribution. Refined models should then be considered, e.g. by the introduction of stochastic volatility. In such discussions it is important to be aware of the problem of the gambler's fallacy, which states that a single observation of a rare event does not contradict that the event is in fact rare. It is the observation of a plurality of purportedly rare events that increasingly undermines the hypothesis that they are rare, i.e. the validity of the assumed model. A proper modelling of this process of gradual loss of confidence in a hypothesis would involve the designation of prior probability not just to the hypothesis itself but to all possible alternative hypotheses. For this reason, statistical hypothesis testing works not so much by confirming a hypothesis considered to be likely, but by refuting hypotheses considered unlikely. Table of numerical values Because of the exponentially decreasing tails of the normal distribution, odds of higher deviations decrease very quickly. From the rules for normally distributed data for a daily event: See also p-value Standard score t-statistic References External links "Calculate percentage proportion within x sigmas at WolframAlpha Normal distribution Statistical approximations Rules of thumb pl:Odchylenie standardowe#Dla rozkładu normalnego
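The rough outlier and normality check described above, counting large deviations and comparing the count with what normality predicts, can be sketched in a few lines. The 4-standard-deviation threshold, the sample size of 1,000 and the synthetic data are arbitrary illustrative choices.

```python
import math
import random

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def excess_deviation_check(samples, threshold=4.0):
    """Compare observed vs expected counts of |z| > threshold, assuming normality."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / (n - 1)
    std = math.sqrt(var)
    observed = sum(1 for x in samples if abs(x - mean) > threshold * std)
    expected = n * 2.0 * (1.0 - phi(threshold))
    return observed, expected

if __name__ == "__main__":
    random.seed(0)
    data = [random.gauss(0.0, 1.0) for _ in range(1000)]
    obs, exp = excess_deviation_check(data)
    # Several 4-sigma moves in 1,000 samples would be surprising: only ~0.06 are expected.
    print(f"observed: {obs}, expected under normality: {exp:.3f}")
```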
68–95–99.7 rule
[ "Mathematics" ]
1,321
[ "Statistical approximations", "Mathematical relations", "Approximations" ]
9,607,393
https://en.wikipedia.org/wiki/Retention%20uniformity
Retention uniformity, or RU, is a concept in thin layer chromatography. It is designed for the quantitative measurement of equal-spreading of the spots on the chromatographic plate and is one of the Chromatographic response functions. Formula Retention uniformity is calculated from the following formula: RU = 1 − √( 6(n + 1) / (n(2n + 1)) · Σ_{i=1..n} ( RF(i) − i/(n + 1) )² ), where n is the number of compounds separated and RF(1...n) are the Retention factor values of the compounds sorted in non-descending order. Theoretical considerations The coefficient always lies in the range [0,1], where 0 indicates the worst case of separation (all RF values equal to 0 or 1) and a value of 1 indicates ideal equal-spreading of the spots, for example (0.25,0.5,0.75) for three solutes, or (0.2,0.4,0.6,0.8) for four solutes. This coefficient was proposed as an alternative to earlier approaches, such as D (separation response), Ip (performance index) or Sm (informational entropy). Besides its stable range, the advantage is a stable distribution as a random variable, regardless of compounds investigated. In contrast to the similar concept called Retention distance, RU is insensitive to RF values close to 0 or 1, or close to each other. If two values are not separated, it still indicates some uniformity of the chromatographic system. For example, the RF values (0,0.2,0.2,0.3) (two compounds not separated at 0.2 and one at the start) result in RU equal to 0.3609. See also Chromatographic response function References Komsta Ł., Markowski W., Misztal G., A proposal for new RF equal-spread criteria with stable distribution parameters as a random variable. J. Planar Chromatogr. 2007 (20) 27-37. Chromatography
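A short computation makes the behaviour of RU concrete. The sketch below implements the formula given above (a formulation that reproduces the worked value quoted in this article) with a hypothetical function name; run on the RF values (0, 0.2, 0.2, 0.3) it returns approximately 0.361, matching the quoted 0.3609, and it returns exactly 1 for ideally spread spots.

```python
import math

def retention_uniformity(rf):
    """Retention uniformity RU for a list of Rf values (sorted non-descending)."""
    rf = sorted(rf)
    n = len(rf)
    ideal = [(i + 1) / (n + 1) for i in range(n)]          # equally spread positions
    ss = sum((r - t) ** 2 for r, t in zip(rf, ideal))      # squared deviations
    return 1.0 - math.sqrt(6.0 * (n + 1) / (n * (2 * n + 1)) * ss)

if __name__ == "__main__":
    print(round(retention_uniformity([0, 0.2, 0.2, 0.3]), 4))   # 0.361, matching the 0.3609 quoted above
    print(retention_uniformity([0.25, 0.5, 0.75]))              # 1.0: ideal equal spreading
```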
Retention uniformity
[ "Chemistry" ]
395
[ "Chromatography", "Separation processes" ]
9,607,629
https://en.wikipedia.org/wiki/Retention%20distance
Retention distance, or RD, is a concept in thin layer chromatography, designed for quantitative measurement of equal-spreading of the spots on the chromatographic plate and one of the Chromatographic response functions. It is calculated from the following formula: RD = ( (n + 1)^(n+1) · Π_{i=0..n} ( RF(i+1) − RF(i) ) )^(1/n), where n is the number of compounds separated, RF(1...n) are the Retention factor values of the compounds sorted in non-descending order, RF(0) = 0 and RF(n+1) = 1. Theoretical considerations The coefficient always lies in the range [0,1], where 0 indicates the worst case of separation (all RF values equal to 0 or 1) and a value of 1 indicates ideal equal-spreading of the spots, for example (0.25,0.5,0.75) for three solutes, or (0.2,0.4,0.6,0.8) for four solutes. This coefficient was proposed as an alternative to earlier approaches, such as delta-Rf, delta-Rf product or MRF (Multispot Response Function). Besides its stable range, the advantage is a stable distribution as a random variable, regardless of compounds investigated. In contrast to the similar concept called Retention uniformity, RD is sensitive to RF values close to 0 or 1, or close to each other. If two values are not separated, it is equal to 0. For example, the RF values (0,0.2,0.2,0.3) (two compounds not separated at 0.2 and one at the start) result in RD equal to 0, but RU equal to 0.3609. When the spots keep some distance from 0 and from each other, the value is larger; for example, the RF values (0.1,0.2,0.25,0.3) give RD = 0.4835, RU = 0.4066. See also Chromatographic response function References Komsta Ł., Markowski W., Misztal G., A proposal for new RF equal-spread criteria with stable distribution parameters as a random variable. J. Planar Chromatogr. 2007 (20) 27-37. Chromatography
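The contrast with RU can be checked numerically. The sketch below implements the RD formula given above (a formulation that reproduces the worked values quoted in this article); the function name is arbitrary. It returns 0 when two spots coincide, approximately 0.4835 for the RF values (0.1, 0.2, 0.25, 0.3), and 1 for ideal equal spreading.

```python
def retention_distance(rf):
    """Retention distance RD for a list of Rf values (sorted non-descending)."""
    rf = sorted(rf)
    n = len(rf)
    spots = [0.0] + rf + [1.0]                        # add Rf0 = 0 and Rf(n+1) = 1
    gaps = [b - a for a, b in zip(spots, spots[1:])]  # n + 1 consecutive gaps
    product = 1.0
    for g in gaps:
        product *= g
    return ((n + 1) ** (n + 1) * product) ** (1.0 / n)

if __name__ == "__main__":
    print(retention_distance([0.1, 0.2, 0.25, 0.3]))  # ~0.4835
    print(retention_distance([0, 0.2, 0.2, 0.3]))     # 0.0: two spots coincide
    print(retention_distance([0.2, 0.4, 0.6, 0.8]))   # ~1.0: ideal equal spreading
```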
Retention distance
[ "Chemistry" ]
449
[ "Chromatography", "Separation processes", "Analytical chemistry stubs" ]
9,607,933
https://en.wikipedia.org/wiki/Handshaking%20lemma
In graph theory, the handshaking lemma is the statement that, in every finite undirected graph, the number of vertices that touch an odd number of edges is even. For example, if there is a party of people who shake hands, the number of people who shake an odd number of other people's hands is even. The handshaking lemma is a consequence of the degree sum formula, also sometimes called the handshaking lemma, according to which the sum of the degrees (the numbers of times each vertex is touched) equals twice the number of edges in the graph. Both results were proven by Leonhard Euler in his famous paper on the Seven Bridges of Königsberg that began the study of graph theory. Beyond the Seven Bridges of Königsberg Problem, which subsequently formalized Eulerian Tours, other applications of the degree sum formula include proofs of certain combinatorial structures. For example, in the proofs of Sperner's lemma and the mountain climbing problem the geometric properties of the formula commonly arise. The complexity class PPA encapsulates the difficulty of finding a second odd vertex, given one such vertex in a large implicitly-defined graph. Definitions and statement An undirected graph consists of a system of vertices and edges connecting unordered pairs of vertices. In any graph, the degree deg(v) of a vertex v is defined as the number of edges that have v as an endpoint. For graphs that are allowed to contain loops connecting a vertex to itself, a loop should be counted as contributing two units to the degree of its endpoint for the purposes of the handshaking lemma. Then, the handshaking lemma states that, in every finite graph, there must be an even number of vertices for which deg(v) is an odd number. The vertices of odd degree in a graph are sometimes called odd nodes (or odd vertices); in this terminology, the handshaking lemma can be rephrased as the statement that every graph has an even number of odd nodes. The degree sum formula states that Σ_{v ∈ V} deg(v) = 2|E|, where V is the set of nodes (or vertices) in the graph and E is the set of edges in the graph. That is, the sum of the vertex degrees equals twice the number of edges. In directed graphs, another form of the degree-sum formula states that the sum of in-degrees of all vertices, and the sum of out-degrees, both equal the number of edges. Here, the in-degree is the number of incoming edges, and the out-degree is the number of outgoing edges. A version of the degree sum formula also applies to finite families of sets or, equivalently, multigraphs: the sum of the degrees of the elements (where the degree equals the number of sets containing it) always equals the sum of the cardinalities of the sets. Both results also apply to any subgraph of the given graph and in particular to its connected components. A consequence is that, for any odd vertex, there must exist a path connecting it to another odd vertex. Applications Euler paths and tours Leonhard Euler first proved the handshaking lemma in his work on the Seven Bridges of Königsberg, asking for a walking tour of the city of Königsberg (now Kaliningrad) crossing each of its seven bridges once. This can be translated into graph-theoretic terms as asking for an Euler path or Euler tour of a connected graph representing the city and its bridges: a walk through the graph that traverses each edge once, either ending at a different vertex than it starts in the case of an Euler path or returning to its starting point in the case of an Euler tour. 
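Both the degree sum formula and the parity statement are easy to verify computationally for a concrete graph. The sketch below builds the Seven Bridges of Königsberg multigraph (four land masses, seven bridges) and checks that the degree sum is twice the number of edges and that the number of odd-degree vertices is even; here that number is four, which is why, as discussed next, no Euler path or tour exists.

```python
from collections import Counter

def degrees(vertices, edges):
    """Degree of each vertex in a multigraph given as a list of (u, v) edges."""
    deg = Counter({v: 0 for v in vertices})
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1          # a loop (u == v) contributes two units, as required
    return deg

if __name__ == "__main__":
    # Seven Bridges of Koenigsberg: land masses A, B, C, D and seven bridges.
    vertices = ["A", "B", "C", "D"]
    edges = [("A", "B"), ("A", "B"), ("A", "C"), ("A", "C"),
             ("A", "D"), ("B", "D"), ("C", "D")]
    deg = degrees(vertices, edges)
    assert sum(deg.values()) == 2 * len(edges)          # degree sum formula
    odd = [v for v in vertices if deg[v] % 2 == 1]
    assert len(odd) % 2 == 0                            # handshaking lemma
    print(dict(deg), "odd vertices:", odd)              # all four vertices are odd
```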
Euler stated the fundamental results for this problem in terms of the number of odd vertices in the graph, which the handshaking lemma restricts to be an even number. If this number is zero, an Euler tour exists, and if it is two, an Euler path exists. Otherwise, the problem cannot be solved. In the case of the Seven Bridges of Königsberg, the graph representing the problem has four odd vertices, and has neither an Euler path nor an Euler tour. It was therefore impossible to tour all seven bridges in Königsberg without repeating a bridge. In the Christofides–Serdyukov algorithm for approximating the traveling salesperson problem, the geometric implications of the degree sum formula plays a vital role, allowing the algorithm to connect vertices in pairs in order to construct a graph on which an Euler tour forms an approximate TSP tour. Combinatorial enumeration Several combinatorial structures may be shown to be even in number by relating them to the odd vertices in an appropriate "exchange graph". For instance, as C. A. B. Smith proved, in any cubic graph there must be an even number of Hamiltonian cycles through any fixed edge ; these are cycles that pass through each vertex exactly once. used a proof based on the handshaking lemma to extend this result to graphs in which all vertices have odd degree. Thomason defines an exchange graph the vertices of which are in one-to-one correspondence with the Hamiltonian paths in beginning at and continuing through edge . Two such paths and are defined as being connected by an edge in if one may obtain by adding a new edge to the end of and removing another edge from the middle of . This operation is reversible, forming a symmetric relation, so is an undirected graph. If path ends at vertex then the vertex corresponding to in has degree equal to the number of ways that may be extended by an edge that does not connect back to ; that is, the degree of this vertex in is either (an even number) if does not form part of a Hamiltonian cycle through or (an odd number) if is part of a Hamiltonian cycle through . Since has an even number of odd vertices, must have an even number of Hamiltonian cycles through . Other applications The handshaking lemma (or degree sum formula) are also used in proofs of several other results in mathematics. These include the following: Sperner's lemma states that, if a big triangle is subdivided into smaller triangles meeting edge-to-edge, and the vertices are labeled with three colors so that only two of the colors are used along each edge of the big triangle, then at least one of the smaller triangles has vertices of all three colors; it has applications in fixed-point theorems, root-finding algorithms, and fair division. One proof of this lemma forms an exchange graph whose vertices are the triangles (both small and large) and whose edges connect pairs of triangles that share two vertices of some particular two colors. The big triangle necessarily has odd degree in this exchange graph, as does a small triangle with all three colors, but not the other small triangles. By the handshaking lemma, there must be an odd number of small triangles with all three colors, and therefore at least one such triangle must exist. 
The mountain climbing problem states that, for sufficiently well-behaved functions on a unit interval, with equal values at the ends of the interval, it is possible to coordinate the motion of two points, starting from opposite ends of the interval, so that they meet somewhere in the middle while remaining at points of equal value throughout the motion. One proof of this involves approximating the function by a piecewise linear function with the same extreme points, parameterizing the position of the two moving points by the coordinates of a single point in the unit square, and showing that the available positions for the two points form a finite graph, embedded in this square, with only the starting position and its reversal as odd vertices. By the handshaking lemma, these two positions belong to the same connected component of the graph, and a path from one to the other necessarily passes through the desired meeting point. The reconstruction conjecture concerns the problem of uniquely determining the structure of a graph from the multiset of subgraphs formed by removing a single vertex from it. Given this information, the degree-sum formula can be used to recover the number of edges in the given graph and the degrees of each vertex. From this, it is possible to determine whether the given graph is a regular graph, and if so to determine it uniquely from any vertex-deleted subgraph by adding a new neighbor for all the subgraph vertices of too-low degree. Therefore, all regular graphs can be reconstructed. The game of Hex is played by two players, who place pieces of their color on a tiling of a parallelogram-shaped board by hexagons until one player has a connected path of adjacent pieces from one side of the board to the other. It can never end in a draw: by the time the board has been completely filled with pieces, one of the players will have formed a winning path. One proof of this forms a graph from a filled game board, with vertices at the corners of the hexagons, and with edges on sides of hexagons that separate the two players' colors. This graph has four odd vertices at the corners of the board, and even vertices elsewhere, so it must contain a path connecting two corners, which necessarily has a winning path for one player on one of its sides. Proof Euler's proof of the degree sum formula uses the technique of double counting: he counts the number of incident pairs where is an edge and vertex is one of its endpoints, in two different ways. Vertex belongs to pairs, where (the degree of ) is the number of edges incident to it. Therefore, the number of incident pairs is the sum of the degrees. However, each edge in the graph belongs to exactly two incident pairs, one for each of its endpoints; therefore, the number of incident pairs is . Since these two formulas count the same set of objects, they must have equal values. The same proof can be interpreted as summing the entries of the incidence matrix of the graph in two ways, by rows to get the sum of degrees and by columns to get twice the number of edges. For graphs, the handshaking lemma follows as a corollary of the degree sum formula. In a sum of integers, the parity of the sum is not affected by the even terms in the sum; the overall sum is even when there is an even number of odd terms, and odd when there is an odd number of odd terms. Since one side of the degree sum formula is the even number the sum on the other side must have an even number of odd terms; that is, there must be an even number of odd-degree vertices. 
Alternatively, it is possible to use mathematical induction to prove the degree sum formula, or to prove directly that the number of odd-degree vertices is even, by removing one edge at a time from a given graph and using a case analysis on the degrees of its endpoints to determine the effect of this removal on the parity of the number of odd-degree vertices. In special classes of graphs Regular graphs The degree sum formula implies that every d-regular graph with n vertices has nd/2 edges. Because the number of edges must be an integer, it follows that when d is odd the number of vertices must be even. Additionally, for odd values of d the number of edges must be divisible by d. Bipartite and biregular graphs A bipartite graph has its vertices split into two subsets, with each edge having one endpoint in each subset. It follows from the same double counting argument that, in each subset, the sum of degrees equals the number of edges in the graph. In particular, both subsets have equal degree sums. For biregular graphs, with a partition of the vertices into subsets V1 and V2 and with every vertex in subset Vi having degree di, it must be the case that |V1|·d1 = |V2|·d2; both equal the number of edges. Infinite graphs The handshaking lemma does not apply in its usual form to infinite graphs, even when they have only a finite number of odd-degree vertices. For instance, an infinite path graph with one endpoint has only a single odd-degree vertex rather than having an even number of such vertices. However, it is possible to formulate a version of the handshaking lemma using the concept of an end, an equivalence class of semi-infinite paths ("rays") considering two rays as equivalent when there exists a third ray that uses infinitely many vertices from each of them. The degree of an end is the maximum number of edge-disjoint rays that it contains, and an end is odd if its degree is finite and odd. More generally, it is possible to define an end as being odd or even, regardless of whether it has infinite degree, in graphs for which all vertices have finite degree. Then, in such graphs, the number of odd vertices and odd ends, added together, is either even or infinite. Subgraphs By a theorem of Gallai the vertices of any graph can be partitioned as V = V_E ∪ V_O, where the subgraph induced by V_E has all degrees even and the subgraph induced by V_O has all degrees odd. Here, |V_O| must be even by the handshaking lemma. It is also possible to find even-degree and odd-degree induced subgraphs with many vertices. An induced subgraph of even degree can be found with at least half of the vertices, and an induced subgraph of odd degree (in a graph with no isolated vertices) can be found with a constant fraction of the vertices. Computational complexity In connection with the exchange graph method for proving the existence of combinatorial structures, it is of interest to ask how efficiently these structures may be found. For instance, suppose one is given as input a Hamiltonian cycle in a cubic graph; it follows from Smith's theorem that there exists a second cycle. How quickly can this second cycle be found? Christos Papadimitriou investigated the computational complexity of questions such as this, or more generally of finding a second odd-degree vertex when one is given a single odd vertex in a large implicitly-defined graph. He defined the complexity class PPA to encapsulate problems such as this one; a closely related class defined on directed graphs, PPAD, has attracted significant attention in algorithmic game theory because computing a Nash equilibrium is computationally equivalent to the hardest problems in this class. 
Computational problems proven to be complete for the complexity class PPA include computational tasks related to Sperner's lemma and to fair subdivision of resources according to the Hobby–Rice theorem. Notes Lemmas in graph theory
Handshaking lemma
[ "Mathematics" ]
2,956
[ "Lemmas", "Lemmas in graph theory" ]
9,608,295
https://en.wikipedia.org/wiki/Reed%E2%80%93Muller%20expansion
In Boolean logic, a Reed–Muller expansion (or Davio expansion) is a decomposition of a Boolean function. For a Boolean function f(x1, ..., xn) we call f_{xi} = f(x1, ..., x(i−1), 1, x(i+1), ..., xn) and f_{¬xi} = f(x1, ..., x(i−1), 0, x(i+1), ..., xn) the positive and negative cofactors of f with respect to xi, and ∂f/∂xi = f_{xi} ⊕ f_{¬xi} the boolean derivation of f with respect to xi, where ⊕ denotes the XOR operator. Then we have for the Reed–Muller or positive Davio expansion: f = f_{¬xi} ⊕ xi · ∂f/∂xi. Description This equation is written in a way that it resembles a Taylor expansion of f about xi = 0. There is a similar decomposition corresponding to an expansion about xi = 1 (negative Davio expansion): f = f_{xi} ⊕ ¬xi · ∂f/∂xi. Repeated application of the Reed–Muller expansion results in an XOR polynomial in x1, ..., xn. This representation is unique and sometimes also called Reed–Muller expansion. E.g. for n = 2 the result would be f(x1, x2) = a0 ⊕ a1·x1 ⊕ a2·x2 ⊕ a12·x1x2, where a0 = f(0,0), a1 = f(0,0) ⊕ f(1,0), a2 = f(0,0) ⊕ f(0,1) and a12 = f(0,0) ⊕ f(1,0) ⊕ f(0,1) ⊕ f(1,1). For n = 3 the result would be the analogous eight-term XOR polynomial in x1, x2, x3, where each coefficient is the XOR of the values of f over the corresponding subcube of inputs. Geometric interpretation The case n = 3 can be given a cubical geometric interpretation (or a graph-theoretic interpretation) as follows: when moving along the edge from (0,0,0) to (1,0,0), XOR up the functions of the two end-vertices of the edge in order to obtain the coefficient of x1. To move from (0,0,0) to (1,1,0) there are two shortest paths: one is a two-edge path passing through (1,0,0) and the other one a two-edge path passing through (0,1,0). These two paths encompass four vertices of a square, and XORing up the functions of these four vertices yields the coefficient of x1x2. Finally, to move from (0,0,0) to (1,1,1) there are six shortest paths which are three-edge paths, and these six paths encompass all the vertices of the cube, therefore the coefficient of x1x2x3 can be obtained by XORing up the functions of all eight of the vertices. (The other, unmentioned coefficients can be obtained by symmetry.) Paths The shortest paths all involve monotonic changes to the values of the variables, whereas non-shortest paths all involve non-monotonic changes of such variables; or, to put it another way, the shortest paths all have lengths equal to the Hamming distance between the starting and destination vertices. This means that it should be easy to generalize an algorithm for obtaining coefficients from a truth table by XORing up values of the function from appropriate rows of a truth table, even for hyperdimensional cases (n = 4 and above). Between the starting and destination rows of a truth table, some variables have their values remaining fixed: find all the rows of the truth table such that those variables likewise remain fixed at those given values, then XOR up their functions and the result should be the coefficient for the monomial corresponding to the destination row. (In such monomial, include any variable whose value is 1 (at that row) and exclude any variable whose value is 0 (at that row), instead of including the negation of the variable whose value is 0, as in the minterm style.) Similar to binary decision diagrams (BDDs), where nodes represent Shannon expansion with respect to the according variable, we can define a decision diagram based on the Reed–Muller expansion. These decision diagrams are called functional BDDs (FBDDs). Derivations The Reed–Muller expansion can be derived from the XOR-form of the Shannon decomposition, using the identity ¬xi = 1 ⊕ xi: f = xi · f_{xi} ⊕ ¬xi · f_{¬xi} = xi · f_{xi} ⊕ (1 ⊕ xi) · f_{¬xi} = f_{¬xi} ⊕ xi · (f_{xi} ⊕ f_{¬xi}) = f_{¬xi} ⊕ xi · ∂f/∂xi. The expansion about xi = 1 and the second-order boolean derivative are derived in the same way, by repeated application of the definitions above. See also Algebraic normal form (ANF) Ring sum normal form (RSNF) Zhegalkin polynomial Karnaugh map Irving Stoy Reed David Eugene Muller Reed–Muller code References Further reading (188 pages) Boolean algebra
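The positive Davio identity can be verified exhaustively for small functions. The sketch below represents a Boolean function as a dictionary over input tuples and checks, for every variable and every input, that f equals the negative cofactor XORed with the variable times the boolean derivation; the majority-of-three example function is an arbitrary illustration choice.

```python
from itertools import product

def cofactors(f, n, i):
    """Positive and negative cofactors of f (a dict over n-bit tuples) w.r.t. variable i."""
    pos = {x: f[x[:i] + (1,) + x[i + 1:]] for x in f}
    neg = {x: f[x[:i] + (0,) + x[i + 1:]] for x in f}
    return pos, neg

def check_davio(f, n):
    """Verify the positive Davio (Reed-Muller) expansion on every variable and input."""
    for i in range(n):
        pos, neg = cofactors(f, n, i)
        for x in f:
            derivative = pos[x] ^ neg[x]                 # boolean derivation w.r.t. x_i
            assert f[x] == neg[x] ^ (x[i] & derivative)  # f = f_neg XOR x_i * df/dx_i
    return True

if __name__ == "__main__":
    n = 3
    # Arbitrary example: majority of three inputs.
    f = {x: int(sum(x) >= 2) for x in product((0, 1), repeat=n)}
    print(check_davio(f, n))   # True: the expansion holds for every variable
```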
Reed–Muller expansion
[ "Mathematics" ]
704
[ "Boolean algebra", "Fields of abstract algebra", "Mathematical logic" ]
9,608,854
https://en.wikipedia.org/wiki/Sim%20scanner
Sim Scanner is a feature of the Olympus FluoView FV1000 confocal laser scanning microscope. The system incorporates two laser scanners, one for confocal imaging and the other for simultaneous stimulation. They can be illuminated separately and independently, making it possible to stimulate the specimen during observation. As a result, the rapid cell reactions that occur right after laser stimulation can be captured, making the Sim Scanner suitable for such applications as Fluorescence recovery after photobleaching (FRAP), Fluorescence loss in photobleaching (FLIP), photoactivation and photoconversion. References Sim Scanner in Nature Methods Microscopes Microscopy
Sim scanner
[ "Chemistry", "Technology", "Engineering" ]
133
[ "Microscopes", "Measuring instruments", "Microscopy" ]
9,608,937
https://en.wikipedia.org/wiki/Fluorescence%20loss%20in%20photobleaching
Fluorescence Loss in Photobleaching (FLIP) is a fluorescence microscopy technique used to examine movement of molecules inside cells and membranes. A cell membrane is typically labeled with a fluorescent dye to allow for observation. A specific area of this labeled section is then bleached several times using the beam of a confocal laser scanning microscope. After each imaging scan, bleaching occurs again. This occurs several times, to ensure that all accessible fluorophores are bleached since unbleached fluorophores are exchanged for bleached fluorophores, causing movement through the cell membrane. The amount of fluorescence from that region is then measured over a period of time to determine the results of the photobleaching on the cell as a whole. Experimental Setup Before photobleaching can occur, cells must be injected with a fluorescent protein, often a green fluorescent protein (GFP), which will allow the targeted proteins to fluoresce and therefore be followed throughout the process. Then, a region of interest must be defined. This initial region of interest usually contains the whole cell or several cells. In FLIP, photobleaching occurs just outside the region of interest; therefore a photobleaching region also needs to be defined. A third region, the region where measurement will take place, needs to be determined as well. A number of initial scans need to be made to determine fluorescence before photobleaching. These scans will serve as the control scans, to which the photobleached scans will be compared later on. Photobleaching can then occur. Between each bleach pulse, it is necessary to allow time for recovery of fluorescent material. It is also important to take several scans of the region of interest immediately after each bleach pulse for further study. The change in fluorescence at the region of interest can then be quantified in one of three ways. The most common is to choose the location, size and number of the regions of interest based on visual inspection of the image sets. The two other, rather new but more reliable approaches are either by detecting areas of different probe mobility on an individual image basis or by physical modeling of fluorescence loss from moving bodies. Loss of fluorescence is defined by the mobile fraction, or the fraction of fluorophores capable of recovering into a photobleached area, of the fluorescently labeled protein. Incomplete loss of fluorescence indicates that there are fluorophores that do not move or travel to the bleached area. This allows for definition of the immobile fraction, or the fraction of fluorophores incapable of recovering into a photobleached area, of fluorescent-labeled proteins. Immobility indicates that there are proteins that may be in compartments closed off from the rest of the cell, preventing them from being affected by the repeated photobleaching. Applications Verifying Continuity of Membranous Organelles The primary use of FLIP is to determine the continuity of membranous organelles. This continuity or lack thereof is determined by observing the amount of fluorescence in the region of interest. If there is a complete loss of fluorescence, this indicates that the organelles are continuous. However, if there is incomplete loss of fluorescence, then there is not continuity between the organelles. Instead, these organelles are compartmentalized and therefore closed off to the transfer of any photobleached fluorophores. 
Continuity of the Golgi apparatus, endoplasmic reticulum, and nucleus have been verified using FLIP. Exchange Rate Between the Nucleus and Cytoplasm Two of the other, less frequently employed uses of FLIP are to determine how proteins are shuttled from the Cytoplasm to the Nucleus and then determine the rate at which this shuttling occurs. To determine what portions are involved and when in the shuttling process they are involved, continuous scans are observed. The sooner a part of the cytoplasm is used in the shuttling process, the more rapidly it experiences complete loss of fluorescence. The resulting image of this process should be a completely photobleached cytoplasm. If the cell also participates in nuclear export from the nucleus to the cytoplasm, photobleaching will also occur in the nucleus. The exchange rate between the nucleus and cytoplasm can also be determined from this type of data. In these instances, the region of photobleaching is within the nucleus. If shuttling occurs at a rapid pace, the fluorescence levels within the nuclear compartments will decrease rapidly through the frames taken. However, if shuttling occurs slowly, the fluorescence levels will remain unaffected or decrease only slightly. FLIP vs. FRAP FLIP is often used and is closely associated with Fluorescence recovery after photobleaching (FRAP). The major difference between these two microscopy techniques is that FRAP involves the study of a cell’s ability to recover after a single photobleaching event whereas FLIP involves the study of how the loss of fluorescence spreads throughout the cell after multiple photobleaching events. This difference in purpose also leads to a difference in what parts of the cell are observed. In FRAP, the area that is actually photobleached is the area of interest. Conversely, in FLIP, the region of interest is just outside the region that is being photobleached. Another important difference is that in FRAP, there is a single photobleaching event and a recovery period to observe how well fluorophores move back to the bleached site. However, in FLIP, multiple photobleaching events occur to prevent the return of unbleached fluorophores to the bleaching region. Like FLIP, FRAP is used in the study of continuity of membranous organelles. FLIP and FRAP are often used together to determine the mobility of GFP-tagged proteins. FLIP can also be used to measure the molecular transfer between regions of a cell regardless of the rate of movement. This allows for a more comprehensive analysis of protein trafficking within a cell. This differs from FRAP which is primarily useful for determining mobility of proteins in regions local to the photobleaching only. Potential Complications There are several complications that are involved with both FLIP and FRAP. Since both forms of microscopy examine living cells, there is always the possibility that the cells will move, causing false results. The best way to avoid this is to use an alignment algorithm, which will compensate for any movement and eliminate a large portion of the error due to movement. In these experiments, it is also key to have a control group in order to adjust the results and correct the recovery curve for the overall loss in fluorescence. Another way to minimize error is to keep photobleaching contained to a single region or area. This limitation will serve as a control and limit fluorescence loss due to photo-damage as compared to fluorescence loss due to the photobleaching. 
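In practice the quantification step described above reduces to reading mobile and immobile fractions off a corrected intensity trace for the measured region. The sketch below shows one simple way to express that calculation; the function name, the plateau-averaging choice and the example numbers are assumptions for illustration, not part of a published FLIP protocol, and a real analysis would also need the background subtraction and control correction discussed above.

```python
def flip_fractions(intensities, plateau_points=3):
    """Mobile/immobile fractions from a FLIP intensity trace of a measured region.

    intensities: background-corrected mean intensities over time, first value pre-bleach.
    plateau_points: how many final time points to average as the post-bleach plateau.
    """
    initial = intensities[0]
    plateau = sum(intensities[-plateau_points:]) / plateau_points
    immobile = plateau / initial          # fraction of signal never lost
    mobile = 1.0 - immobile               # fraction exchanged with the bleached region
    return mobile, immobile

if __name__ == "__main__":
    # Hypothetical normalized trace: fluorescence decays toward a plateau near 0.25.
    trace = [1.00, 0.80, 0.62, 0.50, 0.40, 0.33, 0.29, 0.27, 0.26, 0.25]
    print(flip_fractions(trace))          # roughly (0.74, 0.26)
```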
See also Fluorescence recovery after photobleaching Fluorescence Microscope Photobleaching References Fluorescence techniques
Fluorescence loss in photobleaching
[ "Biology" ]
1,446
[ "Fluorescence techniques" ]
9,609,051
https://en.wikipedia.org/wiki/Thiamine%20triphosphate
Thiamine triphosphate (ThTP) is a biomolecule found in most organisms including bacteria, fungi, plants and animals. Chemically, it is the triphosphate derivative of the vitamin thiamine. Function It has been proposed that ThTP has a specific role in nerve excitability, but this has never been confirmed and recent results suggest that ThTP probably plays a role in cell energy metabolism. Low or absent levels of thiamine triphosphate have been found in Leigh's disease. In E. coli, ThTP is accumulated in the presence of glucose during amino acid starvation. On the other hand, suppression of the carbon source leads to the accumulation, of adenosine thiamine triphosphate (AThTP). Metabolism It has been shown that in brain ThTP is synthesized in mitochondria by a chemiosmotic mechanism, perhaps similar to ATP synthase. In mammals, ThTP is hydrolyzed to thiamine pyrophosphate (ThDP) by a specific thiamine-triphosphatase. It can also be converted into ThDP by thiamine-diphosphate kinase. History Thiamine triphosphate (ThTP) was chemically synthesized in 1948 at a time when the only organic triphosphate known was ATP. The first claim of the existence of ThTP in living organisms was made in rat liver, followed by baker’s yeast. Its presence was later confirmed in rat tissues and in plants germs, but not in seeds, where thiamine was essentially unphosphorylated. In all those studies, ThTP was separated from other thiamine derivatives using a paper chromatographic method, followed by oxidation in fluorescent thiochrome compounds with ferricyanide in alkaline solution. This method is at best semi-quantitative, and the development of liquid chromatographic methods suggested that ThTP represents far less than 10% of total thiamine in animal tissues. References Biomolecules Organophosphates Thiazoles Thiamine Pyrimidines Phosphate esters
Thiamine triphosphate
[ "Chemistry", "Biology" ]
441
[ "Natural products", "Organic compounds", "Structural biology", "Biomolecules", "Biochemistry", "Molecular biology" ]
9,609,382
https://en.wikipedia.org/wiki/Michael%20Kent%20%28computer%20specialist%29
Michael Kent was one of two founders of the Computer Group, which used statistics-based sports betting models to predict the outcomes of college football games. The group reportedly made millions each season. According to figures compiled at the time by Michael Kent, the Computer Group in 1983-84 earned almost $5 million from wagers on college and, occasionally, NFL games. Yet Michael Kent suspects that his records are incomplete. They do not account for personal bets made by Dr. Mindlin, Billy Walters and Glen Walker, or by the dozens of other associates who had access to the Computer Group's information. By the time everyone had exhausted Kent's forecasts in the 1983-84 sports year, the group was estimated to have earned $10 to $15 million. Kent invented the statistical models. He was 34 when he created the first successful program for handicapping basketball and football games: together with his brother, Kent collected statistical data about every team and fed it into his computer to update the program. The story was first reported nationally in the March 1986 issue of Sports Illustrated. References External links Keyboard Cappers: A sports-betting history lesson, with a nod to the computer and the trailblazers who saw the future Gambling – The Story of the Computer Group Living people Year of birth missing (living people)
Michael Kent (computer specialist)
[ "Technology" ]
264
[ "Computing stubs", "Computer specialist stubs" ]
9,609,729
https://en.wikipedia.org/wiki/Precipitin
A precipitin is an antibody which can precipitate out of a solution upon antigen binding. Precipitin reaction The precipitin reaction provided the first quantitative assay for antibody. The precipitin reaction is based upon the interaction of antigen with antibody leading to the production of antigen-antibody complexes. To produce a precipitin reaction, varying amounts of soluble antigen are added to a fixed amount of serum containing antibody. As the amount of antigen added increases, the reaction passes through three zones: In the zone of antibody excess, each molecule of antigen is bound extensively by antibody rather than crosslinked to other molecules of antigen. The average size of antibody-antigen complexes is small; cross-linking between antigen molecules by antibody is rare. In the zone of equivalence, the formation of precipitin complexes is optimal. Extensive lattices of antigen and antibody are formed by cross-linking. At high concentrations of antigen, the average size of antibody-antigen complexes is once again small because few antibody molecules are available to cross-link antigen molecules together. The small, soluble immune complexes formed in vivo in the zone of antigen excess can cause a variety of pathological syndromes. Antibody can only precipitate antigenic substrates that are multivalent—that is, only antigens that have multiple antibody-binding sites (epitopes). This allows for the formation of large antigen:antibody complexes. Medical diagnosis using precipitin tests Infectious disease diagnosis Precipitin assays are commonly used in the diagnosis of infectious diseases caused by bacteria, viruses, fungi, and parasites. By detecting the presence of pathogen-specific antigens in patient samples, healthcare professionals can identify the causative agent of an infection and initiate appropriate treatment. For example, precipitin tests can be used to detect antigens of infectious bronchitis caused by the infectious bronchitis virus (IBV). Allergy testing Precipitin assays are used in allergy testing to identify allergen-specific antibodies (IgE) in patient serum samples. By exposing the serum to a panel of common allergens, such as pollen, dust mites, pet dander, and food proteins, healthcare professionals can determine the specific allergens triggering an individual's allergic reactions. References External links Biochemistry detection reactions Immune system
Precipitin
[ "Chemistry", "Biology" ]
466
[ "Immune system", "Biochemistry detection reactions", "Biochemical reactions", "Organ systems", "Microbiology techniques" ]
9,610,271
https://en.wikipedia.org/wiki/UCL%20Department%20of%20Science%20and%20Technology%20Studies
The UCL Department of Science and Technology Studies (STS) is an academic department in University College London, London, England. It is part of UCL's Faculty of Mathematics and Physical Sciences. The department offers academic training at both undergraduate and graduate (MSc and MPhil/PhD) levels. The department received its current name in 1995. It had been the "Department of History and Philosophy of Science" from 1938 to 1995, and the "Department of History and Method of Science" from 1921 to 1938. University College London was the first UK university to offer single honours undergraduate degrees in this interdisciplinary subject, launching its BSc in history and philosophy of science in 1993. Two related BSc degrees followed shortly thereafter. At UCL, science and technology studies (abbreviated "STS") includes three specialist research clusters: "history of science," "philosophy of science," and "science, culture, and democracy". In 2022 STS accepted its first cohort for an MSc in Science Communication. The department offices are located on UCL's campus in Gordon Square, Bloomsbury, London. References External links UCL Department of Science and Technology Studies website Educational institutions established in 1994 Science and Technology Studies History of science and technology in England Science and technology studies Science and technology in London 1994 establishments in England
UCL Department of Science and Technology Studies
[ "Technology" ]
263
[ "Science and technology studies" ]
9,610,679
https://en.wikipedia.org/wiki/Morse%E2%80%93Palais%20lemma
In mathematics, the Morse–Palais lemma is a result in the calculus of variations and theory of Hilbert spaces. Roughly speaking, it states that a smooth enough function near a critical point can be expressed as a quadratic form after a suitable change of coordinates. The Morse–Palais lemma was originally proved in the finite-dimensional case by the American mathematician Marston Morse, using the Gram–Schmidt orthogonalization process. This result plays a crucial role in Morse theory. The generalization to Hilbert spaces is due to Richard Palais and Stephen Smale. Statement of the lemma Let $(H, \langle \cdot, \cdot \rangle)$ be a real Hilbert space, and let $U$ be an open neighbourhood of the origin in $H$. Let $f : U \to \mathbb{R}$ be a $(k+2)$-times continuously differentiable function with $k \geq 1$, that is, $f \in C^{k+2}(U; \mathbb{R})$. Assume that $f(0) = 0$ and that $0$ is a non-degenerate critical point of $f$, that is, the second derivative $D^{2} f(0)$ defines an isomorphism of $H$ with its continuous dual space $H^{*}$ by $x \mapsto D^{2} f(0)(x, \cdot)$. Then there exists a subneighbourhood $V$ of $0$ in $U$, a diffeomorphism $\varphi : V \to V$ that is $C^{k}$ with $C^{k}$ inverse, and an invertible symmetric operator $A : H \to H$, such that $f(x) = \langle A \varphi(x), \varphi(x) \rangle$ for all $x \in V$. Corollary Let $f : U \to \mathbb{R}$ be $C^{k+2}$ such that $0$ is a non-degenerate critical point. Then there exists a $C^{k}$-with-$C^{k}$-inverse diffeomorphism $\psi : V \to V$ and an orthogonal decomposition $H = G \oplus G^{\perp}$, such that, if one writes $\psi(x) = u + v$ with $u \in G$ and $v \in G^{\perp}$, then $f(x) = \| u \|^{2} - \| v \|^{2}$ for all $x \in V$. See also References Calculus of variations Hilbert spaces Lemmas in analysis
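A brief finite-dimensional illustration may be helpful; the particular function below is chosen only for this sketch and is not taken from the article. In two dimensions the lemma reduces to the classical Morse lemma, and the change of coordinates can be written out explicitly:
\[
  f(x, y) = x^{2} + 4xy + y^{2}, \qquad
  D^{2} f(0) = \begin{pmatrix} 2 & 4 \\ 4 & 2 \end{pmatrix}.
\]
The Hessian has eigenvalues $6$ and $-2$, so the origin is a non-degenerate critical point. In the rotated coordinates $u = (x + y)/\sqrt{2}$ and $v = (x - y)/\sqrt{2}$ one finds
\[
  f = 3u^{2} - v^{2},
\]
and the further rescaling $\tilde{u} = \sqrt{3}\, u$, $\tilde{v} = v$ (a smooth diffeomorphism with smooth inverse) gives
\[
  f = \tilde{u}^{2} - \tilde{v}^{2},
\]
which is the normal form $\| u \|^{2} - \| v \|^{2}$ of the corollary, with $G$ the eigenspace of the positive eigenvalue.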
Morse–Palais lemma
[ "Physics", "Mathematics" ]
264
[ "Theorems in mathematical analysis", "Quantum mechanics", "Lemmas in mathematical analysis", "Hilbert spaces", "Lemmas" ]
16,343,705
https://en.wikipedia.org/wiki/List%20of%20vehicle%20speed%20records
The following is a list of speed records for various types of vehicles. This list only presents the single greatest speed achieved in each broad record category; for more information on records under variations of test conditions, see the specific article for each record category. As with many world records, there may be some dispute over the criteria for a record-setting event, the authority of the organization certifying the record, and the actual speed achieved. Land vehicles Rail vehicles Aircraft Aircraft speed records are based on true airspeed, rather than ground speed. Noted unofficial records Watercraft Spacecraft In order to unambiguously express the speed of a spacecraft, a frame of reference must be specified. Typically, this frame is fixed to the body with the greatest gravitational influence on the spacecraft, as this is the most relevant frame for most purposes. Velocities in different frames of reference are not directly comparable; thus the matter of the "fastest spacecraft" depends on the reference frame used. Because of the influence of gravity, maximum velocities are usually attained when a spacecraft is close to its primary body: either just after launch, at a point of closest approach (periapsis), or during the early stages of atmospheric entry. See also Orders of magnitude (speed) References Vehicle speed World records Vehicles
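As a minimal numeric illustration of the frame-of-reference point made above, consider the same spacecraft motion expressed in two frames. The figures below are rounded, assumed values chosen for this sketch; they are not record data from the list.

# Illustrative, rounded values (assumptions for this example, not record data):
# a probe leaving Earth at 12 km/s relative to Earth, along Earth's direction of
# orbital motion, while Earth orbits the Sun at roughly 30 km/s.
v_probe_wrt_earth = 12.0   # km/s, geocentric (Earth-centred) frame
v_earth_wrt_sun = 30.0     # km/s, heliocentric frame

# The same physical motion yields two very different "speeds":
print("geocentric speed:          %.1f km/s" % v_probe_wrt_earth)
print("heliocentric speed:        %.1f km/s" % (v_probe_wrt_earth + v_earth_wrt_sun))

# Departing against Earth's orbital motion instead makes the heliocentric speed
# smaller than the geocentric one:
print("heliocentric (retrograde): %.1f km/s" % abs(v_earth_wrt_sun - v_probe_wrt_earth))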
List of vehicle speed records
[ "Physics" ]
261
[ "Vehicles", "Transport", "Physical systems" ]
16,344,093
https://en.wikipedia.org/wiki/Local%20convergence
In numerical analysis, an iterative method is called locally convergent if the successive approximations produced by the method are guaranteed to converge to a solution when the initial approximation is already close enough to the solution. Iterative methods for nonlinear equations and their systems, such as Newton's method, are usually only locally convergent. An iterative method that converges for an arbitrary initial approximation is called globally convergent. Iterative methods for systems of linear equations are usually globally convergent. Numerical analysis Iterative methods Optimization algorithms and methods
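A minimal sketch of the distinction follows; the function arctan(x) and the two starting points are assumptions chosen for illustration, not part of the original article. Newton's method converges quickly from an initial guess near the root at x = 0, but the iterates diverge from a starting point that is too far away.

import math

def newton(f, df, x0, steps=5):
    """Plain Newton iteration x <- x - f(x)/df(x); returns the list of iterates."""
    xs = [x0]
    for _ in range(steps):
        x = xs[-1]
        xs.append(x - f(x) / df(x))
    return xs

f = math.atan                        # single root at x = 0
df = lambda x: 1.0 / (1.0 + x * x)   # derivative of arctan

# Close initial guess: the iterates collapse onto the root (local convergence).
print(newton(f, df, 0.5))

# Distant initial guess: the iterates grow without bound (no global convergence).
print(newton(f, df, 2.0))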
Local convergence
[ "Mathematics" ]
107
[ "Mathematical analysis", "Mathematical analysis stubs", "Computational mathematics", "Mathematical relations", "Numerical analysis", "Approximations" ]
16,344,601
https://en.wikipedia.org/wiki/Fuculose
Fuculose or 6-deoxy-L-tagatose is a ketohexose deoxy sugar. Fuculose is involved in the process of sugar metabolism. L-Fuculose can be formed from L-fucose by L-fucose isomerase and converted to L-fuculose-1-phosphate by L-fuculose kinase. See also L-fuculose-phosphate aldolase L-fuculokinase References Deoxy sugars Ketohexoses Furanoses
Fuculose
[ "Chemistry", "Biology" ]
115
[ "Carbohydrates", "Deoxy sugars", "Biotechnology stubs", "Biochemistry stubs", "Biochemistry" ]
16,345,197
https://en.wikipedia.org/wiki/The%20Purchase%20of%20the%20North%20Pole
The Purchase of the North Pole or Topsy-Turvy () is an adventure novel by Jules Verne, published in 1889. It is the third and last novel of the Baltimore Gun Club, first appearing in From the Earth to the Moon, and later in Around the Moon, featuring the same characters but set twenty years later. Like some other books of his later years, in this novel Verne tempers his love of science and engineering with a good dose of irony about their potential for harmful abuse and the fallibility of human endeavors. Plot In the year of 1890, an international auction is organized to define the sovereign rights to the part of the Arctic extending from the 84th parallel, the highest yet reached by man, to the North Pole. Several countries send their official delegates, but the auction is won by a representative from an anonymous United States buyer. After the auction closes, the mysterious buyer is revealed to be Barbicane and Co., a company founded by Impey Barbicane, J.T. Maston and Captain Nicholl — the same members of the Baltimore Gun Club who, twenty years earlier, had traveled around the Moon inside a large cannon shell. The brave gunmen-astronauts had come out of their retirement with an even more ambitious engineering project: using the recoil of a huge cannon to remove the tilt of the Earth's axis — so that it would become perpendicular to the planet's orbit, like Jupiter's. That change would bring an end to seasons, as day and night would be always equal and each place would have the same climate all year round. The society's interest lay in another effect of the recoil: a displacement of the Earth's rotation axis, that would bring the lands around the North Pole, which they had secured in the auction, to latitude 67 north. Then the vast coal deposits that were conjectured to exist under the ice could be easily mined and sold. The technical feasibility of the plan had been confirmed by J. T. Maston's computations. The necessary capital had been provided by Ms. Evangelina Scorbitt, a wealthy widow and ardent admirer of Maston. The cannon needed for that plan would be enormous, much larger than the huge Columbiad that had sent them to the Moon. Once the plan became public, the brilliant French engineer Alcide Pierdeux quickly computes the required force of the explosion. He then discovers that the recoil would buckle the Earth's crust; many countries (mostly in Asia) would be flooded, while others (including the United States) would gain new land. Alcide's note sends the world into panic and rage, and authorities promptly rush to stop the project. However Barbicane and Nicholl had left America for destination unknown, to supervise the completion and firing of the monster gun. J. T. Maston is caught and jailed, but he is unwilling or unable to reveal the cannon's location. Frantic searches around the world fail to find it either. The cannon in fact had been dug deep into the flanks of Mount Kilimanjaro, by a small army of workers provided by a local sultan who was an enthusiastic fan of the former Moon explorers. The projectile, a steel-braced chunk of rock weighing 180,000 tons, would exit the barrel at the fantastic speed of 2,800 kilometres per second — thanks to a new powerful explosive invented by Nicholl, which he had called "melimelonite". The cannon is fired as planned, and the explosion causes huge damage in the immediate vicinity. However, the Earth's axis retains its tilt and position, and not the slightest tremor is felt in the rest of the world. 
Alcide, shortly before the cannon was fired, had discovered that J. T. Maston, while computing the size of the cannon, had made a calculation error; he had accidentally erased three zeros from the blackboard when he was struck by lightning during a telephone call from Ms. Scorbitt. Because of that single mistake, twelve zeros got omitted from the result. Because Maston's calculations were undoubtedly considered correct when they were discovered, this error was not discovered early enough. The cannon he designed was indeed far too small: a trillion of them would have had to be fired to achieve the intended effect. Ridiculed by the whole world and bearing the bitter resentment of his two associates, J. T. Maston goes back into retirement, vowing to never again make any mathematical calculations. Ms. Scorbitt finally declares her feelings to Maston, and he gladly surrenders to marriage. Alcide gains worldwide recognition by revealing the cause of the failure of the operation to the public. Trivia The notion of tilting the Earth's axis to affect the climate is first put forward by J.T. Maston in From the Earth to the Moon. The fancy name "melimelonite", based on the French word for 'mishmash', "méli-mélo", may be a reference to melinite, a high explosive composed of picric acid and guncotton adopted by the French army in 1887; and perhaps also to melon, a heptazine polymer described by Berzelius in 1830, whose structure remained a chemical puzzle until the 1930s. Publication history 1890 USA: New York: J. G. Ogilvie and Company, published as Topsy-Turvy 1891, UK, London: Sampson Low, Marston, Searle, & Rivington. First UK edition. 2012, UK, London: Hesperus Press, new translation by Sophie Lewis with a foreword by Professor Ian Fells, published as The World Turned Upside Down References External links Sans dessus dessous available at Jules Verne Collection See also 1889 in science fiction 1889 French novels 1889 science fiction novels Novels by Jules Verne Novels set in the 1890s Novels set in the Arctic Planetary engineering Sequel novels Climate change novels
The Purchase of the North Pole
[ "Engineering" ]
1,216
[ "Planetary engineering" ]
16,345,498
https://en.wikipedia.org/wiki/Home-stored%20product%20entomology
Home-stored product entomology is the study of insects that infest foodstuffs stored in the home. It deals with the prevention, detection and eradication of pests. This field is related to forensic entomology, as consumers who find contaminated products may choose to take legal action against the producers. Suitably qualified entomologists are likely to be able to determine the identity of contaminant species, even when no insects are found and the only evidence of infestation is the resulting damage. They should also be able to determine whether the foodstuff was contaminated before or after purchase, to determine whether the producer (rather than the consumer) is at fault. Major stored product pests Flour beetles (Tribolium castaneum and Tribolium confusum) Two different types of beetles are classified as flour beetles: the red flour beetle and the confused flour beetle, which have similar physical characteristics. They are flat and oval in shape and usually range around long. Their exoskeletons are reddish brown with a shiny and smooth texture. In both species, the eggs are white or colorless. They are very small in size and have a sticky outer covering that causes certain food particles to stick to them. The larvae have six legs, with two pointy projections toward the caudal end. Finally, the pupal stage (a cocoon-like form) is usually a white or brownish color. The beetle life cycle lasts about three years or more, with the larval stage ranging anywhere from 20 to over 100 days, and the pupal stage around eight days. Beetles usually breed in damaged grain, grain dust, high-moisture wheat kernels, and flour. The female flour beetle can lay between 300 and 400 eggs during her lifetime [a period of 5 to 8 months]. The flour beetles mainly infest grains, including, but not limited to: cereal, corn-meal, oats, rice, flour, and crackers. This type of beetle is the most abundant insect pest in flour mills across the United States. Their small size allows them to maneuver through cracks and crevices and get into the home and other areas. Once they are present in areas with potential food sources, they can infest material such as flour, resulting in a sharp odor or moldy flavor. The red flour beetle can fly short distances and the confused flour beetle is unable to fly. While the confused flour beetle is more commonly found in the northern United States, the red flour beetles are more predominant in the southern United States in areas with warmer climates. The red flour beetle and the confused flour beetle are commonly used as model organisms, to study genetics and ecology. The genome of the red flour beetle has been sequenced. Drugstore beetle (Stegobium paniceum) This beetle is related to the commonly known cigarette beetle. Adult drugstore beetles are cylindrical with lengths ranging from . They are a reddish-brown color and have elytra, sclerotized (hardened) wings that fold back over the abdomen and hinge upwards, allowing the hind wings to come out to fly. Females are capable of laying up to 75 eggs during a 13- to 65-day period. After the eggs are laid, they hatch into a larval period that can range anywhere from four to 20 weeks. After the larval period, drugstore beetle larvae move out of the substrate to build a cocoon and pupate. The pupation period takes a total of 12–18 days. The entire life cycle of the drugstore beetle lasts approximately two months but can be as long as seven months. These stored product pests will infest almost anything readily available. 
Food products prone to infestation include flour, dry mixes, breads, cookies, and other spices. Nonfood materials include wool, hair, leather, and museum specimens. This specific type of beetle has symbiotic yeasts that produce B vitamins, which allow the beetle to survive even when consuming foods of low nutritional value. They are found in areas that have a warmer climate, yet are less plentiful in the tropics than the cigarette beetle. Sawtoothed grain beetle (Oryzaephilus surinamensis) The sawtoothed grain beetle is closely related to the merchant grain beetle, and is commonly found in kitchen cabinets feeding on items such as cereal, breakfast foods, dried fruits, macaroni, crackers, etc. They are the most common grain and stored product pest in the United States. They are very active and tend to crawl rapidly while searching for food. They are small insects, reaching a length of about of an inch. Their name originates from their distinguishable, sawtooth-like projections found on each side of the thorax. The body of the beetle is flat and slender in shape, and brown in color. The size and shape of the mandibles allow the beetles to easily break through well-sealed packaged foods. An adult female can lay between 45 and 250 eggs that usually hatch within three to 17 days. The larvae have a caterpillar-like appearance, with a yellowish coloration to the body and a brown head. The larval period can last as long as 10 weeks but can be as short as two weeks. Following the larval instars is the pupal period, which can last one to three weeks. The pupal stage is characterized by the unique process by which these beetles stick together pieces of food material to form protective coverings around their bodies. A fully mature adult beetle, under optimal conditions, can live a maximum of four years, a long lifespan for an arthropod. Indianmeal moth (Plodia interpunctella) Indianmeal moths can infest a variety of foods found in the home. Coarsely ground grains, cereals, dried fruits, and herbs are common items the moths have been known to infest. They have also been found in animal feeds, such as dry dog food, fish food, and even bird seed. Adult moths are small; generally, their length averages about inch, with a -inch wing span. As adults, the moths are easily identified by an overall grayish, dirty complexion. However, the wing tips have a bronze color that helps differentiate this particular moth from other household moths. The adults have a distinct forewing pattern, as well, which consists of a light-colored base with about two-thirds of the distal area a red to copper color. The larval stage, or caterpillar, is characterized by a pinkish or yellowish-green body color with a dark brown head. The larval stage of the moth's life cycle is centered on food sources; during the last instar, these larvae are characterized by a movement towards a protected area to pupate. These caterpillars can chew through plastic packaging and will often produce silk that loosely binds to food fragments. The pupal stage is generally observed as tiny cocoons that hang from the ceiling; these cocoons can also be found on walls, as well as near the food source. A female can lay over 200 eggs and will usually die after this process because adult Indianmeal moths do not eat. Fruit flies (Drosophila melanogaster) Fruit flies are found near ripened or fermenting fruit. Tomatoes, melons, squash, grapes and other perishable items brought in from the garden are a common cause of an indoor infestation. 
Fruit flies can also be attracted to rotting bananas, potatoes, onions and other unrefrigerated produce purchased at the grocery store and taken home. The body of the fruit fly is tan towards the front part of the body and black towards the rear. They usually have red eyes and are about inch long. Females have the ability to lay over 500 eggs, usually in fermenting fruit as a food source. The only environment necessary for successful reproduction is a moist film and fermenting material. Generally, fruit flies are a problem during late summer and fall due to their attraction to ripening and fermenting fruits and vegetables. The entire life cycle can be completed in about two weeks. Rarely, because of their ability to fly in and out of the home through windows and screens, they have the capability of contaminating food with bacteria and disease-producing organisms. Other stored product pests Flour mite Acarus siro House cricket Acheta domesticus Common furniture beetle Anobium punctatum Varied carpet beetle Anthrenus verbasci Fur beetle or carpet beetle Attagenus pellio Black carpet beetle Attagenus unicolor Black larder beetle Dermestes ater Larder beetle Dermestes lardarius Hide beetle Dermestes maculatus Cacao moth Ephestia elutella Mediterranean flour moth Ephestia kuehniella Cigarette beetle Lasioderma serricorne Silverfish Lepisma saccharina Spider beetle Mezium americanum Red-legged ham beetle Necrobia rufipes Golden spider beetle Niptus hololeucus Yellow V moth Oinophila v-flavum Merchant grain beetle Oryzaephilus mercator Australian spider beetle Ptinus tectus Meal moth Pyralis farinalis Lesser grain borer Rhyzopertha dominica Rice weevil Sitophilus oryzae Maize weevil Sitophilus zeamais Angoumois grain moth Sitotroga cerealella Yellow mealworm Tenebrio molitor Cadelle beetle Tenebroides mauritanicus Destructive flour beetle Tribolium destructor Carpet moth or tapestry moth Trichophaga tapetzella Khapra beetle Trogoderma granarium Clothes moths – several species Detection of an infestation Insects can be identified by examining the type of food and the character of the damage done in the absence of the insect itself, which helps determine what type of control is needed. Having an insect specimen and accurately identifying it can lead to eradication, and ultimately, prevention. Foods commonly infested include: Whole or cracked grains (rice) Flour, meal, or similar ground grain products Spices Cereals Pasta Candy Powdered milk Nuts (whole or pieces) Other items include, but are not limited to: Rodent baits (that contain grain as a feeding attractant), dry pet food, bird seed, grass seed, some powdered soap detergents, dried flowers, potpourri, items stuffed with dried beans or other plant material, and tobacco products. To identify an insect, and consequently make a decision about the type of control to be implemented, the type of food must first be noted, especially in the absence of a specimen. Although identifying the food is a general start to begin to identify the insect, it must be remembered that it is not always the most accurate method, but is mostly used as a guideline, as some insects are more likely than others to be found in certain types of grain, flour, etc. The type of food is not always conclusive to the type of insect found in it, as insects are not extremely picky, and many families and species are found on a wide range of different foodstuffs. Using the infested item as a guideline, noting the type of damage done to the product is the next step. 
Some insects, like the drugstore beetle, leave telltale tiny holes in the damaged product, while Indianmeal moths are notorious for the spider web-like threads left behind in the food they infest. These observations can generally lead to a mostly accurate conclusion about the type of insects causing the damage, but obviously the most accurate conclusion relies on any specimen found either directly in the stored product or in the vicinity. The larvae, pupae, and adults can be found directly in the product while usually only the pupae and adults are found in the vicinity of the product. It is not practical to assume any person has knowledge of general entomology, so the following analysis focuses on the five major pests that most commonly infest stored products, beginning with the type of foods infested, signs indicative of a particular insect infestation, and a description of the larvae, pupae, and adults, including behavior, as well as appearance. Red flour beetle detection This beetle is similar to the saw-toothed grain beetles in both habits and types of products infested. It is a serious pest in flour mills and wherever cereal products and other dried products are stored and/or processed. Generally, the beetle is attracted to grain with a high moisture content, and usually causes the grain to acquire a grayish tint. The beetle may also impart a bad odor, which then affects the taste of the infested products, as well as encouraging the growth of mold in the grain. This foul odor and taste in the various food products are caused by pheromones and toxic quinone compounds. Sawtoothed grain beetle detection The sawtoothed grain beetle feeds on a plethora of feeds, but is not capable of attacking whole or undamaged grains; therefore, the larvae are commonly found in processed grains (flour and meal), dry dog food, dried fruits, candy bars, tobacco, drugs, dried meats, and a variety of other stored food products. Drugstore beetle detection These beetles will infest almost anything—they are found most often, however, in flour, bread, spices, breakfast foods, and meal. In the case of an infestation, contaminated products have telltale tunnels which have the appearance of tiny holes. These beetles do not sting, bite, or harm pets or damage a house, yet have the potential, in large infestations, to become a nuisance by flying on doors and windows in heavy populations. Indianmeal moth detection Indianmeal moths infest both cereal and stored grain products, packaged goods, and surface layers of shelled corn. The most telltale sign of the Indianmeal moth is the silk webbing the larvae (caterpillars) produce when feeding on the surfaces of foods. This silk webbing may appear to be or resemble cobwebs inside the products' containers. Often, a few larvae may be found in the packaging of the product, along with the 'cobwebs', cast skins and frass. Larvae are white worms with black heads, which, when ready to pupate, crawl up the walls of the home in most cases, and are suspended from the ceiling attached by a single silken thread. Most complaints about these moths come during the warmer parts of the year- usually July through August- but the moths can appear during any month. As with all insects important to stored product entomology, it cannot be automatically assumed that products were previously infested, yet, it is more common for these moths to contaminate products before purchase than for the moth to fly into a home through open windows or doors. 
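Purely as an organizational aid, the telltale signs discussed in this section can be collected into a small lookup structure. This is an added illustration that merely restates the associations given above; it is not a diagnostic tool, and identification of an actual specimen is best left to a qualified entomologist.

# Telltale signs of common stored-product pests, as described in this section.
TELLTALE_SIGNS = {
    "tiny holes or tunnels in the product": "drugstore beetle",
    "silk webbing or cobweb-like threads in the food": "Indianmeal moth",
    "grayish tint and foul odor in stored grain": "red flour beetle",
    "larvae in processed grains, dried fruit, or dry pet food": "sawtoothed grain beetle",
    "small flies around over-ripened or fermenting produce": "fruit fly",
}

def likely_pest(observed_sign):
    """Return the pest most associated with an observed sign, if it is listed."""
    return TELLTALE_SIGNS.get(observed_sign, "unknown; consult an entomologist")

print(likely_pest("silk webbing or cobweb-like threads in the food"))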
An important aspect of the Indianmeal moth is that the larva is the only stage of the insect's life cycle to feed on stored products, the adults do not. Fruit fly detection Fruit flies are attracted to ripened fruits and vegetables, usually in the kitchen area, but will breed in garbage disposals, empty bottles and cans, wet or damp mops or cleaning rags, and trash containers. The only requirement for these flies to breed is a moist film of fermenting material. Infestations can originate from over-ripened fruits or vegetables that were previously infested, and then brought into the home, or from fruit over-ripening in the home. Since adults can also fly from the outside through screened doors or windows, it can not always be assumed that the product in question was infested before it was brought into the home. The larvae are found on the inside layer of the fruit, directly beneath the skin. If the outer layer of the fruit is removed, the rest of the fruit can be salvaged. Fruit flies are primarily a nuisance pest. FDA regulations Defect action levels have been a part of the food industry for nearly a century. The first established defect action level was created in 1911 for mold in tomato pulp. However, limits for insect fragments and larvae were not added until the 1920s on various fruits and vegetables. In 1938, the Federal Food, Drug and Cosmetic Act was established in the United States to provide a more defined reference based on strict limitations and methods. Major companies spend a large amount of money every year to aid in the prevention of food contamination. Most of these dollars are well spent and do, in fact, prevent food from becoming contaminated on a large scale; however, many "defects" are found in consumers' meals on a daily basis. The Food and Drug Administration states, "it is economically impractical to grow, harvest, or process raw products that are totally free of nonhazardous, naturally occurring, unavoidable defects". The general public proposes that companies should use more chemicals or pesticides to control this "problem", though the amount of pesticide and chemicals necessary to eradicate all insects from foodstuff would pose a threat to any human's health, much more harmful than a controlled quantity of insect and rodent fragments. The food defect action levels, as proposed by the FDA, is a list of ordinances and guidelines by which manufacturers and industrial food agencies must abide to ensure the safe service of foodstuff. However, these detection levels are labeled with maximum limitations only. Due to the impossibility of preventing all unavoidable defects in foods, the FDA attempts to prevent these health hazards from reaching a harmful level. Therefore, it is understood and regarded that all manufactures are allowed to have low numbers of insect and rodent hairs present in food, as long as the product is still considered "safe" for human consumption. Prevention and eradication Prevention To prevent the infestation of foodstuffs by pests of stored products, or "pantry pests", a thorough inspection must be conducted of the food item intended for purchase at the supermarket or the place of purchase. The expiration date of grains and flour must also be noted, as products that sit undisturbed on the shelf for an extended period of time are more likely to become infested. This does not, however, exclude even the freshest of products from being contaminated. Packaging should be inspected for tiny holes that indicate there might be an infestation. 
If there is evidence of an insect infestation, the product should not be purchased. The store should be notified immediately, as further infestation must be prevented. Most stores have a plan of action for insect infestations. Bringing an infested product into a pantry or a home leads to a greater degree of infestation. In the home, putting cereal or grain-type items in protective containers will also help to prevent an infestation or the spread of insects from one product to another. Insects can chew through thin plastic, foil, cardboard and other packaging used for product for resale; transferring purchased products into heavy glass containers that can be tightly sealed or heavy plastic containers can improve sanitation and prevent infestation. Using the oldest products first and buying grains and cereals in smaller quantities which can be used quickly, depending on the size or intake of the family, decreases the chances of infestation. Fruit flies, however, present an entirely different approach to prevention. The primary method to controlling and eliminating fruit flies is to eradicate sources of attraction. Ripened produce should be either eaten, discarded, or refrigerated. Any damaged or cracked fruit or vegetable needs to be trimmed, and the damaged piece discarded in case larvae or eggs are present in the area in question. Careful attention must be paid to potential breeding sites that, when forgotten, could cause a massive infestation—all recycling and compost bins must be cleaned, and areas must be checked for forgotten, rotting fruit. Because of their small size, fruit flies are capable of breeding on the inside of the lid of a container. Therefore, when personally canning fruits or vegetables, beer, cider, or wine, the container must be well-sealed. Adults moths can lay eggs under the lid of a jar, allowing the larvae to crawl into the food source when hatched. Homeowners should also outfit their doors and windows with tight mesh screens to prevent the adult fruit flies from flying in from outdoors. Preventive methods and sanitation are the keys to avoiding an infestation or contamination of foodstuffs. Eradication Although not seen when groceries are purchased, some products have the possibility of being infested prior to being placed in the pantry. A periodic check of susceptible foodstuffs is necessary, especially in summer months when most insects are more active. In the event an infestation is discovered, steps must be taken to eradicate the insects. Controlling an infestation is a lengthy process and insects may still be seen, albeit in dwindling numbers, for several weeks. All infested items, as well as uninfested items, must be removed from shelves, thoroughly cleaned and vacuumed. After vacuuming, the waste containing the infested material must be removed and discarded. Items should be checked for beetles, larvae, and pupae; all food items must be inspected, as well, and special attention must be paid to items rarely used. The infested items may either be discarded, heated, or frozen to kill the insects. If the food is chosen to be discarded, the item must be completely removed from surrounding premises to prevent re-infestation. Freezing products for three to four days or heating them to about for 30 to 40 minutes will rid the product of the pests. Decorative ornaments and objects made with plant material and seed in the vicinity of stored products will increase the risk of re-infestation; insects can feed on those items until they locate stored products. 
These items should also be thrown out or disinfected by freezing or heating. Cleaning the area where the infested products were found is advisable, as well. Cleaning with bleach or ammonia, however, will not help with the eradication of the pests. Using a vacuum cleaner to clean the area thoroughly, especially in cracks and corners where insects may hide, will decrease the chances of re-infestation. Because food will be stored in that area again, pesticides are not a good method of eradication. Pesticides can leave a residue that can contaminate food products stored near it. Also, once a pest is inside the container, the pesticides have no effect. If the infestation is so severe that pesticides are the only way to contain the problem, a professional should be contacted immediately. Do not try to apply pesticides to any area where food is stored for human or animal consumption. Contamination can occur and cause illness or more severe conditions. Proper storage and cleanliness are the only ways to prevent an infestation from occurring. Sanitation is the key to prevention and eradication of any pests. See also List of common household pests The Food Defect Action Levels References Further reading Robinson, W. H. (2008). Urban Insects and Arachnids A Handbook of Urban Entomology. Cambridge University Press. , . Also available in eBook format. External links Indianmeal moth, Plodia interpunctella on the University of Florida/IFAS Featured Creatures Web site red and confused flour beetles, Tribolium'' spp. on the UF/IFAS Featured Creatures Web site a stored product pest, Oryzaephilus acuminatus on the UF/IFAS Featured Creatures Web site FAO Insect pests of cured fish PADIL Long download. Forensic entomology Food safety Insects in culture Stored-product pests
Home-stored product entomology
[ "Biology" ]
4,858
[ "Pests (organism)", "Stored-product pests" ]
16,346,990
https://en.wikipedia.org/wiki/Eocrinoidea
The Eocrinoidea were an extinct class of echinoderms that lived between the Early Cambrian and Late Silurian periods. They are the earliest known group of stalked, brachiole-bearing echinoderms, and were the most common echinoderms during the Cambrian. The earliest genera had a short holdfast and irregularly structured plates. Later forms had a fully developed stalk with regular rows of plates. They were benthic suspension feeders, with five ambulacra on the upper surface, surrounding the mouth and extending into a number of narrow arms. Phylogeny Eocrinoids were a paraphyletic group that are seen as the basal stock from which all other blastozoan groups evolved. Early evolution The following cladogram, after Nardin et al. 2017, shows the progression of early eocrinoid families, with all other eocrinoid families (including representatives Trachelocrinus and Ridersia) grouped with "derived Blastozoans" as their relationships with each other and with other blastozoans are not addressed. Note that some other sources use a more restricted definition of Eocrinidae, or use the spelling Lichenoididae in place of Lichenoidae. Relationships to other groups Relationships among the eocrinidae and other blastozoan clades are an area of ongoing study. Below are two of many cladograms showing some aspect of eocrinoid paraphyly or polyphyly. References Works cited Blastozoa Paleozoic echinoderms Cambrian echinoderms Silurian echinoderms Cambrian first appearances Silurian extinctions Paraphyletic groups
Eocrinoidea
[ "Biology" ]
348
[ "Phylogenetics", "Paraphyletic groups" ]
16,347,321
https://en.wikipedia.org/wiki/List%20of%20planetariums
This entry is a list of permanent planetariums across the world. Permanent planetariums Planetariums are ordered by continent and then by country in alphabetical order. The planetariums are listed in the following format: name, city. The International Planetarium Society has a more complete list on its website. Africa Algeria Complexe Culturel Abdelwahab Salim, Tipaza Planetarium de Ghardaia, Ghardaia Egypt Arab Academy for Science and Technology Planetarium, Alexandria The Child Museum, Cairo Planetarium Science Center, Alexandria Suez Discovery & Science Center, Suez Ghana Ghana Planetarium, Accra Libya Planetarium of Tripoli, Al-Quba Al-Falakia, Tripoli, Libya South Africa Iziko Planetarium at the Iziko South African Museum, Cape Town Johannesburg Planetarium at University of the Witwatersrand, Johannesburg Sutherland Planetarium Naval Hill Planetarium, Bloemfontein Tunisia Planetarium of Tunis Science City, Tunis Asia Bangladesh Bangabandhu Sheikh Mujibur Rahman Novo Theatre, Dhaka Bangabandhu Sheikh Mujibur Rahman Novo Theatre, Rajshahi (Under Construction) China Beijing Planetarium, Beijing Hong Kong Space Museum, Tsim Sha Tsui, Hong Kong Macao Science Center, Macao Shanghai Astronomy Museum, Shanghai India Indonesia Jagad Raya Planetarium, Tenggarong Jakarta Planetarium, Jakarta Loka Jala Crana Planetarium, Surabaya Iraq Baghdad Planetarium, Baghdad Israel Eretz Israel Museum Planetarium, Tel Aviv Givatayim Observatory, Givatayim Wise Observatory, Mitzpe Ramon Japan Kuwait Kuwait Planetarium Kazakhstan Aktobe Planetarium Malaysia Melaka Planetarium, Malacca Planetarium Negara, Kuala Lumpur Sultan Iskandar Planetarium, Sarawak Myanmar Yangon Planetarium, Yangon Pakistan PIA Institute of Planetaria, Astronomy & Cosmology (PIA-IPAC), Karachi PIA Institute of Planetaria, Astronomy & Cosmology (PIA-IPAC), Lahore PIA Institute of Planetaria, Astronomy & Cosmology (PIA-IPAC), Peshawar Philippines National Museum Planetarium, Manila PAGASA Planetarium, Quezon City DOST-PAGASA Mindanao Planetarium, Mindanao South Korea Gwacheon National Science Museum Planetarium, Gwacheon National Science Museum, Daejeon Eunpyung Planetarium, Seoul National Science Museum, Seoul Sri Lanka Sri Lanka Planetarium, Colombo Taiwan Taipei Astronomical Museum, Taipei National Museum of Natural Science, Taichung Tainan Astronomical Education Area, Tainan Thailand Turkmenistan Ashgabad Planetarium, Ashgabad Europe Austria Digitales Planetarium im Naturhistorischen Museum Wien, Vienna Sternenturm Planetarium Judenburg, Judenburg , Vienna Belarus Minsk Planetarium, Minsk Belgium Belgian Planetarium, Brussels Europlanetarium Genk, Genk Planetarium Antwerp Zoo, Antwerp Planètarium Olympus Mons, Mons Planétarium, Université de Liège, Liège Volkssterrenwacht vzw Beisbroek, Bruges Bulgaria NAOP Gabrovo, Gabrovo NAOP Smolyan, Smolyan NAOP Varna, Varna NAOP Yambol, Yambol Planetarium of Plovdiv, Plovdiv Public Astronomical Observatory and Planetarium, Dimitrovgrad Croatia Astronomical Centre Rijeka, Rijeka Nikola Tesla Technical Museum Planetarium, Zagreb Cyprus The Cyprus Planetarium, Nicosia Czech Republic Denmark Orion Planetarium, Jels The Steno Museum, Aarhus Tycho Brahe Planetarium, Copenhagen Estonia Planetarium in Science Centre AHHAA, Tartu Planetarium in Tartu Old Observatory Planetarium in Pernova Nature House, Pärnu Planetarium in Energy Discovery Centre, Tallinn Finland Heureka Planetarium, Vantaa Kallioplanetaario, Jyväskylä Särkänniemi Planetarium, Tampere Ursa Starlab, Helsinki Kakslauttanen Planetarium, 
Saariselkä France Germany Greece Eugenides Planetarium (see also Evgenidio Foundation), Athens Thessaloniki Planetarium, Thessaloniki Hungary Budapest Planetarium (:hu:TIT Budapesti Planetárium), Budapest Kecskemét Planetarium, Kecskemét Bukk Astronomical Observatory (:hu:Bükki Csillagda), Répáshuta Ireland Inishowen Planetarium, Inishowen Schull Planetarium, Schull Italy Kosovo Kosovo Planetarium of Çabrat, Gjakova, Scien.-Edu. Center Cosmos & Human National Observatory and Planetarium of Kosovo, Shtime Lithuania Planetariumas, Vilnius Netherlands Artis Planetarium, Amsterdam Eise Eisinga Planetarium, Franeker Omniversum, The Hague Planetarium Planetron, Dwingeloo Planetarium Ridderkerk, Museum Johannes Postschool, Ridderkerk-Rijsoord Norway Vitensenteret i Trondheim (Trondheim Science Center), Trondheim Nordnorsk vitensenter (Science Center of Northern Norway), Tromsø Saint Exupery Planetarium, Oslo Science Factory (Vitenfabrikken, Norway), Sandnes Poland Portugal Calouste Gulbenkian Planetarium, Lisbon Espinho Planetarium, Navegar Foundation, Espinho Planetario Coimbra, Coimbra Planetário do Porto, Porto Romania Russia Serbia Belgrade Planetarium, Belgrade Novi Sad Planetarium Slovakia Slovenské technické múzeum, Košice CVC Domino, Košice Observatory and planetarium Milan Rastislav Štefánik, Hlohovec Observatory and planetarium Presov, Prešov Observatory Vihorlat, Kolonické sedlo Regional Observatory and Planetarium Maximilian Hell, Žiar nad Hronom Slovak central observatory Hurbanovo, Hurbanovo Spain Sweden Switzerland Planetarium at Swiss Museum of Transport, Swiss Museum of Transport, Luzern , Schwanden near Sigriswil Turkey Ukraine Dnipro Planetarium, Dnipro Donetsk Planetarium, Donetsk Kharkiv Planetarium, Kharkiv Kyiv Planetarium, Kyiv United Kingdom Other Aboard the RMS Queen Mary 2, the first planetarium at sea North America Canada Alberta Queen Elizabeth II Planetarium, Edmonton, Alberta TELUS Spark Science Centre, Calgary, Alberta Telus World of Science, Edmonton, Alberta British Columbia Centre of the Universe, Victoria, British Columbia H. R. MacMillan Space Centre, Vancouver, British Columbia Manitoba Manitoba Museum, Winnipeg, Manitoba Ontario Doran Planetarium, Sudbury, Ontario Ontario Science Centre Digital Planetarium, Toronto Royal Ontario Museum, Toronto Science North, Greater Sudbury, Ontario W.J. McCallion Planetarium, Hamilton, Ontario Quebec Rio Tinto Alcan Planetarium, Montreal, Quebec Yukon Northern Lights Centre, Watson Lake, Yukon Costa Rica Planetario Ciudad de San José, San José Mexico United States Alabama Boyd E. Christenberry Planetarium, Homewood W. A. 
Gayle Planetarium, Montgomery Wernher von Braun Planetarium Alaska Marie Drake Planetarium, Juneau Thomas Planetarium at the Anchorage Museum, Anchorage University of Alaska Planetarium & Visualization Theater Anchorage Arizona Dorrance Planetarium at the Arizona Science Center, Phoenix Flandrau Science Center and Planetarium at the University of Arizona, Tucson Jim and Linda Lee Planetarium Planetarium at Mesa Community College The Star Barn Cave Creek, Arizona Arkansas EpiSphere at the Aerospace Education Center, Little Rock California Colorado Fiske Planetarium at the University of Colorado at Boulder, Boulder Gates Planetarium at Denver Museum of Nature and Science, Denver United States Air Force Academy Planetarium at United States Air Force Academy, Colorado Springs Connecticut The Children's Museum, West Hartford The Discovery Museum, Bridgeport Leitner Family Observatory and Planetarium at Yale University, New Haven Treworgy Planetarium at Mystic Seaport, Mystic District of Columbia Albert Einstein Planetarium, National Air and Space Museum, Smithsonian Institution Rock Creek Park Planetarium, Rock Creek Park Nature Center Florida Georgia Jim Cherry Memorial Planetarium at the Fernbank Science Center, Atlanta Mark Smith Planetarium at the Museum of Arts and Sciences, Macon Omnisphere Theater, Coca-Cola Challenger Space Science Center, Columbus State University, Columbus Rollins Planetarium at Young Harris College, Young Harris Tellus Planetarium at Tellus: Northwest Georgia Science Museum, Cartersville Wetherbee Planetarium at Thronateeska Heritage Center, Albany Guam University of Guam Planetarium at the University of Guam, Hagåtña Hawaii Hōkūlani Imaginarium, Windward Community College, Kāne‘ohe, Hawai‘i ʻImiloa Astronomy Center, Hilo Jhamandas Watumull Planetarium at the Bernice P. Bishop Museum, Honolulu Idaho Capital High School, Boise Illinois Indiana Iowa Bettendorf High School, Bettendorf Sanford Museum and Planetarium, Cherokee, Iowa Kansas Justice Planetarium at the Kansas Cosmosphere and Space Center, Hutchinson Lakin High School Lakin, Kansas Peterson Planetarium at Emporia State University Kentucky Gheen's Science Hall & Rauch Planetarium at the University of Louisville, Louisville Golden Pond Planetarium and Observatory, Golden Pond Hardin Planetarium at Western Kentucky University, Bowling Green Hummel Planetarium at Eastern Kentucky University, Richmond Star Theater, at Morehead State University, Morehead Varia Planetarium (part of East Kentucky Science Center) at Big Sandy Community and Technical College, Prestonsburg Louisiana Dayna & Ronald L. Sawyer Space Dome Planetarium, Shreveport Irene W. Pennington Planetarium, Baton Rouge Nature Center at Audubon Nature Institute, New Orleans Maine Francis Malcolm Science Center Planetarium at Easton, Maine, 776 Houlton Road Ladd Planetarium at Bates College, Lewiston, 44 Campus Avenue Maynard F. Jordan Planetarium at the University of Maine, Orono Southworth Planetarium at University of Southern Maine - Portland campus located at 70 Falmouth Street Maryland Arthur Storer Planetarium, Prince Frederick, named after the first astronomer in the American colonies and the original namesake of Halley's Comet Davis Planetarium at the Maryland Science Center, Baltimore James E. 
Richmond Science Center and Planetarium, Charles County Public Schools, Waldorf (60' diameter, 184 seats) Watson-King Planetarium at Towson University William Brish Planetarium Massachusetts Charles Hayden Planetarium at the Museum of Science, Boston Framingham State College Planetarium, Framingham George Alden Planetarium at the Ecotarium, Worcester Seymour Planetarium at the Springfield Science Museum, Springfield, the oldest operating planetarium in the United States Michigan Minnesota Como Planetarium, Como Park Elementary School, St. Paul Forestview Planetarium, Forestview Middle School, Baxter Marshall W. Alworth Planetarium, University of Minnesota Duluth, Duluth, Minnesota Mayo High School, Rochester MSUM Planetarium, Minnesota State University Moorhead, Moorhead Paulucci Space Theatre, Hibbing Community College, Hibbing SMSU Planetarium, Southwest Minnesota State University, Marshall Whitney and Elizabeth MacMillan Planetarium, Bell Museum of Natural History, St. Paul Mississippi Russell C. Davis Planetarium, Jackson Missouri Del & Norma Robison Planetarium, Kirksville Gottlieb Planetarium, Kansas City James S. McDonnell Planetarium, St. Louis Rock Bridge Senior High School Planetarium, Columbia Montana Taylor Planetarium at the Museum of the Rockies Nebraska Mallory Kountze Planetarium (UNO), Omaha Martin Luther King, Jr. Planetarium, Omaha Ralph Mueller Planetarium, Lincoln Nevada Fleischmann Planetarium & Science Center, Reno New Hampshire McAuliffe-Shepard Discovery Center, Concord New Jersey New Mexico The Planetarium at the New Mexico Museum of Natural History & Science, Albuquerque Early College Academy, Hefferan Planetarium, Albuquerque Robert H. Goddard Planetarium, Roswell New York North Carolina Ohio Oklahoma James E. Bertelsmeyer Planetarium at the Tulsa Air and Space Museum & Planetarium, Tulsa Kirkpatrick Planetarium at Science Museum Oklahoma, Oklahoma City Mackie Planetarium, Northern Oklahoma College Jenks Public Schools Planetarium at Jenks High School Math Science Complex, Jenks, Oklahoma Oregon North Medford High School Planetarium, Medford Harry C. Kendall Planetarium (part of Oregon Museum of Science and Industry), Portland Planetarium at Chemeketa Community College, Hayesville Science Factory, Eugene Planetarium Sky Theater, Mt. Hood Community College, Gresham, Oregon Pennsylvania Edinboro University Planetarium at Edinboro University of Pennsylvania, Edinboro, Pennsylvania Puerto Rico RUM Planetarium, University of Puerto Rico at Mayagüez, Mayagüez Rhode Island Roger Williams Park Museum of Natural History and Planetarium, Providence South Carolina DuPont Planetarium at the University of South Carolina Aiken, Aiken SCSM Planetarium, Columbia Stanback Planetarium, Orangeburg T. C. Hooper Planetarium at the Roper Mountain Science Center, Greenville Tennessee Bays Mountain Planetarium at Bays Mountain Park, Kingsport Heavens Declare Planetarium at the Wonders Center & Science Museum, Dickson Sharpe Planetarium at the Pink Palace Museum and Planetarium, Memphis Sudekum Planetarium at Adventure Science Center, Nashville Texas Utah Clark Planetarium, Salt Lake City Ott Planetarium at Weber State University, Ogden Royden G. Derrick Planetarium, at Brigham Young University, Provo Snow Planetarium at Snow College, Ephraim Christa McAuliffe Space Education Center at Central Elementary, Pleasant Grove Vermont Lyman Spitzer Jr. 
Planetarium at Fairbanks Museum in Saint Johnsbury Virginia Washington Wisconsin Oceania Australia Science Space, Wollongong, NSW Melbourne, Scienceworks Museum Planetarium, Melbourne Queen Victoria Museum and Art Gallery, Launceston Scitech Planetarium, Perth Sir Thomas Brisbane Planetarium, Brisbane UNISA Planetarium, Mawson Lakes, Adelaide Cosmos Centre, Charleville, Queensland New Zealand Sir Edmund Hillary Alpine Centre Digital Dome at The Hermitage Hotel, Mount Cook Village Perpetual Guardian Planetarium, Tūhura Otago Museum, Dunedin Planetarium North, Whangārei Space Place at Carter Observatory, Wellington Stardome Observatory, Auckland South America Argentina Complejo Astronómico Municipal, Rosario Complejo Planetario Malargüe, Malargüe Galileo Galilei planetarium, Buenos Aires Parque Astronómico la Punta Brazil Chile Planetario Chile (University of Santiago, Chile), Carl Zeiss VI, Santiago Planetario Mamalluca, Municipalidad de Vicuña, Región de Coquimbo Planetario Rapa Nui (Fundación Planetario Rapa Nui, Chile), Isla de Pascua Planetario Movil Tikva, Purranque, Región Los Lagos Colombia Planetarium of Bogotá, Bogotá Planetarium of Medellín, Medellín Planetarium La Enseñanza, Medellín Planetario Móvil Colombia, Bogotá Ecuador Planetarium of Mitad del Mundo, Ciudad Mitad del Mundo-Quito Planetario de la Armada -Guayaquil- (INOCAR) Planetario Mundo Juvenil -Cuenca- Centro cultural Planetario -Quito- (IGM) Uruguay Planetario de Montevideo "Agr. Germán Barbato", first Latin American planetarium (1955), Montevideo Venezuela Planetario del Museo de los Niños de Caracas, Caracas Planetario Humboldt, [Zeiss] Caracas Planetario Fundación la Salle de Ciencias Naturales La Salle, Punta de Piedra Planetario Simón Bolívar (part of Complejo Científico, Cultural y Turístico), Maracaibo See also Amateur astronomy References Further reading Worldwide Planetarium Database External links Worldwide Planetariums Database (WPD) Planetariums & Digital Dome Theatres Database Plafinder – planetarium search engine Loch Ness Productions Fulldome Theater Compendium (fulldome facilities) Loch Ness Productions Dome Theater Compendium (classic facilities)
List of planetariums
[ "Astronomy" ]
3,209
[ "Astronomy education", "Astronomy organizations", "Planetaria" ]
16,348,839
https://en.wikipedia.org/wiki/Ultra-low%20volume
Ultra-low volume (ULV) application of pesticides has been defined as spraying at a Volume Application Rate (VAR) of less than 5 L/ha for field crops or less than 50 L/ha for tree/bush crops. VARs of 0.25 – 2 L/ha are typical for aerial ULV application to forest or migratory pests. In order to maintain efficacy at such low rates, droplet size must be rigorously controlled in order to minimise waste: this is Controlled Droplet Application (CDA). Although often designed for non-evaporative (e.g. oil-based) formulations, ULV equipment may sometimes be adapted for use with water, often at Very Low volume (VLV: 5-20 L/ha) VAR. Purpose ULV spraying is a well-established spraying technique and remains the standard method of locust control with pesticides and is also widely used by cotton farmers in central-southern and western Africa. It has also been used in massive aerial spraying campaigns against disease vectors such as the tse-tse fly. A major benefit of ULV application is high work rate (i.e. many hectares can be treated in one day). It is a good option if all (or some) of these conditions apply: large area of land to treat rapid response required little or no water for making pesticide tank mixtures logistical problems for supplies difficult terrain: poor access to target site. Equipment ULV equipment is designed to produce very small droplets, thus ensuring even coverage with low volumes. The equipment is based on aerosol, air-shear (mistblowers, exhaust gas sprayers) or better still, rotary nozzle techniques. An electrostatic charge may be applied to the droplets to aid their distribution and impaction (on earthed targets), but commercial equipment is rare at present. Ultra low volume fogging machines Ultra low volume (ULV) fogging machines are cold fogging machines that use large volumes of air at low pressures to transform liquid into droplets that are dispersed into the atmosphere. This type of fogging machine can produce extremely small droplets with diameters ranging from 1–150 μm. ULV machines are used for applying pesticides, herbicides, fungicides, sterilizers, and disinfectants amongst other chemicals. The size of the droplet is very important as each application has an optimal droplet size. The optimum droplet sizes are between 5 and 30 μm for flying insects, 20 to 40 μm for leaf nematodes and 30 to 50 μm for fungi. Low volume refers to the low volume of carrier fluid that is required with these types of machines. The droplets that are created are of such a small size that less carrier for the formulation is required to cover the required surface area. The best way to understand the concept of using less formulation to cover a larger surface area is to look at the mathematical side of the scenario. In the case where the diameter of a droplet is reduced to half its original size then the amount of droplets that can be formed from the same volume of formulation will increase eightfold. If the droplet diameter is reduced to 10 percent of its original size, then the amount of droplets that can be formed will increase a thousandfold. In this way the droplet diameter determines the amount of droplets that will form. Parts Ultra low volume fogging machines consists of a blower, a formulation-holding tank and in some equipment a pump. The machine can have an electric, battery or gasoline engine that drives the blower. The blower creates a low pressure area and forces air through the nozzle of the fog machine. Air pressure can be controlled by adjusting the engine speed. 
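The cube-law relationship stated above, where halving the droplet diameter gives eight times as many droplets and reducing it to one tenth gives a thousand times as many, can be checked with a few lines of arithmetic. The 1-litre volume and 100-micrometre reference diameter below are assumptions chosen for this sketch.

import math

def droplet_count(volume_litres, diameter_um):
    """Number of spherical droplets of the given diameter contained in the given volume."""
    volume_m3 = volume_litres * 1e-3
    radius_m = (diameter_um * 1e-6) / 2.0
    droplet_volume_m3 = (4.0 / 3.0) * math.pi * radius_m ** 3
    return volume_m3 / droplet_volume_m3

base = droplet_count(1.0, 100.0)         # 1 litre broken into 100 micrometre droplets
print(droplet_count(1.0, 50.0) / base)   # ~8: half the diameter, eightfold the droplets
print(droplet_count(1.0, 10.0) / base)   # ~1000: one-tenth the diameter, a thousandfold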
Formulation is delivered by means of either electric, gear, FMI or Diaphragm pump to deliver the formulation to the nozzle of the machine, or in other equipment it is delivered through creating a low air pressure in the formulation tank to force the formulation to the nozzle for easy application. The nozzle of the machine has a very specific shape, which causes a swirling motion of the air stream. The motion is achieved by means of several stationary fins that force the air to rotate. The formulation is delivered to the air by means of a supply tube that is situated in the center of the nozzle. The motion of the air shears the liquid formulation into very small droplets and then disperses it into the atmosphere. ULV fogging machines are the most preferred misting machines as the size of the droplets can be controlled by means of machine calibration. Advantages and disadvantages The chemicals used in this type of machine are more concentrated than the chemicals used in other spraying equipment, which also increases the killing efficiency. Other advantages of ULV misting machines includes lower risks of injury due to the fog cloud being nearly invisible, low volumes of carrier chemicals, lower application cost and low noise levels. Unfavorable aspects of these machines may include longer application times, wind drift, high concentrations of active ingredients causing environmental hazards, and the requirement of higher technical skills for calibration of the machines. Applications ULV fogging machines can be used in a number of different industries. Some applications includes pest control: mosquito control, bird control, agricultural applications such as grain storage, disinfectant purposes such as hospitals and laboratories, mold control and surface decontamination. A specific application for ULV machines that have been well researched is protecting avocado trees from different diseases. The most common diseases that these trees are prone to suffer from includes Cercospora spot, anthracnose and stem-end rot. The diseases affecting the avocado trees are controlled by applying high volume copper oxychloride fungicides to the trees. The original application techniques included the use of a hand gun sprayer. This technique posed the problem of high run-off of the formulation. The use of ULV machines for the application of the pesticide formulation yielded more than 80 percent healthy fruit that was free from Cercospora spot. These results compared very favorably with the traditional method of using a hand gun sprayer. Another industry that have benefited substantially from the technology provided by ULV fogging machines is the chicken industry. This industry suffers great losses due to the litter beetle and Aspergillus fungi. ULV fogging machines offers great solutions to kill both these pests in chicken houses. Another industry that uses fogging equipment is the cleaning industry, in particular the pre-occupation cleaning industry. Where there's construction, there's dust. Dust particles can vary in size, some dust particles are 5 micron and larger, other may be smaller. The dust created as a byproduct of cutting tiles or concrete can be as small as 1 micron. The real world implication of this is that a surface is always at risk of being layered in dust even though the surface has been thoroughly cleaned due to the dust laden air. 
This causes havoc during the commissioning stages of sensitive equipment that requires near-perfect conditions for testing. In these cases, ULV fogging machines are placed in strategic locations throughout a facility prior to the first cleaning phases. The ULV fogging equipment disperses water into the atmosphere; the water binds to the dust particles and ultimately brings the dust down against the walls and onto the floor. This leaves the ambient air free of dust contamination, so the cleaning of the fogged room can be done more effectively. When this application is planned, all electrical outlets should be taken through the correct LOTO (Lock Out Tag Out) procedures to ensure a safe working environment. It is also prudent to enclose all electrical sockets and switches with water-resistant material such as plastic bags and tape. See also Aerial spraying Locust control Pesticide application References External links ULV spraying equipment Pesticides
Ultra-low volume
[ "Biology", "Environmental_science" ]
1,606
[ "Biocides", "Toxicology", "Pesticides" ]
16,348,932
https://en.wikipedia.org/wiki/Incentivisation
Incentivisation or incentivization is the practice of building incentives into an arrangement or system in order to motivate the actors within it. It is based on the idea that individuals within such systems can perform better not only when they are coerced but also when they are given rewards. Concept Incentivization aims to motivate rather than merely encourage enthusiasm, so that individuals perform better. It is distinguished from a bribery system in the sense that it provides the "spark to motivate, stimulate, move, and encourage workers to strive for a personal best." As a result of this motivation, it is proposed that incentivization can improve the efficiency of different systems. Incentivization follows certain notions proposed by psychological theories such as Self-Determination Theory, which highlights both extrinsic and intrinsic motivation. Incentivization highlights human behavior in response to factors which affect our extrinsic motivation. Extrinsic motivation refers to individuals changing their behavior in order to meet an external goal, or to receive praise, approval or monetary rewards. Incentives act as extrinsic motivators, providing external ‘purpose’ to an individual, which has been key to developing a person's psychological health and wellbeing. This is different from intrinsic motivators, which are based on self-interest and are free from external pressure. For example, Wikipedia editors have an intrinsic incentive to contribute to the website, as there is no financial reward but instead altruism, recognition, and reciprocity. There are different types of incentives that should be accounted for in incentivization strategies. Economic incentives account for material gains or losses, whereas social incentives account for reputational gains or losses. Psychological/Behavioral incentives refer to “external stimuli, such as a condition or object, that enhance or serve as a motive for behavior.” These are understood in accordance with Bandura's Social Cognitive Theory. Bandura's theory suggests that we are likely to produce behavior when we are motivated to do so. Incentives are developed through social learning, for example through ‘vicarious reinforcement’, which refers to an individual observing a ‘role model’ receiving positive reactions to their behavior, reinforcing the observer's motivation to replicate that behavior. Vicarious reinforcement involves people developing incentives by observing and empathizing with other people's behavior. For example, a student may observe a teacher praising a classmate for exceptional creativity and will be incentivized by that praise to recreate/imitate that behavior. However, while Bandura's theory allows us to draw links between social learning and incentivization, it does not consider ‘human agency’, which suggests that people are consciously able to affect their willingness to engage in behavior. Therefore, biological and cultural explanations are further needed to substantiate this notion. It is important to understand which type of incentive motivates the target group of an incentivization strategy. For instance, exposure to extrinsic monetary incentives may counteract other incentives/motivations and lead to less overall interest in the task. In cases like this, incentivization backfires. An incentivization strategy can leverage an existing system of measures to address interrelated issues such as those involving risk, cost, and performance if done correctly. This can be adopted in multiple fields.
Biological Psychology of Incentivization An individual's response to incentivization appears to be controlled by the ventromedial prefrontal cortex. The brain can express a decreased response to incentivization after experiencing damage to or near the nucleus accumbens. However, people may become more sensitive to incentives when there is damage to the subgenual ventromedial prefrontal cortex. The biological approach to incentivization is linked to biological explanations regarding human extrinsic motivation, emotion and learning signals. Firstly, we can attribute the majority of learning signals to mechanisms occurring within the visceral body. Through an evolutionary lens, it is known that human beings developed brain mechanisms that alter the direct functions of the visceral body. An example of this is the Hypothalamus, which slows down metabolism in order to maintain ideal homeostatic levels. If this mechanism is weaker in certain individuals, they are more likely to feel a lack of motivation and less affected by incentives. Through maintaining body homeostasis, individuals are able to get through their day feeling less lethargic, bored and distracted. The hypothalamus is linked to the aforementioned nucleus accumbens. The ventromedial prefrontal cortex regulates the activity of the ‘Amygdala’ within the brain of an individual. The Amygdala is involved in both emotion activation and stimulus/re-enforcement learning. Therefore, people are more likely to consider incentives when their Amygdala is not functioning at an optimal level, as their ability to regulate emotions is limited. This is due to research suggesting that the Amygdala allows for” stimulus re-enforcement association information”, which suggests how likely we are to turn our extrinsic motivators into incentives. Essentially, a change in Amygdala activity impacts our likelihood of undergoing ’Vicarious Reinforcement.’ Behavioural Economics of Incentivization Behavioral economics highlights that humans have two cognitive systems. One is automatic and the other is reflective. The reflective system is seen as more rational and/or cognitive. It is more controlled, effortful, rule-following, self-aware and deductive. The reflective system allows us to analyse available information as well as the incentives offered to us to act in their best interests. This contrasts with the automatic system, that takes short-cuts, uses heuristics, and biases. The two systems are also commonly referred to as the 'Fast and Slow' systems of thinking, with the reflective being ‘slow’ and the automatic being ‘fast’. Incentives change behaviour by changing people's minds. Once the possible rewards are highlighted, the reflective system weighs up the revised costs and benefits of our actions and responds accordingly. Cultural Approach of Incentivization Studies have found cultural differences in incentives. Western cultures are more likely to use incentive-based pay (such as commissions) than non-Western cultures. Researchers have also randomized workers across cultures to complete work tasks with piece rates for performance or fixed pay. Across cultures, paying for performance increased effort, but the effect of money was larger in the US and UK than in China, India, Mexico, and South Africa. Similarly, paying students for correct answers on a math test worked in the US, but not China. These findings fit with Institutional Theory and Hofstede's cultural dimension theories. 
Some of the most influential cultural factors on adopting incentives include in-group collectivism, performance orientation and uncertainty avoidance. However, the most detailed cultural explanation refers to the ‘individualism vs collectivism’ debate. In certain cultures, individual success is highly valued and sought after, whereas in others, the collective success of the group is prioritized. This may affect the way incentives are framed, received and acted upon. For example, individual prizes as incentives cater towards more individualistic societies, whereas rewards that can be shared amongst an in-group are more suited to collectivist cultures.   Cultural value systems are also important to consider when incentivizing individuals, as certain forms of incentives may be more suited than others. For example, in collectivist cultures, while monetary gains are important, they do not hold as much weight as social reputation within the in-group. Therefore, social incentives such as praise or social status approval may be more beneficial to the individual than receiving funds. Lastly, cultural dimensions may affect how we are motivated by incentives. For example, individualistic societies are likely to be affected more by extrinsic motivators such as recognition as opposed to collectivists, who seek sense of purpose or the fulfillment of learning/achieving goals. However, if we think critically, suggesting one cultural dimension is responsible for incentivization is a far too reductionist statement. Ultimately, it is a combination of Hofstede's cultural dimensions, such as power distance, masculinity and uncertainty, alongside biological and cognitive factors, which explain incentivization. The Effectiveness of Incentivization These factors should be considered when devising or analysing an incentivisation strategy. Framing How an incentive is presented and framed is important. This is known as the choice context and is commonly referred to as the choice architecture. For instance, if in a given context, information is low about how fun a specific task is, by offering a respective amount to complete the task, people might think that the task is unenjoyable and this incentive would fail. Loss Aversion We dislike losses more than we like gains of the same value. Individuals respond more strongly to losing something they value than being given a reward of equivalent value. This is known as loss aversion. A weight loss programme asked some participants to deposit money that would be returned to them if targets were met- those that did this lost significantly more weight than those who did not. Human's Mental Budgeting The importance of context is enhanced as people understand the monetary value of categories (salary, savings, expenses) differently. The same extrinsic incentives create different responses in people, depending on how they understand, value and organise their money. Type and Size of Incentives The impact also depends on type and size of the incentives. In terms of size- paying a high reward on the successful completion of a task could make the participant anxious about completing the task and reduce their ability to do so. The concept section discusses that economic incentives must be matched with suitable tasks and audiences. Timing and Frequency of Incentives In terms of timing- the value of rewards may change over time and in different situations, meaning the value of an incentive can too. 
The frequency of incentives is also important and can affect the effectiveness of incentivisation. Changes in behaviour may revert to their original form if one-off incentives are used. This is because they can condition us to behave in the new way only if we are rewarded. If the rewards stop, the new behaviour can also stop. Incentivisation tends to be more effective at forming permanent habits if it only rewards us sometimes. On this note, incentivisation can be likened to operant conditioning, the association of a voluntary behaviour with a consequence. Take the example of someone receiving pocket money. When this money is given at a set specific time, every week for example, or is guaranteed every time a behaviour is expressed, it becomes expected. If money is given only sometimes when the behaviour is expressed, the individual is unsure whether they will receive a reward, which makes performing the behaviour more exciting, rather like gambling. This is why incentivisation can be more effective when rewards are random. Overemphasis of Small Probabilities People overemphasise small probabilities, which means incentives may be skewed and based upon unrealistic expectations. Payoff Timing We prefer immediate payoffs to distant ones, even if the immediate payoffs are smaller than the distant ones. People would rather avoid the short-term pain of a diabetes blood test than secure the long-term health gain. Criticisms The incentive theory of motivation (incentivization) is criticized by psychologists for not being able to explain why individuals carry out behaviors despite there being little to no incentive to do so, for example a worker who works extremely hard for a small salary. It also fails to explain situations in which, instead of an incentive, there is a threat in place, for example going outside during a natural disaster in order to help others. Secondly, incentivization assumes that at any given point an individual will be entirely extrinsically motivated or, if they do not react to incentives, entirely intrinsically motivated. It does not consider the cases where individuals are engaging with both extrinsic and intrinsic motivators. Developmental psychology can account for the complexities behind human motivation. For example, the literature has shown that children are far more likely to be extrinsically motivated at young ages; it is therefore fair to assume that incentives become less appealing as intrinsic motivation develops over time. The repercussions of not considering a developmental focus are that the theory of incentivization is reductionist and cannot readily be extrapolated to a wide range of age groups, occupational groups and cultural demographics. Certain psychological theories counter incentive motivation, such as Skinner's theory of learning (1969), which argues that an individual's behavior is directly linked to their external environment, making it difficult to envision an incentivized individual within that framework. See also Reward system References Motivation
Incentivisation
[ "Biology" ]
2,613
[ "Ethology", "Behavior", "Motivation", "Human behavior" ]
16,348,997
https://en.wikipedia.org/wiki/Olive%20skin
Olive skin is a human skin tone. It is often associated with pigmentation in the Type III, Type IV, and Type V ranges of the Fitzpatrick scale. It generally refers to moderate or lighter tan or brownish skin, and it is often described as having tan, brown, cream, greenish, yellowish, or golden undertones. People with olive skin can sometimes become paler if their sun exposure is limited. However, lighter olive skin still tans more easily than light skin does, and generally still retains notable yellow or greenish undertones. Geographic distribution Type III pigmentation is frequent among populations from the Mediterranean region, Southern Europe, North Africa, the Near East and West Asia, parts of the Americas, East Asia and Central Asia. It ranges from cream or dark cream to darker olive or light brown skin tones. This skin type sometimes burns and tans gradually, but always tans. Type IV pigmentation is frequent among some populations from the Mediterranean, including Southern Europe, North Africa and West Asia, South Asia, Austronesia, Latin America, and parts of East Asia. It ranges from brownish or darker olive to moderate brown, typical Mediterranean skin tones. This skin type rarely burns and tans easily. Type V pigmentation is found among some populations in Southwest Asia, and including a few regions of North Africa. It is frequent among select indigenous populations of Latin America, parts of Sub-Saharan Africa, and South Asia. It ranges from olive to brown skin tones. This skin type very rarely burns and tans quite easily. See also Mediterranean race Latins Semitics Romani Hellenes Berbers Armenians Human skin color Light skin Dark skin References Mediterranean Human skin color
Olive skin
[ "Biology" ]
334
[ "Human skin color", "Pigmentation" ]
16,349,182
https://en.wikipedia.org/wiki/N%2CO-Dimethylhydroxylamine
N,O-Dimethylhydroxylamine is a methylated hydroxylamine used to form so-called 'Weinreb amides' for use in the Weinreb ketone synthesis. It is commercially available as its hydrochloride salt. Synthesis It may be prepared by reacting ethyl chloroformate (or a similar reagent) with hydroxylamine, followed by treatment with a methylating agent such as dimethyl sulfate. The N,O-dimethylhydroxylamine is then liberated by acid hydrolysis followed by neutralization. See also Methoxyamine N-methylhydroxylamine References Hydroxylamines Methyl compounds
N,O-Dimethylhydroxylamine
[ "Chemistry" ]
141
[ "Hydroxylamines", "Reducing agents" ]
16,350,616
https://en.wikipedia.org/wiki/Archaeocin
Archaeocin is the name given to a new type of potentially useful antibiotic that is derived from the Archaea group of organisms. Eight archaeocins have been partially or fully characterized, but hundreds of archaeocins are believed to exist, especially within the haloarchaea. Production of these archaeal proteinaceous antimicrobials is a nearly universal feature of the rod-shaped haloarchaea. The prevalence of archaeocins from other members of this domain is unknown simply because no one has looked for them. The discovery of new archaeocins hinges on recovery and cultivation of archaeal organisms from the environment. For example, samples from a novel hypersaline field site, Wilson Hot Springs in the Fish Springs National Wildlife Refuge in eastern Utah, recovered 350 halophilic organisms; preliminary analysis of 75 isolates showed that 48 were archaeal and 27 were bacterial. Halocins Halocins are classified as either peptide (≤ 10 kDa; 'microhalocins') or protein (> 10 kDa) antibiotics produced by members of the archaeal family Halobacteriaceae. To date, all of the known halocin genes are encoded on megaplasmids (> 100 kbp) and possess typical haloarcheal TATA and BRE promoter regions. Halocin transcripts are leaderless and the translated preproteins or preproproteins are most likely exported using the twin arginine translocation (Tat) pathway, as the Tat signal motif (two adjacent arginine residues) is present within the amino terminus. Halocin genes are almost universally expressed at the transition between exponential and stationary phases of growth; the only exception is halocin H1, which is induced during exponential phase. In contrast, the larger halocin proteins are heat-labile and typically obligately halophilic as they lose their activity (or activity is reduced) when desalted. Microhalocins, peptide halocins Currently, five peptide halocins have been partially or completely characterized at the protein and/or genetic levels: HalS8, HalR1, HalC8, HalH7, and HalU1. These antimicrobial peptides range from ~3 to 7.4 kDa in molecular mass, consisting of 36 to 76 amino acid residues. Two of the microhalocins (HalS8 and HalC8) are produced by proteolytic cleavage from a larger preproprotein by an unknown mechanism. Microhalocins are hydrophobic peptides that remain active even if desalted and/or stored at 4 °C and are fairly insensitive to heat and organic solvents. The first microhalocin to be characterized was HalS8, produced by the uncharacterized haloarchaeon S8a isolated from the Great Salt Lake, UT, USA. Protein halocins Two can be classified as protein halocins: HalH1 and HalH4; the molecular masses of the remaining halocins have yet to be elucidated. Halocin H1 is produced by Hfx. mediterranei M2a (formerly strain Xia3), isolated from a solar saltern near Alicante, Spain. It is a 31 kDa protein that is heat-labile, loses activity when desalted, and exhibits a broad range of inhibition within the haloarchaea. Halocin H1 has yet to be characterized at the protein and genetic levels. In contrast, HalH4, produced by Hfx. mediterranei R4 (ATCC 33500), also isolated from a solar saltern near Alicante, Spain was the first halocin discovered. The molecular mass of the mature HalH4 protein is 34.9 kDa (359 amino acids), processed from a preprotein of 39.6 kDa; the mechanism for processing is unknown. Halocin H4 is an archaeolytic halocin and adsorbs to sensitive Hbt. salinarum cells where it may be disrupting membrane permeability. 
Sulfolobicins The archaeocins produced by Sulfolobus are entirely different from halocins, since their activity is predominantly associated with the cells and not the supernatant. To date, the spectrum of sulfolobicin activity appears to be restricted to other members of the Sulfolobales: the sulfolobicin inhibited S. solfataricus P1, S. shibatae B12, and six nonproducing strains of S. islandicus. Activity appears to be archaeocidal but not archaeolytic. Two genes involved in sulfolobicin production have been identified in S. acidocaldarius and S. tokodaii. The sulfolobicins appear to represent a novel class of antimicrobial proteins. See also Archaea References Antibiotics Archaea biology
Archaeocin
[ "Biology" ]
1,030
[ "Archaea", "Biotechnology products", "Antibiotics", "Archaea biology", "Biocides" ]
16,350,686
https://en.wikipedia.org/wiki/CGNS
CGNS stands for CFD General Notation System. It is a general, portable, and extensible standard for the storage and retrieval of CFD analysis data. It consists of a collection of conventions, and free and open software implementing those conventions. It is self-descriptive, cross-platform also termed platform or machine independent, documented, and administered by an international steering committee. It is also an American Institute of Aeronautics and Astronautics (AIAA) recommended practice. The CGNS project originated in 1994 as a joint effort between Boeing and NASA, and has since grown to include many other contributing organizations worldwide. In 1999, control of CGNS was completely transferred to a public forum known as the CGNS Steering Committee . This Committee is made up of international representatives from government and private industry. The CGNS system consists of two parts: (1) a standard format (known as Standard Interface Data Structure, or SIDS) for recording the data, and (2) software that reads, writes, and modifies data in that format. The format is a conceptual entity established by the documentation; the software is a physical product supplied to enable developers to access and produce data recorded in that format. The CGNS system is designed to facilitate the exchange of data between sites and applications, and to help stabilize the archiving of aerodynamic data. The data are stored in a compact, binary format and are accessible through a complete and extensible library of functions. The application programming interface (API) is cross-platform and can be easily implemented in C, C++, Fortran and Fortran 90 applications. A MEX interface mexCGNS also exists for calling the CGNS API in high-level programming languages MATLAB and GNU Octave. Object oriented interface CGNS++ and Python module pyCGNS exist. The principal target of CGNS is data normally associated with compressible viscous flow (i.e., the Navier-Stokes equations), but the standard is also applicable to subclasses such as Euler and potential flows. The CGNS standard includes the following types of data. Structured, unstructured, and hybrid grids Flow solution data, which may be nodal, cell-centered, face-centered, or edge-centered Multizone interface connectivity, both abutting and overset Boundary conditions Flow equation descriptions, including the equation of state, viscosity and thermal conductivity models, turbulence models, multi-species chemistry models, and electromagnetics Time-dependent flow, including moving and deforming grids Dimensional units and nondimensionalization information Reference states Convergence history Association to CAD geometry definitions User-defined data Much of the standard and the software is applicable to computational field physics in general. Disciplines other than fluid dynamics would need to augment the data definitions and storage conventions, but the fundamental database software, which provides platform independence, is not specific to fluid dynamics. CGNS is self-describing, allowing an application to interpret the structure and contents of a file without any outside information. 
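As a concrete illustration of this self-describing layout, the sketch below simply walks the node tree of a CGNS file. It assumes the file uses the HDF5 low-level storage (described in the next paragraph) and that the h5py package is available; the file name grid.cgns is hypothetical.

```python
# Minimal sketch: list the node hierarchy of an HDF5-backed CGNS file.
import h5py

with h5py.File("grid.cgns", "r") as f:          # hypothetical file name
    def show(name, obj):
        kind = "group" if isinstance(obj, h5py.Group) else "dataset"
        print(f"{kind:7s} /{name}")
    f.visititems(show)   # prints bases, zones, grid coordinates, solutions, ...
```

Production codes would normally read and write such files through the CGNS Mid-Level Library (or wrappers such as mexCGNS and pyCGNS) rather than touching the HDF5 tree directly.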
CGNS can make use of either of two different low-level data formats: an internally developed and supported method called Advanced Data Format (ADF), based on a common file format system previously in use at McDonnell Douglas; or HDF5, a widely used hierarchical data format. Tools and Guides In addition to the CGNS library itself, the following tools and guides are available from GitHub: CGNSTools, which includes ADFVIEWER, a browser and editor for CGNS files; Users Guide code, small practical example CGNS programs written in both Fortran and C; F77 Examples, example computer programs written in Fortran that demonstrate all CGNS functionality; and HDFql, which enables users to manage CGNS/HDF5 files through a high-level language (similar to SQL) in C, C++, Java, Python, C#, Fortran and R. See also Common Data Format (CDF) EAS3 (Ein-Ausgabe-System) FITS (Flexible Image Transport System) GRIB (GRIdded Binary) Hierarchical Data Format (HDF) NetCDF (Network Common Data Form) Tecplot binary files XMDF (eXtensible Model Data Format) External links CGNS home page CGNS Mid Level Library MEX interface of CGNS for MATLAB and Octave pyCGNS CGNS 4.5 Release notes Computer file formats Computational fluid dynamics C (programming language) libraries
CGNS
[ "Physics", "Chemistry" ]
926
[ "Computational fluid dynamics", "Fluid dynamics", "Computational physics" ]
16,350,783
https://en.wikipedia.org/wiki/Illusions%20of%20self-motion
Illusions of self-motion (or "vection") occur when one perceives bodily motion despite no movement taking place. One can experience illusory movements of the whole body or of individual body parts, such as arms or legs. Vestibular illusions The vestibular system is one of the major sources of information about one's own motion. Disorders of the visual system can lead to dizziness, vertigo, and feelings of instability. Vertigo is not associated with illusory self-motion as it does not typically make one feel as though they are moving; however, in a subclass of vertigo known as subjective vertigo one does experience their own motion. People experience themselves being pulled heavily in one direction. There are also specific self-motion illusions that can occur through abnormal stimulation of various parts of the vestibular system, often encountered in aviation. This includes an illusion of inversion, in which one feels like they're tumbling backwards. Through various stimuli, people can be made to feel as if they are moving when they are not, not moving when they are, tilted when they are not, or not tilted when they are. Visual illusions When a large part of the visual field moves, viewers feel like they have moved and that the world is stationary. For example, when one is in a train at a station, and a nearby train moves, one can have the illusion that one's own train has moved in the opposite direction. Common sorts of vection include circular vection, where an observer is placed at the center of rotation of a large vertically-oriented rotating drum, usually painted with vertical stripes; linear vection, where an observer views a field that either approaches or recedes; and roll vection, where an observer views a patterned disk rotating around their line of sight. During circular vection, the observer feels like they are rotating and the drum is stationary. During linear vection, the observer feels like they have moved forwards or backwards and the stimulus has stayed stationary. During roll vection, the observer feels like they have rotated around the line of sight and the disk has stayed stationary. Inducing vection can also induce motion sickness in susceptible individuals. Auditory illusions Compared to visually-induced vection, auditorily-induced vection is generally weaker. Auditory-induced vection can only be elicited in about 25% to 75% of the participants under laboratory conditions, and only when participants are blindfolded. Most of the research has focused on eliciting circular vection horizontally about the body. Researchers have induced circular vection by mechanically rotating a buzzer around a subject in the dark or by presenting sound sequentially in one of several speakers arranged in a circular array. Adding auditory stimuli can significantly enhance visual, vestibular, and biomechanical vections. Biomechanical illusions Sea legs, dock rock, or stillness illness After being on a small boat for a few hours and then going back onto land, it may feel like there is still rising and falling, as if one is still on the boat. It can also occur on other situations, such as after a long journey by train or by aircraft, or after working up a swaying tree. It is not clear whether sea legs are a form of aftereffect to the predominant frequency of the stimulation (e.g., the waves or the rocking of the train), whether it is a form of learning to adjust one's gait and posture. The "sea legs" condition needs to be distinguished from mal de debarquement, which is much more long-lasting. 
Treadmills Subjects report a strong sense of self-rotation from stepping along a circular treadmill in the dark, which can be further enhanced through auditory cues. After spending more than 10 minutes on a linear treadmill, it is common to experience the visual illusion of moving at a substantially accelerated pace for 2-3 minutes once back on solid ground. See also Balance disorder Broken escalator phenomenon Chronic subjective dizziness Ideomotor phenomenon Proprioception Seasickness Sense of balance, also known as equilibrioception Sensory illusions in aviation Spatial disorientation Tetris effect References Self-motion Motor control
Illusions of self-motion
[ "Biology" ]
852
[ "Behavior", "Motor control" ]
16,351,555
https://en.wikipedia.org/wiki/Tobacco-specific%20nitrosamines
Tobacco-specific nitrosamines (TSNAs) comprise one of the most important groups of carcinogens in tobacco products, particularly cigarettes (traditional and electronic) and fermented dipping snuff. Background These nitrosamine carcinogens are formed from nicotine and related compounds by a nitrosation reaction that occurs during the curing and processing of tobacco. Essentially the plant's natural alkaloids combine with nitrate forming the nitrosamines. They are called tobacco-specific nitrosamines because they are found only in tobacco products, and possibly in some other nicotine-containing products. The tobacco-specific nitrosamines are present in cigarette smoke and to a lesser degree in "smokeless" tobacco products such as dipping tobacco and chewing tobacco; additional information has shown that trace amounts of NNN and NNK have been detected in e-cigarettes. They are present in trace amounts in snus. They are important carcinogens in cigarette smoke, along with combustion products and other carcinogens. Among the tobacco-specific nitrosamines, nicotine-derived nitrosamine ketone (NNK) and N-nitrosonornicotine (NNN) are the most carcinogenic. Others include -nitrosoanatabine (NAT) and N-nitrosoanabasine (NAB). NNK and its metabolite 4-(methylnitrosamino)-1-(3-pyridyl)-1-butanol (NNAL) are potent systemic lung carcinogens in rats. Tumors of the nasal cavity, liver, and pancreas are also observed in NNK- or NNAL-treated rats. NNN is an effective esophageal carcinogen in the rat, and induces respiratory tract tumors in mice, hamsters, and mink. A mixture of NNK and NNN caused oral tumors when swabbed in the rat oral cavity. Thus, considerable evidence supports the role of tobacco-specific nitrosamines as important causative factors for cancers of the lung, pancreas, esophagus, and oral cavity in people who use tobacco products. Metabolism and chemical binding to DNA (adduct formation) are critical in cancer induction by NNK and NNN. Human metabolism of NNK and NNN varies widely from individual to individual, and current research is attempting to identify those individuals who are particularly sensitive to the carcinogenic effects of these compounds. Such individuals would be at higher risk for cancer when they use tobacco products or are exposed to secondhand smoke. Identification of high-risk individuals could lead to improved methods of prevention of tobacco-related cancer, and improved risk valuation for insurers. See also Polycyclic aromatic hydrocarbons, also found in cigarette smoke Nicotine References External links Carcinogens Nitrosamines
Tobacco-specific nitrosamines
[ "Chemistry", "Environmental_science" ]
594
[ "Carcinogens", "Toxicology" ]
16,351,567
https://en.wikipedia.org/wiki/Mass%20fraction%20%28chemistry%29
In chemistry, the mass fraction of a substance within a mixture is the ratio of the mass of that substance to the total mass of the mixture. Expressed as a formula, the mass fraction is the mass of the substance divided by the total mass of the mixture (see the symbolic summary below). Because the individual masses of the ingredients of a mixture sum to the total mass of the mixture, their mass fractions sum to unity. Mass fraction can also be expressed, with a denominator of 100, as percentage by mass (in commercial contexts often called percentage by weight, abbreviated wt.% or % w/w; see mass versus weight). It is one way of expressing the composition of a mixture in a dimensionless form; mole fraction (percentage by moles, mol%) and volume fraction (percentage by volume, vol%) are others. When the prevalences of interest are those of individual chemical elements, rather than of compounds or other substances, the term mass fraction can also refer to the ratio of the mass of an element to the total mass of a sample. In these contexts an alternative term is mass percent composition. The mass fraction of an element in a compound can be calculated from the compound's empirical formula or its chemical formula. Terminology Percent concentration does not refer to this quantity. This improper name persists, especially in elementary textbooks. In biology, the unit "%" is sometimes (incorrectly) used to denote mass concentration, also called mass/volume percentage. A solution with 1 g of solute dissolved in a final volume of 100 mL of solution would be labeled as "1%" or "1% m/v" (mass/volume). This is incorrect because the unit "%" can only be used for dimensionless quantities. Instead, the concentration should simply be given in units of g/mL. Percent solution or percentage solution are thus terms best reserved for mass percent solutions (m/m, m%, or mass solute/mass total solution after mixing), or volume percent solutions (v/v, v%, or volume solute per volume of total solution after mixing). The very ambiguous terms percent solution and percentage solution with no other qualifiers continue to occasionally be encountered. In thermal engineering, vapor quality is used for the mass fraction of vapor in the steam. In alloys, especially those of noble metals, the term fineness is used for the mass fraction of the noble metal in the alloy. Properties The mass fraction is independent of temperature. Related quantities Mixing ratio The mixing of two pure components can be expressed by introducing their (mass) mixing ratio, the ratio of their masses; the mass fraction of each component can then be written in terms of this ratio. The mixing ratio equals the ratio of the mass fractions of the components, because dividing both numerator and denominator by the total mass of the components leaves the ratio unchanged. Mass concentration The mass fraction of a component in a solution is the ratio of the mass concentration of that component ρi (the density of that component in the mixture) to the density of the solution. Molar concentration The relation to molar concentration follows by substituting the relation between mass and molar concentration: the mass fraction equals the molar concentration of the component multiplied by its molar mass and divided by the density of the solution. Mass percentage Mass percentage is defined as the mass fraction multiplied by 100. Mole fraction The mole fraction of a component can be obtained from its mass fraction by dividing by the component's molar mass and multiplying by the average molar mass of the mixture. Spatial variation and gradient In a spatially non-uniform mixture, the mass fraction gradient gives rise to the phenomenon of diffusion.
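A symbolic summary of the relations described above. The notation is a choice made here for clarity rather than one fixed by the text: w_i is the mass fraction of component i, m_i its mass, ρ_i its mass concentration, c_i its molar concentration, M_i its molar mass, x_i its mole fraction, ρ the density of the mixture and M̄ its average molar mass; r is the mass mixing ratio of a two-component mixture.

```latex
\begin{align*}
  w_i &= \frac{m_i}{m_\mathrm{tot}}, &
  \sum_{i=1}^{n} w_i &= 1, \\
  w_i &= \frac{\rho_i}{\rho} = \frac{c_i M_i}{\rho}, &
  x_i &= w_i \,\frac{\bar{M}}{M_i}, \\
  w_1 &= \frac{r}{1+r}, &
  w_2 &= \frac{1}{1+r} \qquad \bigl(r = m_1/m_2\bigr).
\end{align*}
```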
See also Mass-flux fraction References Combustion Dimensionless quantities of chemistry
Mass fraction (chemistry)
[ "Physics", "Chemistry", "Mathematics" ]
728
[ "Physical quantities", "Quantity", "Chemical quantities", "Dimensionless quantities of chemistry", "Combustion", "Dimensionless quantities", "Dimensionless numbers of chemistry" ]
16,352,437
https://en.wikipedia.org/wiki/Unicode%20input
Unicode input is a method of adding a specific Unicode character to a computer file; it is a common way to input characters not directly supported by a physical keyboard. Characters can be entered either by selecting them from a display, by typing a certain sequence of keys on a physical keyboard, or by drawing the symbol by hand on a touch-sensitive screen. In contrast to ASCII's 96-element character set (which it contains), Unicode encodes hundreds of thousands of graphemes (characters) from almost all of the world's written languages and many other signs and symbols. A Unicode input system must provide for a large repertoire of characters, ideally all valid Unicode code points. This is different from a keyboard layout, which defines keys and their combinations only for a limited number of characters appropriate for a certain locale. Unicode numbers Unicode characters are distinguished by code points, which are conventionally represented by "U+" followed by four, five or six hexadecimal digits, for example U+00AE or U+1D310. Characters in the Basic Multilingual Plane (BMP), containing modern scripts – including many Chinese and Japanese characters – and many symbols, have a 4-digit code. Historic scripts, but also many modern symbols and pictographs (such as emoticons, emojis, playing cards and many CJK characters), have 5-digit codes. Glyph availability An application can display a character only if it can access a computer font which contains a glyph for that character. Fonts usually have incomplete Unicode coverage; most only contain the glyphs needed to support a few writing systems. However, most modern browsers and other text-processing applications are able to display multilingual content because they perform font substitution, automatically switching to a fallback font when necessary to display characters which are not supported in the current font. Which fonts are used for fallback, and the thoroughness of Unicode coverage, vary by software and operating system; some software will search for a suitable glyph in all of the installed fonts, others only search within certain fonts. If an application does not have access to a glyph for a required codepoint in the specified font, the character is shown using the font's glyph for missing characters. This often appears as an empty box, ☐ (nicknamed "tofu" based on the shape), a box with an X in it, ☒, a diamond with a question mark, �, or a box with a question mark in it, ⍰. Techniques Extended keyboard mapping Most operating systems support extended keyboard mapping, the facility to increase the repertoire of characters available, using techniques such as: Alternate graphic ("AltGr"), which gives a third and fourth meaning to every key; Compose key (sometimes called multi key), a key on a computer keyboard that indicates that the following (usually 2 or more) keystrokes trigger the insertion of an alternate character, typically a precomposed character or a symbol; dead keys, typically used to attach a specific diacritic to a base letter; or indeed combinations of these. These techniques facilitate entry of character sets beyond the basic set provided as standard with the computer. Selection from a screen Many systems provide a way to select Unicode characters visually. ISO/IEC 14755 refers to this as a screen-selection entry method. Microsoft Windows has provided a Unicode version of the Character Map program, appearing in the consumer edition since XP. This is limited to characters in the Basic Multilingual Plane (BMP).
Characters are searchable by Unicode character name, and the table can be limited to a particular code block. Starting with Windows 10, Microsoft Windows also contains a so-called "emoji keyboard". It can be started by holding down the Windows key (the one with the Windows symbol on it) and hitting the period or semicolon key. The emoji keyboard allows entry of emojis as well as symbols. More advanced third-party tools of the same type are also available (a notable freeware example is BabelMap, which supports all Unicode characters). On most Linux desktop environments, equivalent tools – such as gucharmap (GNOME) or kcharselect (KDE) – are available. Generally these tools let the user "copy" the selected characters into the clipboard and then paste them into the document, rather than emulating direct typing of them. It is often practical to just find the desired character on the web or in another document, and copy and paste it from there. Decimal input (Alt codes) Some programs running in Microsoft Windows, including recent versions of Word and Notepad, can produce characters from their Unicode code points expressed in decimal and entered on the numeric keypad with the Alt key held down. For example, the Euro sign has 20AC as its hexadecimal code point, which is 8364 in decimal, so holding Alt and typing 8364 on the numeric keypad will produce the € symbol. Similar sequences produce other characters, such as double-struck (blackboard bold) letters. Decimal code points in the range 160–255 must be entered with a leading zero (so that the Windows code page is chosen), and furthermore the Windows code page must be set to match Unicode (CP1252 must be used). A code in this range entered with a leading zero yields the CP1252 (Unicode-matching) character for that code point, but the same digits entered without the leading zero are interpreted according to the OEM code page in use, such as Code page 437, and may yield a different character. Also, codes 0128 through 0159 yield the characters assigned in rows 8 and 9 of the CP1252 layout, rather than the C1 control codes that are assigned to those numbers in Unicode. In programs which were not designed to handle Alt codes over 255, the character retrieved usually corresponds to the remainder when the number is divided by 256. The text editor Vim allows characters to be specified by two-character mnemonics referred to as digraphs. The installed set can be augmented by custom mnemonics defined for arbitrary code points, specified in decimal. For example, as decimal 9881 is equal to hexadecimal 2699, a custom digraph definition can associate "Gr" with the gear symbol ⚙ (U+2699). See below for use of decimal code points in HTML. Hexadecimal input Clause 5.1 of ISO/IEC 14755 describes a Basic method whereby a beginning sequence is followed by the hex number representation of the code point and the ending sequence. Most modern systems have some method to emulate this, sometimes limited to four digits (thus only the Basic Multilingual Plane). In Microsoft Windows Hexadecimal Unicode input can be enabled by adding a string type (REG_SZ) value called EnableHexNumpad to the registry key HKEY_CURRENT_USER\Control Panel\Input Method and assigning the value data 1 to it. Users will need to log off and back in after editing the registry for this input method to start working. (In versions earlier than Windows Vista, users needed to reboot for it to start working.) Unicode characters can then be entered by holding down Alt, typing + on the numeric keypad followed by the hexadecimal code, and then releasing Alt. This may not work for 5-digit hexadecimal codes. Some versions of Windows may require the digits 0-9 to be typed on the numeric keypad or require NumLock to be on.
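The decimal and hexadecimal forms used by these input methods are just two spellings of the same code point. The short Python sketch below (standard library only, not tied to any particular operating system) shows the conversions involved; the choice of the Euro sign simply follows the example above.

```python
# Convert between code points, characters and the notations used above.

def describe(codepoint: int) -> str:
    """U+ notation, decimal value and the character itself."""
    return f"U+{codepoint:04X} = decimal {codepoint} = {chr(codepoint)!r}"

euro = int("20AC", 16)                 # hexadecimal 20AC is decimal 8364
print(describe(euro))                  # U+20AC = decimal 8364 = '€'

# HTML numeric character references for the same character (see the HTML section below):
print(f"&#{euro};", f"&#x{euro:x};")   # &#8364; &#x20ac;

# Going the other way, from a character to its code point:
print(f"U+{ord('©'):04X}")             # U+00A9
```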
In some applications (Word, Notepad and LibreOffice programs), pressing Alt+X will replace the hexadecimal number to the left of the cursor with the matching Unicode character. Unless it is six hexadecimal digits long, the code must not be preceded by any digit or letters a–f, as they may be treated as part of the code to be converted. For example, entering af1 followed by Alt+X (or the localized equivalent in a French version) will produce '૱' (U+0AF1), but entering a0000f1 followed by the same shortcut will produce 'añ' ('a' followed by character U+00F1). This facility enables Unicode characters to be entered in other applications: one can create a desired character in Notepad, for example, and then cut and paste it wherever desired. In macOS Hex input of Unicode must be enabled. In Mac OS 8.5 and later, one can choose the Unicode Hex Input keyboard layout; in OS X (10.10) Yosemite, this can be added in Keyboard → Input Sources. Holding down the Option key, one types the four-digit hexadecimal Unicode code point and the equivalent character appears; one can then release the Option key. Characters outside of the BMP (the Basic Multilingual Plane) exceed the four-digit limit of the Unicode hex input mechanism but can be entered by using surrogate pairs: holding down the Option key while entering the first surrogate and then the second surrogate, then releasing the Option key. In X11 (Linux and other Unix variants including ChromeOS) In many applications one or both of the following methods work to directly input Unicode characters: holding down Ctrl and Shift and typing U followed by the hex digits, then releasing Ctrl and Shift; or typing Ctrl+Shift+U, releasing, then typing the hex digits and pressing Enter (or, on some systems, Space). This is supported by GTK and Qt applications, and possibly others. In ChromeOS, this is an operating system function. In platform-independent applications In Emacs, C-x 8 RET invokes the insert-char command, which accepts input either via hex code point or Unicode character name. In LibreOffice 5.1 onwards, the method described above for Windows works. In Opera versions that use the Presto layout engine (i.e. up to and including version 12.xx), one enters the hexadecimal number of the desired symbol or character and then presses a conversion shortcut (with an alternative shortcut on macOS). In the Vim editor, in insert mode, the user first types Ctrl-V u (for code points up to 4 hex digits long; Ctrl-V U for longer ones), then types in the hexadecimal number of the symbol or character desired, and it will be converted into the symbol. (On Microsoft Windows, Ctrl-Q may be required instead of Ctrl-V.) In AutoCAD, an input sequence or three dedicated shortcuts can be used. HTML In HTML and XML, character codes to be rendered as characters are prefixed by ampersand and number sign (&#), and are followed by a semicolon (;). The code point can be either in decimal or in hexadecimal; in the latter case it is preceded by an "x". Leading zeros may be omitted. A number of characters may be represented by a named entity. Example: In HTML/XML, the copyright sign © (U+00A9) may be coded as: &#169; (decimal code point) &#xa9; (hexadecimal code point) &copy; (entity name) This works in many pieces of software that accept HTML markup, such as Thunderbird and Wikipedia editing. See also ASCII Digraphs and trigraphs (programming) Notes References Input Input methods
Unicode input
[ "Technology" ]
2,252
[ "Input methods", "Natural language and computing" ]
16,353,959
https://en.wikipedia.org/wiki/Mass%20provisioning
Mass provisioning is a form of parental investment in which an adult insect, most commonly a hymenopteran such as a bee or wasp, stocks all the food for each of her offspring in a small chamber (a "cell") before she lays the egg. This behavior is common in both solitary and eusocial bees, though essentially absent in eusocial wasps. Diversity In bees, stored provisions typically consist of masses of mixed pollen and nectar, though a few species store floral oils. In a few cases, such as stingless bees and some sweat bees, the number of cells in a single nest can number in the hundreds to thousands, but more typically a nest contains either a single cell, or a small number (fewer than 10). In predatory wasps, the food is typically in the form of paralyzed or dead prey items; after digging the nest they quickly catch one or a few prey animals, bring them to the nest and lay eggs on them, seal the nest and leave. Some wasp lineages (e.g. Crabronidae) show variation, with some species practicing mass provisioning, while related species may bring back prey after the egg has hatched, and then seal the nest (such "delayed provisioning" is considered to be a stage in the evolution of progressive provisioning and thus of parental care in insects), or re-open the nest and add more prey items as the larva grows, which is genuine progressive provisioning. In 1958, Howard E. Evans published a study of the nesting behaviour of Sphecini digger wasps, showing a range of ways of stocking their nests. In Prionyx, several Nearctic and Palaearctic species catch a grasshopper, and then dig a nest for it, so there is one prey per nest. The nest consists of a single cell, and the egg is laid touching the coxa of a hind leg. In contrast, a Neotropical species, P. spinolae, digs the nest first, creating multiple cells, and stocks each cell with 5–10 grasshoppers; the egg is laid on the underside of the thorax. No eusocial wasp species carries out mass provisioning in the strict sense, though the vespid wasp genus Brachygastra stores provisions of honey in its nests; the honey is used to supplement larval feeding (larvae are still fed masticated prey items, for protein), and also eaten by adults. The best-known examples from outside the Hymenoptera are dung beetles, which typically provision with either leaves or dung. Once the provisions are in place and the egg is laid, the cell is sealed, to protect the developing brood. Social behaviour While mass provisioning is typical of some eusocial lineages, such as some sweat bees and all stingless bees, many other eusocial insects, such as ants and honey bees, instead practise progressive provisioning, where the larvae are fed directly and continually during their development; as such, both highly eusocial and primitively eusocial lineages can perform either type of provisioning. References Sources Wilson, E.O. (1971) The Insect Societies. Harvard, Belknap Press. Ethology
Mass provisioning
[ "Biology" ]
661
[ "Behavioural sciences", "Ethology", "Behavior" ]
16,354,265
https://en.wikipedia.org/wiki/List%20of%20products%20based%20on%20FreeBSD
There are many products based on FreeBSD. Information about these products and the version of FreeBSD they are based on is often difficult to come by, since this fact is not widely publicised. Libre software and hardware using free software BSDRP – BSD Router Project: Open Source Router Distribution CheriBSD – ARM-embedded-focused FreeBSD adaptation ; Capability Enabled, Unix-like Operating System which takes advantage of Capability Hardware on Arm's Morello and CHERI-RISC-V platforms. ClonOS – FreeBSD based distro for virtual hosting platform and appliance. Darwin – The UNIX-based, open-source foundation of macOS, iOS, iPadOS, watchOS, tvOS, visionOS, and bridgeOS, includes code from FreeBSD and the Mach kernel from Carnegie Mellon DesktopBSD – [defunct] KDE-based desktop-oriented distribution DragonFlyBSD – FreeBSD independent fork FreeSBIE – Live CD GhostBSD – GTK-based distribution, that defaults Xfce and MATE as GUI HardenedBSD – HardenedBSD is a security-enhanced fork of FreeBSD. helloSystem – helloSystem is a desktop system for creators with a focus on simplicity, elegance, and usability, especially for ex macOS users disappointed by Apple strategy ravynOS - an OS aimed to provide the finesse of macOS with the freedom of FreeBSD. iXsystems TrueNAS storage appliances were based on FreeBSD 10.3 TrueNAS CORE and Enterprise (formerly known as FreeNAS), is based on FreeBSD ; however TrueNAS Scale, alternative of both TrueNAS Core/Entreprise, is based on Debian Gnu/Linux. TrueOS – discontinued FreeBSD distribution aimed at the server market, previously a desktop distribution, abandoned to focus on TrueNAS Core. MidnightBSD — A GNUstep-based independent fork of FreeBSD for desktops, however installer is not graphical MyBee – Open source and free distribution for managing containers (FreeBSD jail) and cloud VMs (Bhyve) through a simplified API. m0n0wall – Embedded firewall software package NAS4Free – Open source storage platform NomadBSD – a persistent live system for USB flash drives, based on FreeBSD. OPNsense – Open source and free firewall, fork of pfSense and successor to m0n0wall pfSense – Open source and free network firewall distribution Proprietary software and hardware using proprietary software Beckhoff TwinCAT/BSD for Industrial PCs Blue Coat Systems network appliances Borderware appliances (firewall, VPN, Anti-SPAM, Web filter etc.) are based on a FreeBSD kernel Check Point IPSO security appliances Citrix Systems Netscaler application delivery software is based on FreeBSD Coyote Point GX-series web acceleration and load balancer appliances Dell Compellent enterprise storage systems (all 64-bit versions) Hobnob WirelessWAN IronPort AsyncOS is based on a FreeBSD kernel Isilon Systems' OneFS, the operating system used on Isilon IQ-series clustered storage systems Juniper Networks Junos Junos prior to 5.0 was based on FreeBSD 2.2.6 Junos between 5.0 and 7.2 (inclusive) is based on FreeBSD 4.2 Junos 7.3 and higher is based on FreeBSD 4.10 Junos 8.5 is based on FreeBSD 6.1 Junos 15.1 is based on FreeBSD 10 Junos 18.1 is based on FreeBSD 11 KACE Networks's KBOX 1000 & 2000 Series Appliances and the Virtual KBOX Appliance Lynx Software Technologies LynxOS, uses FreeBSD's networking stack McAfee SecurOS, used in e.g. 
Firewall Enterprise (aka Sidewinder) NetApp filers based on Data ONTAP Netflix Open Connect appliances Panasas parallel network storage systems Panasonic uses FreeBSD in their Viera TV receivers QNAP's QES operating system Sandvine's network policy control products Silicon Graphics International uses FreeBSD in their ArcFiniti MAID disk arrays, formerly manufactured by COPAN. Sony Computer Entertainment's PlayStation 3, PlayStation 4 and PlayStation Vita and Playstation 5 consumer gaming consoles. Sophos Email Appliance Spectra Logic nTier Verde backup appliances Symmetricom Timing Solutions The Weather Channel's IntelliStar local forecast computer Xinuos OpenServer 10 References FreeBSD based products FreeBSD
List of products based on FreeBSD
[ "Technology" ]
952
[ "Computing-related lists", "Lists of software" ]
16,355,158
https://en.wikipedia.org/wiki/Iwasawa%20group
In mathematics, a group is called an Iwasawa group, M-group or modular group if its lattice of subgroups is modular. Alternatively, a group G is called an Iwasawa group when every subgroup of G is permutable in G. Iwasawa proved that a p-group G is an Iwasawa group if and only if one of the following cases holds: G is a Dedekind group, or G contains an abelian normal subgroup N such that the quotient group G/N is a cyclic group and, if q denotes a generator of G/N, then for all n ∈ N, q^{-1}nq = n^{1+p^s}, where s ≥ 1 in general, but s ≥ 2 for p = 2. Iwasawa's proof was later deemed to have essential gaps, which were filled by Franco Napolitani and Zvonimir Janko. Schmidt has provided an alternative proof along different lines in his textbook. As part of Schmidt's proof, he proves that a finite p-group is a modular group if and only if every subgroup is permutable. Every subgroup of a finite p-group is subnormal, and those finite groups in which subnormality and permutability coincide are called PT-groups. In other words, a finite p-group is an Iwasawa group if and only if it is a PT-group. Examples The Iwasawa group of order 16 is isomorphic to the modular maximal-cyclic group of order 16. See also Modular law for groups Further reading Both finite and infinite M-groups are presented in textbook form, and the subject remains an area of modern study. References Finite groups Properties of groups
Iwasawa group
[ "Mathematics" ]
342
[ "Mathematical structures", "Algebraic structures", "Finite groups", "Properties of groups" ]
16,357,447
https://en.wikipedia.org/wiki/FPG-9
The FPG-9 Foam Plate Glider is a simple, hand-launched glider made from a 9 inch () foam dinner plate, featuring a moveable rudder and elevons, allowing for an inexpensive way to teach basic flight mechanics. The model was created by Jack Reynolds, a volunteer at the Academy of Model Aeronautics' (AMA) National Model Aviation Museum. Originally the model was used as a hands-on activity for museum visitors and museum outreach. In 2004, the AMA incorporated the model into Aerolab, an instructional program developed for middle school physical science and math programs, that uses simple flying model aircraft as tools to teach Force and Motion. Today, besides the AMA, numerous other groups are now using the FPG-9. It may be built and flown to satisfy an elective activity for the Boy Scouts of America Aviation Merit Badge. The Children's Museum of Indianapolis uses it as part of their CSI: Flight Adventures Project, an educational program highlighting the use of model aircraft as scientific tools for research for grades 3 - 5. The National Museum of the United States Air Force also uses the model as part of their educational programming to include Project Soar (Science in Ohio through Aerospace Resources). References External links Instructions for building an FPG-9, written By Jack Reynolds, Volunteer, National Model Aviation Museum Boy Scouts Aviation Merit Badge Requirements FPG-9 Cut-out Pattern, by Jack Reynolds PDF Build Guide, published by the Academy of Model Aeronautics http://amablog.modelaircraft.org/amamuseum/files/2012/05/MA-May-2003-FPG9.pdf http://amablog.modelaircraft.org/amamuseum/2012/05/17/birth-of-the-fpg-9/ https://www.youtube.com/watch?v=pNtew_VzzWg http://www.childrensmuseum.org/sites/default/files/Documents/Educators/3-5_FlightAdventures_UOS.pdf https://web.archive.org/web/20141127022857/http://www.nationalmuseum.af.mil/shared/media/document/AFD-121107-012.pdf Paper folding Educational toys
FPG-9
[ "Mathematics" ]
478
[ "Recreational mathematics", "Paper folding" ]
16,358,617
https://en.wikipedia.org/wiki/History%20of%20Microsoft%20Office
This is a history of the various versions of Microsoft Office, consisting of a bundle of several different applications which changed over time. This list only includes final releases and not pre-release or beta software. It also does not list the history of the constituent standalone applications which were released much earlier, starting with Word in 1983, Excel in 1985, and PowerPoint in 1987. Office versions Summary Microsoft Office 95 Microsoft Office 97 Microsoft Office 2000 Microsoft Office 2000 Personal was an additional SKU, solely designed for the Japanese market, that included Word 2000, Excel 2000 and Outlook 2000. This compilation would later become widespread as Microsoft Office 2003 Basic. Microsoft Office XP Microsoft Office 2003 Microsoft Office 2007 1 Office Customization Tool is used to customize the installation of Office 2007 by creating a Windows Installer patch file (.MSP) and replacing the Custom Installation Wizard and Custom Deployment Wizard included in earlier versions of the Office Resource Kit that created a Windows Installer Transform (.MST). Microsoft Office 2010 Remarks 1 Office 2010 Personal was made available for distribution only in Japan. 2 The retail version of Office 2010 Home and Student can be installed on up to three machines in a single household for non-commercial use only. The Product Key Card version only allows a single installation on a single machine. 3 The retail versions of Office 2010 Home and Business and Office 2010 Professional can be installed on two devices including a primary machine, and a portable device such as a laptop, for use by a single user. The Product Key Card version only allows a single installation on a single machine. 4 On February 1, 2012, Office 2010 University replaced the previous Office 2010 Professional Academic edition in an effort to curtail fraudulent product use. 5 Office 2010 Professional Plus is only available for Volume License customers. The retail version is offered through MSDN or TechNet. 6 The Office Customization Tool is used to customize the installation of Office by creating a Windows Installer Patch (.MSP) file, and replaces the Custom Installation Wizard and Custom Deployment Wizard included in 2003 and earlier versions of the Office Resource Kit. It is only available in Volume License editions. Microsoft Office 2013 Remarks 1 The Windows RT versions do not include all of the functionality provided by other versions of Office. 2 Commercial use of Office RT is allowed through volume licensing or business subscriptions to Office 365. 3 Windows Store versions are also available. 4 InfoPath was initially part of Office 365 Small Business Premium. However, it no longer is. Microsoft Office 2016 As with previous versions, Office 2016 is made available in several distinct editions aimed towards different markets. All traditional editions of Microsoft Office 2016 contain Word, Excel, PowerPoint and OneNote and are licensed for use on one computer. Retail channels of Office 2016 are installed via Click-to-Run (C2R), whereas Office 2016 volume licensing channels use the traditional Windows Installer (MSI). Five traditional editions of Office 2016 were released for Windows: Home & Student: This retail suite includes the core applications only. Home & Business: This retail suite includes the core applications and Outlook. Standard: This suite, only available through volume licensing channels, includes the core applications, as well as Outlook and Publisher.
Professional: This retail suite includes the core applications, as well as Outlook, Publisher and Access. Professional Plus: This suite, available through MSDN retail channels and volume licensing channels, includes the core applications, as well as Outlook, Publisher, Access and Skype for Business. This edition is deployed via C2R for MSDN retail channels and via MSI for volume licensing channels. Three traditional editions of Office 2016 were released for Mac: Home & Student: This retail suite includes the core applications only. Home & Business: This retail suite includes the core applications and Outlook. Standard: This suite, only available through volume licensing channels, includes the core applications and Outlook. Mac versions Office 98 Notes References Microsoft Office History of Microsoft Microsoft Office
History of Microsoft Office
[ "Technology" ]
807
[ "History of software", "History of computing" ]
16,359,310
https://en.wikipedia.org/wiki/Human%20Microbiome%20Project
The Human Microbiome Project (HMP) was a United States National Institutes of Health (NIH) research initiative to improve understanding of the microbiota involved in human health and disease. Launched in 2007, the first phase (HMP1) focused on identifying and characterizing human microbiota. The second phase, known as the Integrative Human Microbiome Project (iHMP) launched in 2014 with the aim of generating resources to characterize the microbiome and elucidating the roles of microbes in health and disease states. The program received $170 million in funding by the NIH Common Fund from 2007 to 2016. Important components of the HMP were culture-independent methods of microbial community characterization, such as metagenomics (which provides a broad genetic perspective on a single microbial community), as well as extensive whole genome sequencing (which provides a "deep" genetic perspective on certain aspects of a given microbial community, i.e. of individual bacterial species). The latter served as reference genomic sequences — 3000 such sequences of individual bacterial isolates are currently planned — for comparison purposes during subsequent metagenomic analysis. The project also financed deep sequencing of bacterial 16S rRNA sequences amplified by polymerase chain reaction from human subjects. Introduction Prior to the HMP launch, it was often reported in popular media and scientific literature that there are about 10 times as many microbial cells and 100 times as many microbial genes in the human body as there are human cells; this figure was based on estimates that the human microbiome includes around 100 trillion bacterial cells and an adult human typically has around 10 trillion human cells. In 2014 the American Academy of Microbiology published a FAQ that emphasized that the number of microbial cells and the number of human cells are both estimates, and noted that recent research had arrived at a new estimate of the number of human cells at around 37 trillion cells, meaning that the ratio of microbial to human cells is probably about 3:1. In 2016 another group published a new estimate of ratio as being roughly 1:1 (1.3:1, with "an uncertainty of 25% and a variation of 53% over the population of standard 70 kg males"). Despite the staggering number of microbes in and on the human body, little was known about their roles in human health and disease. Many of the organisms that make up the microbiome have not been successfully cultured, identified, or otherwise characterized. Organisms thought to be found in the human microbiome, however, may generally be categorized as bacteria, members of domain Archaea, yeasts, and single-celled eukaryotes as well as various helminth parasites and viruses, the latter including viruses that infect the cellular microbiome organisms (e.g., bacteriophages). The HMP set out to discover and characterize the human microbiome, emphasizing oral, skin, vaginal, gastrointestinal, and respiratory sites. The HMP will address some of the most inspiring, vexing and fundamental scientific questions today. Importantly, it also has the potential to break down the artificial barriers between medical microbiology and environmental microbiology. It is hoped that the HMP will not only identify new ways to determine health and predisposition to diseases but also define the parameters needed to design, implement and monitor strategies for intentionally manipulating the human microbiota, to optimize its performance in the context of an individual's physiology. 
The HMP has been described as "a logical conceptual and experimental extension of the Human Genome Project." In 2007 the HMP was listed on the NIH Roadmap for Medical Research as one of the New Pathways to Discovery. Organized characterization of the human microbiome is also being done internationally under the auspices of the International Human Microbiome Consortium. The Canadian Institutes of Health Research, through the CIHR Institute of Infection and Immunity, is leading the Canadian Microbiome Initiative to develop a coordinated and focused research effort to analyze and characterize the microbes that colonize the human body and their potential alteration during chronic disease states. Contributing Institutions The HMP involved participation from many research institutions, including Stanford University, the Broad Institute, Virginia Commonwealth University, Washington University, Northeastern University, MIT, the Baylor College of Medicine, and many others. Contributions included data evaluation, construction of reference sequence data sets, ethical and legal studies, technology development, and more. Phase One (2007-2014) The HMP1 included research efforts from many institutions. The HMP1 set the following goals: Develop a reference set of microbial genome sequences and to perform preliminary characterization of the human microbiome Explore the relationship between disease and changes in the human microbiome Develop new technologies and tools for computational analysis Establish a resource repository Study the ethical, legal, and social implications of human microbiome research Phase Two (2014-2016) In 2014, the NIH launched the second phase of the project, known as the Integrative Human Microbiome Project (iHMP). The goal of the iHMP was to produce resources to create a complete characterization of the human microbiome, with a focus on understanding the presence of microbiota in health and disease states. The project mission, as stated by the NIH, was as follows: The iHMP will create integrated longitudinal datasets of biological properties from both the microbiome and host from three different cohort studies of microbiome-associated conditions using multiple "omics" technologies.The project encompassed three sub-projects carried out at multiple institutions. Study methods included 16S rRNA gene profiling, whole metagenome shotgun sequencing, whole genome sequencing, metatranscriptomics, metabolomics/lipidomics, and immunoproteomics. The key findings of the iHMP were published in 2019. Pregnancy & Preterm Birth The Vaginal Microbiome Consortium team at Virginia Commonwealth University led research on the Pregnancy & Preterm Birth project with a goal of understanding how the microbiome changes during the gestational period and influences the neonatal microbiome. The project was also concerned with the role of the microbiome in the occurrence of preterm births, which, according to the CDC, account for nearly 10% of all births and constitutes the second leading cause of neonatal death. The project received $7.44 million in NIH funding. Onset of Inflammatory Bowel Disease (IBD) The Inflammatory Bowel Disease Multi'omics Data (IBDMDB) team was a multi-institution group of researchers focused on understanding how the gut microbiome changes longitudinally in adults and children suffering from IBD. IBD is an inflammatory autoimmune disorder that manifests as either Crohn's disease or ulcerative colitis and affects about one million Americans. 
Research participants included cohorts from Massachusetts General Hospital, Emory University Hospital/Cincinnati Children's Hospital, and Cedars-Sinai Medical Center. Onset of Type 2 Diabetes (T2D) Researchers from Stanford University and the Jackson Laboratory of Genomic Medicine worked together to perform a longitudinal analysis on the biological processes that occur in the microbiome of patients at risk for Type 2 Diabetes. T2D affects nearly 20 million Americans with at least 79 million pre-diabetic patients, and is partially characterized by marked shifts in the microbiome compared to healthy individuals. The project aimed to identify molecules and signaling pathways that play a role in the etiology of the disease. Achievements The impact to date of the HMP may be partially assessed by examination of research sponsored by the HMP. Over 650 peer-reviewed publications were listed on the HMP website from June 2009 to the end of 2017, and had been cited over 70,000 times. At this point the website was archived and is no longer updated, although datasets do continue to be available. Major categories of work funded by HMP included: Development of new database systems allowing efficient organization, storage, access, search and annotation of massive amounts of data. These include IMG, the Integrated Microbial Genomes database and comparative analysis system; IMG/M, a related system that integrates metagenome data sets with isolate microbial genomes from the IMG system; CharProtDB, a database of experimentally characterized protein annotations; and the Genomes OnLine Database (GOLD), for monitoring the status of genomic and metagenomic projects worldwide and their associated metadata. Development of tools for comparative analysis that facilitate the recognition of common patterns, major themes and trends in complex data sets. These include RAPSearch2, a fast and memory-efficient protein similarity search tool for next-generation sequencing data; Boulder ALignment Editor (ALE), a web-based RNA alignment tool; WebMGA, a customizable web server for fast metagenomic sequence analysis; and DNACLUST, a tool for accurate and efficient clustering of phylogenetic marker genes Development of new methods and systems for assembly of massive sequence data sets. No single assembly algorithm addresses all the known problems of assembling short-length sequences, so next-generation assembly programs such as AMOS are modular, offering a wide range of tools for assembly. Novel algorithms have been developed for improving the quality and utility of draft genome sequences. Assembly of a catalog of sequenced reference genomes of pure bacterial strains from multiple body sites, against which metagenomic results can be compared. The original goal of 600 genomes has been far surpassed; the current goal is for 3000 genomes to be in this reference catalog, sequenced to at least a high-quality draft stage. , 742 genomes have been cataloged. Establishment of the Data Analysis and Coordination Center (DACC), which serves as the central repository for all HMP data. Various studies exploring legal and ethical issues associated with whole genome sequencing research. Developments funded by HMP included: New predictive methods for identifying active transcription factor binding sites. Identification, on the basis of bioinformatic evidence, of a widely distributed, ribosomally produced electron carrier precursor Time-lapse "moving pictures" of the human microbiome. 
Identification of unique adaptations adopted by segmented filamentous bacteria (SFB) in their role as gut commensals. SFB are medically important because they stimulate T helper 17 cells, thought to play a key role in autoimmune disease. Identification of factors distinguishing the microbiota of healthy and diseased gut. Identification of a hitherto unrecognized dominant role of Verrucomicrobiota in soil bacterial communities. Identification of factors determining the virulence potential of Gardnerella vaginalis strains in vaginosis. Identification of a link between oral microbiota and atherosclerosis. Demonstration that pathogenic species of Neisseria involved in meningitis, sepsis, and sexually transmitted disease exchange virulence factors with commensal species. Milestones Reference database established On 13 June 2012, a major milestone of the HMP was announced by the NIH director Francis Collins. The announcement was accompanied with a series of coordinated articles published in Nature and several journals including the Public Library of Science (PLoS) on the same day. By mapping the normal microbial make-up of healthy humans using genome sequencing techniques, the researchers of the HMP have created a reference database and the boundaries of normal microbial variation in humans. From 242 healthy U.S. volunteers, more than 5,000 samples were collected from tissues from 15 (men) to 18 (women) body sites such as mouth, nose, skin, lower intestine (stool) and vagina. All the DNA, human and microbial, were analyzed with DNA sequencing machines. The microbial genome data were extracted by identifying the bacterial specific ribosomal RNA, 16S rRNA. The researchers calculated that more than 10,000 microbial species occupy the human ecosystem and they have identified 81 – 99% of the genera. In addition to establishing the human microbiome reference database, the HMP project also discovered several "surprises", which include: Microbes contribute more genes responsible for human survival than humans' own genes. It is estimated that bacterial protein-coding genes are 360 times more abundant than human genes. Microbial metabolic activities; for example, digestion of fats; are not always provided by the same bacterial species. The presence of the activities seems to matter more. Components of the human microbiome change over time, affected by a patient disease state and medication. However, the microbiome eventually returns to a state of equilibrium, even though the composition of bacterial types has changed. Clinical application Among the first clinical applications utilizing the HMP data, as reported in several PLoS papers, the researchers found a shift to less species diversity in vaginal microbiome of pregnant women in preparation for birth, and high viral DNA load in the nasal microbiome of children with unexplained fevers. Other studies using the HMP data and techniques include role of microbiome in various diseases in the digestive tract, skin, reproductive organs and childhood disorders. Pharmaceutical application Pharmaceutical microbiologists have considered the implications of the HMP data in relation to the presence / absence of 'objectionable' microorganisms in non-sterile pharmaceutical products and in relation to the monitoring of microorganisms within the controlled environments in which products are manufactured. The latter also has implications for media selection and disinfectant efficacy studies. 
See also References External links Human Microbiome Project Data Analysis and Coordination Center for the Human Microbiome Project (HMP) The CIHR Canadian Microbiome Initiative The International Human Microbiome Consortium 2006, Lay summary of colon microbiome study (the actual study: Gill et al., 2006) Olivia Judson Microbes ‘R’ Us New York Times 22 July 2009 Gina Kolata Good Health? Thank Your 100 Trillion Bacteria New York Times 13 June 2012 Microbiomes Bacteriology Human genome projects Environmental microbiology Bacteria and humans Microbiology Bioinformatics Computational biology Genome projects
Human Microbiome Project
[ "Chemistry", "Engineering", "Biology", "Environmental_science" ]
2,894
[ "Biological engineering", "Computational biology", "Microbiology", "Bioinformatics", "Environmental microbiology", "Bacteria", "Microscopy", "Genome projects", "Human genome projects", "Bacteria and humans", "Microbiomes" ]
12,069,437
https://en.wikipedia.org/wiki/Microarray%20databases
A microarray database is a repository containing microarray gene expression data. The key uses of a microarray database are to store the measurement data, manage a searchable index, and make the data available to other applications for analysis and interpretation (either directly, or via user downloads); a brief sketch of programmatic access to one such public repository appears after this entry. Microarray databases can fall into two distinct classes: A peer-reviewed, public repository that adheres to academic or industry standards and is designed to be used by many analysis applications and groups. Good examples are the Gene Expression Omnibus (GEO) from NCBI and ArrayExpress from EBI. A specialized repository associated primarily with the brand of a particular entity (lab, company, university, consortium, group), an application suite, a topic, or an analysis method, whether it is commercial, non-profit, or academic. These databases might have one or more of the following characteristics: A subscription or license may be needed to gain full access, The content may come primarily from a specific group (e.g. SMD, UPSC-BASE, or the Immunological Genome Project), There may be constraints on who can use the data or for what purpose data can be used, Special permission may be required to submit new data, or there may be no obvious process at all, Only certain applications may be equipped to use the data, often also associated with the same entity (for example, caArray at NCI is specialized for the caBIG), Further processing or reformatting of the data may be required for standard applications or analysis, They claim to address the 'urgent need' to have a standard, centralized repository for microarray data. (See YMD, last updated in 2003, for example), There is a claim to an incremental improvement over one of the public repositories, A meta-analysis application, which incorporates studies from one or more public databases (e.g. Gemma primarily uses GEO studies; NextBio uses various sources) Several well-known public, curated microarray databases exist, including GEO and ArrayExpress. See also Biological database List of biological databases DNA microarray Microarray analysis techniques External links ArrayExpress: Quick Tour on EBI Train OnLine Exploring functional genomics data with the ArrayExpress Archive on EBI Train OnLine Investigating gene expression patterns with the Gene Expression Atlas on EBI Train OnLine ArrayExpress:Submitting data using MAGE-TAB on EBI Train OnLine ArrayExplorer - A free tool to compare microarrays side by side. Microarrays Genetics databases
Microarray databases
[ "Chemistry", "Materials_science", "Biology" ]
522
[ "Biochemistry methods", "Genetics techniques", "Microtechnology", "Microarrays", "Bioinformatics", "Molecular biology techniques" ]
12,069,712
https://en.wikipedia.org/wiki/Pharmaceutical%20microbiology
Pharmaceutical microbiology is an applied branch of microbiology. It involves the study of microorganisms associated with the manufacture of pharmaceuticals, e.g. minimizing the number of microorganisms in a process environment, excluding microorganisms and microbial byproducts like exotoxin and endotoxin from water and other starting materials, and ensuring the finished pharmaceutical product is sterile. Other aspects of pharmaceutical microbiology include the research and development of anti-infective agents, the use of microorganisms to detect mutagenic and carcinogenic activity in prospective drugs, and the use of microorganisms in the manufacture of pharmaceutical products like insulin and human growth hormone. Drug safety Drug safety is a major focus of pharmaceutical microbiology. Pathogenic bacteria, fungi (yeasts and moulds) and toxins produced by microorganisms are all possible contaminants of medicines, although stringent, regulated processes are in place to ensure the risk is minimal. Antimicrobial activity and disinfection Another major focus of pharmaceutical microbiology is to determine how a product will react in cases of contamination. For example: You have a bottle of cough medicine. Imagine you take the lid off, pour yourself a dose and forget to replace the lid. You come back to take your next dose and discover that you did indeed leave the lid off for a few hours. What happens if a microorganism "fell in" whilst the lid was off? There are tests that look at that. The product is "challenged" with a known amount of specific microorganisms, such as E. coli and C. albicans, and the anti-microbial activity is monitored. Pharmaceutical microbiology is additionally involved with the validation of disinfectants, either according to U.S. AOAC or European CEN standards, to evaluate the efficacy of disinfectants in suspension, on surfaces, and through field trials. Field trials help to establish the frequency of the application of detergents and disinfectants. Methods and specifications Testing of pharmaceutical products is carried out according to a pharmacopeia, of which there are several. For example: In America, the United States Pharmacopeia is used; in Japan there is the Japanese Pharmacopeia; in the United Kingdom there is the British Pharmacopoeia and in Europe the European Pharmacopeia. These contain a test method which is to be followed when testing, along with defined specifications for the amount of microorganisms allowed in a given amount of product. The specifications change depending on the product type and the method by which it is introduced to the body. The pharmacopoeia also covers areas like sterility testing, endotoxin testing, the use of biological indicators, microbial limits testing and enumeration, and the testing of pharmaceutical grade water. Cleanrooms and controlled environments Pharmaceutical microbiologists are required to assess cleanrooms and controlled environments for contamination (viable and particulate) and to introduce contamination control strategies. This includes an understanding of risk assessment. Risk management has been successfully employed in various industrial sectors, such as the US space industry (NASA), the nuclear power industry and the automobile industry, benefiting these industries in several areas. In the pharmaceutical sector, however, the application of risk assessment techniques to production is still in its infancy, and the potential gains are yet to be realized.
Cleanrooms and zones are typically classified according to their use (the main activity within each room or zone) and confirmed by the cleanliness of the air through the measurement of particles. Cleanrooms are microbiologically assessed through environmental monitoring methods. Viable monitoring is designed to detect levels of bacteria and fungi present in defined locations/areas during a particular stage in the activity of processing and filling a product. Viable monitoring is designed to detect mesophilic micro-organisms in the aerobic state. However, some manufacturers may have requirements to examine for other types of microorganisms (such as anaerobes if nitrogen lines are used as part of the manufacturing process). Surface methods include testing various surfaces (product contact surfaces, floors, walls and ceilings) for numbers of microorganisms, using techniques such as contact plates, touch plates, swabs and the surface rinse method. Air monitoring is undertaken using agar settle plates (placed in the locations of greatest risk) or active (volumetric) air-samplers, which provide a quantitative assessment of the number of microorganisms in the air per volume of air sampled; a worked conversion from an air-sampler colony count to CFU per cubic metre is sketched after this entry. Active air-samplers generally fall into the following models: slit-to-agar, membrane filtration and centrifugal samplers. Monitoring methods all use either a general-purpose culture medium such as tryptone soya agar (TSA) at a dual incubation regime of 30 °C – 35 °C and 20 °C – 25 °C, or two different culture media at two different temperatures, one of which is selective for fungi (e.g. Sabouraud Dextrose agar, SDA). The choice of culture media, incubation times and temperatures requires validation.
Pharmaceutical microbiology
[ "Chemistry", "Biology" ]
1,139
[ "Microbiology", "Microscopy" ]
12,069,928
https://en.wikipedia.org/wiki/Gaps%20and%20gores
Gaps and gores are portions of land areas that do not conform to boundaries found in cadastre and other land surveys based upon imprecise measurements and other ambiguities of metes and bounds. A gap, also known as a hiatus, occurs where the descriptions in deeds describing adjacent properties (unintentionally) overlook a space or "gap" between them. A gore occurs where descriptions in larger administrative boundaries (towns, counties) of adjacent jurisdictions or, large parcels, all fail to include some portion of land between them, forming an unclaimed, characteristically triangular "sliver" of land. Disputes often arise regarding the ownership of gaps and gores when they are discovered, usually when developers detect sufficient value in the local land. Local laws will determine whether they are considered abandoned or rather adhere to (or may be absorbed by) one adjacent parcel or another. For example, in Tennessee law, tax map boundaries can become property boundaries (notwithstanding a survey and deed to the contrary) merely by paying the taxes on the land for twenty years in the belief that it was part of the ownership, even if it encompasses adjacent gaps and gores. See adverse possession. See also Gore (surveying) Gore (segment) Land survey Surveying Real property law
Gaps and gores
[ "Engineering" ]
257
[ "Surveying", "Civil engineering" ]
12,070,818
https://en.wikipedia.org/wiki/Tiger%20conservation
Tiger conservation attempts to prevent tigers from becoming extinct and to preserve their natural habitat. This is one of the main objectives of the international animal conservation community. The Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES) has played a crucial role in improving international efforts for tiger conservation. CITES CITES is an international governance network employing tools and measures which adapt and become more efficient with time. One measure specifically aimed at protecting the tiger is visible in the network's efforts to ban the trade of tigers or tiger derivatives. CITES members have agreed to adhere to this international trade ban; once a member state ratifies and implements CITES, it bans such trade within its national borders. The CITES Secretariat is administered by the UNEP, which works closely with NGOs such as The Trade Records Analysis of Flora and Fauna in Commerce (TRAFFIC) to assist member states with the implementation of the convention. States are provided with training and information about requirements (when necessary), and their progress and compliance are monitored and evaluated. In order for CITES to work effectively it requires the involvement of institutions, NGOs, civil society and member states, especially Asian tiger range countries. The Tiger Range Countries (TRC) – countries where tigers still roam free – stretch across Asia, with India alone holding almost 70% of the wild tiger population. While there have been no recent tiger sightings in North Korea, it is the only country listed as a tiger range state which has not ratified CITES. The 13 TRC that are CITES member states held a conference in Russia in 2010 and jointly vowed to double the estimated number of tigers left in the wild (then about 3,200). Poaching, however, remains a very significant problem in all 13 TRC, despite the implementation of CITES regulations within their borders. At the 15th CITES conference, held in Doha, Qatar, in March 2010, all party members agreed to stricter agreements between member states to protect the tiger. However, the United Nations warned that tigers are still at risk of becoming extinct as member states are currently failing to clamp down hard on the illegal trade of tigers and tiger derivatives within their borders. Although CITES has been successful in curbing this illegal trade, CITES as an international institution relies on member states to effectively implement conventions within their national borders. The quality of such implementation varies significantly among member states. For example, Thailand implemented CITES policies to a very high standard but the illegal tiger trade is still rife within this country. A governance structure such as CITES is powerless to control issues such as poaching unless it has the full cooperation of all actors, including the state. Another reason why CITES seems to be failing could be ascribed to the lucrative nature of the tiger trade. The World Bank estimates that the illegal international trade of wildlife on the black market is worth an estimated $10bn per year. By selling one tiger skeleton, a poacher could make an amount equal to what some labourer would earn in 10 years. A report released by the International Union for Conservation of Nature found that wild tiger populations were 40% higher than previously estimated, with between 3,726 and 5,578 tigers believed to be in the wild. Despite these improvements, populations of tigers have declined precipitously in Malaysia and are now likely extinct in Cambodia, Laos, and Vietnam.
India Project Tiger, started in 1973, is a major effort to conserve the tiger and its habitats in India. At the turn of the 20th century, one estimate of the tiger population in India placed the figure at 40,000, yet an Indian tiger census conducted in 1972 revealed the existence of only 1,827 tigers. Various pressures in the latter part of the 20th century led to the progressive decline of wilderness, resulting in the disturbance of viable tiger habitats. At the International Union for Conservation of Nature and Natural Resources (IUCN) General Assembly meeting in Delhi in 1969, serious concern was voiced about the threat to several wildlife species, and the shrinkage of wilderness in India from poaching. In 1970, a national ban on tiger hunting was imposed, and in 1972 the Wildlife Protection Act came into force. The framework was then set to formulate a project for tiger conservation with an ecological approach. Project Tiger aims at tiger conservation in specially constituted tiger reserves, which are representative of various bio-geographical regions in the country. It strives to maintain viable tiger populations in their natural environment. As of 2019, there are 50 tiger reserves in India. At the Kalachakra Tibetan Buddhist festival in India in January 2006, the Dalai Lama preached a ruling against using, selling, or buying wild animals, their products, or derivatives. When Tibetan pilgrims returned to Tibet afterwards, his words resulted in the widespread destruction by Tibetans of their wild animal skins, including tiger and leopard skins used as ornamental garments. In 2010 India signed an agreement, along with 12 other countries with tiger populations, to double its tiger numbers by 2022. India's 2014 tiger census showed a population of 2,226, a sharp increase from its all-time low of 1,411 in 2006 and about a 30% increase from its tiger population in 2011. A comprehensive survey from 2018 showed a tiger population of 2,962, an increase of 33% from the 2014 numbers, although independent researchers and conservation experts have suggested that the promising tiger numbers be used with some caution. China In China, tigers became the target of large-scale ‘anti-pest’ campaigns in the early 1950s; suitable habitats were fragmented following deforestation and the resettlement of people to rural areas, and these settlers hunted tigers and prey species. Though tiger hunting was prohibited in 1977, the population continued to decline, and the tiger has been considered extinct in southern China since 2001. In northeastern China's Hunchun National Nature Reserve, camera-traps recorded a tiger with four cubs for the first time in 2012. During subsequent surveys, between 27 and 34 tigers were documented along the China–Russia border. During the early 1970s, such as at the United Nations Conference on the Human Environment, China rejected the Western-led environmentalist movement as an impediment to the full use of its own resources. However, this stance softened during the 1980s, as China emerged from diplomatic isolation and desired normal trade relations with Western countries. China became a party to the CITES treaty in 1981, bolstering efforts at tiger conservation by transnational groups like Project Tiger, which were supported by the United Nations Development Programme and the World Bank. In 1988, China passed the Law on the Protection of Wildlife, listing the tiger as a Category I protected species.
In 1993, China banned the trade in tiger parts, which led to a drop in the number of tiger bones harvested for use in traditional Chinese medicine. In 2008, the UK newspaper the Sunday Telegraph published an exposé on the illegal sale of tiger bones from protected tiger sanctuaries. Subsequently, in 2018, the State Council of the People's Republic of China proposed a new order that would allow for the use of farmed tiger bones in medical research and treatment, which sparked a significant international backlash. While the implementation of the order was delayed, the order itself was not rescinded. On 25 October 2018, the 25-year ban against the use of rhino horn and tiger body parts was lifted. Tiger trade in Tibet However, as the tiger bone trade was undermined by effective Chinese legislation in the 1990s, the Tibetan people's trade in tiger pelts emerged as a relatively more important threat to tigers. As wealth in the Tibetan areas increased, singers and participants in annual Tibetan horse races began to wear chuba (traditional Tibetan robes) trimmed with tiger, otter, and leopard fur. Clothing ornamented with tiger pelts became a standard of beauty, and even mandatory at weddings, with Tibetan families competing to buy larger and larger pelts to demonstrate their social status. In 2003, Chinese customs officials in Tibet intercepted 31 tiger, 581 leopard, and 778 otter skins, which, if sold in the Tibetan capital of Lhasa, would have netted $10,000, $850, and $250 respectively. By 2004, international conservation organizations such as World Wide Fund for Nature, Fauna and Flora International, and Conservation International were targeting Tibetans in China in successful environmental propaganda campaigns against the tiger skin trade. In the summer of 2005, the Environmental Investigation Agency sent undercover teams to Litang and Nagchu in order to film documentation of Tibetan violations of Chinese environmental law for submission to the Chinese CITES office. In April 2005, Care for the Wild International and Wildlife Trust of India confronted the 14th Dalai Lama about the Tibetan trade, and his response was recorded as "awkward" and "ambushed", with suspicion that the NGOs were trying to "dramatize" the situation and "mak[e] it seem as if Tibetans were the culprit". Conservation efforts In 2017, China launched the world's largest protected area, the Northeast China Tiger and Leopard National Park, in the southern part of the Changbai Mountains along the border with Russia and North Korea. Work during the pilot phase included closure of industrial and mining enterprises, removal of fences, buildings, farms, livestock and hunting gear, rescue and release of wildlife, establishment of feeding points for wildlife, and restoration of fragmented habitat. Researchers suggested that the park's capacity is insufficient to support the sustained existence of a Siberian tiger population. Other areas Forest cover in Vietnam has been reduced to less than 15% of the original extent before the 1940s, due to warfare, illegal logging, and slash-and-burn agricultural practices. The tiger has been legally protected in the country since 1960, but trade of tiger body parts continued into the mid-1990s. Tigers were still present in northern Vietnam bordering China in the 1990s. As of 2015, this population is considered possibly extinct.
In Laos, 14 tigers were documented in semi-evergreen and evergreen forest interspersed with grassland in Nam Et-Phou Louey National Protected Area during surveys from 2013 to 2017. In Sumatra, tiger populations range from lowland peat swamp forests to rugged montane forests. The tiger population in Laos was already depleted when National Biodiversity Conservation Areas were established in 1993. By the late 1990s, tigers were still present in at least five conservation areas. Hunting of tigers for illegal trade of body parts and opportunistic hunting of tiger prey species were considered the main threats to the country's tiger population. Five tigers were recorded in Nam Et-Phou Louey National Protected Area between April 2003 and June 2004. Large wild prey species occurred at low densities, so tigers hunted small prey and livestock, which probably affected their reproduction negatively. In Cambodia, tigers were sighted in remote forest areas in the mid-1980s. Protected areas were established in 1993, but large extents of forest outside these areas were given as logging concessions to foreign companies. In 1998, interviewed hunters corroborated tiger presence in the Cardamom and Dâmrei Mountains. During surveys between 1999 and 2007 in nine protected areas and more than 300 locations across the country, tigers were recorded only in the Mondulkiri Protected Forest and in Virachey National Park. The country's tiger population was therefore considered extremely small. As of 2015, it is considered possibly extinct. In Thailand, forests were protected by establishing 81 national parks, 39 wildlife sanctuaries and 49 non-hunting areas between 1962 and 1996, including 12 large protected areas. Logging was banned in 1989. Despite this extensive protected area network, tigers were recorded in only 10 of 17 protected area complexes during countrywide surveys between 2004 and 2007. Tiger density was lower than predicted on the basis of available forest habitat. The Myanmar tiger population was limited to the Tanintharyi Region and Hukaung Valley Wildlife Sanctuary in 2006. The country is home to two tiger populations, Bengal and Indochinese tigers. In 1996, the composition of the two populations was 60% Bengal tigers and 40% Indochinese tigers. The natural ecological divide for these two populations is assumed to be the Irrawaddy River, but there is no scientific evidence for that hypothesis. DNA studies are needed to confirm it. More recently, the presence of tigers has been confirmed in the Hukawng Valley, Htamanthi Wildlife Sanctuary, and in two small areas in the Tanintharyi Region. The Tenasserim Hills is an important area, but forests are harvested there. In 2015, tigers were recorded by camera traps for the first time in the hill forests of Kayin State. In Peninsular Malaysia, tigers occur only in four large protected areas. The last tiger in Singapore was shot in 1932.
Giving tigers the ability to mate with a larger selection of individuals will increase the gene pool for the tigers, which will lead to more diversity, higher birth rates, and higher cub survival. Panthera is a conservation organization whose main goal is to preserve wild cats, focusing on tigers, lions, snow leopards, and jaguars. In July 2006, Panthera collaborated with the Wildlife Conservation Society (WCS) to form Tigers Forever, one of their main tiger projects. Tigers Forever plans to increase the number of tigers in key areas by 50% over ten years. Key areas include India, Myanmar, Thailand, Laos, Malaysia and Indonesia. This project is experimental and hopes to increase the number of tigers by eliminating human threats and monitoring tiger and prey populations. To accomplish these goals, they are increasing the amount and quality of law enforcement in these areas and working with informants to catch poachers. Another project spearheaded by Panthera is the Tiger Corridor Initiative (TCI). Human development in the Tiger Range Countries (TRC) has left many tiger habitats fragmented. Habitat fragmentation leads to a division of tiger populations, which reduces the gene pool and makes it difficult for tigers to reproduce. The TCI is a newer project, very similar to the Terai-Arc Landscape (TAL) project, that plans to link protected core populations of tigers with one another using corridors that will provide safe passage for tigers. This will give the separated tiger populations access to each other, which in theory should increase the number of tigers as well as genetic diversity. Another organization involved with the conservation of tigers is the Save the Tiger Fund (STF). The STF was founded in 1995 by the National Fish and Wildlife Foundation (NFWF) and focuses on preserving wild tigers. The STF has contributed over $10.6 million and participated in a total of 196 conservation efforts that provide a number of services to help to mitigate human-tiger conflict, protect tiger habitats, research tiger ecology, monitor tiger populations, and educate locals on the importance of saving the tiger. The STF also participates in a grant program that has provided grants to the tiger range countries (TRC) to help protect the existing populations. ExxonMobil is the number one contributor to the STF, donating nearly $12 million between 1995 and 2004. The STF has since teamed up with Panthera to form the STF-Panthera Partnership. They plan to combine their expertise in tiger conservation to help save the wild tiger. The World Wildlife Fund (WWF) also contributes to tiger conservation. They have set an ambitious goal called Tx2 to double the wild tiger population by 2022, the next Chinese Year of the Tiger. To reach this goal, their primary efforts lie in protecting landscapes where they feel tigers have the highest chance of surviving and increasing, preventing poaching, and working to decrease demand for tiger parts. Much of the funding for this project comes from a partnership between the WWF and Leonardo DiCaprio called Save Tigers Now. Save Tigers Now focuses on fundraising to help the WWF meet their Tx2 goal. During the last Year of the Tiger, 2010, a summit called the International Tiger Conservation Forum was held in Russia to discuss efforts to save the tiger.
This meeting led to contributions totaling $127 million from the governments involved to support tiger conservation, and to an agreement by all 13 of the Tiger Range Countries to participate over the next five years in the Global Tiger Recovery Program developed by the Global Tiger Initiative. The Global Tiger Initiative, founded in June 2008, is an alliance between governments created to save wild tigers from extinction. Among other successful conservation programs, the GTI developed the Global Tiger Recovery Program (GTRP) to assist in reaching the goal of doubling the number of wild tigers through effective management and restoration of tiger habitats; the elimination of poaching, smuggling, and illegal trade of tigers and their parts; collaboration in managing borders and stopping illegal trade; working with indigenous and local communities; and returning tigers to their former range. WildTeam uses a social marketing approach to create innovative, community-based conservation solutions to help save tigers in the Sundarbans of India. WildTeam has developed a system of volunteer village teams that save tigers that stray into villages and reduce human-tiger conflict. Data collection techniques Data collection is required to know where conservation efforts and resources need to be applied. To collect such data, techniques such as radio collars and capture-recapture population estimation models have been used to collect population numbers. "Tiger searching" is a basic method that involves either riding elephants or driving off-road vehicles into tiger territory and identifying individuals as well as their locations. The pugmark census technique is also used during these travels. This involves observing paw prints in the ground and taking measurements of width, length and indentation to determine the individual that was in the location. Dogs are also used to assist in tracking tigers by smell. Once the tigers are found, photographs, drawings and notes regarding sex, location, and other details of the individual are taken and sent back to the study camp. There are also multiple reserves that allow professionally guided tourists to explore on elephant-back with a mahout, where sightings are recorded if tigers are seen along the trails. Another method, referred to as "camera trapping", involves setting up survey cameras that activate when movement is detected and automatically take multiple photographs of the area. Camera traps are not often used by reserve management due to their expense and the need for trained personnel to operate the equipment, but are becoming more common in tiger research due to their accuracy. Capture-recapture models are now commonly used in conjunction with tiger tracking. They not only measure population numbers, but also measure demographic parameters; a minimal numerical sketch of the simplest such estimator is given after this entry. This combination technique consists of camera traps and basic tiger searching to collect sufficient data. Once researchers and conservation biologists are able to gain knowledge of the population and its numbers, conservation efforts are put to work. Selection of initial focus areas is determined by the level of potential success once efforts are put into place. Factors determining success generally include size of protected area, biodiversity in the environment, number of tigers in the area, connectivity of the area to buffer zones, funding, and public and local community support.
These factors are just a few of the aspects of conservation that are weighed, but public and community support has proven to be one of the major factors that can determine the success or failure of a conservation project. Rewilding and reintroduction projects In 1978, the Indian conservationist Billy Arjan Singh attempted to rewild a captive-bred tigress in Dudhwa National Park. Soon after the release, numerous people were killed and eaten by a tigress that was subsequently shot. Government officials claimed it was Tara, though Singh disputed this. Further controversy broke out with the discovery that Tara was partly Siberian tiger. Tigers were reintroduced to Sariska Tiger Reserve in 2008 and to Panna Tiger Reserve in 2009. The organisation Save China's Tigers has attempted to rewild the South China tigers, with a breeding and training programme in a South African reserve known as Laohu Valley Reserve (LVR) and eventually reintroduce them to the wild of China. A future rewilding project was proposed for Siberian tigers set to be reintroduced to northern Russia's Pleistocene park. The Siberian tigers sent to Iran for a captive breeding project in Tehran are set to be rewilded and reintroduced to the Miankaleh peninsula, to replace the now extinct Caspian tigers. See also Tiger hunting Tiger poaching in India International Tiger Day References Cat conservation Wildlife conservation
Tiger conservation
[ "Biology" ]
4,168
[ "Wildlife conservation", "Biodiversity" ]
12,070,886
https://en.wikipedia.org/wiki/Self%20unloading%20trailer
A belt trailer or self unloading belt trailer is a semi-trailer that uses either a chain and flap assembly or a continuous belt that runs lengthwise along the bottom of the trailer. The belt is bolted to bars that in turn bolt to a chain that runs the length of the trailer. The belt is usually made of rubber, which allows it to grip the product; various widths are available depending on the manufacturer, generally ranging from 25 to 61 inches. A planetary gear drive, powered by a PTO-driven hydraulic pump, an electric motor, or a gas engine, cycles the belt. Products hauled in these trailers include but are not limited to bulk commodities, agricultural commodities, municipal waste, and construction debris. The asphalt paving industry also uses this type of trailer or truck-chassis-mounted unit to haul hot-mix asphalt from the batch plant to the job site. Aggregates used in road building are often hauled with these units. Belt trailers generally utilize a sloped side that in most cases is covered in plastic, which allows the product to slide down the sidewall onto the belt. Unload times for belt trailers are usually less than five minutes, and the driver does not need to climb into the trailer to sweep, as is required with a moving-floor trailer (more commonly referred to as a walking floor). Most belt trailers are made of aluminum, but some manufacturers feature steel and stainless steel construction that can be used in off-road conditions and for hauling corrosive materials. In 1974, the first belt trailer was made and patented by Trinity Trailer. Today, the company produces the EagleBridge—a frameless all-steel trailer—as well as the AGRI-FLEX, a belt trailer primarily focused on agricultural products such as chopped hay or silage. Other belt trailer manufacturers include Aulick, Hi-Way, Wilson, and Western. References Vehicle technology
Self unloading trailer
[ "Engineering" ]
363
[ "Vehicle technology", "Mechanical engineering by discipline" ]
12,070,901
https://en.wikipedia.org/wiki/Philosophia%20Botanica
Philosophia Botanica ("Botanical Philosophy", ed. 1, Stockholm & Amsterdam, 1751.) was published by the Swedish naturalist and physician Carl Linnaeus (1707–1778) who greatly influenced the development of botanical taxonomy and systematics in the 18th and 19th centuries. It is "the first textbook of descriptive systematic botany and botanical Latin". It also contains Linnaeus's first published description of his binomial nomenclature. Philosophia Botanica represents a maturing of Linnaeus's thinking on botany and its theoretical foundations, being an elaboration of ideas first published in his Fundamenta Botanica (1736) and Critica Botanica (1737), and set out in a similar way as a series of stark and uncompromising principles (aphorismen). The book also establishes a basic botanical terminology. The following principle §79 demonstrates the style of presentation and Linnaeus's method of introducing his ideas. A detailed analysis of the work is given in Frans Stafleu's Linnaeus and the Linnaeans, pp. 25–78. Binomial nomenclature To understand the objectives of the Philosophia Botanica it is first necessary to appreciate the state of botanical nomenclature at the time of Linnaeus. In accordance with the provisions of the present-day International Code of Nomenclature for algae, fungi and plants the starting point for the scientific names of plants effectively dates back to the list of species enumerated in Linnaeus's Species Plantarum, ed. 1, published 1 May 1753. The Species Plantarum was, for European scientists, a comprehensive global Flora for its day. Linnaeus had learned plant names as short descriptive phrases (polynomials) known as nomina specifica. Each time a new species was described the diagnostic phrase-names had to be adjusted, and lists of names, especially those including synonyms (alternative names for the same plant) became extremely unwieldy. Linnaeus's solution was to associate with the generic name an additional single word, what he termed the nomen triviale (which he first introduced in the Philosophia Botanica), to designate a species. Linnaeus emphasized that this was simply a matter of convenience, it was not to replace the diagnostic nomen specificum. But over time the nomen triviale became the "real" name and the nomen specificum became the Latin "diagnosis" that must, according to the rules of the International Code of Nomenclature, accompany the description of all new plant species: it was that part of the plant description distinguishing that particular species from all others. Linnaeus did not invent the binomial system but he was the person who provided the theoretical framework that lead to its universal acceptance. The second word of the binomial, the nomen triviale as Linnaeus called it, is now known as the specific epithet and the two words, the generic name and specific epithet together make up the species name. The binomial expresses both resemblance and difference at the same time – resemblance and relationship through the generic name: difference and distinctness through the specific epithet. Until 1753 polynomials served two functions, to provide: a) a simple designation (label) b) a means of distinguishing that entity from others (diagnosis). Linnaeus's major achievement was not binomial nomenclature itself, but the separation of the designatory and diagnostic functions of names, the advantage of this being noted in Philosophia Botanica principle §257. 
He did this by linking species names to descriptions and the concepts of other botanists as expressed in their literature – all set within a structural framework of carefully drafted rules. In this he was an exemplary proponent of the general encyclopaedic and systematizing effort of the 18th century. Historical context of Linnaean publications Systema Naturæ was Linnaeus's early attempt to organise nature. The first edition was published in 1735 and in it he outlines his ideas for the hierarchical classification of the natural world (the "system of nature") by dividing it into the animal kingdom (Regnum animale), the plant kingdom (Regnum vegetabile) and the "mineral kingdom" (Regnum lapideum), each of which he further divided into classes, orders, genera and species, with [generic] characters, [specific] differences, synonyms, and places of occurrence. The tenth edition of this book in 1758 has been adopted as the starting point for zoological nomenclature. The first edition of 1735 was just eleven pages long, but this expanded with further editions until the final thirteenth edition of 1767 had reached over 3000 pages. In the early eighteenth century colonial expansion and exploration created a demand for the description of thousands of new organisms. This highlighted difficulties in communication about plants, the replication of descriptions, and the importance of an agreed way of presenting, publishing and applying plant names. From about 1730, when Linnaeus was in his early twenties and still in Uppsala, Sweden, he planned a listing of all the genera and species of plants known to western science in his day. Before this could be achieved he needed to establish the principles of classification and nomenclature on which these works were to be based. The Dutch period From 1735 to 1738 Linnaeus worked in the Netherlands where he was personal physician to George Clifford (1685–1760), a wealthy Anglo-Dutch merchant–banker with the Dutch East India Company who had an impressive garden containing four large glasshouses that were filled with tropical and sub-tropical plants collected overseas. Linnaeus was enthralled by these collections and prepared a detailed systematic catalogue of the plants in the garden, which he published in 1738 as Hortus Cliffortianus ("in honour of Clifford's garden"). It was during this exceptionally productive period of his life that he published the works that were to lay the foundations for biological nomenclature. These were Fundamenta Botanica ("Foundations of botany", 1736), Bibliotheca Botanica ("Botanical bibliography", 1736), and Critica Botanica ("Critique of botany", 1737). He soon put his theoretical ideas into practice in his Genera Plantarum ("Genera of plants", 1737), Flora Lapponica ("Flora of Lapland", 1737), Classes Plantarum ("Plant classes", 1738), and Hortus Cliffortianus (1738). The ideas he explored in these works were revised until, in 1751, his developed thinking was finally published as Philosophia Botanica ("Science of botany"), released simultaneously in Stockholm and Amsterdam. Species plantarum With the foundations of plant nomenclature and classification now in place, Linnaeus then set about the monumental task of describing all the plants known in his day (about 5,900 species) and, with the publication of Species Plantarum in 1753, his ambitions of the 1730s were finally accomplished. Species Plantarum was his most acclaimed work and a summary of all his botanical knowledge. 
Here was a global Flora that codified the usage of morphological terminology, presented a bibliography of all the pre-Linnaean botanical literature of scientific importance, and first applied binomials to the plant kingdom as a whole. It presented his new 'sexual system' of plant classification and became the starting point for scientific botanical nomenclature for 6000 of the 10,000 species he estimated made up the world's flora. Here too, for the first time, the species, rather than the genus, becomes the fundamental taxonomic unit. Linnaeus defined species as "... all structures in nature that do not owe their shape to the conditions of the growth place and other occasional features." There was also the innovation of the now familiar nomen triviale (pl. nomina trivialia) of the binary name, although Linnaeus still regarded the real names as the differentiae specificae or "phrase names" which were linked with the nomen triviale and embodied the diagnosis for the species – although he was eventually to regard the trivial name (specific epithet) as one of his great inventions. Sketches of the book are known from 1733, and the final effort resulted in his temporary collapse. Fundamenta, Critica and Philosophia The Fundamenta Botanica ("The Foundations of Botany") of 1736 consisted of 365 aphorisms (principles) with principles 210–324 devoted to nomenclature. He followed this form of presentation in his other work on nomenclature. Linnaeus apparently regarded these as a "grammar and a syntax" for the study of botany. Chapters VII to X comprised principles 210 to 324 to do with the nomenclature of genera, species and varieties and how to treat synonyms. The Critica Botanica was an extension of these nomenclatural chapters of the Fundamenta. In the Critica Botanica, which was published a year later in July 1737, the principles of the Fundamenta are repeated essentially unchanged but with extensive additions in smaller print. It was this work, with its dogmatic, often amusing and provocative statements, that was to spread his ideas and enthrall intellects of the stature of Goethe. He was, however, dismissive of botanical work other than taxonomy and presented his principles as dogma rather than reasoned argument. These works established ground rules in a field which, at this time, had only "gentlemen's agreements" – conventions such as that no two genera should have the same name – backed by no universally agreed mechanisms. Genera Plantarum ran to five editions, the first in 1737 containing short descriptions of the 935 plant genera known at that time. Observing his own principle to keep generic names as short, euphonious, distinctive and memorable as possible, he rejected many names that had gone before, including those of his fellow botanists, which was not popular. In their place he used names that commemorated patrons, friends and fellow botanists as well as many names taken from Greek and Roman mythology. Historical assessment Linnaeus's system of classification follows the principles of Aristotelian logic, by which arranging subjects into classes is classification; distinguishing divisions of classes is logical division. The group to be divided is the genus; the parts into which it is divided are the species. The terms genus and species acquired their specialized biological usage from Linnaeus's predecessors, in particular Ray and Tournefort. 
There was also the question of whether plants should a) be put together or separated because they conform to a definition (essentialism) or b) put together with plants having similar characteristics generally, regardless of the definition (empiricism). Linnaeus was inclined to take the first approach, using the Method of Logical Division based on definition – what he called in Philosophia Botanica §152 the dispositio theoretica – but in practice he employed both methods. Botanical historian Alan Morton, though praising Linnaeus's contribution to classification and nomenclature, is less complimentary about the theoretical ideas expressed in the publications discussed above. Linnaean historian, chronicler, and analyst Frans Stafleu points out that Linnaeus's training and background were scholastic. He excelled in logic, "... which was almost certainly the Aristotelian and Thomistic logic generally taught in secondary schools all over Europe". Linnaeus's philosophical approach to classification is also noted by botanist David Frodin, who observed that applying the methodus naturalis to books and people as well as plants, animals and minerals was a mark of Linnaeus's 'scholastic' view of the world. Finally, Linnaean scholar William T. Stearn has also summarised Linnaeus's contribution to biology. Bibliographic details Full bibliographic details for Philosophia Botanica, including exact dates of publication, pagination, editions, facsimiles, brief outline of contents, location of copies, secondary sources, translations, reprints, manuscripts, travelogues, and commentaries, are given in Stafleu and Cowan's Taxonomic Literature. Footnote References Bibliography Frodin, David 2002. Guide to Standard Floras of the World, 2nd ed. Cambridge University Press: Cambridge. Hort, Arthur 1938. The “Critica Botanica” of Linnaeus. London (English translation): Ray Society. Podani, János and Szilágyi, András. 2016. Bad math in Linnaeus’ Philosophia Botanica. History and Philosophy of the Life Sciences 38:10. Stafleu, Frans A. 1971. Linnaeus and the Linnaeans: the Spreading of their Ideas in Systematic Botany, 1735–1789. Utrecht: International Association for Plant Taxonomy. . Stafleu, Frans A. and Cowan, Richard S. 1981. "Taxonomic Literature. A Selective Guide to Botanical Publications with dates, Commentaries and Types. Vol III: Lh–O." Regnum Vegetabile 105. Stearn, William T. 1959. "The Background of Linnaeus's Contributions to the Nomenclature and Methods of Systematic Biology". Systematic Zoology 8: 4–22. Stearn, William T. 1960. "Notes on Linnaeus's 'Genera Plantarum'". In Carl Linnaeus, Genera plantarum fifth edition 1754. Facsimile reprint Weinheim. Historiae Naturalis Classica 3. Stearn, William T. 1971. In Blunt, William. The Compleat Naturalist: a Life of Linnaeus. New York: Frances Lincoln. . Stearn, William T. 1983. Botanical Latin. London: David & Charles. . Stearn, William T. 1986. Linnaeus and his students. In "The Oxford Companion to Gardens". Jellicoe, Geoffrey et al. (eds). Oxford: Oxford University Press. . Van Den Broek, Gerard J. 1986. "Baleful Weeds and Precious-Juiced Flowers, a Semiotic Approach to Botanical Practice". Leiden. Works by Linnaeus 1751 non-fiction books 1751 in science 18th-century books in Latin Botanical nomenclature Botany books Carl Linnaeus Textbooks
Philosophia Botanica
[ "Biology" ]
2,837
[ "Botanical terminology", "Botanical nomenclature", "Biological nomenclature" ]
12,071,039
https://en.wikipedia.org/wiki/Nepafenac
Nepafenac, sold under the brand name Nevanac among others, is a nonsteroidal anti-inflammatory drug (NSAID), usually sold as a prescription eye drop 0.1% solution (Nevanac) or 0.3% solution (Ilevro). It is used to treat pain and inflammation associated with cataract surgery. Nepafenac is a prodrug of amfenac, an inhibitor of COX-1 and COX-2 activity. Medical uses Nepafenac is indicated for use in the treatment of pain and inflammation following cataract surgery. In the European Union nepafenac is also indicated for the reduction in the risk of postoperative macular edema associated with cataract surgery in people with diabetes. Pharmacology Mechanism of action Nepafenac is an NSAID, thought to be a prodrug of amfenac after conversion by ocular tissue hydrolases after penetration via the cornea. Amfenac, like other NSAIDs, is thought to inhibit cyclooxygenase action. Adverse events Side effects include headache; runny nose; pain or pressure in the face; nausea; vomiting; and dry, itchy, sticky eyes. Serious side effects include red or bloody eyes; foreign body sensation in the eye; sensitivity to light; decreased visual acuity; seeing specks or spots; teary eyes; or eye discharge or crusting. Regulatory Nevanac On February 25, 2005, Alcon filed a New Drug Application (NDA) with the U.S. Food and Drug Administration (FDA) for Nevanac 0.1%. Results from the two trials referenced in the NDA (Phase 2/3 study C-02-53; Phase 3 study C-03-32) have not been published. Study C-02-53 consisted of 228 patients across 10 centers in the United States. Study C-03-32 consisted of 522 patients across 22 centers in the United States. The efficacy results presented were confirmed in a study published in 2007. Nevanac was approved by the FDA on August 19, 2005, with application number 021–862. Ilevro An NDA for Ilevro was filed on December 15, 2011. In a one-month study, no new toxicities arose in the new formulation of nepafenac. Safety and efficacy information was derived from the previous Nevanac application. In June 2010, a confirmatory study began (Study C09055) consisting of over 2000 patients from 49 US sites and 37 European sites. A second phase 3 trial (Study C11003) was conducted in a population of 1,342 patients at 37 sites across the United States which failed to demonstrate superiority over Nevanac in an altered dosing regimen. Ilevro was approved by the FDA on October 16, 2012, with application number 203–491. Commercialization Both Nevanac and Ilevro are manufactured and sold by Alcon, Inc. Alcon is currently a division of Novartis International AG, which is primarily based out of Switzerland. Alcon, Inc. also holds locations in both Switzerland and the United States. The company has gone through several name changes, from Alcon Laboratories, Inc. to Alcon Universal, Ltd., to Alcon, Inc. Nevanac entered the market in 2005 as a product of Alcon, at the time a subsidiary of Nestlé. On April 6, 2008, Novartis agreed to purchase approximately 74 million shares of Alcon from Nestlé at $143.18 per share. On January 4, 2010, Novartis agreed to purchase all remaining shares of Alcon from Nestlé, totalling 156 million shares or 77% of the shares in the company. At the time of the purchase, a proposal for a merger under Swiss merger law was given to the Alcon board of directors. The merger was agreed upon on December 15, 2010, making Alcon "the second largest division within Novartis." The merger was completed on April 8, 2011. Ilevro was launched by Alcon on January 21, 2013. 
In 2014 and 2015, net sales by Alcon grew, driven in part by increased sales volume of Ilevro. That financial year, Novartis reported $18 billion in total financial debt. That figure has grown steadily since. In 2016, Novartis reported a total debt of $23.8 billion, up from the $21.9 billion reported in 2015 and the $20.4 billion reported in 2014. As of May 2017, Novartis is estimated to be worth $193.2 billion. On January 27, 2016, Alcon was moved to become a branch of the Innovative Medicines Division at Novartis. Early in 2016, Alcon formed agreements with both TrueVision and PowerVision, and acquired Transcend Medical. As of January 2017, Novartis is weighing options for Alcon within its business structure. Commercial risks Alcon saw declining growth in 2016, having faced challenges in the development and marketing of new products. Marketing Novartis maintains a detailing unit geared toward health professionals consisting of over 3,000 employees within the United States and an additional 21,000 worldwide. Novartis is also seeking to expand direct-to-consumer advertising and entrance into specialty product markets. Novartis also notes the influence of position and preference on US Centers for Medicare & Medicaid formularies in expanding their market value. Nepafenac, Nevanac, and Ilevro are all absent from the 2016 Annual Report issued by Novartis. Intellectual property There are currently seven U.S. patents filed that are directly associated with the modernized formulations of nepafenac, all stemming from Novartis. There are three patents associated with Nevanac that are still active and four associated with Ilevro. The earliest patent related to the modern formulations of nepafenac was approved on June 11, 2002, after being filed in 1999, by Bahram Asgharian. A patent was filed by Warren Wong, associated with Alcon, Inc., based in Fort Worth, Texas, on December 2, 2005, for aqueous suspensions of nepafenac. Another patent for a nepafenac-based drug was filed on May 8, 2006, by Geoffrey Owen, Amy Brooks, and Gustav Graff. A patent was filed by Masood A. Chowhan and Huagang Chen on February 9, 2007, and approved on May 24, 2011, followed closely by a patent filed by Warren Wong on September 23, 2010, and approved on December 6, 2011. Masood A. Chowhan, Malay Ghosh, Bahram Asgharian, and Wesley Wehsin Han filed another patent on December 1, 2010, which was approved on December 30, 2014. The most recent patent was filed by Masood A. Chowhan, Malay Ghosh, Bahram Asgharian, and Wesley Weshin Han on November 12, 2014, and approved on May 30, 2017. These patents are in effect until dates ranging between July 17, 2018, and March 31, 2032. Novartis also maintains patents on nepafenac in 26 countries outside the United States. References External links Acetamides 2-Aminobenzophenones Nonsteroidal anti-inflammatory drugs Drugs developed by Novartis Prodrugs Ophthalmology drugs
Nepafenac
[ "Chemistry" ]
1,533
[ "Chemicals in medicine", "Prodrugs" ]
12,071,235
https://en.wikipedia.org/wiki/Integrated%20multi-trophic%20aquaculture
Integrated multi-trophic aquaculture (IMTA) is a type of aquaculture where the byproducts, including waste, from one aquatic species are used as inputs (fertilizers, food) for another. Farmers combine fed aquaculture (e.g., fish, shrimp) with inorganic extractive (e.g., seaweed) and organic extractive (e.g., shellfish) aquaculture to create balanced systems for environment remediation (biomitigation), economic stability (improved output, lower cost, product diversification and risk reduction) and social acceptability (better management practices). Selecting appropriate species and sizing the various populations to provide necessary ecosystem functions allows the biological and chemical processes involved to achieve a stable balance, mutually benefiting the organisms and improving ecosystem health. Ideally, the co-cultured species each yield valuable commercial "crops". IMTA can synergistically increase total output, even if some of the crops yield less than they would, short-term, in a monoculture. Terminology and related approaches "Integrated" refers to intensive and synergistic cultivation, using water-borne nutrient and energy transfer. "Multi-trophic" means that the various species occupy different trophic levels, i.e., different (but adjacent) links in the food chain. IMTA is a specialized form of the age-old practice of aquatic polyculture, which was the co-culture of various species, often without regard to trophic level. In this broader case, the organisms may share biological and chemical processes that may be minimally complementary, potentially leading to reduced production of both species due to competition for the same food resource. However, some traditional systems such as polyculture of carps in China employ species that occupy multiple niches within the same pond, or the culture of fish that is integrated with a terrestrial agricultural species, can be considered forms of IMTA. The more general term "Integrated Aquaculture" is used to describe the integration of monocultures through water transfer between the culture systems. The terms "IMTA" and "integrated aquaculture" differ primarily in their precision and are sometimes interchanged. Aquaponics, fractionated aquaculture, integrated agriculture-aquaculture systems, integrated peri-urban-aquaculture systems, and integrated fisheries-aquaculture systems are all variations of the IMTA concept. Range of approaches Today, low-intensity traditional/incidental multi-trophic aquaculture is much more common than modern IMTA. Most are relatively simple, such as fish, seaweed or shellfish. True IMTA can be land-based, using ponds or tanks, or even open-water marine or freshwater systems. Implementations have included species combinations such as shellfish/shrimp, fish/seaweed/shellfish, fish/seaweed, fish/shrimp and seaweed/shrimp. IMTA in open water (offshore cultivation) can be done by the use of buoys with lines on which the seaweed grows. The buoys/lines are placed next to the fishnets or cages in which the fish grows. In some tropical Asian countries some traditional forms of aquaculture of finfish in floating cages, nearby fish and shrimp ponds, and oyster farming integrated with some capture fisheries in estuaries can be considered a form of IMTA. Since 2010, IMTA has been used commercially in Norway, Scotland, and Ireland. In the future, systems with other components for additional functions, or similar functions but different size brackets of particles, are likely. Multiple regulatory issues remain open. 
Modern history of land-based systems Ryther and co-workers created modern, integrated, intensive, land-based mariculture. They originated, both theoretically and experimentally, the integrated use of extractive organisms—shellfish, microalgae and seaweeds—in the treatment of household effluents, and documented the approach with quantitative results. A domestic wastewater effluent, mixed with seawater, was the nutrient source for phytoplankton, which in turn became food for oysters and clams. They cultivated other organisms in a food chain rooted in the farm's organic sludge. Dissolved nutrients in the final effluent were filtered by seaweed (mainly Gracilaria and Ulva) biofilters. The value of the original organisms grown on human waste effluents was minimal. In 1976, Huguenin proposed adaptations to the treatment of intensive aquaculture effluents in both inland and coastal areas. Tenore followed by adding carnivorous fish and the macroalgivore abalone to their system. In 1977, Hughes-Games described the first practical marine fish/shellfish/phytoplankton culture, followed by Gordin, et al., in 1981. By 1989, a semi-intensive (1 kg fish/m³) seabream and grey mullet pond system by the Gulf of Aqaba (Eilat) on the Red Sea supported dense diatom populations, excellent for feeding oysters. Hundreds of kilos of fish and oysters cultured here were sold. Researchers also quantified the water quality parameters and nutrient budgets in (5 kg fish/m³) green water seabream ponds. The phytoplankton generally maintained reasonable water quality and converted on average over half the waste nitrogen into algal biomass. Experiments with intensive bivalve cultures yielded high bivalve growth rates. This technology supported a small farm in southern Israel. Sustainability IMTA promotes economic and environmental sustainability by converting byproducts and uneaten feed from fed organisms into harvestable crops, thereby reducing eutrophication and increasing economic diversification. Properly managed multi-trophic aquaculture accelerates growth without detrimental side-effects. This increases the site's ability to assimilate the cultivated organisms, thereby reducing negative environmental impacts. IMTA enables farmers to diversify their output by replacing purchased inputs with byproducts from lower trophic levels, often without new sites. Initial economic research suggests that IMTA can increase profits and can reduce financial risks due to weather, disease and market fluctuations. Over a dozen studies have investigated the economics of IMTA systems since 1985. Nutrient flow Typically, carnivorous fish or shrimp occupy IMTA's higher trophic levels. They excrete soluble ammonia and phosphorus (orthophosphate). Seaweeds and similar species can extract these inorganic nutrients directly from their environment. Fish and shrimp also release organic nutrients which feed shellfish and deposit feeders. Species such as shellfish that occupy intermediate trophic levels often play a dual role, both filtering organic bottom-level organisms from the water and generating some ammonia. Waste feed may also provide additional nutrients, either by direct consumption or via decomposition into individual nutrients. In some projects, the waste nutrients are also gathered and reused in the food given to the fish in cultivation. This can happen by processing the seaweed grown into food. 
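To make the nutrient-flow idea above concrete, the following is a minimal, purely illustrative nitrogen mass-balance sketch in Python. Every partition fraction in it is a hypothetical placeholder rather than a measured value; real budgets depend strongly on species, site, season and management, as the recovery-efficiency discussion below makes clear.

```python
# Purely illustrative nitrogen budget for a hypothetical IMTA setup.
# All fractions are invented placeholders, not measured values.

feed_n = 100.0  # kg of nitrogen entering the system in feed

retained_by_fish   = 0.30 * feed_n  # N harvested in the fed species (fish/shrimp)
dissolved_excreted = 0.45 * feed_n  # soluble ammonia/orthophosphate-type losses
particulate_waste  = 0.25 * feed_n  # uneaten feed and faeces

# Extractive species recapture part of each waste stream (hypothetical rates).
seaweed_uptake   = 0.50 * dissolved_excreted  # inorganic extractive, e.g. kelp
shellfish_uptake = 0.40 * particulate_waste   # organic extractive, e.g. mussels

recovered = retained_by_fish + seaweed_uptake + shellfish_uptake
print(f"Nitrogen reaching harvestable crops: {recovered:.1f} kg "
      f"({100 * recovered / feed_n:.0f}% of input)")
```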
Recovery efficiency Nutrient recovery efficiency is a function of technology, harvest schedule, management, spatial configuration, production, species selection, trophic level biomass ratios, natural food availability, particle size, digestibility, season, light, temperature, and water flow. Since these factors significantly vary by site and region, recovery efficiency also varies. In a hypothetical family-scale fish/microalga/bivalve/seaweed farm, based on pilot-scale data, at least 60% of nutrient input reached commercial products, nearly three times more than in modern net pen farms. Expected average annual yields of this hypothetical system were reported for seabream, bivalves and seaweeds. These results required precise water quality control and attention to suitability for bivalve nutrition, due to the difficulty in maintaining consistent phytoplankton populations. Seaweeds' nitrogen uptake efficiency ranges from 2% to 100% in land-based systems. Uptake efficiency in open-water IMTA is unknown. Food safety and quality Feeding the wastes of one species to another has the potential for contamination, although this has yet to be observed in IMTA systems. Mussels and kelp growing adjacent to Atlantic salmon cages in the Bay of Fundy have been monitored since 2001 for contamination by medicines, heavy metals, arsenic, PCBs and pesticides. Concentrations are consistently either non-detectable or well below regulatory limits established by the Canadian Food Inspection Agency, the United States Food and Drug Administration and European Community Directives. Taste testers indicate that these mussels are free of "fishy" taste and aroma and could not distinguish them from "wild" mussels. The mussels' meat yield is significantly higher, reflecting the increase in nutrient availability. Recent findings suggest mussels grown adjacent to salmon farms are advantageous for winter harvest because they maintain high meat weight and condition index (meat to shell ratio). This finding is of particular interest because the Bay of Fundy, where this research was conducted, produces low condition index mussels during winter months in monoculture situations, and seasonal presence of paralytic shellfish poisoning (PSP) typically restricts mussel harvest to the winter months. Selected projects Historic and ongoing research projects include: Asia Japan, China, South Korea, Thailand, Vietnam, Indonesia, Bangladesh, etc. have co-cultured aquatic species for centuries in marine, brackish and fresh water environments. Fish, shellfish and seaweeds have been cultured together in bays, lagoons and ponds. Trial and error has improved integration over time. The proportion of Asian aquaculture production that occurs in IMTA systems is unknown. After the 2004 tsunami, many of the shrimp farmers in Aceh Province of Indonesia and Ranong Province of Thailand were trained in IMTA. This has been especially important as the mono-culture of marine shrimp was widely recognized as unsustainable. Production of tilapia, mud crabs, seaweeds, milkfish, and mussels has been incorporated. AquaFish Collaborative Research Support Program Canada Bay of Fundy Industry, academia and government are collaborating here to expand production to commercial scale. The current system integrates Atlantic salmon, blue mussels and kelp; deposit feeders are under consideration. AquaNet (one of Canada's Networks of Centres of Excellence) funded phase one. The Atlantic Canada Opportunities Agency is funding phase two. 
The project leaders are Thierry Chopin (University of New Brunswick in Saint John) and Shawn Robinson (Department of Fisheries and Oceans, St. Andrews Biological Station). Pacific SEA-lab Pacific SEA-lab is researching and is licensed for the co-culture of sablefish, scallops, oysters, blue mussels, urchins and kelp. "SEA" stands for Sustainable Ecological Aquaculture. The project aims to balance four species. The project is headed by Stephen Cross under a British Columbia Innovation Award at the University of Victoria Coastal Aquaculture Research & Training (CART) network. Chile The i-mar Research Center at the Universidad de Los Lagos, in Puerto Montt is working to reduce the environmental impact of intensive salmon culture. Initial research involved trout, oysters and seaweeds. Present research is focusing on open waters with salmon, seaweeds and abalone. The project leader is Alejandro Buschmann. Israel SeaOr Marine Enterprises Ltd. SeaOr Marine Enterprises Ltd., which operated for several years on the Israeli Mediterranean coast, north of Tel Aviv, cultured marine fish (gilthead seabream), seaweeds (Ulva and Gracilaria) and Japanese abalone. Its approach leveraged local climate and recycled fish waste products into seaweed biomass, which was fed to the abalone. It also effectively purified the water sufficiently to allow the water to be recycled to the fishponds and to meet point-source effluent environmental regulations. PGP Ltd. PGP Ltd. is a small farm in Southern Israel. It cultures marine fish, microalgae, bivalves and Artemia. Effluents from seabream and seabass collect in sedimentation ponds, where dense populations of microalgae—mostly diatoms—develop. Clams, oysters and sometimes Artemia filter the microalgae from the water, producing a clear effluent. The farm sells the fish, bivalves and Artemia. The Netherlands In the Netherlands, Willem Brandenburg of UR Wageningen (Plant Sciences Group) has established the first seaweed farm in the Netherlands. The farm is called "De Wierderij" and is used for research. South Africa Three farms grow seaweeds for feed in abalone effluents in land-based tanks. Up to 50% of re-circulated water passes through the seaweed tanks. Somewhat uniquely, neither fish nor shrimp comprise the upper trophic species. The motivation is to avoid over-harvesting natural seaweed beds and red tides, rather than nutrient abatement. These commercial successes developed from research collaboration between Irvine and Johnson Cape Abalone and scientists from the University of Cape Town and the University of Stockholm. United Kingdom The Scottish Association for Marine Science, in Oban is developing co-cultures of salmon, oysters, sea urchins, and brown and red seaweeds via several projects. Research focuses on biological and physical processes, as well as production economics and implications for coastal zone management. Researchers include: M. Kelly, A. Rodger, L. Cook, S. Dworjanyn, and C. Sanderson. Bangladesh Indian carps and stinging catfish are cultured in Bangladesh, but the methods could be more productive. The pond and cage cultures used are based only on the fish. They don't take advantage of the productivity increases that could take place if other trophic levels were included. Expensive artificial feeds are used, partly to supply the fish with protein. These costs could be reduced if freshwater snails, such as Viviparus bengalensis, were simultaneously cultured, thus increasing the available protein. 
The organic and inorganic wastes produced as a byproduct of culturing could also be minimized by integrating freshwater snails and aquatic plants such as water spinach, respectively. See also Agribusiness Extensive farming Factory farming Genetically modified organism History of agriculture Industrial agriculture Industrial agriculture (animals) Industrial agriculture (crops) Intensive farming Organic farming Sustainable agriculture Zero waste agriculture Notes References Neori A, Troell M, Chopin T, Yarish C, Critchley A and Buschmann AH. 2007. The need for a balanced ecosystem approach to blue revolution aquaculture. Environment 49(3): 36–43. External links AquaNet IMTA www.sams.ac.uk World Aquaculture Conference 2007: IMTA session Chopin lab The Comparative Roles of Suspension-Feeders in Ecosystems The use of bivalves as biofilters and valuable product in land based aquaculture systems - review. Seaweed Resources of the World Algae: key for sustainable mariculture. Ecological and Genetic Implications of Aquaculture Activities Evaluation of macroalgae, microalgae, and bivalves as biofilters in sustainable land-based mariculture systems. Hydrography Physical oceanography Aquaculture
Integrated multi-trophic aquaculture
[ "Physics", "Environmental_science" ]
3,128
[ "Hydrography", "Hydrology", "Applied and interdisciplinary physics", "Physical oceanography" ]
12,071,278
https://en.wikipedia.org/wiki/Eta2%20Hydri%20b
Eta2 Hydri b (η2 Hyi b, η2 Hydri b), commonly known as HD 11977 b, is an extrasolar planet that is approximately 217 light-years away in the constellation of Hydrus. The presence of the planet around an intermediate-mass giant star provides indirect evidence for the existence of planetary systems around A-type stars. References External links Hydrus Giant planets Exoplanets discovered in 2005 Exoplanets detected by radial velocity
Eta2 Hydri b
[ "Astronomy" ]
117
[ "Hydrus", "Constellations" ]
12,071,305
https://en.wikipedia.org/wiki/UNITYPER
The UNITYPER was an input device for the UNIVAC I computer manufactured by Remington Rand, which went on sale in mid-1951 but was not in operation until June 1952. It was an early direct data entry system. The UNITYPER accepted user inputs on a keyboard of a modified Remington typewriter, then wrote that data onto a metal magnetic tape using an integral tape drive. The UNITYPER II, released in 1953, was a reduced-size, reduced-cost version of the UNITYPER I, subsequently developed as a text-to-tape transcribing device for the UNIVAC I system and also sold as a peripheral to the UNIVAC II. The original required individual motors and control amplifiers to advance, rewind, fast-forward and maintain tension on the tape. UNITYPER II replaced these with a flexible cable and clutch system driven by a single motor within the typewriter. Coding was accomplished via mechanical lift arms and latching bails added to the typewriter's existing mechanical linkages in place for print-action. When a key was depressed, up to 8 associated lift arms were "caught" on latching bails, which in turn connected 8 coding switches to the recording head. A commutator, powered by the internal drive motor, would momentarily complete the power circuit through the coding switches to the recording head before advancing the tape to the next recording position. When not encoding, a resistor balance network kept the recording head in an erase mode unless a rewind operation was commanded. This ensured a clearly defined magnetic space between bit patterns. Additional circuits prevented opening of the tape loading door once a tape was loaded. Because the supply and take-up spools of the recording tape were no longer individually powered as in the UNITYPER I, a "differential moment" was created as the tape moved from one reel to the other during encoding, the supply reel constantly decreasing in effective diameter while the take-up reel correspondingly increased. A differential spring, ratcheting escapement, and slip clutches were added as a mechanical solution, which functioned also during backspacing and rewinding. References External links UNITYPER II, Data Entry Device for the Univac Computer Smithsonian National Museum of American History UNIVAC II (PDF) Has photo of UNITYPER II UNIVAC hardware
UNITYPER
[ "Technology" ]
479
[ "Computing stubs", "Computer hardware stubs" ]
12,071,862
https://en.wikipedia.org/wiki/UNIVAC%20High%20speed%20printer
The UNIVAC High speed printer read metal UNIVAC magnetic tape using a UNISERVO tape drive and printed the data from the tape at 600 lines per minute. Each line could contain 130 characters in its fixed-width font. External links UNIVAC II (PDF) Has photo of High speed printer UNIVAC hardware
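As a rough sense of the throughput those figures imply, the following back-of-the-envelope calculation simply multiplies the stated line rate by the stated line width; it is an upper bound, since it ignores inter-record gaps and any tape-handling overhead.

```python
lines_per_minute = 600
chars_per_line = 130

chars_per_minute = lines_per_minute * chars_per_line
print(chars_per_minute, "characters per minute")       # 78000
print(chars_per_minute / 60, "characters per second")  # 1300.0
```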
UNIVAC High speed printer
[ "Technology" ]
68
[ "Computing stubs", "Computer hardware stubs" ]
12,072,846
https://en.wikipedia.org/wiki/Jugend%20forscht
Jugend forscht (literal translation: “Youth researches”) is a German youth science competition. With more than 10,000 participants annually, it is the biggest youth science and technology competition in Europe. It was initiated in 1965 by Henri Nannen, then editor-in-chief of the Stern magazine. Participants work on a self-chosen research project, hand in a written report about their work, and then present their results first at regional levels and later at a national contest to an expert jury, usually in the form of a poster session, often including a practical demonstration. Contest juries often invite university or industry experts to referee some of the projects, especially at the national contest, due to a high level of specialization. Participants can enter in one of seven subject groups: Biology Chemistry Geosciences and Astronomy Mathematics and Computer Science Physics Technology Work environment Participants must not be older than 21 years and can enter the competition either on their own or in teams of up to three. University students are only allowed to participate during their first year of study. Participants younger than 15 years compete in a separate contest called “Schüler experimentieren” (“Pupils experiment”). Winners receive prizes donated by industrial sponsors. At the national level, one project in each of the subject groups is selected as the national winner each year. In addition, there is a special prize for the best interdisciplinary project by the German Research Foundation, as well as additional special prizes for particularly distinguished projects by the President of Germany and the Chancellor of Germany. Some of the winning projects are nominated for the European Union Contest for Young Scientists, and all winners are nominated for the Studienstiftung des deutschen Volkes. External links Jugend forscht website (German) English Information on jugend forscht website Science competitions Youth science Education competitions in Germany
Jugend forscht
[ "Technology" ]
372
[ "Science and technology awards", "Science competitions" ]
12,073,631
https://en.wikipedia.org/wiki/Satraplatin
Satraplatin (INN, codenamed JM216) is a platinum-based antineoplastic agent that was under investigation as a treatment of patients with advanced prostate cancer who have failed previous chemotherapy. It has not yet received approval from the U.S. Food and Drug Administration. First mentioned in the medical literature in 1993, satraplatin is the first orally active platinum-based chemotherapeutic drug; other available platinum analogues—cisplatin, carboplatin, and oxaliplatin—must be given intravenously. The drug has also been used in the treatment of lung and ovarian cancers. The proposed mode of action is that the compound binds to the DNA of cancer cells, rendering them incapable of dividing. Mode of action The proposed mode of action is that the compound binds to the DNA of cancer cells, rendering them incapable of dividing. In addition, some cisplatin-resistant tumour cell lines were sensitive to satraplatin treatment in vitro. This may be due to an altered mechanism of cellular uptake (satraplatin by passive diffusion instead of active transport for e.g. cisplatin). Clinical development Satraplatin has been developed for the treatment of men with castrate-refractory, metastatic prostate cancer for several reasons: its relative ease of administration, potential lack of cross-resistance with other platinum agents, clinical benefits seen in early studies of prostate cancer, and an unmet need in this patient population after docetaxel failure at that time. The only Phase III trial with satraplatin (SPARC Trial) was conducted in pretreated metastatic castrate-resistant prostate cancer (CRPC), revealing a 33% reduction in risk of progression or death versus a placebo. However, no difference in overall survival was observed. An FDA or EMA-approved indication has not yet been achieved. Satraplatin appears to have clinical activity against a variety of malignancies such as breast, prostate and lung cancer. It appears to have particularly good efficacy in combination with radiotherapy for lung and squamous head and neck cancer. In a phase I study from Vanderbilt University, seven of eight patients with squamous cell carcinoma of the head and neck, who were treated with 10 to 30 mg of satraplatin three times a week concurrently with radiotherapy, achieved a complete response. Side effects Satraplatin is similar in toxicity profile to carboplatin, with no nephrotoxicity, neurotoxicity, or ototoxicity observed. Moreover, it is much better tolerated than cisplatin and does not require hydration for each dose. A somewhat more intense hematotoxicity is observed. Side effects include anemia, diarrhea, constipation, nausea or vomiting, increased risk of infection, and bruising. Possible risks and complications Thrombus: Cancer can increase the risk of developing a blood clot, and chemotherapy may increase this risk further. A blood clot may cause symptoms such as pain, redness and swelling in a leg, or breathlessness and chest pain. Most clots can be treated with drugs that thin the blood. Fertility: Satraplatin can affect a person's ability to become pregnant and may cause sterility in men. Contraception: Satraplatin may harm a developing baby; it is important to use effective contraception while taking this drug and for at least a few months afterwards. Detailed mechanism of action Many human tumors, including testicular, bladder, lung, head, neck, and cervical cancers, have been treated with platinum compounds. 
That all of the marketed platinum analogues must be administered via intravenous infusion is one of the main disadvantages of these platinum compounds, together with their severe, dose-limiting effects. An acquired resistance to cisplatin/carboplatin in ovarian cancer was discovered due to insufficient amounts of platinum reaching the target DNA or failure to achieve cell death. These drawbacks led to the development of the next generation of platinum analogues such as satraplatin. Satraplatin is a prodrug, meaning it is metabolized in the body and transformed into its working form. The two polar acetate groups on satraplatin increase the drug's bioavailability, which in turn allows for a large fraction of the administered dose to make it into the bloodstream where metabolism begins. Once the molecule reaches the bloodstream, the drug loses its acetate groups. At this point the drug is structurally similar to cisplatin, with the exception of one cyclohexylamine group in place of an amine group. Since the drug is now structurally similar to cisplatin, its mechanism of action is also very similar. The chlorine atoms are displaced and the platinum atom in the drug binds to guanine residues in DNA. This unfortunately happens not only to cancer cells but to other normally functioning cells as well, causing some of the harsh side effects. By binding to guanine residues, satraplatin inhibits DNA replication and transcription, which leads to subsequent apoptosis. Where satraplatin differs is its cyclohexylamine group. In cisplatin the two amine groups are symmetrical, while satraplatin's cyclohexylamine group makes it asymmetrical, which contributes to some of the drug's special properties. A large problem with cisplatin and other platinum-based anti-cancer drugs is that the body can develop resistance to them. A major way that this happens is through a mammalian nucleotide excision repair pathway, which repairs damaged DNA. However, some studies show that satraplatin, compared to other platinum anti-cancer drugs, can be elusive and is not recognized by the DNA repair proteins due to the different adducts on the molecule (cyclohexylamine). Since satraplatin is not recognized by the DNA repair proteins, the DNA remains damaged, the DNA cannot be replicated, the cell dies, and the problem of resistance is solved. In vitro experiments have shown that satraplatin is more effective in well-defined hematological cancers than cisplatin. MTAP deficiency and Bcl-2 gene mutation were identified as biomarkers of enhanced efficacy. References Coordination complexes Platinum(IV) compounds Acetates Platinum-based antineoplastic agents Ammine complexes
Satraplatin
[ "Chemistry" ]
1,304
[ "Coordination chemistry", "Coordination complexes" ]
12,074,089
https://en.wikipedia.org/wiki/Komatsu%20930E
The Komatsu 930E is an off-highway, ultra class, rigid frame, two-axle, diesel/AC electric powertrain haul truck designed and manufactured by Komatsu in Peoria, Illinois, United States. Although the 930E is neither Komatsu's largest nor highest payload capacity haul truck, Komatsu considers the 930E to be the flagship of their haul truck product line. The 930E is the best-selling ultra class haul truck in the world. As of September 2016, Komatsu has sold 1,900 units of the 930E. The current model, the 930E-5, offers a payload capacity of up to . Public Debut The 930E was introduced in Morenci, Arizona, in May 1995 with a payload capacity of up to . Innovations The 930E was the first two-axle, six-tire haul truck to be offered with a payload capacity in excess of , making it the world's first regular production "ultra class" haul truck. The 930E remained the world's largest, highest payload capacity haul truck until the September 1998 debut of the Caterpillar 797. Prior to the introduction of the 930E, diesel/electric haul trucks employed AC from an electric alternator that was rectified to power DC traction motors at the rear wheels. The 930E was the first haul truck to employ two AC electric traction motors. The diesel/electric AC powertrain is more efficient, offers better operating characteristics and is more cost effective than a comparable DC powertrain. Product Improvements In 1996, the 930E-2 debuted, offering an increased payload capacity by adding larger Bridgestone 50-80R57 radial tires. In 2000, at MINExpo International, Komatsu debuted the 930E-2SE featuring a Komatsu SSDA18V170 V-18, twin-turbocharged, diesel engine developed by Industrial Power Alliance, a joint venture between Komatsu and Cummins. This is the same engine that powers the larger Komatsu 960E-1 and allows the 930E-2SE to operate at elevations up to without deration. On December 15, 2003, Komatsu introduced the 930E-3, powered by a Komatsu SSDA16V160 V-16 engine and a GDY106 traction motor on each side of the rear axle. The current models are the 930E-4 with a Komatsu SSDA16V160 V-16 diesel engine and the 930E-4SE with a Komatsu SSDA18V170 V-18 diesel engine. Assembly All Komatsu electric drive haul trucks, including the 930E, are manufactured at Komatsu America Corp's Peoria Manufacturing Operation located at 2300 NE Adams Street in Peoria, Illinois, USA. Standing in the Komatsu model lineup The 930E was the largest, highest capacity haul truck in Komatsu's model lineup prior to the May 27, 2008 introduction of the higher-capacity 960E-1. The 930E-4 and 930E-4SE are now the second-highest payload capacity haul trucks in Komatsu's lineup, although the 930E-4SE uses the same Komatsu SSDA18V170 V-18 engine as the 960E-1. Specifications See also Haul truck Komatsu Limited References External links Komatsu 930E-4 Product Brochure (PDF) Komatsu 930E-4 Website - Komatsu America Corp. Komatsu 930E-4SE Product Brochure (PDF) Komatsu 930E-4SE Website - Komatsu America Corp. Komatsu 930E-3 Product Brochure (PDF) Komatsu 930E-2 Product Brochure (PDF) Haul trucks Hybrid trucks Komatsu vehicles Vehicles introduced in 1995
Komatsu 930E
[ "Engineering" ]
810
[ "Engineering vehicles", "Komatsu vehicles", "Mining equipment", "Haul trucks" ]
12,074,499
https://en.wikipedia.org/wiki/Ester%20pyrolysis
Ester pyrolysis in organic chemistry is a vacuum pyrolysis reaction converting esters containing a β-hydrogen atom into the corresponding carboxylic acid and the alkene. The reaction is an Ei elimination and operates in a syn fashion. Examples include the synthesis of acrylic acid from ethyl acrylate at 590 °C, the synthesis of 1,4-pentadiene from 1,5-pentanediol diacetate at 575 °C, or the construction of a cyclobutene framework at 700 °C. References Organic reactions
Ester pyrolysis
[ "Chemistry" ]
119
[ "Organic reactions" ]
12,074,649
https://en.wikipedia.org/wiki/NFPA%2070B
NFPA 70B (Standard for Electrical Equipment Maintenance) is a standard of the National Fire Protection Association that addresses preventive maintenance for electrical, electronic, and communication systems and equipment—such as those used in industrial plants, institutional and commercial buildings, and large multi-family residential complexes—to prevent equipment failures and worker injuries. NFPA 70B is a component of the electrical cycle of safety, which includes NFPA 70 and NFPA 70E. Purpose This recommended practice applies to preventive maintenance for electrical, electronic, and communication systems and equipment and is not intended to duplicate or supersede instructions that manufacturers normally provide. Systems and equipment covered are typical of those installed in industrial plants, institutional and commercial buildings, and large multifamily residential complexes. History On January 1, 2023, NFPA 70B transitioned from the Recommended Practice for Electrical Equipment Maintenance to the Standard for Electrical Equipment Maintenance. References Related NFPA standards NFPA 70 — National Electrical Code (NEC) NFPA 70E — Standard for Electrical Safety in the Workplace External links NFPA 70B: Standard for Electrical Equipment Maintenance Firefighting in the United States Safety codes Safety organizations Electrical standards NFPA Standards
NFPA 70B
[ "Physics" ]
245
[ "Physical systems", "Electrical standards", "Electrical systems" ]
12,074,745
https://en.wikipedia.org/wiki/IEEE%201584
IEEE Std 1584-2018 (Guide for Performing Arc-Flash Hazard Calculations) is a standard of the Institute of Electrical and Electronics Engineers that provides a method of calculating the incident energy of an arc-flash event. Purpose IEEE 1584-2018 is an update to IEEE 1584-2002 and was developed to help protect people from arc-flash hazards. The predicted arc current and incident energy are used in selecting appropriate overcurrent protective devices and personal protective equipment (generally abbreviated as PPE), as well as in defining a safe working distance. Since the magnitude of the arc current is inherently linked with the degree of arc hazard, the arc is examined as a circuit parameter. Furthermore, since estimations are often useful, simple equations for predicting ballpark arc current, arc power, and incident energy values and probable ranges are presented in this work. Procedure Arc-flash hazard calculations are currently carried out in most industrial plants due to OSHA regulations. The IEEE 1584 empirically derived model accurately accounts for a wide variety of setup parameters including: Voltages in the range of 208–15,000 V, three-phase. Frequencies of 50 Hz to 60 Hz. Bolted fault current in the range of 700–106,000 A. Grounded or ungrounded. Equipment enclosures of commonly available sizes with various conductor configurations, or open air. Gaps between conductors. Faults involving three phases. For cases where the voltage is over 15 kV or the gap is outside the range of the model, the theoretically derived Lee method can be applied. IEEE 1584.1 is a guide published in July 2022 for the specification of requirements for an Arc Flash Hazard Calculation study in accordance with the IEEE 1584 Standard. References Electrical safety IEEE standards
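As an illustration of the kind of calculation involved, here is a minimal sketch of the theoretically derived Lee method mentioned above, using the commonly cited form of its incident-energy equation, E = 2.142 × 10^6 · V · I_bf · (t / D^2), with E in J/cm², V in kV, I_bf in kA, t in seconds and D in mm. The input numbers in the example are hypothetical, and a real arc-flash study must follow the full IEEE 1584 procedure with appropriate engineering judgment.

```python
def lee_incident_energy(v_kv, i_bf_ka, t_s, d_mm):
    """Lee-method incident-energy estimate (commonly cited form).

    v_kv    -- system voltage in kV
    i_bf_ka -- bolted fault current in kA
    t_s     -- arcing time in seconds
    d_mm    -- working distance in mm
    Returns incident energy in J/cm^2 (divide by 4.184 for cal/cm^2).
    """
    return 2.142e6 * v_kv * i_bf_ka * t_s / (d_mm ** 2)

# Hypothetical example: 13.8 kV system, 20 kA bolted fault current,
# 0.2 s clearing time, 910 mm (about 36 in) working distance.
e_j = lee_incident_energy(13.8, 20.0, 0.2, 910.0)
print(f"{e_j:.1f} J/cm^2 = {e_j / 4.184:.1f} cal/cm^2")
```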
IEEE 1584
[ "Technology" ]
346
[ "Computer standards", "IEEE standards" ]
12,074,874
https://en.wikipedia.org/wiki/Industry%20Technology%20Facilitator
Industry Technology Facilitator (ITF) is an oil industry trade organisation established in 1999. It is owned by 30 global oil majors and oilfield service companies. The group has offices in Aberdeen, UK, Houston, USA, Abu Dhabi, UAE, Perth, Australia and Kuala Lumpur, Malaysia. Members ITF currently has a membership of 30 global operators and service companies including: Aramco Services Company BG Group BP Chevron ConocoPhillips DONG Energy ENI EnQuest ExxonMobil GE Oil and Gas Kuwait Oil Company Maersk Marathon Oil Corporation Nexen Petrofac Petronas Petroleum Development Oman Premier Oil PTTEP QatarEnergy Schlumberger Shell Siemens Statoil Technip Total Tullow Oil Weatherford Wintershall Wood Group Woodside Energy Awards and recognition Alick Buchanan Smith Spirit of Enterprise: 2009 Winner – ITF Investors in People - Gold Scottish Offshore Achievement Awards: 2009 Rising Star Winner - Ryan McPherson IoD Scotland - Emerging Director Finalist: Neil Poxon The topics addressed by ITF-sponsored technologies include seismic resolution, complex reservoirs, cost-effective drilling and intervention, subsea, maximising production, integrity management, and environmental performance. References External links ITF official website ITF Single Strategy Offshore Industry Energy KTN Petroleum organizations Organizations established in 1999 Energy business associations Organisations based in Scotland 1999 establishments in Scotland
Industry Technology Facilitator
[ "Chemistry", "Engineering" ]
274
[ "Petroleum", "Petroleum organizations", "Energy organizations" ]
12,075,392
https://en.wikipedia.org/wiki/Computer%20compatibility
A family of computer models is said to be compatible if certain software that runs on one of the models can also be run on all other models of the family. The computer models may differ in performance, reliability or some other characteristic. These differences may affect the outcome of the running of the software. Software compatibility Software compatibility can refer to the ability of a particular piece of software to run on a particular CPU architecture such as Intel or PowerPC. Software compatibility can also refer to the ability of the software to run on a particular operating system. Compiled software is rarely compatible with multiple different CPU architectures. Normally, an application is compiled separately for different CPU architectures and operating systems to make it compatible with the different systems. Interpreted software, on the other hand, can normally run on many different CPU architectures and operating systems if the interpreter is available for the architecture or operating system. Software incompatibility often occurs when new software is released for a newer version of an operating system and cannot run on the older version, because the older version lacks some of the features and functionality that the software depends on. Hardware compatibility Hardware compatibility can refer to the compatibility of computer hardware components with a particular CPU architecture, bus, motherboard or operating system. Hardware that is compatible may not always run at its highest stated performance, but it can nevertheless work with legacy components. An example is RAM chips, some of which can run at a lower (or sometimes higher) clock rate than rated. Hardware that was designed for one operating system may not work for another, if device or kernel drivers are unavailable. For example, Android cannot be run on a phone designed for iOS. Free and open-source software Compatibility layer Interchangeability Forward compatibility Backward compatibility Cross-platform Emulator List of computer standards Portability Plug compatible Hardware security References Interoperability Computer hardware Software
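A small, generic illustration of the point about interpreted software: the Python snippet below runs unchanged on any CPU architecture and operating system for which a Python interpreter exists, and can query the platform it finds itself on at run time; it is not tied to any particular application mentioned here.

```python
import platform
import sys

# Interpreted code like this adapts to whatever platform the interpreter runs on.
print("CPU architecture:", platform.machine())  # e.g. x86_64, arm64
print("Operating system:", sys.platform)        # e.g. linux, win32, darwin
print("Interpreter:", platform.python_implementation(), platform.python_version())
```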
Computer compatibility
[ "Technology", "Engineering" ]
373
[ "Telecommunications engineering", "Computer engineering", "Computer hardware", "Computer systems", "Software", "Software engineering", "Computer science", "nan", "Interoperability", "Computers" ]
12,076,308
https://en.wikipedia.org/wiki/EPOXI
EPOXI was a combination of NASA Discovery program missions led by the University of Maryland and principal investigator Michael A'Hearn, with co-operation from the Jet Propulsion Laboratory and Ball Aerospace. EPOXI used the Deep Impact spacecraft in a campaign consisting of two missions: the Deep Impact Extended Investigation (DIXI) and Extrasolar Planet Observation and Characterization (EPOCh). DIXI aimed to send the Deep Impact spacecraft on a flyby of another comet, after its primary mission was completed in July 2005, while EPOCh saw the spacecraft's photographic instruments used as a space observatory, studying extrasolar planets. DIXI successfully sent the Deep Impact spacecraft on a flyby of comet Hartley 2 on November 4, 2010, revealing a "hyperactive, small and feisty" comet, after three gravity assists from Earth in December 2007, December 2008 and June 2010. The DIXI mission was not without problems, however; the spacecraft had initially been targeted for a December 5, 2008 flyby of comet Boethin, though the comet could not be located, and was later declared a lost comet, prompting mission planners to organize a flyby of an alternative target, Hartley 2. After its flyby of Hartley 2, the spacecraft was also set to make a close flyby of the Apollo asteroid (163249) 2002 GT in 2020. The mission was suspended altogether, however, after contact with the spacecraft was suddenly lost in August 2013 and attempts to re-establish contact in the following month failed. Mission scientists theorized that a Y2K-like problem had plagued the spacecraft's software. Mission The Deep Impact mission was finished with the visit to comet Tempel 1, but the spacecraft still had plenty of maneuvering fuel left, so NASA approved a second mission, called EPOXI (Extrasolar Planet Observation and Deep Impact Extended Investigation), which included a visit to a second comet (DIXI component) as well as observations of extrasolar planets (EPOCh component). Comet Boethin lost On July 21, 2005, Deep Impact executed a trajectory correction maneuver that placed the spacecraft on course to fly past Earth on December 31, 2007. The maneuver allowed the spacecraft to use Earth's gravity to begin a new mission in a path towards another comet. In January 2008 Deep Impact began studying the stars with several known extrasolar planets in an attempt to find other such stars nearby. The larger of the spacecraft's two telescopes attempts to find the planets using the transit method. The initial plan was for a December 5, 2008 flyby of Comet Boethin, with the spacecraft coming within . The spacecraft did not carry a second impactor to collide with the comet; instead, it would observe the comet and compare its characteristics with those found on 9P/Tempel. A'Hearn, the Deep Impact team leader, reflected on the upcoming project at that time: "We propose to direct the spacecraft for a flyby of Comet Boethin to investigate whether the results found at Comet Tempel 1 are unique or are also found on other comets." He explained that the mission would provide only about half of the information collected during the collision with Tempel 1 but at a fraction of the cost. (EPOXI's low mission cost of $40 million was achieved by reusing the existing Deep Impact spacecraft.) Deep Impact would use its spectrometer to study the comet's surface composition and its telescopes for viewing the surface features. However, as the Earth gravity assist approached, astronomers were unable to locate Comet Boethin, which was too faint to be observed. 
Consequently, its orbit could not be calculated with sufficient precision to permit a flyby. Instead, the team decided to send Deep Impact to comet 103P/Hartley requiring an extra two years. NASA approved the additional funding required and retargeted the spacecraft. Mission controllers at the Jet Propulsion Laboratory began redirecting EPOXI on November 1, 2007. They commanded the spacecraft to perform a three-minute rocket burn that changed the spacecraft's velocity. EPOXI's new trajectory set the stage for three Earth flybys, the first on December 31, 2007. This placed the spacecraft into an orbital "holding pattern" so that it could encounter comet 103P/Hartley in 2010. "It's exciting that we can send the Deep Impact spacecraft on a new mission that combines two totally independent science investigations, both of which can help us better understand how solar systems form and evolve," said in December 2007 Deep Impact leader and University of Maryland astronomer Michael A'Hearn who is principal investigator for both the overall EPOXI mission and its DIXI component. In June 2009, EPOXI's spectrometer scanned the Moon on its way to Hartley, and discovered traces of "water or hydroxyl", confirming a Moon Mineralogy Mapper observation — a discovery announced in late September, 2009. EPOCh Before the 2008 flyby to re-orient for the comet 103P/Hartley encounter, the spacecraft used the High Resolution Instrument, the larger of its two telescopes, to perform photometric observations of previously discovered transiting extrasolar planets from January to August 2008. The goal of photometric observations is to measure the quantity of light, not necessarily resolve an image. An aberration in the primary mirror of the HRI allowed the HRI to spread the light from observations over more pixels without saturating the CCD, effectively obtaining better data. A total of 198,434 images were exposed. EPOCh's goals were to study the physical properties of giant planets and search for rings, moons and planets as small as three Earth masses. It also looked at Earth as though it were an extrasolar planet to provide data that could characterize Earth-type planets for future missions, and it imaged the Earth over 24 hours to capture the Moon passing in front on 2008-05-29. Comet flyby The spacecraft used Earth's gravity for the second gravity assist in December 2008 and made two distant flybys of Earth in June and December 2009. On May 30, 2010 it successfully fired its engines for an 11.3 second trajectory correction maneuver, for a velocity change (Δv) of , in preparation for the third Earth flyby on June 27. Observations of 103P/Hartley began on September 5 and ended November 25, 2010. For a diagram of the EPOXI solar orbits see here. The mission's closest approach to 103P/Hartley occurred at 10 am EDT on 4 November 2010, passing to within of this small comet. The flyby speed was 12.3 km/s. The spacecraft employed the same suite of three science instruments—two telescopes and an infrared spectrometer—that the Deep Impact spacecraft used during its prime mission to guide an impactor into comet Tempel 1 in July 2005 and observe the results. Early results of the observations show that the comet is powered by dry ice, not water vapor as was previously thought. The images were clear enough for scientists to link jets of dust and gas with specific surface features. 
"When comet Boethin could not be located, we went to our backup, which is every bit as interesting but about two years farther down the road," said Tom Duxbury, EPOXI project manager at NASA's Jet Propulsion Laboratory in Pasadena, California. "Hartley 2 is scientifically just as interesting as comet Boethin because both have relatively small, active nuclei," said Michael A'Hearn, principal investigator for EPOXI at the University of Maryland, College Park. Sundry opportunities In November 2010, EPOXI was used to make some test-training deep sky observations, using the MRI camera that is optimised for cometary imagery. Images were made of the Dumbbell Nebula (M27), the Veil Nebula (NGC6960) and the Whirlpool Galaxy (M51a). References External links EPOXI home page NASA's EPOXI page NASA's Deep Impact Begins Hunt For Alien Worlds - 8 Feb 2008 Movie of the Moon transiting the Earth EPOXI Mission Archive at the NASA Planetary Data System, Small Bodies Node Missions to comets Discovery Program Exoplanet search projects ru:Дип Импакт (КА)#EPOXI — расширенная миссия
EPOXI
[ "Astronomy" ]
1,707
[ "Astronomy projects", "Exoplanet search projects" ]
12,076,692
https://en.wikipedia.org/wiki/Numerical%20error
In software engineering and mathematics, numerical error is the error that arises in numerical computation. Types It can be the combined effect of two kinds of error in a calculation: the first is caused by the finite precision of computations involving floating-point or integer values; the second, usually called truncation error, is the difference between the exact mathematical solution and the approximate solution obtained when simplifications are made to the mathematical equations to make them more amenable to calculation. The term truncation comes from the fact that these simplifications usually either involve the truncation of an infinite series expansion so as to make the computation possible and practical, or discard the least significant bits of an arithmetic operation. Measure Floating-point numerical error is often measured in ULP (unit in the last place). See also Loss of significance Numerical analysis Error analysis (mathematics) Round-off error Kahan summation algorithm Numerical sign problem References Accuracy and Stability of Numerical Algorithms, Nicholas J. Higham "Computational Error And Complexity In Science And Engineering", V. Lakshmikantham, S.K. Sen Computer arithmetic Numerical analysis
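A minimal sketch (not part of the article) of the two kinds of error described above, using ordinary Python floats: round-off from finite precision, truncation from cutting off a series expansion, and the ULP measure via math.ulp (available in Python 3.9+). The helper name exp_truncated is purely illustrative.

```python
import math

# 1) Round-off error: finite floating-point precision.
x = 0.1 + 0.2
print(x == 0.3)        # False
print(x - 0.3)         # about 5.55e-17
print(math.ulp(0.3))   # one "unit in the last place" near 0.3 (Python 3.9+)

# 2) Truncation error: approximating e^x by a truncated Taylor series.
def exp_truncated(x, terms):
    """Sum the first `terms` terms of 1 + x + x^2/2! + ..."""
    return sum(x ** k / math.factorial(k) for k in range(terms))

for terms in (2, 4, 8, 16):
    approx = exp_truncated(1.0, terms)
    # Truncation error shrinks as more terms of the series are kept.
    print(terms, approx, abs(approx - math.e))
```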
Numerical error
[ "Mathematics", "Engineering" ]
231
[ "Applied mathematics", "Software engineering stubs", "Computational mathematics", "Software engineering", "Computer arithmetic", "Arithmetic", "Mathematical relations", "Applied mathematics stubs", "Numerical analysis", "Approximations" ]
12,076,921
https://en.wikipedia.org/wiki/Water-repellent%20glass
Water-repellent glass (WRG) is a transparent coating film fabricated onto glass, enabling the glass to exhibit hydrophobicity and durability. WRGs are often manufactured out of materials including derivatives from per- and polyfluoroalkyl substances (PFAS), tetraethylorthosilicate (TEOS), polydimethylsilicone (PDMS), and fluorocarbons. In order to prepare WRGs, sol-gel processes involving dual-layer enrichments of large size glasses are commonly implemented. Glasses enriched with WRG coatings prevent water droplets from sticking to the surface due to hydrophobic properties. These properties are achieved through high water-sliding property and high contact angles with water drops (over 100°). Additionally, durability against both chemical and mechanical attack allows the coating to protect the glass from abrasion due to windshield wipers, rainwater, and other weather conditions.        WRGs are most commonly used commercially for automobile windows to increase visibility in precipitous weather conditions and nighttime driving. In industry, WRG's were first used by Volvo Cars first on their late-2005 vehicles, and have also been used by Japanese automobile makers such as Toyota, Honda, and Mazda. Additionally, WRG has other practical applications such as eyewear and photocatalysts. Properties Hydrophobicity Hydrophobic properties of WRG glass windows are crucial to its repellency abilities. High water-sliding property of WRG films is necessary for hydrophobicity. The higher the water-sliding angle, or angle of a surface in which a water droplet begins to slide down, the easier a water drop can slide down the film surface. A film's water-sliding angle is often dependent on the film coating substance. For instance, a study revealed that coating a WRG film with Fluoroalkylsilane (FAS) produced a higher water-sliding angle than coating with Polydimethylsilicone (PDMS). High contact angles of over 100 degrees are associated with more effective water-repellency properties. The greater the contact angle between the water droplet and glass surface, the less the contact between the water and the glass, and the easier the water droplet can slide off of the glass. This can be achieved by increasing surface roughness, since the contact angle becomes larger as surface particles become larger. Mechanical durability Water-repellent films' mechanical durability properties are dependent on state of the surface roughness of the film and density of adsorbed water-repellent molecules. Mechanical durability of WRG can be characterized as wear and weather resistance, an important attribute for manufacturing automobile windows. The greater the surface roughness, the more resistant the film will be to abrasion. Average surface roughness of a glass substrate indicates the size of surface particles, is measured using an atomic force microscope (AFM), and recorded in nanometers. A study analyzed different samples of silica films and found minimum and maximum surface roughnesses of 0.4 and 16.1 nm respectively. Surface roughnesses greater than 8 nm are considered large. After rubbing each sample with a flannel cloth, the study was able to determine each's resistance against wear. Films with higher surface roughnesses exhibit the highest mechanical durability. Additionally, films formed on top of silica were more durable than films formed on soda-lime glass. The WRG's mechanical durability can also be increased by a larger density of reaction sites per surface area. 
An increased density of reaction sites on the film is also a result of a higher surface roughness. This works to increase durability since a higher density means more rigid chemical bonds. For instance, forming a WRG film out of polyfluoroalkyl isocyanate creates a surface with siloxane bonding. There exists a direct correlation between the density of silanol groups on the film surface and the adhesion density of the film. Production Sol-gel process The sol-gel process is a common method of preparing water-repellent glass coating films done with various materials and often resulting in dual-layer films. This process is advantageous for automobile window applications since it works with large, curved safety glasses and allows qualities such as durability and hydrophobicity to be controlled. In a study done by the University of Massachusetts, the sol-gel process was employed to prepare a dual-layer film using layers composed of silica and fluorocarbon. The silica layer was selected to enhance durability and placed at the glass-film interface, while the fluorocarbon layer was placed at the film-air interface and incorporated a specific surface roughness into the design. The process involved the following distinct steps: preparing both the silica sol and water-repellent solutions, spraying the solution onto the glass, treating the glass through drying, and treating through heating. In addition, the Nippon Sheet Glass Co. in Japan discussed a sol-gel treatment involving fluoroalkylsilane (FAS) and polydimethylsilicone (PDMS). Both materials were mixed with catalysts in solvents, fabricated onto glasses, and dried. The use of the sol-gel treatment allowed for flexibility in experimenting with contact angle, sliding angle, and durability. The study pointed out that this process could be also used in automobile industry. Applications The table below provides an overview of some notable applications of WRG films. Automobile windows and mirrors WRG is commonly used as a coating for windows and mirrors of automobiles in order to increase visibility through wicking away rainwater, snow, and dirt. Several millions of WRG windows have already been manufactured and installed in industry. For instance, Central Glass Company developed a hydrophobic glass film exhibiting excellent repellency, durability, and transparency. Many Japanese automobile companies including Honda and Mazda are selling cars with these glass films. Additionally, water-repellent coatings are being applied to automobile side mirrors. Eyewear The eyeglass industry is also moving toward implementing water and dust repellent glasses to decrease fogging due to rain, sweat, and other water sources. When glasses experience condensation, the small water droplets begin scattering light, impairing the vision of the glasses wearer. The eyecare company Nasho is innovating toward WRG technology to improve vision, but is currently limited financially for the research and development. Photocatalysts Photocatalyst coatings allow for the self-cleaning of surfaces of road signs, building materials, and solar panels. Multiple photocatalyst WRG film such as CLEARTECT and HYDRAP have been commercialized. A WRG film can be added on top of solar panels in order to increase their efficiency. The cover glass technology is self-cleaning, allowing for maximized light transmission into the solar cell. The hydrophobic film acts as a barrier that causes water droplets to roll off the solar panel, rather than adhering and blocking sunlight from being absorbed. 
Solar panels enhanced with anti-reflective, water-repellent layers show a 6.6% increase in performance when compared to those without a coating. References Glass coating and surface modification Volvo Cars
Water-repellent glass
[ "Chemistry" ]
1,493
[ "Glass chemistry", "Coatings", "Glass coating and surface modification" ]
12,077,900
https://en.wikipedia.org/wiki/UK%20Threat%20Levels
The United Kingdom Terror Threat Levels, often referred to as UK Threat Levels, are the alert states that have been in use since 1 August 2006 by the British government to warn of forms of terrorist activity. In September 2010 the threat levels for Northern Ireland-related terrorism were also made available. In July 2019 changes were made to the terrorism threat level system, to reflect the threat posed by all forms of terrorism, irrespective of ideology. There is now a single national threat level describing the threat to the UK, which includes Islamist, Northern Ireland, left-wing and right-wing terrorism. Before 2006, a colour-based alert scheme known as BIKINI state was used. The response indicates how government departments and agencies and their staffs should react to each threat level. Categories of threat Since 23 July 2019, the Home Office has reported two different categories of terrorist threat: National Threat Level. Northern Ireland-related Threat Level to Northern Ireland Previously, since 24 September 2010, the Home Office has reported three different categories of terrorist threat: Threat from international terrorism. Terrorism threat related to Northern Ireland in Northern Ireland itself. Terrorism threat related to Northern Ireland in Great Britain (i.e. excluding Northern Ireland). A fourth category of terrorist threat is also assessed but is not disclosed, relating to threats to sectors of the UK's critical national infrastructure such as the London Underground, National Rail network and power stations. The Joint Terrorism Analysis Centre (JTAC) is responsible for setting the threat level from international terrorism and the Security Service (MI5) is responsible for setting both threat levels related to Northern Ireland. The threat level informs decisions on protective security measures taken by public bodies, the police and the transport sector. Threat levels Threat Levels are decided using the following information: Available intelligence. It is rare that specific threat information is available and can be relied upon. More often, judgements about the threat will be based on a wide range of information, which is often fragmentary, including the level and nature of current terrorist activity, comparison with events in other countries and previous attacks. Intelligence is only ever likely to reveal part of the picture. Terrorist capability. An examination of what is known about the capabilities of the terrorists in question and the method they may use based on previous attacks or from intelligence. This would also analyse the potential scale of the attack. Terrorist intentions. Using intelligence and publicly available information to examine the overall aims of the terrorists and the ways they may achieve them including what sort of targets they would consider attacking. Timescale. The threat level expresses the likelihood of an attack in the near term. We know from past incidents that some attacks take years to plan, while others are put together more quickly. In the absence of specific intelligence, a judgement will need to be made about how close an attack might be to fruition. Threat levels do not have any set expiry date, but are regularly subject to review in order to ensure that they remain current. History Threat levels were originally produced by MI5's Counter-Terrorism Analysis Centre for internal use within the British government. 
Assessments known as Security Service Threat Reports or Security Service Reports were issued to assess the level of threat to British interests in a given country or region. They had six levels: Imminent, High, Significant, Moderate, Low and Negligible. Following terrorist attacks in Indonesia in 2002, the system was criticised by the Intelligence and Security Committee of Parliament (ISC) as insufficiently clear and needing to be of greater use to "customer departments". The 7 July 2005 London bombings prompted the government to update the threat level system following a recommendation from the ISC that it should deliver "a greater transparency of the threat level and alert systems as a whole, and in particular [it is recommended] that more thought is given to what is put in the public domain about the level of threat and required level of alert." The system was accordingly simplified and made easier to understand. Since 2006, MI5 and the Home Office have published international terrorism threat levels for the entire UK on their websites, and since 2010 they have also published threat levels for Northern Ireland, with separate threat levels for Northern Ireland and the rest of the UK. 2019 'New Reporting Format' In July 2019 changes were made to the terrorism threat level system creating a 'New Format' of threat levels, to reflect the threat posed by all forms of terrorism, irrespective of ideology. There is now a single national threat level describing the threat to the UK, which includes Islamist, Northern Ireland, left-wing and right-wing terrorism. Changes to threat levels The following table records changes to the threat levels from July 2019 – Present: Old-format historical threat levels Since 2006, information about the national threat level has been available on the MI5 and Home Office websites. In September 2010 the threat levels for Northern Ireland-related terrorism were also made available. The following table records changes to the threat levels from August 2006 – July 2019 before the 'New Format' was put into place: See also Historic/Defunct: References External links Current Threat Level, Home Office 2006 introductions Alert measurement systems Threat Levels Emergency management in the United Kingdom
UK Threat Levels
[ "Technology" ]
1,039
[ "Warning systems", "Alert measurement systems" ]
12,077,957
https://en.wikipedia.org/wiki/Voice%20call%20continuity
The 3GPP has defined the Voice Call Continuity (VCC) specifications in order to describe how a voice call can be persisted, as a mobile phone moves between circuit switched and packet switched radio domains (3GPP TS 23.206). Many mobile phones are becoming available that support both cellular and other broadband radio technologies. For example, the Nokia N Series and E Series devices support both GSM and WiFi. Similar devices from Sony Ericsson, BlackBerry, Samsung, HTC, Motorola and even the Apple iPhone provide comparable dual mode technology. WiMAX support is also being added and further handsets are emerging from Kyocera and other vendors, which provide dual mode technology in CDMA phones. A wide range of Internet applications can then be accessed from mobile devices using wireless broadband technologies like WiFi and WiMAX. For example, VoIP traffic can be carried over these alternative radio interfaces. Whereas VoIP calls from mobile devices are controlled by IP infrastructure, according to the VCC specifications, calls to and from a cellular phone in the circuit switched domain are also anchored in an IP domain, for example the IP Multimedia Subsystem (IMS). As the handset becomes attached and detached from wireless access points such as WiFi hotspots, a client application in the device provides notifications of the radio conditions to a VCC platform in the network. This allows circuit switched and IP call legs to be originated and terminated such that the speech path is transferred between domains, transparently to the end user. This technology is of interest to users as an example of the benefits that are achievable through Fixed Mobile Convergence (FMC). Since most WiFi and WiMAX access points will use fixed backhaul technologies, seamlessly moving between for example WiFi and GSM domains allows the best quality and most cost efficient radio to be used at any given point in time, irrespective of the transport technology used for the media. Similarly, service providers are interested in VCC in order to offer FMC products towards specific market segments, such as enterprise users. Cellular operators in particular can offer bundled services that consist of for example, a broadband connection with a WiFi router and a set of dual mode devices. This supports a Fixed Mobile Substitution (FMS) business case where calls from the office can be carried as VoIP over WiFi and a broadband connection, while VCC technology allows these calls to be seamlessly handed over to cellular networks as the device moves to areas of poor WiFi coverage. One limitation of VCC however, relates to the focus on voice service. In order to preserve the cellular telephony experience while users are WiFi attached, other features need to be replicated in the packet switched domain. For example, the 3GPP has defined SMS over IP specifications (3GPP TS 23.204) in order to describe how messaging functionality can be provided to end users that are present within IP based access networks. However, over several years a range of other business logic, such as GSM Supplementary Services within the Home Location Register (HLR) has been embedded within cellular networks. This functionality must also be realized within the IP domain in order to provide full service continuity between multiple access networks. Evolution In the context of the Release 8 of 3GPP standards, VCC was replaced by a wider concept that covers all services provided by IMS. 
This work resulted in the specification of IMS Service Continuity and IMS Centralized Services (ICS), which are meant to be used in particular to provide the continuity of voice calls between LTE and legacy 2G/3G networks. See also Handoff References External links Article by John Meredith IMS services Mobile telecommunications standards 3GPP standards
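As a purely hypothetical sketch of the handover behaviour described above (the handset reports radio conditions, and a network-side VCC anchor transfers the speech path between circuit-switched and packet-switched call legs), the following illustrates the decision logic. Every class name, method, and threshold here is invented for illustration and is not part of any 3GPP specification or real API.

```python
class VccAnchor:
    """Toy model of a network-side anchor that keeps a call on one of two legs."""

    def __init__(self):
        self.active_leg = "CS"  # call initially carried on the cellular (circuit-switched) leg

    def on_radio_report(self, wifi_rssi_dbm, cellular_rssi_dbm):
        # Simplified policy: prefer WiFi when its signal is usable, fall back to
        # cellular when WiFi degrades. Thresholds are invented for illustration.
        if self.active_leg == "CS" and wifi_rssi_dbm > -65:
            self._transfer("PS")   # originate VoIP leg, release CS leg
        elif self.active_leg == "PS" and wifi_rssi_dbm < -80:
            self._transfer("CS")   # originate CS leg, release VoIP leg

    def _transfer(self, new_leg):
        print(f"Transferring speech path {self.active_leg} -> {new_leg}")
        self.active_leg = new_leg


anchor = VccAnchor()
# WiFi fades in and out as the user moves; the anchor switches legs accordingly.
for report in [(-90, -70), (-60, -75), (-85, -72)]:
    anchor.on_radio_report(*report)
```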
Voice call continuity
[ "Technology" ]
759
[ "Mobile telecommunications", "Mobile telecommunications standards", "IMS services" ]
12,079,734
https://en.wikipedia.org/wiki/All-pay%20auction
In economics and game theory, an all-pay auction is an auction in which every bidder must pay regardless of whether they win the prize, which is awarded to the highest bidder as in a conventional auction. As shown by Riley and Samuelson (1981), equilibrium bidding in an all pay auction with private information is revenue equivalent to bidding in a sealed high bid or open ascending price auction. In the simplest version, there is complete information. The Nash equilibrium is such that each bidder plays a mixed strategy and expected pay-offs are zero. The seller's expected revenue is equal to the value of the prize. However, some economic experiments and studies have shown that over-bidding is common. That is, the seller's revenue frequently exceeds that of the value of the prize, in hopes of securing the winning bid. In repeated games even bidders that win the prize frequently will most likely take a loss in the long run. The all-pay auction with complete information does not have a Nash equilibrium in pure strategies, but does have a Nash equilibrium in mixed-strategies. Forms of all-pay auctions The most straightforward form of an all-pay auction is a Tullock auction, sometimes called a Tullock lottery after Gordon Tullock, in which everyone submits a bid but both the losers and the winners pay their submitted bids. This is instrumental in describing certain ideas in public choice economics. The dollar auction is a two player Tullock auction, or a multiplayer game in which only the two highest bidders pay their bids. Another practical examples are the bidding fee auction and the penny raffle (pejoratively known as a "Chinese auction"). Other forms of all-pay auctions exist, such as a war of attrition (also known as biological auctions), in which the highest bidder wins, but all (or more typically, both) bidders pay only the lower bid. The war of attrition is used by biologists to model conventional contests, or agonistic interactions resolved without recourse to physical aggression. Rules The following analysis follows a few basic rules. Each bidder submits a bid, which only depends on their valuation. Bidders do not know the valuations of other bidders. The analysis is based on an independent private value (IPV) environment where the valuation of each bidder is drawn independently from a uniform distribution [0,1]. In the IPV environment, if my value is 0.6 then the probability that some other bidder has a lower value is also 0.6. Accordingly, the probability that two other bidders have lower value is . Symmetry Assumption In IPV bidders are symmetric because valuations are from the same distribution. These make the analysis focus on symmetric and monotonic bidding strategies. This implies that two bidders with the same valuation will submit the same bid. As a result, under symmetry, the bidder with the highest value will always win. Using revenue equivalence to predict bidding function Consider the two-player version of the all-pay auction and be the private valuations independent and identically distributed on a uniform distribution from [0,1]. We wish to find a monotone increasing bidding function, , that forms a symmetric Nash Equilibrium. If player bids , he wins the auction only if his bid is larger than player 's bid . The probability for this to happen is , since is monotone and Thus, the probability of allocation of good to is . Thus, 's expected utility when he bids as if his private value is is given by . 
For to be a Bayesian-Nash Equilibrium, should have its maximum at so that has no incentive to deviate given sticks with his bid of . Upon integrating, we get . We know that if player has private valuation , then they will bid 0; . We can use this to show that the constant of integration is also 0. Thus, we get . Since this function is indeed monotone increasing, this bidding strategy constitutes a Bayesian-Nash Equilibrium. The revenue from the all-pay auction in this example is Since are drawn iid from Unif[0,1], the expected  revenue is . Due to the revenue equivalence theorem, all auctions with 2 players will have an expected revenue of when the private valuations are iid from Unif[0,1]. Bidding Function in the Generic Symmetric Case Suppose the auction has risk-neutral bidders. Each bidder has a private value drawn i.i.d. from a common smooth distribution . Given free disposal, each bidder's value is bounded below by zero. Without loss of generality, then, normalize the lowest possible value to zero. Because the game is symmetric, the optimal bidding function must be the same for all players. Call this optimal bidding function . Because each player's payoff is defined as their expected gain minus their bid, we can recursively define the optimal bid function as follows: Note because F is smooth the probability of a tie is zero. This means the probability of winning the auction will be equal to the CDF raised to the number of players minus 1: i.e., . The objective now satisfies the requirements for the envelope theorem. Thus, we can write: This yields the unique symmetric Nash Equilibrium bidding function . Examples Consider a corrupt official who is dealing with campaign donors: Each wants him to do a favor that is worth somewhere between $0 and $1000 to them (uniformly distributed). Their actual valuations are $250, $500 and $750. They can only observe their own valuations. They each treat the official to an expensive present - if they spend X Dollars on the present then this is worth X dollars to the official. The official can only do one favor and will do the favor to the donor who is giving him the most expensive present. This is a typical model for all-pay auction. To calculate the optimal bid for each donor, we need to normalize the valuations {250, 500, 750} to {0.25, 0.5, 0.75} so that IPV may apply. According to the formula for optimal bid: The optimal bids for three donors under IPV are: To get the real optimal amount that each of the three donors should give, simply multiplied the IPV values by 1000: This example implies that the official will finally get $375 but only the third donor, who donated $281.3 will win the official's favor. Note that the other two donors know their valuations are not high enough (low chance of winning),  so they do not donate much, thus balancing the possible huge winning profit and the low chance of winning. References All-pay auction Mathematical economics
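For values drawn i.i.d. from Unif[0,1] with n bidders, the general equilibrium bidding function above reduces to b(v) = ((n-1)/n)·v^n, which is v²/2 in the two-bidder case discussed earlier. The sketch below is an added illustration, not part of the article: it reproduces the corrupt-official example and checks the two-bidder expected revenue of 1/3 by Monte Carlo; function names are invented.

```python
import random

def equilibrium_bid(v, n):
    """Symmetric all-pay equilibrium bid for a value v in [0,1] with n bidders (uniform case)."""
    return (n - 1) / n * v ** n

# Corrupt-official example: three donors with normalized valuations 0.25, 0.5, 0.75.
values = [0.25, 0.50, 0.75]
bids = [equilibrium_bid(v, n=3) for v in values]
print([round(1000 * b, 1) for b in bids])  # roughly [10.4, 83.3, 281.2] dollars
print(round(1000 * sum(bids), 1))          # the official collects about 375.0 in total

# Monte Carlo check of revenue for two bidders: both always pay their bids,
# and the expected total should equal E[min(v1, v2)] = 1/3 by revenue equivalence.
random.seed(0)
trials = 200_000
revenue = sum(equilibrium_bid(random.random(), 2) + equilibrium_bid(random.random(), 2)
              for _ in range(trials)) / trials
print(round(revenue, 3))                   # close to 0.333
```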
All-pay auction
[ "Mathematics" ]
1,376
[ "Applied mathematics", "Game theory", "Non-cooperative games", "Mathematical economics" ]
12,079,977
https://en.wikipedia.org/wiki/Flood%20insurance%20rate%20map
A flood insurance rate map (FIRM) is an official map of a community within the United States that displays the floodplains, more explicitly special hazard areas and risk premium zones, as delineated by the Federal Emergency Management Agency (FEMA). The term is used mainly in the United States but similar maps exist in many other countries, such as Australia. Uses FIRMs display areas that fall within the 100-year flood boundary. Areas that fall within the boundary are called special flood hazard areas (SFHAs) and they are further divided into insurance risk zones. The term 100-year flood indicates that the area has a one-percent chance of flooding in any given year, not that a flood will occur every 100 years. Such maps are used in town planning, in the insurance industry, and by individuals who want to avoid moving into a home at risk of flooding or to know how to protect their property. FIRMs are used to set rates of insurance against risk of flood and whether buildings are insurable at all against flood. It is similar to a topographic map, but is designed to show floodplains. Towns and municipalities use FIRMs to plan zoning areas. Most places will not allow construction in a flood way. Creation process In the United States the FIRM for each town is occasionally updated. At that time a preliminary FIRM will be published, and available for public viewing and comment. FEMA sells the official FIRMs, called community kits, as well as an updating access service to the maps. There are also some companies that sell software to locate land parcels or real estate on digitized FIRMs. These FIRMs are used in identifying whether a land or building is in flood zone and, if so, which of the different flood zones are in effect. In 2004, FEMA began a project to update and digitize the flood plain maps at a yearly cost of $200 million. The new maps usually take around 18 months to go from a preliminary release to the final product. During that time period FEMA works with local communities to determine the final maps. Louisiana and FEMA In early 2014, two congressmen from Louisiana, Bill Cassidy and Steve Scalise, asked FEMA to consider the width of drainage canals, water flow levels, drainage improvements, pumping stations and computer models when deciding the final flood insurance rate maps. See also National Flood Insurance Program Floodplain Special Flood Hazard Area References External links FIRMettes from FEMA Hydrology and urban planning Flood control in the United States Flood insurance Federal Emergency Management Agency Geologic maps
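To make the "100-year flood" wording above concrete, here is a small illustrative calculation (not from the article) assuming statistically independent years, a common modelling simplification: a one-percent annual chance compounds to a substantial probability over the life of a typical mortgage.

```python
def prob_at_least_one_flood(annual_prob, years):
    """Probability of at least one flood over `years`, assuming independent years."""
    return 1 - (1 - annual_prob) ** years

for years in (1, 10, 30, 100):
    print(years, round(prob_at_least_one_flood(0.01, years), 3))
# 30 years (a typical mortgage) -> about 0.26; 100 years -> about 0.634, not certainty
```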
Flood insurance rate map
[ "Environmental_science" ]
507
[ "Hydrology", "Hydrology and urban planning" ]
1,615,818
https://en.wikipedia.org/wiki/Pulse%20wave
A pulse wave or pulse train or rectangular wave is a non-sinusoidal waveform that is the periodic version of the rectangular function. It is held high for a fraction of each cycle (period), called the duty cycle, and for the remainder of each cycle is low. A duty cycle of 50% produces a square wave, a specific case of a rectangular wave. The average level of a rectangular wave is also given by the duty cycle. The pulse wave is used as a basis for other waveforms that modulate an aspect of the pulse wave, for instance: Pulse-width modulation (PWM) refers to methods that encode information by varying the duty cycle of a pulse wave. Pulse-amplitude modulation (PAM) refers to methods that encode information by varying the amplitude of a pulse wave. Frequency-domain representation The Fourier series expansion for a rectangular pulse wave with period $T$, amplitude $A$ and pulse length $\tau$ is $f(t) = \frac{A\tau}{T} + \frac{2A}{\pi}\sum_{n=1}^{\infty}\frac{1}{n}\sin\!\left(\frac{\pi n \tau}{T}\right)\cos\!\left(2\pi n f t\right)$ where $f = \frac{1}{T}$. Equivalently, if duty cycle $d = \frac{\tau}{T}$ is used, and $\omega = 2\pi f$: $f(t) = Ad + \frac{2A}{\pi}\sum_{n=1}^{\infty}\frac{1}{n}\sin\!\left(\pi n d\right)\cos\!\left(n\omega t\right)$ Note that, for symmetry, the starting time ($t = 0$) in this expansion is halfway through the first pulse. Alternatively, $f(t)$ can be written using the sinc function, using the definition $\operatorname{sinc}(x) = \frac{\sin \pi x}{\pi x}$, as $f(t) = Ad\left(1 + 2\sum_{n=1}^{\infty}\operatorname{sinc}(nd)\cos(n\omega t)\right)$ or, with $\operatorname{sinc}(x) = \frac{\sin x}{x}$, as $f(t) = Ad\left(1 + 2\sum_{n=1}^{\infty}\operatorname{sinc}(\pi n d)\cos(n\omega t)\right)$ Generation A pulse wave can be created by subtracting a sawtooth wave from a phase-shifted version of itself. If the sawtooth waves are bandlimited, the resulting pulse wave is bandlimited, too. Applications The harmonic spectrum of a pulse wave is determined by the duty cycle. Acoustically, the rectangular wave has been described variously as having a narrow/thin, nasal/buzzy/biting, clear, resonant, rich, round and bright sound. Pulse waves are used in many Steve Winwood songs, such as "While You See a Chance". See also Gibbs phenomenon Pulse shaping Sinc function Sine wave References Waves
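A minimal sketch (not part of the article) of the two constructions just described: building a pulse train by subtracting a phase-shifted sawtooth from itself, and evaluating a partial sum of the Fourier series with duty cycle d. NumPy is assumed; the naive sawtooth here is not bandlimited, and all names are illustrative.

```python
import numpy as np

T = 1.0   # period
d = 0.25  # duty cycle (fraction of each period spent high)
A = 1.0   # amplitude
t = np.linspace(0.0, 2 * T, 2000, endpoint=False)  # two full periods

def sawtooth(t, T):
    """Naive (non-bandlimited) sawtooth rising from 0 to 1 over each period."""
    return (t / T) % 1.0

# Generation: subtract a phase-shifted copy of a sawtooth from itself.
# The +d offset just shifts the two-level result to the 0..A range.
pulse_from_saws = A * (sawtooth(t, T) - sawtooth(t + d * T, T) + d)

# Frequency domain: partial sum of the Fourier series with duty cycle d
# (t = 0 taken halfway through the first pulse, as noted above).
def pulse_fourier(t, T, d, A, harmonics=50):
    w = 2 * np.pi / T
    out = np.full_like(t, A * d)
    for n in range(1, harmonics + 1):
        out += (2 * A / np.pi) * np.sin(np.pi * n * d) / n * np.cos(n * w * t)
    return out

approx = pulse_fourier(t, T, d, A)
print(pulse_from_saws.mean())  # ≈ A*d = 0.25, the average level set by the duty cycle
print(approx.mean())           # ≈ 0.25 as well
```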
Pulse wave
[ "Physics" ]
369
[ "Waves", "Physical phenomena", "Motion (physics)" ]
1,615,846
https://en.wikipedia.org/wiki/Libdvdcss
libdvdcss (or libdvdcss2 in some repositories) is a free and open-source software library for accessing and unscrambling DVDs encrypted with the Content Scramble System (CSS). libdvdcss is part of the VideoLAN project and is used by VLC media player and other DVD player software packages, such as Ogle, xine-based players, and MPlayer. Comparison with DeCSS libdvdcss is not to be confused with DeCSS. Whereas DeCSS uses a cracked DVD player key to perform authentication, libdvdcss uses a generated list of possible player keys. If none of them work (for instance, when the DVD drive enforces region coding), libdvdcss brute-forces the key, ignoring the DVD's region code (if any). The legal status of libdvdcss is controversial but there has been—unlike DeCSS—no known legal challenge to it as of June 2022. Distribution Many Linux distributions do not contain libdvdcss (for example, Debian, Ubuntu, Fedora and openSUSE) due to fears of running afoul of DMCA-style laws, but they often provide the tools to let the user install it themselves. For example, it used to be available in Ubuntu through Medibuntu, which is no longer available. Distributions which come pre-installed with libdvdcss include BackTrack, CrunchBang Linux, LinuxMCE, Linux Mint, PCLinuxOS, Puppy Linux 4.2.1, Slax, Super OS, Pardus, and XBMC Live. It is also in Arch Linux official package repositories. Usage Libdvdcss alone is only a library and cannot play DVDs. DVD player applications, such as VLC media player, use this library to decode DVDs. Libdvdcss is optional in many open-source DVD players, but without it, only non-encrypted discs will play. Using HandBrake or VidCoder for DVD ripping requires that one install libdvdcss (with compilation or Homebrew on macOS). See also Advanced Access Content System Blu-ray References External links C (programming language) libraries Compact Disc and DVD copy protection Cryptographic software DVD Free codecs Free computer libraries
Libdvdcss
[ "Mathematics" ]
512
[ "Cryptographic software", "Mathematical software" ]
1,615,880
https://en.wikipedia.org/wiki/Link%20%28unit%29
The link (usually abbreviated as "l.", "li." or "lnk."), sometimes called a Gunter’s link, is a unit of length formerly used in many English-speaking countries. In US customary units modern definition, the link is exactly of a US survey foot, or exactly 7.92 inches or 20.1168 cm. The unit is based on Gunter's chain, a metal chain 66 feet long with 100 links, that was formerly used in land surveying. Even after the original tool was replaced by later instruments of higher precision, the unit was commonly used throughout the English-speaking world, for example in the United States customary units and the Imperial system. The length of the foot, and hence the link, varied slightly from place to place and time to time. In modern times the difference between the US survey foot and the international foot is two parts per million. The link fell out of general use in the 20th century. Proportions to other customary units Twenty-five links make a rod, pole or perch (16.5 feet). One hundred links make a chain. One thousand links make a furlong. Eight thousand links make a mile. {| |- |1 link ||≡  |align=right|0.01||chain |- |||≡ |align=right|0.04||rod |- |||≡ |align=right|0.66||foot |- |||≡ |align=right|0.22||yard |- |||≡ |align=right|7.92||inches |} History Edmund Gunter designed and introduced the Gunter's chain in England in 1620. By correlating traditional English land measurements with the new decimal number system (which had just replaced Roman numerals), it combined ease and flexibility in taking surveying measurements in the field with ease of calculating results afterward. It rapidly gained acceptance in English surveying practice, which also began to adopt the tool's chain and link lengths as units of measure within the English system of units. As English dominions grew over time, its system of measures came to be used in many parts of the world. When the American colonies broke their ties with Great Britain in 1776, they needed to establish a system of units that fell under their own political authority. While they adopted many of the British units, the length of the yard (which determined all other units of length) was by necessity governed by the length of a physical artifact. The one in American possession was slightly different in actual length from the British one, due to imprecision of manufacture. It was of only minor significance at the time. In 1824, the United Kingdom officially reformed their system of units in legislation that established what came to be known as the Imperial system, but the standard of the yard remained the length of the artifact. The last replacement imperial artifact was made in bronze in 1845, and the most accurate measurement ever made of its length (much later) was 0.914 398 416 meters. In the U.S., the Mendenhall Order of 1893 tied the length of the U.S. yard to the meter, with the equivalence 39.37 inches = 1 meter, or approximately 0.914 401 828 803 658 meters per yard. In 1959, the international yard and pound agreement established the "international" yard length of 0.9144 meters, upon which both the customary U.S. and imperial units of length have since been based. Even so, the Mendenhall Order length of the yard continues in use even in 2013 in the United States as the basis for the survey foot. The prior land survey data for North America of 1927 (NAD27) had been based on the survey foot, and a new triangulation based on the metric system (NAD83) was not released until 1986. 
Since that time, the State Plane Coordinate Systems (SPCSs) established by the U.S. Geodetic Survey have been based in SI units in all states. But a few states have established by law that they must remain available in survey feet as well. In October 2019, the U.S. National Geodetic Survey and the National Institute of Standards and Technology announced their joint intent to retire the U.S. survey foot, with effect from the end of 2022. The link in U.S. Customary units is thereafter defined based on the International 1959 foot. Absolute length In many measurement systems based on former English units, the link has remained fixed at 0.66 feet, therefore 0.22 yards or 7.92 inches; it is the absolute length of the yard that has varied. A rare remaining application of the link is in the service of some surveying in the United States, which relates to the definition of the survey foot. During most of its useful life, a modern degree of precision in the link's measure was neither expected nor possible. With various definitions, 1 link is equal to: exact 201.168 mm (based on the International 1959 foot) approximate 201.167 652 mm (based on the per-1959 imperial foot) approximate 201.168 402 mm (based on the U.S. survey foot) See also Edmund Gunter References Units of length Customary units of measurement in the United States
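As a quick numeric check of the definitions above, the sketch below (not from the article) computes the metric length of a link under both the international foot and the US survey foot, using exact rational arithmetic; the constant names are illustrative.

```python
from fractions import Fraction

INTERNATIONAL_FOOT_M = Fraction(3048, 10000)  # 0.3048 m exactly (1959 agreement)
US_SURVEY_FOOT_M = Fraction(1200, 3937)       # survey-foot definition

LINK_IN_FEET = Fraction(66, 100)              # 1 link = 0.66 ft = 7.92 in

for name, foot in [("international", INTERNATIONAL_FOOT_M),
                   ("US survey", US_SURVEY_FOOT_M)]:
    link_mm = LINK_IN_FEET * foot * 1000
    print(f"1 link ({name} foot) = {float(link_mm):.6f} mm")
# international -> 201.168000 mm exactly; US survey -> about 201.168402 mm

# Relations within the traditional system:
print(25 * float(LINK_IN_FEET))     # 25 links = 16.5 ft = 1 rod
print(8000 * LINK_IN_FEET / 5280)   # 8000 links = 5280 ft = 1 mile
```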
Link (unit)
[ "Mathematics" ]
1,086
[ "Quantity", "Units of measurement", "Units of length" ]
1,616,027
https://en.wikipedia.org/wiki/Neon-burning%20process
The neon-burning process is a set of nuclear fusion reactions that take place in evolved massive stars with at least 8 Solar masses. Neon burning requires high temperatures and densities (around 1.2×10⁹ K or 100 keV and 4×10⁹ kg/m³). At such high temperatures photodisintegration becomes a significant effect, so some neon nuclei decompose, absorbing 4.73 MeV and releasing alpha particles. Each free helium nucleus can then fuse with neon to produce magnesium, releasing 9.316 MeV: ²⁰Ne + γ → ¹⁶O + ⁴He and ²⁰Ne + ⁴He → ²⁴Mg + γ. Alternatively: ²⁰Ne + n → ²¹Ne + γ and ²¹Ne + ⁴He → ²⁴Mg + n, where the neutron consumed in the first step is regenerated in the second. A secondary reaction causes helium to fuse with magnesium to produce silicon: ⁴He + ²⁴Mg → ²⁸Si + γ. Contraction of the core leads to an increase of temperature, allowing neon to fuse directly as follows: ²⁰Ne + ²⁰Ne → ¹⁶O + ²⁴Mg. Neon burning takes place after carbon burning has consumed all carbon in the core and built up a new oxygen–neon–sodium–magnesium core. The core ceases producing fusion energy and contracts. This contraction increases density and temperature up to the ignition point of neon burning. The increased temperature around the core allows carbon to burn in a shell, and there will be shells burning helium and hydrogen outside. During neon burning, oxygen and magnesium accumulate in the central core while neon is consumed. After a few years the star consumes all its neon and the core ceases producing fusion energy and contracts. Again, gravitational pressure takes over and compresses the central core, increasing its density and temperature until the oxygen-burning process can start. References External links Arnett, W. D. Advanced evolution of massive stars. V – Neon burning / Astrophysical Journal, vol. 193, Oct. 1, 1974, pt. 1, p. 169–176. Nucleosynthesis
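As a rough cross-check of the energy figures quoted above, the sketch below (an added illustration, not from the article) computes reaction Q-values from atomic masses. The mass values are standard tabulated constants rounded to six decimal places, and the helper name is invented.

```python
# 1 u = 931.494 MeV/c^2
U_TO_MEV = 931.494
mass = {               # approximate atomic masses in unified atomic mass units
    "He-4": 4.002602,
    "O-16": 15.994915,
    "Ne-20": 19.992440,
    "Mg-24": 23.985042,
    "Si-28": 27.976927,
}

def q_value(reactants, products):
    """Energy released in MeV; a negative value means the reaction absorbs energy."""
    return (sum(mass[r] for r in reactants) - sum(mass[p] for p in products)) * U_TO_MEV

print(q_value(["Ne-20"], ["O-16", "He-4"]))   # ≈ -4.73 MeV (photodisintegration absorbs energy)
print(q_value(["Ne-20", "He-4"], ["Mg-24"]))  # ≈ +9.32 MeV
print(q_value(["Mg-24", "He-4"], ["Si-28"]))  # ≈ +10.0 MeV (secondary reaction)
```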
Neon-burning process
[ "Physics", "Chemistry", "Astronomy" ]
497
[ "Nuclear fission", "Astronomy stubs", "Astrophysics", "Nucleosynthesis", "Stellar astronomy stubs", "Nuclear chemistry stubs", "Nuclear physics", "Nuclear fusion" ]
1,616,044
https://en.wikipedia.org/wiki/Auxetics
Auxetic metamaterials are a type of metamaterial with a negative Poisson's ratio, so that axial elongation causes transversal elongation (in contrast to an ordinary material, where stretching in one direction causes compression in the other direction). Auxetics can be single molecules, crystals, or a particular structure of macroscopic matter. Auxetic materials are used in protective equipment such as body armor, helmets, and knee pads, as they absorb energy more effectively than traditional materials. They are also used in devices such as medical stents or implants. Auxetic fabrics can be used to create comfortable and flexible clothing, as well as technical fabrics for applications such as aerospace and sports equipment. Auxetic materials can also be used to create acoustic metamaterials for controlling sound and vibration. History The term auxetic derives from the Greek word () which means 'that which tends to increase' and has its root in the word (), meaning 'increase' (noun). This terminology was coined by Professor Ken Evans of the University of Exeter. One of the first artificially produced auxetic materials, the RFS structure (diamond-fold structure), was invented in 1978 by the Berlin researcher K. Pietsch. Although he did not use the term auxetics, he describes for the first time the underlying lever mechanism and its non-linear mechanical reaction so he is therefore considered the inventor of the auxetic net. The earliest published example of a material with negative Poisson's constant is due to A. G. Kolpakov in 1985, "Determination of the average characteristics of elastic frameworks"; the next synthetic auxetic material was described in Science in 1987, entitled "Foam structures with a Negative Poisson's Ratio" by R.S. Lakes from the University of Wisconsin Madison. The use of the word auxetic to refer to this property probably began in 1991. Recently, cells were shown to display a biological version of auxeticity under certain conditions. Designs of composites with inverted hexagonal periodicity cell (auxetic hexagon), possessing negative Poisson ratios, were published in 1985. For these reasons, gradually, many researchers have become interested in the unique properties of Auxetics. This phenomenon is visible in the number of publications (Scopus search engine), as shown in the following figure. In 1991, there was only one publication. However, in 2016, around 165 publications were released, so the number of publications has exploded - a 165-fold increase in just 25 years - clearly showing that the topic of Auxetics is drawing considerable attention. However, although Auxetics are promising structures and have a lot of potential in science and engineering, their widespread application in multiple fields is still a challenge. Therefore, additional research related to Auxetics is required for widespread applications. Properties Typically, auxetic materials have low density, which is what allows the hinge-like areas of the auxetic microstructures to flex. At the macroscale, auxetic behaviour can be illustrated with an inelastic string wound around an elastic cord. When the ends of the structure are pulled apart, the inelastic string straightens while the elastic cord stretches and winds around it, increasing the structure's effective volume. 
Auxetic behaviour at the macroscale can also be employed for the development of products with enhanced characteristics such as footwear based on the auxetic rotating triangles structures developed by Grima and Evans and prosthetic feet with human-like toe joint properties. Auxetic materials also occur organically, although they are structurally different from man-made metamaterials. For example, the nuclei of mouse embryonic stem cells in a transition state display auxetic behavior. Examples Examples of auxetic materials include: Auxetic polyurethane foam Nuclei of mouse embryonic stem cells in exiting pluripotent state α-Cristobalite. Certain states of crystalline materials: Li, Na, K, Cu, Rb, Ag, Fe, Ni, Co, Cs, Au, Be, Ca, Zn, Sr, Sb, MoS2, BAsO4, and others. Certain rocks and minerals Graphene, which can be made auxetic through the introduction of vacancy defects Carbon diamond-like phases Two-dimensional tungsten semicarbide Noncarbon nanotubes Living bone tissue (although this is only suspected) Tendons within their normal range of motion. Specific variants of polytetrafluorethylene polymers such as Gore-Tex Several types of origami folds like the Diamond-Folding-Structure (RFS), the herringbone-fold-structure (FFS) or the miura fold, and other periodic patterns derived from it. Tailored structures designed to exhibit special designed Poisson's ratios. Chain organic molecules. Recent researches revealed that organic crystals like n-paraffins and similar to them may demonstrate an auxetic behavior. See also Acoustic metamaterial Mechanical metamaterial Metamaterial Parallelogon Zetix, a type of commercially manufactured auxetic material References External links Materials with negative Poisson's ratio Auxetic foam in youtube General Information about Auxetic Materials Materials Geometric shapes
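As a toy illustration of the sign convention behind the negative Poisson's ratio described above, the following sketch (not from the article) computes the transverse strain produced by a small axial stretch; the function name and the sample values are purely illustrative.

```python
def transverse_strain(axial_strain, poisson_ratio):
    # epsilon_transverse = -nu * epsilon_axial (definition of Poisson's ratio)
    return -poisson_ratio * axial_strain

axial = 0.01                    # stretch by 1% along the load axis
for nu in (0.3, 0.0, -0.3):     # conventional, zero, and auxetic values
    print(nu, transverse_strain(axial, nu))
# nu = +0.3 -> -0.003 (narrows sideways); nu = -0.3 -> +0.003 (widens sideways)
```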
Auxetics
[ "Physics", "Mathematics" ]
1,064
[ "Geometric shapes", "Mathematical objects", "Materials", "Geometric objects", "Matter" ]
1,616,138
https://en.wikipedia.org/wiki/International%20Council%20for%20Harmonisation%20of%20Technical%20Requirements%20for%20Pharmaceuticals%20for%20Human%20Use
The International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) is an initiative that brings together regulatory authorities and pharmaceutical industry to discuss scientific and technical aspects of pharmaceutical product development and registration. The mission of the ICH is to promote public health by achieving greater harmonisation through the development of technical guidelines and requirements for pharmaceutical product registration. Harmonisation leads to a more rational use of human, animal and other resources, the elimination of unnecessary delay in the global development, and availability of new medicines while maintaining safeguards on quality, safety, efficacy, and regulatory obligations to protect public health. Junod notes in her 2005 treatise on clinical drug trials that "[a]bove all, the ICH has succeeded in aligning clinical trial requirements." History In the 1980s, the European Union began harmonising regulatory requirements. In 1989, Europe, Japan, and the United States began creating plans for harmonisation. The International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (ICH) was created in April 1990 at a meeting in Brussels. ICH had the initial objective of coordinating the regulatory activities of the European, Japanese and American regulatory bodies in consultation with the pharmaceutical trade associations from these regions, to discuss and agree the scientific aspects arising from product registration. Since the new millennium, ICH's attention has been directed towards extending the benefits of harmonisation beyond the founding ICH regions. In 2015, ICH underwent several reforms and changed its name to the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use while becoming a legal entity in Switzerland as a non-profit association. The aim of these reforms was to transform ICH into a truly global initiative supported by a robust and transparent governance structure. The ICH association established an assembly as the over-arching governing body with the aim of focusing global pharmaceutical regulatory harmonisation work in one venue that allows pharmaceutical regulatory authorities and concerned industry organisations to be more actively involved in ICH's harmonisation work. The new assembly met for the first time on 23 October 2015. Structure The ICH comprises the following bodies: ICH Assembly ICH Management Committee MedDRA Management Committee ICH Secretariat The ICH assembly brings together all members and observers of the ICH association as the overarching governing body of ICH. It adopts decisions in particular on matters such as on the adoption of ICH guidelines, admission of new members and observers, and the ICH association's work plans and budget. Member representatives appointed to the assembly are supported by ICH coordinators who represent each member to the ICH secretariat on a daily basis. The ICH Management Committee (MC) is the body that oversees operational aspects of ICH on behalf of all members, including administrative and financial matters and oversight of the working groups (WGs). The MedDRA Management Committee (MC) is responsible for direction of MedDRA, ICH's standardised medical terminology. The MedDRA MC has the role of managing, supporting, and facilitating the maintenance, development, and dissemination of MedDRA. 
The ICH secretariat is responsible for day-to-day management of ICH, coordinating ICH activities as well as providing support to the assembly, the MC and working groups. The ICH secretariat also provides support for the MedDRA MC. The ICH secretariat is located in Geneva, Switzerland. The ICH WGs are established by the assembly when a new technical topic is accepted for harmonisation, and are charged with developing a harmonised guideline that meets the objectives outlined in the concept paper and business plan. Face-to-face meetings of the WG will normally only take place during the biannual ICH meetings. Interim reports are made at each meeting of the assembly and made publicly available on the ICH website. Process of Harmonisation ICH harmonisation activities fall into 4 categories: Formal ICH Procedure, Q&A Procedure, Revision Procedure and Maintenance Procedure, depending on the activity to be undertaken. The development of a new harmonised guideline and its implementation (the formal ICH procedure) involves 5 steps: Step 1: Consensus building The WG works to prepare a consensus draft of the technical document, based on the objectives set out in the concept paper. When consensus on the draft is reached within the WG, the technical experts of the WG will sign the Step 1 Experts sign-off sheet. The Step 1 Experts' technical document is then submitted to the assembly to request adoption under Step 2 of the ICH process. Step 2a: Confirmation of consensus on the technical document Step 2a is reached when the assembly agrees, based on the report of the WG, that there is sufficient scientific consensus on the technical issues for the technical document to proceed to the next stage of regulatory consultation. The assembly then endorses the Step 2a technical document. Step 2b: Endorsement of draft guideline by regulatory members Step 2b is reached when the regulatory members of the assembly further endorse the draft guideline. Step 3: Regulatory consultation and discussion Step 3 occurs in three distinct stages: regulatory consultation, discussion, and finalisation of the Step 3 expert draft guideline. Stage I - Regional regulatory consultation: The guideline embodying the scientific consensus leaves the ICH process and becomes the subject of normal wide-ranging regulatory consultation in the ICH regions. Regulatory authorities and industry associations in other regions may also comment on the draft consultation documents by providing their comments to the ICH Secretariat. Stage II - Discussion of regional consultation comments: After obtaining all comments from the consultation process, the EWG works to address the comments received and reach consensus on what is called the Step 3 experts draft guideline. Stage III - Finalisation of Step 3 experts draft guideline: If, after due consideration of the consultation results by the WG, consensus is reached amongst the experts on a revised version of the Step 2b draft guideline, the Step 3 expert draft guideline is signed by the experts of the ICH regulatory members. The Step 3 expert draft guideline with regulatory EWG signatures is submitted to the regulatory members of the assembly to request adoption at Step 4 of the ICH process. Step 4: Adoption of an ICH harmonised guideline Step 4 is reached when the regulatory members of the assembly agree that there is sufficient scientific consensus on the draft guideline and adopt the ICH harmonised guideline. 
Step 5: Implementation The ICH harmonised guideline moves immediately to the final step of the process that is the regulatory implementation. This step is carried out according to the same national or regional procedures that apply to other regional regulatory guidelines and requirements in the ICH regions. Information on the regulatory action taken and implementation dates are reported back to the assembly and published by the ICH secretariat on the ICH website. Work products Guidelines The ICH topics are divided into four categories and ICH topic codes are assigned according to these categories: Q: Quality Guidelines S: Safety Guidelines E: Efficacy Guidelines M: Multidisciplinary Guidelines ICH guidelines are not binding, and instead implemented by regulatory members through national and regional governance. MedDRA MedDRA is a rich and highly specific standardised medical terminology developed by ICH to facilitate sharing of regulatory information internationally for medical products used by humans. It is used for registration, documentation and safety monitoring of medical products both before and after a product has been authorised for sale. Products covered by the scope of MedDRA include pharmaceuticals, vaccines and drug-device combination products. See also Brazilian Health Regulatory Agency Australia New Zealand Therapeutic Products Authority Biotechnology Innovation Organization Clinical study report Clinical trial Common Technical Document Council for International Organizations of Medical Sciences European Federation of Pharmaceutical Industries and Associations Food and Drug Administration, US Good clinical practice (GCP) Health Canada HSA, Singapore International Federation of Pharmaceutical Manufacturers & Associations International Pharmaceutical Federation Japan Pharmaceutical Manufacturers Association Ministry of Food and Drug Safety, Republic of Korea Ministry of Health, Labour and Welfare, Japan National pharmaceuticals policy Pharmaceutical policy Pharmacopoeia Pharmaceutical Research and Manufacturers of America Pharmaceuticals and Medical Devices Agency, Japan Regulation of therapeutic goods Swissmedic, Switzerland Food and Drug Administration (Taiwan) Uppsala Monitoring Centre Notes External links ICH website Analysis: New ICH M2 Requirements into eCTD NMV (=RPS) ANVISA, Brazil BIO EC, Europe EFPIA FDA, US Health Canada, Canada HSA, Singapore IGBA JPMA MedDRA website MFDS, Republic of Korea MHLW/PMDA, Japan PhRMA Swissmedic, Switzerland TFDA, Chinese Taipei WSMI Clinical research Pharmaceuticals policy Drug safety Life sciences industry International standards
International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use
[ "Chemistry", "Biology" ]
1,688
[ "Life sciences industry", "Drug safety" ]
1,616,141
https://en.wikipedia.org/wiki/Topological%20module
In mathematics, a topological module is a module over a topological ring such that scalar multiplication and addition are continuous. Examples A topological vector space is a topological module over a topological field. An abelian topological group can be considered as a topological module over Z, where Z is the ring of integers with the discrete topology. A topological ring is a topological module over each of its subrings. A more complicated example is the I-adic topology on a ring and its modules. Let I be an ideal of a ring R. The sets of the form x + I^n for all x in R and all positive integers n form a base for a topology on R that makes R into a topological ring. Then for any left R-module M, the sets of the form m + I^n M for all m in M and all positive integers n form a base for a topology on M that makes M into a topological module over the topological ring R. See also References Abstract algebra Topology Topological algebra Topological groups
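Written out in displayed form, the bases of the I-adic construction just described are as follows; this restates the definitions above and adds nothing new.

```latex
% I-adic topology on a ring R and on a left R-module M
\[
  \mathcal{B}_R = \{\, x + I^n \mid x \in R,\; n \in \mathbb{Z}_{>0} \,\},
  \qquad
  \mathcal{B}_M = \{\, m + I^n M \mid m \in M,\; n \in \mathbb{Z}_{>0} \,\}.
\]
% B_R is a base for a topology making R a topological ring;
% B_M is a base for a topology making M a topological module over R.
```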
Topological module
[ "Physics", "Mathematics" ]
171
[ "Algebra stubs", "Abstract algebra", "Space (mathematics)", "Topological algebra", "Topological spaces", "Fields of abstract algebra", "Topology", "Space", "Topology stubs", "Geometry", "Topological groups", "Spacetime", "Algebra" ]
1,616,185
https://en.wikipedia.org/wiki/Electronic%20lab%20notebook
An electronic lab notebook (also known as electronic laboratory notebook, or ELN) is a computer program designed to replace paper laboratory notebooks. Lab notebooks in general are used by scientists, engineers, and technicians to document research, experiments, and procedures performed in a laboratory. A lab notebook is often maintained to be a legal document and may be used in a court of law as evidence. Similar to an inventor's notebook, the lab notebook is also often referred to in patent prosecution and intellectual property litigation. Electronic lab notebooks offer many benefits to the user as well as organizations; they are easier to search, simplify data copying and backups, and support collaboration amongst many users. ELNs can have fine-grained access controls, and can be more secure than their paper counterparts. They also allow the direct incorporation of data from instruments, replacing the practice of printing out data to be stapled into a paper notebook. Types ELNs can be divided broadly into the following categories: "Specific ELNs" contain features designed to work with specific applications, scientific instrumentation or data types. "Cross-disciplinary ELNs" or "Generic ELNs" are designed to support access to all data and information that needs to be recorded in a lab notebook. Lab platforms combine an ELN, LIMS, and scientific data management together in an all-in-one configurable software environment. Solutions range from specialized programs designed from the ground up for use as an ELN, to modifications or direct use of more general programs. Examples of using more general software as an ELN include OpenWetWare, a MediaWiki install (running the same software that Wikipedia uses), WordPress, or general note-taking software such as OneNote. Examples of lab platforms that provide all-in-one lab data management solutions combining ELN, LIMS, SDMS, inventory management and informatics include Labguru, Benchling, Dotmatics, Sapio Sciences and others. ELNs come in many different forms. They can be standalone programs, use a client-server model, or be entirely web-based. Some use a lab-notebook approach, others resemble a blog. ELNs are embracing artificial intelligence and LLM technology to provide scientific AI chat assistants such as ELaiN. A good many variations on the "ELN" acronym have appeared. Differences between systems with different names are often subtle, with considerable functional overlap between them. Examples include "ERN" (Electronic Research Notebook), "ERMS" (Electronic Resource (or Research or Records) Management System (or Software)) and SDMS (Scientific Data (or Document) Management System (or Software)). Ultimately, these types of systems all strive to do the same thing: capture, record, centralize and protect scientific data in a way that is highly searchable, historically accurate, and legally stringent, and which also promotes secure collaboration, greater efficiency, reduced mistakes and lowered total research costs. Objectives A good electronic laboratory notebook should offer a secure environment to protect the integrity of both data and process, whilst also affording the flexibility to adopt new processes or changes to existing processes without recourse to further software development. The package architecture should be a modular design, so as to minimize the validation costs of any subsequent changes made as needs change.
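The integrity and searchability requirements described above can be illustrated with a small sketch. The code below is not taken from any particular ELN product; the class and method names (NotebookEntry, Notebook.append, verify, search) are hypothetical, and real systems add authentication, electronic signatures and database-backed storage. It shows one common way to make a record tamper-evident: each entry stores a hash covering the previous entry's hash, so any retroactive edit breaks the chain.

```python
import hashlib
import json
import time
from dataclasses import dataclass

@dataclass
class NotebookEntry:
    author: str
    title: str
    body: str
    timestamp: float
    prev_hash: str          # hash of the previous entry, "" for the first entry
    entry_hash: str = ""    # filled in when the entry is sealed

    def compute_hash(self) -> str:
        payload = json.dumps(
            [self.author, self.title, self.body, self.timestamp, self.prev_hash],
            sort_keys=True,
        ).encode("utf-8")
        return hashlib.sha256(payload).hexdigest()

class Notebook:
    def __init__(self) -> None:
        self.entries: list[NotebookEntry] = []

    def append(self, author: str, title: str, body: str) -> NotebookEntry:
        prev = self.entries[-1].entry_hash if self.entries else ""
        entry = NotebookEntry(author, title, body, time.time(), prev)
        entry.entry_hash = entry.compute_hash()
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash; a single edited field breaks the chain."""
        prev = ""
        for e in self.entries:
            if e.prev_hash != prev or e.entry_hash != e.compute_hash():
                return False
            prev = e.entry_hash
        return True

    def search(self, term: str) -> list[NotebookEntry]:
        """Naive full-text search; real ELNs also index structured fields."""
        term = term.lower()
        return [e for e in self.entries
                if term in e.title.lower() or term in e.body.lower()]

if __name__ == "__main__":
    nb = Notebook()
    nb.append("asmith", "Buffer prep", "Prepared 0.1 M phosphate buffer, pH 7.4.")
    nb.append("asmith", "Assay run 1", "Ran ELISA plate 1; raw data in plate1.csv.")
    print("chain intact:", nb.verify())            # True
    nb.entries[0].body = "Prepared 0.2 M buffer"   # simulate a retroactive edit
    print("chain intact:", nb.verify())            # False: the edit is detectable
```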
A good electronic laboratory notebook should be an "out of the box" solution that, as standard, has fully configurable forms to comply with the requirements of regulated analytical groups through to a sophisticated ELN for inclusion of structures, spectra, chromatograms, pictures, text, etc. where a preconfigured form is less appropriate. All data within the system may be stored in a database (e.g. MySQL, MS-SQL, Oracle) and be fully searchable. The system should enable data to be collected, stored and retrieved through any combination of forms or ELN that best meets the requirements of the user. The application should enable secure forms to be generated that accept laboratory data input via PCs and/or laptops / palmtops, and should be directly linked to electronic devices such as laboratory balances, pH meters, etc. Networked or wireless communications should be accommodated for by the package which will allow data to be interrogated, tabulated, checked, approved, stored and archived to comply with the latest regulatory guidance and legislation. A system should also include a scheduling option for routine procedures such as equipment qualification and study related timelines. It should include configurable qualification requirements to automatically verify that instruments have been cleaned and calibrated within a specified time period, that reagents have been quality-checked and have not expired, and that workers are trained and authorized to use the equipment and perform the procedures. Regulatory and legal aspects The laboratory accreditation criteria found in the ISO 17025 standard needs to be considered for the protection and computer backup of electronic records. These criteria can be found specifically in clause 4.13.1.4 of the standard. Electronic lab notebooks used for development or research in regulated industries, such as medical devices or pharmaceuticals, are expected to comply with FDA regulations related to software validation. The purpose of the regulations is to ensure the integrity of the entries in terms of time, authorship, and content. Unlike ELNs for patent protection, FDA is not concerned with patent interference proceedings, but is concerned with avoidance of falsification. Typical provisions related to software validation are included in the medical device regulations at 21 CFR 820 (et seq.) and Title 21 CFR Part 11. Essentially, the requirements are that the software has been designed and implemented to be suitable for its intended purposes. Evidence to show that this is the case is often provided by a Software Requirements Specification (SRS) setting forth the intended uses and the needs that the ELN will meet; one or more testing protocols that, when followed, demonstrate that the ELN meets the requirements of the specification and that the requirements are satisfied under worst-case conditions. Security, audit trails, prevention of unauthorized changes without substantial collusion of otherwise independent personnel (i.e., those having no interest in the content of the ELN such as independent quality unit personnel) and similar tests are fundamental. Finally, one or more reports demonstrating the results of the testing in accordance with the predefined protocols are required prior to release of the ELN software for use. If the reports show that the software failed to satisfy any of the SRS requirements, then corrective and preventive action ("CAPA") must be undertaken and documented. 
Such CAPA may range from minor software revisions to changes in architecture or major revisions, and the CAPA activities must themselves be documented. Aside from being required in regulated industries, this approach is generally good practice for the development and release of any software, helping to assure its quality and fitness for use. There are standards related to software development and testing that can be applied (see ref.). See also List of ELN software packages Data management Laboratory informatics Jupyter References Further reading Research Scientific documents Notebooks Electronic documents Data management Content management systems Data management software
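To make the relationship between an SRS, testing protocols and a validation summary report concrete, here is a small, purely illustrative sketch; the requirement IDs and data structures are invented, not drawn from 21 CFR Part 11 or any real specification. It cross-checks a list of requirements against recorded test results and flags anything lacking passing evidence, which is essentially what a validation summary documents.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TestResult:
    protocol_id: str
    requirement_id: str   # which SRS requirement this protocol run covers
    passed: bool

# Hypothetical SRS requirement IDs for an ELN (illustrative only).
requirements = {
    "SRS-001": "Entries carry author, date and time",
    "SRS-002": "Entries cannot be altered without an audit-trail record",
    "SRS-003": "Access is restricted to authenticated, authorized users",
}

results = [
    TestResult("TP-01", "SRS-001", True),
    TestResult("TP-02", "SRS-002", True),
    TestResult("TP-03", "SRS-002", False),   # a failed run: CAPA needed
    # note: no protocol covers SRS-003 yet
]

def validation_summary(reqs, runs):
    covered = {}
    for r in runs:
        covered.setdefault(r.requirement_id, []).append(r)
    for req_id, text in reqs.items():
        evidence = covered.get(req_id, [])
        if not evidence:
            status = "NOT TESTED"
        elif all(r.passed for r in evidence):
            status = "PASS"
        else:
            status = "FAIL (CAPA required)"
        print(f"{req_id}: {status:22s} {text}")

if __name__ == "__main__":
    validation_summary(requirements, results)
```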
Electronic lab notebook
[ "Technology" ]
1,482
[ "Data management", "Data" ]
1,616,204
https://en.wikipedia.org/wiki/Network%20Load%20Balancing%20Services
Network Load Balancing Services (NLBS) is a Microsoft implementation of clustering and load balancing that is intended to provide high availability and high reliability, as well as high scalability. NLBS is intended for applications with relatively small data sets that rarely change (one example would be web pages), and do not have long-running in-memory states. These types of applications are called stateless applications, and typically include Web, File Transfer Protocol (FTP), and virtual private networking (VPN) servers. Every client request to a stateless application is a separate transaction, so it is possible to distribute the requests among multiple servers to balance the load. One attractive feature of NLBS is that all servers in a cluster monitor each other with a heartbeat signal, so there is no single point of failure. In its current incarnation in Windows Server 2003, NLBS does not support automatic removal of a failed server from a cluster unless the server is completely offline or its NLBS service is stopped. For example, if a web server is returning an error page instead of correct content, it is still perceived as "alive" by NLBS. As such, a monitoring script is typically required on every participating node; it checks that local web pages are delivered correctly and calls the nlb.exe utility to add or remove the node from the cluster as needed (a sketch of such a script appears below). History Windows NT Load Balancing Service (WLBS) is a feature of Windows NT that provides load balancing and clustering for applications. WLBS dynamically distributes IP traffic across multiple cluster nodes, and provides automatic failover in the event of node failure. WLBS was replaced by Network Load Balancing Services in Windows 2000, which likewise provides automatic failover. Internet Protocol based network software Microsoft server technology Load balancing (computing)
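The per-node monitoring script mentioned above can be sketched as follows. This is illustrative only: the health-check URL and expected marker are hypothetical, the nlb.exe sub-commands (stop/start, or drainstop for a graceful drain) should be verified against the Windows Server version in use, and error handling and logging are omitted.

```python
"""Illustrative per-node NLB health check (not a Microsoft-supplied script).

Fetches a local test page and removes this node from the NLB cluster when
the check fails, adding it back once the page is served correctly again.
"""
import subprocess
import time
import urllib.request

HEALTH_URL = "http://localhost/healthcheck.html"   # hypothetical test page
EXPECTED_TEXT = b"OK"                               # expected marker in the page
CHECK_INTERVAL_SECONDS = 30

def page_is_healthy() -> bool:
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
            return resp.status == 200 and EXPECTED_TEXT in resp.read()
    except OSError:
        return False

def set_cluster_membership(join: bool) -> None:
    # "nlb stop" removes this host from cluster traffic, "nlb start" re-adds it.
    # Command names follow common documentation for nlb.exe; verify locally.
    subprocess.run(["nlb.exe", "start" if join else "stop"], check=False)

def main() -> None:
    in_cluster = True
    while True:
        healthy = page_is_healthy()
        if healthy and not in_cluster:
            set_cluster_membership(join=True)
            in_cluster = True
        elif not healthy and in_cluster:
            set_cluster_membership(join=False)
            in_cluster = False
        time.sleep(CHECK_INTERVAL_SECONDS)

if __name__ == "__main__":
    main()
```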
Network Load Balancing Services
[ "Technology" ]
370
[ "Computing stubs", "Computer network stubs" ]
1,616,221
https://en.wikipedia.org/wiki/141%20%28number%29
141 (one hundred [and] forty-one) is the natural number following 140 and preceding 142. In mathematics 141 is: a centered pentagonal number. the sum of the sums of the divisors of the first 13 positive integers. the second n to give a prime Cullen number (of the form n·2^n + 1). an undulating number in base 10, with the previous being 131, and the next being 151. the sixth hendecagonal (11-gonal) number. a semiprime: a product of two prime numbers, namely 3 and 47. Since those prime factors are Gaussian primes, this means that 141 is a Blum integer. a Hilbert prime References Integers
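The listed properties are easy to check numerically; the short script below verifies several of them (sympy is needed only for the Cullen-prime test and that check can be dropped if the library is unavailable).

```python
"""Quick numeric checks of the properties of 141 listed above."""
from sympy import isprime   # only needed for the Cullen-prime check

def divisor_sum(n: int) -> int:
    return sum(d for d in range(1, n + 1) if n % d == 0)

# Sum of the sums of divisors of the first 13 positive integers.
assert sum(divisor_sum(k) for k in range(1, 14)) == 141

# Centered pentagonal numbers have the form (5n^2 - 5n + 2) / 2.
assert any((5 * n * n - 5 * n + 2) // 2 == 141 for n in range(1, 20))

# Hendecagonal (11-gonal) numbers have the form n(9n - 7) / 2; 141 is the sixth.
assert 6 * (9 * 6 - 7) // 2 == 141

# Semiprime 3 x 47, and a Blum integer (both prime factors are 3 mod 4).
assert 3 * 47 == 141 and 3 % 4 == 3 and 47 % 4 == 3

# The Cullen number n * 2^n + 1 is prime for n = 141 (n = 1 is the first such n).
assert isprime(141 * 2**141 + 1)

print("all checks pass")
```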
141 (number)
[ "Mathematics" ]
148
[ "Elementary mathematics", "Integers", "Mathematical objects", "Numbers" ]
1,616,518
https://en.wikipedia.org/wiki/Betaxolol
Betaxolol is a selective beta1 receptor blocker used in the treatment of hypertension and angina. It is also a adrenergic blocker with no partial agonist action and minimal membrane stabilizing activity. Being selective for beta1 receptors, it typically has fewer systemic side effects than non-selective beta-blockers, for example, not causing bronchospasm (mediated by beta2 receptors) as timolol may. Betaxolol also shows greater affinity for beta1 receptors than metoprolol. In addition to its effect on the heart, betaxolol reduces the pressure within the eye (intraocular pressure). This effect is thought to be caused by reducing the production of the liquid (which is called the aqueous humor) within the eye. The precise mechanism of this effect is not known. The reduction in intraocular pressure reduces the risk of damage to the optic nerve and loss of vision in patients with elevated intraocular pressure due to glaucoma. It was patented in 1975 and approved for medical use in 1983. Medical uses Hypertension Betaxolol is most commonly ingested orally alone or with other medications for the management of essential hypertension. It is a cardioselective beta blocker, targeting beta-1 adrenergic receptors found in the cardiac muscle. Blood pressure is decreased by the mechanism of blood vessels relaxing and improving the flow of blood. Glaucoma Ophthalmic betaxolol is an available treatment for primary open angle glaucoma (POAG) and optical hypertension. Betaxolol effectively prevents the increase of intracellular calcium, which leads to increased production of the aqueous humor. In the context of open angle glaucoma, increased aqueous humor produced by ciliary bodies increases intraocular pressure, causing degeneration of retinal ganglion cells and the optic nerve. Furthermore, betaxolol is additionally able to protect retinal neurones following topical application from excitotoxicity or ischemia-reperfusion, providing a neuroprotective effect. This is thought to be attributed to its capacity to attenuate neuronal calcium and sodium influx. Betaxolol is also an effective treatment for Intraocular pressure Paronychia One study showed that topical betaxolol can be used in treating relapsed paronychia. Contraindications Hypersensitivity to the drug Patients with sinus bradycardia, heart block greater than first degree, cardiogenic shock, and overt cardiac failure Side effects The adverse side-effects of betaxolol can be categorized into local and systemic effects. The local effects include: transient irritation (20-40% of patients) burning pruritus, or general itching punctate keratitis blurry vision Systemically, patients taking betaxolol might experience: bradycardia hypotension fatigue sexual impotence hair loss confusion headache dizziness bronchospasm at higher doses cardiac problems such as arrhythmia, bundle branch block, myocardial infarction, sinus arrest, and congestive heart failure mental effects such as depression, disorientation, vertigo, sleepwalking, rhinitis dysuria metabolic side effects such as an increase in LDL cholesterol levels can mask the symptoms of hypoglycemia diabetic patients History Betaxolol was approved by the U.S. Food and Drug Administration (FDA) for ocular use as a 0.5% solution (Betoptic) in 1985 and as a 0.25% solution (Betoptic S) in 1989. Society and culture Brand names Brand names include Betoptic, Betoptic S, Lokren, Kerlone. 
See also Levobetaxolol Cicloprolol References External links Kerlone prescribing information Beta blockers Cyclopropyl compounds Ethers N-isopropyl-phenoxypropanolamines Ophthalmology drugs
Betaxolol
[ "Chemistry" ]
849
[ "Organic compounds", "Functional groups", "Ethers" ]
1,616,580
https://en.wikipedia.org/wiki/Chilbolton%20Observatory
The Chilbolton Observatory is a facility for atmospheric and radio research located on the edge of the village of Chilbolton near Stockbridge in Hampshire, England. The facilities are run by the STFC Radio Communications Research Unit of the Rutherford Appleton Laboratory and form part of the Science and Technology Facilities Council. Overview The Chilbolton Observatory operates many pieces of research equipment associated with radar propagation and meteorology. , these include: An S band Doppler weather radar with its distinctive, fully steerable, 25 metre (82') parabolic antenna. This equipment can be referred to as CAMRa (Chilbolton Advanced Meteorological Radar). An L band Clear-air radar A W band bistatic zenith radar A UV Raman Lidar Multiple Ka band radiometers Multiple rain gauges The observatory also hosts the UK's LOFAR station. Timeline of projects 1998 - CLARE'98 Cloud Lidar and Radar experiment, which eventually fed into the European Space Agency EarthCARE programme 2001 to 2004 - CLOUDMAP2 project to assist in Numerical weather prediction models 2006 - Chilbolton Observatory joined forces with several European Space Agency sites to verify the L band radio transmissions from the GIOVE-A satellite 2006 - NERC Cirrus and Anvils: European Satellite and Airborne Radiation measurements project 2008 - In-Orbit Test (IOT) performed for GIOVE-B 2008-9 - APPRAISE, during which the CAMRa and Lidar were used to direct airborne measurements in mixed-phase clouds 2010 - LOFAR station UK608 constructed History Construction of Chilbolton Observatory started in 1963. It was built partially on the site of RAF Chilbolton, which was decommissioned in 1946. Several sites around the south-east of England were considered for the construction. The site at Chilbolton, on the edge of Salisbury Plain, was chosen in part because of excellent visibility of the horizon and its relative remoteness from major roads whose cars could cause interference. The facility was opened in April 1967. Within several months of being commissioned the azimuth bearing of the antenna suffered a catastrophic failure. GEC were contracted to repair the bearing and devised a system to replace the failed part while leaving the 400 tonne dish ostensibly in-place. Originally, the antenna was engaged in Ku band radio astronomy, but now operates as a S and L band radar. References External links Chilbolton Observatory Facilities retrieved May 17, 2006 CLOUDMAP2 project homepage ESA News 'GIOVE A transmits loud and clear', ESA Portal - Improving Daily Life, March 9, 2006, retrieved May 17, 2006 Astronomical observatories in England Buildings and structures in Hampshire Low-Frequency Array Research institutes in Hampshire Science and Technology Facilities Council Space Situational Awareness Programme Test Valley Weather radars Meteorological observatories
Chilbolton Observatory
[ "Environmental_science" ]
576
[ "Space Situational Awareness Programme" ]
1,616,583
https://en.wikipedia.org/wiki/Generic%20cell%20rate%20algorithm
The generic cell rate algorithm (GCRA) is a leaky bucket-type scheduling algorithm for the network scheduler that is used in Asynchronous Transfer Mode (ATM) networks. It is used to measure the timing of cells on virtual channels (VCs) and or Virtual Paths (VPs) against bandwidth and jitter limits contained in a traffic contract for the VC or VP to which the cells belong. Cells that do not conform to the limits given by the traffic contract may then be re-timed (delayed) in traffic shaping, or may be dropped (discarded) or reduced in priority (demoted) in traffic policing. Nonconforming cells that are reduced in priority may then be dropped, in preference to higher priority cells, by downstream components in the network that are experiencing congestion. Alternatively they may reach their destination (VC or VP termination) if there is enough capacity for them, despite them being excess cells as far as the contract is concerned: see priority control. The GCRA is given as the reference for checking the traffic on connections in the network, i.e. usage/network parameter control (UPC/NPC) at user–network interfaces (UNI) or inter-network interfaces or network-network interfaces (INI/NNI) . It is also given as the reference for the timing of cells transmitted (ATM PDU Data_Requests) onto an ATM network by a network interface card (NIC) in a host, i.e. on the user side of the UNI . This ensures that cells are not then discarded by UPC/NCP in the network, i.e. on the network side of the UNI. However, as the GCRA is only given as a reference, the network providers and users may use any other algorithm that gives the same result. Description of the GCRA The GCRA is described by the ATM Forum in its User-Network Interface (UNI) and by the ITU-T in recommendation I.371 Traffic control and congestion control in B-ISDN . Both sources describe the GCRA in two equivalent ways: as a virtual scheduling algorithm and as a continuous state leaky bucket algorithm (figure 1). Leaky bucket description The description in terms of the leaky bucket algorithm may be the easier of the two to understand from a conceptual perspective, as it is based on a simple analogy of a bucket with a leak: see figure 1 on the leaky bucket page. However, there has been confusion in the literature over the application of the leaky bucket analogy to produce an algorithm, which has crossed over to the GCRA. The GCRA should be considered as a version of the leaky bucket as a meter rather than the leaky bucket as a queue. However, while there are possible advantages in understanding this leaky bucket description, it does not necessarily result in the best (fastest) code if implemented directly. This is evidenced by the relative number of actions to be performed in the flow diagrams for the two descriptions (figure 1). The description in terms of the continuous state leaky bucket algorithm is given by the ITU-T as follows: "The continuous-state leaky bucket can be viewed as a finite capacity bucket whose real-valued content drains out at a continuous rate of 1 unit of content per time unit and whose content is increased by the increment T for each conforming cell... If at a cell arrival the content of the bucket is less than or equal to the limit value τ, then the cell is conforming; otherwise, the cell is non-conforming. The capacity of the bucket (the upper bound of the counter) is (T + τ)" . 
It is worth noting that because the leak is one unit of content per unit time, the increment for each cell T and the limit value τ are in units of time. Considering the flow diagram of the continuous state leaky bucket algorithm, in which T is the emission interval and τ is the limit value: What happens when a cell arrives is that the state of the bucket is calculated from its state when the last conforming cell arrived, X, and how much has leaked out in the interval, ta – LCT. This current bucket value is then stored in X' and compared with the limit value τ. If the value in X' is not greater than τ, the cell did not arrive too early and so conforms to the contract parameters; if the value in X' is greater than τ, then it does not conform. If it conforms then, if it conforms because it was late, i.e. the bucket empty (X' <= 0), X is set to T; if it was early, but not too early, (τ >= X' > 0), X is set to X' + T. Thus the flow diagram mimics the leaky bucket analogy (used as a meter) directly, with X and X' acting as the analogue of the bucket. Virtual scheduling description The virtual scheduling algorithm, while not so obviously related to such an easily accessible analogy as the leaky bucket, gives a clearer understanding of what the GCRA does and how it may be best implemented. As a result, direct implementation of this version can result in more compact, and thus faster, code than a direct implementation of the leaky bucket description. The description in terms of the virtual scheduling algorithm is given by the ITU-T as follows: "The virtual scheduling algorithm updates a Theoretical Arrival Time (TAT), which is the 'nominal' arrival time of the cell assuming cells are sent equally spaced at an emission interval of T corresponding to the cell rate Λ [= 1/T] when the source is active. If the actual arrival time of a cell is not 'too early' relative to the TAT and tolerance τ associated to the cell rate, i.e. if the actual arrival time is after its theoretical arrive time minus the limit value (ta > TAT – τ), then the cell is conforming; otherwise, the cell is nonconforming" . If the cell is nonconforming then TAT is left unchanged. If the cell is conforming, and arrived before its TAT (equivalent to the bucket not being empty but being less than the limit value), then the next cell's TAT is simply TAT + T. However, if a cell arrives after its TAT, then the TAT for the next cell is calculated from this cell's arrival time, not its TAT. This prevents credit building up when there is a gap in the transmission (equivalent to the bucket becoming less than empty). This version of the algorithm works because τ defines how much earlier a cell can arrive than it would if there were no jitter: see leaky bucket: delay variation tolerance. Another way to see it is that TAT represents when the bucket will next empty, so a time τ before that is when the bucket is exactly filled to the limit value. So, in either view, if it arrives more than τ before TAT, it is too early to conform. Comparison with the token bucket The GCRA, unlike implementations of the token bucket algorithm, does not simulate the process of updating the bucket (the leak or adding tokens regularly). Rather, each time a cell arrives it calculates the amount by which the bucket will have leaked since its level was last calculated or when the bucket will next empty (= TAT). This is essentially replacing the leak process with a (realtime) clock, which most hardware implementations are likely to already have. 
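Both descriptions of the GCRA translate directly into code. The sketch below is illustrative (times are in arbitrary units rather than ATM cell slots, and no cell tagging or discard policy is modelled); it implements the virtual-scheduling form and the continuous-state leaky-bucket form side by side and checks that they accept and reject the same cells, as the standards assert.

```python
"""GCRA(T, tau): virtual-scheduling and continuous-state leaky-bucket forms.

'emission_interval' is T and 'limit' is tau. Each conforms(ta) call asks
whether a cell arriving at time ta conforms and, if so, updates the state.
A sketch of the reference algorithms, not production policing code.
"""

class GcraVirtualScheduling:
    def __init__(self, emission_interval: float, limit: float):
        self.T = emission_interval
        self.tau = limit
        self.tat = None                     # theoretical arrival time

    def conforms(self, ta: float) -> bool:
        if self.tat is None:                # first cell always conforms
            self.tat = ta + self.T
            return True
        if ta < self.tat - self.tau:        # too early: nonconforming
            return False
        self.tat = max(ta, self.tat) + self.T
        return True

class GcraLeakyBucket:
    def __init__(self, emission_interval: float, limit: float):
        self.T = emission_interval
        self.tau = limit
        self.x = 0.0                        # bucket content
        self.lct = None                     # last conformance time

    def conforms(self, ta: float) -> bool:
        if self.lct is None:
            self.x, self.lct = self.T, ta
            return True
        x_prime = self.x - (ta - self.lct)  # content after leaking since lct
        if x_prime > self.tau:              # bucket too full: nonconforming
            return False
        self.x = max(x_prime, 0.0) + self.T
        self.lct = ta
        return True

if __name__ == "__main__":
    arrivals = [0, 1.0, 1.2, 2.5, 2.6, 2.7, 10.0, 10.1]
    vs, lb = GcraVirtualScheduling(1.0, 0.5), GcraLeakyBucket(1.0, 0.5)
    for ta in arrivals:
        a, b = vs.conforms(ta), lb.conforms(ta)
        assert a == b                       # the two formulations agree
        print(f"t={ta:5.1f}  conforming={a}")
```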
This replacement of the process with an RTC is possible because ATM cells have a fixed length (53 bytes), thus T is always a constant, and the calculation of the new bucket level (or of TAT) does not involve any multiplication or division. As a result, the calculation can be done quickly in software, and while more actions are taken when a cell arrives than are taken by the token bucket, in terms of the load on a processor performing the task, the lack of a separate update process more than compensates for this. Moreover, because there is no simulation of the bucket update, there is no processor load at all when the connection is quiescent. However, if the GCRA were to be used to limit to a bandwidth, rather than a packet/frame rate, in a protocol with variable length packets (Link Layer PDUs), it would involve multiplication: basically the value added to the bucket (or to TAT) for each conforming packet would have to be proportionate to the packet length: whereas, with the GCRA as described, the water in the bucket has units of time, for variable length packets it would have to have units that are the product of packet length and time. Hence, applying the GCRA to limit the bandwidth of variable length packets without access to a fast, hardware multiplier (as in an FPGA) may not be practical. However, it can always be used to limit the packet or cell rate, as long as their lengths are ignored. Dual Leaky Bucket Controller Multiple implementations of the GCRA can be applied concurrently to a VC or a VP, in a dual leaky bucket traffic policing or traffic shaping function, e.g. applied to a Variable Bit Rate (VBR) VC. This can limit ATM cells on this VBR VC to a Sustained Cell Rate (SCR) and a Maximum Burst Size (MBS). At the same time, the dual leaky bucket traffic policing function can limit the rate of cells in the bursts to a Peak Cell Rate (PCR) and a maximum Cell Delay Variation tolerance (CDVt): see Traffic Contract#Traffic Parameters. This may be best understood where the transmission on an VBR VC is in the form of fixed length messages (CPCS-PDUs), which are transmitted with some fixed interval or the Inter Message Time (IMT) and take a number of cells, MBS, to carry them; however, the description of VBR traffic and the use of the dual leaky bucket are not restricted to such situations. In this case, the average cell rate over the interval of IMT is the SCR (=MBS/IMT). The individual messages can be transmitted at a PCR, which can be any value between the bandwidth for the physical link (1/δ) and the SCR. This allows the message to be transmitted in a period that is smaller than the message interval IMT, with gaps between instances of the message. In the dual leaky bucket, one bucket is applied to the traffic with an emission interval of 1/SCR and a limit value τSCR that gives an MBS that is the number of cells in the message: see leaky bucket#Maximum burst size. The second bucket has an emission interval of 1/PCR and a limit value τPCR that allows for the CDV up to that point in the path of the connection: see leaky bucket#Delay Variation Tolerance. Cells are then allowed through at the PCR, with jitter of τPCR, up to a maximum number of MBS cells. The next burst of MBS cells will then be allowed through starting MBS x 1/SCR after the first. 
If the cells arrive in a burst at a rate higher than 1/PCR (MBS cells arrive in less than (MBS - 1)/PCR - τPCR), or more than MBS cells arrive at the PCR, or bursts of MBS cells arrive closer than IMT apart, the dual leaky bucket will detect this and delay (shaping) or drop or de-prioritize (policing) enough cells to make the connection conform. Figure 3 shows the reference algorithm for SCR and PCR control for both Cell Loss Priority (CLP) values 1 (low) and 0 (high) cell flows, i.e. where the cells with both priority values are treated the same. Similar reference algorithms where the high and low priority cells are treated differently are also given in Annex A to I.371 . See also Asynchronous Transfer Mode Leaky bucket UPC and NPC NNI Traffic contract Connection admission control Traffic shaping Traffic policing (communications) Token bucket References Networking algorithms Teletraffic Network scheduling algorithms Asynchronous Transfer Mode
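The dual leaky bucket controller described above amounts to running two GCRA instances over the same cell stream and requiring both to pass: one configured with emission interval 1/PCR and limit tau_PCR (the CDV tolerance), the other with 1/SCR and a limit chosen to give the desired MBS. The sketch below is illustrative; tau_SCR is derived from the usual relation MBS = 1 + tau_SCR / (T_SCR - T_PCR), which should be checked against the conventions of the traffic contract actually in use, and state is only committed when a cell conforms to both buckets.

```python
"""Dual-bucket (PCR/CDVt + SCR/MBS) conformance check built from two GCRAs."""

class Gcra:
    def __init__(self, emission_interval: float, limit: float):
        self.T, self.tau, self.tat = emission_interval, limit, None

    def conforms_at(self, ta: float) -> bool:
        tat = ta if self.tat is None else self.tat
        return ta >= tat - self.tau

    def commit(self, ta: float) -> None:
        tat = ta if self.tat is None else self.tat
        self.tat = max(ta, tat) + self.T

class DualLeakyBucket:
    def __init__(self, pcr: float, cdvt: float, scr: float, mbs: int):
        t_pcr, t_scr = 1.0 / pcr, 1.0 / scr
        tau_scr = (mbs - 1) * (t_scr - t_pcr)   # burst tolerance for MBS cells
        self.buckets = [Gcra(t_pcr, cdvt), Gcra(t_scr, tau_scr)]

    def conforms(self, ta: float) -> bool:
        # State is updated only when the cell conforms to *both* buckets.
        if all(b.conforms_at(ta) for b in self.buckets):
            for b in self.buckets:
                b.commit(ta)
            return True
        return False

if __name__ == "__main__":
    # PCR = 1 cell per time unit, SCR = 0.25, MBS = 4, small CDV tolerance.
    policer = DualLeakyBucket(pcr=1.0, cdvt=0.05, scr=0.25, mbs=4)
    burst = [0, 1, 2, 3]        # 4 cells at the PCR: all conform
    too_soon = [4]              # a 5th back-to-back cell: rejected by the SCR bucket
    later = [16, 17, 18, 19]    # next burst, MBS x 1/SCR after the first: conforms
    for ta in burst + too_soon + later:
        print(f"t={ta:4.1f}  conforming={policer.conforms(ta)}")
```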
Generic cell rate algorithm
[ "Engineering" ]
2,553
[ "Asynchronous Transfer Mode", "Computer networks engineering" ]
1,616,712
https://en.wikipedia.org/wiki/Advanced%20Simulation%20and%20Computing%20Program
The Advanced Simulation and Computing Program (ASC) is a super-computing program run by the National Nuclear Security Administration, in order to simulate, test, and maintain the United States nuclear stockpile. The program was created in 1995 in order to support the Stockpile Stewardship Program (or SSP). The goal of the initiative is to extend the lifetime of the current aging stockpile. History After the United States' 1992 moratorium on live nuclear testing, the Stockpile Stewardship Program was created in order to find a way to test, and maintain the nuclear stockpile. In response, the National Nuclear Security Administration began to simulate the nuclear warheads using supercomputers. As the stockpile ages, the simulations have become more complex, and the maintenance of the stockpile requires more computing power. Over the years, due to Moore's Law, the ASC program has created several different supercomputers with increasing power, in order to compute the simulations and mathematics. In celebration of 25 years of ASC accomplishments, the Advanced Simulation and Computing Program has published this report. Research The majority of ASC's research is done on supercomputers in three different laboratories. The calculations are verified by human calculations. Laboratories The ASC program has three laboratories: Sandia National Laboratories Los Alamos National Laboratory Lawrence Livermore National Laboratory Computing Current supercomputers The ASC program currently houses numerous supercomputers on the TOP500 list for computing power. This list changes every six months, so please visit https://top500.org/lists/top500/ for the latest list of NNSA machines. Although these computers may be in separate laboratories, remote computing has been established between the three main laboratories. Previous supercomputers ASCI Purple Red Storm Blue Gene/L: World's fastest supercomputer, November 2004 – November 2007 Blue Gene Q (aka, Sequoia) ASCI Q: Installed in 2003, it was a AlphaServer SC45/GS Cluster and reached 7.727 Teraflops. ASQI Q used DEC Alpha 1250 MHz (2.5 GFlops) processors and a Quadrics interconnect. ASCI Q placed as the 2nd fastest supercomputer in the world in 2003. ASCI White: World's fastest supercomputer, November 2000 – November 2001 ASCI Blue Mountain ASCI Blue Pacific ASCI Red: World's fastest supercomputer, June 1997 – June 2000 Newsletter The ASC program publishes a quarterly newsletter describing many of its research accomplishments and hardware milestones. Elements Within the ASC program, there are six subdivisions, each having their own role in the extension of the life of the stockpile. Facility Operations and User Support The Facility Operations and User Support subdivision is responsible for the physical computers and facilities and the computing network within ASC. They are responsible for making sure the tri-lab network, computing storage space, power usage, and the customer computing resources are all in line. Computational Systems and Software Environment The Computational and User Support subdivision is responsible for maintaining and creating the supercomputer software according to NNSA's standards. They also deal with the data, networking and software tools. The ASCI Path Forward project substantially funded the initial development of the Lustre parallel file system from 2001 to 2004. Verification and Validation The Verification and Validation subdivision is responsible for mathematically verifying the simulations and outcomes. 
They also help software engineers write more precise codes in order to decrease the margin of error when the computations are run. Physics and Engineering Models The Physics and Engineering Models subdivision is responsible for deciphering the mathematical and physical analysis of nuclear weapons. They integrate physics models into the codes in order to gain a more accurate simulation. They deal with the way that the nuclear weapon will act under certain conditions based on physics. They also study nuclear properties, vibrations, high explosives, advanced hydrodynamics, material strength and damage, thermal and fluid response, and radiation and electrical responses. Integrated Codes The Integrated Codes subdivision is responsible for the mathematical codes that are produced by the supercomputers. They use these mathematical codes, and present them in a way that is understandable to humans. These codes are then used by the National Nuclear Society Administration, the Stockpile Steward Program, Life Extension Program, and Significant Finding Investigation, in order to decide the next steps that need to be taken in order to secure and lengthen the life of the nuclear stockpile. Advanced Technology Development and Mitigation The Advanced Technology Development and Mitigation subdivision is responsible for researching developments in high performance computing. Once information is found on the next generation of high performance computing, they decide what software and hardware needs to be adapted in order to prepare for the next generation of computers. References Supercomputing Research and development in the United States
Advanced Simulation and Computing Program
[ "Technology" ]
988
[ "Supercomputing" ]
1,616,775
https://en.wikipedia.org/wiki/Dissociation%20%28chemistry%29
Dissociation in chemistry is a general process in which molecules (or ionic compounds such as salts, or complexes) separate or split into other things such as atoms, ions, or radicals, usually in a reversible manner. For instance, when an acid dissolves in water, a covalent bond between an electronegative atom and a hydrogen atom is broken by heterolytic fission, which gives a proton (H+) and a negative ion. Dissociation is the opposite of association or recombination. Dissociation constant For reversible dissociations in a chemical equilibrium AB <=> A + B the dissociation constant Kd is the ratio of dissociated to undissociated compound where the brackets denote the equilibrium concentrations of the species. Dissociation degree The dissociation degree is the fraction of original solute molecules that have dissociated. It is usually indicated by the Greek symbol α. More accurately, degree of dissociation refers to the amount of solute dissociated into ions or radicals per mole. In case of very strong acids and bases, degree of dissociation will be close to 1. Less powerful acids and bases will have lesser degree of dissociation. There is a simple relationship between this parameter and the van 't Hoff factor . If the solute substance dissociates into ions, then For instance, for the following dissociation KCl <=> K+ + Cl- As , we would have that . Salts The dissociation of salts by solvation in a solution, such as water, means the separation of the anions and cations. The salt can be recovered by evaporation of the solvent. An electrolyte refers to a substance that contains free ions and can be used as an electrically conductive medium. Most of the solute does not dissociate in a weak electrolyte, whereas in a strong electrolyte a higher ratio of solute dissociates to form free ions. A weak electrolyte is a substance whose solute exists in solution mostly in the form of molecules (which are said to be "undissociated"), with only a small fraction in the form of ions. Simply because a substance does not readily dissolve does not make it a weak electrolyte. Acetic acid () and ammonium () are good examples. Acetic acid is extremely soluble in water, but most of the compound dissolves into molecules, rendering it a weak electrolyte. Weak bases and weak acids are generally weak electrolytes. In an aqueous solution there will be some and some and . A strong electrolyte is a solute that exists in solution completely or nearly completely as ions. Again, the strength of an electrolyte is defined as the percentage of solute that is ions, rather than molecules. The higher the percentage, the stronger the electrolyte. Thus, even if a substance is not very soluble, but does dissociate completely into ions, the substance is defined as a strong electrolyte. Similar logic applies to a weak electrolyte. Strong acids and bases are good examples, such as HCl and . These will all exist as ions in an aqueous medium. Gases The degree of dissociation in gases is denoted by the symbol , where refers to the percentage of gas molecules which dissociate. Various relationships between and exist depending on the stoichiometry of the equation. The example of dinitrogen tetroxide () dissociating to nitrogen dioxide () will be taken. If the initial concentration of dinitrogen tetroxide is 1 mole per litre, this will decrease by at equilibrium giving, by stoichiometry, moles of . The equilibrium constant (in terms of pressure) is given by the equation where represents the partial pressure. 
Hence, through the definition of partial pressure and using to represent the total pressure and to represent the mole fraction; The total number of moles at equilibrium is , which is equivalent to . Thus, substituting the mole fractions with actual values in term of and simplifying; This equation is in accordance with Le Chatelier's principle. will remain constant with temperature. The addition of pressure to the system will increase the value of , so must decrease to keep constant. In fact, increasing the pressure of the equilibrium favours a shift to the left favouring the formation of dinitrogen tetroxide (as on this side of the equilibrium there is less pressure since pressure is proportional to number of moles) hence decreasing the extent of dissociation . Acids in aqueous solution The reaction of an acid in water solvent is often described as a dissociation HA <=> H+ + A- where HA is a proton acid such as acetic acid, CH3COOH. The double arrow means that this is an equilibrium process, with dissociation and recombination occurring at the same time. This implies that the acid dissociation constant However a more explicit description is provided by the Brønsted–Lowry acid–base theory, which specifies that the proton H+ does not exist as such in solution but is instead accepted by (bonded to) a water molecule to form the hydronium ion H3O+. The reaction can therefore be written as HA + H2O <=> H3O+ + A- and better described as an ionization or formation of ions (for the case when HA has no net charge). The equilibrium constant is then where [H_2O] is not included because in dilute solution the solvent is essentially a pure liquid with a thermodynamic activity of one. Ka is variously named a dissociation constant, an acid ionization constant, an acidity constant or an ionization constant. It serves as an indicator of the acid strength: stronger acids have a higher Ka value (and a lower pKa value). Fragmentation Fragmentation of a molecule can take place by a process of heterolysis or homolysis. Receptors Receptors are proteins that bind small ligands. The dissociation constant Kd is used as indicator of the affinity of the ligand to the receptor. The higher the affinity of the ligand for the receptor the lower the Kd value (and the higher the pKd value). See also Bond-dissociation energy Photodissociation, dissociation of molecules by photons (light, gamma rays, x-rays) Radiolysis, dissociation of molecules by ionizing radiation Thermal decomposition References Chemical processes Equilibrium chemistry
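Returning to the dinitrogen tetroxide example in the Gases section above, the working can be stated compactly; this is the standard ideal-gas treatment the article describes, with alpha the degree of dissociation and p the total pressure, and the van 't Hoff relation from the Dissociation degree section is included for reference.

```latex
% N2O4 <=> 2 NO2, starting from 1 mol of N2O4, degree of dissociation \alpha,
% total pressure p (ideal-gas behaviour assumed).
\[
  n_{\mathrm{N_2O_4}} = 1-\alpha, \qquad n_{\mathrm{NO_2}} = 2\alpha, \qquad
  n_{\mathrm{total}} = 1+\alpha
\]
\[
  K_p \;=\; \frac{p_{\mathrm{NO_2}}^{\,2}}{p_{\mathrm{N_2O_4}}}
        \;=\; \frac{\left(\tfrac{2\alpha}{1+\alpha}\,p\right)^{2}}
                   {\tfrac{1-\alpha}{1+\alpha}\,p}
        \;=\; \frac{4\alpha^{2}}{1-\alpha^{2}}\;p
\]
% For a solute dissociating into n ions, the van 't Hoff factor is
\[
  i = 1 + \alpha\,(n-1)
\]
% e.g. for KCl (n = 2), i = 1 + \alpha.
```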
Dissociation (chemistry)
[ "Chemistry" ]
1,357
[ "Chemical process engineering", "nan", "Chemical processes", "Equilibrium chemistry" ]
1,616,827
https://en.wikipedia.org/wiki/Adulterant
An adulterant is caused by the act of adulteration, a practice of secretly mixing a substance with another. Typical substances that are adulterated include but are not limited to food, cosmetics, pharmaceuticals, fuel, or other chemicals, that compromise the safety or effectiveness of the said substance. Definition Adulteration is the practice of secretly mixing a substance with another. The secretly added substance will not normally be present in any specification or declared substances due to accident or negligence rather than intent, and also for the introduction of unwanted substances after the product has been made. Adulteration, therefore, implies that the adulterant was introduced deliberately in the initial manufacturing process, or sometimes that it was present in the raw materials and should have been removed, but was not. An adulterant is distinct from, for example, permitted food preservatives. There can be a fine line between adulterant and additive; chicory may be added to coffee to reduce the cost or achieve a desired flavor—this is adulteration if not declared, but may be stated on the label. Chalk was often added to bread flour; this reduces the cost and increases whiteness, but the calcium confers health benefits, and in modern bread, a little chalk may be included as an additive for this reason. In wartime, adulterants have been added to make foodstuffs "go further" and prevent shortages. The German word ersatz is widely recognised for such practices during World War II. Such adulteration was sometimes deliberately hidden from the population to prevent loss of morale and propaganda reasons. Some goods considered luxurious in the Soviet Bloc such as coffee were adulterated to make them affordable to the general population. In food and beverages Past and present examples of adulterated food, some dangerous, include: Apple jellies (jams), as substitutes for more expensive fruit jellies, with added colorant and sometimes even specks of wood that simulate raspberry or strawberry seeds High fructose corn syrup or cane sugar, used to adulterate honey Red ochre–soaked brown bread to give the appearance of beef sausage for sausage roll filling. Olive oil adulteration Roasted chicory roots used as an adulterant for coffee (if not mentioned or conveyed the same in any manor) Water, for diluting milk and alcoholic beverages Water or brine injected into chicken, pork, or other meats to increase their weight Urea, melamine and other nonprotein nitrogen sources, added to protein products to inflate crude protein content measurements History Historically, the use of adulterants has been common; sometimes dangerous substances have been used. In the United Kingdom up to the Victorian era, adulterants were common; for example, cheeses were sometimes colored with lead. Similar adulteration issues were seen in industries in the United States, during the 19th century. There is a dispute over whether these practices declined primarily due to government regulation or to increased public awareness and concern over the practices. In the early 21st century, cases of dangerous adulteration occurred in the People's Republic of China. In some African countries, it is not uncommon for thieves to break electric transformers to steal transformer oil, which is then sold to the operators of roadside food stalls to be used for deep frying. When used for frying, it is reported that transformer oil lasts much longer than regular cooking oil. 
The downside of this misuse of the transformer oil is the threat to the health of the consumers, due to the presence of PCBs. Adulterant use was first investigated in 1820 by the German chemist Frederick Accum, who identified many toxic metal colorings in food and drink. His work antagonized food suppliers, and he was ultimately discredited by a scandal over his alleged mutilation of books in the Royal Institution library. The physician Arthur Hill Hassall conducted extensive studies in the early 1850s, which were published in The Lancet and led to the 1860 Food Adulteration Act and other legislation. John Postgate led a further campaign, leading to another Act of 1875, which forms the basis of the modern legislation and a system of public analyst who test for adulteration. At the turn of the 20th century, industrialization in the United States led to a increase in adulteration, which inspired some protest. Accounts of adulteration led the New York Evening Post to parody: Mary had a little lamb, And when she saw it sicken, She shipped it off to Packingtown, And now it's labeled chicken. However, even in the 18th century, people complained about adulteration in food:"The bread I eat in London is a deleterious paste, mixed up with chalk, alum and bone ashes, insipid to the taste and destructive to the constitution. The good people are not ignorant of this adulteration; but they prefer it to wholesome bread, because it is whiter than the meal of corn [wheat]. Thus they sacrifice their taste and their health. . . to a most absurd gratification of a misjudged eye; and the miller or the baker is obliged to poison them and their families, in order to live by his profession." – Tobias Smollett, The Expedition of Humphry Clinker (1771) Incidents In 1981, denaturated Colza oil was added to Olive oil in Spain and 600 people were killed (See Toxic oil syndrome) In 1987, Beech-Nut was fined for violating the US Federal Food, Drug, and Cosmetic Act by selling flavored sugar water as apple juice. In 1997, ConAgra Foods illegally sprayed water on stored grain to increase its weight. In 2007, samples of wheat gluten mixed with melamine, presumably to produce inflated results from tests for protein content, were discovered in the USA. They were found to have come from China. (See: Chinese protein adulteration.) In the 2008 Chinese milk scandal, significant portions of China's milk supply were found to have been adulterated with melamine. Infant formula produced from this milk killed at least six children and is believed to have harmed two hundred thousand children. In 2012, a study in India across 29 states and union territories found that milk was adulterated with detergent, fat, and even urea, and diluted with water. Just 31.5% of samples conformed to FSSAI standards. In the 2013 meat adulteration scandal in Europe, horsemeat was passed off as beef. In 2019, it was discovered that lead chromate was widely added to turmeric sold in Bangladesh to enhance its yellow color, which was largely responsible for consistently high lead poisoning rates in the country and prompted a government crackdown. By 2021, the practice had been eradicated in the country, and blood lead levels had dropped. 
See also Anthropogenic hazard Surrogate alcohol: harmful substances which are used as substitutes for alcoholic beverages Denatured alcohol: alcohol which is deliberately poisoned to discourage its recreational use Impurity Fake food Cutting agent References Further reading (1820) by Friedrich Accum External links Doping in sport Drug culture Food additives Food industry Food safety Pejorative terms
Adulterant
[ "Chemistry" ]
1,451
[ "Adulteration", "Drug safety" ]
1,616,845
https://en.wikipedia.org/wiki/Aerated%20lagoon
An aerated lagoon (or aerated pond) is a simple wastewater treatment system consisting of a pond with artificial aeration to promote the biological oxidation of wastewaters. There are many other aerobic biological processes for treatment of wastewaters, for example activated sludge, trickling filters, rotating biological contactors and biofilters. They all have in common the use of oxygen (or air) and microbial action to reduce the pollutants in wastewaters. Types Suspension mixed lagoons, where there is sufficient energy provided by the aeration equipment to keep the sludge in suspension. Facultative lagoons, where there is insufficient energy provided by the aeration equipment to keep the sludge in suspension and solids settle to the lagoon floor. The biodegradable solids in the settled sludge then degrade as in an anaerobic lagoon. Suspension mixed lagoons Suspension mixed lagoons are flow-through activated sludge systems where the effluent has the same composition as the mixed liquor in the lagoon. Typically the sludge will have a residence time or sludge age of 1 to 5 days. This means that the chemical oxygen demand (COD) removed is relatively little and the effluent is therefore unacceptable for discharge into receiving waters. The objective of the lagoon is therefore to act as a biologically assisted flocculator which converts the soluble biodegradable organics in the influent to a biomass which is able to settle as a sludge. Usually the effluent is then put in a second pond where the sludge can settle. The effluent can then be removed from the top with a low chemical oxygen demand, while the sludge accumulates on the floor and undergoes anaerobic stabilisation. Methods of aerating lagoons or basins There are many methods for aerating a lagoon or basin: Motor-driven submerged or floating jet aerators Motor-driven floating surface aerators Motor-driven fixed-in-place surface aerators Injection of compressed air through submerged diffusers Floating surface aerators Ponds or basins using floating surface aerators achieve 80 to 90% removal of BOD with retention times of 1 to 10 days. The ponds or basins may range in depth from 1.5 to 5.0 meters. In a surface-aerated system, the aerators provide two functions: they transfer into the basins the air required by the biological oxidation reactions, and they provide the mixing required for dispersing the air and for contacting the reactants (that is, oxygen, wastewater and microbes). Typically, the floating high-speed surface aerators are rated to deliver the amount of air equivalent to 1 to 1.2 kg O2/kWh. However, they do not provide as good mixing as is normally achieved in activated sludge systems and therefore aerated basins do not achieve the same performance level as activated sludge units. With low-speed surface aerators, the SOTE (Standard Oxygen Transfer Efficiency) is higher thanks to better mixing capacity. The mixing capacity of an impeller depends strongly on its diameter, and low-speed surface aerators have large-diameter impellers. The SOTE for low-speed surface aerators is therefore about 2 to 2.5 kg O2/kWh. This is why low-speed surface aerators are mostly used in municipal sewage and industrial treatment, where the works are larger and energy savings matter more. Biological oxidation processes are sensitive to temperature and, between 0 °C and 40 °C, the rate of biological reactions increases with temperature. Most surface aerated vessels operate at between 4 °C and 32 °C.
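As a rough illustration of how the quoted aeration efficiencies are used in practice, the sketch below estimates installed aerator power from a BOD load. The oxygen-demand ratio (kg O2 per kg BOD removed) is an assumed, typical design figure rather than something stated in the article, and the calculation is simplified; real designs account for temperature, altitude, alpha/beta correction factors and mixing requirements.

```python
"""Rough aerated-lagoon aeration power estimate (illustrative only).

Assumptions not taken from the article: 1.2 kg O2 required per kg BOD removed,
and the quoted kg O2/kWh figures used directly as field transfer efficiency.
"""

def aeration_power_kw(flow_m3_per_day: float,
                      bod_in_mg_per_l: float,
                      bod_removal_fraction: float,
                      kg_o2_per_kg_bod: float = 1.2,
                      sote_kg_o2_per_kwh: float = 1.1) -> float:
    bod_removed_kg_per_day = (
        flow_m3_per_day * bod_in_mg_per_l * bod_removal_fraction / 1000.0
    )  # mg/L x m3 gives grams, so divide by 1000 for kg
    o2_required_kg_per_day = bod_removed_kg_per_day * kg_o2_per_kg_bod
    o2_required_kg_per_hour = o2_required_kg_per_day / 24.0
    return o2_required_kg_per_hour / sote_kg_o2_per_kwh  # kW of aerator power

if __name__ == "__main__":
    # 2,000 m3/day of wastewater at 250 mg/L BOD with 85% removal.
    p_high_speed = aeration_power_kw(2000, 250, 0.85)                        # ~1.1 kg O2/kWh
    p_low_speed = aeration_power_kw(2000, 250, 0.85, sote_kg_o2_per_kwh=2.2) # ~2.2 kg O2/kWh
    print(f"high-speed aerators: ~{p_high_speed:.0f} kW")
    print(f"low-speed aerators:  ~{p_low_speed:.0f} kW")
```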
Submerged diffused aeration Submerged diffused air is essentially a form of a diffuser grid inside a lagoon. There are two main types of submerged diffused aeration systems for lagoon applications: floating lateral and submerged lateral. Both these systems utilize fine or medium bubble diffusers to provide aeration and mixing to the process water. The diffusers can be suspended slightly above the lagoon floor or may rest on the bottom. Flexible airline or weighted air hose supplies air to the diffuser unit from the air lateral (either floating or submerged). See also Industrial wastewater treatment List of waste water treatment technologies Retention basin Rotating biological contactor Sewage treatment Waste stabilization pond Water aeration Water pollution References External links Wastewater Lagoon Systems in Maine Aerated, Partial Mix Lagoons (Wastewater Technology Fact Sheet by the U.S. Environmental Protection Agency) Aerated Lagoon Technology (Linvil G. Rich, Professor Emeritus, Department of Environmental Engineering and Science, Clemson University) Waste treatment technology
Aerated lagoon
[ "Chemistry", "Engineering" ]
933
[ "Water treatment", "Waste treatment technology", "Environmental engineering" ]
1,616,855
https://en.wikipedia.org/wiki/Carbon%20chauvinism
Carbon chauvinism is a neologism meant to disparage the assumption that the chemical processes of hypothetical extraterrestrial life must be constructed primarily from carbon (organic compounds) because as far as is known, carbon's chemical and thermodynamic properties render it far superior to all other elements at forming molecules used in living organisms. The expression "carbon chauvinism" is also used to criticize the idea that artificial intelligence cannot in theory be sentient or truly intelligent because the underlying matter is not biological. Furthermore, the term is used by transhumanists to object to the commonly held view that life has an inherently higher moral value than hypothetical artificial consciousness. Concept The term was used as early as 1973, when scientist Carl Sagan described it and other human chauvinisms that limit imagination of possible extraterrestrial life. It suggests that human beings, as carbon-based life forms who have never encountered any life that has evolved outside the Earth's environment, may find it difficult to envision radically different biochemistries. Carbon alternatives Like carbon, silicon can form four stable bonds with itself and other elements, and long chemical chains known as silane polymers, which are very similar to the hydrocarbons essential to life on Earth. Silicon is more reactive than carbon, which could make it optimal for extremely cold environments. However, silanes spontaneously burn in the presence of oxygen at relatively low temperatures, so an oxygen atmosphere may be deadly to silicon-based life. On the other hand, it is worth considering that alkanes are as a rule quite flammable, but carbon-based life on Earth does not store energy directly as alkanes, but as sugars, lipids, alcohols, and other hydrocarbon compounds with very different properties. Water as a solvent would also react with silanes, but again, this only matters if for some reason silanes are used or mass-produced by such organisms. Silicon lacks an important property of carbon: single, double, and triple carbon-carbon bonds are all relatively stable. Aromatic carbon structures underpin DNA, which could not exist without this property of carbon. By comparison, compounds containing silene double bonds (such as silabenzene, an unstable analogue of benzene) exhibit far lower stability than the equivalent carbon compound. A pair of silane single bonds have significantly greater total enthalpy than a single silene double bond, so simple disilenes readily autopolymerise, and silicon favors the formation of linear chains of single bonds (see the double bond rule). Hydrocarbons and organic compounds are abundant in meteorites, comets, and interstellar clouds, while their silicon analogs have never been observed in nature. Silicon does, however, form complex one-, two- and three-dimensional polymers in which oxygen atoms form bridges between silicon atoms. These are termed silicates. They are both stable and abundant under terrestrial conditions, and have been proposed as a basis for a pre-organic form of evolution on Earth (see clay hypothesis). See also References Astrobiology Biological hypotheses Astronomical hypotheses Chauvinism Carbon Biochemistry
Carbon chauvinism
[ "Chemistry", "Astronomy", "Biology" ]
648
[ "Astronomical hypotheses", "Origin of life", "Speculative evolution", "Astrobiology", "Astronomical controversies", "nan", "Biochemistry", "Astronomical sub-disciplines", "Biological hypotheses" ]